Good morning, everyone. Thank you very much for coming to this roundtable. In case you haven't read the schedule, the subject is Neutron: overlay or SDN. The question is about the choice of networking models that Neutron offers. It's not only a vendor question; it's also a question of how you're going to set things up. One of the questions you can ask is whether your tenants should be able to touch your physical network, or whether they should be limited to accessing an overlay. Another question is: what are the limits of these models? I've asked five very fine gentlemen to come help me answer this question. I have with me Mike Cohen, director of product management at Cisco. I also have Dan Dumitriu, CEO of Midokura; Pedro Marques, who works at Juniper as a software engineer; and Chris Wright from Red Hat, who is head of the CTO office. And I think I missed you, David — I don't know why I did that — so I also have David Lenrow, distinguished engineer at HP. The rules of the game are that each of them will give a five-minute statement to start with, and then we'll open the floor to questions, either from the people on the roundtable or from the room. Thank you very much. Mike, do you want to start?

Yeah, thanks a lot. This is a really interesting question, and it's one we've been thinking about quite a bit in my group. Ultimately, one of the conclusions we came to has been driving some of the work we've been doing in OpenStack. If you start thinking about how tenants want to consume OpenStack, there are multiple classes of users, and really it's about how you want to launch and create applications and make that process as fast and easy as possible. If we look at the way a lot of OpenStack projects have been created to date, the goal has been to create an abstraction layer across the underlying hardware and then expose a set of tenant APIs so that tenants access that abstraction directly. But the reality is, as we looked across the different projects, we saw that many of the constructs we're exposing are still very low level. We saw some of this with overlays and with virtualization layers, but ultimately the tenant was still describing relatively low-level constructs — effectively, you're directly touching the hardware through an abstraction layer. That's still relatively complex in our minds, and that complexity makes it harder to launch your application, harder to scale your application, and harder to operate your cloud. The tenant experience is more complex than it needs to be, because a lot of the low-level details of managing the infrastructure still bleed through in the APIs we expose. So the way I've been thinking about this problem is about capturing user intent: how you model applications, and how you model an application completely separately from the underlying infrastructure. In SDN vernacular, this may feel more like full SDN, but really it's somewhat separate from those underlying models.
The idea is that we need application developers, or the end users in the tenant, to describe what they need from the cloud environment in an abstract way, and then translate that through the system and map it to the underlying resources. This gives us the best of both worlds, in a way: it exposes a lot of power to the tenant, in that it lets them fully describe the environment they want, but we don't want them to do it in a way that is directly tied to the low-level constructs, because then we have very limited flexibility about what we do. If they ask for something low level, we have to give it to them — that's what they asked for, and we're honoring that contract and automating it. If they ask for something abstract, then the underlying system has a lot of flexibility in how it can be managed and deployed. So as we've been thinking about how OpenStack can evolve, we've been thinking about these kinds of models. Part of this is present in the group-based policy project that we've been working on with the Neutron community, and with the broader OpenStack community, so far — you can check out that project to see a bit more of this concept and this architecture. The other idea that's really important here is that you need to separate concerns. As you give tenants a very simple application model by which to describe what they need from OpenStack, you also want a separate input — a separate place — by which providers can describe the underlying system and handle the operational aspects of running the cloud. These two systems need to be separate; they need to take inputs from different places. So as we think about the models we want to expose, we want to allow the provider to create a set of underlying rules that can be merged with the abstract models we've offered the tenants, and the two can be put together to program the underlying hardware. Again, this is another case for adding the layer of abstraction and not tying the very low-level constructs all the way through to the tenant. Because if they ask for something, you essentially have to give it to them, and then it becomes very hard to marry that with the requirements of the provider and ultimately control the system. And as we've seen with some of the other work that's gone on, when you layer on things like governance and compliance constraints, that adds even more complexity: if your low-level details bleed all the way through to your tenants and they ask for something they're not allowed to do, you either have to fail them in different ways or disable something they've already asked for. You want to separate out these different requests and give tenants a very simple API, but one that actually has the full underlying power of the hardware underneath. So at a high level, that's the way we've been thinking about this problem. As for how this relates to SDN, there are obviously a number of ways to approach the implementation of your overlays, and overlays can be integrated with physical hardware in different ways to achieve these kinds of solutions. At Cisco, we've been working on a range of these things.
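To make the contrast Mike is drawing concrete, here is a minimal sketch — purely illustrative Python, not the actual group-based policy API or any Neutron extension — of the difference between a tenant spelling out low-level constructs and a tenant expressing intent that the provider later maps onto infrastructure. All names and fields are assumptions chosen for illustration.

```python
# Illustrative only: not the group-based policy API, just the shape of the idea.

# Low-level request: the tenant spells out infrastructure constructs directly,
# so the operator has almost no freedom in how to realize it.
low_level_request = {
    "networks": [{"name": "web-net", "cidr": "10.0.1.0/24"}],
    "security_group_rules": [
        {"direction": "ingress", "protocol": "tcp", "port": 443,
         "remote_cidr": "0.0.0.0/0"},
    ],
}

# Intent-style request: the tenant describes groups and the contracts between
# them; addressing, segmentation and enforcement points are left to the provider.
intent_request = {
    "groups": ["web", "db"],
    "contracts": [
        {"from": "internet", "to": "web", "allow": [{"proto": "tcp", "port": 443}]},
        {"from": "web", "to": "db", "allow": [{"proto": "tcp", "port": 5432}]},
    ],
}

def render(intent, provider_policy):
    """Toy 'rendering' step: merge tenant intent with provider-side choices
    (for example, which transport to use) into something a driver could act on."""
    return {
        "transport": provider_policy["transport"],                    # e.g. "vxlan" or "vlan"
        "groups": {g: {"segment": None} for g in intent["groups"]},   # segments chosen by the provider, not the tenant
        "rules": intent["contracts"],
    }

print(render(intent_request, {"transport": "vxlan"}))
```

The point of the toy `render` step is exactly the separation of concerns described above: the tenant's input and the provider's input arrive from different places and are merged by the system, rather than the tenant programming the infrastructure directly.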
People may have checked out our application-centric infrastructure, for example, which ties an overlay together with the underlying hardware to give you a system that works across hardware platforms and software and gives you integrated visibility. But at that layer there are a lot of different approaches by which to achieve things. The key concept I'd love everyone to take away is the separation of intent from the underlying system. Think on that, and I can hand back to Nick.

Thanks a lot, Mike. Dan, do you want to continue?

Great, thank you. Good morning. Mike, why don't you go and sit up at the front? We don't have chairs, so we'll make do. I pretty much agree with everything Mike said. But honestly, I think the initial question was: should we allow tenants to have any kind of direct access to or control over the resources that are provisioned in the physical layer when they provision their applications and their virtual network constructs? And I think we pretty much agree at this stage that the answer is no, not directly. There are many reasons for this; Mike already mentioned several, and I'm going to repeat some of them. We don't want to tie the users and the applications to specific hardware; we want to abstract them away so that we can change or optimize the underlying hardware model at some point. And we want to get away from some of the limitations of the underlying networks as well. One of the reasons we go for the overlay approach is to overcome some of the existing — perhaps still existing — limitations of the hardware models, like the number of isolated network segments that we can effectively create. Perhaps some of those are resolvable in other ways, but today we have to do it in software. There was a good point made, too, about allowing the administrators, the operators of the cloud, to set up the infrastructure and control the things that tenants don't have access to but that are important — for example, provisioning connectivity to external networks, either to the internet or to private corporate networks. And I think this is one area where Neutron is currently falling short. As a tenant-facing API — for provisioning network segments, routers, and other network constructs for application owners — it does okay. Of course, we do need to layer the policy work on top of that so that it's even more abstract, so that people don't need to care about low-level details like IP address management and firewall rules. But on the other side, we're not doing a great job of expressing realistic networks for the administrator, because practically everything is modeled as a layer 2 segment right now. Realistically, a lot of networks have more complex scenarios: routing to the internet, BGP peerings, MPLS networks, various types of carrier Ethernet, and so on. None of those are actually modeled in Neutron right now. It's something we're addressing, at least on the layer 3 side — I'm going to make a cheap plug for one of our blueprints, the provider router blueprint, and its associated IPAM changes. Please check that out.
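As a concrete reference for the tenant-facing side Dan says works reasonably well, here is a minimal sketch of a tenant provisioning a network, subnet, and router through the Neutron v2.0 REST API using Python's requests library. The endpoint URL, token, names, and addresses are illustrative, and error handling is omitted.

```python
import requests

NEUTRON = "http://controller:9696/v2.0"         # illustrative endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>",  # token obtained from Keystone beforehand
           "Content-Type": "application/json"}

# 1. A tenant network: no physical details are exposed to the tenant.
net = requests.post(f"{NEUTRON}/networks", headers=HEADERS,
                    json={"network": {"name": "app-net", "admin_state_up": True}}
                    ).json()["network"]

# 2. A subnet on that network; Neutron handles IPAM for it.
subnet = requests.post(f"{NEUTRON}/subnets", headers=HEADERS,
                       json={"subnet": {"network_id": net["id"],
                                        "ip_version": 4,
                                        "cidr": "10.0.0.0/24"}}
                       ).json()["subnet"]

# 3. A logical router, attached to the subnet.
router = requests.post(f"{NEUTRON}/routers", headers=HEADERS,
                       json={"router": {"name": "app-router"}}
                       ).json()["router"]
requests.put(f"{NEUTRON}/routers/{router['id']}/add_router_interface",
             headers=HEADERS, json={"subnet_id": subnet["id"]})
```

Note how every object here is a virtual construct; nothing in the request says anything about the physical network, which is exactly the administrator-side gap Dan goes on to describe.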
So it's a small point, but it's an area where we think we really need to do a better job of providing capabilities for the administrator. Overall, getting back to the original question: tenants can't directly modify or affect the underlying networking, but indirectly, through the various policy mechanisms in place, they do. For example, when we provision networks, we currently just assume that we go over the top as an overlay on IP, and we don't pay a lot of attention to what's going on in the underlying network. Realistically, there's going to be congestion and resource limitations, and we have to take those into account. We may have to do things like rerouting traffic and paying attention to priorities. Currently, we don't expose priorities to tenants; we probably should at some point, in some form. I'm not sure exactly what that looks like, but I think it's important because it comes up frequently — people say, my applications A, B, and C are very important, and some of these other ones are just best effort — and we have no standard way of expressing that. Maybe some of our products individually do, but we don't have it in the base. There are a lot of things of this sort. I didn't talk much about policy, because Mike already covered that, but I definitely agree that a more abstract, declarative, policy-based mechanism for describing the networking requirements of applications is ultimately the way to go. Whatever we all decide on and whatever gets de facto standardized, I think, is going to be good, and then it's up to us to implement it efficiently on the underlying network. Thank you.

Thanks a lot, Dan. Do you want to join us on these wonderful chairs we have? So, David, your turn.

Thank you. Morning. I'm a big SDN guy, so the answer is full SDN — thank you very much. No, I'm kidding. I think in the SDN world we're building a more nuanced sense of what a tenant actually is, and I do believe we should expose full SDN in OpenStack because it's going to bring some richness and some new capabilities, including a hierarchical notion of what a tenant is. One person's provider is another person's tenant. You could end up with a scenario where you have a facilities-based provider that owns physical resources — switches and routers and things of that nature. You could have multiple virtual providers that have resources carved out of the physical provider's space, allocated to them, and under their control. They in turn have corporate tenants that are allocated certain resources and allowed to do certain things in the network; within the corporations there are business units, and within the business units, administrators, and so on. So on the question of whether a tenant should be able to do dangerous stuff, I think we need to think about it more in terms of which types of tenants should be allowed to do dangerous stuff and which shouldn't. Clearly, at the top of the stack it makes sense to have a completely abstracted model where nobody can hurt themselves and they don't have to know anything about networking. I completely agree with everything that Dan and Mike said, and that's kind of the beauty of open source: we work for different companies and we compete with each other.
But in this world, we're just talking about reality and what makes sense, and we're all drawing the same conclusions. So I think there are people who should be exposed to the low-level protocol details, the vendor-specific details, and the media details. And I don't think it's as simple as one's an administrator and the other's a tenant; it's spread across the hierarchy, and you can use policy to define what your child — your tenant — inherits from the space in which you live. So I'm hoping to see OpenStack and Neutron expose more of the richness of control over resources that SDN offers: a range of abstractions, with the appropriate abstractions pointed at the appropriate system users. Thank you.

Thanks a lot, David. Pedro, your turn.

Good morning. I'm taking the question a bit as a question of network design, and to me, network design is about requirements. When I look at the cloud space, in my opinion we're going to see a smaller number of very large clouds. And if you look at the network design requirements for a lot of these large clouds, you're looking at, say, 10,000 to 100,000 ports, non-blocking, full 40 gig, with no oversubscription, as an example. Within the industry, we kind of know how to make a 100,000-port network work: you build it as a Clos design, and you end up with something like four or five thousand switches. At that point, it becomes really important to cost-optimize those switches and to limit their function. And in a Clos network, VLANs are simply not possible in the underlay, and multicast is not possible, so both VLANs and multicast-based VXLAN are not feasible technologies. So if you take that point — which I think is relevant for some very large designs — then at the very least for very large scale networks there is an absolute need to build an overlay if you're going to implement the semantics of the Neutron API. If you're a cloud provider that implements the Neutron API, and you allow your tenants to bring in any address space and implement what Amazon calls the VPC API — which is what Neutron does — there's no practical way to do it at that scale other than an overlay, in my opinion. Also at that scale, a lot of those operators really want to drive the design to simplify the requirements on those devices, so the devices can be, for lack of a better word, commoditized — commoditized not just to drive down price, but to standardize operations. So that's the first observation I wanted to make: there's no one-size-fits-all. If you have a small cluster of, say, a few racks, the point I just made does not apply. The second thing I look at is that, at least for me, when I think about networking, I don't particularly think about L2 segments. To me, networking is about routing. Very commonly in private clouds, which I tend to work with, people want, from a manageability perspective, to deploy different tiers of applications into different virtual networks because they are different administration domains. And in your traditional cloud application, web servers don't really talk to each other; they just talk to the caching layer.
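An aside on the Clos sizing Pedro cites above: his "four or five thousand switches for up to 100,000 ports" figure lines up with the standard three-stage fat-tree formulas, where k-port switches give k³/4 non-blocking host ports using 5k²/4 switches. The 48- and 64-port radixes below are assumptions chosen to land in the range he mentions; this is just a worked check, not his actual design.

```python
def fat_tree(k):
    """Three-stage folded Clos (fat-tree) built from k-port switches:
    k pods, each with k/2 edge and k/2 aggregation switches, plus (k/2)^2 core switches."""
    hosts = k ** 3 // 4
    edge = agg = k * k // 2          # k/2 switches per pod, k pods, for each of the two tiers
    core = (k // 2) ** 2
    return hosts, edge + agg + core

for radix in (48, 64):
    hosts, switches = fat_tree(radix)
    print(f"{radix}-port switches: {hosts} non-blocking host ports, {switches} switches")
# 48-port switches: 27648 non-blocking host ports, 2880 switches
# 64-port switches: 65536 non-blocking host ports, 5120 switches
```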
The caches don't talk to each other; they just talk to maybe a database or some storage. So traffic tends to flow from external to front end, cache tier, app tier, database tier, and there's never side-to-side traffic on that L2 segment. The logical conclusion is that your L2 segment is pretty much irrelevant; what is relevant is what you do about routing. In a lot of the discussions we're having around OpenStack, we kind of assume that routing is this thing on the side: it's either a completely external system or it's this L3 agent, which is a Linux VM with some rules pushed into it. The approach we've been taking from the beginning — and if you look at what's happening with the distributed virtual router work in Neutron, I think the community is really going in that direction — is that the key thing is what happens to routing. Do you have to ping-pong the traffic as it traverses layers, or can you do distributed routing? And how can you do interoperability between a Neutron network and some other network virtualization technology outside? So, taking the needs of large scale and taking routing as the primary factor, the approach we've gone down is built on an overlay with fully distributed routing and interoperability with existing network-based network virtualization technology. That's why. I'm sure there are many other valid network designs if you take a different set of assumptions; with different assumptions, you end up with a very different logic. Thank you.

Thank you very much, Pedro. Chris.

All right. Hopefully it's on. The benefit of going last is — yeah, there are still people out there, okay — the benefit of going last is that everybody has said some variation of what I would like to say, so maybe I'll try to change the pace a little. Some key pieces I think we need to reflect on are: what are we trying to achieve by giving users access to the network in a cloud? I think Neutron right now is fundamentally failing us. As Mike said at the start of this discussion, it's presenting users with a fairly low-level set of network-engineering-focused APIs. And then, as Pedro mentioned, what is the value of a layer 2 segment in this environment? We've built a system whose primitive is "give me a layer 2 segment," and I think that's something we really need to address to help move the whole state of the art forward. Definitely, decoupling the tenant's notion of how they manage the network from the underlying physical infrastructure is important — I think everybody's touched on that — so call that an overlay. There are a lot of different technologies you could use to build an overlay; some of them are part of the modern definition of SDN, and some of them have existed for decades. So I don't think that's in question. I think we should consider how we're going to simplify the life of somebody trying to use the cloud, expressing a simple desire to connect their applications to each other and to the internet. And that is not going to be by giving anybody direct access to the hardware — I think that's been made clear by a number of folks on this panel. But it does raise one interesting question, which is that most of what we've done with Neutron and with current cloud technology is based around IPv4. And IPv4 is a dead end.
IPv4 is out of addresses. IPv4 is pushing us into some of the corners we find ourselves in when defining network virtualization. There's some nascent work in Neutron to look at IPv6: how do we support IPv6? And one of the interesting things about IPv6 is that it becomes possible to give every instance in a cloud a publicly routable IP address. That really changes some of the notions of what we think network virtualization is there for. We're still trying to connect applications to each other and to the internet, and to make sure we have the right isolation and privacy around our individual application containers and their network traffic. But it's not clear that doing that in a way that requires building virtual private topologies is the right way, or the only way, to do it. So those are the things I wanted to point out that I think are different from what my fellow panelists said; for the most part I just agree with the great points they made. Thank you. Feel free to refute each other.

Thanks a lot, Chris. So, do you all agree with each other? It's no fun if we agree on everything. It sounded a little bit like it, but I think Chris brought some very valid points to the debate. Why don't you move a little more into the light?

If I can, I really wanted to disagree with Chris here on most of the things he said. I do respect his opinion, but I really believe there's nothing fundamentally wrong with the Neutron API. I think people often confuse the Neutron API with some Neutron implementations. I don't think the Neutron concept of a network is an L2 domain.

It's actually defined that way, if you read the documentation.

That's a very literal interpretation.

The API says layer two broadcast domain.

The concept of a layer two broadcast domain means different things to different people. If you talk to a traditional bridging guy, an L2 broadcast domain means an IEEE bridge. If you talk to somebody who wants to pass IP packets, an L2 broadcast domain means that when you broadcast to 255.255.255.255, packets end up on all those endpoints. The implementation I work on, for instance, does the latter without doing the former. Really, some of these things are open to interpretation. Second point, regarding the Amazon VPC APIs: a lot of the people we work with really want that as an administrative building block. The fact that you could have public IPs on every VM does not solve the issue of segmentation for administrative purposes. The thing with networking in a data center is that it's a form of access control. I've heard some people say, well, let's get rid of networking as access control; the only thing we'll do is have SSL sessions with full authentication, all my VMs will only speak SSL, and we'll move access control into key management. That's theoretically possible but rather impractical, because it would require a unified RPC infrastructure across everything you use. I do know one institution that does that, but they write every single piece of software they use, from the database on up. For the rest of us, I don't think replacing access control with SSL is a strategy.

To be clear, I wasn't suggesting that. Actually, not at all.
In fact, I was suggesting that access control is fundamental to the isolation of tenant networks. So it's not about SSL-based authentication to the application; it's about leveraging the existing network infrastructure we have to build isolation. And that could be what we've been doing. We started with VLANs, and we know those don't scale — that's part of the layer 2 issue I was alluding to, and that I think you mentioned as well. We've moved to layer 2 over layer 3 — that's the overlay. It's not clear that we need layer 2 over layer 3; I think we could do layer 3 over layer 3, and that's really what I was suggesting.

Then I don't think we actually disagree; I think we probably agree. L3 over L3 is what we've been doing from the get-go, and it's how you implement the semantics that people need. Nobody really cares about Ethernet headers in a cloud.

Yeah, that was part of my point. The other part is that with IPv6, it's not obvious that we even need L3 over L3. It's certainly possible that we could do plain L3 — just directly connected L3.

In theory.

In theory, IPv6 has mechanisms for mobility, but do they actually work well enough? I think it's a valid thing that we should push on and explore.

Yeah, it's time to find out. No, it's true. A meta point here is that we have been designing things based on the limitations of technologies, and we should think a bit more outside the box. Even Amazon's original EC2, right? The reason the whole model has NAT everywhere is because of technological limitations; I don't think they would have done it that way later on.

But I think Neutron provides really powerful and important virtualization with very little abstraction, and making the interface more abstract is for a certain set of users who don't know anything about network protocols and aren't network engineers. They know that these 400 workloads have to talk to those 200 workloads — call me when you're done.

Sure. That's the API. These are orthogonal concerns, and we could probably have a much longer discussion about each of them. But just to build for a second on what Dave was saying: a number of implementations I've seen around Neutron do literally take the Neutron network as an L2 broadcast domain. Maybe you're saying they shouldn't interpret it literally, but they do, and that leads to all kinds of constraints on the underlying implementation, because if you read what the definition formally says, it says it's a broadcast domain. So they have to honor that semantic even though it's the wrong semantic — a lot of people don't need it, and it limits a lot of underlying behavior.

If it hurts, stop doing it. Thank you.

So there are at least three people with questions; I don't know who wants to start. Okay, you're starting.

This is for Pedro and Mike. As performance becomes more of an issue, how does Juniper OpenContrail, or OVS, or the Cisco ACI approach deal with VXLAN offload on adapters?

Well, from an ACI perspective, we think VXLAN offload is essential. Doing the VXLAN encap in software really doesn't make sense.
I don't know if you were here for the previous talk — they had some nice performance slides, and you can probably reproduce the numbers yourself — but you see that you're getting one or two gig out of the server, out of your 10 gig NIC. It doesn't make much sense. The approach we have with ACI today is largely about leveraging the capabilities in the switch to do the offload, using VXLAN gateway functions in the switch to get full hardware acceleration and full line-rate performance with no overhead.

When you say the switch, are you talking about the physical switch, or about offload?

I'm talking about the top-of-rack layer being able to do it.

Right, but then I still have the overhead in the OVS switch.

It depends what you're using for your driver. If you're asking specifically about the ACI world, there are a lot of ways of approaching this problem. One way we do it is to use locally significant VLANs, which don't have any overhead, from the vswitch up to the top of rack, and then do the VXLAN at that point, if that's what's required. There's also a set of NICs coming on board — we'll be using them with ACI, and there are other architectures that will support them — where you do VXLAN offload in the NIC; that's also a viable approach. You need to do VXLAN offload in one of these places. It's crazy to do it directly in software.

And ACI will support VXLAN?

ACI will support it, yes. Right now, as I said, we will support VXLAN out of the vswitch in software if that's what people want to use, but we don't recommend it. We think you should use VLANs up to the top of rack, and we'll do the encapsulation in hardware. We're also, again, looking at doing it in the NICs as those NICs become more prevalent in the servers people have.

Pedro, do you want to add to this?

I have a different view. At the moment, the performance we measure with two active 10 gig links is about 16 gig of iperf throughput, which, if you account for encapsulation overhead, is just about the theoretical maximum. The PPS numbers, though, are not as good as the iperf throughput, and that's mostly because we typically measure with KVM and there's a performance impact from KVM; Linux bridge is also not very good for PPS. So what we're actually looking at is using technologies like Intel's DPDK to improve PPS performance — I think throughput is okay — and to improve PPS over what Linux bridge gives you.

I think this is excellent work too, and actually there are folks on my team who are also working on the DPDK project. I think that's a very valid approach. Tying DPDK properly into offload techniques on the NIC is still work that needs to be done in the community, though, so that's an important gap still to fill.

Wow, we've got Juniper and Cisco agreeing, once more.

Yes, this is Michael Delzer, American Airlines. For me, I do believe we need to start focusing more on IPv6, so I agree with the gentleman from Red Hat.
I do see a fundamental problem for us as enterprises looking at clouds, and one of the things I think you're not addressing — which I was hoping you would — is that enterprises need more than just my data center, or a subset of it, running OpenStack in an SDN. I want my entire corporation, all over the world, to be in an SDN type of environment. My labor cost today with a traditional network design is killing us financially. My intent is to have one concept that works, and the ability to create micro-instances of compute power anywhere in the world where I need that power. The idea that I have to create just one cloud, and that cloud is its own entity with no real relationship to these other clouds, is a problem for me. Also, the concept of going from a front-end web server to a database server as a siloed end-to-end path doesn't work for me either. I need a design for microservices. I've already spent 14 years building an environment of SOA services with tons of east-west traffic, so the concept of a linear flow does not work for me. So I need a focus on how we take this power forward: a mesh where things can talk to things, where at the PaaS layer I can provide a really abstract layer where people don't have to worry about networking. The COTS applications a lot of enterprises have to deal with are probably the ones that will live on OpenStack without using the PaaS layer, and they need a more customized way to say, I have these kinds of requirements. For companies like American Airlines, we need multicast. People who deal with radar data need to be able to take that one source of radar data and multicast it out to anyone who wants to consume it. So there is a need for applications to be able to consume things in a multicast format, in a microservices type of environment.

You realize you're asking for a lot. I don't know if I have a great insightful response, other than that one thing I intended to mention and forgot is that something we're also not doing well is connecting applications between clouds, which might touch directly on your point. It has come up several times over the last couple of days that, at the very least, what people want is an Amazon VPC-peering type of model between clouds, maybe within the same administrative domain with federated IAM. At least that doesn't seem so hard; I think we can do this.

Part of it is that the OpenStack layer has to evolve to present this image of one ginormous cloud that spans the whole world, and then the SDN layer beneath it also needs to grow. A lot of the work in the SDN space is young enough that we just haven't gotten there yet — a walking-before-we-run kind of thing — but I think federation, creating big things out of lots of little things, is the next big problem.

One of the advantages in our case is that we built overlays out of technology that's 10, 15 years old, which is called BGP L3VPN. The cool thing about it is that you can talk to any carrier and they'll give you an L3 VPN across the world, and you can peer between your cluster and your L3 VPN network, or peer between multiple clusters. So I do believe that dynamic routing is really important for OpenStack Neutron clusters.
Traffic does not necessarily begin and end in the cluster, and I believe dynamic routing is required. I think there's some work starting on dynamic routing in Neutron, so the community will get there; it just takes some time.

The last comment I'll add: I think people touched on the inter-data-center piece — there's a clear need there and there's work going on. The piece I wanted to address was the second part of your point. We're talking about a lot of new cloud architectures where you can blow up a lot of the ways you've been doing work, and we can potentially give you IPv6 addresses — public IP addresses — on every machine. There are a lot of new paradigms that might be very interesting, that you might be able to move to eventually, but they're not what you have today and not necessarily what you want out of the gate. So we also have to look at how we serve the traditional enterprise versus the "I'm building a new cloud from scratch and can start from a clean slate" case. We can't leave some of those designs behind. I think the way you'll end up getting served best there is by working with different commercial vendors. This is where the work we do on ACI comes in, for example: we have very strong multicast support inside the ACI fabric because we're tying things together at the physical switching layer, so we can solve those kinds of feature requests really well. Those kinds of requirements come from an enterprise environment; in a new-age cloud environment you may just say no multicast. But I think the way you're going to find those kinds of requests best served is to go vendor to vendor and see the different offerings that exist and how they check different boxes.

My ask was for multicast across the world. That was the point.

I don't think ACI is doing that yet. Not quite.

That's fair.

Hey guys, I'm Karthik with Red Hat. Coming back to some of the architectural comments that Chris and Pedro and a few of you made: there have been some fundamental discussions and debates within the internet community, going back 25 or 30 years, about the use of an IP address both as an identifier and as a locator — a routing locator, essentially. There have been numerous approaches over those 25, 30 years; very smart people have proposed ways to decouple the two, and most of those have fallen by the wayside. There were approaches in IPv6 in the early days around multi-homing and renumbering. I think it would be worthwhile, even if we don't necessarily change the Neutron abstraction as presented to the end instance, to at least look at the treatment of the underlying Neutron abstraction below the surface in such a way that we can decouple the identification purpose from the routing purpose. Some of the approaches proposed over the last 20, 25 years might be worth revisiting to help with things like mobility, and with some of the other things we're trying to do in a more artificial way right now in OpenStack. So I would really invite us to have some of those discussions offline and look at what has been proposed over the last 20, 25 years — I know there are a bunch of folks within the IETF community who go back to the early 90s. Just a quick comment there.
I think we all took the IP encapsulation approach, or the MPLS encapsulation approach, precisely to make a location and identification separation — to separate those concerns — but in a way that is application transparent. That's the nice part. I'm not sure about the vast number of approaches you mentioned, but I think some of them do require application changes, and that's not going to fly.

Not all of them. By the way, Karthik, we're actually looking at using LISP to do the cloud-to-cloud connection so that we can build mapping databases that span clouds.

Thanks. You're next.

Hi, Anthony from Comcast. I have some questions about how you plan to deal with external L2 networks that aren't exactly part of the cloud's specific domain but may actually be connected to the same L2 segments. A couple of specific use cases: being one of the people who did some of the IPv6 work in Neutron, I'm interested in forming an L2 adjacency with a router to do MT IS-IS, and potentially using some devices that aren't exactly L2-intelligent with an L3 gateway that's actually a service VM inside the cloud itself.

I can speak for my product. We do EVPN interoperability with existing physical devices, and you can create a virtual Neutron network and then turn off Neutron semantics in order to make it a pure L2 broadcast domain.

Yeah, like provider networks — turning off the L3 agent kind of thing.

Yep. More than just that: you turn off everything, so it becomes just an Ethernet network.

But the provider network is the only primitive we have in Neutron to do what you're asking. So much for the notion that L2 broadcast segments aren't used, right?

There are always exceptions, right?

Exactly. I wish they would just go away. There are always exceptions, but that's probably not your number one design point. I think you don't need to design around making L2 your primary consumption model.

Can I just say one thing? I think one reason we don't make more progress is that we're addressing several use cases that are all different. If we were all building large clouds, we would do everything with L3. If we're dealing with the enterprise, we would do things in a different way.

That's right. I 100% agree. The environments in many cases are different enough that you layer on a lot of complexity when you look at the enterprise environment that you can strip away in the cloud environment.

I think we are over time, unfortunately — we could easily have continued this discussion over drinks. Thank you very much to the panel, and to the room as well; very interesting questions. See you soon.
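A footnote on the provider-network primitive mentioned in that last exchange: it is an admin-created Neutron network mapped directly onto an existing physical segment via the provider extension attributes, which is how an external L2 segment (for example, one carrying a router you want an adjacency with) can be exposed into the cloud. A minimal sketch follows; the endpoint, token, physical network name, and VLAN ID are illustrative assumptions.

```python
import requests

NEUTRON = "http://controller:9696/v2.0"          # illustrative endpoint
HEADERS = {"X-Auth-Token": "<admin-token>",      # provider attributes are admin-only
           "Content-Type": "application/json"}

# An external L2 segment exposed into the cloud: this network maps to VLAN 203
# on the physical network labeled "physnet1", so instances attach to the same
# broadcast domain as devices outside the cloud.
body = {"network": {
    "name": "external-l2",
    "admin_state_up": True,
    "provider:network_type": "vlan",
    "provider:physical_network": "physnet1",
    "provider:segmentation_id": 203,
}}
resp = requests.post(f"{NEUTRON}/networks", headers=HEADERS, json=body)
print(resp.json())
```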