All right, so can you guys hear me? Okay? All right, so I'm just gonna go ahead and launch into it. So I'm Martin. I work at VMware. I was formerly at Nicira. And I'm gonna talk about virtual networking both in and out of OpenStack. I was doing press interviews before this, and someone was really confused about the title. They're like, what do vagabonds have to do with virtual networking? So let me just clarify this really quick. I spend a lot of time in airplanes, and I talk to a lot of partners and a lot of customers. So I'm the vagabond, and I'm talking about virtual networking. Vagabond isn't some open source project that has to do with virtual networking. I do wanna be super clear about that. So I've composed this talk into two pieces. The first piece is a little bit of a retrospective, which is how we got here. And then I wanna talk more about virtual networking, which will draw on the 400 or so customer visits I've had in the last two years, and just give you an idea of who is doing what, why they're doing it, and what's hard and what isn't hard, right? So let me start with a little bit of a retrospective. This is a picture from a talk that I gave in 2010 at the OpenStack Summit. And the world was a totally different place back then. I mean, there were real questions about the viability of OpenStack. So those of you who know who Ben Horowitz is, he was on my board. He's a principal at Andreessen Horowitz. And one of the biggest arguments I had with him had nothing to do with Nicira, where I was founder and CTO; it had to do with whether OpenStack would be viable. I'm like, it's totally gonna do something big, it's gonna be really important, and he disagreed, right? At the time, nobody knew what Open vSwitch was. There was no Quantum, no SDN; the term SDN had been coined about a year previously, right? 
So, you know, even though I gave this talk, it was to a very sparsely populated room, like less than half the seats were full. And if you look really closely, you'll see that those people weren't even paying attention. So networking wasn't this super hot topic at the time. All right, fast forward. Clearly a lot has happened, right? OpenStack is awesome. Ben conceded he was wrong, which was a big personal victory for me. We see a number of deployments. I don't have to tell you about that; you're here, you probably know about it or you've heard about it. But this has been a very big change. Not only that, before, when we would talk about networking, nobody knew what we were talking about. It was very different. And since then, OpenStack has become a beacon, certainly in open frameworks for networking, right? We saw the emergence of the Quantum layer, and we see an enormous ecosystem around it. And I actually don't think people understand how significant this is, so I just want to give this a crack. Networks have had standardization for a really long time, right? We've had interop for a really long time. We've had control plane standardization. We've had data plane standardization; it's called Ethernet. So standardization isn't new to networking, but we've never really had it for operations. Meaning, you can integrate at the control plane, you can integrate at the data plane. But boy, when it comes to things like managing a CLI, doing the provisioning, doing the OAM, the operations and management, it's always been proprietary, it's always been a lock-in. Always. And it wasn't necessarily a nefarious plan. I mean, people were focused on other aspects when these things were being developed. And the thing that Quantum does is, Quantum says, hey, listen, we actually have an abstracted interface to the network for management, for provisioning, for the ability to create new virtual networks and attach things to them. 
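To make that abstraction concrete, here's a minimal sketch of its shape, with hypothetical class and method names; the real Quantum API is a REST interface with network, subnet, and port resources, and a vendor plugin implements the data path underneath:

```python
# Toy model of the provisioning abstraction Quantum exposes: tenants create
# logical networks and attach ports through one API, regardless of which
# vendor plugin implements the data path. Names here are illustrative only.
import uuid

class VirtualNetworkAPI:
    def __init__(self):
        self.networks = {}  # net_id -> {"name": ..., "ports": [...]}

    def create_network(self, name):
        net_id = str(uuid.uuid4())
        self.networks[net_id] = {"name": name, "ports": []}
        return net_id

    def create_port(self, net_id, device_id):
        port_id = str(uuid.uuid4())
        self.networks[net_id]["ports"].append({"id": port_id, "device": device_id})
        return port_id

api = VirtualNetworkAPI()
net = api.create_network("tenant-a-web")
port = api.create_port(net, device_id="vm-1")
print(len(api.networks[net]["ports"]))  # one attached port
```

The point of the abstraction is exactly that the caller never sees whose switch, overlay, or VLAN scheme sits behind `create_network`.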
And if you look at the list of the people that participate, this is significant. I mean, this is a real community. And I think people are like, oh, you know, Quantum is great, it's a subset of OpenStack, but this is a really big change in the networking industry, independent of OpenStack. I'm gonna talk more about this later, but I really wanna point that out. So in the meantime, people talk a lot about SDN. At this point I'm not really sure what SDN means, but it's clear that people are really excited about it and they talk a lot about it. And whenever I give a variant of this slide in a talk, in the morning I say, okay, what silly thing can I Google that happened recently, and that pops up some article which shows how confused we are about SDN. This is what I actually pulled in New York not too long ago, a few months ago: there's now SDN investment advice on the Motley Fool. Clearly people are very confused about SDN if they're using it as an investment strategy. But this morning I did the same thing; I looked in my inbox and I got an analyst report that came from ONS, which is an SDN conference happening right now, and it's pretty clear that we're still trying to define what SDN is as a term. And we could talk about that later. There's clearly a lot of excitement, there's clearly a lot of hype; I don't think there's a lot of understanding about what SDN is. So for the purposes of this talk, I'm not gonna be talking about SDN per se. I'm gonna be talking about virtual networking, which I'll describe and give a very crisp definition of, and I'll be talking about OpenStack. And in the Q&A we can talk about SDN, and I'll most likely ask you to define what it means, and then we'll have a discussion. Okay, so on a more personal note: Nicira got acquired by VMware. 
And soon after that happened, well actually, around the time it was happening, VMware got very involved in OpenStack, and it wasn't just networking, right? For Grizzly, we're a top-10 contributor, and on the fourth we announced support for vSphere. Let me see, we announced the partnership with Canonical yesterday, I believe. On our side, we've more than doubled the number of developers that we have working on it. We've increased the number of developers working on Open vSwitch. So this is something that VMware has really gotten behind. And certainly we've been involved in it, and the Nicira side has been involved in it for a long time, but we continue to be behind it. And just to put this a little bit in context: after the acquisition, I hopped in an airplane, flew around, and talked to our existing customers and potential customers to see how they felt about the acquisition. For existing VMware customers and existing Nicira customers, generally the view was positive: ESX is a very important platform, vCloud Director is a very important platform, and getting integration with those was a big deal. But there was some concern about what would happen with our involvement and VMware's involvement in Open vSwitch. There was a lot of skepticism. And in fact our CEO, Steve Mullaney, who actually now runs the BU, has this story where he was talking to a customer and the customer's like, listen, I believe that you believe that VMware's gonna do the right thing. I believe you believe that, but my experience is totally different. So I just wanna spend one slide trying to describe why it makes sense, and then I'm gonna go into the virtual networking talk. Okay. All right. 
So I think at a macro level, this is all about the movement to the software-defined data center, software-defined infrastructure, whatever you want to call it: things that existed in hardware and were provisioned with atoms, physical atoms that you picked up and moved around, are moving to a software layer. So I'm a software guy. I've always been a software guy. I like to build software and I like to provide software to people. VMware is a software company. So let's say that you build something like a software networking layer, which is what I'm interested in building. You have this, and you want to provide it to customers in a way that they want to consume it. Now, having spent so much time on the road figuring out how to sell this stuff, it's very clear to me that there are two consumption models. My own consumption model is, I am a customer, I want to consume software. But one consumption model is the vertically integrated stack. I sold against this stack for four years at Nicira, right? I'll go in and I'll be like, listen, I've got this cool thing, and never mind the wires sticking out and the sparks and whatever, it is cool and you can modify it however you want, but it's a piece, it's a component. It's a component of the software-defined data center, not the entire thing. And a lot of customers, very legitimately, will be like, listen, this is not what I want. And I would say this is probably 70% of the engagement cycles I see. This is, forget VMware and forget OpenStack, think about just converged hardware infrastructure. This is a very common sale and consumption paradigm that some people like. It's not me, it's not what I would consume, but it's valid, and this is something that VMware is very comfortable with and has been selling into for a long time. However, there's another consumption paradigm. 
By the way, I made that slide, that's why it's kind of crummy. The other model is horizontally integrated. Meaning, I've got good developers, I think I differentiate by technology, I like or don't like open source, and I want to compose things on my own. This is the standard horizontal consumption model. This is what I'm used to selling; this is what I've been doing for four years. It's a much longer sales engagement cycle, it's very technical, but people like it for a reason, right? And this is where OpenStack has fit very well traditionally. And so from the standpoint of VMware, you want to sell to as many customers as possible. So certainly you do the vertical model, which VMware is very comfortable with, and now there's interest in also selling into the horizontal model, which is: we think we've got good software, we think it's pluggable, it can be competitive at an independent component level, so why not let people consume it that way as well? That is the interest behind aligning with OpenStack. OpenStack is a fantastic way to horizontalize the industry so you can have these best-of-breed components. I mean, even in the Nicira days, we had a closed source controller, it wasn't open source, but it was about open interfaces. It was about getting behind OpenStack, which we still want to maintain. We were actually pretty successful with that model. So let me go on to networking. I'm going to talk about a trend that has nothing to do with OpenFlow or SDN or even network virtualization. I want to talk about a trend that's been happening for the last 10 years. And I think this is one of the most significant trends in networking that nobody's talking about. 
So if you look at a lot of the big data centers that are built, whether it's a Web 2.0 data center or infrastructure as a service or platform as a service or whatever, you have this trend where you have these very simple data center networking fabrics, and the functionality that's traditionally been in the network is in x86 at the edge. So I'm going to give you the Web 2.0 example. In a Web 2.0 data center, very often you've got an L3 ECMP fabric from whatever vendor you decide to build it with. This is a great way to build a fabric: you can build it in a way that's not oversubscribed, you've got very quick convergence, you've got great load balancing. It's technically a great solution. And then the application or the web server or the ADC, the load balancer, or something else is implementing functionality like load balancing or security or isolation or whatever. This is done. There's no point in arguing about it, because some large portion of the workloads today run under this model. And the reason is, the model's awesome. You get to put stuff in software, you've got software innovation cycles, right? It's at the edge, so you don't have to solve the aggregation problem. If you think about the amount of networking that's already happening on the end host, it's a lot. You've got your stack, you've got all your buffering, you've got the kernel interface, you've got all the timers and checks, and all this stuff is already happening at the edge. And the closer you are to the application, the more semantics you have anyway, which is why a totally valid way of building a data center is to have the application do as much as possible. I mean, this is basically the end-to-end argument, right? This is a good place to put functionality. 
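As a toy illustration of why an L3 ECMP fabric load-balances so well, here's a sketch of ECMP path selection (switch names are made up; real switches hash in hardware with vendor-specific hash inputs): hashing the flow's 5-tuple keeps each flow pinned to one path, avoiding reordering, while different flows spread across all equal-cost next hops.

```python
# Sketch of ECMP next-hop selection: hash the 5-tuple, pick one of the
# equal-cost paths. Same flow always hashes the same way.
import hashlib

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return next_hops[digest % len(next_hops)]

spines = ["spine-1", "spine-2", "spine-3", "spine-4"]  # hypothetical fabric
a = ecmp_next_hop("10.0.0.1", "10.0.1.9", "tcp", 49152, 80, spines)
b = ecmp_next_hop("10.0.0.1", "10.0.1.9", "tcp", 49152, 80, spines)
assert a == b  # packets of one flow stay on one path
```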
And if you look at a lot of these new data centers, it's exactly that, right? You've got x86, whether it's a middlebox or a server, and you've got an L3 fabric. So this is clearly a trend. I'm personally familiar with dozens of these, and they're some of the largest data centers in the world, but this isn't just a service provider big data center thing; there are a number of enterprises doing this at all sorts of scales. Because it's actually a great way not just to scale, but to build functionality in software. Software's awesome. There's kind of a basic problem, though, which is that whenever you build something like that, it works for that specific application, but it's difficult to extend it to support all workloads, right? If I have a VM and I give that VM an Ethernet NIC and I'm running a legacy operating system on it, it's gonna expect some semantic, some service model from the network. It's like, well, you've given me an Ethernet NIC, I want L2 or something like that. So this doesn't work for everything. But it certainly is setting the bar for the cost model, the software innovation model, the flexibility model, and the hardware independence model. If you want something to compare yourself against, go look at one of these, because it's a great way to build systems. So that's the first trend. Okay, the second trend. This one's even more shocking to me. The first trend has been happening over the last 10 years; it has nothing to do with me, it's a trend happening out in industry. This trend also has nothing to do with me; it's been happening in industry, and it has to do with VMware. Which is: last year, the number of virtual ports, that is, ports attached to VMs, globally exceeded the number of physical access ports. 
That means there are more virtual ports out there than there are physical ports at the access edge of the network. And, oh by the way, those virtual ports are on x86. Right? I mean, it's pretty easy to do the numbers. The majority of workloads are virtualized, and best practice is three vNICs per VM. So the numbers pencil out pretty easily. But the implications are pretty phenomenal. It's on x86. You often already have a couple dozen VMs sitting on one of these servers. You already have to do an awful lot of networking there anyway to handle communication between them. Right? So you're already doing an awful lot of networking on the server at the edge. So I just want to put virtual networking in quick perspective here. Independent of who's doing it, it really is an alignment of these two macro-level trends. On the top, you've got: all of this functionality is moving from the physical network into x86. And the second trend is: we have all of these virtual ports that we have to add functionality to anyway. And so I'm gonna do a quick animation here. What you want is the cost, flexibility, and innovation model of that first side, but you want it for every workload. Right? You don't want to do it per application in a way where you may be only partially supporting it. And so that's it. If you have a virtual networking layer, the idea is you can have, from the top side, whatever hardware platform you want; again, L3 ECMP, great model. But this is independent of what happens on the physical network; the physical network is its own battlefield. You want to have new workloads share a common layer, and of course you want to have existing enterprise apps. So you maintain the cost model, but you support pretty much any workload. 
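Here's the back-of-the-envelope version of that port count argument, with made-up but plausible numbers (none of these figures come from the talk itself):

```python
# Rough arithmetic behind "virtual ports exceed physical access ports".
# All numbers below are illustrative assumptions.
vms_per_server = 24            # a couple dozen VMs per virtualized host
vnics_per_vm = 3               # best-practice vNIC count per VM
physical_ports_per_server = 4  # e.g. 2x data, 1x mgmt, 1x storage uplink

virtual_ports = vms_per_server * vnics_per_vm
ratio = virtual_ports / physical_ports_per_server
print(virtual_ports, ratio)    # 72 virtual ports, 18x the physical count
```

At anything like those densities, virtual ports dominate on every virtualized host, so the global crossover follows directly from the virtualization rate.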
So that's the big idea of virtual networking. You've probably already noticed I'm going to go through this very quickly. The idea is: just like in the days of compute virtualization we decoupled the operating system from the server, but we haven't decoupled it from the network. So network virtualization says, listen, I'm gonna run stuff within the server, and that way VMs, instead of connecting to a physical network, connect to a virtual one. Today, if you look at the ARP cache, if you look at the default gateway, often these are physical addresses that these things are attached to. So if you take a VM and you move it, its view of the network changes. And this is where a lot of the operational complexity comes from. So instead of saying, I've virtualized an operating system, I've pulled it into a VM, and it still talks to a physical network, I'm gonna have it talk to a virtual network. Now when VMs spin up, instead of attaching to physical networks that are kind of a pain to manage, I attach them to a virtual network. It looks exactly like a physical network, so I can support any workload, but it has the operational model of a VM. You can grow it dynamically, you can shrink it, you can move it around, but it has the same high-level system design principles that I described previously. And you've all probably heard the pitch: you get really complex networks and then you provision them really simply. Before, it was a really complex network, and the idea is that instead of having to string wires and this and that, I can just, in software, create arbitrary topologies, L4 through L7 services, yada yada yada, right? That's the big pitch of virtual networking. Going forward, I wanna talk about what this means. And the first thing I wanna say is, this is not science fiction. 
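To make the decoupling concrete, here's a toy sketch (hypothetical data structures, not any product's actual model): the VM attaches to a logical port, and a separate binding records which hypervisor that port currently lives on, so migrating the VM moves only the binding while the VM's view of the network stays fixed.

```python
# The VM sees only the logical side: its network, address, gateway.
# A separate binding table maps logical ports to physical locations.
logical_ports = {"vm-1-eth0": {"network": "tenant-a-web", "ip": "192.168.1.5"}}
port_location = {"vm-1-eth0": "hypervisor-3"}

def migrate(port, new_host):
    port_location[port] = new_host   # only the physical binding moves

before = dict(logical_ports["vm-1-eth0"])
migrate("vm-1-eth0", "hypervisor-7")
assert logical_ports["vm-1-eth0"] == before  # logical view unchanged
print(port_location["vm-1-eth0"])
```

This is exactly the contrast with attaching a VM directly to physical addressing, where a move changes what the VM sees.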
There are so many deployments of this stuff today, and there have been; we've been in production for years now. So for the rest of this talk, I want to talk about experiences with organizations that are trying to consume this type of technology: why they're doing it, what works, what doesn't work, and so forth. Dela Daganets is one of the best movies ever. And I think it actually underscores the most important lesson of virtual networking. If you think about the SDN space and what people talk about, it's normally commoditized hardware or flexibility of software or operational savings. And in my experience, it's none of that. It's one thing to really like the technology, it's another thing to think it's cool, and it's a totally different thing to actually buy it. And if I look back at every one of the deals and experiences I've had, what makes people buy virtual networking is speed. That is the primary pain point I'm solving. It's not, it saves me a lot of money. It's not, I can innovate in software or program my infrastructure. It's: something took me a long time before, and now it takes me less time. Is that clear? Lower operational overhead is definitely important, but it's a secondary value proposition. Lower CAPEX is definitely important, but it's also a secondary value proposition. Meaning, at the end of the day, if I'm not taking something that used to take a long time and making it short, it's probably too early to consume a technology like this. So let me talk about a few use cases. Speed is the driver for why you buy it, but let me talk a little bit about what people are using it for. Okay, I didn't make this slide, that's why it looks better. The primary use case, across all of them, is: how do you take a data center and make it multi-tenant, for some definition of multi-tenant? 
Whether that's infrastructure as a service or a dev cloud, this is focused on a data center where I've already got virtualization, and there's a disparity between how long it takes me to spin up and configure VMs and how long it takes me to configure the network. There are certainly dozens of deployments focused on just this one problem. Another use case, and by the way, when I talk about a use case, I mean something these people deploy independently, is DMZ virtualization. This is somebody saying, listen, my data center is fine. I use VLANs, they're fantastic. But I'm not comfortable with how long it takes me to deploy the DMZ. And actually, there's probably more deployment and interest in just DMZ virtualization: the data center is fantastic, I'm running OpenStack with VLANs or the Open vSwitch plugin or whatever, and it's fantastic. But dealing with all the L4 through L7 services, where I need a different service chain for different customers, and I'm aging out these appliances or whatever, I need to somehow provision and virtualize that as much as possible. Especially in the enterprise, this is an enormous use case. And the last one, which is independent of the other two, is: listen, my DMZ is fine, my data center is under control, but I'm having a very difficult time onboarding customers, for whatever reason, because I don't know how to handle overlapping addresses, or I don't know how to do L2 extension, or I don't want them to have to download something and configure it, or they don't have some special type of hardware that I need them to have, or whatever. So I just want to point out: cool technology is flexible, I can do all sorts of stuff, but if I really do a cross-section of the people deploying this stuff, 80% of the time it fits into one of these use cases. 
And in all of these, we've got pretty significant deployments. Okay, so I want to talk about hurdles. I had a customer meeting in Europe. We'd been talking with this company for a long time, and before I went in to have the meeting, I guess they did some preparation work, because when I sat down it was very clear: listen, to do this, you need this type of hardware in this configuration. And to do that, I can use this type of hardware and this configuration. To do this thing that you can do with virtual networking, I need this type of hardware and this configuration. So it's very clear that anything you can do with virtual networking, if you have enough resources and enough time and enough people, you can do. We're not bending the laws of physics, and we're not changing technology. It's very difficult to come up with use cases for early compute virtualization that you couldn't have addressed with an operating system; virtualization has been around for a really long time, we've had virtual memory, right? But it's very, very different to give an operational abstraction, right? So if I think about the number one hurdle, and this probably generalizes to cloud, but certainly applies to virtual networking: if there isn't an understanding that all of this technology is just about provisioning and automation, then the organization is probably not in a position to accept this stuff. Now, of course, if you look through each one of these slide decks: well, in order to do L2 you use VPLS, and for L3 you use VRFs or whatever. Of course, you need different configurations, and you need people to do this, and you need different types of components. Clearly, this is a resource-intensive, expensive way to do it, but you can do everything, right? And it's funny, I love press. 
Press is fantastic, but a question you always get is, what is the number one competitor to X, Y, or Z? And independent of where I am in a company, the number one competitor to something like virtual networking has nothing to do with the incumbents in the field. It's all about: here's one way of doing it, and people are very comfortable with that way. And it's very difficult to differentiate against a million people with thick tech books, right? Because they can figure out how to do this over a period of time. So it's important to realize that the claim is not that we can do something new that you couldn't do before. The claim is that it follows an operationally efficient software model. All right, so let's assume customers understand there's a lot of value in this. They've gotten past the status quo inertia, and they're like, okay, we wanna do this, we've got this pain. So now you're like, okay, let's figure out how to install this stuff. The number one hurdle, by far, even with a willing customer, and I'm sure this is the case for cloud more broadly, is that organizations generally just aren't structured to consume this type of stuff. I mean, the amount of turf war over who owns the virtual network edge: is it the networking guy? Is it the virtualization admin? I've seen it all. I've seen whole teams fired, right? They're like, listen, if these guys won't do it, why don't we fire them? I've seen the creation of new organizational entities: listen, we're gonna create a new team. I've seen skunkworks projects: okay, we're actually creating a new project, and we're gonna hire people into it from the organization, but we're not gonna tell anybody what we're doing. And all of these are kind of abysmal contortions to get around the need to just deal with the internal organizational dynamics. 
And it's very interesting, because a technology with a cool value proposition is still very difficult to consume. So the third hurdle, and it's actually really shocking to me how significant this is, is that we're still in the architectural war. Before you have products, you talk architecture, because you don't have products. And when you talk architecture, it's kind of who can say it the loudest and who's supporting what cause, and everybody's confused. Imagine if you went to buy a car and you walked onto the car lot and the guy's like, okay, listen, you've got this car, you've got this car, and the ways we build the engines are totally different, and we use this paradigm for building the engine. Come on, this is silly, right? You get in the cars, you drive the two cars, and you compare them to each other. But for the last five years of network virtualization and SDN, it hasn't been about driving cars, because there really haven't been that many cars. It's been about having these idealistic arguments about which architecture is better. The same thing happened in compute virtualization. Do you guys remember the wars between dynamic translation and paravirtualization? I'm sure marriages ended over that debate. People really cared: the dynamic translation side is like, you don't have to modify the guest and you can do anything. And the paravirtualization side is like, but it's slow, and the I/O's not gonna work. And you know what people don't talk about anymore? It doesn't matter. At the end of the day, if you have good products and people are using them, you don't argue architecture. You're like, listen, this is fantastic, this is fantastic, let's go ahead and try them and compare. And fortunately we're getting to that point. The last five years have been so much craziness, right? Like, I don't know what SDN is. 
I was there when the term was created. There's total confusion between mechanism and solution. SDN actually had a meaning at one point, and that meaning was: a way to build systems. There was no customer value in it by itself. It would make no sense to say, hi customer, I've got SDN. That's like saying, hi, I've got Python. Good for you, but that's not what I want. But this is because we're still talking about it instead of comparing products; we're still arguing from core architectural points. There's also mass confusion between implementation and product design, which is: I've got this implementation, I tried it out, it doesn't work, therefore the approach sucks. This is immature, and there isn't a lot of information, so nobody's really at fault. But it's really important for everybody to realize that this is what happens when you don't have anything: you talk about it and you argue. And once you have products, it'll mean much, much less. Maybe in some dark corner of the internet we'll still be arguing over it, and I'm certain I'll be part of those arguments, but it's going to be good once the industry evolves. So I've just got two more slides. Now let me tie network virtualization to OpenStack. I actually think OpenStack has been incredibly significant for SDN in general. And there's a reason that I am here at this conference talking to this audience. I've been involved in OpenFlow; I wrote the first version. I've been involved in SDN. And at the end of the day, if you look at who's deploying stuff and who's using stuff, it's actually here. We can talk about this stuff all we want, but until you have products, until you have software, until you cut your teeth, until you have customers, it's just more discussion. 
And I think that the networking industry is starting to recognize how significant OpenStack is, which is why things like Quantum are as diverse and as broadly supported as they are. And I believe it's going to continue to be that way. So I think that when it comes to networking, OpenStack really is somewhat of a beacon. And if you're interested, of course, people can sell you stuff, including me, but that's actually not the point of this talk. My last slide is: I just want to let you know I'm not just a spokesperson, I'm a user. We've been using OpenStack for quite a while. We've got a very, very large deployment, and it's a great technology, certainly for trying out some of these core networking functions. And with that, I will go to questions. Are there questions? Yes? [Audience question about the relationship with OpenDaylight.] So for those of you who don't know, OpenDaylight is an open controller platform backed by a number of big guys in the industry; we're a gold member, along with Cisco, IBM, HP, Red Hat, and so forth. There are many open source controller efforts out there, and I think they should all be compatible with OpenStack. So OpenDaylight is about how you build an SDN controller for managing networks. And for example, one of the things that we've offered to contribute to OpenDaylight is actually a Quantum plugin, so that it can be compatible. I'm not sure I can comment beyond that, other than it's nice to see more open source efforts in this area. And we're actually the ones that stepped up to commit the plugin. I think that's the right way to do it. I mean, ultimately, when you look at orchestration frameworks, you want to abstract out pieces of infrastructure, and OpenDaylight is something you abstract out. 
I mean, I think this is one of the primary values that OpenStack provides. Before, we had atoms, and atoms have very clear lines for plugging things into each other, right? Now we're going into software, and with software infrastructure you need those same clear lines to mix and match pieces. And so any piece that fits within that, like OpenDaylight, should be abstracted out by something like Quantum. I mean, this is exactly what happened: we sell a virtual networking solution that you slot in under Quantum. Yeah, so. So I'll tell you what OpenFlow was created for. OpenFlow was created as an interface to control physical hardware, and at the time the focus was the campus. It came out of my thesis, which was Ethane: low-level control of the campus network for security purposes. That's what it was created for. Since then it's been generalized into a general interface to program switch hardware. So I would say it applies to any environment in which you want a generalized interface to control switch hardware. But I'm going to be very clear about one thing: OpenFlow doesn't do anything new, and it doesn't do anything that you couldn't have done before, right? Let's go back to the days before OpenFlow. In those days you could go get an SDK from whoever, write software against the switch, and expose it to a remote system. All OpenFlow does is standardize that. So when you talk about use cases, the use case is not "can I do some cool new stuff," because you've always been able to write software for switches; there's software running on switches. OpenFlow asks: if I have somebody else's switches, can I control them from a unified platform? I want to be very clear about this. Again, having written the original version: it's not a new technology, it's a standard. Use a mic, please. Is it still applicable today?
And do you see that evolving, with use cases developing around OpenFlow? I want to make a distinction: SDN versus OpenFlow. OpenFlow was created for that purpose, I understand, but today and tomorrow, where do you see it going? This, to me, is a question of customer need. Inasmuch as customers are interested in having an open interface to switches that third-party vendors can talk to, it's applicable. In my experience, this hasn't emerged, but that doesn't mean it's not there. I have worked with large companies that have deployed OpenFlow networks and seen value. Often they're just doing something very similar to MPLS. But again, I want to make the distinction: the reason you use OpenFlow is not that you couldn't do something before; it's that you want to do it in a way that's decoupled from the hardware. Whether that's relevant or not, these aren't the conversations I have with customers, so I'm not in the right position to say. I mean, we deal with vswitches. Go ahead. Apologies for pounding on the same topic, but to your very point, and you brought up a very good point about the difference between mechanism and solution: virtual circuits have been around for what, 20 years now, right? We had ATM virtual circuits and MPLS LSPs, and we could have controlled them programmatically. Sure. And we understand that now that we have virtualization, suddenly we can control things in software. I'm so glad you brought this up. You're absolutely right. Okay, so I want to go back to compute and server virtualization. We've had virtualization for a really long time in the compute space, right? Remember when VMware or whatever was just getting started: we already had virtual memory, and of course we had processes, and we virtualized all this stuff. But you didn't have a virtual machine abstraction.
You've had tons of virtualization in networking, for decades, but you don't have a virtual network abstraction. It doesn't exist, and that is the operational abstraction that operators use. They operate on networks, not on independent components. It's very difficult to structure a data center where you can do both L2 and L3 and arbitrary ACLs and arbitrary L4 through L7 interposition. It's not that you don't have the technologies to do that; you don't have the abstraction. So you're exactly right: we've had virtualization in networking for a long time, and we've certainly been able to program it, but we don't have a virtual network abstraction. And if you want to change the way you operate something, you need an abstraction: something I can hold on to, read counters from, snapshot, rewind in time, and move, because that is the way I think about and manage networks. Short of that, I'm trying to recreate that abstraction by hand. A quick follow-up to that. If I interpret you correctly, what you're saying is that the new operational model is, fundamentally, that networking makes business sense because it's a communication model that follows the VM abstraction we have and suddenly ties the whole thing together. I'm just saying that if your goal is to change an operational model: prior to compute virtualization we had tons of compute virtualization, but we didn't have VMs, and as a result you had a bunch of disparate virtualization technologies that virtualized I/O and virtualized memory and virtualized CPU, and that's great, but it doesn't affect my operational model.
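The abstraction being argued for here, something an operator can hold on to, read counters from, snapshot, and move, can be sketched as an object. This is a toy illustration under my own assumptions, not any product's API.

```python
import copy

# Toy illustration of a virtual-network abstraction: the operator's
# handle aggregates state, so stats survive even as workloads move.
class VirtualNetwork:
    def __init__(self, name):
        self.name = name
        self.ports = {}       # port name -> packet counter
        self.snapshots = []   # point-in-time copies for "rewind"

    def attach(self, port):
        self.ports[port] = 0

    def record(self, port, packets):
        self.ports[port] += packets

    def migrate(self, old_port, new_port):
        # Counters follow the workload; nothing is lost on a move.
        self.ports[new_port] = self.ports.pop(old_port)

    def total_packets(self):
        # Aggregate view of the whole network, not one component.
        return sum(self.ports.values())

    def snapshot(self):
        self.snapshots.append(copy.deepcopy(self.ports))

net = VirtualNetwork("tenant-a")
net.attach("vm-1")
net.record("vm-1", 100)
net.snapshot()                     # state we could rewind to
net.migrate("vm-1", "vm-1-moved")  # workload moved; counters intact
net.record("vm-1-moved", 50)
```

The contrast with the hand-built "crappy virtual network abstractions" described in the talk is exactly the failure modes listed: stats that don't aggregate, and state that is lost when things move.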
It's the exact same thing with networking: we've got awesome virtualization, and you can solve awesome problems with it, but I don't have a virtual network abstraction, so I can't change the operational model, because that's what operators operate on. In many cases it's funny: if you look at a lot of these large cloud deployments that are using traditional networking technologies, these guys are hand-creating crappy virtual network abstractions. That's what they're doing, but they're not all that abstracted, because it can only be L2, or you don't aggregate stats, or you lose things when things move around. So all network virtualization is doing is taking the virtualization technologies that already exist, like tunnels, like overlays, like whatever, and saying: here, a virtual network abstraction. I'm so glad you brought up that point, because it's really important in this discussion. Excuse me, I'm wondering what your thoughts are on the concern, I don't know if it's valid or not, about doing a lot of the networking heavy lifting in software on x86 versus doing it in silicon. What are your thoughts on that, and do you see the heavy lifting of virtual networking being moved into hardware, with switches and routers aware of STT and so forth? Yeah, yeah, so this is a good question. I want to take it back to the first point I talked about, which is: today, I don't know if this is exactly true, but I believe if you spin up a VM, there's a 30% chance that all of the heavy lifting is already being done in software on x86. And we've been doing this for ten years. This is how you do it in platform-as-a-service; this is how you do it in a lot of these web clouds. And look at it: you've got great throughput, you've got security; you're doing all of these things on x86 today already. So certainly, just from an intuitive perspective.
Remember, on x86 you don't have aggregation. I don't have to do 48 times 10 gig; I've got to do one times 10 gig, and I already have to run the entire software stack at the edge anyway, right? Just because I went from application-level segmentation to network-level segmentation, it's not like it's more work. So if you look at a cross-section of, again, not the data centers I'm typically working with, but other ones where you already do this, you can make very, very simple arguments that this is a fantastic way to do it. The convergence properties are great. You do all of this phenomenal stuff at the edge already, and then you don't have to have the discussion anymore. When it comes to networking, we're not comfortable with that yet, and we're still early in the technology cycle, right? So implementations may not do the right thing. I'll tell you where STT came from. Way back when, when people heard "overlay" they would puke, just because they'd been doing networking for a long time, and anytime people deal with tunneling it's been a very complex thing. Now, the reality is, if you look at a large web data center, it's an overlay, right? It's an HTTP overlay, but it's definitely an overlay, and it's definitely on x86. So we started looking at the performance: okay, if I tunnel from a vswitch to a vswitch, what's the most expensive thing? Well, it turns out the most expensive thing is the interrupt between the guest and the hypervisor. That's very expensive: you thrash your TLB, you've got your memory copies; this is an expensive operation, right? So when you do something like TSO, TCP segmentation offload, I'm sending a 32K frame to the NIC instead of a 1500-byte packet, right? I've got this massive interrupt coalescing. But anytime you throw a tunnel header on there, all of a sudden the NIC doesn't know how to interpret the packet, and I can't get hardware segmentation.
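The interrupt-coalescing arithmetic behind this TSO point can be sketched in a few lines. The MSS figure is a typical value for a 1500-byte MTU, assumed here purely for illustration.

```python
# Why TSO matters at an x86 edge: the guest hands the NIC one large
# frame, the NIC emits MSS-sized wire packets, and the expensive
# guest/hypervisor crossings are coalesced into one.
MSS = 1448  # typical TCP payload per 1500-byte Ethernet frame

def wire_frames(send_bytes, mss=MSS):
    """Frames the NIC emits for one large offloaded send."""
    return (send_bytes + mss - 1) // mss

big_send = 32 * 1024            # the 32K frame mentioned in the talk
frames = wire_frames(big_send)  # 23 packets on the wire
crossings_saved = frames - 1    # vs. one crossing per packet
```

STT's trick, described next, was to make the outer tunnel header parse as TCP so that commodity NICs would apply exactly this offload to tunneled traffic too.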
So people were like, oh, this is this big networking problem, but it had nothing to do with networking; it had to do with the interrupt between the guest and the hypervisor. And then, we were actually sitting around the coffee pot one day, totally joking: I wonder if you could make it look like a TCP packet, and then the NIC would interpret it correctly. And so then we came up with STT: you make the outer header look like a TCP packet, and now you can send a 32K frame all the way to the NIC, the NIC does the segmentation, and you're done. So this is an end-system solution solving an end-system problem that has nothing to do with the network, but then it all becomes a networking discussion. So I think, technically, clearly you can do it on x86, because we do it all the time, and I think that trend is settled if you look at the majority of systems being built. I think it's just time we get comfortable with this happening on the networking side. I'm sorry for that long-winded and passionate answer, but it's a really important point. Any other questions? Please. On VLANs: there's a certain maximum in the standard, and I thought I'd heard that network virtualization would allow you to go beyond it, I think it was 4,096, to allow scalability. So that was one thing that maybe you could do with network virtualization, if that's true, that you can't do in a normal hardware environment. Yeah, so I think the coolest thing about network virtualization is that at some point people are going to stop worrying about protocols and start worrying about virtual network abstractions. The reason you have things like VLAN limits is that at some point someone said: this field is this large because some hardware interprets it as this big, right? And so you've got these limits that get baked in. But let's imagine OpenStack, and let's imagine Quantum.
With that, you don't know what protocols are running underneath, and yes, that's important from an operations and management standpoint, but it's not important from an operator standpoint. If all of your operations are against a virtual machine or a virtual network, then independent of how that's implemented underneath, it's still going to be the same. So if you have some sort of limit, you can change that limit without changing your operations. And so yes, VLANs have limitations. There are probably a bunch of ways to get over those limitations. New protocols will have limitations too. But the macro-level point here is: if we could just stop getting mired down in mechanism and detail, and focus on abstractions and value, we wouldn't have to have these discussions that end up going nowhere. And kind of the macro-level point of my talk is that I think we're actually starting to get there. I think we're actually starting to move past "my architecture is better or worse than your architecture" to actually talking about products and systems that are being built. We went through the same pain in the compute virtualization phase. The same pain. But I think we're coming out of it. I'm glad you brought that up. Any other questions? Cool. All right, thanks very much, guys.