Well, perfect. Good afternoon, everybody. My name is Guido Appenzeller. I'm from a company called VMware that actually makes OpenStack products. We really like OpenStack, contrary to some popular beliefs. But what I want to talk about today is networking — and very specifically, how the way we do networking in data centers, enterprises, and almost any kind of organization is fundamentally changing. I want to walk you through a couple of the changes I'm seeing, and I mostly want to talk about what's changing in the industry, not about any particular product. At the very end, I have one slide where I talk a little bit about what we're doing with NSX to address these challenges, but the main part is really about the revolution that's happening in networking right now. Before I jump into that, let me talk about a different revolution: the revolution in compute. We're currently moving from a client-server model to a cloud model. That's only the second time in history that we're fundamentally changing how we deliver IT. We started out with all kinds of random computers, and then mainframes emerged as the first widely adopted model. That was one big monolithic system: one company making the silicon, making the system, making the software on top — everything vertically integrated, probably sold to you by a friendly IBM rep with a red tie and a black suit. But then the client-server revolution happened, and these vertically integrated systems started coming apart. Suddenly I could take an Intel CPU in a Dell system with Windows on top, or alternatively an AMD CPU in an HP system with Linux, and mix and match. Software was no longer tied to particular hardware. And that completely changed how we did compute. There was this Cambrian explosion of creativity.
And basically all the modern software companies got started during this time. Now we're in yet another change of the compute model. We're moving from the idea of client-server, where I necessarily have to operate the whole stack, to a model where the hardware may be operated by somebody else and I'm just consuming it as a service, putting my own software on top. That's what we typically call cloud. This can be a private cloud — say I'm running an OpenStack cloud on premise, but automating it so the end consumer doesn't have to worry about hardware anymore — or a public cloud, where I go to Amazon and sign up for a service and they do all of this for me. This change in how IT is structurally delivered has had huge implications for us. In 1996 — I was a grad student back then — I was setting up servers. And back then, how would you set up a server? You would take your terminal and, with a serial cable, plug it into the server. You'd take a CD — does anybody remember those round silver things? Exactly. They had the OS distro on them. And it would probably take me two to three hours before I had a server up and running. Today, when I go to a private cloud like OpenStack, or I go to Amazon, this is down to minutes. I pick an image, I spin up that image, and a few minutes later I have a running server. It's unbelievable how much better this has made our lives, and the amount of productivity gained during that time. There are studies by Gartner suggesting the typical server administrator has become about ten times more productive over a period of about ten years. So we made huge progress. What I want to talk about here, though, is networking. In networking, we started with exactly the same model. Classically — even today, in many cases — if you buy a piece of networking equipment, it's a box. Chip is from one vendor. System is from the same vendor. Software is from the same vendor.
Until a few years ago, that was probably still the dominant model of how networking was delivered. And the fact that the technology stack — the way we build systems — didn't change during that time had a profound impact on how we run networks. If I go back to my grad student days in 1996, configuring a switch looked pretty similar to configuring a server back then: over Telnet, typing CLI commands until I had the network configured, each switch managed separately. Then you compare that to 2010 — or, for many organizations, still today — and pretty much nothing has changed, right? You still connect to that switch and manually configure things, network element by network element. A method that's highly error-prone and very, very inefficient. There's actually one thing that changed: we went from Telnet to SSH. But that is pretty much the innovation in network management over a decade here, or fifteen years. It's very sad. Now, somewhere around 2010, the monolithic networking systems we had built slowly started coming apart. The chips were suddenly coming from merchant silicon vendors — Broadcom today is the dominant one; they're sort of the Intel of the networking world. And the software started to come separately from the hardware. There are different models for that. There's the model that the folks from Verizon were talking about when they were here, where you take a company like Big Switch that provides software that runs directly on the switches — bare metal switching. There's another model where you say, let's actually move this into the hypervisor: let's have a Neutron plug-in of some type that talks to the hypervisors, and then, with something like OVN or NSX, we can create a network overlay that solves this problem.
From 2008 to 2010 I was actually at Stanford as a professor, and I ran a little project there called OpenFlow. We had Kate Green visiting — she was a reporter from the MIT Technology Review — and she asked us what we call this, this general movement, this general idea. We were like, well, what general movement? This is a research project. She said, no, there's something bigger here; this is really changing how we do networking. At that time she was writing about software-defined radio, and she said it's kind of the same thing. So she called it software-defined networking, and that has stuck as the term for this revolution in how we do networking — whether it's network virtualization or bare metal switches with software on top. For me, the key thing is the separation between hardware and software. These two things coming apart gives you as a user the choice to swap them out in any way you want. And this SDN revolution, at this point, I think has succeeded. We're seeing some of the very early companies get pretty big. If you extrapolate past growth linearly, we would expect the first products to hit a billion-dollar annual revenue — or bookings — run rate this year. So these are big changes that are happening here, and these technologies are going mainstream. Until recently, my thinking was, well, that's probably it: the SDN revolution is done, SDN is winning, we're moving networking into software, great. And then I drew this diagram here and thought, there's something not quite right with this. It seems like we're actually not done. There's one logical step that has to follow, which is that the current way we do SDN is really still focused on the classic client-server compute model. It's not yet really optimized for cloud.
So I think there will be a next step. Let me call it, sort of, cloud-mobile networking. And before anybody protests, I'll admit that is the mother of all buzzwords. But let me make an honest attempt to convince you that something is structurally changing again in how we're going to do networking in the future. Let me start with a very concrete, tangible example. When I was making the slides for an earlier draft of this presentation, I was sitting at a Starbucks, working on my laptop — on the slides, as well as on an Amazon demo we were building at the time. Think about what's happening on the network when I'm working there. Whoops, here we go. I did a little network trace, and here it is. Basically, my packets go from my local Starbucks in Menlo Park into the Comcast network — Comcast turns out to be Starbucks' service provider. From there, Comcast apparently peers directly with Amazon, and now we're in the Amazon network. This friendly gentleman here is Bask Iyer, the CIO of VMware — he runs all of our networks. Which of the switches or routers we see in this path are actually under his control? Well, not a single one of them. They're Starbucks' networks, or Comcast's, or Amazon's. But if data leaks out, or if somebody somehow uses my communication there to compromise the VMware systems, he will still get fired for it. That's the unfair world of today. So this is kind of interesting: we now have a CIO who's responsible for networking over systems where he doesn't control a single one of them. He doesn't control any piece of hardware. And that's certainly different from how I had thought about networking in the past.
That, for me, really is the difference: in the future, when we think about networking — for public clouds, for private clouds, for multiple clouds, for mobile devices — we're talking about networking where we don't even control the hardware anymore. We're running on somebody else's hardware. And that makes a big difference in how you run and operate networks. Now, in this brave new world, there are some people who say, look, we probably don't need network-level controls at all anymore. Just assume all traffic is untrusted; you have application-level access control; you authenticate to each and every service you want to use, whether you're a user or an automated process. And with that, all the networking problems go away — everybody can be on the public internet. I think there's some truth to that, in the sense that authenticated connections will become much, much more prevalent in this world, encrypted and run over SSL. That being said, I think it's naive to believe that this idea alone — saying every connection between, say, the web tier and the app tier now has an authentication credential — will automatically lead to a secure architecture. Because imagine somebody hacks into — yeah, here's a little key — imagine somebody hacks into one of your web servers. What can they do? Assume you're running in a cloud or in your own data center. First of all, the first thing they're going to do is turn off every security mechanism you have running inside that instance. Secondly, they can start scanning the network: sniff some traffic, see what's passing around, probe, scan ports.
You're not gonna see any of this, because all your security is now at the application layer, so this will all fly under your radar. If they manage to compromise two servers, those two can now communicate freely and you won't see it anymore, because they'll just completely bypass any application-layer security. But even without that, they can find out what versions of servers or frameworks you have running on the other machines. They can wait — if a new application-layer vulnerability comes out, maybe use that to jump around. The beauty of network-layer security is not that what you can hypothetically do with it is fundamentally different from what you can do at the application layer. The beauty, in my opinion, is that it's a completely separate trust domain. If your application gets compromised, you can rely on the fact that your network-layer security is still the same as before. And that's a big deal, because it means you can have much cleaner security architectures — you know how to think about your systems in scenarios where you have a compromise. And this is not only about blocking connections; it's also about just seeing what's going on. If two web servers in a web tier suddenly have a port 22 connection between them, there's probably something funky going on. That's not supposed to happen in a normal network. Now, there's a second big change in how we're doing network security in this brave new world. Classically, we do network security with middleboxes, with firewalls: you buy a big appliance, you plug it in, you run all your traffic through it. And in a world of large clouds, distributed apps, and a lot of east-west traffic, that doesn't scale anymore. Let me explain what I mean.
The sophisticated attackers we see today are no longer the typical drive-by, smash-and-grab kind of attackers of the past. They typically take their time. They find some way to get through your perimeter firewall and into an initial virtual machine — via a zero-day exploit or by social engineering. We've seen cases where they've physically broken into an office just to get a security key, just in order to infect one server and get across the firewall. Once they're inside, they're pretty smart about it. They lay low, they deploy sleeper payloads in case their primary one gets discovered, they start sniffing, they start probing. Eventually they figure out how to compromise additional servers. And they're typically active for months before they're found. So how can you protect against these types of attackers? You could say, look, we'll just build such a good perimeter firewall that we'll never have a compromised server again. That's probably not gonna work; it's not a sound strategy. A better idea is to stop the attacker from moving around laterally inside the network. The naive approach to that is to say, hey, let's just put firewalls everywhere — let's run every single packet through a firewall. But if you look at the bandwidth required for that in modern data centers, it's just no longer possible. Let me give you some concrete numbers. Let's assume for a second we want to do this with classic physical firewalls. If we put a firewall at every server, that means every packet of east-west traffic runs through a firewall. So if I pick a spine switch, every packet through that spine switch needs to go through a firewall. Let's take one spine switch and figure out how many firewalls we need to firewall all the traffic.
This is a pretty big spine switch: an Arista 7508 has about 23 terabits of capacity. So, quiz: how many firewalls do I need to firewall all the traffic through that switch? Anybody want to guess? No? So I went to the Palo Alto Networks homepage, and this is the biggest firewall — at least until recently — that you could buy from them. It can do about 120 gigabits per second. I think you see the problem. I did the math, and it turns out you need 192 firewalls. Imagine this: you have your one spine switch, and you have racks and racks and racks of firewalls. What really kills me is the 500-kilowatt power consumption. You can either power a small subdivision, or you can firewall the traffic through one of your spine switches. So we're at a point where the amount of east-west traffic in a modern data center makes it completely impossible to do this with classic hardware firewalls. We have to move this up into the software layer. There's just no other choice anymore. It's very simple. Now, one thing that came with these new networks that have to run across multiple domains is the rise of overlays. The idea of an overlay is very, very simple. You have your physical network at the bottom, then you create some kind of abstraction on top — we like to call it a network hypervisor, but you can call it whatever you want — and on top of that you're able to build essentially arbitrary network topologies that you can custom-tailor for each and every application. Depending on what software you use, at the very least you can create the equivalent of a VLAN, a private network. You probably have the equivalent of a router. You may have the equivalent of a firewall, maybe stateless or stateful. You may have a distributed load balancer, additional VPN capability, integration for physical switches.
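Coming back to the firewall math for a second — those numbers are easy to sanity-check. In the sketch below, the spine capacity and firewall throughput are the figures from the talk; the per-appliance power draw is an assumed value, chosen only to be consistent with the roughly 500 kW total mentioned.

```python
import math

# Back-of-the-envelope check of the spine-switch firewall math.
SPINE_CAPACITY_GBPS = 23_000      # ~23 Tb/s spine switch (Arista 7508)
FIREWALL_THROUGHPUT_GBPS = 120    # large appliance firewall
FIREWALL_POWER_WATTS = 2_600      # assumption: ~2.6 kW per appliance

# Round up: a partially loaded firewall is still a whole appliance.
firewalls_needed = math.ceil(SPINE_CAPACITY_GBPS / FIREWALL_THROUGHPUT_GBPS)
total_power_kw = firewalls_needed * FIREWALL_POWER_WATTS / 1000

print(firewalls_needed)  # 192 appliances for a single spine switch
print(total_power_kw)    # ~499 kW -- roughly the 500 kW from the talk
```

The point of the arithmetic is that the answer scales linearly with east-west bandwidth, which is exactly what makes the appliance model break down.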
Which of these services you get depends a little bit on what solution you're using. Now, this idea of having overlays is architecturally actually not very nice, right? You're stacking headers; that seems like a bad idea. But I think it's here to stay. If you look at it, pretty much all the modern solutions in this space use overlays to some degree — whether it's Cisco ACI, or VMware NSX, or pretty much everybody else in the industry. So I don't think they're going to disappear. They have become a fundamental architectural component of a modern network architecture. So, that was a quick run-through of what's changing in networking. Let me talk a little bit about what's changing on the application side, because as our infrastructure changes, applications are adapting to it. I think the biggest observation there is that an application today is a distributed system. The way Martin Casado, a friend of mine, phrased it is that the application has become the network — the application is really driving the network. Imagine you're sitting in the audience and you're tweeting about this presentation. What happens on the backend? The tweet is sent up to Twitter via some kind of REST API. It gets replicated, it gets indexed for search, and it gets backed up. It goes into some massively parallel infrastructure, and your traffic probably runs through firewalls and load balancers along the way. This is a very, very complex process, and today that's pretty much the case with any modern application. If you go to your banking portal or something like it, it's exactly the same thing: it talks to a lot of different backends, both third-party services and components written by your bank. Now, with the architecture of applications changing, we're starting to see the infrastructure that supports these applications change too.
The architecture of choice that seems to be emerging is microservices, together with containers as the infrastructure construct to support those microservices. The basic idea is that instead of having the old-fashioned tiers, I now have microservices. They can all talk to each other — or at least a subset can; there's a matrix of who can talk to whom. They all have a very clearly defined API, typically REST, and I can compose an application out of a set of microservices. If you drill a little deeper into one of these microservices, it usually has some kind of container scale-out group that scales up and down, with built-in redundancy. In front of that there's a load balancer that distributes traffic to those containers, probably with a firewall to make sure you can only reach the parts you should be able to reach, and that somehow connects to the router that runs the network. Often this also sits behind NAT — partly for security reasons, but also because some of the largest customers we have are now running into IP address exhaustion: even on the internal 10/8 network, they don't have IP addresses for all their containers anymore. So NAT is pretty common here. Now, if you put together the internal container architecture plus this overall microservice architecture, you run into a couple of interesting security challenges. Let me point out a very simple one. Assume you're running containers on a farm of container hosts — maybe using Docker, or Kubernetes, or Mesosphere — and you mix and match containers from different apps, just for efficiency: you don't want to have stranded capacity. Currently, the isolation between these containers tends to be fairly weak.
If I'm an attacker and I manage to break into one of the containers, there are a couple of things I can do. In most container deployments today, there's no real network isolation between the different containers on the same host, so I can just hop over to the next container. The other thing I can do: the isolation of containers today is primarily done by the Linux kernel, so if I can find a Linux kernel vulnerability that allows me the right type of privilege escalation, I can actually take over the entire container host. And that's very, very bad, because it allows me to take over all the containers, sniff the network, maybe get access to the BIOS and try to hide there. It gives me a lot of different tools as an attacker that I can try out. Once I've compromised those hosts, I can get to the data in the backend and exfiltrate all of it out of the network. So today, in the enterprise — which is who we're mostly selling to — I would say 100% of the container deployments I know actually run inside of virtual machines. It's not that they use the virtual machines for scheduling; they probably have one virtual machine per server. It's because they want the VM layer underneath as an additional layer of security. And once you have that, you can use that layer to provide network security as well. Put a virtual switch in there — maybe OVS inside your KVM host — put firewalls there, and now if attackers break into your system, they can still compromise an initial container, but when they then try to go on from there — and here we go — they will be blocked by the firewall. More importantly, I can detect that they're trying to do this. I can do things like mapping them to a honeypot in my security backend.
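To make the virtual-switch firewall idea concrete, here's a minimal sketch of the default-deny allowlist at the heart of micro-segmentation. The tier names, ports, and rules are all hypothetical — a real distributed firewall (OVS flow rules, an NSX policy) is far richer than this.

```python
# Per-host policy: the virtual switch under the container host checks
# each new flow against an explicit allowlist before forwarding it.
ALLOWED_FLOWS = {
    ("web", "app", 8080),   # web tier may call the app tier's REST port
    ("app", "db", 5432),    # app tier may reach the database
}

def permit(src_tier: str, dst_tier: str, dst_port: int) -> bool:
    """Default-deny: only explicitly allowed tier-to-tier flows pass."""
    return (src_tier, dst_tier, dst_port) in ALLOWED_FLOWS

# A compromised web container trying to SSH sideways to a neighbor is
# dropped -- and the denied flow attempt itself is a detection signal.
print(permit("web", "app", 8080))  # True
print(permit("web", "web", 22))    # False: lateral movement blocked
```

The second lookup is exactly the "port 22 between two web servers" anomaly from earlier: the block and the alert both come from a trust domain the attacker hasn't touched.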
I can also start running scanning tools on those containers, to understand what's happening there and what's bad. So suddenly, all these tools we developed in the classic data center can now be used for containers, if I have a network virtualization layer underneath. From my observations, I think this will be the future — the typical way we run containers, at least in a typical enterprise. If you're running at hyperscale, different rules apply, but in the enterprise I think this is gonna be the typical model. So what does all this mean for networking? I have my overall microservice architecture. Now my developer deploys a new microservice, and how the network works at the microservice level is probably going to be driven by that developer. They'll figure out: okay, I have one virtual IP or several virtual IPs — how do they map back to the internal containers in my service? What kind of rules do they want to use? What kind of firewall? What ports need to be open? What kind of backend connections can the service make? Let's call that developer networking for a second, or maybe app-level networking. But then there's also the question of how to tie all these microservices together. If you're running, say, a bank — a typical bank today has maybe five to ten thousand applications. Assume that in the future those turn into five to twenty thousand microservices. The developer builds the individual microservice, but then which microservices can talk to each other? We may want to divide them up a little and say: these are the extremely sensitive ones that can only be accessed from inside the organization, and these are the ones that are open to the outside world.
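At its core this is a two-layer policy question, and a toy sketch makes the division of labor clear. The service names, ports, and rules below are made up purely for illustration.

```python
# Layer 1 (developer-owned, app-level): which ports each microservice
# exposes on its virtual IP.
DEV_RULES = {
    "customer-api":  {443},
    "payments-core": {8443},
}

# Layer 2 (central-IT-owned, enterprise-level): which microservices are
# allowed to call which others.
IT_RULES = {
    ("customer-api", "payments-core"),
}

def permit(src: str, dst: str, port: int) -> bool:
    """A flow must pass both layers: the IT matrix and the app's ports."""
    return (src, dst) in IT_RULES and port in DEV_RULES.get(dst, set())

print(permit("customer-api", "payments-core", 8443))  # True
print(permit("payments-core", "customer-api", 443))   # False: IT layer denies it
```

The developer edits only `DEV_RULES` for their own service; the IT department edits only `IT_RULES` — two teams, two levels of the same network policy.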
That's a task that's probably still going to be up to the central IT department, which somehow has to coordinate all these things. So I think we're going to have two levels of networking: developer networking — call it the application network — and then the IT department creating what we might call the inter-app network, or the enterprise-level network. And just to make it a bit more interesting, we're now seeing customers that have a third level of networking. So we have the app-level network from the developer, we have the enterprise network from the enterprise networking team that ties together the microservices or apps, and then we have the hardware team that runs the network underneath. Now, I have not actually seen a customer that runs an overlay over an overlay over an overlay, but I've seen the combinations of the upper two and of the lower two, so I'm expecting all three any time now. That would be, for example, Kubernetes with its own networking, on top of NSX, on top of ACI. I haven't seen it yet — if you see one, please let me know — but I predict it will happen pretty soon. And at the end of the day, having three levels of encapsulation on the network is probably a really bad thing. We don't want that, and hopefully we'll get to architectures where all three can be driven through one control plane. But what is here to stay is the idea that different parts of your organization want to configure and manage different layers of your network. And that will make our life, in terms of running networks, more complicated. So what does all of this mean for OpenStack? If you talk to a modern enterprise today, what I'm hearing a lot is that they're looking at VMware or at OpenStack to run their on-premise data centers — but there's also this thing called public clouds.
The Amazons and the Azures of the world. With either an OpenStack or a VMware hat on, I'm not entirely happy about workloads drifting away there, but I think it's a fact of life. It's going to be there, customers are doing it, and it's going to become very, very common. I talked to one customer in Europe, and they basically said: look, one business unit built an application on Amazon — it works really great, it's here to stay. Another one built on Azure, because they wanted the Office 365 integration. We built other things on IBM SoftLayer. And then, I believe, they have both VMware and OpenStack in the enterprise data center. Okay. And now they have a security audit coming up, and one of the checkboxes is: show a coherent firewalling policy across your entire IT infrastructure. They're like, what does this even mean? We have completely different silos, where basically the security configuration is different, the development teams are different, and they're very, very different in terms of how you operate them. How do we show that something is coherent? So I think what these companies are looking for right now — and where there's a big need — is basically any kind of infrastructure that allows them to manage and control the basic aspects of a data center across these different silos. Very specifically on the networking side: something that allows them to build a network that's not confined to one particular cloud, but lets them manage networking in Amazon versus Azure versus OpenStack versus VMware in a coherent way.
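One way to picture such a layer — purely as a sketch, with made-up adapter functions and simplified output rather than any real cloud API payloads — is a single abstract rule rendered into each platform's native form:

```python
# One abstract firewall rule, defined once, rendered per cloud.
RULE = {"name": "web-to-app", "src": "web", "dst": "app",
        "port": 8080, "action": "allow"}

def to_aws(rule):
    # Stand-in for an EC2 security group ingress rule (simplified).
    return {"GroupName": rule["name"],
            "IpPermissions": [{"IpProtocol": "tcp",
                               "FromPort": rule["port"],
                               "ToPort": rule["port"]}]}

def to_openstack(rule):
    # Stand-in for a Neutron security group rule (simplified).
    return {"security_group_rule": {"direction": "ingress",
                                    "protocol": "tcp",
                                    "port_range_min": rule["port"],
                                    "port_range_max": rule["port"]}}

ADAPTERS = {"aws": to_aws, "openstack": to_openstack}
rendered = {cloud: adapt(RULE) for cloud, adapt in ADAPTERS.items()}
print(sorted(rendered))  # ['aws', 'openstack']
```

The auditor's "coherent firewalling policy" then lives in the abstract rule; each silo only sees its own translation of it.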
And like any abstraction layer, it's never going to support all the bells and whistles that each of these clouds offers, but at least it's going to be a common denominator that lets me treat them in a more homogeneous way. Now, everything I've said so far is really about the data center part of networking. The part I'm not going to talk much about here is the mobile part. Classically, your users came in via a plug that was on your campus network; those days are over. Today they come in with a laptop on wireless — maybe not even your wireless — so they're basically coming in as untrusted. So part of the task, in order to run a network, is to take all these connections that come in, tease them apart, and map them to the right networks internally — at least if you want to use networks in any way, shape, or form for access control. For example: here's a mobile device — if the user is part of the sysadmin group, then he's allowed to make SSH connections to certain backend systems; if he's in the call center group, then he's not. And in reality, we now have customers at VMware where this has a lot more granularity: you have, for example, a VDI desktop, and there's a huge list of Active Directory groups you can be a member of, and based on that you get very different firewall rules for who you can talk to on the backend. Figuring out how to manage this complexity, I think, will be a big part of networking in the future. So let me talk briefly about what we are doing at VMware. We have a product called NSX — basically a network virtualization layer. It works with KVM, it works with VMware vSphere, and we have Hyper-V on our roadmap. It also works with containers.
So if you want to plug this under Docker or Pivotal Cloud Foundry or something like that, that's fine as well. We announced it's going to work with public clouds, so we'll support those as endpoints. We have integration with virtual desktops — where you can use the Active Directory example from earlier — and with your mobile devices. And we're working on other things such as the internet of things, and integrating branch offices or outlets for retailers. Where we really see this going is that, in the future, running networks and running IT will mean you have all these different types of endpoints, and you're trying to understand and manage traffic across all of them. Any solution that's just part of one particular cloud is only going to be one piece; for your overall network solution, you need adapters into it. By itself, it's just no longer sufficient to solve the problems that a network organization in a large enterprise typically has today. And for all of these, I think it'll be very important to figure out how to build a solution that gives you a high degree of automation, because the only way you can be efficient in a modern IT organization is by automating everything. You want a DevOps mentality for deploying these things, so as much as possible should be API-driven, or at least tightly integrated with the UI. And you also want to worry about security. In part, this means building security based on segmentation into your network infrastructure. In part, it means having certain basic infrastructure services — such as firewalling, visibility, and tapping — built right into your network platform. But a lot of it will actually be about inserting services from third parties.
So another thing we're doing with NSX is giving you the ability to, say, take a Palo Alto Networks firewall and plug it somewhere in between, right? Somebody wants to take a mobile device and talk to Amazon; you can now insert a Palo Alto Networks firewall in the middle. The platform of the future will need that capability, because that's really part of managing this overall equation. All right, that's actually all I had here, and I wanted to open it up for questions, if there are questions. And there's two microphones, so please use those if you have anything. Yes, please. So you talked about the overlay and the underlay. The flexibility of the overlay is obvious, but the complexity is times two, right? So do you see any hope that in the foreseeable future the underlay and the overlay network can be optimized and controlled by a single control plane? Or do you not see this as a legitimate use case for the typical enterprise, because you can just have a very fat underlay and be done with it? So look, an overlay over an overlay over an overlay, right? That makes no sense to me whatsoever. I mean, it makes sense to me in terms of having three administrative domains that influence network policy, but hopefully on the wire we're gonna have one header, or worst case two, but certainly not three, right? I think we're starting to see solutions. For example, with NSX, you can plug that into your container solution, where your application developer uses one set of APIs to build networks and your IT folks use another set of APIs, but at the end of the day, how it's actually implemented and enforced all comes from one platform, right? Or we have some integrations, for example, with some of the hardware folks, you know?
I mean, if you look at the capabilities of Big Switch, right, and you look at the capabilities of NSX, it's very natural to put them together, do a little bit of UI integration, so that if the Big Switch guys run the fabric, right, and we run the overlay network, then, for example, if you're trying to debug something, we give you one coherent view that shows you everything, right? So I think, I'm with you there. I think we need to get to a point where this is more integrated. It's gonna take a little bit of time, but we'll get there. Thanks. Yes. Good afternoon, Guido. Scott Fulton with The New Stack. I'm sorry, who are you with? The New Stack. Oh, The New Stack. Oh, yes, of course. Throughout this conference, we've seen NFV, network functions virtualization, as a major emerging theme, and there are two prevalent points of view on this. One, which comes from AT&T, is that NFV will help refine the way all of OpenStack does networking, and that over time, we will see the benefits of NFV trickle down throughout the enterprise. Depending on whom you ask at AT&T, it could be two years, it could be four years, but it will happen; they see this as an inevitability. The other view is that NFV is a completely different way of networking, and of orchestrating workloads in that network, than you would have in an enterprise, because the needs of orchestrating traffic are different and scale differently than the needs of orchestrating resources. Does VMware have a viewpoint on this? Do you have a viewpoint on this? Okay, two different questions, I guess, but let me see. In terms of what I'm seeing coming as requirements from the carrier community versus the enterprise community, I would say at the moment, I see more of a divergence than a convergence. Does that make sense, right? This doesn't mean you can't build one solution that addresses both, but it means you need a somewhat different feature set for the two things, right?
The second thing is that I think in the enterprise, my impression is, at least certain types of network virtualization have become mainstream, right? On the NFV side, I think we're a little early in the process, right? Some of the architectures that I see make sense to me; some of the architectures that are out there, I still haven't quite reconciled how they're gonna work in reality, right? And so if I have to guess, I would expect them to be somewhat different. Very good. Thanks, sir. Any more questions? Ah, here we go. I was just wondering if you had a view on how NDN, CCN, those technologies will drive SDN evolution, like named data networking, content-centric networking. Yeah, so I mean, if I understand correctly, you're asking about this idea to say, let's embed richer information into packet headers that describes what this traffic is about, and then sort of make network forwarding elements more aware of how to deal with data based on a much richer set of parameters. Is that an accurate description? Right. Yeah, okay. So it's a great question. I'm personally a skeptic, and let me explain why, right? When networking was fully distributed, meaning when every switch was operating all by itself with basically no central management, operations, or control, we had to embed lots of information into the protocols, right? Just in order to carry state between them, we needed to synchronize state with things like BGP, but we also needed to put in information like QoS bits to say, okay, this type of traffic, you know, handle it in a certain way as it travels down the network. And, you know, this actually created a lot of challenges, because we need to do this on all switches at the same time, otherwise we don't get a consistent solution, right? And so that's why I think classic networking often starts with the idea that this is really about designing protocols first, right?
And then I think about what's the application state model behind it. Now, the way SDN is typically implemented is with at least a central management plane, and in many cases a centralized control plane, right? So the moment you have a centralized control plane, where basically you have one server that has perfect visibility into everything that's happening in your network, these things become different, right? Because now I know that this particular flow should have a certain quality of service. Instead of carrying the properties of the flow in the packet header for every hop, I can just push this out centrally and tell every switch, look, this traffic is coming, right? This is how you can identify it, and this is how you should handle it, right? So we no longer need the data plane to exchange this information; we can now do it via the control plane, right? And I'm probably a little biased here from our OpenStack history, but I think that is the right way of doing things in the future, right? You still probably want protocols that are richer, that have richer headers, at the handoff points, right? So basically, if here's an OpenStack cloud I'm running with some kind of network virtualization overlay, I have an edge somewhere where this may go over into the hardware world, right? At that edge, these protocols make sense to me. Inside, you know, we can do these things directly through the control plane. I think it's architecturally a much better solution. Can I ask a follow-up? Yeah. So the centralized control plane scales to an enterprise, but can it scale to the internet, and do you think NDN and CCN can help achieve that scale? So, I think no. Today, look, scaling control planes for networking is unbelievably hard, right? I mean, it took us three tries at Big Switch to get this right.
It took several tries in the server world before they got it right, too. And it's really one of the hardest engineering problems I've seen. You know, most SDN systems I've seen are scaling-limited by their control plane, right? You try to scale as high as you can, but at some point, things start breaking. So, you know, can you do something of internet size today? Absolutely not, right? I mean, maybe management, but definitely not control, right? And is this the right way of doing things? It would be a pretty big failure domain, right? I mean, I would feel very uncomfortable with that. I mean, even large customers deploying, for example, NSX today typically shard the system, right? Saying, like, look, I'm doing 10,000 virtual machines at a time or so, and then I'm going to create something self-contained, fairly isolated. Beyond that, it just doesn't make sense to scale anymore, even if in total you're running over 100,000. So I think the better approach is to federate between different centralized control planes, right? And today, we're doing that with things like BGP or OSPF. I think if you would take a smart database PhD and give them the problem of saying, we have one router and another router and they want to exchange routing databases, today, knowing everything we've learned since BGP was invented, we would probably, hopefully, develop something much better than BGP, right? Something that actually synchronizes and doesn't just blast out updates. So I think there's room there for new types of protocols, right? And there, these protocols could be interesting. But no, I don't think we'll ever have a centralized United Nations control plane for all of the internet. Thank you. Good. No more questions? Well, then, thank you very much. And if anyone wants to talk privately, I'll be around after this. Thanks.
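The control-plane argument from the NDN/CCN answer above, that a controller with global visibility can push per-flow handling rules to every switch instead of encoding QoS into packet headers at each hop, can be made concrete with a toy model. This is a rough OpenFlow-style sketch with entirely hypothetical class and field names, not any real controller API:

```python
# Toy model of a centralized control plane pushing flow rules.
# Instead of marking QoS in every packet header, the controller
# tells each switch up front how to recognize and handle a flow.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []  # list of (match, action); first match wins

    def install(self, match, action):
        self.flow_table.append((match, action))

    def handle(self, packet):
        # Match the packet against installed rules; no QoS bits needed
        # in the packet itself.
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "default-queue"

class Controller:
    """Has global visibility; programs every switch consistently."""
    def __init__(self, switches):
        self.switches = switches

    def push_qos(self, match, action):
        for sw in self.switches:
            sw.install(match, action)

switches = [Switch("s1"), Switch("s2")]
ctl = Controller(switches)
# One central decision: streaming traffic to 10.0.0.5 gets priority.
ctl.push_qos({"dst": "10.0.0.5", "dport": 554}, "priority-queue")

print(switches[0].handle({"dst": "10.0.0.5", "dport": 554}))  # priority-queue
print(switches[1].handle({"dst": "10.0.0.9", "dport": 80}))   # default-queue
```

The design choice mirrors the point made in the talk: the data plane no longer carries the policy; the control plane distributes it once, consistently, to every element it manages.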