Hello, everybody. How are you guys doing? All right, you guys having a good conference? All right, you guys need to wake up a little bit, you know, it's getting late in the day. But look, my name is Nnamdi Orakwue, Vice President of Cloud for Dell, and I'm really excited to be here today and introduce an awesome panel. Dell is obviously an avid and passionate supporter of OpenStack. We're really excited to sponsor a lot of the innovation, and we're doing a lot of work around OpenStack. Here today, we're going to talk a little bit about OpenStack and SDN. We've got a great panel discussion for you. We're going to talk a little bit about open standards, about the work that we're doing with Force10, and the work that we're doing to add into, you know, Grizzly and SDN. I think what I'll do now is just turn it over to Mr. Joseph George, the leader of our private cloud effort, and we'll go from there. Thank you, Nnamdi. So hello, everybody. If you were a regular at the OpenStack Design Summit slash conference back when it was considerably smaller, you'd remember me. I'm the product manager that brought the Dell OpenStack solution to market, as well as the Crowbar solution on the product management side. And I'm also on the board of directors for the OpenStack Foundation, representing Dell. But if you've been here long enough, there's one characteristic of mine that everybody remembers, and it's my jacket. I'm the guy that's been wearing the jacket since 2010. That's who I am. So it's my privilege to moderate this panel here today. We're going to talk about software-defined networking. I know that there have been a number of sessions this week focused on software-defined networking and what we're doing in the Quantum project as a community. So today I think what we want to do is focus on some of the new and emerging trends that are happening with SDN, how it fits in with OpenStack, and where it's going next.
There's been a lot of big news going on, and a lot of traction in this space. Our objective here is to touch on software-defined networking so that we can get a basic understanding of what we're trying to accomplish with it, and then also give you an opportunity to interact with our panelists today. There are a number of esteemed colleagues here who will introduce themselves in just a second. We will reserve some time at the end, so if you have questions, we've got mics there in the middle of the room; be prepared with some questions. The first person is always going to be the key, right? So have a question ready. And, you know, somebody just decide, I'm going to be the first one to ask a question. Okay, let's get a line going there. A lot of great stuff has been going on around Project Daylight that was recently announced. I'm sorry, Open Daylight. It is a project. But you're right. It's Open Daylight. And OpenFlow is another topic that's of interest to us as well. So without further ado, I think first I'm going to have all of my colleagues here introduce themselves. Let's start with Sam. Hi, my name is Sam Greenblatt. I am the chief architect and chief technologist for our enterprise solutions group, and proud to be associated with OpenStack. And nice shirt, by the way, Sam. That's a great shirt. Wearing a purple bunny around Portland has been a very unique experience. Just so everybody knows, somebody actually complimented Sam and said, nice bunny. And he was not sure what to do with that. All right, Mike. Hi, I'm Mike Cohen, director of strategic alliances at Big Switch Networks. We're a software-defined networking vendor, and we build a network virtualization solution that plugs into Quantum. Hi, my name is Dan Mihai Dumitriu. I'm the co-founder and CEO of Midokura. And we are also an SDN vendor, with an overlay-based network virtualization solution, also available via OpenStack Quantum. Thank you.
Okay, great. Let's do it this way. Let's just start with some very basics, and we'll start with you, Dan. Let's talk about, at its most basic, what is software-defined networking? Rather than the academic definition, let's maybe bring it down a little bit. How would you define SDN? It's a good question, because I think this term emerged well after we started building our solution. I think SDN is all about programmability at multiple layers in the network. So for instance, we're focused entirely on the infrastructure-as-a-service use case, and we use an overlay-based approach there. So for us, software-defined networking is virtual switching, overlays, controlling tunnels, rewriting packets, and such. But in the meantime, of course, there are other things going on, like being able to control various aspects of the hardware switches and routers, like configuration from a centralized place, or controlling the routing tables, the forwarding tables, with OpenFlow perhaps. It's lots of things, but all about programmability, basically. So maybe I'll start with a slightly more academic definition, which is the idea of separating the control plane and the data plane. Networking traditionally worked by selling you a box where the control logic is embedded directly on it. It speaks to some kind of ASIC, it does what it does, and when you're done with it, you throw it away. There's no concept of programmability or really changing the control software on it. They're deeply embedded and wedded together. So SDN threw that notion away and basically said, let's separate out the control plane. We could run that somewhere else. Maybe it's on that box in a distributed manner, maybe it's running centralized somewhere, but you actually have a separate controller, and then your data plane is essentially a hardware component.
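The control-plane/data-plane split just described can be sketched in a few lines. This is a toy model only, not any vendor's API: the switch holds nothing but a match-to-port table, and all decision logic lives in a separate controller object.

```python
# Toy model of control/data-plane separation: the switch forwards purely
# by table lookup, while a separate controller computes and installs flows.

class DataPlaneSwitch:
    """A 'dumb' forwarding element: no control logic lives on the box."""

    def __init__(self):
        self.flow_table = {}  # (src_ip, dst_ip) -> output port

    def install_flow(self, match, out_port):
        self.flow_table[match] = out_port

    def forward(self, packet):
        # A real switch would punt unknown flows up to the controller.
        return self.flow_table.get((packet["src"], packet["dst"]), "PUNT")


class Controller:
    """Centralized control plane: decides paths and programs the switches."""

    def __init__(self, switches):
        self.switches = switches

    def program_path(self, src, dst, out_ports):
        # out_ports[i] is the egress port on switch i for this flow.
        for switch, port in zip(self.switches, out_ports):
            switch.install_flow((src, dst), port)


s1, s2 = DataPlaneSwitch(), DataPlaneSwitch()
controller = Controller([s1, s2])
controller.program_path("10.0.0.1", "10.0.0.2", out_ports=[2, 7])

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2"}
print(s1.forward(pkt))  # 2
print(s2.forward(pkt))  # 7
```

Changing how traffic flows is now a software call against the controller; the boxes themselves never need new firmware, which is the programmability point the panelists keep returning to.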
This actually opened up the opportunity to create some of the APIs that Dan was alluding to and make a programmable system where you can change the software independent of the hardware, and actually change the way the hardware functions. Very similar to what's been going on in the server world for years and years. Well, the interesting thing is SDN has become the cloud word for this year. Everybody has a different definition. Everybody, depending upon which way they're headed, wants to define it differently. But what's important to us, exactly what everybody's talking about, is the programmatic capability to change the flow and the destination of the network, to change the configuration within context, because we believe the next big wave of computing is going to be total disaggregation. And that means you're not going to recognize the difference between a network and a channel. You're not going to recognize the difference between 10 gigabit Ethernet and 40 gigabit and 100 gigabit. And we believe the programmatic capability is the only way to harness the true power that's coming into the network as we see it. Okay, great. Yeah, that's very good. And obviously we are at an OpenStack Summit here, and Sam, I'll pose this question to you. We've had Quantum as a project within the OpenStack community. Yeah, right. OpenStack Networking, right. What evolutions are happening in SDN, and how does that impact how we should look at Quantum and networking? Well, Quantum was a major project, is a major project, I said was because it's been renamed, but it is a major project for us in the abstraction of the cloud being able to talk to the network. And we think that's important. And we think OpenFlow is important. We love protocols. We think there's going to be much more to come in the future. And we also believe that the whole concept of SDN is not just the flow.
It is really the northbound and southbound APIs, what's going to come out of the switch into the cloud. And there's no better group to show innovation on how we're going to use those northbound and southbound APIs than the OpenStack Foundation. Yeah, I think we're going to have a lot of agreement here, at least on this question, that Quantum has been an excellent starting point for us. OpenStack is probably one of the ideal use cases for SDN because it creates an environment that encourages change. People spin up virtual machines, they turn them off, they migrate them around. You get a highly dynamic environment, which is actually where SDNs really shine, because you actually want to change your network on the fly. And Quantum basically started providing a simple set of stable API abstractions that let vendors come and confidently build something that can connect an SDN to an OpenStack cloud. Short of having those, we would have all been building slightly different custom integrations, and every time OpenStack changed in different ways, we'd probably all be changing our integration points. And Quantum really stabilized the first piece of a northbound API in SDN, which has been this hugely debated topic where no one can really agree. And the one thing everyone actually agrees on is that the Quantum abstractions are pretty good. And if anything survives of a northbound API, those will probably be the first pieces that people gravitate to, because they're working pretty well and we have working implementations by a large number of companies at this point. I'm just going to repeat what I said at the panel on Monday, but Quantum is doing great at providing a stable API that's facing the tenants, things like virtual networks, and now routers, security groups, ACLs maybe at some point, and so on. But it doesn't do much for the operator so far.
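The tenant-facing abstractions the panel credits Quantum with (networks, subnets, routers) can be sketched as a toy object model. The class and attribute names below are illustrative only; the real interface is the Quantum (now Neutron) REST API, not these objects.

```python
# Toy model of Quantum's tenant-facing abstractions: networks, subnets,
# and a router that connects networks together. Illustrative names only.
import itertools

_id_gen = itertools.count(1)

class VirtualNetwork:
    def __init__(self, name):
        self.id = next(_id_gen)
        self.name = name
        self.subnets = []          # CIDR strings attached to this network

    def add_subnet(self, cidr):
        self.subnets.append(cidr)

class VirtualRouter:
    def __init__(self, name):
        self.name = name
        self.interfaces = set()    # network ids this router is plugged into

    def add_interface(self, network):
        self.interfaces.add(network.id)

    def connects(self, net_a, net_b):
        return {net_a.id, net_b.id} <= self.interfaces

# A tenant wires up a two-tier topology entirely through API objects,
# never touching physical switch configuration.
web = VirtualNetwork("web-tier")
web.add_subnet("10.10.1.0/24")
db = VirtualNetwork("db-tier")
db.add_subnet("10.10.2.0/24")

router = VirtualRouter("tenant-router")
router.add_interface(web)
router.add_interface(db)
print(router.connects(web, db))  # True
```

Because every vendor plugs in beneath this same small set of nouns, the plugin can be swapped without the tenant's topology description changing, which is the stability the panelists are praising.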
And I think that's the next step that we have to work on. Okay, very good. Now, we're going to delve into where things are going momentarily, but first I'm going to give each of you a few minutes. I'll start with you, Mike, because I know you were probably thinking you'd get the short end of the stick here. And obviously we don't want to make this a commercial for products, but I do want to talk about how implementing SDN actually looks today. With the offerings and solutions you all have today, what does it actually look like in a customer's environment? Talk a little bit about what you offer, and then we'll start talking about where we see this going. So, Mike, why don't we start with you? Sure. So I can definitely talk for a second about that. We offer an SDN controller and a product called Big Virtual Switch, which is a network virtualization application. That application can actually run in two modes. It runs in a pure overlay mode, where basically you run an OpenFlow-enabled vswitch on the hypervisor; that may be Open vSwitch, which is what's typically used, and we have some modifications and enhancements to it. And then you use some kind of tunneling protocol between the different vswitches in your network over an IP fabric. We can run entirely in that mode in a non-invasive manner, and that's actually where many of our customers begin with network virtualization. We also work directly with physical OpenFlow switches, at which point we can actually integrate without tunneling protocols.
So we run OpenFlow through a larger portion of the network, and then we can do interesting things like managing both the physical network and the hypervisor edge together and giving the customer a unified view of their networking. You end up with a lot of interesting capabilities, like a richer set of debugging features, a single management domain, access to a broad number of OpenFlow-based switches, and vendor independence as a result. So I think this approach, where we actually integrate the physical and the virtual, is really one of the things that sets Big Virtual Switch apart. Dan? Our product is a pure overlay solution. So we actually don't do anything with the physical switches. We believe that the physical network should be an IP fabric that is built using the traditional tried-and-true protocols like OSPF or iBGP. Now, that's not to say there's no room for improvement there. There's lots of room for improvement, especially around centralizing the configuration management of such a fabric. Some vendors actually have solutions for their own product lines; I believe Dell has the Dell Fabric Manager. But let's say there's no multi-vendor solution for that yet. Personally, I don't really think that solving the central configuration management of the fabric is best achieved through an OpenFlow controller. I agree with what Mike said, that control and data plane separation is important. But to a large extent, it's already there. It's just that in a lot of products, that separation is not exposed and can't be leveraged. So I don't really think that centralizing that aspect of the control plane is the best idea, because to a certain extent it can lead to a loss of resilience. That's why we really advocate just using an IP fabric for the physical connectivity of the servers, the hypervisor hosts, and then doing the overlay solution for the virtual networking. Sam, why don't you talk a little bit about what you're doing.
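Both panelists' overlay mode rests on the same mechanism: the vswitch on each hypervisor wraps tenant frames in a tunnel header carrying a virtual network identifier and ships them between host addresses over the IP fabric. Here is a minimal sketch of that idea, with invented field names and no real VXLAN or GRE header layout:

```python
# Toy overlay tunneling: a tenant frame is encapsulated with a virtual
# network identifier (VNI) and tunnel endpoint (VTEP) addresses, then
# only delivered to endpoints on the same virtual network (isolation).

def encapsulate(inner_frame, vni, src_vtep, dst_vtep):
    """Wrap a tenant frame for transport across the physical IP fabric."""
    return {"outer_src": src_vtep, "outer_dst": dst_vtep,
            "vni": vni, "payload": inner_frame}

def decapsulate(packet, local_vni):
    """Unwrap only if the VNI matches; cross-tenant traffic is dropped."""
    if packet["vni"] != local_vni:
        return None
    return packet["payload"]

frame = {"dst_mac": "aa:bb:cc:00:00:02", "data": b"hello"}
tunneled = encapsulate(frame, vni=5001,
                       src_vtep="192.168.0.11", dst_vtep="192.168.0.12")

print(decapsulate(tunneled, local_vni=5001) == frame)  # True: same tenant
print(decapsulate(tunneled, local_vni=7002))           # None: isolated
```

The physical fabric only ever sees the outer host addresses, which is why Dan can treat it as a plain IP network built with OSPF or iBGP while all tenant topology lives in the overlay.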
We see SDN as the first step in a very long journey. As I mentioned before, the word is disaggregation. And I mean that from the standpoint that most people look at it as everything, including internal. Wow. We said something wrong there, Sam. Okay, well. That's a cue calling. Well. Nice ringtone. Okay, thank you. And he shut it off and he's going to keep going. I can't believe it went off. I thought it was on vibrate. So what I was going to say was, the fabric is not just going to be the network. We believe that the fabric is going to be internal to the processor. We believe it's going to be internal to the data center. We believe that the fabric is going to be the key staple that everybody is going to consume in bits of computing. And I'm not going to go into my favorite Amdahl's law and my favorite Moore's law in discussing it. But what's important is that the programmatic nature of the network, because it's going to be so pervasive, is critical. And configuration literally has to be context-aware of what's going on in the network, to be able to not only do multi-level switching but also handle the flow beyond what OpenFlow does and everything else. By the way, if you notice, I don't even mention L2 and L3 anymore. Nobody seems to want to mention it, but you've got to coexist with it. And you can do that in software also. And we're doing it. We believe that we have to have an answer for the entire fabric. And whether you call it SDN or software-defined data center, everybody loves to put acronyms and names on things, we believe that you're going to see a lot more go into the cloud. And it's not going to look like what we think it is today. And I know my friend Rob Hirschfeld did a panel on OpenStack in 2025, when we come out with the Zebra release. But the bottom line, not to take too much time: we believe that SDN is a first step.
Breaking apart the control plane from the data plane and making the flows work right is only a step until you get to the meatier stuff, which is going to be the internal fabric. All right, great. Now, last week, the Linux Foundation announced the Open Daylight Project to drive transparency and standards in networking, and in software-defined networking specifically. So, Sam, if you wouldn't mind, I know Sam was instrumental in some of this stuff, so maybe you could talk a little about it. And then, Mike and Dan, I welcome your comments as well. Well, my wife said to me one night, I don't have to worry about another woman; you obviously have the Linux Foundation. And we were on the phone with about 31 different companies talking about it. As you know with the OpenStack Foundation, the biggest problem in open source is when companies are joining for the first time and do not understand the concept of meritocracy, the concept of contribution, and the concept that you've got to put skin in the game. And not for your own good, but for the good of moving the ball forward for the community. And during those conversations, when it was originally Daylight, there were some companies trying to drive it. And in the end, it went to Jim Zemlin and the Linux Foundation with the guarantee that this will be run as an open source project. And you saw the membership levels and all that. And that was really interesting. But any member can contribute to Open Daylight. Open Daylight is a true open source project. Now, the key is, as Eric Raymond used to say, can the cathedral meet the bazaar and actually do something good for everyone. We hope that Open Daylight is a true open source project. That's why we joined. But on the side, to make sure everybody was working well together, we encouraged 26 companies to go to a standards body like the Object Management Group to basically set standards for SDN so that it can go to the ISO level and international.
Because for those of you who have been around a long time, we remember phones when they were managed by different phone companies. You want one standard. I know, I'm too old. I don't think I remember that at all, Sam. It's okay, Joseph. I'm going to retire after this panel. So, the bottom line is, we want to make sure that it's an open source project and there's a standard it has to adhere to. And by the way, just one quick thing. At ONS today, we demonstrated our switches talking to Big Switch. So we're pretty proud of that. No, I can definitely talk about this too. I spent more than my fair share of time as this Open Daylight project was coming together. Big Switch has been involved with open source SDN almost from the very beginning. I was the original product manager for the Floodlight project, which launched about a year and a half ago and actually has a genuine open source community around an SDN controller that does OpenFlow. We have a vibrant mailing list, and people download the project and they use it. It's been incorporated in a number of places, and we have had partners adopt it. So it's actually been working well for some time. And we're very excited to see Daylight come together, to see a number of companies embrace open source this way and actually want to build together around the standards for SDN. And we're obviously contributing code around what's in Floodlight. We're actually contributing a large amount of other controller code and network virtualization code from our products to the Daylight project. And we're very optimistic that it's going to become a great project the way OpenStack has, where it actually builds a community and you really see a meritocratic community development process. Now, obviously Daylight is at very, very early stages. It began a little bit differently than OpenStack, in that all the companies joined before any of the code existed.
And that creates a lot more inherent tension than with OpenStack, where it was clear who the tech leads were because they were already working on the project. We don't have that in Open Daylight, so there's a lot more to be sorted out. And we'll just have to see how it develops over the coming weeks and months. Hopefully the project will settle down and gain steam, and we're optimistic that it will. Dan? So I'm going to look like the black sheep here, because our product is not open source. Boom, yeah. Thank you. Appreciate that. So, all right. To be honest, when we started building this product, we were looking to solve a problem, and that problem is how to achieve multi-tenant networking and isolation in this type of cloud environment. And we built the whole thing ourselves from scratch. At the time there were no standards for us to leverage, so we had to build the whole thing end to end. And we believe that we're still innovating just fine with that approach so far. I think standards are important when multiple vendors' products need to talk to each other. Naturally, if there were no standard for BGP, the internet would not function. So that makes a lot of sense. The question is, does there need to be a standard protocol between a software virtual switch and a software virtual network controller? I don't know. It depends on how the whole thing is architected. Maybe in some cases it does, maybe in some cases it doesn't. The question that I would ask is, for example, for OpenStack, does the Nova API talking to Nova Compute need a standardized protocol? It's all part of the same project. I think that protocol probably changes every other day, you know, the way this community operates quickly. So for controller-to-controller or product-to-product interoperability, I think there is work going on in the IETF right now.
Somebody told me about the NVO3 group doing some work with MP-BGP to standardize, let's say, communication from one entire virtual network domain to another domain. And I think that has a lot of value. There may be a lot of value that comes out of Open Daylight as well. And if there is, I would leverage that. At the moment we're going to wait and see which direction it goes. Thank you. Okay, very good. So obviously a variety of opinions. When something's new, this is what happens. This is great conversation. If you would like to learn more about it, you can go check out opendaylight.org. Now, I'd also like to say we've probably got about 15 minutes left in this session. If you do have a question, go ahead and line up. I am going to ask the panel one more question, but if you have a question, go ahead and line up at the mic and then we'll take your questions there. So, Sam touched on this a little bit: where are we going? What does the next two to three years of the evolution of this space look like? Dan, maybe I'll start with you and we'll work our way this way. I'm getting so tired. I'm not going to be able to answer this. I'm picking on you now. I noticed. I noticed you yawning. Nobody yawns on my panel, Dan. All right. I'm sorry. I apologize. Just kidding. I think what we're seeing from the customers that we're talking to right now is that first they're looking to solve a couple of specific problems, like scaling, and they want to get rid of VLANs. They want to build their networks without using large bridged Ethernet domains. And that's what their initial motivation for going with virtual networking is all about. But as they incorporate more services, they're not going to be satisfied with just the layer 2 segments or virtual routing. They want the whole thing. They want the full Amazon VPC experience, with the scalable load balancers and all kinds of other services that they can insert.
And so some of those things will be built by the specific vendors like us, and some things will need to integrate with third parties. I think another thing I've been seeing is that we've had a lot of inbound requests from carriers, from telcos. They haven't been super active, I think, in deploying things. Some have, but they're looking to leverage their existing assets, their very large global networks and lots of data centers, to provide some extra value. And that's why they're looking at deploying things like OpenStack. But since their scale is so very large, they want to create lots of clouds and federate them, very similar to what Rackspace was talking about at the keynote yesterday. And so maybe one of the things that'll be interesting is integrating MPLS into the whole mix for the carrier space. Yeah, I think Dan touched on a number of good points. I'll echo some of them and add to them. We're definitely hearing strong interest in the inter-data center story. Network virtualization at data center scale is a solution we have, Midokura has, and enough vendors have something like it now that, while it's not a solved problem, it's actually coming online in a serious way. And now people are saying, well, how do I run that across multiple data centers? And can you do anything about my inter-data center connectivity and security? That's definitely something we hear a lot of requests for, and it's definitely something that's coming. The next one is around network services. OpenStack and Quantum have been leading some of the way there with load-balancer-as-a-service and firewall-as-a-service. But basically, how do I create abstractions around these network services, insert them into my virtualized networks, and define connectivity between these different services, in terms that I can understand, in a secure and predictable manner? That's also really important.
And the last one we hear a lot of requests for, partially because of where we sit with our products, is from people that actually want to integrate the physical and virtual domains. They may be starting out with overlay products, and there are some challenges going back and forth between server and networking teams, and they actually want to integrate those technologies and provide an end-to-end solution that can be managed in a central location across the physical and virtual domains. And that's stuff we're bringing online as well. Well, I kind of touched on it before, and I agree with a lot of what people are saying. But I'm going to question one thing. The change is coming a lot faster than we think it is. If somebody had told you three years ago that we were going to be at 100 gigabit Ethernet, you would have said he was crazy. We think that's only the lower limit of how fast we can go. And IPv6, by the way, which most people like to think of as the dirty secret, is going to create a volume of traffic like we've never seen. Gartner likes to use the phrase the Internet of Things. So if you're thinking about a private network, yes, you're right, you may be able to contain traffic. But as you go into gateways, as you go to different people, as you get the sensors out there and all the cell phones and all the mobility, we're going to see a whole new world in the next three to five years. And we'd better start planning for it today. And we're all thinking about it in SDN. And this is just the start of a much faster future and a much longer journey. Great. Thanks. So at this time, I'll open the floor for any questions you might have for our panelists. You can just raise your hand if you don't want to come up to the mic, if you have any questions about implementation or the future at all. Any questions? Yes, go ahead. So a lot of our current data center is running on proprietary layer 2 switches, right? I don't want to name all those.
Now we're talking about SDN, which has a separated control plane versus data plane. Now, how are we going to leverage the current layer 2 proprietary devices? And those are devices that don't have any APIs. How are we going to leverage the current deployment while at the same time bringing some sort of SDN concept into our data center? We are absolutely working on that; I'll take the one issue and then give it to the other two panelists. We believe that you're going to see a virtual interface that will talk L2. It's going to be able to talk to the protocols, but it's not going to be a hardwired switch like you're used to. As long as we can be compatible, you'll move up the stack. If we fail on it, which we haven't, we've been able to accomplish it, you'll end up being able to move to SDN pretty easily. So we've worked with customers before that have L2 fabrics in place. And essentially what we do is we can still implement virtual networking. We do it at the vswitch, and we leverage the L2 fabric for connectivity. It creates a number of interesting challenges that are different from having an L3 fabric behind it, but it is possible to do. We don't have a whole lot to add either. I mean, we're an overlay solution. So as long as your fabric can transport IP, then it will function end to end. And we also have an L2 gateway in case you need to bridge your virtual L2 networks into existing L2 networks, of course. All right, very good. Any other questions? That was a great one. Okay. If you do have one, go ahead and come up to the mic, and maybe I'll pose this one, and this might be our last one for the session. So for people that are looking at implementing an SDN strategy now, let's talk about who it's best suited for right now. Is it something that every customer should be looking at? Are there specific types of use cases or specific implementations where it makes sense to start now? That would be the first part.
The second part is, what can everybody expect the first phase of that to look like? Is it tear-downs? Is it taking things off the network? What can people expect? Sure. Well, I'll speak about the experience of installing our product, if you want. We think it's applicable to any size of OpenStack installation. There's nothing too small. Scale is not the only benefit of using a solution like this. First of all, it's much easier to configure, and if you do happen to scale out, you really don't have to change anything. In terms of using it, it's really just as easy as installing another package. You've got Quantum, you've got a plug-in, and you're installing another agent, which is our software, in lieu of, for example, the Open vSwitch agent, something like that. So the experience of using this type of solution is really not that different from just installing software. Of course, if you start sniffing packets on your network, they're all going to be encapsulated, so it's going to be a little more opaque. That's something to improve, for sure. But I really don't think there's any significant obstacle to using this today. Yeah. I agree with that statement that small deployments actually do work very well, and the reality is most of our customers with OpenStack are starting out small. But very few people build an OpenStack cloud planning for it to be very, very small. They're doing it because they eventually want it to be big, and they want to be able to go from small to big without having to throw it away and start over again. So it actually makes a lot of sense to adopt an SDN solution very early on. As you're planning and building your cloud, you want to install it then, have it in the test environment, and roll it forward so that you actually have experience using it.
So as you scale out your cloud, you can actually do it seamlessly. And again, that's the process many of our customers follow. Most of them begin in an overlay model because it is non-invasive; it works on whatever hardware they have today. There are a number of them that are jumping straight to OpenFlow switches. They're working with partners like Dell, or they're working with the Switch Light product that we launched recently, which can run directly on a physical switch. So we're seeing some of that happen as well. There are multiple paths to adoption here. Unlike some of the companies in the market, we're not advocating jumping in the water without knowing what's in it or how deep it is. So you'd better have a good use case for what you're doing. And as you do, you can run hybrid, you can run standalone. But what's important above everything is to make sure you understand your use case before you even start to touch SDN or anything else. And that's not always as easy as it sounds. Good point. Question. Yeah, my question was, for each of you, what do you offer or see as support for monitoring, you know, fault, performance, and so on, on SDNs? Well, one of the things that was mentioned before is we just came out with Active Fabric Manager. And that's really good for seeing the traffic and configuring a top-of-rack switch. But one of the things that we do believe is that monitoring is going to be a much different thing once you start to encapsulate the packets, once you start to encrypt the packets, once they start going down parallel routes. So we're looking for new tools beyond what we're producing today. So we, I guess, have a new tool to some degree.
We have a product called Big Tap, which uses OpenFlow switches. It can live off physical taps in a non-OpenFlow network, or operate directly on a production OpenFlow network, and it does traffic steering and traffic filtering directly through an SDN controller. This can be used to give you very fine-grained access to any traffic in your network and send it to basically any kind of analysis device you want. We've been deploying this with a number of customers, sometimes in an OpenStack context, sometimes in a general networking context as a monitoring device. So you would use external tools? Right, for the actual analysis you could use Wireshark, but the SDN capability gets you the traffic from anywhere: you filter it and bring it to Wireshark or some other analysis tool, wherever you're running that. In our product, like I said, it includes the virtual switches that run on the hypervisors, and those take samples of various metrics and send them to a time-series database, which you can then query, again, with various tools. I wanted to say one thing here, which is that this fabric manager idea for the physical network is not really integrated with what's going on in the overlay at all, as Sam mentioned. And I think that's an area of opportunity where perhaps something like Quantum can bring the two things together. This has been discussed in other meetings and in various discussions during this week. Is that a good place for Quantum to go, to be managing both this fabric as well as the virtual network in the overlay? Thank you. Great. Thank you. I'm just curious as to what, if any, degree of resistance you've seen in some of your customer engagements from, say, the networking team to these products. And if so, what sort of diplomatic ways have you… That's a good question.
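The monitoring pattern Dan describes, hypervisor virtual switches periodically sampling metrics and pushing them into a time-series database that operators query with their own tools, can be sketched as a toy. Everything here is invented for illustration (the class, the metric name, the sample values); a real deployment would use an actual TSDB rather than an in-memory dict.

```python
# Toy sketch of the sample-and-query monitoring pattern described on the
# panel. All names and values are illustrative, not any vendor's API.
from collections import defaultdict

class TimeSeriesStore:
    """Minimal in-memory stand-in for a real time-series database."""
    def __init__(self):
        self._series = defaultdict(list)  # metric name -> [(ts, value), ...]

    def record(self, metric, ts, value):
        """Append one sample, as a virtual switch agent would on each tick."""
        self._series[metric].append((ts, value))

    def query(self, metric, start, end):
        """Return samples for `metric` with start <= ts <= end."""
        return [(t, v) for (t, v) in self._series[metric] if start <= t <= end]

store = TimeSeriesStore()
# A virtual switch on a hypervisor sampling received bytes once a second:
for ts, rx_bytes in [(0, 1000), (1, 1800), (2, 2600)]:
    store.record("vswitch0.rx_bytes", ts, rx_bytes)

print(store.query("vswitch0.rx_bytes", 1, 2))  # [(1, 1800), (2, 2600)]
```

The appeal of this design is that the analysis side is decoupled: any tool that can query the store can graph or alert on the samples, which is what "query, again, with various tools" alludes to.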
That's a relevant question. Have you found ways to disarm those situations? Well, let me just answer it this way. In high school, I was not voted the most diplomatic person, so I'm not going to touch this question. That's a real surprise. I guess so. I'll go near this one, because we definitely see it a lot, and we engage at both levels. We actually engage with the server team for our overlay products, and our OpenFlow products tend to engage a little bit more closely with the networking teams. And the reality is, our story around integrating physical and virtual actually plays a little bit better to networking teams. The objections they tend to have are that there's a loss of control and visibility in overlays. They feel like they're ceding some control over how the network is running; it's sort of becoming a dumb IP fabric. And as you talk about integrating the solutions together, there's a degree to which they regain that control and can actually have visibility. They can use our Big Tap product to see traffic at any point. So that helps bridge the gap between the two groups. But to be frank, this is certainly an area of challenge in deploying SDNs, and it will be for some time. Yeah, I actually think it should become a dumb IP fabric. So in that sense, it might come with some resistance. Actually, in a recent engagement, we had a pretty good experience. We tend to feel more comfortable engaging with the server team because that's where we come from. We build software; we don't necessarily come from the networking world at all. And that's kind of our aim. And then they like it, it works, and they pretty much convinced the network team that it was okay to deploy this thing. But in some past engagements, that didn't go so smoothly, to be fair.
And at various points, the networking team was, let's say, resistant to change in the way they were doing certain things. Take load balancers: "well, we connect those up to the aggregation routers, and then we do this and that, and your system is not going to work," you know. So I don't think it's entirely smooth when you want to virtualize all the network services. But I do believe that's the direction things are going in. And in terms of the way these clouds will be operated internally in enterprises, it's going to require a change in the way the teams are managed as well. I don't think we can really have a separate network team and server team for this cloud; there are a lot of disciplines that come together. Great. Well, I think we're actually almost out of time here, so I did want to say thank you to all the panelists for participating. We'll keep them up here for a few minutes. Dan from Midokura, www.midokura.com. And Big Switch, bigswitch.com. Thank you, Mike. And if you want to learn more about what Dell's doing in OpenStack and networking, dell.com/openstack. Thank you. And thank you for attending.