Hi, my name is Prashant Ganpati. I work for Dell, primarily in Dell Networking, and I'm part of the Dell Open Networking team. Today I'll be talking about how we can build physical networks in different ways, using our Open Networking partners to build OpenStack networks and to help you with your OpenStack solution. As you can see, I don't have a title. This is a fairly new team within Dell that is trying to promote this new initiative of Open Networking, so we end up doing pretty much everything, from product management to sales to talking to partners to building the strategy. That's what I do in Dell. In my previous life, I was a sales engineer in Dell, covering some very large accounts from a networking perspective. For the agenda, I've put up a brief summary. I'll be talking about what Open Networking is, what it stands for, and what we are trying to achieve through it. Then we'll go into the OpenStack aspect of how our partners are integrating with OpenStack and making the physical networking part of it easy to solve, so you can focus on your OpenStack solution. And we'll end with a summary and questions. Before I start, I was thinking: how do I make this presentation stick out? How can people remember it? If you go to any networking presentation nowadays, what is the one thing everybody keeps talking about? Anyone? Yeah, exactly. And SDN is probably even more overused than NFV. It's not that I don't like the word, but it's overused, and it dilutes the message of what's actually happening underneath in terms of technology. So one thing I'll promise: I will not use the word SDN in my presentation. I'll hopefully talk more about the technologies that are enabling that particular word. If you catch me using the word SDN, I have some selfie clicker things, and I'll hand them out to anyone who catches me doing it.
So with that, I'll move on to what Open Networking is and how Dell came about doing it. When I started my career in the early 2000s, networking was still a black box. You had proprietary ASICs. I was part of Force10, and we built our own proprietary ASICs. Once you have a proprietary ASIC at the bottom, you build an OS that only works with that ASIC, and then on top, the tools are ones that only work with that software. So it was a complete black box, as you can see on the left. That was traditional networking. But around the mid-2000s, commodity chipsets started coming into play. Intel, Fulcrum, Broadcom, and a bunch of other guys came into the picture, and over the last few years commodity chipsets in networking have become the mainstay. Many companies have successfully implemented them, including Dell. Once that happened, the next question is: why are you tying a specific OS to a commodity chipset? Why can't it be like a server? Today, when you buy a server, if someone tells you, "I can only sell you one OS on it," you look at him like he's mad. Why do you accept that with a switch? You automatically assume it comes with that OS. So we wanted to say: hey, Dell has done this before. Dell disaggregated the server before. Why can't we do that in networking? And working with some very large customers, we realized that's what they'd been doing over the last few years. They took commodity white boxes and ran their own OSes on them. Their reasons for doing it varied. They wanted to control their own destiny. Their scale was huge. They wanted to control costs, get fixes faster, lots of other things. But there is value in doing that in the enterprise as well, and that's what Dell saw. And one of the key things that happened as we went along trying to do this is that companies like Cumulus and Big Switch came along who said: we are a software company.
We're going to build software that can run on a commodity-based switch. So what we decided was to jump headfirst into this, right? Not test the waters, jump headfirst. Dell, having acquired Force10, has its own OS; it's called OS9 now. So we support that on our ON switches. But what Open Networking is, is we started saying we will support other OSes on it as well. So we started with Cumulus and Big Switch on our ONIE-bootable Dell Open Networking switches. It's our flagship product line, not a separate line of switches; it's the same switches we sell to everybody. And along with that, we support our own OS. We just introduced another one called IP Infusion. I'm not going to spend too much time on it because it's not related to OpenStack, but what we bring to the table with IPI is MPLS. We'll be supporting MPLS on the same switches: layer 2 and layer 3 VPNs, LDP. So you can solve cases like edge routing, lower-end routers, things like that. We're bringing that into the mix as well. So we have four OSes now. That's our stable right now. And what we hope to do is what you see on the right. We have merchant silicon; primarily we are based on Broadcom chipsets. Going down the line, there are others coming along: Mellanox, Barefoot, Cavium. Lots of guys coming in for the 100-gig play. We have Dell's standard commodity hardware: top-of-rack, single-chip, 1RU boxes, which are very simple to make. On top of it, we support all these four OSes. And all of these bring in a whole set of new orchestration tools, automation tools, new ways you can manage it. So it opens up a whole number of choices for the customer. The customer can decide and evaluate which OS works best for his environment and use it that way without having to rip and replace hardware. You're not locked into the hardware. That's the key message: you don't have to rip and replace hardware when you want to try different things.
So that's primarily what Open Networking is. I think there is a general network paradigm shift; this has been received really well in the industry, and we're seeing a lot of traction. So that's the primary picture of what Open Networking is and what we're trying to achieve through it. Before moving on, I also wanted to mention another aspect of how Dell is driving innovation in this industry. In Dell Networking, what we realized is that we opened up the operating system environment, we allowed different OSes to run on our switch, but underneath, the silicon has an SDK, and a networking operating system has to program to that SDK. If a new silicon comes along, you have to program to its SDK. Now, many people have developed their own proprietary abstraction layers, but there's nothing open out there, and it's hard for a company to keep programming to a new silicon all the time. So we collaborated with Microsoft and with chipset guys like Broadcom and Mellanox, and defined a switch abstraction layer called SAI, the Switch Abstraction Interface. It's a low-level abstraction interface with C APIs. If the chip is compliant to it and the networking OS can understand it, then all they have to do to bring on a new silicon is talk to the SAI layer. And we contributed it to OCP, the Open Compute Project. It's been well received; multiple chip guys and multiple OS guys have jumped on it. So hopefully this will become a standard and enable more chip vendors to come into play, because competition is always good. Broadcom has done a great job trailblazing this, but competition is good for innovation, and it brings the best out of the industry. So if you look at what Dell Networking has done, it's opened up the OS environment as well as the chip environment. And now you get a whole set of permutations and combinations of which OS you can play with on which chip, and you get different use cases out of it.
So I think that will drive time to market as well as innovation in this industry. Before we get into the specifics of what our partners can do on our switches with respect to OpenStack, I just have to call out the Dell and Red Hat Enterprise Cloud solution, which is the main aspect of our booth as well as our presentations here. It's a very closely integrated solution with Dell hardware and Red Hat software. If you've not heard about it in other sessions already, you should check it out. It's a pretty good solution; it can be predefined as well as custom-made. I'm not the expert on it, but there are other people around here who can tell you more about it. I had to mention that. So, going into why Dell Open Networking and OpenStack: where's the connection? Physical networking has a role in OpenStack, but it's kind of the invisible part. You just want it to work so that you can focus on OpenStack, and that's where the connection comes in. You just want it to work, and it doesn't always work. That's where we see the integration. I've put down a table of why I think it's relevant in other ways too. One key thing is that we are promoting commodity-based chipsets in our switches, and OpenStack, ideally, should also be working on commodity hardware; that's where OpenStack is best. Choice of networking OS: we offer a choice of networking OS, and OpenStack is open source as well as backed by a bunch of commercial vendors. So you get a choice of whether you want to go all open source and do your own thing, or leverage commercial vendors for some of the different aspects of it. Similar principles. Then there are long-term CapEx and OpEx savings with Dell Open Networking. You get choices; you decide what works best for your environment. You know the cost of your hardware and your software, so you make intelligent CapEx choices, and then you streamline your DevOps operations and so on to cater to your needs.
So long term you get OpEx savings as well. Similar principles apply in OpenStack too. Eventually you want to streamline your DevOps operations; you want a common DevOps layer that manages everything, and then you have specialists in each area. So again, long-term CapEx and OpEx savings if you do the OpenStack solution right. And at the end of the day, you want to speed up innovation and time to market through Open Networking, and similar principles come in with OpenStack. At the end of the day, you want the OpenStack solution to increase your business, whatever your business is: do the right analytics to gain the leads you need to generate more business from your segment. Similar principles, I think. So that's why Dell Open Networking is relevant in the OpenStack arena. During the run-up to this presentation, there were a lot of questions about the physical networking aspect, from people just starting out with OpenStack or already using it. Questions come up like: should I use layer 2? Traditionally, enterprises have used layer 2. How do I go to layer 3? Do I need an overlay? Do I not need an overlay? Everybody's talking about overlays, so maybe I should do an overlay, right? That's not the right way to approach it. And what about security implications? HA? Multi-tenancy? Can we actually try this out without spending a lot? A lot of people I meet are all proof-of-concepting OpenStack. So can we try all this out without having to spend a lot of money before we decide what the best way to do this is? These are the key questions that come up. And Dell Open Networking, I think, enables all of this; it helps answer some of these questions.
It also helps you say: hey, we can test and evaluate different OS options on the switches, see what works best for our environment, and then decide what is best suited for the overall OpenStack solution. Now, diving deeper into the two partners I'm going to focus on in this presentation: Cumulus and Big Switch. Big Switch has a very simple message: it's one big switch, okay? It's simple plug-and-play of the fabric you have connecting to the controller. You bring up your switches, booting via ONIE into the Switch Light OS that Big Switch has, and then put in the controller IP. After that, everything is managed from the controller. That is the supervisor of your big switch. In a chassis, the supervisor would be used to manage everything; here it becomes the supervisor of your big switch consisting of your spine and leaf. You don't have to log into any of those switches or do any troubleshooting or configuration on them. Configuration, management, and troubleshooting all happen from the controller. That's your one-stop, single pane of management. And their philosophy is one of a pod. You build a pod of a certain scale: 48K endpoints, 16 racks, about 700 servers, a decent scale for that pod. If you want to scale beyond that, you build another pod, and all these pods connect up to a super-spine, your core network. And if you look at a lot of the big guys, Facebook, Microsoft, and the like, that's what they're doing: they're building pods of networks that connect into a bigger core network. So that's the approach they're taking. The key things they bring in with respect to OpenStack are a Neutron ML2 plugin, so they interact with the OpenStack environment, get the data out of it, and help in plumbing the physical network, and some contributions on the Horizon GUI side, which I'll talk about in depth.
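To make the ML2 plugin idea concrete: a Neutron ML2 mechanism driver gets called on postcommit hooks after Neutron saves a network, and it can forward those details to a fabric controller, which then plumbs the physical network. Below is a minimal standalone sketch of that pattern. The method name `create_network_postcommit` follows Neutron's MechanismDriver convention, but this class, the controller URL, and the payload shape are illustrative assumptions, not Big Switch's actual plugin code:

```python
import json

class FabricMechanismDriver:
    """Sketch of an ML2-style mechanism driver: it mirrors network
    create events to a fabric controller so the controller can plumb
    VLANs/VRFs on the physical switches (hypothetical API, not the
    real Big Switch plugin)."""

    def __init__(self, controller_url, send=None):
        self.controller_url = controller_url
        # 'send' is injectable so the sketch runs without a real controller.
        self.send = send or (lambda url, body: None)

    def create_network_postcommit(self, context):
        # Called after Neutron commits the network to its database.
        # Real Neutron passes a NetworkContext object; a dict stands in here.
        net = context["network"]
        payload = {
            "id": net["id"],
            "tenant_id": net["tenant_id"],
            "segmentation_id": net.get("provider:segmentation_id"),
        }
        self.send(self.controller_url + "/networks", json.dumps(payload))
        return payload

# Capture what would be sent to the controller instead of doing real HTTP.
sent = []
driver = FabricMechanismDriver(
    "https://controller.example",  # hypothetical controller address
    send=lambda url, body: sent.append((url, body)),
)
result = driver.create_network_postcommit(
    {"network": {"id": "net-1", "tenant_id": "t-1",
                 "provider:segmentation_id": 101}}
)
```

The point is the direction of the flow: the tenant never touches the switches; Neutron's events are the single source the controller uses to keep the fabric in sync.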
The other partner I want to talk about is Cumulus. Cumulus Networks has a very simple message again. OpenStack relies heavily on Linux, right? So if you have a Linux environment, and the sysadmins and the other resources you have for your OpenStack are familiar with Linux and use a lot of Linux tools, their simple message is: extend that to the switch. Because until now, you would always say, hey, I have to do something special for this switch. They're just saying: use the same provisioning tools, management tools, and troubleshooting tools that you use with the rest of your Linux devices, and extend them to the switch as well, because Cumulus is a pure Linux OS and works with all the Linux tools you already use. They also integrate with a number of different overlay partners. One of them, which is Dell's partner as well, is Midokura, which I'll talk about. Other than that, they also work with overlay partners like PlumGrid and Nuage and other such overlay solutions. And as I mentioned, Midokura and VMware are partners that both Cumulus and Big Switch integrate with from an overlay standpoint. So, going into the actual solutions: what are the different solutions? A disclaimer right here: these are not all the possible solutions. I'm just going to talk about some of the approaches to give you a hint of what you can do, whether to start off with or for a more complex solution. There are a bunch of other solutions; there is no end to this once you get into Open Networking, given the number of solutions you can approach using these different OS partners. The most basic one, if you are using Cumulus and you're just doing a POC and want to try out OpenStack, is to build a simple nova-network setup. You just provision your Cumulus-based fabric using Puppet or some other automation tool that you use in your environment.
They have MLAG technology, some cool layer 2 active-active technology, to build a very simple network using provisioning tools that you already have. And then you build a simple nova-network setup with all the VLANs on all the ports. It just gets you going; you're able to leverage what you have in the environment, and you don't have to worry too much about the physical network itself. Again, I think this is more to get you started. As OpenStack migrates into Kilo, you'll see this moving; nova-network is getting deprecated, so you'll move to a Neutron ML2 plugin with OVS if you just want to keep it simple. So that's one approach. For this one, they have a validated design guide as well, and if you come to our booth or their booth, you can actually see a step-by-step version of how they set it up, all the way from provisioning to managing the whole fabric. So this is one simple approach to try it out and test it, and build a small environment if needed. There is no overlay, no complication here; it keeps things simple. Now, if you want to go to a more production-grade solution, there needs to be scale, and there are a bunch of overlay options out there. One of them is Midokura with Cumulus. They have integration with Midokura, where the Midokura management console can provision the VTEP gateway on the switches, so you don't have to do that separately. And you can build a layer 3 fabric; if you want to build a scalable fabric, layer 3 is the way to go. So you can build a layer 3 fabric with an overlay solution and get a more scalable network. Now the next question is: oh, IP addressing, right? I'm always scared of doing IP addressing in a layer 3 environment. So they have some cool features like IP unnumbered. IP unnumbered is a feature that's been there forever with Cisco, but it's being adopted more widely now.
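To see why unnumbered interfaces matter at fabric scale, here is a back-of-the-envelope comparison of the address count for a numbered versus unnumbered leaf-spine, assuming a full mesh of point-to-point links addressed as /31s when numbered, and one loopback per switch when unnumbered (the pod sizes are made up for illustration):

```python
def fabric_ip_count(leaves, spines, unnumbered):
    """Count IP addresses needed to bring up a leaf-spine fabric.

    Numbered: every leaf-spine link is a /31 point-to-point, so two
    addresses per link, plus one loopback per switch. Unnumbered:
    interfaces borrow the switch loopback, so one address per switch
    is enough."""
    links = leaves * spines  # full mesh: each leaf uplinks to each spine
    if unnumbered:
        return leaves + spines            # one loopback each
    return links * 2 + leaves + spines    # /31 per link, plus loopbacks

# A 16-rack pod: 16 leaves, 4 spines.
numbered = fabric_ip_count(16, 4, unnumbered=False)    # 148 addresses
unnumbered = fabric_ip_count(16, 4, unnumbered=True)   # 20 addresses
```

So in this hypothetical pod the unnumbered design plans roughly 20 addresses instead of about 150, and adding a rack never means carving out new point-to-point subnets.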
So you reduce the number of IP addresses using features like that. They help you set up the fabric quickly using their provisioning tools, and then you use the Midokura solution to get a very rich feature set: distributed routing, distributed firewall, load balancer, SNAT, DNAT, all of that cool stuff comes in. If that's the solution you're looking for, you can try that option out as well with the Dell Open Networking switches. Moving on to Big Switch: again, Big Switch supports both nova-network and, moving forward, Neutron; they have a Neutron ML2 plugin. And the main thing that I want to stress here is ease of operations: easy to provision, easy to manage, easy to troubleshoot. That's their message with respect to OpenStack. With the Neutron ML2 plugin, when you first set it up in an OpenStack environment, and I think they are collaborating with a bunch of certified OpenStack distributions, like Mirantis Fuel and RDO and some others, they have a script that runs in the beginning and sets up the whole environment from an OpenStack perspective. They set up the plugin, they set up LLDP on the compute nodes, auto-LLDP, and a couple of other things. So when the fabric comes up, the auto-bonding happens from the leaf switches down to the compute nodes; that part is up. Then, once you start creating projects and networks in your OpenStack environment, the Neutron ML2 plugin conveys all those messages to the Big Switch controllers. So they have all that information: they know how many projects there are, how many networks there are, how many tenants there are. And on the back end, they're plumbing the network to make sure all of that connects to each other.
They use simple things like VLANs and VRFs within the confines of the Broadcom chipsets to give you that scale. So there are 4K VLANs and, I think, about 1K VRFs, and that's the scale you achieve. There's no overlay involved here; it's regular switching and routing, and you're able to get a certain scale with it. And it's plug and play: you set it up, the controller talks to the OpenStack side, and it sets it all up. That's the key message: if you don't want to deal with physical fabric configuration and provisioning, this is a one-stop shop to do it. The other two aspects I want to talk about are their contributions to the Horizon GUI, extensions to the Horizon GUI. One is their version of the Heat template. In an OpenStack environment, there's a network admin, and he has to talk to the tenants, he has to talk to the security guys, he has to talk to a bunch of people to create the networks that are needed for the overall OpenStack solution. What he can do, if they're using Big Switch, is leverage all this information, create a template, and add it to the catalog of Heat templates being built by other people. So at the end of the day, there's one place that is the source of truth, and people don't have to go back and forth over who said what; it's all documented in one place, and when the time comes, it's easy to deploy. That's one aspect of the Heat template: it saves time for everybody, you collect information in one place, and that's the network you're going to build in OpenStack. The other part is the BCF test path. Once this environment is up and running, on the Horizon GUI the OpenStack admin or the tenant admin is able to see, without reaching out to the network admin, that, hey, this VM is not able to talk to the other end.
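Staying with the Heat template point for a second: the value is that the network definition becomes one reviewable, versioned document instead of tribal knowledge spread across emails. As a rough illustration, here is a tiny HOT-style template assembled in Python. `OS::Neutron::Net` and `OS::Neutron::Subnet` are real Heat resource types, but the function, the network name, and the CIDR are made up for this sketch:

```python
import json

def make_network_template(net_name, cidr):
    """Build a minimal HOT-style template describing one tenant
    network plus its subnet, so the 'source of truth' lives in a
    single reviewable document."""
    return {
        "heat_template_version": "2015-04-30",  # the Kilo-era HOT version
        "resources": {
            net_name: {
                "type": "OS::Neutron::Net",
                "properties": {"name": net_name},
            },
            net_name + "_subnet": {
                "type": "OS::Neutron::Subnet",
                "properties": {
                    # get_resource wires the subnet to the network above
                    "network_id": {"get_resource": net_name},
                    "cidr": cidr,
                },
            },
        },
    }

template = make_network_template("app_net", "10.10.0.0/24")
print(json.dumps(template, indent=2))
```

Once a template like this sits in the catalog, any tenant deployment stamps out the same network layout, which is exactly the "documented in one place, easy to deploy" point above.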
Should I call the network admin right away? No, you can go and do a first level of testing yourself using this test path, run the end-to-end test of that path, and get the result. So at least he can give some useful information to the network admin if that initial level doesn't give him what he needs. The other part of it is the VMware NSX integration. If any of you want to go down that path with the VIO solution and things like that, I think all the OS partners I've talked about have some level of VMware NSX integration. So that's another aspect to keep in mind when you go down the Open Networking path. In summary, I just want to highlight what Open Networking brings to the table. Everything I talked about is happening on Dell Open Networking switches. You can try all of this on the same set of switches, without having to go to a different set of switches, and you can decide based on that what works best for your environment. I think each of them has its own strengths, so based on the choice of OS, the customer can decide what works best for him, decide the best way to do the physical networking of OpenStack in a seamless way, and then focus on the OpenStack solution. And I know that once you start delving into OpenStack networking, it's not easy; it's pretty complicated. So you want to spend more time on that, and you want to make sure the physical fabric just works, the plumbing just works. That's the whole idea that Open Networking brings to the table. And I think, at the end of the day, what all of this does is that Dell Open Networking is enabling the word I can't say. So, SDN, right? That's what it's doing at the end of the day. You're getting all the options. At the end of the day, SDN is about being able to try out different options, change things, play around, and figure out what's best for you.
And I think we are enabling that by providing all these options. So that's the summary of my presentation. Sorry for the dry mouth, but hopefully I gave you some cool information. At the Dell booth, we have two stations with Cumulus and Big Switch demos running, and remote labs if you want to access them. You can also go to the individual booths of both Big Switch and Cumulus if you want more information; they're definitely the experts at this. And questions? Oh, you can use the mic so it's recorded. Sorry. So one of the pain points we have currently in our VMware infrastructure, which we anticipate we'll have in OpenStack as well, is that we try to do open trunks, or basically get as many VLANs down to our compute nodes as we can, so that every time a new VLAN is added we don't have to focus on getting it added to each of our nodes and that sort of thing. So one area where we're looking specifically at Dell Open Networking and the Cumulus piece is handling the plumbing like you're talking about. I was curious if you could go into a little more detail about what's supported, like what we could actually drive from, say, NSX and have it end up in the physical plumbing. From the NSX standpoint, I think with the VIO integrated solution, they also have a plugin into Neutron. So from that aspect, when you create a network, they should be getting that information and what VLAN it is, and they should be able to plumb that into the physical network, as far as I know. Other than that, the center of configuration would be NSX; if you're using NSX, the NSX manager would be what does that. I believe they have a plugin, and what the plugin does is listen to everything Neutron does.
So once you create the network, it should have that information to be able to create the VLANs that you need on the physical infrastructure side, if you're going in that direction, I feel. Because on the VXLAN side, you don't really have to do anything on the physical network; the whole point of VXLAN is that it plumbs across all of that, right? That's happening at the VM level. What it does do is, if traffic is crossing over to physical networks, to non-virtual networks or non-virtual workloads, it configures the VTEP gateway on the physical switch so that it can decapsulate the VXLAN encapsulation. That's what it does. But from a VLAN standpoint, as long as it's able to get the information about what's being done by Neutron, it should be able to plumb the network, unless that's already done. Thank you for the presentation, it was very enjoyable. If you go on Dell's website and look at the Dell switching options, the S55 and the S4810, there's the Force10 or OS9 version, and there's also the ON version; I think it actually shows up as a different part number. Is it possible, say a customer says, "I'm not sure if I want to do Open Networking at this very moment," can they buy the non-Open-Networking switch and then do an ONIE installation of, say, Big Switch or Cumulus after the fact, or vice versa? Yeah, so that's a good question. Starting in 2015: last year, when we first started out, yes, there were different versions; we didn't know how far we were going with this. But starting now, all the new switches that we have introduced, like the S4048-ON, which is the 10-gig Trident 2-based switch, and the S3048-ON, which is the one-gig switch, all the new top-of-rack 1RU switches that we are coming out with are ON-based only. What that means is they are all ONIE-bootable; they will all support our OS, which is OS9, as well as all these other options.
You can actually buy the switch without an OS and then decide which OS you want and buy it separately, or you can say, hey, I want it with OS9, or I want it with Cumulus, or I want it with Switch Light. So you can do that, but all the switches are ON, so there is no differentiation there. And on the same switches, yes, if you want FTOS/OS9, you can start with that, and then if you want to move to Cumulus, sure, go pay the licensing for that. And if you go down one of the partners' roads, and this is nothing against our partners, but rather to encourage people to use them: many customers, even successful ones, were not always sure, because this is a new thing. They said, hey, what happens if this doesn't work out? Not because the partner is bad, but because it turns out not to be the best thing for their environment. What happens then? What do I do? So we say: you can fall back to OS9. That's how we offer you protection; if you go down this path, into this new world, you can fall back to OS9 if it doesn't work out for whatever reason. But all new top-of-rack, single-chip switches from the data center side (not the chassis or the bigger ones, and not the campus side) are all going to be ON switches, supporting one or more of the partners on them. Thank you. No more questions? That's the end of the presentation. Hope you liked it. Thank you.