Good afternoon and welcome. In this session today, we'll be discussing how to deploy OpenStack with Cisco UCS and Nexus. We'll also have a deep dive into Application Centric Infrastructure, or ACI, including a demo of ACI. I'm Duane DeCapite, Product Manager for OpenStack at Cisco, joined today by Mike Cohen, our Director of Product Management.

Cisco is dedicated to the success of OpenStack. Who here saw the press release a few weeks ago that Cisco has pledged to invest over a billion dollars in OpenStack-related technologies in the next few years? Excellent. Cisco is dedicated to OpenStack, and it's a key part of our DNA. We've been involved in the OpenStack Foundation since its inception. Lew Tucker, our CTO of Cloud, is Vice Chair of the Foundation. We've been active in contributing code and blueprints across all the major components of OpenStack, including Nova, Horizon, and Neutron. We've also been very aggressive in creating automation technologies to make it easy to get started with OpenStack and to scale OpenStack. We've contributed a lot of this open-source technology upstream into OpenStack, as well as to our Cisco GitHub. We've created plugins and integrations with product lines including UCS, the Unified Computing System, Nexus, and the Cloud Services Router. We've also been very active in working with customers, applying best practices we've learned in data center technologies and other cloud architectures to help them be successful with OpenStack. We're supported by all the major Linux distributions: Red Hat, SUSE, and Canonical.

We're seeing great traction with OpenStack among customers in all verticals, including service provider and enterprise. A recent survey IDC did with Red Hat's customers found that 85% of the Red Hat enterprise customers surveyed plan to deploy OpenStack in production in the next few years. So there's tremendous interest in OpenStack from the customer base. Here's a snapshot of a few lighthouse customers that are in production with OpenStack. I'm always curious how far I can move to see if the camera follows. So far, so good. Awesome. Who here was at the OpenStack Summit in Portland about one year ago? Excellent. We saw Comcast on stage, with one of their key products in production on OpenStack. What about the Hong Kong Summit six months ago? Excellent. Photobucket, a very large online photo sharing service with over 3 billion photos and over 100 million users, is in production with OpenStack on UCS and Nexus. WebEx presented at a previous OpenStack Summit as well. So that's just a snapshot of a few lighthouse customers already in production.

Now let's take a deep dive into the UCS, or Unified Computing System, product line. UCS has been around for a few years, with tremendous traction: double-digit growth, a very large customer base, and a very large percentage of the Fortune 500. It's a very high-performing product line, both blade and rackmount, with many performance records, and an excellent fit for OpenStack compute and storage deployments. Both blade and rackmount UCS systems are managed by UCS Manager. UCS Manager actually runs on the fabric interconnects of the UCS system itself and allows a service profile to be created and applied to both blade and rackmount servers. These service profiles are very powerful because they define things like: what is the boot order? What is the host bus adapter configuration? What are the BIOS settings?
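To make the automation story concrete before we go on, here's a minimal sketch of driving UCS Manager from Python. This uses the current ucsmsdk library rather than the SDK referenced in this session, and the host, credentials, profile name, and class names are assumptions for illustration:

```python
# Minimal sketch: create a service profile, then watch for new rack servers.
# Assumes the ucsmsdk library; host, credentials, and names are placeholders.
import time
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.ls.LsServer import LsServer

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

# Define a service profile under the root org. In a real deployment the
# boot order, vNICs/vHBAs, and BIOS policy would all hang off this object.
sp = LsServer(parent_mo_or_dn="org-root",
              name="osp-compute",
              descr="OpenStack compute node profile")
handle.add_mo(sp)
handle.commit()

# Poll for newly discovered rack units -- a simple stand-in for the
# event-listener flow described next -- and hand them off to provisioning.
known = set()
while True:
    for server in handle.query_classid("ComputeRackUnit"):
        if server.dn not in known:
            known.add(server.dn)
            print("new node discovered:", server.dn)  # e.g. register for PXE
    time.sleep(30)
```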
And then you can take these service profiles and apply them to new servers, as well as decommission old servers and move their profiles onto new ones. So it makes it very easy to scale your compute and storage deployment for both blade and rackmount servers.

We've also innovated on automation technologies, which make it very easy to get new UCS nodes automatically added to your OpenStack deployment, along the lines of the sketch above. We have a Python-based SDK, which is publicly available. You configure the UCS system one time, and from that single point of contact it will discover all the UCS nodes that are part of that chassis. If they're bare metal and need to be PXE booted, that will happen. Once the nodes come online, they're registered with Cobbler. Then the event listener kicks in. This is very powerful, because it means that any time a new UCS server is plugged into the chassis and it fits your service profile, it can automatically become part of Nova in your OpenStack deployment. So if you have a remote data center or a remote site, it makes it very easy to add new nodes and scale new capacity. Once these new nodes are discovered, if they need a new image it can be PXE booted, whether it's RHEL 6.4, 6.5, or RHEL 7.0, and those nodes are automatically made part of your OpenStack deployment. So those are just some examples of the automation we have that makes it very easy to get started with OpenStack on UCS and to scale your deployment.

We've also been very active in blueprints. There's a blueprint for Nova to improve the Nova scheduler, so it's not just the default round-robin scheduler. Because of the service profiles we saw, we have information like: how close is the compute to the storage? What is the server load? What are the network characteristics? We can use some of that information to build a better scheduler. There's a session on the scheduler this week at the summit, and there's also a demo downstairs in the Cisco booth. So there's lots of innovation on the compute and storage side, including this new Nova scheduler.

Now let's take a deep dive into the Nexus and CSR product lines. The Cisco Nexus plugin for Neutron supports essentially every switch in the Nexus product line, all the way from the Nexus 1000V, the software-based solution, to the new Nexus 9000. It also supports the Nexus 5000, 6000, and 7000, very common components in, say, a Vblock or a FlexPod. The 9000 series was launched last year, a very innovative product line. It's essentially a best-of-breed combination of merchant silicon, custom ASICs, and software in the form of NX-OS. With this combination and an innovative design with no backplane, we're able to offer a very low price structure from one gig to 10 gig to 40 gig, with fewer ASICs and less cost. It's also very high performance: over 1.9 terabits per second per line card, 100-gig ready without a rip-and-replace of the optics, and over 20% higher port density. It has native programmability capabilities, including containers, which is very important, right? RHEL 7 has Docker containers. Who here is looking at container-based technologies for some of their deployments? It's an excellent fit for containers. And because of that innovative design with no backplane at all, it stays line rate as you go to 100 gig, with very low power and cooling costs, which translates to savings in your data center. So the Cisco Nexus plugin, which is part of Neutron, supports the 9000 today and supports all the Nexus products in the product line. There are many reasons to use it.
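As a rough sketch of how that plugin gets wired up, enabling the Nexus mechanism driver in Neutron's ML2 configuration looks something like the following. The section and option names follow the driver of that era; the switch IP, credentials, and host-to-port mappings are placeholders:

```ini
# ml2_conf.ini -- a hedged sketch, not a complete configuration
[ml2]
tenant_network_types = vlan
mechanism_drivers = openvswitch,cisco_nexus

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199

# One section per top-of-rack Nexus, keyed by its management IP.
[ml2_mech_cisco_nexus:192.0.2.10]
username = admin
password = secret
ssh_port = 22
# compute hostname = switch interface that host is cabled to
compute-01 = 1/10
compute-02 = 1/11
```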
The first reason is automated VLAN provisioning. As part of OpenStack, you add your VM, add a network, attach to the network, and your ports, your VLANs, everything is automatically configured on the Nexus. It also relieves one of the traditional bottlenecks in Neutron networking: by default, the Layer 3 agent is typically a generic Linux router with iptables. Instead of doing that, you can use your Nexus top-of-rack switch as your Layer 3 gateway and use provider networks. So this is a very high-performing, very scalable solution. You can do high availability with virtual port channels to multiple Nexus switches. And you basically get your pick: you can have the performance benefit of hardware, or you can have the flexibility of software with the Nexus 1000V. The Nexus 1000V is also very nice because you can do service chaining today. We're working with the Neutron group to define Layer 4 through 7 service chaining; while the standard is being defined, you can actually implement that today with the Nexus 1000V through vPath. It's also very nice for VXLAN overlay functionality, so you can do things like get around the 4,000-VLAN limitation, as well as have a nice data center interconnect technology.

Then there's the CSR, the Cloud Services Router 1000V. Think of the operating system of an ASR 1000 on a virtual machine that you can spin up as part of your OpenStack deployment. Think of all the innovations that have gone into IOS-based technologies over the past decades: EIGRP, Syslog, NetFlow, a best-of-breed VPN solution. Now you can have that as part of your OpenStack solution.

We've been very aggressive in contributing blueprints to Neutron. There's the VPN-as-a-service driver for the Cloud Services Router, a blueprint for Icehouse. There's the OpenDaylight plugin; who here heard Kyle Mestery's talk earlier this week on OpenDaylight? It's very good. There's the APIC driver, for the Application Policy Infrastructure Controller, as well as the UCS Manager ML2 plugin. That one is powerful because with this blueprint you can actually use VM-FEX, the hardware of the UCS fabric, to offload your hypervisor, so you have more compute resources available on your hypervisors. We've also innovated in IPv6; here's an example of a blueprint we did with one of our customers, Comcast, around upstream router advertisements. And there's group-based policy abstraction, a new, innovative way to apply group policy to a leaf-and-spine network fabric. And now I'll turn it over to Mike for a deep dive on ACI.

All right, thanks a lot, Duane. So I'm going to be talking about Cisco ACI today, Application Centric Infrastructure. One of the key building blocks of Cisco ACI is the Nexus 9000 series of switches. These are switches that can run in two modes. As Duane mentioned, they have a standard NX-OS mode, which offers an optimized hardware platform with better port density, performance, and power efficiency than any other switch on the market today. These switches also offer very rich programmability features, including support for our Nexus plugin and OpenStack. However, they also have a number of unique capabilities. One of these, for example, is VXLAN routing, which we can do because we have a merchant-plus strategy in the switch, tying together merchant silicon with Cisco ASICs. This lets us bring to bear the best of what we can do through this integration at Cisco. And these same switches that run in NX-OS mode can also be run in ACI mode.
And in that mode, we use the Nexus 9500 and 9300 along with the APIC, the Application Policy Infrastructure Controller. The idea here is to abstract from the network not so much network configuration, but to define policy in an abstract sense, designed to capture user intent. That policy can then be distributed across the network as different endpoints, different virtual machines or physical machines, are instantiated by the user. Effectively, the way this works is that the APIC is now your central point of policy across your entire network, and it will push the policy to the edges of the network wherever it's required. The network itself becomes a leaf-spine topology: a very simple network built on 40-gig, fully line-rate networking, along with an integrated line-rate directory that lives inside the spine, so there's no performance penalty as you discover where endpoints are. The architecture was also designed to scale to over a million endpoints, and to do this in a very secure manner, in a way that can support any hypervisor and multiple forms of encapsulation tying into the hypervisor layer.

I mentioned very quickly this idea of policy. I would say this is one of the key innovations we came up with as part of our ACI solution, and I want to talk to you specifically about what we mean when we say policy. We created this concept of an application network profile. This is essentially a unit of text, which can be described as JSON or XML, that describes what an application developer might want the network to do. It covers security policies, connectivity policies, quality of service, and Layer 4 through 7 services, but it's extensible beyond the network as well, covering compute requirements or storage requirements over time.

If we look at what this policy looks like, there are a couple of key primitives that pop up. The first is this concept of an endpoint group. One of the simplest ways to begin thinking about it in classical networking terms is a VLAN, but it's actually not a VLAN; it's a generalization of a way of tying together machines that all have the same properties, that all need to be treated in the same way by the infrastructure. You essentially place them in a group, and that defines the entire policy around those endpoints. These can be physical machines, virtual machines, Linux kernel containers via Docker, or potentially even various pieces of applications running directly on the host. And we wrap these endpoint groups in what we call contracts. Contracts are just a way of describing a set of rules for how groups interact. They include rules, and then actions that either forward traffic, redirect it to different services, or even redirect it out to some kind of tap, for example. Once we have this concept of a contract, it actually makes it easy to handle things we've been struggling with in Neutron for some time, such as how you describe complex things like service chaining and allow the user to tell you what they're trying to accomplish. We achieve this in ACI by adding network services as part of these contracts. I might say that my web group speaks to my app group, and in doing so, it goes through a firewall or a load balancer.
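As a hedged illustration of that example, an application network profile might look roughly like this in JSON. The field names are purely illustrative, not the actual APIC schema:

```json
{
  "application-profile": "three-tier-web",
  "endpoint-groups": [
    { "name": "web", "members": "VMs tagged tier=web" },
    { "name": "app", "members": "VMs tagged tier=app" }
  ],
  "contracts": [
    {
      "name": "web-to-app",
      "consumer": "web",
      "provider": "app",
      "rules": [
        { "allow": "tcp/8080",
          "service-chain": ["firewall", "load-balancer"] }
      ]
    }
  ]
}
```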
It's a very simple requirement to describe, complex to implement in the network, but once we know what the user wants, that information can be passed into the ACI fabric to implement it. These are the sort of powerful primitives that come out of using an abstract API like this.

Now, we're doing a lot of work in the open-source community to drive home some of the APIs and innovations we developed here. One of the key areas is this concept of group-based policy. You probably observed from my previous slide that the policy, described in an abstract sense, is not wedded to our ACI solution. We have a great backend for implementing it, but it's actually an abstract way of capturing what users want the network to do. So we're now working with the Neutron community and the OpenDaylight community, along with a number of vendors, some we collaborate with in the market, some we compete with in the market, and the goal is to define a very simple way of capturing what application developers want the network to do for them, in a way that doesn't require them to become networking experts. Application developers think in tiers of applications and how those tiers interact. They don't necessarily know where a router needs to sit, for example, and the reality is they shouldn't need to. So as we take this policy model and help instantiate it through these different projects, we can make it easier for developers and tenants to describe what they need out of these systems. And this can be done across a number of different backends. Obviously we'll be able to hook it up to the work we're doing with ACI, but everyone in the ecosystem will be able to leverage it in their SDN solutions as well.

The other piece of work we're doing is around something called OpFlex. OpFlex is a new protocol designed to distribute policy using an agent framework. Essentially, the concept is to push abstract policies, as I've been talking about, down to different devices in the network and allow them to render those locally into more complex, device-specific configuration. We're building an open-source implementation of this in the context of OpenDaylight, which will essentially be an OpFlex agent that can live on top of Open vSwitch, receive a policy like this, and render it down, in Open vSwitch's case, into OpenFlow; but that agent could be modified to work on any device. This is interesting to us from an ACI perspective because it allows us to tie policy down to a broad range of devices, essentially creating an open API that anyone can take advantage of. But it'll also be useful to the broader community, because they'll be able to leverage it in the context of OpenDaylight across any device as well.

Now let me talk about some of the advantages of tying together Cisco ACI with OpenStack. One of the key advantages of ACI is tying together the physical and the virtual infrastructure. We essentially give you a merged overlay and underlay solution that can extend from the hypervisor all the way across the network. This greatly simplifies your operations, in that you're no longer managing specific tunnels or figuring out what's going on between physical and virtual domains; that's automatically a natural property of the fabric. We'll take the policy from you, we'll distribute that policy to where it needs to go, and you'll never need to worry about where tunneling is happening. You'll also never see overhead from tunneling.
We can do all line-rate, hardware-accelerated encapsulation directly at the top-of-rack switch, and we can do this across a number of inputs. VLAN, VXLAN, and NVGRE: you can feed all of these into the top of rack, where you have hardware-accelerated tunnels. For this reason, we can extend it across physical servers and any hypervisor as well, and we can do that seamlessly because it's extended at the hardware layer. We also bring to bear this application-centric policy model, which, as I mentioned, is an easier way for application developers to describe what they need from the network. It also provides a self-documenting behavior: you'll now understand what your tenants really wanted the network to do, in a very clean sense, rather than understanding how they thought about mapping that to network constructs. We can offer you very rich telemetry features, and these extend both into the hypervisor and across the physical network. So if a tenant is having connectivity problems, you can go to one single place and debug the entire network. It may be a problem in the virtual switch, it may be a problem in the physical network; you can see that directly through one console, because these solutions are tied together. We give you very detailed health scores across the entire system as well, so you can proactively discover these problems before they become severe. And finally, we have a number of advanced capabilities built into the fabric itself. I mentioned one of the key ones, this concept of service chaining across physical and virtual devices, and we have a very rich model for tying this into third-party devices and describing these chains as part of a policy model. We can also do things like application acceleration via flowlet switching. This is another capability we can offer because we have control at the fabric layer over the traffic, and we can accelerate a number of common applications.

So let me talk about the two paths we're taking with ACI to tie it into OpenStack. The first one is what we've been calling the APIC plugin, which you can essentially think of as a standard Neutron ML2 plugin. We're using the Neutron APIs completely unmodified and essentially using a plugin to map them back into our ACI policy model. This allows you to leverage OpenStack in a standard way, the way you've been using it so far, but have the power of ACI at the back end as your networking solution. In parallel, we're working with the community around this concept of a group policy API. That API will allow you to change the abstractions you expose to users and allow them to begin describing things in terms of endpoint groups and contracts. We'll be able to map that into our policy model as well and instantiate those policies directly via ACI. If you're interested in this topic, I believe there's a session tomorrow at 1:30 on this; we'll be presenting along with Midokura and IBM and showing a demo of some of this technology as well.

So today what I want to do is focus in on the APIC plugin and show you that, along with the APIC itself and the solution we have. Just to quickly set the context of what I'm going to be showing: I have a small fabric back in our lab at Cisco, running two leaves and a spine switch, with OpenStack nodes hanging off them and an APIC controller attached to the fabric as well.
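For context on the setup, enabling the APIC mechanism driver alongside the standard OVS driver looks roughly like this in the ML2 configuration. This is a hedged sketch patterned on the early APIC driver; the section and option names are assumptions, and the controller address and credentials are placeholders:

```ini
# ml2_conf.ini -- hedged sketch of the APIC driver setup
[ml2]
mechanism_drivers = openvswitch,cisco_apic

[ml2_cisco_apic]
apic_hosts = 192.0.2.50
apic_username = admin
apic_password = secret
```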
And I'm going to be showing you how the APIC plugin can map Neutron configuration down into ACI. If you bear with me, we'll take a shot at a live demo and see how it goes. So I have two windows here: OpenStack, which I'll be spinning up, and then our APIC. I'm quickly going to log into both consoles, so give it a second. In OpenStack, this is standard, unmodified OpenStack running our ML2 plugin. And if you look over here as that's coming up, you can see the APIC UI. The APIC UI has two key tabs that I'll show for the purpose of this demo. One is the tenant tab, which shows you all of the policy configuration; I'll get to that in a second. Before I start, I want to show you the fabric tab. This is meant for network administrators to understand what the physical fabric is doing. In this case, I'm looking at a particular pod; it'll come up in a second. And I'll quickly show you the topology on it. It's actually big enough for people to see. As you can see, I have the APIC, two leaves, and a spine in my fabric. And I can do more interesting things. Say I dive in on one a little bit: I can see the switch directly and get a view of the different ports and where they're connected. Given time constraints, I'll just note that you can also dig in and start seeing port stats, et cetera, from there; we'll get to that in a separate demo.

Now let me spin over and show you how this works from a tenant perspective. I'm going to be using the demo tenant, the same one I'll be using in OpenStack. I mentioned this concept of an application network profile: a configuration that describes what the user wants from the network, naturally distributed across the fabric depending on where machines are instantiated. It's made up of a set of endpoint groups. So far there's one endpoint group, for the private network; this was created as part of our OpenStack bring-up.

Now let me go into my OpenStack. Again, I'm logging into the demo project. And I'll quickly show you, if we go into networking, the network topology. Right now we have exactly one network, the private network. The way I'm going to do this demo, again just for the purpose of time: I could obviously click and launch a bunch of things here, but instead we're going to run a quick script to bring up a bunch of networks and a bunch of VMs. That's in progress right now. As this runs, what you're seeing is just us running CLI commands in the background, again to make the demo simpler. I'm creating a number of networks, a router, and a number of virtual machines attached to them. And as this happens, you can see that as we create each network, we're mapping it back to an EPG inside the APIC controller. We're also mapping things like subnets back as well, so we have a matching set of state across these different VMs and these different OpenStack instances. Essentially, what we can provide here is completely distributed L2 behavior and completely distributed L3 behavior across our ACI fabric, as a result of our integration with OpenStack.
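For reference, the script running behind this demo would be doing roughly the following. This is a minimal sketch using the python-neutronclient and python-novaclient libraries of that era; the endpoint, credentials, image, and flavor are placeholders:

```python
# Hedged sketch of the demo script: create a network, subnet, router, and VM.
from neutronclient.v2_0 import client as neutron_client
from novaclient import client as nova_client

AUTH = dict(username="demo", password="secret",
            tenant_name="demo", auth_url="http://controller:5000/v2.0")

neutron = neutron_client.Client(**AUTH)
nova = nova_client.Client("2", AUTH["username"], AUTH["password"],
                          AUTH["tenant_name"], AUTH["auth_url"])

# Each network created here shows up as an endpoint group in the APIC.
net = neutron.create_network({"network": {"name": "net2"}})["network"]
subnet = neutron.create_subnet({"subnet": {
    "network_id": net["id"], "ip_version": 4,
    "cidr": "10.0.2.0/24"}})["subnet"]

# Router interfaces map to the distributed L3 behavior in the fabric.
router = neutron.create_router({"router": {"name": "r1"}})["router"]
neutron.add_interface_router(router["id"], {"subnet_id": subnet["id"]})

# Boot a VM on the new network; image and flavor names are placeholders.
image = nova.images.find(name="cirros")
flavor = nova.flavors.find(name="m1.tiny")
nova.servers.create("vm-net2", image, flavor, nics=[{"net-id": net["id"]}])
```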
The other thing I'll point out, as different VMs attach: the way this integration works, we're using the OVS driver in ML2 along with our APIC driver. The OVS driver, which works in an unmodified manner, is essentially configuring VLANs per network. Those VLANs are then fed into our ACI fabric, at which point they're encapsulated and fully managed by ACI. So for example, if we look at one of these, let's say net two, we can see that it's on VLAN 100. Now if I dig in on net two here, I can see the bindings actually being used. Sorry, these are actually VLANs one and two; I was looking at the wrong one. No, that was the private network. So essentially we're using VLANs from OVS, directly tied into the ports on the physical switch, to identify the different VMs and essentially identify the different networks. These VLANs can be local to the switches themselves, which gives us pretty broad scale. We can also do various forms of encap here, so we can do things like VXLAN or NVGRE passed into the fabric as well; that would just be a matter of what you configure in OVS, for example. And as I mentioned, you can do more complex things: now that you have this configuration in the APIC, you can modify it, and if you want to activate some of our service chaining features, you can take advantage of them via these constructs directly through the APIC as well.

So I think on that note, I should probably wrap up and leave some time for questions for either me or Duane on the work we're doing: the integration with Nexus and the Nexus plugin, the integration with UCS, and our integration with ACI and the ACI fabric. If there are questions, please head up to the mic.

Yes. So my question is, you talked a little about the 9000 series. Is this specific to the 9000s, or do we have the ability to leverage our investment in the 7Ks and the 5Ks as well?

So, you know, there's a long answer and a short answer. The policy model, the work we're doing with the policy model, particularly around Neutron and OpenDaylight, and actually some other work we're doing with Open vSwitch, will allow us to bring the same policy model and policy capabilities to other switches as well, across the Nexus line. Now, some of the unique capabilities of the 9K, the telemetry, for example, will not be present in other switches; those are capabilities we built into the Nexus 9000. So the policy, and extending policy across the domain, that will be possible to do, but the 9K will still have a unique feature set. Again, things like the application acceleration and some of the service chaining capabilities will be in the 9000 only.

Okay, great, thank you. And there are custom ASICs in the 9000 also, which help too.

Yeah, and as I mentioned before, the merchant-plus strategy is essentially how that's possible. There are unique features in the Nexus 9000 that are actually ASIC-driven, and that's where these capabilities come from. So those can only be exposed in switches with those ASICs.

All right, I've got another question. I saw you are promoting OpFlex instead of some other standard like OpenFlow or other protocols. So what's the difference, and what's the advantage of using OpFlex over other open protocols?
Sure. The concept behind OpFlex is essentially to design an agent framework and push complexity away from the controller: to push that device-specific complexity down to the devices themselves. So it's not purely at odds with OpenFlow; for example, OpenFlow could be the native local API with an OpFlex agent sitting in front of it. In the Open vSwitch case, that's exactly what we do. But the concept behind OpFlex is to build a scalable system. We don't want the controller to have to understand all of the specifics of how every device works. Essentially, that's defined in the policy: the policy is pushed, or essentially distributed, via OpFlex across the various devices, and then the devices can render it into whatever API they have present. That could be an OpenFlow API; that could be some device-specific API. But by doing it with an agent-based system, you achieve better scalability, because the controller doesn't need to hold all of the state of every single device. It has less state to manage and less state to scale. If you want to launch an app or remove an app, it's just a change in policy, and all the complex behavior is local to each device, rather than the controller having to know everything and modify the entire state.

And OpFlex integrates directly into the hypervisor?

Well, OpFlex can be integrated either at a very deep level or at an API level; you have complete flexibility about how you integrate there. But the idea is to be able to push a policy down to a device and then give that device control. That's a more scalable architecture than what we've seen with some of the other solutions out there. Great questions.

Excellent, thank you. Just in summary: we have a complete compute, storage, and networking solution with ACI, a great network fabric solution with group-based policy abstraction. Please go to cisco.com/go/openstack for more information. We appreciate your interest and attention in this session. We hope you enjoy the rest of the summit. Thank you.