It's lots of things, all waking up and becoming part of the global phenomenon we call the Internet of Everything. Trees will talk to networks will talk to scientists about climate change. Cars will talk to road sensors will talk to stoplights about traffic efficiency. The ambulance will talk to patient records will talk to doctors about saving lives. It's going to be amazing and exciting and, maybe most remarkably, not that far away. The next big thing? We're going to wake the world up and watch, with eyes wide, as it gets to work. Cisco: tomorrow starts here.

Good afternoon and welcome. Today we'll be discussing OpenStack with Cisco compute, storage, and networking. So, we'll be talking a lot about the Cisco Unified Computing System, or UCS, product line for compute and storage, and we'll be discussing Cisco Nexus, the data center switching and software product line. I'm Duane DeCapite, Product Manager for OpenStack at Cisco, and I'm joined by Ashok Rajagopalan, the UCS Marketing Manager for Cloud and Virtualization.

So, OpenStack is very important to Cisco, and OpenStack is a key part of Cisco's DNA. We've been involved with OpenStack since the formation of the foundation. Our vice president and CTO, Lew Tucker, is vice chair of the foundation, and we've been contributing code and blueprints since Diablo. So, who here has been involved with OpenStack since Diablo? That was a long time ago. Excellent. We've innovated on things such as Nova scheduling, Quantum/Neutron networking plugins, high availability, as well as dashboards and other scheduling functionality. We've also innovated in areas like automation and orchestration, a lot of Puppet work, much of which has been submitted upstream in the community edition and some of which is in the Cisco OpenStack installer. We've also had plugins and integration with our key product lines: UCS for Nova; innovations on storage with Cinder, Swift, and Ceph; and networking, both hardware and software, with our Nexus plugins and the CSR 1000V, the Cloud Services Router 1000V. We've also done a lot of work with the community on layer 4 through 7 services: firewall as a service, VPN as a service, load balancing as a service.

We've also done a lot of work with customers in real-world use cases. Who here was at the summit in Portland six months ago? I know those guys were right there. So, on stage there we had Comcast, with one of their key services, Xfinity Cloud DVR, on OpenStack on Cisco UCS. This summit, we're talking about Photobucket, the large online photo-sharing service, with 3 billion photos online on UCS with the Nexus plugins. And we're driving innovation through these real-world use cases, working with customers. A lot of the things we've learned from other architectures, like VMDC, the virtualized multi-tenant data center, we're bringing those lessons learned and best practices into OpenStack.

One of the benefits of OpenStack is that it's a great abstraction for software-defined networking. Customers like the fact that they can deploy their applications on OpenStack and then plug in multiple networking stacks underneath it, and the networking complexity is essentially abstracted. You can have a hardware-based networking plugin through the Nexus product line, software-based with the Nexus 1000V, or multiple SDN plugins, such as an OpenDaylight plugin for an OpenFlow-based solution.
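That abstraction shows up at the API level: the tenant makes the same Neutron calls whether the plugin underneath is the hardware Nexus plugin, the Nexus 1000V, or an OpenFlow controller. A minimal sketch with the python-neutronclient of that era, with placeholder credentials, endpoint, and names:

```python
from neutronclient.v2_0 import client

# Placeholder credentials and Keystone endpoint.
neutron = client.Client(username="demo", password="secret",
                        tenant_name="demo-tenant",
                        auth_url="http://controller:5000/v2.0")

# Create a tenant network and a subnet on it.
net = neutron.create_network({"network": {"name": "web-tier",
                                           "admin_state_up": True}})
net_id = net["network"]["id"]
neutron.create_subnet({"subnet": {"network_id": net_id,
                                  "ip_version": 4,
                                  "cidr": "10.10.1.0/24"}})
# Which VLAN or overlay segment gets provisioned, and on which switch,
# is the plugin's problem, not the tenant's.
```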
We also have technology around the onePK API. This is a standard API, and as it's adopted across Cisco product lines and as onePK functionality is added to a Neutron plugin, it gives you additional scalability and product options.

So, customers: we mentioned Comcast last year, with their key service, Xfinity, on OpenStack on UCS. Photobucket this year; the press release and blog are out this week. Photobucket has 3 billion photos and 100 million users, and they're heavily using OpenStack on Cisco UCS with the Nexus plugins. And then at the previous summit, the Folsom summit, who here was at the Folsom summit? There we had WebEx talking a lot about how they run their software as a service on OpenStack on UCS. So now, we'll turn it over to Ashok for a deep dive on compute and storage with UCS.

All right, thank you, Duane. So, about four years or so back, we launched our first x86 server business, and prior to that our pedigree was primarily networking. How many of you are familiar with UCS and Cisco compute? Okay, quite a few of you. That's good. One of the things we did when we launched UCS, just as this whole concept of application-defined everything was becoming popular, was to build this construct for compute: we abstracted all the state information out of the compute platform and built a controller, so to speak, which would manage a large scale-out environment. Fast forward three or four years, and that's become the de facto standard in the marketplace, where everybody's abstracting state information out of either a switch or a server or storage, and it's become application-defined something, whether it's storage or networking or compute. So this was kind of the precursor to application-defined computing, and that was our entry into the server market.

And this is just for those of you who want to get familiar with the business: we are number two in the blade server business today, and we are one of the leading server vendors in the market. In four years we've built a $2 billion-plus business, we have a large number of Fortune 500 customers, and our scale has definitely expanded. We lead a number of benchmarks in the marketplace, both virtualized and bare-metal, so we are definitely one of the leading players in this market. But I think our key differentiation is that, like I said, we're not just another server vendor; we actually built a computing platform which abstracts state information from the server. We have a management paradigm which talks to all these compute nodes. These are the various compute portfolios we have: various rack server and blade server options. But the key premise of this portfolio is that we don't have to go touch any of these servers; there is a controller which talks to each of the server nodes. That's made it very easy for us to work with Nova, and very easy to work with Puppet scripts and the like to provision this environment. I'll talk about some of the tools we've built to deploy a large-scale cluster, whether it's a few hundred nodes or a few thousand nodes, in really minutes. Like I said, the fundamental paradigm we have is this notion of a management plane, which is actually embedded on the top-of-rack switch and which manages tens to hundreds of server nodes.
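That embedded management plane is also what the automation talks to. Here is a minimal sketch of querying it programmatically, assuming the ucsmsdk Python library (one of Cisco's UCS Manager SDKs; the scripts mentioned later may use a different or earlier SDK), with a placeholder address and credentials and property names taken from the ucsmsdk object model:

```python
from ucsmsdk.ucshandle import UcsHandle

# Log in to the UCS Manager management plane (address and credentials are placeholders).
handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

# One query against the management plane returns every blade in the domain;
# there is no need to touch the individual servers.
for blade in handle.query_classid("computeBlade"):
    print(blade.dn, blade.num_of_cpus, blade.total_memory, blade.serial)

handle.logout()
```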
The key point is that we abstract all the state information into an infrastructure template called a service profile. The idea is that in the service profile you define what you want out of the server: I want this much compute power, this much memory, I want a certain number of networking ports, I want this kind of QoS on my networking ports, and so on. You push that configuration down to the server nodes, the server nodes assume that personality, so to speak, and you're able to bring up CentOS or whatever it is you want on top of this environment. So it's an extremely scalable, stateless compute environment, which is a very nice architecture for service providers and large-scale cloud offerings.

I was talking about the service profile. What happens is that in the service profile you define everything. You don't have to go figure out IP addresses or WWNs or MAC addresses individually for each of the servers; all you do is define the block of what is required for a large-scale cluster, and we automatically pick up all the addressing information from that block. We automatically define the state information. If you want to replicate that state information across multiple nodes, it's a simple click: there's something called a template that you create for a node. Let's say in an OpenStack environment you have a specific compute node definition; you can replicate that compute node definition across multiple nodes and deploy them in a very rapid fashion. If you have a storage node definition, you can again copy that definition across multiple nodes and deploy it in a very seamless fashion (there's a small sketch of this pool-and-template idea below). And we've automated all of this for an OpenStack deployment; I'll show you some of the scripts a bit later.

The other piece is scale. When we're talking about cloud, the typical notion is tens to hundreds of nodes. So we have something called UCS Manager, which, like I said, is embedded on the top-of-rack switch and manages about 160 nodes; that's kind of our domain of a cluster, so to speak. And then we have another piece of software called UCS Central, which is a VM that can reside anywhere inside or outside the cluster and can manage up to 10,000 nodes. So again, a very simple single pane of glass to manage a very large cluster environment. You can have various kinds of connectivity models, whether it's L2 or L3. The key advantage is that it can manage domains across multiple data centers: physically they could be located within the same four walls, or they could be spread geographically. As long as there's IP connectivity, you can have a common view of all of this environment. So it's a very powerful environment, especially for large customers and service providers who need a simplified view, who need one IT admin provisioning all of this environment. That's a very, very powerful story.

So for OpenStack, what we've done is make it easy to consume. We have typically seen a couple of use cases: compute-intensive workloads, mixed-use workloads, and storage-intensive workloads. We have the definitions, the service profile templates and the like, based on the kinds of use cases we've seen our customers run, and we have bundles, or accelerator packs, which are predefined configurations. So a customer can deploy either a half-rack configuration or a full-rack configuration as needed, to get off the ground in a very fast fashion; this is kind of an ideal approach. We also have automation scripts which will deploy all of this in a very rapid fashion, and that's another key advantage we have.
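That pool-and-template idea, identities drawn from a block and one profile stamped out per node, can be pictured with a small, purely illustrative sketch in plain Python; it is not the UCS Manager API, and every name in it is hypothetical:

```python
import itertools

def mac_pool(prefix="00:25:B5:00:00:"):
    """Yield MAC addresses from a contiguous block, the way an identity pool would."""
    for i in itertools.count():
        yield prefix + format(i % 256, "02X")

# One template describing the role; identities are filled in per node from the pool.
compute_template = {"role": "compute", "vnics": ["eth0", "eth1"], "boot": "local-disk"}

macs = mac_pool()
profiles = []
for n in range(3):  # stamp out three compute-node profiles from the one template
    profiles.append(dict(compute_template,
                         name=f"openstack-compute-{n}",
                         macs={vnic: next(macs) for vnic in compute_template["vnics"]}))

for p in profiles:
    print(p["name"], p["macs"])
```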
So this just shows, in our high-density configuration, what our starter configuration is: two control nodes, two compute nodes, and storage nodes. Again, it depends on what the specific customer requirements are, but this is just to get these services started on specific nodes, to get a customer deploying OpenStack in a very fast fashion, running some of these services and getting started. That's the primary goal of these accelerator packs.

So let me skip through these and spend some time on the automation. One of the key requirements we heard from customers is that they want to deploy a few hundred nodes, or a few tens of nodes, and typically the configuration and services required for each of these nodes are slightly different, because you might want to run a few controllers, say three controllers, plus a handful of compute nodes and a handful of storage nodes, and you might need some Ceph nodes and some Swift nodes and the like. So what we've done is let you create a template of what your requirements are. You could say that a two-socket server with 128 GB of memory and two spindles is going to be my compute node; a two-socket server with 24 spindles and 256 GB of memory is going to be my storage node, my Ceph node; and maybe another variant is going to be my Swift node and gateway. You can define those up front, and that's the only touch you have to do. We have a Python SDK environment where you can script it, or we're building a GUI if you want to go touch it through a GUI, but the idea is we have a scripting process. There are a few credentials you have to give, because you're going to talk to UCS Manager, so you give login credentials and the like, and after that the whole process is completely automated.

We go provision the servers. The first step is that UCS Manager discovers all the servers: based on what gets connected to the top-of-rack switch, it will automatically discover all the servers, so UCS Manager will figure out, "I have five two-socket servers with such-and-such configuration connected." Based on that, it will slap in the appropriate service profile, because these are my storage nodes and these are my compute nodes and so on. From there, we use a combination of Puppet and Cobbler to get the whole process done: we register the nodes into the Cobbler database, and then we have an event listener that listens for all the nodes. If there are new nodes getting added, that can also happen dynamically at any point in the process. We do the Puppet apply, add the systems into the OpenStack environment, and then we PXE boot, depending on what operating system you want, whether it's Red Hat or CentOS or whatever it is; this is an example we took for that operating system installation. We sync all the plugins, and if there are any additional features or scripts you want to add, that's another capability we can add in as well. So this is a very scalable framework: if a customer has additional capabilities they want to turn on, that's something they can explore as well. And finally, everything is registered and we hand it over to OpenStack.
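The discovery-to-role step in that workflow is easy to picture with a short sketch. This is purely illustrative plain Python, not the actual installer logic; the thresholds, role names, and inventory data are hypothetical:

```python
def assign_role(server):
    """Pick an OpenStack role for a discovered server based on its inventory."""
    if server["disks"] >= 12:
        return "ceph-storage"        # lots of spindles -> storage node
    if server["memory_gb"] >= 128:
        return "compute"
    return "controller"

# Shapes loosely modeled on the template examples above.
discovered = [
    {"name": "blade-1", "sockets": 2, "memory_gb": 128, "disks": 2},
    {"name": "blade-2", "sockets": 2, "memory_gb": 64,  "disks": 2},
    {"name": "rack-1",  "sockets": 2, "memory_gb": 256, "disks": 24},
]

roles = {s["name"]: assign_role(s) for s in discovered}
print(roles)  # this mapping then drives Cobbler registration, PXE boot, and the Puppet apply
```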
There's an inventory of all the nodes in the controller, and then it's ready for VM provisioning: you can go to your Horizon dashboard and provision whatever you want. So the basic idea is that it's a framework we've put together, and it's extensible. This is kind of a starter kit, so to speak; obviously customers want some customization, they want to add additional capabilities and the like, and that's definitely possible with this framework. And at Cisco we have something called CDN, our developer network. We've posted all these scripts to the developer network, so feel free to take advantage of them, leverage them, and extend them as you see fit. But what we do want to show is the power of the automation framework we have with UCS Manager. Since we have the complete inventory of all the hardware, we can tweak a lot of that. And I think Duane pointed out the scheduler earlier: a lot of the innovation we're adding on top of our environment for OpenStack is making workload placement aware of the specific hardware layout, because we have complete visibility of the network and compute platform, so we can definitely add that piece of innovation.

This is just basically talking about Nova, so I won't spend too much time on it, but one of the key pieces we've seen is that, the way it's structured today, the atomic unit is a virtual machine. And there are a lot of customers who want both bare-metal and virtual machine environments. So what we're trying to add to the scheduling policy, and to our capabilities, is how to provision bare metal as well, because that's a requirement in the marketplace, especially for Hadoop workloads and the like, where they still want bare-metal servers. So one of the pieces we're looking at is how we would expand the existing framework. I've abstracted the existing framework here; it's definitely a lot more complex than this. But the idea is: how can we extend the existing framework and make sure you can configure bare-metal servers as well? This is an example of what we've submitted from a blueprint perspective, and the idea is that this is a consistent framework for both virtual machines and physical machines, and then we would configure the environment based on the request that we get.
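For context, once the cluster is handed over, provisioning an instance is an ordinary Nova request, and the extended scheduler described above would have to satisfy the same kind of request with a bare-metal node. A minimal sketch with the python-novaclient of that era; credentials, endpoint, image, and flavor names are placeholders, and authentication details vary by release:

```python
from novaclient.v1_1 import client

# Placeholder credentials and Keystone endpoint.
nova = client.Client("demo", "secret", "demo-tenant",
                     "http://controller:5000/v2.0")

flavor = nova.flavors.find(name="m1.small")
image = nova.images.find(name="ubuntu-12.04")

# The scheduler picks a compute node; an extended scheduler could answer the
# same kind of request with a bare-metal node instead of a VM.
server = nova.servers.create(name="web-01", image=image, flavor=flavor)
print(server.id, server.status)
```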
So let me wrap this up; next is networking. Just before I hand it over to Duane: if you have any questions on UCS, the framework we have, the management plane we have, I'll be glad to take them. No? Okay. Thank you. I'll hand it over to you.

Thank you. So now we've learned how to deploy OpenStack compute and storage with UCS, and we'll do a deep dive on deploying OpenStack networking with Cisco Nexus. Nexus is the de facto data center standard: number one in data center Ethernet switching, number one market share in Fibre Channel over Ethernet switching, over 40,000 customers, over 11 million 10 Gigabit Ethernet ports shipped. We're leveraging Nexus for OpenStack. Just as there's a UCS model for whatever your application is, there's a Nexus for whatever your application needs are, too. For OpenStack, we tend to focus on the top-of-rack switches: the Nexus 3000, one RU, and the Nexus 5000 and 6000, one to two RU. The Nexus 5000, for example, is part of a FlexPod or a Vblock.

We also talk about software-defined networking with the Nexus 1000V, the software version of Nexus. So Cisco has Quantum, or Neutron, plugins for the Nexus product family: whatever you need, hardware-based or software-based, we have a solution with Nexus. We've also contributed plugin diagrams showing how the Cisco Nexus plugin works and can be designed. Here you see in the middle the top-of-rack switch, the 3000 or the 5000 or the 6000, connected to an aggregation switch like the Cisco Nexus 7000, for example, with the compute nodes attached to the top-of-rack switch. All this information is on DocWiki, and we also have one of the authors, Shannon, in the back there.

There are many benefits to using the Cisco Nexus plugin for Quantum or Neutron networking. One is automated VLAN provisioning. Who here has kept a spreadsheet of VLANs and managed VLANs that way? It's kind of a long way to spend the day at times, isn't it? It should be automated: just like we're automating VMs, compute, and storage, we need to automate the VLAN management also (there's a small sketch of this idea below). Also, for scalability, you can use your network switch as a layer 3 gateway. Rather than using a generic Linux server with iptables for networking, you can use something designed for high-scalability networking: you map the Nexus SVI, the switch virtual interface, to the tenant, and your Nexus top-of-rack switch becomes your layer 3 gateway. Also, high availability: with vPC, virtual port channel, you can have multiple connections to multiple Nexus switches for high availability. And you get to choose: do you want the performance of a hardware-based solution, the flexibility of software, or a combination of the two? Whatever your technology and application needs, there's a solution within the Nexus hardware- and software-based family.

So, the Nexus switch as a layer 3 gateway: this allows you to turn off the layer 3 agent, so you're removing that network node bottleneck of a generic Linux server with iptables. You can do overlay technologies; you can do GRE or VXLAN. Very high scalability for networking with the Nexus switch as a layer 3 gateway. The Nexus 1000V has many advantages also. One advantage is that it has vPath service chaining today, so while the firewall, VPN, and load balancing as a service standards are being defined and plugins are being created, you can start using service chaining with the Nexus 1000V. It also has very nice VXLAN overlay functionality, so you can extend the VLAN from the data center to areas outside of the data center. The CSR, the Cloud Services Router 1000V: think of this as the IOS XE software from an ASR 1000 in a VM. Now you can spin up as many copies as you want, one per customer, for billable services. You can put it on the network node, or you can put it on multiple compute nodes. So think about the innovation there: thousands of engineers over decades, and all that networking technology, EIGRP, BGP, syslog, NetFlow, you now have integrated into OpenStack as a VM.

Another way that we're helping to deploy compute, storage, and networking is with CVDs, Cisco Validated Designs. Here's a CVD that we did with Red Hat. Some of the components we talked about earlier: UCSM, UCS Manager; the Fabric Interconnect, the 6248, with 48 unified ports; the fabric extenders, the UCS 2200 series, which connect the blade servers to the fabric; the UCS C220 M3, excellent for compute-intensive work; the UCS C240 M3, good for both compute- and storage-intensive resources; also the VIC, the virtual interface card, which has functionality such as VM-FEX.
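Here is the small sketch of the automated VLAN provisioning idea mentioned above: purely illustrative bookkeeping in plain Python, not the actual Neutron Nexus plugin (which additionally pushes the resulting configuration to the switch); every name in it is hypothetical.

```python
from collections import defaultdict

class VlanBookkeeper:
    """Track which tenant VLANs must be trunked on which switch ports."""

    def __init__(self):
        self.port_vlans = defaultdict(set)   # switch port -> VLANs trunked on it

    def instance_up(self, port, vlan):
        """A VM on this tenant VLAN landed on a host behind this switch port."""
        if vlan not in self.port_vlans[port]:
            self.port_vlans[port].add(vlan)
            print(f"trunk VLAN {vlan} on {port}")    # the plugin would configure the Nexus here

    def instance_down(self, port, vlan):
        """The last VM for this VLAN left the host: prune the VLAN from the port."""
        self.port_vlans[port].discard(vlan)
        print(f"remove VLAN {vlan} from {port}")

bk = VlanBookkeeper()
bk.instance_up("Ethernet1/1", 201)    # first tenant VM behind this port: trunk the VLAN
bk.instance_up("Ethernet1/1", 201)    # second VM on the same VLAN: nothing to do
bk.instance_down("Ethernet1/1", 201)  # last VM gone: prune it
```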
In addition to scaling with Red Hat, we have resources to help with scaling on Ubuntu also. We have the Cisco OpenStack installer, which, as we can see here, pulls the community edition down from GitHub and makes it very easy to get OpenStack on Cisco UCS with the Nexus plugins installed and scaled out throughout your network.

We're also innovating with high availability. The high availability option listed here on DocWiki is a superset of the high availability in the OpenStack reference architecture today. This reference architecture provides active/active operation for all the major OpenStack components. It uses technologies such as Galera Cluster for MySQL, HAProxy, and Keepalived. With this mechanism, we use 13 nodes, all of which are UCS C240 M3s: three controller nodes, three compute nodes, two load balancer nodes, two Swift proxy nodes, and three Swift storage nodes. So a great way to scale your deployment, compute, storage, and networking, is to use some of these high availability designs, based on lessons learned from other architectures and other customer environments.

We're also introducing new Advanced Services. Just like advanced services for other cloud technologies, these are services to help you get running with OpenStack. There's assessment: is my network right for OpenStack? What are the first applications I should deploy on OpenStack? Validation: predefined test documents and diagrams, good for taking pilots to production. Design and deployment: how do you scale out from that initial deployment to multiple deployments? And optimization: how do you apply best practices, and how do you continue to reduce the OPEX of your OpenStack cluster? We are providing the strategy, assessment, and validation services today, with design, deployment, and optimization later this year. We're also innovating in other areas, such as spine-and-leaf technologies. Who here was at Cisco Live this past year? There was a great demo of DFA on OpenStack. Spine-and-leaf technology is a great way to reduce latency and gain increased functionality.

So, in conclusion, Cisco offers a complete compute, storage, and networking solution for OpenStack. We have large-scale customer deployments today, in excess of 1,000 nodes. We offer advanced and technical services to help you accelerate from OpenStack pilots to OpenStack production, and we'd love to apply some of these best practices and lessons learned to your environments as well. Please let us know how we can help; send us an email at openstack-support. More information can be found at Cisco.com/go/openstack. Ashok and I will be around later to answer any questions. On behalf of Cisco and the UCS, Nexus, and OpenStack teams, we thank you so much for your interest and support in Cisco and OpenStack. We hope you have a great remainder of the summit. Thank you.