Okay. Thank you all for coming. I'm going to take just a quick second to introduce our speaker. I also wanted to remind everybody who's just come into session four that we're having a little giveaway at the end: an iPad mini 3, which is certainly worth your time to fill out a little card. With that being said, I'm going to turn the stage over to Mike Cohen, our director of product management, who's going to be giving his talk on ACI. And Mike, with that, it's all yours. Great. Thanks a lot. So I'm Mike Cohen, director of product management inside the Cisco ACI group. I'm joined by Clayton Weiss, director of cloud services at Key Information Systems, who's going to be joining me in the presentation and talking about how he's using some of the technology we're developing as part of an OpenStack deployment. First, I want to lay the foundation for some of the genesis of how Cisco ACI fits into OpenStack. Ultimately, when we start talking to our users about OpenStack, this comes down to running applications. At the end of the day, cloud environments are not about managing infrastructure. They're about building and deploying applications in a way that's scalable, fast, and easy to use. But the reality is, if we look across the cloud environments we see today, the way they're managed in terms of automation is highly suboptimal. Automation to date has essentially been confused with micromanagement. All the things we used to type into our CLIs, our consoles, or our scripts, we've essentially built into our modern automation tools. We're thinking about things in a very imperative, top-down manner, where we codify the things we used to type into our consoles. You can almost think of this as a kind of human middleware. But the reality is we need to think differently about the world today.
And I think we're learning this as we see things like Docker and microservices becoming more prominent in the OpenStack environment. So the world we're hearing people look for is one built around cloud applications and cloud services. And really, that world is one where you have very simple broadcast or multicast requirements, essentially fault-tolerant independent services, usually managed by DNS rather than by VLAN or subnet, and loosely coupled tiers. To date, what we've been able to deliver in OpenStack environments is actually much more focused around traditional networking. We offer virtual networks, virtual subnets, and routers; we use existing network models, but we've made them virtual. And we don't have any concept of application views, or of dependencies between the pieces of an application. Again, this is a suboptimal way of helping people design networks and build things that map to the way they're thinking about cloud services. What we really need to do is change our paradigm and start thinking about modeling intent. So what do I mean when I talk about intent? Well, we need to start modeling applications and their dependencies using an abstract language that can help us deploy those applications. A base level of that may be modeling compute, images, scaling properties, and requirements that might apply at boot or at install time, and those may be grouped over a number of VMs that have the same set of properties. And ultimately, that needs to be mapped back to how you consume the rest of the infrastructure: storage, compute, and placement across a cloud environment. These are all things that can be described in abstract policy terms. Then I need to understand that this unit of compute has dependencies: it interacts with something else in the system. I need to understand that that dependency exists, and that there's another block similar to this one with similar qualities. And then I need to capture a notion of an API.
If we think about these cloud services or cloud service architectures, they expose simple APIs for tiers of an application, and other apps consume those APIs. We need to model that behavior directly, and actually start capturing these provider-and-consumer relationships directly in the applications. As we do that, the world starts looking a lot simpler. Things like network connectivity and security start simplifying down, and we'll see that those things are now implicit in the design of the architecture. If you describe your application this way, it's very easy for someone managing a network to understand the security requirements you have, because you've essentially told me what they were in building this model. It's also easier if you're working on the application side, because now you no longer need to think about all the low-level constructs. You can build a template for your application and know that it can be deployed in many different scenarios. So the key to doing this properly is doing it in a way that's sufficiently abstract from the underlying infrastructure, so you can deploy in multiple scenarios and on multiple kinds of hardware, portable across different clouds, and self-contained, so the environment completely describes the application. One of the ways we've been driving this forward and helping OpenStack move towards this model is a project called group-based policy. Group-based policy is 100% open source, Apache-licensed, and designed to work with OpenStack; it was released originally with Juno and is also available on top of Kilo. It's a project designed for capturing application intent. We started by focusing on network intent, but our broader vision, as you saw on the previous slide, is to give you a set of tools that let you capture requirements across compute, storage, and networking, and actually achieve this application modeling behavior that you can efficiently deploy across your infrastructure.
It's built by a community of developers; Cisco is a major contributor, but by far not the only one in the community building these tools. The idea behind group-based policy, and one of the core primitives we offer, is this idea of a group. You could think of a group as a set of Neutron ports, or a set of network endpoints, that all need to be treated the same way. Then we tie these groups together with what we call policy rule sets. These rule sets are different from the security groups you've seen in OpenStack today. They're not tied to IP addresses; they're not tied to any specific domain. They're completely domain-agnostic sets of requirements that describe the APIs of groups and how they connect together. So by doing this, we end up with a completely portable, independent way of describing the tiers of an application and how they fit together in a cloud application model. Also, critically, a piece we added to this was the notion of network services and network service requirements. We wanted to make it possible, one, to describe network service chains, and two, to make it easy to talk about how those chains need to be deployed in the infrastructure without asking anyone to understand the underlying networking behavior. So logically, the way this works in group-based policy is very easy. We allow you to compose chains built out of different logical instances of devices and insert those chains between two different groups. How that underlying plumbing works from a network perspective is not the end user's problem, and honestly, they don't need to understand exactly how it works. That's delegated down to an underlying system that can implement the chain and enforce the traffic steering.
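The group and rule-set primitives just described can be sketched as a toy model. This is purely illustrative Python (every class and field name here is invented for the sketch, not the actual GBP API), but it captures the provider/consumer idea: traffic between two groups is allowed only when the destination group provides a rule set that the source group consumes.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyRuleSet:
    # A domain-agnostic contract: which protocols/ports one group
    # exposes to another. Not tied to IPs, VLANs, or subnets.
    name: str
    rules: list  # e.g. [("tcp", 80), ("tcp", 443)]

@dataclass
class Group:
    # A set of endpoints (think: Neutron ports) treated the same way.
    name: str
    provides: list = field(default_factory=list)  # rule sets this group offers
    consumes: list = field(default_factory=list)  # rule sets this group uses

web_api = PolicyRuleSet("web-api", [("tcp", 80), ("tcp", 443)])
db_api = PolicyRuleSet("db-api", [("tcp", 3306)])

web = Group("web", provides=[web_api], consumes=[db_api])
db = Group("db", provides=[db_api])

def allowed(src: Group, dst: Group, proto: str, port: int) -> bool:
    """Traffic is permitted only if dst provides a rule set that src consumes."""
    for rs in dst.provides:
        if rs in src.consumes and (proto, port) in rs.rules:
            return True
    return False

print(allowed(web, db, "tcp", 3306))  # True: web consumes db-api
print(allowed(db, web, "tcp", 80))    # False: db consumes nothing from web
```

Note that the model says nothing about addresses or placement; that is the point. The same two groups and one rule set could be rendered as security rules, fabric contracts, or host-level filters by whatever backend enforces them.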
So our goal, again, with this project was to make it very simple to describe high-level policies that span layer two, layer three, and even layers four through seven, and to allow those things to be portable and scalable across different environments. Now, one of the primary ways we see this being consumed, and one of the interesting ways we're promoting it at Cisco, is with our application-centric infrastructure. Cisco ACI is a solution that brings together the APIC controller with a network fabric built out of Nexus 9000 switches. The fabric itself is a leaf-spine topology: 40-gig networking, a high-performance, non-blocking, extremely scalable network design. On top of that, with the APIC controller, we add a policy layer. We make it possible to describe the policy you need completely independently of the fabric itself. And then what ACI handles for you is taking this policy, which could be authored via group-based policy, directly through the APIC's REST APIs, or through any other means you choose, and efficiently disseminating it into the ACI fabric. We'll make sure the policy goes where you need it, when you need it, so you have proper security enforcement across the entire environment. What you end up with is an extremely scalable solution that can act as the back end for an OpenStack cloud. And since we launched the APIC, we've been doing a lot of work in the OpenStack community tying these things together. We've been building plugins for OpenStack, ones that work through standard ML2 drivers as well as via group-based policy, to give you the option of how you want to consume and interact with an OpenStack cloud. And also additional plugins that can run on the hypervisor side, on top of a standard, unmodified Open vSwitch. This is where our OpFlex component comes in.
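As a rough idea of what "consuming this through standard ML2 drivers" looks like operationally, here is a hypothetical fragment of a Neutron `ml2_conf.ini`. Treat every value below as a placeholder: the actual mechanism driver identifier and supported type drivers depend on the specific Cisco plugin package and release, so check that plugin's documentation rather than copying these names.

```ini
[ml2]
type_drivers = vlan,vxlan
tenant_network_types = vxlan
# Placeholder name; the real APIC/ACI mechanism driver identifier
# comes from the installed Cisco ML2 plugin package.
mechanism_drivers = cisco_apic_ml2,openvswitch
```

The design point is that Neutron API calls stay unchanged; the mechanism driver translates them into APIC policy behind the scenes.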
OpFlex actually allows us to extend our policy enforcement directly down into the hypervisor, in addition to the network fabric itself. Again, this gives you an end-to-end, highly efficient, scalable policy domain across the entire fabric. Another critical element I need to touch on as we look at ACI in OpenStack is telemetry and operations. For anyone that's deployed and operated an OpenStack cloud, one of the critical things you'll run into is how to track issues happening within the virtual networks your tenants may be seeing and map them to issues that could be happening inside the physical environment. ACI was purpose-built to handle this use case. It can act as an operations console for OpenStack, showing you the locations of all the OpenStack VMs, showing you faults in the physical domain, and mapping them all the way back into the virtual network environment. So if you get a call from a tenant saying a virtual network has lost connectivity, we can give you an immediate chain of faults by which you can resolve that issue. It's that kind of solution that makes ACI one of the perfect backends for an OpenStack environment. I think with that background, I want to bring Clayton up here to talk about how he's thinking about ACI and about OpenStack as well. Hello, everyone. You want the mic? Okay, good. My name is Clayton Weiss. I am the director of cloud services at Key Information Systems. Key Info is a regional service provider based out of Southern California. They actually acquired a company I used to work for, and they were also a business partner of ours, so we kind of split the balance of doing equipment resale and private deployments of infrastructure and equipment and the management tools that go along with them, as well as running our own data centers and having our own environment.
Our typical client is mid-market to large enterprise, and for anyone that's tried to deploy OpenStack for that type of client base, there are a number of challenges that go along with it. So what I'm going to discuss here is some of the challenges we've faced, some of the things we've run into and our experiences with them, and then the areas where we think ACI has added value for us and, moving forward, is the great technology choice that's going to let us separate ourselves among the sea of different cloud providers. Because in the grand scheme of things, right, who is Key Info compared to AWS or Rackspace or SoftLayer or one of these other players? There's got to be something we have that's kind of a niche, and part of it is the local area and the types of clients that we service. And ACI is one of the tools that empowers us to provide that. Excuse me. So there are a couple of architecture decisions you make designing a cloud environment, right? The way we do it now, it's all horizontally scaling workloads. You know, the Amazon model is: if your application wasn't designed to withstand failure of some component, then your application is wrong; you design it correctly. The problem with that is a lot of enterprises have vertically scaling applications, right? They just keep packing more stuff on there. If my database server isn't powerful enough, I don't add another ten database servers; I add more processors, I add more memory, I add more I/O. This is a problem we face in trying to shove this in, right? There's a concept of ephemeral workloads inside of OpenStack, right? Because in the OpenStack world, everything's cattle, right? All things are the same. If you have a problem, shoot it in the head and start it up somewhere else. Up until, what, a year and a half ago, there was no concept of vMotion, right?
The ability to migrate a VM from one host to another, because why would you ever do that with OpenStack? In an enterprise world, right, everything's pets. So every little VM, everything about it, is unique. There's no concept of ephemeral storage. I've had this conversation with CIOs plenty of times, and they said, what do you mean, when I shut down my VM, everything goes away? And you go, yeah, everything goes away. It's designed so that once a workload has served its purpose, you dump it and you build another one. They live in a totally different world. So these are some of the challenges, right? Not everything is designed to be ephemeral. For example, when was the last time you shut off your database? Probably never. So these are some of the issues you run into as an enterprise when you're trying to adopt OpenStack or make use of a cloud environment, right? There are challenges both on the compute and architecture side as well as on the networking side. So before Neutron, right, we had nova-network, and nova-network treated the network as an afterthought. Networks were big and flat, everything was compute, and that's what it was all about. There was no concept of client or tenant segregation. Security was all done through security groups or by running iptables on a box. There wasn't really a good way to handle that and really segregate things out. And so for an enterprise trying to adopt a cloud solution, it didn't really work out. What you had was purely public-facing workloads going into cloud environments; anything else was completely private. It was the only way they could really gain control. So at scale, right, we took the SDN route and said, here's how we solve this problem as a cloud provider. For anyone that's familiar with VXLANs, I've always summed them up as the turducken of networking: it's networks inside of networks, right, layer two and layer three. So this is how we solved that challenge.
Great idea, right? NVGRE and its early implementations were a very cool concept. The issue was there was a performance penalty for doing it, because now what you've done is taken all of these years of data processing and packet handling, everything we had built into ASICs to perform at line rate on a switch, and pushed it into the hypervisor. The advantage is we have a lot of control now, right, complete control over what happens, lots of flexibility, because hey, it's software, and anything is possible in the software world. The disadvantage is you pay a performance penalty. In the early days of NVGRE we would see a 40% loss in terms of capability; on a 10-gig link, you could push maybe six gigs, right? So there was a big performance penalty for doing that. So we think it's much better to have hardware and software working together, right? And the key to being able to do that is having that same software capability and flexibility at the hardware level. That's one of the things Nexus has done on the 9K and on the ACI side: taking that capability and rolling it into hardware, so your hypervisor is no longer processing all of this. So it's a question of where that workload is best served, right? Is the networking best done on the network devices and the switches, or is it best done in the hypervisor? And there are times when it's best done on the hypervisor. If you have east-west traffic between two VMs on the same host, guess what, process it on the host; don't push it to the switch, right? But if you have east-west traffic between two hosts in the same data center, having the hosts constantly processing all that ingress and egress traffic doesn't make a lot of sense if you can offload that task to the switch. And the key to doing that is having some link between the two, where they can communicate clearly with each other about, as Mike said, the application intent, right?
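To separate the two costs being described here: the fixed encapsulation header tax of an overlay like VXLAN is small, so a drop from 10 Gb/s to roughly 6 Gb/s in early software overlays was dominated by per-packet tunnel processing burning hypervisor CPU, not by extra bytes on the wire. A back-of-the-envelope sketch (the MTU and header sizes are assumptions for illustration):

```python
# VXLAN-style encapsulation adds roughly 50 bytes of outer headers per frame:
# outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN (8).
OUTER_HEADERS = 14 + 20 + 8 + 8  # = 50 bytes

def header_tax(inner_frame: int = 1450) -> float:
    """Fraction of wire bandwidth spent on encapsulation headers alone."""
    return OUTER_HEADERS / (inner_frame + OUTER_HEADERS)

# The header tax is only a few percent, nowhere near a 40% hit; the rest of
# the observed loss came from doing encap/decap in software for every packet.
print(f"header tax: {header_tax():.1%}")  # → header tax: 3.3%
```

This is also why pushing encapsulation into switch ASICs recovers throughput: the per-packet work moves back to hardware built for line rate, while the small header overhead remains either way.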
So how do the two need to communicate with each other? What is it that bridges that today? This is kind of like an example from the early days, when Cisco was first getting started, for anyone that was around: IPX/SPX, AppleTalk, IP; how do we make all these networks work together? You had a device that did that translation. We have an odd thing in that we have a lot of clients running AS/400s that we do hosting for. They have zero concept of VXLAN; it's an impossible thing to try to introduce them to. But ACI allows us to take that legacy architecture and mix it with a modern-day deployment, and allows them to talk through a policy that we build that says, hey, we want a contract that exists between these two nodes. And it doesn't have to know about the Power system specifically, just the role that it plays. So it's a question of where that's done, and in our experience, in these types of scenarios, it's best done on the switch, in hardware. One of the other problems that comes along with that, right, is: hey, this is all cool, but now you're locked in. Now you've got a vendor. And the answer to that is, actually, not necessarily. So with OpenStack and with GBP, with the policy-based engine that's in it (and there's a lot of work being done, so I don't want to fool anyone into thinking, hey, you roll it out tomorrow and the problem is solved; there's a lot of work in getting this done), if you have a layer of abstraction between your hardware and what you're trying to run, you are no longer locked in. And on the other side of this, too, there's a bit of a misconception here: once you buy gear, you're locked into the gear anyway, even if it's only for a limited time. If I go out and buy a bunch of UCS servers from Cisco, or I buy a bunch of boxes from HP, or I get a bunch of Supermicro and just throw them in a rack, I'm locked into that for that period of time.
The idea is that I want to be able to shift vendors at any point, when it makes sense and when it's convenient. And doing things through policies, while it takes more work in the initial setup, gives you more freedom in the end to shift around based on whatever it is you need to do. Right now, for us, we think Cisco has the best hardware solution. I think Juniper and Contrail and some of those things are very cool, but I think the way Cisco has done their processing, and the way they've handled a lot of the offload within the switch, just makes sense. All right, so policies really solve a lot of these problems, right? You build a policy within OpenStack in a way that's supported within that framework, and you push that policy out. Now you're not locked into any particular vendor or any particular solution. So the idea here is that you're opening yourself up to additional freedom by doing it on a policy basis. We have a mixed environment, like I mentioned before: we have the IBM Power Systems, right? We have VMware. We're going to be rolling out KVM. We're going to have a mixed environment, and we're going to have to support that. Policies are really the only good way to do that, right? Otherwise we're going to be managing lots of individual siloed environments, and it's going to be impossible at any kind of scale. So with that, any questions? Yes? Hang on, we've got a mic coming. So I have a question for Mike. You mentioned in one of the slides that the fabric provides advanced services. I was wondering whether you could elaborate on that. Sure. So inherent in the fabric is the ability to do stateless ACL behavior. Directly as a fabric capability, we can do stateless enforcement of policy rules. When you want to handle stateful policies, that's when the fabric can redirect traffic to a network service device that can deliver that kind of capability.
And we're actually working with a number of ecosystem partners that have solutions in that space as well. Any other questions? Just throw it really hard now. So the question is related to North-South NATed traffic. Where do you realize that? Do you offload that to the fabric as well? So the answer for North-South traffic kind of depends on how you classify it. Say a VM that sits inside a private network doesn't have a floating IP; how does it access the public network? Yeah, go for it. Okay, I just want to make sure I understand the question: is the functionality offloaded from Neutron or Open vSwitch? So there are a couple of different ways we do it today. The hardware fabric itself does not support NAT as a feature today, so we handle this in a couple of different ways. One option today is to allow people to keep using the Neutron router, in which case it would be handling the NAT function the way you would in standard Neutron. The other thing we'll be doing in our next ACI release is rolling out OpFlex agent support. OpFlex is an open-source component; it runs on top of an unmodified Open vSwitch. We'll be using that agent to insert NAT policies into the host, and we'll be doing NAT rewrites natively through the policy, but managed via ACI, via OpFlex as well. And will OpFlex be like DVR, like a distributed virtual router, or will that be HA, or... So yes, we'll be delivering that as a distributed capability. The ACI fabric already has a distributed anycast gateway built into the hardware in the leaf switches. Again, this is one of the things that attracts a lot of people to the platform: you're no longer running a centralized Neutron node or even a pure software router. We actually have a full hardware routing capability in every switch.
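For reference, the "keep using the Neutron router for NAT" option corresponds to the usual Neutron L3 workflow. A sketch with the Kilo-era neutron CLI follows; the network, subnet, and ID values are placeholders, so this is a shape, not a copy-paste recipe.

```shell
# Create a router, attach the tenant subnet, and set the external gateway;
# the Neutron L3 agent then performs SNAT for the tenant network.
neutron router-create ext-router
neutron router-interface-add ext-router tenant-subnet
neutron router-gateway-set ext-router public

# Per-VM NAT (a floating IP) maps a public address onto the VM's port.
neutron floatingip-create public
neutron floatingip-associate FLOATINGIP_ID VM_PORT_ID
```

In the classic setup all of this runs in software on a network node, which is exactly the centralization the distributed OpFlex/anycast approach described here is meant to avoid.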
What we'll be doing with OpFlex is also allowing that capability to be present on the hypervisor, and that's where the NAT function occurs. So I just want to expand on that a little bit, specifically with our implementation, and answer the question in two forms. There's North-South traffic that stays within the environment, so it just goes up the switch and kind of across, which, depending on how you look at it, could be classified either way; that's why I was asking. Anycast, I think, is probably one of the more graceful solutions to that problem. The DVR is kind of a cool option, but for me there's some complexity required to pull it off. Anycast, once you do it, means that address lives everywhere, and there's a great advantage to that: from an availability standpoint you don't have to worry about VRRP and heartbeats and all these things having to find each other, right? Everything is just everywhere, all at once. So again, in order to do that processing, it's usually best done at the device level, where you can manage that complexity.
In terms of North-South, what's handling it? In our case we're a mix, so we're doing Neutron for a lot of that. But with some of the limitations of the firewall and load-balancing-as-a-service that's currently in place, even in the latest iterations, we've had to get around it by adding routes within Neutron to force a different path. For example, for VPN termination we have VPN concentrators, because the VPN support inside of OpenStack is a little wonky, and a lot of our clients do have VPNs; it's part of how we can act as an extension of their enterprise network and their enterprise compute, right, and bring it back in. On a totally unrelated topic, we are investigating Intercloud Fabric and using that as another way to help bridge that gap, but we haven't crossed that bridge yet. So thank you. Absolutely.

More of a less technical question: I totally get this architecture from a service provider, multi-tenant standpoint, but I'm not grasping it from the enterprise or large commercial side. Are you seeing one or two key drivers that are causing enterprise customers to gravitate towards this ACI architecture? Sure, so let me start first, and Clayton, add anything you have. We're seeing a couple of different reasons people are adopting this. One is actually having the hardware support Clayton was speaking about. Just from a reliability standpoint, this is a major win for enterprises; they tend to not want to see their clouds go down, and having hardware-based distributed routers is a much friendlier approach, because it essentially takes the burden of managing that off of them and puts it inside a fabric that is doing it for them as a service. The other big thing we see enterprises really attaching to is the operational visibility. They need to be able to offer an SLA to their users, to understand what's going wrong, and to have a mechanism of troubleshooting that's in their control, one that can actually be managed well across different teams that may be handling networking and compute separately. So with APIC, we give you an operational console: we pull in information from OpenStack, show you what's going on in your virtual networks, and map that directly to its physical instantiation. And that's a really powerful effect for many of these enterprises, who want to be able to solve a problem quickly without calling six different teams and having a painful troubleshooting experience.

I'm just going to expand a little further on that for the clients that we serve as well. To his point, I remember the conversation where I completely broke our CEO's spirit. We were talking about OpenStack deployment and some of the things on the enterprise side, and I explained to him: oh, and by the way, the default option is a Linux box that does all of your networking, and there's only one, and if it goes down, you lose all of layer three. To which the answer, from one of the companies I was talking to, was: well, it's fine, all your layer two services stay up and running. It's like, what cloud application doesn't route, right? What doesn't need the internet if you're hosting it? So the answer is, the network goes down, right? And he just about lost it. So we needed another solution that had that same type of resiliency, because we service that client base. The other thing is, I see a shift occurring, and we see it on our side both as a technology vendor, in terms of the resale side of the business, and also as the provider. There's a resistance we see from directors or C-level people on the enterprise side when we start talking about cloud solutions. They look at it like, you're putting me out of a job, right? You're coming in here, you're going to take away the infrastructure. And the thing is, their role is changing. The role of IT within the enterprise is undergoing a massive shift, because how we consume resources has changed. Virtualization made a huge shift in that, and we're starting to see more of this: you don't look at a server as a server anymore. That's no longer your SQL server, that's no longer your web server; that's just a box. It's got resources, you consume resources to serve an application, and all the users care about is that they click and they get the thing they're going for. So the role of IT within the enterprise is shifting from servicing the infrastructure to servicing the application, and the infrastructure is simply a provider. Tools like OpenStack and ACI make the management of that far easier, especially at scale. This doesn't make sense for an SMB that's got half a dozen boxes, but for an enterprise that has hundreds or thousands of systems, or even more than fifty, right, there's a huge advantage to being able to manage things on a policy basis. It requires a shift in mentality, but in the end it's completely worth it. Thank you. Absolutely.