That's all good, thank you. So they've asked me to introduce Dave. This is Dave Ward, and I'll give you a little bit of his expertise and background. He's worked directly with a number of our larger customers, and has therefore been directly involved in their networks at Cisco. He's a Cisco Fellow, which is to say one of our more senior engineers. He's a senior vice president, which means people actually care what he thinks. And he's our chief technology officer and chief architect for engineering, which means that when an SVP yells "please march up the hill" and everybody starts marching up the hill, he stands over here and says, "no, that hill." Okay, that's the job of a chief technology officer. So, Dave, it's all yours.

Fred. So, as an SVP, Fred pretends to care what I think; he's actually on my team, and all of those things. Thanks, Fred, for the intro. If you came here for a marketing conversation, or to hear all the glories of how Cisco's using OpenStack, you're not going to get that. Instead, I wanted to have a conversation with you not only about how we're using this technology, but about how we see the trajectory of the technology as it relates to trying to deploy, and beyond just trying to deploy, actually deploying, network function virtualization. So I'll talk to you about how we use OpenStack really quickly, and then, frankly, these are my comments and not necessarily Cisco's, my perception, having worked on trying to build products and solutions with OpenStack, of where it is. We'll take a look at what it means if you want to deploy NFV: you have to look at much more than just some APIs; you need to look at multiple layers of that architecture. Then, after I lay out the argument, I'll try to discuss with you the logic behind it, and then some immediate work that's right in front of us to get this done.

So: we're all in on OpenStack.
We're using it in a number of different products, from public clouds to our Intercloud. It's the cornerstone of our NFV solution. What I want to mention here in particular is that all the rage today is talking about network function virtualization, but the reality is that virtualizing and cloudifying anything that's currently running on an appliance is the target we're aiming at. The network side of it, well, this is a Cisco conversation, so we're going to talk about networking, but the applicability of this architecture, of looking at the whole stack, is really towards VxF, where the x can be anything from transcoders to video to content ingest, and on and on. The networking side is interesting right now, but the architecture is much larger than that. Really, the piece I want to mention is that OpenStack is one of the tools that we use, and I'll discuss the larger set of tools we use to build the solution and create the offer.

So what are some of these challenges? OpenStack is everything: it's a floor wax, it's a dessert topping. It's working on everything imaginable to do with cloudifying something. What are the concerns? Look, we all know this, and I realize we're at an OpenStack conference, and I started off by saying we're all in on OpenStack, but let's also face it: there's a ton of work to do. There's constant API churn.
There are performance-at-scale issues. There are synchronization and failure problems with the distributed architecture of how Neutron works (we'll focus on networking today), which leads to some HA issues, as well as questions of how to do driver and component design. And the thing to note is that the goal of the architecture I'm putting forward is to get the right expertise into the appropriate niche. The question we want to ask ourselves is: is OpenStack everything, and should OpenStack do everything, with NFV?

The pros: OpenStack is truly open, this is a major community effort and that's great, and it's a large and growing community. The cons: it's frankly got classic feature, stability, and scale trade-offs in how it works. Have we outstripped the means? Is it fracturing too much? Are we forking too often on parts of this architecture? And forking equals bad. That's the primary challenge I'm running into: there are multiple versions and distros of everything, and this is becoming a challenge.

But let's take a look at this whole-stack approach. Really, just a couple of years ago, people just talked about the virtual overlay, and if you could fire up workloads and connect them together by any means possible, you had a cloud. That is not a cloud; that is firing up virtual machines and connecting them together, nothing more, nothing less. To build network function virtualization, and in particular services, out of this, there need to be multiple views to be able to operate and deploy these virtual functions as an actual service offering; you can read these as I'm speaking: the policy view, the service view, the virtual topology, the physical topology, and then the resources. To be able to place a service chain, to place these services into a data center, you need to optimize for more than just CPU cycles, memory, and storage footprint. You also need to understand the capabilities of the devices, how you're going to place the functions, whether in a hypervisor or a container, and then, of course, what's different about these workloads: they need to be optimized for I/O. That's a fundamental difference between NFV workloads and traditional enterprise workloads.

Okay, this is the picture I'm going to come back to, but let's fill in a few details. What's the point of this? The point of NFV is to create a service; that's just kind of obvious. To be able to create a particular service, we need to take a look at all of those layers: not only how they interact, but how they actually construct the overall service and are orchestrated to deliver what people want to use or need to buy. So this is a layered model of networking, adding the service layer and the policy layer on top, which is fairly untraditional. In most diagrams that you see, you pretty much just see the bare metal, as I mentioned, then the physical connectivity between machines (for which there is a variety of approaches), then the virtual overlay (again a variety of approaches), and often it stops there. But looking at all layers of that stack is key to being able to deploy NFV.

So the first piece, the policy layer, is associated with group-based policy, and the way I look at it, it's really pretty simple. It's a top-level abstraction where the workloads, which are going to be groups of endpoints, are the edges of that particular policy plane. How are they going to relate to each other? What are the contracts that one virtual function needs to provide another to create that service chain? Where are the features being deployed, and what are the features being deployed? That's all this is really about. The policy is "what do we want to have happen?"; that's the contract nature of this. So group-based policy is one critical extension. Look, the key is to abstract the policy from the underlying hardware, from the layers below, such that wherever these workloads are placed, that policy, those contracts, can be adhered to. The goal here is not to think of virtual functions as individual endpoints to be automated; instead, it's the entire service chain which needs to have contracts within it. If we make the fundamental error and treat a virtual appliance the same way we treat a physical appliance, the same way we treat a physical switch, and we treat them all as singleton endpoints programmed in a serial nature, you're going to have the same cluster (insert the next word), the same CF, that you have right now. Frankly, the way we manage and orchestrate networks today is a complete foobar, and it is because we treat everything as a singleton, individual endpoint, unrelated to anything else around it. That's what group-based policy is trying to solve.

The second piece being added to this is the notion of the service abstraction, or service function chaining. This is how that service layer, that graph, works; if you want to think of it in networking terms, because that's the way I think of it, everything is a topology, a graph being put together, and a service topology is a directed acyclic graph based upon the flows going through those particular nodes. So what we're trying to accomplish with SFC is a notion of direction, order, and sequence that the traffic is supposed to traverse through the NFV service chain, and service function chaining is how you create logical linearity through that chain. The power of what SFC does, or of the network service header (NSH), is that it's decoupled from the actual transport and your addressing. Underneath could be layer 2, could be layer 3, could be v4, could be v6, but the service path is decoupled from the transport. It carries a notion of metadata, and really what we're talking about is being able to match on that metadata: a notion of a VRF context, or the routing table, or the tenancy that the service chain is within, a notion of user context, et cetera. By doing this, we also move beyond just talking about the network and virtualizing these functions.
This is how we can bring in not only virtualized network functions but any virtualized function we need. What I mean by this, to keep it simple, is that you may need a virtualized network function like a virtual router to accept the traffic, and behind that, of course, maybe another network function like a load balancer; but think of media ingest, think of transcoding, think of packaging. Are those really network functions? No, they're in fact media functions that are related to that particular service chain, and they have to be tied to a network topology. The power of using a contract, a notion of group-based policy, to render how that service chain is going to work is that it's a direct reflection of the business policy, of what you're trying to achieve. So, as I've been discussing, you can attach to a particular flow, either at the source or at the ingress point to a particular cloud, the context necessary to apply that service.

Towards this end, we've been working on these two concepts for quite a while. There's a boatload of other players, some of them mentioned here, some of them unfortunately forgotten by accident, but nonetheless we now have this not only in proof of concept and pilot but also in deployment. Standardization, done primarily at the IETF, in conjunction with open-sourcing this through OpenDaylight and OpenStack, is the path forward that we see with a number of these players in the industry.

Taking a look at some of these pictures, I really wanted to show you how this fits into the overall OpenStack architecture and how group-based policy fits in; you can see I've highlighted it here. When you look at the different ways of interfacing into OpenStack, and at the way the drivers map into this, you can see that OpenDaylight takes a critical role: rendering those service chains from group-based policy into service function chains, and managing both the overlay and the underlay below those two layers, as I mentioned.

To give you a couple of other pictures of how this works, and to walk you through how the overall system fits together: there is an orchestration and management system, seen on top, that's trying to create that service chain. It provisions the service function chains, provisions the policy, and tells OpenStack where to place those workloads, those virtualized network functions or virtualized any-functions. Those workloads are placed via OpenStack; OpenStack passes the metadata on to OpenDaylight, leaving the state of the multiple virtual forwarders, as well as the service function chaining, in OpenDaylight, which programs the appropriate features as necessary for the group-based-policy contracts. This particular piece has been proven to work: it dramatically reduces state in Neutron, utilizes the high availability of OpenDaylight, uses the scale-out capabilities of OpenDaylight, but nonetheless keeps the APIs that folks are currently using with OpenStack to create this service, which is the ultimate goal of all this infrastructure. My goal as an engineer is state reduction, improved high availability, service chains that relate directly to business value, and being able to put things together beyond just network function virtualization, for multiple industry solutions.

What this looks like in OpenDaylight is, in fact, the model-driven SAL notion, where you create a model of the topology and a model of the different devices that build this out. But the reason why this also becomes key is not only the fact that OpenDaylight can render this; it can speak any SDN protocol currently known to mankind. There are, let's say, ten today; I can guarantee you that if I stand here next year, there are going to be ten more. If it's not labeled NFV or SDN in networking right now, it's not sexy and you're not going to get attention. But the get-out-of-jail card is a controller that speaks everything, whether it's OpenFlow or OVSDB; and remember, OpenStack Neutron on top actually has no idea that it's passing through this particular plugin. It makes the API calls and passes the data on to OpenDaylight, and we can keep that state, as I mentioned, directly in OpenDaylight for the virtual forwarders, and program the lifecycle management associated with all of those different features and functions to build out the service.

The keys to being able to do this in OpenStack are some extensions to Neutron, through which we can pass some data down to OpenDaylight associated with the port object (and I know there are a number of conversations and discussions happening in sessions this week); then taking what's in that string (I'll show you in a second) and binding it to the policies associated with group-based policy and the endpoint groups: what type of device they are, what the associations are, what the order is, et cetera. Passing this data down into OpenDaylight makes it possible to orchestrate the chain. These expressions, for different environments, actually enable OpenDaylight to be a renderer for multiple service chains and the appliances being created.

Okay, we went through that section pretty quickly, so let me put this back into context, for those who want to understand what I'm trying to get across a little better, since I did go quickly. What's the point of all this?
There are three main things. First, you need to be able to place the workloads in the appropriate container or hypervisor, on the correct devices, in the correct location. Second, you want to be able to create policies between them; this is a pretty normal conversation for anybody in networking: one forwarding device passes data down to another forwarding device, and there are features that are configured and provisioned into those appliances or networking nodes which effectively form this contract. That's the policy. And third, we want to be able to chain it.

Moving one step forward, let's connect these dots. We want to connect placement to policy, such that we can communicate between the groups forming that service. We want to link placement to chaining, so that, given that these virtual appliances are resident somewhere, we can define the policies that need to be applied between them. And we want to connect policy to chaining itself, to communicate which traffic should go into which service chain. Again, this is all pretty normal from a networking point of view, but these are really quite new concepts to the NFV world and the virtual-appliance world. The overall goal: make it easier to define, deploy, and operate the services that need to be deployed together. And this is pretty straightforward stuff: load balancer, firewall, web server. That's a service chain. You want to deploy those at one time.
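The three linkages above can be sketched in a few lines of Python. This is a hypothetical model, not the actual OpenStack Group-Based Policy API: `EndpointGroup`, `Contract`, and `render_chain` are names invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class EndpointGroup:
    # A group of like workloads (e.g. every firewall VM in the chain),
    # not a single appliance: policy attaches to the group, never to one box.
    name: str
    vnf_type: str
    members: list = field(default_factory=list)  # placed workload IDs

@dataclass
class Contract:
    # What one group promises the next: the "policy" between them.
    provider: str
    consumer: str
    rules: list  # e.g. ["allow tcp/443"]

def render_chain(groups, rules):
    """Turn an ordered list of endpoint groups into pairwise contracts,
    giving the chain its direction, order, and sequence."""
    return [Contract(provider=a.name, consumer=b.name,
                     rules=rules.get((a.name, b.name), []))
            for a, b in zip(groups, groups[1:])]

# The load balancer -> firewall -> web server chain from the talk:
chain = [EndpointGroup("lb", "load-balancer"),
         EndpointGroup("fw", "firewall"),
         EndpointGroup("web", "web-server")]
contracts = render_chain(chain, {("lb", "fw"): ["allow tcp/80"],
                                 ("fw", "web"): ["allow tcp/8080"]})
```

Nothing here mentions an IP address or a host: placement fills in `members` later, and the contracts still hold wherever the groups land, which is the whole argument against programming singleton endpoints serially.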
There are features that need to be at each point in that service chain. Make it as simple as possible to render that particular service chain, provision it regardless of where it lands in the data center, and move those policies and that configuration as the endpoints have to move around based on the current state of that data center.

Realizing this in open source: OpenStack is great at placing the VNFs, absolutely great at this, but the policy piece, the provisioning and configuration, has some challenges, and this is where group-based policy comes in again. And the chaining has been built into OpenDaylight. So all of these pieces, when you take a look at that architecture (and I'll get to it again in a second to remind you): the policy plane, the service plane, the virtual overlay, the virtual underlay, and the resource use. All of these need to come together to deploy that service.

As we connect these dots, there are some problems; some things don't currently exist. The semantics to communicate endpoint-group membership to workloads are missing right now. Placement-to-chaining lacks the semantics to communicate what type of VNF a workload represents; we know how to fix that. And policy-to-chaining is upcoming in the Lithium release of OpenDaylight. So we're right on the verge of seeing this architecture come to fruition and completeness in open source.

To close those gaps: as I mentioned, I was going to tell you what that metadata looks like, and this is really what we're talking about. If we can pass some of this information along with the port, we can indicate what the workload type is and what endpoint group it's a member of. This metadata actually ends up becoming quite important for placing the VNFs, identifying them, creating the service chains, and treating them like the service topology that they are. On the placement-to-chaining side, you have the ability to say that the VNF is a specific type, which then keys up the kind of provisioning, configuration, and policy that's necessary for that particular device in that chain. These are some of the immediate steps in front of us that we need to work on.

So let me show you some architectural pictures; some of them are familiar, some are not. This is, in fact, a Cisco picture, putting that architecture into action: you can build a virtual managed services solution with OpenStack and OpenDaylight, with orchestration on top, as I just mentioned, and this builds out those service chains using the architecture I described. The key piece here is that it allows independence from the physical resources and from the virtual overlay and addressing semantics, and it allows the VNFs to be deployed as a chain and referenced as a complete chain, a complete service, both to the user or tenant and to the operator, so that they can perform service assurance associated with that service for that particular user and tenant, and treat the entire service topology, the NFV service chain, as a whole.

Looking at a couple of other pictures: if you're a fan of ETSI, this will orient your brain towards an ETSI-type diagram, showing how the different pieces from the previous diagram fit directly into that ETSI architecture. Again, this is one way to represent it, but do note that there is a fundamental difference, which is that we do not create an architecture where there's an independent element management system per VNF; that is done via the orchestration system, based on those contracts. I actually believe this is a fundamental flaw of the ETSI architecture, and I'm not the first one to say it on stage: treating each and every virtual machine, virtual workload, or virtualized network function as requiring an independent NMS or EMS, I think, is simply ludicrous. Instead, that deficiency is addressed in OPNFV. They're headed down the same path: they're using all of the same tools, OpenStack, OpenDaylight, OVS, KVM, heading towards containers, and they're heading towards their first release as well. But the thing I want to mention here is that the OPNFV community rose not only as a midstream project, and not only from members originally in the ETSI NFV working group, but also from a fundamental slowness, a lack of adoption, of some of the blueprints. When telcos, MSOs, and service providers arrive in the community, they're having challenges with their blueprints and their use cases. This is something that this community has to address; there needs to be an outlet. Seeing a rise of more and more midstream projects, I think, is not advantageous to the industry, but right now the midstream project of integrating all of these open source pieces together, I think, is key at this point in internet history to make it work, and hopefully it'll be proven out very shortly. Then we can move from the use cases they're trying to build midstream directly into blueprints, and it can take off; there's a variety of communities, I think, that can continue to catalyze to provide this outlet.

So my next point is about building these architectures in open source. As an example, I'll give you a quick preview; I hope you read the last line.
I hope you read the last line but a quick preview of this The challenge we have is that there isn't one answer for how to do these things I think for every project and sub project regardless of the community open stack Or opnf v if one way of doing it is good Seven ways of good doing it is better and often those seven different ways of doing it are somebody's own distribution and somebody's own way of doing it and needless to say this turns into a forking nightmare and the though the nemesis of a successful open source community is a forked project and I and in this particular case NFV and deploying these virtualized services getting very very challenging because of the seven different options for everything you could possibly do That requires not just seven times the resources probably 49 times the resources to get any of this done and We do need to find a way within our communities to I don't want to say reduce but begin to make decisions of which want which Valuable piece of technology can move forward towards these solutions or we will see a rise of a continual rise of midstream projects trying to integrate these So stop forking up my NFV So let's take a look at at the future and and what's necessary So my goal is to take this platform concept or this whole stack architectural concept From policy through service through the virtual layer through the physical layer and through resource management and take this to multiple industries right now load balancers and Nats are Interesting, but do not make an NFV solution the number of virtual appliances that need to be brought into the fold and Can be service chain together for high business value? 
It's taking too long Now instead of just ranting and riffing up here, I also will mention a few things that we're working on as well in particular Service assurance has been a complete lagging Caboose at the end of this train It's incredibly challenging to operate look if operating a network is hard to begin with and a lot of IT pros are saying This is way too complex automate simplify etc. Etc Without some of these tools that I'm describing operating an NFV solution is unbelievably challenging and so again passing some of that metadata and passing some of those identifiers of where Workloads are placed where service chains are how they're connected together and be able to represent that as a service makes things easier and That service assurance piece and managing that whole stack is a key focus of Where we're headed both in the open daylight community and and at Cisco the resource awareness and reservation this particular piece is Really close to a black magic for the industry. There are Cloud and data center management tools that allow you to take a look at where workloads are placed and how much CPUs are using and how much storage and how much Ram Necessary but not sufficient when you are an IO optimized workload that IO piece of what is contributing to Sorry, what is the limiting factor or limiting variable in my deployment of NFV most frequently as IO and that isn't part of our overall Cloud and data center resource management system yet. 
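As an illustration of what an I/O-aware placement step might look like, here is a hedged sketch in the spirit of a compute scheduler filter. `Host`, `VnfFlavor`, `io_aware_filter`, and all the numbers are hypothetical, not a shipping OpenStack interface.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_vcpus: int
    free_ram_mb: int
    free_nic_gbps: float  # the dimension classic schedulers ignore

@dataclass
class VnfFlavor:
    vcpus: int
    ram_mb: int
    nic_gbps: float  # requested packet I/O budget

def io_aware_filter(hosts, flavor):
    """Keep only hosts that can satisfy compute *and* I/O demands."""
    return [h for h in hosts
            if h.free_vcpus >= flavor.vcpus
            and h.free_ram_mb >= flavor.ram_mb
            and h.free_nic_gbps >= flavor.nic_gbps]

hosts = [Host("compute-1", 16, 65536, 2.0),   # plenty of CPU, NIC nearly saturated
         Host("compute-2", 8, 32768, 18.0)]
vrouter = VnfFlavor(vcpus=4, ram_mb=8192, nic_gbps=10.0)
candidates = io_aware_filter(hosts, vrouter)
```

On CPU and RAM alone, compute-1 looks like the better host; adding the I/O dimension flips the decision, which is exactly the gap in today's resource management tooling being described.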
So that one is key to how we build these service chains and guarantee that you're going to get the service availability and reliability you need out of them. Security and attestation: trust and security are still almost an open loop in a lot of these pieces. There are a number of proprietary solutions emerging, but nonetheless this needs to be brought into the entire architecture. And then, other architectures of deployment: hypervisors, containers, bare metal. All of it needs to be orchestrated, and workloads have to be placed. To the OpenStack community: please work harder and faster, because we need every form of deployment possible.

So my goal in building out this architecture is not only to bring attention to what we need in order to deploy NFV, or any virtualized functions, but also to the stability, performance, and scalability needed to deploy these in a basic distribution. Not to navel-gaze: I'd say that automation is necessary, but not sufficient. We do need this whole-stack architecture to deploy ubiquitously, and we do need the notion of all of those planes: the policy plane, the service plane, the virtual overlay, the physical layer, and resource management.

I've got a few minutes for some questions, comments, or conversation. There's a mic over here, or I can repeat your question if it's too far for you. Yes, please.

A quick question about container networking. If everything goes well with containerization, containers are going to be the big thing for the next generation of networking and applications, in terms of NFV and VNFs. So what is Cisco's view, what are Cisco's ideas, about networking those containers across multiple data centers, multiple hosts, et cetera?

Okay, so there are really two questions in there. One is where we're headed with containers. Containers, hypervisors, bare metal: I want to be clear, I don't care. Anybody's business function is not based upon how they actually wrap their application, and we need to support them all. I've got no religion on any of this, although it's a fun bar conversation in which to have religion. The second question you asked is how you federate these data centers together. That is clearly also black magic at this particular point in a lot of these deployments, and having geographic redundancy, being able to move workloads from one data center to another based on proximity or high availability, is a key piece. The best thing emerging out of this whole-stack architecture, with respect to group-based policy, is that it's not associated with the underlying physical resources. You can take those policies and contracts and move those workloads to another data center without deploying new addressing, without reassigning all those variables that have traditionally been necessary when moving network functions around. So the trajectory is there towards data center federation, but this infrastructure, I believe, needs to be in place to make it so, and in particular to abstract things above the hardware itself.

I threw a bunch of contentious comments out in this presentation.
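The federation answer can be illustrated with a toy example, assuming a hypothetical name-based policy store (`POLICY`, `render`, and both address plans are invented for illustration): because contracts reference endpoint-group names rather than addresses, the same policy renders against either data center's address plan without any re-addressing.

```python
# Contracts are keyed on group names only; no addresses appear in policy.
POLICY = {("web", "db"): ["allow tcp/5432"]}

# Two data centers with entirely different address plans:
DC_EAST = {"web": ["10.1.0.0/24"], "db": ["10.1.1.0/24"]}
DC_WEST = {"web": ["192.168.7.0/24"], "db": ["192.168.8.0/24"]}

def render(policy, placement):
    """Expand name-based contracts into concrete rules for one data center."""
    rules = []
    for (src, dst), actions in policy.items():
        for s in placement[src]:
            for d in placement[dst]:
                for act in actions:
                    rules.append((s, d, act))
    return rules

# The same POLICY object, untouched, renders at either site:
east = render(POLICY, DC_EAST)
west = render(POLICY, DC_WEST)
```

Moving the workloads west means re-running the renderer against the new placement; the contracts themselves never change, which is the abstraction-above-the-hardware point.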
I can't believe folks are taking it sitting down. Lou, you're taking this sitting down; go ahead. I'll repeat it for you if you like. One hundred percent, one hundred percent. In particular because, as you know, through history there was state held in Neutron; when we added more than one forwarding plugin, there was cross-state across those plugins; then the ML2 drivers came in, and there was state associated with them. High availability, scaling, consistency, and coherency became, frankly, a very challenging proposition. Obviously people made it work, and they made it work with a ton of effort, but when you step back and look at it architecturally, you'll see: oh, you've got your state distributed around too much, and it's very challenging to keep everybody consistent on how it's working. Let's have something that's abstracted, and move that state into one location associated with the entire chain. So I agree one hundred percent that state reduction, and having Neutron focus on massive performance and scale and on clarity and ease of those APIs, is what I'd recommend, and move that state to something that speaks any language of networking. You could kind of see the trajectory Neutron would be on, trying to handle multiple forwarders and the infinite number of drivers or plugins that would have to come out of that, each one from a potentially different vendor or multiple open source communities. You know: danger ahead. I think that would be very challenging.

Excellent point. So, as they say, I'm an old SDO man myself; in fact, I am. Really, what this is coupled with, with respect to the service models, is that those are being described in YANG, a modeling language being standardized at the IETF. Working with the IETF in particular, and if you know that body well, you'll know that it distributed out all the work to all the relevant working groups, as if each piece were associated with a particular protocol or feature. If we want to deploy a tenancy in a layer 3 VPN, with its own IP address pool, doing that in a distributed nature has an extreme likelihood of taking, I don't know, ten years, maybe twenty, and I say that because SNMP is in the state that it's in. So, using YANG and working with some IETF folks, they're focusing on defining services first: to be able to provide an end-to-end service like a layer 3 VPN or a layer 2 VPN, or to build out these service chains. So YANG is key from a policy point of view and from a service view, and service chaining is being done at the IETF as well. Creating the communication between OpenDaylight and the different virtual forwarders or virtual machines underneath: as I mentioned, most of the SDN protocols have one standards body or another associated with them, whether it's OpenFlow, or NETCONF and YANG, or Path Computation Element; I could go on and on. The one that's a bit challenging is OVSDB, and OVSDB really needs something written down. I'm not really a big governance guy, but it would be really nice if something were written down, so we could agree on some aspect of the trajectory, hand in hand with the code that's being developed.

One thing about this architecture, when it comes to YANG or the SDN protocols or OpenDaylight or NSH or the others: the play that we tried to run, if you want to think of it that way, is that it's being standardized in an SDO, and the correct SDO (and that could be ETSI, IETF, ITU, IEEE, ONF, and so on), at the same time that the code is being written. One thing I do know from being an old standards guy, and this is going back a while now, is that standards written at the same time as the code produce superior standards. A committee of people who after two years produce a piece of paper that isn't worth the paper it's printed on, because they really thought hard and succumbed to groupthink and a variety of other ways of fooling yourselves into thinking it's a good idea, with no code next to it: those tend to be standards that either take a very long time to implement or never arrive in the marketplace. So that conjoined nature of open source and open standards is key, and this entire architecture is both open source and open standard.

Dave, we're going to close the questions down now and do our drawing. So if you guys have filled out your cards, put them in the fishbowl; we're going to do the drawing, and I think Dave might hang out a few minutes outside if you have other questions. Are we giving away a car? Is this open? So, I thank you all for your time this morning. Have a great OpenStack Summit, and I'll talk to you soon.