Good afternoon, everybody. Oh, sorry, I've obviously interrupted, so please carry on. Thank you for coming. This is our fourth and final session of the day. We've covered multi-cloud networking, we've covered our CVIM product, and now we're going to get into building NFV applications with OpenStack and Cisco ACI. We've got Domenico de Stolli and Ifty Rathore down here, two of our engineers, and away we go. We are going to build time in for Q&A at the end, and for those of you who are always very busy trying to take pictures of the screen: video of the session will be up on the OpenStack Foundation YouTube channel later tonight, so save the space on your camera for pretty pictures of Berlin. And away we go. Thank you very much.

Thank you, Gary, and welcome, everybody, to this session. So as Gary said, this is about OpenStack and ACI, so how you are going to build your NFV solution based on OpenStack and ACI. My name is Domenico de Stolli. I'm a technical marketing engineer from the business unit which in Cisco is responsible for ACI. I'm going to present the first part of this session today, and then my colleague Ifty will take care of the rest of the presentation. So thanks a lot for being here, by the way. I heard that someone is playing against us.
There are beers; maybe I shouldn't tell you, but I really appreciate the fact that you are here instead of drinking beers in the other section. So let's get started. The agenda today is split into different parts. Since we're talking about Cisco ACI and OpenStack, I guess if you're here you know what OpenStack is, but you may not know what ACI is, so we're going to have a brief introduction to Cisco ACI and what it is. We're going to understand why you may eventually want to run the Cisco ACI solution together with OpenStack, so what the benefits are there, and then we're going to move toward the kinds of NFV challenges that our customers typically have or share with us, and how with Cisco ACI you can better solve those challenges. So, Cisco ACI and OpenStack, better together, if you will.

Let's get started. What is Cisco ACI? Cisco ACI is Cisco's software-defined networking solution, and it is based on three main components. The first is the underlay, or the switching layer if you like, which is based on the Nexus 9000 switching family, and this is built as a bipartite graph, as we call it, or rather a leaf-and-spine topology of switches.
The second part of ACI is the APIC controller, or rather the APIC controller cluster, which is the brain of the solution. We'll see later that the Cisco APIC is the single point of management, configuration, troubleshooting, and visibility for the entire ACI infrastructure. The third component is the software that runs on the Nexus 9000 switches and the APIC, which is indeed called ACI: Application Centric Infrastructure.

Now, the reason we call it application-centric is that ACI introduces a network policy framework which is shared amongst different kinds of compute architectures. So you can build your application framework, which can then allow connectivity from bare-metal servers as well as virtual machines, under different kinds of virtual machine managers, and eventually also containers. All of these kinds of compute can be part of the same network and security policy framework offered by the ACI solution. The beauty of this is that you can move from one kind of architecture to another, or have multiple compute architectures communicating with each other, being switched, routed, and eventually also having some service insertion to allow communication between all of them.

How does it work underneath? The ACI solution works with a protocol called OpFlex. OpFlex is an open-source declarative model which allows communication between the APIC, so our ACI controller, and the rest of the nodes of the fabric. What that means is that the APIC instructs all the nodes on which policies should be defined and configured on them, in terms of what the intent of the user is. Let me give you an example.
In a declarative model, I express an instruction without spelling out each and every step to get to the final configuration I would like to have. If I'm thirsty, for example, in a declarative model I just tell my colleague Ifty that I'm thirsty, and he will figure out the steps needed to pour some water into a glass and bring the glass to me. In an imperative model, which is the old or legacy fashion of configuring things, I would instead have to tell Ifty exactly how to pour water into my glass, walk towards me, go down the stairs, and eventually bring me the glass. So with the OpFlex protocol we really want to abstract things so that the controller doesn't have to know the exact configuration that must be applied on each and every device, but rather only the intent of the user. Obviously, the nodes or devices to which we are pushing the configuration must be smart devices, if you like. That's why those devices run some sort of OpFlex agent, an agent which is capable of understanding this declarative model.
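The declarative idea described above can be sketched in a few lines of Python. To be clear, this is a toy illustration of the concept, not the actual OpFlex protocol: the controller publishes only the desired state, and a device-side agent works out its own steps to converge.

```python
# Toy illustration of declarative vs. imperative configuration.
# This is NOT the OpFlex protocol itself, just the idea behind it:
# the controller states intent; the device-side agent derives the steps.

def controller_intent():
    # The controller only declares WHAT it wants, not HOW to get there.
    return {"vlan": 100, "gateway": "10.0.0.1/24"}

class OpflexStyleAgent:
    """Device-side agent that reconciles local state toward the intent."""

    def __init__(self):
        self.state = {}

    def reconcile(self, intent):
        # The agent itself figures out the imperative steps (the "how").
        steps = []
        for key, desired in intent.items():
            if self.state.get(key) != desired:
                steps.append(f"configure {key} -> {desired}")
                self.state[key] = desired
        return steps

agent = OpflexStyleAgent()
print(agent.reconcile(controller_intent()))  # two steps on the first run
print(agent.reconcile(controller_intent()))  # [] - already converged
```

Running `reconcile` a second time returns no steps, which is exactly the point of a declarative model: the agent converges toward intent rather than replaying commands.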
The interesting thing is that this OpFlex agent is not only running in the ACI fabric; it can also run in the hypervisor. So it can run on your compute nodes, and it can obviously run on the compute nodes from an OpenStack perspective, and we'll see later exactly how the paradigm works. But this can also be extended to agents running on VMware, if you have some VMware compute in your data center, or on Microsoft SCVMM, as well as on some types of routers or switches that you have in your data center.

This is the final slide before we jump into the OpenStack way of integrating with ACI, and it tries to summarize very briefly how ACI works across several kinds of domains. I think it shares very well the ACI Anywhere vision that we have, because ACI is an SDN solution that wants to be a solution not only for your on-premises data center. When we talk about on-premises, we also talk about the possibility to extend your data center across multiple pods or multiple sites around the world, having VXLAN and end-to-end policy enforcement which is completely shared across multiple locations. But we also work on remote locations or branch offices, where you can again extend your VXLAN as well as your network policy enforcement. The remote location story here is basically the possibility to have small sites where you deploy just a pair of leaf switches, so Nexus 9000 switches, or even have ACI extended through what we call the virtual pod: virtualized leaf and spine components that can run with a very minimal footprint in your extended branch offices. At the same time, we also work on the public cloud on the other side. So, who's not talking about public clouds?
I think some of you may have attended the previous session from our Cisco colleagues about the multicloud solution. Indeed, ACI is capable of extending towards different clouds. We are working now with AWS, and Google Cloud Platform and Microsoft Azure are also in the pipeline. The idea there is that you can extend your data center network policy, through VPN or something like Direct Connect, to the cloud, or to multiple clouds, and you can decide, based on cost or on different kinds of business analysis, how, where, and what exactly you extend from your data center. So again, that's a 10,000-foot view of ACI, a very brief introduction just so we are on the same page about what ACI is.

Now, obviously, why are we here? We are here because, when talking with customers, and we talk with several customers every week, we understand that OpenStack has a number of challenges, especially from a networking standpoint. In particular, we see that many customers find that distributed layer 3 services are typically a challenge with OpenStack distributions. That's not always true, obviously, because there is a distributed virtual routing function, but in general, if you move to more advanced features like NAT or floating IPs, or other service insertion, then having those network services distributed in OpenStack is not really trivial. At the same time there is performance; you are here probably because you're interested in NFV, and performance in the NFV world is very, very key. So performance is also sometimes a challenge in OpenStack, together with visibility and the complexity of troubleshooting. Why? Because you typically focus on the overlay: the architecture of OpenStack, how you create and spawn virtual machines, networks, et cetera.
But you have very minimal visibility, or at least you don't have a merged view of that infrastructure together with your underlay, your switching layer. With ACI we are trying to solve these kinds of problems, and we have, possibly or hopefully, an answer for each and every one of them. First of all, we completely replace the data path from a Neutron perspective, and we distribute the routing function, starting from NAT, but also floating IPs, as well as DHCP and metadata optimization, which is completely distributed on each and every compute node that you have. We'll see that in more detail in a few slides. The second thing is that we also support hardware acceleration: with the ACI plugin you have the possibility to run SR-IOV or OVS-DPDK, and in the future also VPP; you may have heard about VPP in one of our previous presentations. At the same time, sometimes we talk with customers and many are interested in running either VLAN or VXLAN, but VXLAN can come as a challenge depending on what kind of NIC interfaces you have in your servers.
So ACI provides you VXLAN in the backend. Basically, between leaf and spine we always run VXLAN, and you can decide whether, between the compute node and the top-of-rack switch, you run VLAN or VXLAN. It doesn't really matter from a scalability standpoint, because ACI is capable of per-port VLAN significance: you can run the same VLAN on many leaf ports, but that VLAN may signify something different from an ACI standpoint. Again, from an integrated overlay and underlay perspective, we'll see later, but ACI basically completely automates the configuration that is pushed from a Neutron perspective, and therefore you're going to have a one-to-one mapping between the Neutron components and the ACI components. That means you have much better visibility of both your overlay and underlay. You'll have information like which hypervisor your VM runs in, but also what kind of encapsulation it is using, and which leaf or top-of-rack switch that specific compute node is connected to, so a really end-to-end understanding of how the packet is flowing within the ACI architecture from one endpoint to another. And finally, troubleshooting is significantly improved by the health score and the telemetry system which are part of the ACI architecture: a health score which, as a percentage, will tell you how healthy your system is, and which, if there are any problems, will warn you with alerts and faults.

The ACI solution works with multiple different OpenStack distributions, specifically with our Cisco VIM solution, but also with Red Hat OSP Director and Canonical Juju charms. What I mean by that is that the installation part of Cisco VIM, OSP Director, or Juju charms takes care of installing the ACI plugin for you, and we'll see in a slide what the ACI plugin components are that get installed. Basically, there's minimal effort from your standpoint.
It is, I would say, completely transparent from an effort perspective whether you are running the ACI plugin or not. You'll see mostly the benefits, and no pain in terms of installation of the whole architecture. So what are the main components of the ACI plugin? There are mainly three. There is obviously the ML2 plugin provided by Cisco for ACI; you may know better than me what an ML2 plugin is: it's a framework provided by the OpenStack distribution to configure your underlay switching layer, and Cisco provides one such plugin for ACI. The second component, which is key to the environment, is the ACI Integration Module, the AIM. This is the one actually making the RESTful API calls in order to create ACI objects in your ACI architecture. And finally we have the OpFlex agent, the one I was presenting before; this OpFlex agent is deployed on each and every compute node that you have in your OpenStack architecture.

This is the overall architecture picture, where you see how the entire flow works. And talking about the flow, we can see here how you operate your network from an OpenStack administrator perspective, and you will see that it doesn't change much from your normal use of OpenStack. The OpenStack tenant will still interact with Neutron, or rather Nova, and all the OpenStack projects, from an OpenStack controller perspective. When you create the network architecture, or rather the Neutron router, the Neutron networks, subnets, et cetera, these will in turn trigger the ACI Integration Module to make RESTful API calls to ACI, to the APIC specifically, and create ACI objects. So we're going to have a one-to-one mapping of the Neutron network into what we call in ACI endpoint groups.
So when you attach a virtual machine through Nova to your Neutron network, the APIC will take care of configuring the whole fabric infrastructure, so that you get your pervasive gateway configured and all the network policy configured in your Nexus 9000 switching environment. This is the one-to-one mapping that we have; I'll go very quickly through it, but the idea is that for each and every Neutron object you have, you're going to have a corresponding ACI object, so you have full visibility of things there. This is a screenshot of the ACI GUI, but I just wanted to highlight the fact that when you create each object, for example here an OpenStack project, we're going to have in turn one ACI tenant. When you create an OpenStack Neutron network, you're going to see objects created automatically in ACI, like an ACI endpoint group and a bridge domain, with a corresponding subnet attached to it representing your default gateway, distributed on each and every node. And finally, you're going to see the Nova virtual machines that you create and attach to the Neutron networks, so you're going to have information like your VM name, the network and encapsulation that VM is attached to, and the compute node that virtual machine is running on.
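The mapping just walked through can be captured as a small lookup table. The entries below only cover the objects named in this talk (project, network, subnet); the real plugin maps more Neutron resources than this, so treat it as an illustrative summary rather than a reference.

```python
# The one-to-one Neutron -> ACI object mapping described in the talk,
# as a simple lookup table. Illustrative only: it covers just the
# objects mentioned on the slide, not the full plugin behavior.

NEUTRON_TO_ACI = {
    "project": ["tenant"],
    "network": ["endpoint_group", "bridge_domain"],
    # The bridge-domain subnet acts as the distributed default gateway.
    "subnet":  ["bridge_domain_subnet"],
}

def aci_objects_for(neutron_object):
    """Return the ACI objects created alongside a Neutron object."""
    return NEUTRON_TO_ACI.get(neutron_object, [])

print(aci_objects_for("network"))  # ['endpoint_group', 'bridge_domain']
```

This is the property that gives you the merged overlay/underlay visibility described earlier: every Neutron-side name has a concrete ACI-side counterpart you can inspect in the APIC GUI.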
So you really will have the full visibility I was talking about just a few minutes back. Now I want to switch to NFV challenges, and shortly I will pass the word to my colleague Ifty. In general, when talking with customers about challenges in NFV architectures, what we see is that in the NFV world you may want to very rapidly create, and eventually create and destroy, VNFs in your environment. The challenge is that you may have VNFs distributed everywhere in your data center, attached to different top-of-rack switches, possibly even across multiple data centers. And the scale of your architecture, in terms of the VNFs you're going to have, will be much wider, so you're obviously going to need equal-cost multipath towards all the VNFs that you are creating. At the same time, optimal performance of the VNFs is also a challenge there. So the idea of ACI is also to work in the field of NFV, supporting customers around the different capabilities that are lacking there. Ifty is going to talk shortly about a couple of features, but I'll briefly introduce here that ACI supports Neutron trunk ports, obviously. We also support Neutron SVI, so the dynamic creation, and again Ifty will talk about it shortly, of BGP-enabled SVI networks on the top-of-rack switches towards the VNF components, as well as Neutron service function chaining.
So you can create service function chains in your environment and be fully supported by Cisco ACI. We also support OVS-DPDK and SR-IOV, and, as I said before, VPP is a roadmap item that we have with Cisco ACI in the near future. So with that said, I will now pass the word to Ifty, who is going to talk more about the Neutron SVI and SFC architecture. It's yours.

Hello, hello, it's working, right? Can anybody hear me? So, thanks, Domenico, for doing the hard part; it was great. I'm going to start with the SVI slides: basically, what do we have for the Neutron SVI feature, and why do we need it? First, as my colleague told you, the SVI is used for a couple of different reasons. One is, of course, that you have VNFs that are adding services dynamically in your data center, or in the data path of your traffic that is maybe going from your branch office to the internet. So as you add more services, or as you add networks, you should be able to peer with the external world and advertise the routes using BGP or OSPF or whatever protocol you like. The second thing is that it actually gives you ECMP. You have services that you're deploying that are exposing endpoints, and those endpoints could actually be the same IP addresses. If you advertise the same IP address from multiple points in the ACI infrastructure, ACI will automatically load balance the traffic to all of those endpoints depending on their location. So the traffic first gets load balanced to the 9K switch, and the 9K switch will load balance to the closest and least used VNF.
So that's one of the main things: we need to be able to peer with the rest of the world to advertise our routes, and the second thing is that we can use it for 64-way ECMP, which can actually span across multiple sites and multi-pod. That's one of the main benefits of the SVI. So you could actually support several different pairs of switches, and, as I mentioned, you can distribute your load further. It's more like L3 load balancing coming all the way to your load balancer VNF, and then the load balancer VNF does the extra load balancing, layer 4 to 7, down to your application. So it gives you far, far more scalability and efficiency. This slide basically shows that (I cannot move because of this mic, but let me see if I can use that) we are advertising the same VIP and basically letting ACI decide which one will actually get the load. So traffic goes from the external world to the leaf switch, automatically, because the ACI policy allows it to, and then the leaf switch will load balance it: if there are multiple VNFs deployed behind the same leaf switch, it will load balance to those. And yes, the leaf will automatically extend it to that one. And this is basically the demo we actually have running in our booth, if you want to come and see. We have a server farm where we have deployed three different networks.
One network represents the external world, and the second is our load balancer, which actually has multiple instances but is advertising the same VIP through BGP, which allows the external world to come in and be load balanced. And then we have this load balancer, which load balances to a real server farm. So it's basically L3 load balancing from here to here, and layer 4 to 7 from here to here. And actually, I should have practiced the animation. So we are advertising 10.10.10.10, and it's flow based, right: when the first flow comes in, it's going to come to ACI, and ACI will round-robin it, one by one, to the load balancers. So the second flow goes to the second one, and so on. And from this we can load balance the traffic using, you know, any load balancer; there are a lot of distributed load balancers available that will give you this scale. So this is an application you can build yourself with the commercial distributed load balancers available from the companies that are here today. It's basically going to load balance to the load balancer, and layer 4 to 7 is going to load balance to the server farm; that's outside of ACI, and ACI will provide the switching and routing for that. But the main component is actually the ECMP that we're providing for external load balancing. So that was the part about the scalability of your application. Now, SFC is the standard Neutron service function chaining API, right?
What it allows you to do is create port pairs, deploy your VNF on that port pair using an ingress and an egress port, and then define some kind of flow classifier that will let you start sending the traffic to your VNF, or whatever you're deploying, and then handle the return path. It is done using what we call multi-node PBR. PBR in ACI stands for policy-based redirect. A policy-based redirect is done with a construct that we call a service graph, which gets applied to traffic, or to a router, for example; for anything flowing through it, we can say, using this classifier, redirect this traffic to this port pair and then bring it back. Multi-node means you could actually have multiple bumps in the wire, which means you could take the traffic to VNF one, from VNF one to VNF two, VNF two to VNF three, and then back. You can define all of those things without having any domain knowledge of ACI; that's what it allows you to do. You do not need to know anything about ACI. You basically use the Neutron calls: create port pairs and port pair groups, create a flow classifier, and then you can create a service chain. As soon as you create the service chain, the traffic will automatically get redirected, because of the ACI service graph functionality. Then you can dynamically add and remove nodes to that particular service chain by updating the service chain, and it works out of the box without having to do anything on the ACI side. So this is the basics: we have the port chain API, and that is managed by Neutron; then the driver manager passes that on, and AIM, which, as my colleague mentioned, is the main module that pushes all the Neutron constructs to ACI, will push it to the ACI fabric. On the Neutron side, it's the same way we're creating the port chain.
We take the port pairs: if you take multiple port pairs and create a port pair group from them, you're deploying those VNFs vertically, and ACI will take the responsibility of load balancing across them. So load balancing to vertical VNFs is also provided automatically. And if you have multiple port pair groups in the service chain, they're deployed horizontally, which means that if you have three port pair groups and you apply them in a service chain, it's going to take your traffic to the first port pair group, the first VNF, then the second VNF and the third VNF. If you add another port pair group and update the chain, that service will automatically be inserted into your path. What we support is the API and the functionality. The CLI for creating these, the neutron port-pair-create command, port-pair-group-create, flow-classifier-create, all those commands, is part of the Neutron CLI, so that CLI extension has to be provided by the vendor, but we fully support the API and it works out of the box. Of course, you can always get the Neutron SFC CLI; it's just a Python client for SFC, and that will allow you to create all of this, but the vendor that is giving you the OpenStack is the one who has to support that part. So OpenStack creates everything. Basically, when you're creating the SVI, OpenStack will create the SVI; it's an extension to Neutron. So you say you want to create a Neutron network of type SVI, and it will automatically push the configuration. So the Neutron side creates the SVI, it manages the lifecycle of the VNF, and it manages the port chain, right?
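The vertical and horizontal semantics described a moment ago, load balancing within a port pair group versus ordered traversal of the groups in a chain, can be modeled in a few lines of Python. This is a toy model of the concept, not plugin code; the VNF names are made up.

```python
# Toy model of SFC chaining semantics: a port chain is an ordered list
# of port-pair groups. Groups are traversed in order ("horizontal");
# within a group the fabric load balances across members ("vertical").

class PortChain:
    def __init__(self, groups):
        self.groups = list(groups)   # each group = list of VNF names

    def add_group(self, group):
        # "port chain update": the new bump in the wire joins the path
        self.groups.append(group)

    def path_for(self, flow_id):
        # pick one VNF per group, deterministically per flow
        return [group[flow_id % len(group)] for group in self.groups]

chain = PortChain([["fw-1", "fw-2"]])   # one group, two vertical VNFs
chain.add_group(["ips-1"])              # second bump in the wire
print(chain.path_for(0))  # ['fw-1', 'ips-1']
print(chain.path_for(1))  # ['fw-2', 'ips-1']
```

Note how adding a group changes every flow's path immediately, which mirrors the behavior described above: updating the chain inserts the new VNF into the traffic path without touching ACI directly.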
So all those three things are done declaratively from the Neutron side, and it's pushed to ACI, and ACI does the data path orchestration, which means that the traffic will automatically start flowing, or, if you're peering, it will make sure that you actually have peering capabilities with the external world. Once we create it, it's implemented as, let's say, a multi-node service graph. You basically have the client, and then you have a server; you have two networks, and within those you're inserting this graph. To create this, we have to create networks, meaning you create two networks, and you create ports, an ingress and an egress port. After that, you insert those into a service chain, and that VNF will start receiving the traffic. Inside ACI we have a different semantic: we have the consumer and the provider instead of the ingress and the egress, but it means the same thing. This knowledge is not required at all to use it; all you need is a sniffer to see that, yes, you're actually seeing your traffic come to this VNF and go out, or if you have any monitoring, that gets done automatically. Sorry, we don't have too much time. This is just a simple example of creating it: we create two networks, we create a flow classifier, then we create the ports, the ingress port and the egress port. We use Nova to start the VNF using the ingress and egress ports. We also take those two ports and put them in a port pair, using just the Neutron port-pair command. Then we take that port pair and we create a port pair group.
It's just a single instance here; if you have multiple port pairs and you create the port pair group from them, then it will be deployed vertically. Then, from the port pair group and the classifier that we created on the last page, we create the Neutron port chain, using port-chain-create. So it's basically a completely standard Neutron workflow; there is absolutely nothing non-standard about it, and magically you'll start seeing that the traffic you're classifying is automatically reaching the ingress port of the VNF, and the traffic that you send out from that VNF actually goes back. So that's the VNF part. As I said, you can dynamically add and remove VNFs by just using the port-chain update command. So here we are basically creating two more networks, you see that... I think it's running out of battery or something. So yeah, we're creating the source and destination, another port pair, and then we basically add that with the same classifier. Sorry, I went too far back. Yeah, more bumps in the wire: we create another network, we create two more ports, we create another port pair group, and then we use those two port pair groups. So if initially, when we created the chain, we only had one port pair group, we then update it and add the two port pair groups, like cluster one and cluster two, and it automatically adds the second port pair, which means the second VNF will also start receiving the traffic that is leaving the first VNF. And that is what multi-node PBR is, which means you now see multiple VNFs on the ACI side.
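The workflow just described maps onto the networking-sfc CLI roughly as follows. This is a sketch, not a verified recipe: exact command names and flags vary by release and must be checked against your distribution's networking-sfc documentation, and all resource names, the image, and the flavor here are made up for illustration.

```shell
# Sketch of the SFC workflow described above, using networking-sfc
# style OSC commands. Flags and names are illustrative; verify against
# your release before use.

# 1. Ingress and egress ports for the VNF
openstack port create --network net-in  vnf1-ingress
openstack port create --network net-out vnf1-egress

# 2. Boot the VNF attached to both ports (image/flavor are placeholders)
openstack server create --image vnf-image --flavor m1.small \
  --nic port-id=vnf1-ingress --nic port-id=vnf1-egress vnf1

# 3. Port pair -> port pair group
openstack sfc port pair create --ingress vnf1-ingress --egress vnf1-egress pp1
openstack sfc port pair group create --port-pair pp1 ppg1

# 4. Classifier selecting the traffic to steer
openstack sfc flow classifier create --source-ip-prefix 10.0.0.0/24 fc1

# 5. Chain: traffic matching fc1 is redirected through ppg1
openstack sfc port chain create --port-pair-group ppg1 --flow-classifier fc1 pc1

# 6. Later, add a second bump in the wire (second VNF) dynamically
openstack sfc port chain set --port-pair-group ppg2 pc1
```

As the speaker notes, nothing here is ACI-specific: the plugin translates these standard Neutron SFC objects into an ACI service graph behind the scenes.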
This is basically the ACI side, but as I said, ACI knowledge is not really required; it's just so you can see that you get full visibility on the ACI side of what is being inserted into your traffic path. This is a very typical use of all these things together. What we have is basically these VMs, which are BRAS VMs, we have an external NAT VM, and this is basically the customer's data center; they're all running on OpenStack. From here we create an SVI; we can say the route to Google, or 8.8.8.8, is through here, so all the traffic will start coming here. And here we can advertise outside and say the route to the data center is here. That's the peering part. If you want, you can also serve different types of applications here, and you can also use it to scale things. Now this SVI will have all the customer traffic flow through this transit network and go outside. What we can then do is use those Neutron port commands: we can create these networks and start dynamically inserting applications into the path of that traffic. Another thing is that we could have multiple copies, and we can use segmentation, using Neutron trunk ports, to separate different branches, or different customers, or different tenants' traffic, and we can treat them very differently depending on that segmentation. So this is going to be a trunk port here and a trunk port here, and then this will have multiple VLANs passing through it. Then, depending on the networks and the VLANs you're creating dynamically, you can treat different customers' traffic, and different types of traffic, differently, by using different VNFs depending on what you actually need.
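The trunk-port segmentation described here uses Neutron's standard trunk extension. A sketch of what that looks like on the CLI follows; the network and port names are made up, and as with everything in this workflow, details may differ slightly between releases.

```shell
# Sketch of the trunk-port setup described above, using Neutron's
# standard trunk extension (names are illustrative).

# Parent port carries the trunk; each customer gets its own VLAN subport.
openstack port create --network transit-net vnf-parent-port
openstack network trunk create --parent-port vnf-parent-port vnf-trunk

# Add one subport per customer/branch: traffic arrives VLAN-tagged in
# the VNF, so each segment can be steered through different VNFs.
openstack port create --network customer-a-net customer-a-port
openstack network trunk set vnf-trunk \
  --subport port=customer-a-port,segmentation-type=vlan,segmentation-id=101
```

Each subport's VLAN ID is what lets the VNF, and the per-segment service chains behind it, distinguish one customer's or branch's traffic from another's.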
So this is a very simple use case that you can put together. There are a lot of resources for running all of these things, but you're really welcome to come to the demos that we have. So that was basically it; if you have any questions... Back to you.

So I'll sum up very quickly, since we're running out of time. We highlighted a number of benefits of running ACI and OpenStack, so ACI and OpenStack, better together; I think that should hopefully be clear by the end of the session. We've got some want-to-know-more links where you can find more information. Also, we're going to be at the booth, just on the other side of the building, so you can find us there. And yeah, that would be all from our side. So if you've got questions, we'll be around, we'll be at the booth, so you can find us anywhere you want. Thank you all for coming.