Good to go. Hey, folks. Thanks for joining us for this marketplace session. We're going to talk about Brocade's NFV orchestration using the OpenStack Tacker project. My name is Sridhar Ramaswamy. I'm a principal engineer at Brocade. And with me is Jeff, Jeff Remeter. He's a solution architect at Brocade in our BU. So let's go ahead and get started. The agenda is: we're going to talk, in general, about this OpenStack Tacker project, what it is and what its scope is. We're going to talk a bit more about the features and how some of them function as a workflow. Then we will move on to a demo, which is probably the main piece; I'll spend the maximum time showing how this project functions and how this whole feature set works, and we'll wrap up with what we plan for this project down the line. So with that, what is Tacker? Some of you might be familiar with this diagram; it's the ETSI MANO architecture diagram. What the Tacker project is, essentially, is an orchestration project geared towards the red box that you see around the VNF manager and the NFV orchestrator. Those are the two components that are in scope for OpenStack Tacker. Our initial features are mostly geared towards the VNFM, which is at the bottom, but as we add features, we are progressing up to an NFV orchestrator. Again, this architecture diagram comes from the ETSI NFV organization, and the theme for OpenStack Tacker is to align with that architecture. I know that the words OpenStack and orchestration are used fairly widely, but the scope for OpenStack Tacker is an ETSI MANO orchestration solution. As for the features: if you're familiar with basic orchestration, the main building blocks required in a VNFM are what Tacker supports today. There are features like the VNF catalog, which is essentially used to onboard TOSCA templates for VNF descriptors. This is something specified in ETSI MANO.
So we support that: the first thing you would do is basically onboard your VNF. And we have other basic lifecycle management features, basically taking a VNF through its life: instantiation and termination. There are other aspects where the templates need not be fully hard-coded; they can actually be parameterized. This is a feature we introduced recently in Liberty. Beyond that, we made sure to add the features that are really relevant for deployers, which includes instantiating the VNF with its initial configuration. There are two ways to get the configuration in: one could be through user data, where things get injected during the boot process; the other mechanism we have is, once instantiation finishes, there are functions you need to perform on the VNF, either through a REST API or, as we're actually going to demonstrate, through an SDN controller using NETCONF/YANG to configure the VNF that just came up. Beyond configuration, we also have basic health monitoring, where you can monitor the KPIs of the VNF that you just instantiated. And if there are any health events, Tacker can act, basically by respawning or other methods, to rectify the situation for that particular VNF. So I'm going to briefly walk through the architecture and the workflow. This is a fairly big, colorful chart here, with the different components for the functionality I just talked about. One thing I want to stress, again, is that Tacker is like any other OpenStack project: it has its own API server, there's a CLI, there's an API, and of course we have a Horizon dashboard. That's something you will see. So I'll quickly walk you through the workflow. The first thing you would do is onboard the VNFs. Again, the OpenStack Tacker project envisions this to be a general-purpose VNF manager.
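To make the parameterization idea concrete, here is a minimal sketch in Python. The field names are illustrative, not the exact Tacker or TOSCA schema: the point is only that fields a template leaves as inputs get filled in from a per-instance value file at deploy time, so one onboarded template can be instantiated any number of times.

```python
# Hedged sketch of template parameterization (field names are
# hypothetical): values marked get_input are resolved per instance
# from a separate parameter file, so the descriptor itself stays
# free of hard-coded deployment data.

def resolve_inputs(node, values):
    """Recursively replace {'get_input': name} with a supplied value."""
    if isinstance(node, dict):
        if set(node) == {"get_input"}:
            return values[node["get_input"]]
        return {k: resolve_inputs(v, values) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_inputs(v, values) for v in node]
    return node

vnfd = {
    "vdu1": {
        "image": {"get_input": "image_name"},   # pulled from Glance
        "flavor": {"get_input": "flavor"},
        "monitoring_policy": {"ping": {"actions": {"failure": "respawn"}}},
    }
}
params = {"image_name": "brocade-vrouter", "flavor": "m1.medium"}
instance = resolve_inputs(vnfd, params)
print(instance["vdu1"]["image"])   # the per-instance image name
```

The same value file can be swapped out per deployment while the descriptor in the catalog stays untouched.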
It's not tied to a particular VNF type, like a router or a DPI VNF; it can be used broadly for any VNF type. That's our focus, that's our vision. Another thing is that, being an open source OpenStack project, it needs to be multi-vendor. They kind of go hand in hand, but it's important to take into account that this catalog can hold multiple vendors' VNFs. We're going to show a few using Brocade VNFs, but it could be a VNF from any vendor. So that's the first step. Once things are in the catalog, you can actually instantiate. We use Heat for that. There are different drivers you can potentially plug in, but the current default is Heat. We also have Nova, and in fact we envision there might be other, newer infra drivers down the line to orchestrate; this could even be containers, perhaps. This step would actually place the VMs in the target VIM. You can imagine there are multiple VNFs, and we're going to demonstrate using a Brocade VNF in this session. Once the VNF is up, the next thing you want is to get the VNF configured for the services it's supposed to provide. That's the next step. Tacker has a management driver framework where it can use a management driver either to directly configure the VNF or, as we're going to demonstrate in this case, use an SDN controller to do the configuration. We're going to use the Brocade SDN Controller, which is a commercial version of the OpenDaylight controller, and it will, in turn, use NETCONF/YANG to configure the VNF. There is also a plan to do SFC once the VNFs are up; that's still under development, but we envision it will follow the configuration of the VNF. Another key aspect of Tacker is, again, monitoring. This is important.
Instantiating is one thing, but having operators know what the VNF is doing and how it is doing is very, very critical. This is something we enable, and there are a couple of ways you can do it. Tacker has a few built-in monitoring capabilities that are simple but ready to go, which you can make use of. But your VNF's health might be determined in different ways, right? You might need to probe the servers running in your VNF to actually determine its health. This is something we enable through the Tacker framework, where you can bring in your own monitoring driver and Tacker will basically make use of it to determine thumbs up or thumbs down. And if there is an issue, it can actually detect that and heal the situation; respawn, for example, is something we support today. So with that background, I will hand it over to Jeff to continue with the demo. So thank you, Sridhar. As Sridhar mentioned, I actually have two demos here today. The first one is going to show full lifecycle management of a Brocade vRouter VNF, and we'll take a look at some of the features that Sridhar mentioned. One is the management driver: we use a BSC management driver, which will help connect this VNF to a Brocade SDN Controller. And we'll take a look at some of the monitoring features Sridhar mentioned, so we can see how Tacker monitors the VNF and how it can respond and perform certain actions in case it detects there's a failure.
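The monitoring framework described above, a pluggable health probe plus a failure action such as respawn, can be sketched roughly as follows. This is not Tacker's actual driver API; the class and parameter names are illustrative. The key idea is that the probe is injectable, so a built-in ping check or a vendor-specific application-level check can be swapped in without changing the surrounding logic.

```python
# Illustrative sketch (not Tacker's real driver interface) of a
# pluggable VNF health monitor: a probe callable decides up/down,
# and after enough consecutive failures a failure action (e.g.
# respawn) is triggered.

class VnfMonitor:
    def __init__(self, probe, on_failure, max_failures=3):
        self.probe = probe            # callable(address) -> bool
        self.on_failure = on_failure  # callable(address), e.g. respawn
        self.max_failures = max_failures
        self.failures = 0

    def check(self, address):
        if self.probe(address):
            self.failures = 0
            return "healthy"
        self.failures += 1
        if self.failures >= self.max_failures:
            self.on_failure(address)
            self.failures = 0
            return "respawned"
        return "degraded"

# Simulate a VNF whose management interface stops answering probes,
# like the ICMP-blocking firewall rule used later in the demo.
respawned = []
alive = {"10.0.0.5": True}                      # placeholder address
monitor = VnfMonitor(probe=lambda a: alive[a],
                     on_failure=lambda a: respawned.append(a))

first = monitor.check("10.0.0.5")               # healthy while reachable
alive["10.0.0.5"] = False                       # management path goes dark
states = [monitor.check("10.0.0.5") for _ in range(3)]
print(first, states)   # third consecutive failure triggers the respawn
```

A real driver would probe over the network (ICMP, HTTP, or an application-specific check) instead of reading a dict, but the thumbs-up/thumbs-down decision and the action hook look the same.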
And then the second demo will show how Tacker can provision not only simple VNFs, but highly complex ones as well. In this case, we'll be showing how we can provision a Brocade Connectem virtual Evolved Packet Core through OpenStack Tacker. Okay, so I'll just move straight into the demos here. For the first one, we'll take a look at how we can provision a Brocade vRouter VNF, and we'll do this all through the OpenStack dashboard. So if we log into the OpenStack dashboard, Tacker has an NFV section within Horizon. And the first thing we can do here is take a look at the VNF catalog, right? This is a listing of all the different VNFs that users can deploy through Tacker. Okay, so the first step we want to do is just onboard a new VNF; this will be for the vRouter. This is an example of the TOSCA template used to describe this particular VNF. If you look at the left side, this is the template that specifies some global parameters, and then there's a section specific to each of the VDUs. So there is a VDU here, and it contains information like the virtual machine image it's going to pull from the Glance database, and the instance type, or flavor. And if you notice where the red arrow is, there is the BSC management driver. Upon instantiation of this VNF, the BSC management driver will actually mount it on the SDN, or OpenDaylight, controller, and from there it can be further managed and configured through OpenDaylight. A little further down, there's the monitoring policy. In this case, it's a simple ping driver, which will ping this VNF in the background; if it detects any kind of failure, it can perform a certain action, and in this example, it will do a respawn. Then there are two other files, on the upper right.
In case you want to separate data from the actual VNF descriptor template, you can create a separate parameter file that can be used for each instance of a particular VNF. So we have some user data parameters in that file. And then at the bottom, we have a VNF BSC configuration file, which basically contains some of the information required for Tacker to authenticate with the OpenDaylight controller as well as to log in to the vRouter itself. So that's basically the template describing this particular VNF. The first step we want to do is onboard this VNF into the Tacker database. We can click on the onboard VNF button in the VNF catalog, give this entry a name, and then just browse and upload that YAML file we were just looking at, okay? And so now this template is being stored in the Tacker database, and you can see it there. The next section, the VNF Manager, is where we can deploy an instance of any particular VNF. So now we want to deploy the vRouter VNF. We click on deploy, give this particular instance a name, and call it vRouter demo. We select the entry in the catalog that we just uploaded. And then there are two other sections, right? One is for the parameter value file that we just looked at, and the other is for the configuration requirements for the OpenDaylight controller, okay? Then we click the deploy button, and now this VNF is being provisioned in the OpenStack cloud, okay? You can see the status is active, so this VNF has been launched. If we move over to the Nova portion, we can look at the instances, and we can see the vRouter here, provisioned as a Nova instance, and we can get directly to the console. We should probably disable our instant media. Yes, lesson learned. Okay, so, continuing on. Here you can see the console of the vRouter that has just been provisioned through Tacker, right? We can log in through the CLI, and we can see it's the actual Brocade vRouter, okay?
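As an aside, on OpenDaylight-based controllers the way a management driver typically mounts a device like this vRouter is a RESTCONF call against the netconf-connector topology. Here is a hedged Python sketch of building that request. The topology path follows OpenDaylight's published netconf-connector convention; the addresses, node name, and credentials are placeholders, not values from the demo, and a real driver would also handle authentication to the controller itself.

```python
# Hedged sketch: constructing the RESTCONF request that mounts a
# device as a NETCONF node on an OpenDaylight-style controller.
# Only the request is built here; sending it (e.g. an HTTP PUT with
# controller credentials) is left out.
import json

def netconf_mount_request(controller, node_id, device_ip, user, password):
    url = (f"http://{controller}:8181/restconf/config/"
           "network-topology:network-topology/topology/"
           f"topology-netconf/node/{node_id}")
    payload = {
        "node": [{
            "node-id": node_id,
            "netconf-node-topology:host": device_ip,
            "netconf-node-topology:port": 830,   # NETCONF over SSH
            "netconf-node-topology:username": user,
            "netconf-node-topology:password": password,
            "netconf-node-topology:tcp-only": False,
        }]
    }
    return url, json.dumps(payload)

url, body = netconf_mount_request("192.0.2.10", "vrouter-demo",
                                  "10.0.0.5", "vyatta", "secret")
print(url)
```

Once the PUT succeeds, the device shows up in the controller's NETCONF topology, which is exactly what the topology manager view in the next step displays.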
And if you remember, we were using the BSC management driver, right? So after this thing has been spawned, it should have been mounted as a NETCONF device within the OpenDaylight controller. This is the GUI for the Brocade SDN Controller, and we can just go ahead and log into the controller. Once we log in, there should be two applications available here. There's a topology manager, which we can take a look at first; this will just show us the current topology of NETCONF devices that have been mounted. And you can see there's one NETCONF device there. If we zoom in a little, we can see that's the vRouter we just launched through Tacker, okay? So this vRouter is now available through OpenDaylight, for applications to configure and manage. We can go back to the beginning. And we have this other application here, called the vRouter Firewall Configurator. This is essentially just a quick little demo application used to push down a firewall configuration onto that VNF instance. In reality, you may have your own application, or automation from a higher-level orchestrator, that automates pushing some sort of configuration onto this VNF; here it's just a very simple demo. We can go to the Configurator and select the device we want to push the firewall configuration onto. And then you can see it's successfully been created. So now if we go back over to the console on our VNF and look at the running configuration, you can see there is indeed now a firewall configuration on the vRouter itself, okay? And the other feature I wanted to talk about was the monitoring, right? Basically, we have that ping monitor going on in the background. So in order to simulate a failure of this particular VNF, I'm just going to disable the management interface, and I'll do that by applying that particular firewall rule, which is a block-ICMP rule.
I'm going to apply that rule to the management interface on this vRouter. And once that happens, Tacker will no longer be able to monitor this VNF; it will detect that it has failed. In a second, you should see that once it has failed, Tacker will destroy this VNF and respawn it once again. So here I'm committing the action. And in a second or two, okay, you can see this VNF has been destroyed. If we move over to the VNF Manager section again, we can see this VNF has been marked as dead, okay? So now Tacker, in the background, is going to respawn this instance and create a new one for us. And you can see the status is already in the active state, so it's already provisioning the new VNF. Again, if we go to the instances tab, we can see there is now a new instance of the vRouter that has been created. And again, we can go to the console and see that it's booting up right now, okay? So we can log in and see that it's been successfully reprovisioned after a failure. And then it would obviously be remounted back on the SDN controller as well, okay? So that's basically it for the first demo. We have one more quick demo. That was a very simple VNF that we provisioned; this next case is going to be a slightly more complex one. In this case, we're going to provision a virtual Evolved Packet Core, okay? I won't get into all the details of the vEPC architecture, but as you can see, there are a lot of different components to it. There are actually seven different VDUs that we'll be provisioning through Tacker, with things like an element manager, a session database, et cetera. So seven VDUs, plus an additional one for the EMS, or the management system. Moving along, you go through the same process, right? You log into the OpenStack dashboard and go to the VNF catalog, and here we can onboard our vEPC template, okay?
So just to take a look at the template really quickly before we onboard it: it's similar to before, except in this case there is a separate section for each of the individual eight VDUs, okay? So this is actually a somewhat longer descriptor. If we drill into one of the VDUs, in this case VDU two, we can see data unique or specific to each VDU in this particular template. And there's a user data section here, which basically passes a script to this instance upon instantiation, and this will configure it for its particular role in the vEPC architecture, okay? So this is our template. We can go ahead and onboard it into the catalog. We give it a name and select the template file we just took a look at, and again, this template is stored in the Tacker database. Now, if the user wants to provision an instance of this packet core, we can go to the VNF Manager and deploy this VNF. In this case, there are no parameterized values or any configuration file; it was just that single TOSCA template we initially looked at. So we'll just select that template from the list and click on deploy. Tacker is now converting a portion of that TOSCA template into a Heat template so that OpenStack can handle it. You can see it's in a pending-create state right now. If we go to the Heat section within the dashboard, the orchestration section, we can take a look at all the different resources that are actually being provisioned now. You can see there's a stack in the process of being created, and if we drill down into that stack, you initially see the GUI representation of all the different resources being provisioned, and then you can look at the individual resources. And you can see that, at this point, pretty much all the different resources have been provisioned already; there's just one lagging a little bit here.
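While the stack finishes, the TOSCA-to-Heat translation just mentioned can be sketched in simplified form. This is not Tacker's actual translator, and the field names are illustrative; the real code also handles networks, security groups, monitoring, and more. The point is only the shape of the mapping: each VDU in the descriptor becomes a Heat `OS::Nova::Server` resource, with the per-role script carried through as user data.

```python
# Hedged sketch of mapping descriptor VDUs to Heat resources
# (illustrative field names; the real translation covers far more).

def vdu_to_heat(name, vdu):
    return {
        name: {
            "type": "OS::Nova::Server",
            "properties": {
                "image": vdu["image"],
                "flavor": vdu["flavor"],
                # Per-role boot-time configuration via user data.
                "user_data": vdu.get("user_data", ""),
            },
        }
    }

# Eight hypothetical VDUs, like the vEPC template's separate sections.
vdus = {f"VDU{i}": {"image": "vepc-img", "flavor": "m1.large",
                    "user_data": f"role={i}"} for i in range(1, 9)}

heat_resources = {}
for name, vdu in vdus.items():
    heat_resources.update(vdu_to_heat(name, vdu))

heat_template = {"heat_template_version": "2013-05-23",
                 "resources": heat_resources}
print(len(heat_template["resources"]))   # eight VDUs, eight servers
```

Heat then takes this template and creates the whole stack, which is exactly what the orchestration section of the dashboard is showing.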
But while that one's still being created, we can take a look at the network topology in Liberty. We have the new network topology GUI that's available in Liberty, so if we go to the network and network topology sections, here you can see all the instances of our packet core that have been created. There are the different instances, all eight of them, attached to their appropriate networks within the OpenStack cluster. And if we go back to the VNF Manager, you can see that this instance is in an active state. So this Evolved Packet Core, according to this, has been successfully provisioned within the OpenStack cloud using Tacker. Now I'm just going to run a validate-deploy script here, which is basically going to open up a terminal window into six of these VDUs, and I'm going to log in and validate that this has been successfully provisioned through Tacker. So I just connect; there's a validate-deploy script available, and I'm going to run it on each of them. What we should see is basically green, showing that this has been successfully provisioned and each VNFC is in an active state and ready to be used as a packet core. It just takes a second. So everything is green, everything is ready, everything looks good. There's one final detail we can take a look at, and that is to log into the GUI EMS app of the packet core solution. We can connect to that over a web browser, and if everything is successful, we should see that this EMS app has successfully discovered each of the different VNFCs that are part of this whole VNF, and that it is able to successfully monitor them and view their state. You can see on the left, in the orange tab, there are seven VNFCs, and it lists each of them below, okay? So that's basically it for the two demos. I'll hand it back over to Sridhar for any closing. Thank you. Thank you, Jeff, for the demo. Just want to take the screen back to this. So the value here... I don't know if we can move back.
So if we can get the slides... oh, the slides. So instantiating a vEPC is usually a very, very high-touch thing; it involves a lot of VNFs and VMs to be instantiated. Here, the power of Tacker and templatizing is that you can write that template once and instantiate it any number of times. Tacker can maintain a collection of VNFs, both simple and complex, and it can instantiate them in a push-button fashion. That's the goal for Tacker. So, in conclusion: the features that you saw here are mostly geared towards the VNFM point of view, but we are actually adding a lot more features moving into the NFVO area as well. In fact, it's complementary: currently Tacker instantiates only in the same VIM, and we are removing that restriction and introducing the ability to instantiate VNFs in any target OpenStack deployment, any target VIM. And we have a few other features... there you go, thanks, Jeff. Beyond that, SFC, service function chaining, is a widely asked-for feature beyond a simple VNF manager. There's a lot of interest there, and we are actually introducing the SFC APIs in Tacker in the upcoming cycle. The other important thing is that a VNF needs not just simple placement; it needs to be placed efficiently. That's another thing we're going to take up in the next cycle, so that placement is optimal and performance-efficient. And eventually we also want to introduce network service descriptor support; this is something called out in ETSI MANO. So I guess this is sort of the roadmap we are working towards. Again, I welcome anyone to join the community. This is an interesting space, this whole NFV orchestration area; we have a lot of things to do, and I believe we are just in the beginning phase of this whole endeavor.
Here is some information on where you can contribute on the open source side, and you can actually reach out to some of us if you need more information on anything specific around VNFs or the Brocade SDN Controller. Thank you.