OK. Hi, colleagues. My name is Adrian, and with me today is Sasha. I am an engineering manager at Mirantis, and Sasha is a senior deployment engineer. Today we would like to share our experience of building SDN solutions with Mirantis OpenStack and Fuel. Our agenda covers the following topics: how Fuel plugins work and how they help in building Contrail-plus-OpenStack clouds, NFV features with Contrail and OpenStack, and a demo of how we did it. So let me quickly describe the Fuel Contrail plugin. The Fuel Contrail plugin makes it possible to deploy Mirantis OpenStack with Juniper Contrail SDN as the network backend. The plugin version shown today supports Fuel 7.0, Mirantis OpenStack Kilo, and Juniper Contrail 3.0. The plugin automates the deployment of Contrail controller nodes, provides settings for Contrail services, and adjusts the configuration of OpenStack components. Now I will hand over to Sasha, who will explain in more technical depth how it works.

Thank you, Andrey. I would like to start with a description of Fuel plugins and how they can be used to customize OpenStack deployments with Fuel. Plugins are additional software packages that can be installed on the Fuel master node, and they add customizations and new capabilities to environments built with Fuel. Technically, they are RPM packages that can be installed, upgraded, or removed on the master node. Inside such an RPM package you will find a repository of packages that the plugin brings into the environment, some metadata about the plugin, UI settings for the plugin, and its main part, the deployment tasks. Most plugins' deployment tasks are written in Puppet, and they work just like the deployment tasks in the Fuel core library, so plugins can inject their tasks into the Fuel deployment flow. Fuel offers an open-source framework for creating new plugins and provides UI integration for plugin settings: if you install a plugin, you can see its settings on the Settings tab in Fuel. Fuel also provides tools to develop, build, and test plugins.

Starting from Fuel 7.0, there are some new features in the plugin framework that helped us very much in building the Fuel Contrail plugin. The main feature for us was the ability to define a new node role via a plugin, which is basically a new kind of server role deployed with Fuel. Another feature is the ability to allocate a custom highly available VIP, which uses the same Fuel HA capabilities as the other OpenStack VIPs. The third feature, which was very useful for us, is plugin-defined disk partitioning: using the plugin metadata, we can define a separate disk partition to be used by the features that the plugin provides.

Let's move to a high-level overview of the logical architecture of an environment built with the Fuel Contrail plugin. In this diagram, the green boxes are the nodes deployed by a basic Fuel installation, and the blue boxes are the items added or configured by the plugin. I will start with the networking part. Fuel provides a private network for Neutron tenant communication, and the Fuel Contrail plugin uses it as the IP fabric for Contrail. The arrows show the communications between Neutron, HAProxy, and the Contrail API and web UI.
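Before moving on to the node roles, here is a rough sketch of how the plugin pieces just mentioned (metadata, UI settings, node roles, disk partitioning, deployment tasks) map to files inside a plugin package; the file name and layout below are illustrative, not taken from the talk:

    # Listing the contents of a Fuel plugin RPM (file name is illustrative)
    rpm -qpl contrail-3.0-3.0.0-1.noarch.rpm
    # Typical contents of a plugin built with the Fuel plugin framework:
    #   metadata.yaml            - plugin metadata (name, version, supported releases)
    #   environment_config.yaml  - UI settings shown on the Fuel Settings tab
    #   node_roles.yaml          - new node roles introduced by the plugin
    #   volumes.yaml             - custom disk partitioning for the plugin's roles
    #   deployment_tasks.yaml    - tasks injected into the Fuel deployment graph
    #   deployment_scripts/      - the Puppet manifests behind those tasks
    #   repositories/            - the package repository shipped with the plugin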
Here you can see three new node roles that are added to environments: the Contrail database node, the Contrail config node, and the Contrail control node. Further on, I will describe in more depth what changes the plugin introduces on each kind of node.

Let us start with the Contrail DB node. Contrail DB is a custom role introduced by the plugin, and on it the plugin tasks deploy software components such as Cassandra; we use the package and configuration recommended by Juniper. It also installs ZooKeeper and Kafka. Please note that we use a custom partitioning scheme for Contrail DB nodes to ensure there is enough space for the Cassandra database files. The minimum size is 256 gigabytes, so if you do not have that much disk space on the node, you cannot start the deployment.

Our next node role is the Contrail config node. This role covers three Contrail components. The first is contrail-config, which includes the discovery service, the API servers, the schema transformer, and the device manager. This node role also gets contrail-analytics, which includes the collector, the analytics API, and the query engine. We also install the Contrail web UI with its web server and middleware services on this kind of node. You can combine this role with the database role on the same server, but it is recommended to have an odd number of nodes for high availability.

Let's move to the Contrail control node. Here we install components such as contrail-control and contrail-dns and provide configuration for them. I would also like to mention that the Fuel Contrail plugin uses custom VIPs to provide endpoints for all the Contrail services, so it is now possible to distribute the Contrail node roles across different L2 segments to achieve high availability and resiliency. I will describe this in more detail when we come to the controller nodes.

So what actions does the Fuel Contrail plugin take on the OpenStack controller nodes? It introduces the following changes. Of course, it installs the Contrail core Neutron plugin so that we can use Contrail as the networking backend. It also configures an HAProxy frontend in the private network for the RabbitMQ service used by Contrail; we don't run a separate RabbitMQ service on the Contrail database hosts, but reuse the same RabbitMQ that OpenStack Nova uses. The next plugin tasks on the controller nodes create the HAProxy configuration for the Contrail web UI and API in the public network, so the Contrail web UI and API are available under the same IP address as Horizon. We also support SSL: if you enable SSL in Fuel and have a certificate for the public endpoint, this certificate can be used for the Contrail web UI as well. In addition, we use the custom VIP feature of the plugin framework to allocate an additional VIP and run HAProxy for the endpoints of Contrail services such as discovery and the API. The plugin also installs some additional packages for Contrail-specific Heat templates, for example for service chaining, and adds the Contrail Ceilometer plugin so that Contrail-specific metrics are accounted by Ceilometer.

On the compute nodes we have the following tasks. First, we update the Nova configuration file to make sure that we are using Neutron with the proper Neutron and Keystone endpoints. We also remove the Open vSwitch packages and kernel modules, because we cannot use the same bridge for vhost and for Open vSwitch, and we configure the private network interface to be used with the Contrail vhost: first we move it out of the private bridge, then create the configuration for the vhost0 interface.
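As a quick way to see the result of these compute-node changes on a deployed host, checks like the following could be run; this is a generic sketch based on standard Linux and Contrail tooling rather than commands shown in the talk:

    ip addr show vhost0                      # the private/IP-fabric address now lives on vhost0
    contrail-status                          # state of the vRouter agent and other Contrail services
    lsmod | grep -E 'openvswitch|vrouter'    # openvswitch module removed, vrouter module loaded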
We also configure iptables to allow incoming and outgoing traffic on the Contrail ports. The main task for compute nodes is to install the vRouter and the vRouter agent along with the vRouter DKMS kernel module, and we provide the proper configuration files for the vRouter agent, where we specify the IP addresses and interface names for the vRouter.

Starting from plugin version 3.0, we enable some Contrail NFV-specific features to achieve high network performance: SR-IOV and DPDK. Let me give you some insight into how the plugin enables SR-IOV for an environment. We have another custom node role, named SR-IOV compute, which can be applied to the hosts that you want to act as SR-IOV-enabled hosts. If this role is assigned to a host, SR-IOV is enabled on all SR-IOV-capable interfaces that are not assigned to existing Fuel network groups and are not bond slaves, to ensure that you have dedicated SR-IOV interfaces. The plugin also applies custom kernel settings such as intel_iommu to let SR-IOV function, and the plugin scripts create the PCI device entries in the Nova compute configuration to let Nova know that it can use these devices. We also configure the Nova scheduler on the controller nodes to add the SR-IOV filter, the PCI passthrough filter.

What about DPDK? By default, the Contrail virtual router runs as a kernel module. To get better performance for NFV, the plugin scripts can install and configure the user-space vRouter instead of the kernel one. To get it installed, you need to enable DPDK globally in the plugin settings and assign the custom DPDK compute role to the host. But to run the user-space vRouter, we need to satisfy some prerequisites. They include setting up huge pages on the compute host and creating a custom flavor with huge pages enabled. A host aggregate is also created that includes our DPDK-enabled computes, so that DPDK-enabled VMs run only on hosts that support DPDK. We also add CPU pinning configuration for the vRouter process to specify the number of cores it can use. In addition, we provide some patching for Nova to support the user-space vRouter, because it is not supported out of the box in Fuel 7.0; I hope this will be included in the next releases. And we have a toggleable option to install QEMU and libvirt from the Contrail repository, because the packages from Mirantis OpenStack do not support huge pages and the user-space vRouter, but you can enable installation of these packages from the Contrail repository.

Now I will show you a demo of how you can create an environment with the Fuel plugin enabled, and we will also show some basic Contrail operations on the deployed environment. So hold on a second, I will start the video. OK, let us start. Here in our folder we have the Contrail plugin package and the Contrail install packages. Our first step is to copy these packages to the Fuel master node; I see that they are being copied. The next step will be to install the Fuel Contrail plugin on the master node. Let's wait for the packages to finish copying. We SSH to the Fuel master node and verify that we don't have any plugins installed yet. Here we go: no plugins. Let's install the Fuel Contrail plugin using the fuel plugins --install command. Here we go. OK. Let's check that the plugin was installed successfully and issue fuel plugins --list one more time. OK, so we have Contrail here.
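In command form, this part of the demo boils down to roughly the following; the package file names are illustrative rather than the exact ones from the video:

    # Copy the plugin RPM and the Contrail packages archive to the Fuel master node
    scp contrail-3.0-3.0.0-1.noarch.rpm contrail-install-packages.tgz root@<fuel-master>:/tmp/
    # On the Fuel master node: list plugins, install the Contrail plugin, list again
    fuel plugins --list
    fuel plugins --install /tmp/contrail-3.0-3.0.0-1.noarch.rpm
    fuel plugins --list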
The next step is to unpack the Contrail install packages file and populate the local repository, which contains all the Contrail packages. This repository will be available from the master node. First we copy the packages to the plugin folder on the master node and run the install.sh script, which unpacks the packages and creates the repository indexes. This can take some time. Almost done. After the index file is created, each of the slave nodes can install packages from this repository. OK, we are good.

The next step is to create a new OpenStack environment in Fuel. Let's switch to the browser window and enter the Fuel web UI. We create a new environment, give it some name, for example Contrail, select the proper hypervisor, in our case KVM, and select the networking, in our case Neutron with tunneling segmentation. We can also adjust some additional options for the environment. OK, our environment is created now. Let's check the plugins. Here we go: here we can enable the Fuel Contrail plugin and provide settings for it, for example the huge pages size and amount, the CPU pinning mask for the vRouter, and toggles for whether we need to patch Nova and install custom QEMU/KVM packages.

The next step for us is to configure the networking for our environment. We start by setting up the IP addresses for the public network, the gateway for the public network, the IP ranges for the public network, and all the other IP addresses so that they conform to our lab environment. Here we set the IP addresses for the private network, which will be used as the Contrail IP fabric. After we have saved these settings, we can start adding nodes. As you can see, we have custom node roles here, like SR-IOV compute, DPDK compute, Contrail config, Contrail control, and Contrail DB; all these roles are added by the plugin. Let's now add nodes to our environment and assign roles to them. We set up three OpenStack controllers here and add a node with all three Contrail roles: Contrail DB, Contrail control, and Contrail config. We also add a usual compute with storage and one compute host with SR-IOV and DPDK capabilities; this is the hardware server.

After we have added all the hosts, we can configure the network interfaces for them, excluding the one hardware compute. We click the Configure Interfaces button and drag and drop our networks onto the designated network interface cards. These network settings are common for all nodes that we have added in this group. Then we configure the networking for the hardware compute, which has a different number of network cards and a different layout of networks on them. After we have saved the changes, we can verify all network settings, check the range for the private network, and verify that our BGP gateway IP address is on the same network. After we've done this network configuration, we can run network verification. Fuel uses this feature to verify that all network cards are configured with the same network segments, that we don't have any rogue DHCP servers, and so on.

After the network verification succeeds, we can come to actually deploying the environment. Deployment takes some time, so we have skipped it, and we can continue with a view of the deployed environment. Here you see our nodes in the ready state, and we can proceed to Horizon. Let's copy the IP address of Horizon to access the Contrail web UI on port 8143 and log in with the default credentials. Here you see the status dashboard. You can see that we have two virtual routers and two computes.
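For reference, roughly the same state can also be checked from the Fuel master CLI instead of the web UI; a minimal sketch, assuming the standard Fuel 7.0 client:

    fuel env      # the environment should be in the "operational" state
    fuel nodes    # all nodes should be "ready", with their assigned roles listed
    # The Contrail web UI is published behind HAProxy on the public VIP
    # (the same address Horizon uses), on port 8143: https://<public-vip>:8143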
We have one control node running, one analytics node running, one config node running, and we have a database node. Let's check the status, verify that they are up and running, and verify that our BGP peers are OK here.

Let's log into Horizon and create our first instance. This will be a regular instance. We select parameters like the availability zone, in this case nova, without huge pages, use the default TestVM CirrOS image, and connect this instance to the net04 network. As you see, our instance is booting. OK, it is running. Let's use this IP address to enter the instance and verify that it's working. We can ping this IP address or SSH to it. Our environment uses a BGP router, so we can obtain external connectivity, for example by pinging the Google DNS.

OK, now let's do something more advanced, like creating an SR-IOV-enabled instance. The first step is to create a Neutron provider network: we specify the physical network and a segmentation ID for it and give it a name, for example sriov-demo-net. Now we can create a subnet for this network. Next we can create SR-IOV-enabled ports in Neutron: for example, we create a port in this sriov-demo network and specify the binding for the port, vnic_type direct. We also add a regular port for this instance in this network. Now we can spawn an instance with these ports. We specify the flavor with huge pages and the Ubuntu image, because CirrOS does not have the drivers for virtual functions, and we specify two NICs by port IDs: the first is the regular port and the other is the SR-IOV port. Here we go. We also specify a key pair to let us SSH to this instance. OK, our instance is booting now, and we can enter it via SSH, using the previously created key pair and the IP address of this instance. Now let's run the lspci command to check for virtual functions. As you see here, we have an Ethernet card, an Intel I350 Ethernet controller virtual function.

Now let's do some DPDK. We create two instances in the huge pages availability zone and specify the image with DPDK pktgen to verify that our DPDK setup is working and has sufficient performance. Let's wait for the instances to boot. Now we can enter the consoles of the instances and verify the pktgen configuration. As you see, the source and destination IP addresses and MAC addresses are specified here. Now we can start the pktgen program on the first node, and let's do the same on the second node: check the configuration and start the DPDK pktgen. Please be patient, it may take some time, I think around 30 seconds, for the traffic flow to start. We have 10 gigabit links here, and in a few seconds we will see the traffic flowing. There we go. Now we have around six to seven gigabits of traffic, and around one to two million 64-byte packets per second are flowing from one DPDK-enabled instance to the other. So that's it for the demo. I think we have some slides left.
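Here is a rough recap of the SR-IOV part of the demo in CLI form; the network name, provider attributes, CIDR, flavor, image, and key pair names are illustrative, and the port IDs are placeholders:

    # Provider network and subnet for the SR-IOV demo
    neutron net-create sriov-demo-net --provider:network_type vlan \
      --provider:physical_network physnet1 --provider:segmentation_id 100
    neutron subnet-create sriov-demo-net 192.168.100.0/24 --name sriov-demo-subnet
    # One SR-IOV port (vnic_type direct) and one regular port in the same network
    neutron port-create sriov-demo-net --binding:vnic_type direct
    neutron port-create sriov-demo-net
    # Boot the instance with a huge-pages flavor, an Ubuntu image, a key pair, and both ports
    nova boot --flavor m1.hugepages --image ubuntu --key-name demo-key \
      --nic port-id=<regular-port-id> --nic port-id=<sriov-port-id> sriov-demo-vm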
So what are our plans for the future? We are going to support new Fuel versions and new OpenStack versions with new Contrail releases. We are also going to work on VMware vCenter integration with Contrail, and we are going to implement TSN node support for bare-metal servers in Contrail. Here you can find some useful links. The first link is the wiki page on OpenStack about Fuel plugins, where you can find more information on how to build your own plugins and how to test them. There is also a link to the Mirantis plugin catalog, where you can see all the plugins, partner plugins and validated plugins from Mirantis. And the last link is a direct link to the GitHub repository of the Fuel Contrail plugin. So I think that's it. Questions? No questions, thank you so much. Thank you.