Our next speaker is Tomáš Čechvala, who is going to be talking about FastDataStacks, an OPNFV project related to the integration of VPP and other FD.io projects into a stack. Okay, so, do you still need to set up your presentation? Okay.

So hi everyone, my name is Tomáš, I work on FDS, I contribute to this project, which is based on software-defined networking, or let's say virtual network functions. So, what do we do? OPNFV FastDataStacks is a platform that gives you a fairly complex solution in which you just define the business logic that you want to implement in your network. You don't have to care about how the devices are configured; all you need to define is some kind of topology or policy, some rules about who can speak with whom and how, and you don't need to care how it's done in the underlying network. OPNFV is a pretty complex suite, but it's designed so that you can just install it without going through the complicated installation of all the individual components used in there, and once you have it you can use it out of the box, let's say.

In this picture there's a very brief description of the layers. When we decompose our project, we operate mostly on the network controller layer, where all those routers and switches live and where we try to manage and configure them through the automation we implement in the components above. Then there's a virtual machine layer where we actually spin up the VMs in our suite, so you just say "I want a VM on that node" and the suite gives you the VM and you can use it right away. One layer above there are services that take care of managing your nodes and running services related to them, let's say a metadata service that can inject some configuration into your node, and other services.

Okay, then there's OpenDaylight, which has been discussed before; we use it for configuring various underlying network types. We currently work on VPP, we have also done some work on OVS, and physical devices can also be incorporated into our suite. Speaking about GBP: GBP (Group Based Policy) is a plugin in OpenDaylight that allows you to transform a policy into a network configuration of devices, so you take your business-logic use cases and GBP transforms them into a config. So, let's go on.

Okay, this is another view of our suite, again in layers. As you can see, at the bottom there are the forwarders, OVS and OVS-DPDK, which are already available in some scenarios, and in our use case we add VPP. So our scenario is mostly OpenStack for controlling VMs and for providing the abstraction layer at the very top. Then there is OpenDaylight, where we use GBP for receiving those requests from OpenStack and transforming them for the underlying network, and also the hypervisors for hosting the VMs; that's in the middle, and then at the bottom there are the forwarders. So we're currently trying to involve VPP as the underlying technology, and there are several deployment scenarios, some already working and others still in progress.
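[For illustration, not part of the talk: on a deployed stack you can check from the OpenDaylight Karaf console that the controller-side pieces mentioned above, GBP and the NETCONF connector, are actually loaded. The exact feature names differ between releases, so treat these as rough placeholders.]

    # In the OpenDaylight Karaf console shipped with the controller
    opendaylight-user@root> feature:list -i | grep groupbasedpolicy    # GBP base features plus the VPP renderer
    opendaylight-user@root> feature:list -i | grep netconf             # NETCONF connector, used to mount the Honeycomb agents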
The first one you see, with Apex, OpenStack, OpenDaylight L2, KVM and VPP, is the scenario that's already working. Then there is L3, which is in progress; there we route traffic between the nodes without touching the kernel space at all, keeping the routing in user space by using DPDK. There is also another scenario in progress where OpenStack talks directly to VPP. It exists, but we don't really work on it; someone else does.

And here are some more details about our suite. OpenStack is on top, and the Neutron plugin there is a key component, because it allows you to define networks, subnets, ports, security groups and all this abstraction, which is pretty simple to define. Then there is a REST API, so Neutron sends a REST request to OpenDaylight. There is a project, Neutron Northbound, that has a model for this; it's also a model-driven plugin. That's where the data starts. Underneath there is the GBP project, which listens to this data model, reacts to changes in the config data store, and, by using the renderer manager and the VPP renderer, transforms all this policy into a configuration on FD.io.

FD.io is actually a collection of projects that includes DPDK, VPP and also Honeycomb. Honeycomb is like a lightweight ODL; it's capable of NETCONF communication with ODL. So as you can see we use NETCONF there, and because VPP doesn't have any API that could be used directly by ODL, Honeycomb translates this NETCONF request and then uses the C API that is available on VPP. So we don't configure VPP directly from ODL, we also use Honeycomb for that, which means Honeycomb has to be on every node together with VPP. And there is also a system installer, Apex, and there are also system tests involved in OPNFV. Apex is the project, or the program, that installs all of this for you automatically, and the tests ensure that everything is working as it should.

So here's a detailed view of how, when a port is created, it goes down to VPP from the very top, from Neutron in OpenStack. Let's say we create a port with some binding details. Those data are written into Neutron Northbound in ODL. Then they are mapped to GBP policy, because GBP is a standalone application: you can define any policy directly in there, it doesn't need OpenStack, it can work on its own. So the abstraction that comes from OpenStack has to be transformed into the one used by the GBP plugin in OpenDaylight. And then we have distributed rendering of the configuration onto different nodes, and different types of nodes. So if there is a VPP node, the renderer manager creates a configuration for the VPP renderer, and the VPP renderer, either by using the topology manager or by sending requests directly to Honeycomb, can configure interfaces and bridge domains on VPP.

The topology manager component is not a part of GBP. It's a standalone plugin, the Virtual Bridge Domain (VBD) project, and it helps you create a full-mesh topology based on where you create the bridge domain. So if you have a bridge domain with the same ID on different nodes, VBD will create and configure VXLAN tunnels between them in a full mesh. So it kind of helps you. And the VPP renderer configures the interfaces on the VPP itself. So this is how it's currently done.
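[Just for illustration, this isn't from the talk: the hand-typed equivalent of what VBD ends up configuring on one compute node looks roughly like the commands below. In the real stack this is pushed by ODL through Honeycomb over NETCONF rather than typed into the VPP CLI, and the addresses, VNI and bridge-domain ID are placeholders.]

    vppctl create vxlan tunnel src 192.168.1.10 dst 192.168.1.11 vni 100   # tunnel towards the peer node
    vppctl set interface state vxlan_tunnel0 up
    vppctl set interface l2 bridge vxlan_tunnel0 1                         # add the tunnel to bridge domain 1
    vppctl show bridge-domain 1 detail                                     # check the members of the bridge domain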
Okay, here's a workflow, let's say, of an OpenStack deployment where some of the nodes, or one of the nodes, runs VPP. OpenStack doesn't have any knowledge about which node is a VPP node and which is an OVS node unless we tell it directly, or unless ODL tells it. There are features for periodically scanning the ODL data store and looking for the configuration of a given node. So it's ODL's job to write the data for OpenStack into its data store correctly, so that OpenStack can read those data and save the proper configuration for a given node into its own database. It's called the agent database and it lives in the Neutron project. So whatever is configured by networking-odl, those data are parsed and saved into this agent database.

Then later, when you want to create a Nova instance, to deploy a VM on a given node, before Nova actually creates it, it looks into this database for the configuration of that node. If it finds the configuration, it uses it and then creates a vhost-user interface, right there in the corner. So Nova creates a vhost-user interface in server mode, and going the other way, ODL receives a notification that a port should be created (networking-odl tells it), and then GBP configures a vhost-user interface on the VPP side, but in client mode. So when VPP detects the socket file of the VM and opens it, the VM is successfully bound to VPP.

Okay. So here we have the L2 scenario that we currently support. As you can see, there are bridge domains on every node, and we have the same bridge domain ID on these three nodes, so the VBD project configures the VXLAN tunnels between them for us. And because it's an L2 scenario, ODL doesn't route packets, it only switches them, forwarding them through the tunnels. So we use the qrouter, which is an OpenStack component, for routing the traffic. All we need to do here is to have some kind of centralised node where we create the tap interfaces; they connect to the qrouter, or let's say the qrouter takes them into its namespace, and whenever there is traffic going from one bridge domain to a different bridge domain, it has to go through the qrouter.

The next scenario is the L3 scenario, where we do the routing by using GBP and OpenDaylight, so there's no need for a qrouter anymore. All we do is configure routing on VPP. What does that mean? It means that you need to specify a VRF in which you would like to forward or route the packets, and into every bridge domain you need to assign a BVI interface. It's like the interconnection between a bridge domain and a VRF; that's what the BVI interface does. It's an interface sitting on the L2 side, in the bridge domain. And the third scenario I was talking about, the one without GBP or ODL entirely, is shown here. There has to be an agent on every node, which is actually managed by OpenStack, and OpenStack talks directly to VPP and configures it with the C API.

Okay, and now here's a simple cookbook for how to create a VM. What you need to do is get an image, as shown there, and then, by turning on huge pages and using the correct flavor, you can create a VM out of it. There are also commands for creating a network, a router, subnets and ports. And then nova boot spins up the VM, and GBP ensures that it gets connected to VPP.
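[For illustration only, since the slide itself isn't reproduced here: the cookbook boils down to roughly the following with the classic glance/neutron/nova clients. The names, the hugepage flavor key and the addresses are placeholders, not the exact values from the demo.]

    glance image-create --name cirros --disk-format qcow2 --container-format bare --file cirros.img
    nova flavor-key m1.small set hw:mem_page_size=large          # hugepage-backed memory, needed for vhost-user
    neutron net-create net1
    neutron subnet-create --name subnet1 net1 10.0.0.0/24
    neutron router-create router1
    neutron port-create --name port1 net1
    nova boot --image cirros --flavor m1.small --nic port-id=<port1-uuid> --availability-zone nova:compute1 vm1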
Okay, so here are some outputs of such a demo, but let me show you a brief video that we pre-recorded. Okay, let me stop it for a while. So this is a pretty complicated scenario that we're currently trying to work with. As you can see, there's the VXLAN tunnel, highlighted in green, for which we would like to do some performance testing; we're currently about to test the performance. But in this video we're just doing switching between the green bridge domains on those two nodes. So, it should be running.

Okay, so here you can see the interfaces on VPP, that's command-line output. For the VXLAN tunnels we need the IP addresses so that the nodes can reach each other. Okay, then we install and start Karaf, and there's a command for seeing whether there is a connection between the remote nodes and the controller. Okay, so this is probably the other node, the IP address of another interface. These two are going to be used for the VXLAN tunnels.

Okay, and now we start creating some abstractions with OpenStack. There's a service listing, so the services that Nova uses and the services that Neutron uses. As you can see, we have an L3 agent enabled; we would use it if we tried to ping the external networks, because some routing has to be done there. We would put a BVI interface into a bridge domain and then into an external network, and then we could go outside. But in this video we'll stay inside. As you can see, there's a metadata agent, which injects some configuration into the VMs, and also a DHCP agent that runs the DHCP service on the control node so that the VMs can obtain an IP address.

Okay, there's a network being created, and a subnet. So this is the network. Here we have the networks, the external one and the one we created, and also the subnets, the external one and the one we created. A router would come in handy, but we're not going to route here. And now we create a port, which at this point is really just an abstraction. It's not bound yet; it's going to be bound when we create a VM, so the binding details will be filled in after Nova successfully creates the instance.

Okay, so here is the command for creating... yeah, now we don't see anything. Okay, so now we try to spin up a VM, the first one, and then a VM on the second node. As you can see, the availability zone defines the node on which the VM should be created. This is Horizon, the OpenStack interface for managing the entities that were created. So we log in there and look for the VMs we created. They are booted. There's the config; an IP address was assigned, which means that DHCP could assign an IP address through the bridge domain. And there's the other one. There's also going to be a lease in the config, and now some traffic. So now we're pinging the qrouter, which means that VPP created a BVI interface in the bridge domain. Now we're pinging the DHCP, it should work. And now the other VM.

Okay, thank you very much for your attention, that would be everything from my side. So there's still some time for questions, if there are any. If not, thanks again.