Hello, everyone. In this presentation we will introduce the new packet logging API in Neutron, and how to gather and visualize the logs in collaboration with the Monasca project.

Let us introduce ourselves. I'm Yushiro Furukawa, working at Fujitsu as a senior software engineer. Currently I'm mainly focusing on Neutron and the Neutron Firewall-as-a-Service project.

Good afternoon, everyone. Thank you for being here. I'm An. I have been working on OpenStack for more than two years, and currently I'm developing logging for security group and FWaaS.

My name is Witek Bedyk, and I'm a senior developer at Fujitsu EST in Munich, Germany. I work on Monasca.

Here is today's agenda. OK, let's start with the overview. First, I'm actively working on the network packet logging development, and the things I'll explain today are subject to change. Let me briefly explain why I started this development. I'm a networking guy and have mainly worked on troubleshooting network-related problems. Collecting, monitoring, and analyzing network packets is obviously very important for troubleshooting. In an on-premise network environment, we collect the packet data from physical network switches and routers. In a virtualized network environment, we can also collect the data from software network functions, such as virtual switches, virtual routers, and so on. There are actually some advantages to this: for example, because the logging is vendor-neutral, we can always get the data in the same format with the same operations, and that makes it much easier for us to troubleshoot problems. The goal of the development is to provide a framework for collecting network packet logs in OpenStack. For the packet logging API, among the various software network functions, I'm focusing on the filtering functions first, such as security group and firewall. Since security group is the default filtering function in OpenStack, that is where we start.
Now we can collect network packet logs, but we also need to organize them. The logs are stored in different files on different servers; gathering and integrating them is a painful job and takes a lot of time. We'll explain how we can address this issue. The Neutron packet logging API I'm focusing on can collect packet logs and store the log data; Neutron can achieve that. However, since the log data are stored on the host servers, project (tenant) users cannot access them, so we also need a means to provide the data to users. Many approaches exist, but one example is to collaborate with Monasca.

Now, let me explain how to collect logs. Before starting this section, let's revisit the basics of security group and firewall. Security group is the default security function in Neutron. It filters ingress and egress packets, and security groups are attached to Neutron ports. A security group has many security group rules, used like a whitelist for filtering, and it provides stateful filtering based on one of the following background technologies: the first is Open vSwitch, and the other is iptables on Linux bridge. So for the background technology of security group, we currently have two options: Open vSwitch or Linux bridge. The hybrid configuration will be deprecated for several reasons. Now, Open vSwitch is the default configuration, called the native Open vSwitch firewall driver. With it, a security group is realized as flow rules in Open vSwitch for each OVS port, so in your environment you can confirm them with a command such as `ovs-ofctl dump-flows`. The OVS flows you see there are the security group. And the firewall I'm talking about here is Firewall-as-a-Service v1. It filters ingress and egress packets, the same as security group. On the other hand, security group works on a port.
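The flow rules mentioned above can be inspected with `ovs-ofctl dump-flows` on the integration bridge. As a rough illustration, here is a small Python sketch that splits one line of such output into its match fields and actions; the sample line is invented, and real output varies by OVS version and deployment.

```python
# A sample line in the style of `ovs-ofctl dump-flows` output
# (illustrative only; real fields vary by OVS version and deployment).
sample = ("cookie=0x0, duration=12.3s, table=82, n_packets=4, n_bytes=296, "
          "priority=77,ct_state=+est-rel-rpl,ip,reg5=0x1,tp_dst=22 actions=NORMAL")

def parse_flow(line):
    """Split an OVS flow dump line into a match-field dict and an actions string."""
    match_part, actions = line.split(" actions=")
    fields = {}
    for token in match_part.split(", "):
        for item in token.split(","):
            if "=" in item:
                key, _, value = item.partition("=")
                fields[key] = value
            else:
                fields[item] = True   # bare protocol match such as `ip`
    return fields, actions

fields, actions = parse_flow(sample)
print(fields["table"], fields["tp_dst"], actions)
```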
However, Firewall-as-a-Service v1 works on the virtual router. It uses both a whitelist and a blacklist for filtering, and it is also stateful, based on iptables. So the difference between security group and firewall is that the firewall can realize multi-layer protection: the firewall can prevent unnecessary packets from entering the internal network, which keeps those packets from eating bandwidth. That's why the firewall is realized on the Neutron router, with iptables on the virtual router as the backend. Note that this applies only to FWaaS v1. The firewall community is now developing FWaaS v2, and v2 can work not only on virtual routers but also on VM ports, like security group.

This slide shows how a log is collected for an accept rule. As I explained on the previous page, a security group is realized as OpenFlow rules. So how can we collect a log? It's simple: just insert a new flow rule for logging into the integration bridge, called br-int. If the rule is matched, the packet will be sent to the OVS controller, parsed, and stored into a log file, for example /var/log/syslog or somewhere. Here is an example for a security group rule that allows ingress to the SSH port, port 22. In this case, the following rules are inserted into br-int. On the other hand, for a drop rule, the approach is the same. For a rule that drops packets, the following two rules will be inserted: the first matches ct_mark=0x1, and the other matches the established connection state. The first line is the log-drop flow for invalid packets, and the next line is the log-drop flow for packets that are not matched by any security group rule. As a result, dropped packets will be captured and sent to the OpenFlow controller.

Now, this section will explain how to set up the packet logging API. Here is the architecture of the packet logging API.
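As a conceptual sketch only, not the driver's actual code, the two drop-logging flows just described could be thought of like this, with `ct_state` and `ct_mark` mirroring the OVS conntrack fields; the exact match conditions in the real driver may differ.

```python
# Conceptual sketch of the two drop-logging flows described above.
# `ct_state` and `ct_mark` mirror OVS conntrack fields; the exact match
# conditions in the real firewall driver may differ.
def classify_drop(ct_state, ct_mark):
    """Return which drop-logging flow (if any) would capture the packet."""
    if ct_state == "inv":
        return "log-drop: invalid packet"       # first flow: invalid state
    if ct_mark == 0x1:
        return "log-drop: not matched by any SG rule"  # second flow: marked drop
    return None                                 # accepted traffic, not logged here

print(classify_drop("inv", 0))
print(classify_drop("est", 0x1))
print(classify_drop("est", 0))
```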
As mentioned at the beginning of this presentation, the implementation is being discussed now, so I will skip that part. The Neutron server side implementation is under discussion, so I will skip it. However, the REST API definitions have already been uploaded and agreed upon by our cores. Here is the REST API definition for the packet logging API, but I think it is not easy to explain how to use the REST API directly, so I'll just focus on how to use the CLI.

So, how can we enable packet logging? First, we provide an OpenStack client, and you can check which Neutron resources can be configured for logging. You can check with this command: `openstack network loggable resources list`. If you execute it, the list of resources supported for logging is returned. As I explained initially, security group is the first target for logging, so currently we support security group. After checking that, let's begin to set up a security group for logging. I will introduce three patterns.

First, create a logging resource with a security group ID for the DROP event. If you want to collect logs for security group SG1, you can run this command. You should create a logging resource first and specify three things. First, which event you want to log: the DROP event, the ACCEPT event, or ALL events; here, the event is DROP. Second, the resource type: which kind of resource are you going to collect logs for? Currently only security group is supported, so the resource type to specify is security_group. And third, the resource option: which resource do you want to collect logs for? Here you can specify SG1. As a result, logging is configured just for SG1. On the other hand, if you want to collect logs focusing only on a specific VM, for example VM 2, you can specify the target option with that VM's port ID.
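Since the API was still under discussion at the time of this talk, the exact request body is not fixed; the following sketch shows what a create-log request body might look like, with field names such as `resource_id` and `target_id` being assumptions for illustration.

```python
import json

# Hedged sketch of the request body for creating a logging resource.
# The API was still under discussion at the time of this talk, so field
# names such as `resource_id` and `target_id` are assumptions.
def make_log_body(event, resource_type, resource_id=None, target_id=None):
    log = {"event": event, "resource_type": resource_type, "enabled": True}
    if resource_id:
        log["resource_id"] = resource_id   # e.g. the SG1 security group ID
    if target_id:
        log["target_id"] = target_id       # e.g. the Neutron port ID of a VM
    return {"log": log}

# Pattern 1: log DROP events for security group SG1 everywhere.
body = make_log_body("DROP", "security_group", resource_id="<SG1-uuid>")
print(json.dumps(body, indent=2))
```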
As a result, SG1 and SG2 are configured for logging. And the third case: create a logging resource only for SG1 on one VM's port. The difference from the first case is that the target is not SG1 everywhere, but SG1 on port A only. In this case, you can specify the resource option and the target option in combination: specify resource SG1 and target port A. As a result, logging applies only to SG1 on port A.

After creating the logging resource, logging starts. If you'd like to stop logging, change the value of the enabled field to false with a command like this: with the disable option, you can stop collecting logs. If you want to resume logging, you can execute the command with the enable option, and it starts again.

Now, sorry, the characters here are small, but this is the current actual packet log detail. In this part you can check the following information: the date, the action (ACCEPT or DROP), the project ID, and the log resource ID. The project ID is necessary in order to separate projects: to support multi-tenancy for log data, the project ID can be used for selecting each project's logs. And this is the actual packet data: IPv4, the source IP, source MAC, destination MAC, source port, and protocol. Currently the log format is under discussion, so it may change a little, but currently we can output all of this data.

OK. I have just explained how the REST API sends information to the Neutron server. Now, to organize this log data, we will introduce a way using Monasca. With that, over to Witek.

Thank you, Yushiro. Right. After you have saved the Neutron logs to a file, you perhaps want to persist these logs in a database to be able to efficiently search, analyze, and visualize this data. And that's where Monasca comes in handy: it offers a comprehensive solution for log management. Monasca is a monitoring and logging-as-a-service solution, and the logging part of it fulfills our goal here.
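The packet log format shown earlier was still under discussion, so purely as an illustration, a line in roughly that shape could be parsed like this; the sample line and field names are invented.

```python
# Hedged sketch: the exact log format was still under discussion at the
# time of this talk, so this sample line and field layout are invented.
sample = ("2017-05-10 12:00:00 action=ACCEPT project_id=abc123 "
          "log_resource_id=def456 ipv4_src=10.0.0.3 ipv4_dst=10.0.0.5 "
          "mac_src=fa:16:3e:00:00:01 tcp_dst=22")

def parse_packet_log(line):
    """Split a packet-log line into a field dict, keeping the timestamp whole."""
    date, time, rest = line.split(" ", 2)
    fields = dict(item.split("=", 1) for item in rest.split())
    fields["timestamp"] = f"{date} {time}"
    return fields

entry = parse_packet_log(sample)
print(entry["action"], entry["project_id"], entry["tcp_dst"])
```

The `project_id` field is what enables the per-project separation described above.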
The goal of adding log management to Monasca was to bring a standardized solution which could replace the vendor-specific tools that are widely used. We build on the state-of-the-art open-source solution, the ELK stack, but we add a whole bunch of value on top of ELK. First, we offer logging as a service: a single point where other projects, for example Neutron in our use case, can integrate and post their logs. We bring authentication: the agents have to authenticate with Keystone and then send the token to the Log API. The logs are separated by project, so we have multi-tenancy, and we have so-called cross-tenancy: the agents are also able to send logs on behalf of another project. We offer role-based access control to centralized logging, so the administrator can control which users get access to the logs. Through the use of Kafka, we get greater scalability and performance. And last but not least, through integration with other Monasca components, we offer thresholding and alerting based on logs.

Let's have a short overview of the architecture of the Monasca logging feature. The agent is responsible for collecting the logs. It authenticates with Keystone, adds the token to the request, and sends it to the Log API. The Log API authorizes the request, validates the input, the log, and the dimensions, takes the project ID from the token, and adds it to the log event. The event is then sent to the message queue, which in Monasca is commonly Apache Kafka. The components at the bottom are based on Logstash. The transformer is responsible for parsing common patterns, for example the severity of the log. Log metrics can be used to generate metrics from the logs, and the persister stores the logs in the database, which is Elasticsearch. As I said before, we separate each project's logs into its own index, which makes it easier to filter them.
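As a minimal sketch of what the agent's request to the Monasca Log API might look like: the endpoint path, port, and payload shape follow my understanding of the v3 log API, the token is a placeholder for a real Keystone token, and the optional `tenant_id` query parameter corresponds to the cross-tenant submission; treat these details as assumptions.

```python
import json
from urllib.parse import urlencode

# Hedged sketch of a Monasca Log API request. The endpoint path, port,
# and payload shape are assumptions based on the v3 log API; the token
# is a placeholder for a real Keystone token.
def build_log_request(token, messages, tenant_id=None,
                      base="http://monasca:5607/v3.0"):
    headers = {"X-Auth-Token": token, "Content-Type": "application/json"}
    body = {
        "dimensions": {"service": "neutron-packet-logging"},  # global dimensions
        "logs": [{"message": m} for m in messages],           # individual entries
    }
    url = f"{base}/logs"
    if tenant_id:  # cross-tenant submission on behalf of another project
        url += "?" + urlencode({"tenant_id": tenant_id})
    return url, headers, json.dumps(body)

url, headers, payload = build_log_request(
    "<keystone-token>", ["DROP tcp 10.0.0.3 -> 10.0.0.5:22"], tenant_id="proj-a")
print(url)
```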
And for visualization, we use Kibana, which is extended with our multi-tenancy plugin. It is responsible for two things: first, authorization of the request, so the user has to have a given role to be able to access the logs; and secondly, we filter the request down to the given project, so a user from one project cannot see the logs of another one. With the dashed lines I have marked that there is new development from a company called StackHPC from the UK. They started developing a query API for the logs, and they are also extending the Monasca data source for Grafana to support logs, so that you will be able to visualize logs and metrics in one Grafana dashboard.

One more feature I would like to show in the context of our use case is cross-tenant log submission. We have the agent, which collects the logs from Neutron. It authenticates with Keystone, as I said, but it also has the logic to detect that each of these individual log entries actually belongs to a different project. So we add the tenant ID as a query parameter, and we send the logs on behalf of that project. They are then separated, so that every user from every project can see their own logs. And that's all from my side. I will now hand over to An, who has prepared the demo.

OK. Well, as we mentioned above, the operator can start or stop collecting packet logs, and then Monasca will collect the data and sort it per project. In fact, I really wanted to give you the demo directly, but it takes a long time, so I have a demo video. You can visit the YouTube link here. And if you are interested in the API, please come see me, and I will demo it later.

Now let me follow up with the future plan. We are targeting the Pike cycle, and we would like to do our best. On the Neutron side, our logging spec is still pending approval, so first it needs to be approved. But in this cycle, we aim to support security group first.
In the next cycle we will add more logs, like FWaaS first, and also the virtual router: if a VM goes out to the internet, its private address is translated by SNAT or NAPT, so the SNAT log is important for traceability from the internal network to the external one, and from external to internal. Regarding traceability, SNAT logging is necessary. And then Load-Balancer-as-a-Service and VPN. On the Monasca side, we would like to integrate with Monasca Analytics for anomaly detection. OK, that's all of our presentation. Thank you for listening.