Hi, good morning, everybody. Thanks for coming so early. Today we are going to talk about this topic: integrating Ceilometer with an existing monitoring solution using Zabbix. I'm Chen from EasyStack, a startup company in China. My two colleagues, Jin Wang and H. Lu, could not come to the summit because of visa reasons, so I'm sorry to tell you that today it is only me sharing this session. Let's look at the topic again: integrating Ceilometer with an existing monitoring solution. Before we talk about that, someone might ask why we should integrate Ceilometer with Zabbix at all. That is because some customers are already using Zabbix to monitor their physical nodes, their network, and so on, and they don't want to manage two monitoring platforms. So we need to push Ceilometer's monitoring data to Zabbix. That is why we are willing to do this. Let's take a look at the introduction. This solution involves several commonly used pieces of software: MongoDB, Zabbix, Grafana, and RabbitMQ. Here it is worth mentioning that there are also three OpenStack projects involved: Ceilometer, Keystone, and Nova. We combine them together to meet our needs. On the agenda: first I will talk about what ZCP is and how we designed it to solve our problem; then we will sum up the main features of ZCP; then I will explain how it works. Because we want to use this project to meet some customers' requirements, and we also want it to be usable at large scale, we did some tests and estimated the scale it can handle. We also have a video demo to make it easier to understand. At last, we will talk about ZCP's roadmap, about what we want to do next. ZCP is short for Zabbix Ceilometer Proxy, and we use it to push data from Ceilometer to Zabbix. About three years ago, there was a project on GitHub doing the same thing.
We have improved that program and its performance: we made it run under the Mitaka release, made it support Keystone v3 and Zabbix version 3, and added logging and testing to it. Then we made it support multiple processes and multiple backends. Now we have it running in one of our customers' environments, which has about 14 nodes and more than 200 instances running on it. I think it will be extended to 100 nodes next year, and it will be a big challenge for us to make ZCP powerful enough to handle that. Here is the architecture of ZCP. I will show how we designed it and how all the pieces work together. For the notification bus on top, we use RabbitMQ in our design, and OpenStack uses RabbitMQ to send Keystone and Nova event notifications. These events are what ZCP needs to get from the notification bus. In this design, Ceilometer also uses the bus to collect all the metrics and store them in MongoDB or Gnocchi. So ZCP is designed to listen on RabbitMQ to know when we create an instance or a project, or when we delete them. ZCP is also designed to get metrics from Ceilometer, directly from MongoDB, or maybe Gnocchi in the future. Then ZCP calls the Zabbix API to publish all this data to Zabbix. On this page: firstly, as I introduced before, ZCP collects event notifications from RabbitMQ. As we know, Nova and Keystone have their own topics in the queue system, and Ceilometer listens on those topics to get the event notifications. So for ZCP, we should create another queue for its listener, and we also need a new binding to bind the Nova or Keystone exchange to ZCP's queue. Then we get the event notifications from this queue for ZCP. This way, both ZCP and Ceilometer get the messages without affecting each other.
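As a rough sketch of the event handling just described, ZCP would translate each notification pulled off its queue into a Zabbix-side action. This is a minimal illustration only: the event-type strings follow common Nova/Keystone notification names, but the payload field names and the function itself are assumptions, not the actual ZCP code.

```python
# Minimal sketch of ZCP-style event handling (payload field names are
# illustrative assumptions; the real notification payloads and the actual
# ZCP code may differ).

def handle_notification(event):
    """Translate an OpenStack event notification into a Zabbix action."""
    event_type = event.get("event_type", "")
    payload = event.get("payload", {})

    if event_type == "compute.instance.create.end":
        # A new instance becomes a new Zabbix host.
        return ("create_host", payload["instance_id"], payload["display_name"])
    if event_type == "compute.instance.delete.end":
        # A deleted instance removes the matching Zabbix host.
        return ("delete_host", payload["instance_id"], None)
    if event_type == "identity.project.created":
        # A new project becomes a new Zabbix host group.
        return ("create_hostgroup", payload["resource_info"], None)
    # Anything else on the queue is not ZCP's concern.
    return ("ignore", None, None)


sample = {
    "event_type": "compute.instance.create.end",
    "payload": {"instance_id": "abc-123", "display_name": "web-server-1"},
}
action = handle_notification(sample)
```

The point of the dedicated queue and binding is exactly this: ZCP can consume these events at its own pace without stealing messages from Ceilometer's consumers.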
Secondly, we can use the Ceilometer API, the MongoDB driver, or maybe Gnocchi in the future to pull the metrics. Because using the Ceilometer API increases network delay, we prefer using the MongoDB driver directly. Also, in the future, we will support Gnocchi as one of the backends for ZCP's data source. This page shows the relationship mapping; I will show you which concept in Zabbix maps to which thing in OpenStack. We create a host in Zabbix when an instance has been built in OpenStack. A host group in Zabbix maps to a project in OpenStack. We use a proxy in Zabbix to distinguish the different domains we have in OpenStack. After all this introduction to ZCP's design, let's sum up the main features ZCP already has. ZCP supports Keystone v3, which makes it possible to map a Zabbix proxy to an OpenStack domain. ZCP can also take advantage of Ceilometer's notification queue to get the information we need, and it supports RabbitMQ clusters. We can make the ZCP queue durable to make sure it won't miss any message. We also check the instance list on the first run, and even on every polling cycle, to sync OpenStack information to Zabbix. ZCP can retrieve resources and metrics through the Ceilometer API or the MongoDB driver and publish them to Zabbix. Also, we can use Grafana to show those metrics, just to make them look better. This page shows what we should do before running ZCP. Our OpenStack version is Mitaka, and all the configuration I introduce on this page is based on that. Firstly, we should set up the notification driver in Nova and Keystone. We add it to their configuration files to make sure that Nova and Keystone will send the event notifications we need. Secondly, we should set up Ceilometer's configuration like on this screen: edit the event pipeline YAML to filter which events we really want to collect.
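The mapping just described ends with a call to the Zabbix API. Below is a sketch of what a `host.create` JSON-RPC request for a new instance could look like; `host.create` and its `groups`/`templates`/`interfaces` parameters are real Zabbix API fields, but the helper function, the IDs, and the token here are hypothetical placeholders, not ZCP's actual code.

```python
# Sketch of a Zabbix JSON-RPC payload that a proxy like ZCP could send to
# create a host for a new OpenStack instance. The group ID, template ID,
# and auth token are hypothetical placeholders.

def build_host_create(instance_name, hostgroup_id, template_id, auth_token):
    """Build a Zabbix API host.create request body (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "method": "host.create",
        "params": {
            "host": instance_name,                      # instance -> Zabbix host
            "groups": [{"groupid": hostgroup_id}],      # project -> host group
            "templates": [{"templateid": template_id}], # items come from a template
            "interfaces": [{
                "type": 1, "main": 1, "useip": 1,
                "ip": "127.0.0.1", "dns": "", "port": "10050",
            }],
        },
        "auth": auth_token,
        "id": 1,
    }


req = build_host_create("web-server-1", "15", "10105", "some-token")
```

POSTing this body to `api_jsonrpc.php` is all it takes on the Zabbix side; the domain-to-proxy mapping works the same way through the `proxy.create` method.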
For example, the events on this screen are generated hourly, so we had better drop them if we don't need them. Then we need two directories in our environment: one for the log and one for the configuration file. After that, we edit the configuration file, proxy.conf. In the configuration file there are the connection and authentication details for Keystone, Ceilometer, MongoDB, and the RabbitMQ cluster; the number of workers and the polling period can also be changed in this file. When we create a host in Zabbix, we need a template to describe the items of the host. We set the template's default name as "Template Nova", I think, and it can be changed in the configuration file. Many people ask how to install ZCP and where to get the source code. We have already uploaded the source code to GitHub, and the link has been put on the reference page, which I will talk about later. So we can install ZCP after we download the source code, and we can also install it through pip: we just run pip install zcp and we will get it installed. Then, in our console, we just type zcp polling and press Enter, and it starts. This page shows how it looks: Keystone domains are created as proxies when ZCP runs for the first time. On the left is the OpenStack domain list, and we can see we have some domains on that list; on the right, all the domains have been created as proxies in Zabbix. Because a proxy can't be created if there are special characters in its name, ZCP can detect when we would create a proxy with a bad name, and it changes the name to part of its UUID to make sure it can be created. Almost the same as the previous page: we get all the projects as host groups in Zabbix after the first run, and we divide these projects into different proxies according to the domain they belong to. And here's a note.
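The configuration file just described might look roughly like the following. This is a hypothetical sketch only: the section and option names are assumptions for illustration, and the real proxy.conf shipped with ZCP may name them differently.

```ini
; Hypothetical sketch of a ZCP proxy.conf; real option names may differ.

[keystone]
auth_url = http://controller:5000/v3
username = zcp
password = secret

[mongodb]
uri = mongodb://controller:27017/ceilometer

[rabbitmq]
hosts = controller1:5672,controller2:5672

[zabbix]
api_url = http://zabbix/api_jsonrpc.php
template = Template Nova

[polling]
workers = 5
period = 600
```

The two options in the last section are the ones mentioned in the talk: how many polling workers to run, and how often each polling cycle fires.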
The same name for two instances or projects is allowed in OpenStack, but it is not allowed in Zabbix, so ZCP appends part of the UUID to the end of the name to make sure they are different. When an OpenStack event occurs, it drives ZCP to update the Zabbix resources. For example, when we create an instance, we can see from the ZCP log that it has been created, and then we can see the change in the Zabbix dashboard. To make this clearer, this process will be shown in the video later. After the instance information has been synced, ZCP pulls the metrics of the instance into Zabbix. We can see this process in the ZCP debug log, and we can get the latest data from the Zabbix dashboard. All the metrics can be shown in Zabbix and also in Grafana. The CPU utilization shown on this page is one of the eleven metrics. The data is in MongoDB and also in Zabbix; Grafana just gets the data from Zabbix and displays it. In order to improve ZCP's polling performance, we use Tooz and ZooKeeper so that multiple polling agents can work together. Here is the result of a running test with a single process and with five worker processes. Each instance has eleven items for polling, and in this table we can see that as the number of instances increases, it takes longer to complete one test. If we use five workers to do the same test, it completes about five times faster. I think this means ZCP will not be the bottleneck of the polling data scale in each period, since we can add more workers to expand the polling scale. Here is a video demo to show how ZCP works and how we test it. First, we log in to Zabbix and we can see that there are no hosts before we start ZCP, and also no host groups like the projects in OpenStack; and here is the process. Then we log in to our OpenStack dashboard, and there are some instances we already have, and some projects and domains already in our OpenStack environment.
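The name de-duplication mentioned at the top of this section can be sketched like this. The helper name and the suffix length are illustrative assumptions, not the actual ZCP code; the idea is simply that duplicate OpenStack names become unique Zabbix names by borrowing a fragment of the UUID.

```python
# Sketch of ZCP-style name de-duplication: OpenStack allows duplicate
# instance/project names, Zabbix does not, so on a clash we append part
# of the resource's UUID. Function name and suffix length are assumptions.

def unique_zabbix_name(name, uuid, existing_names, suffix_len=8):
    """Return a Zabbix-safe unique name, appending a UUID fragment on clash."""
    if name not in existing_names:
        return name
    return "%s_%s" % (name, uuid[:suffix_len])


existing = {"web-server"}
n1 = unique_zabbix_name("web-server", "6f2a9c41-0d3e-4b7a-9e1c-000000000000", existing)
n2 = unique_zabbix_name("db-server", "1234abcd-0000-0000-0000-000000000000", existing)
```

So a second instance named "web-server" would show up in Zabbix as something like "web-server_6f2a9c41", while names with no clash pass through unchanged.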
The first thing I want to show is ZCP syncing instance information to Zabbix. Now we start ZCP; it will listen on the Nova and Keystone queues to get the information, and it can also call the Nova and Keystone APIs to sync information. We can see the project has been created as a host group on this page, and the instance has been created as a host as well. Next, we show ZCP's instance discovery. We create an instance in our OpenStack environment, which sends a notification to the message queue, and ZCP is listening on that queue to tell Zabbix that we have a new instance in our system. So we can see that another host has been created in Zabbix. We also delete the instance; after we delete it, that can be seen in the ZCP log, and the host disappears from the Zabbix dashboard. We do the same test on projects and domains, and they sync the information similarly: a host group in Zabbix is created when an OpenStack project is created, and deleted when the OpenStack project is deleted. Okay, next we show pulling Ceilometer metrics into Zabbix. For this process, we create an instance for the test and generate some data in Ceilometer. We can see the monitoring items have been created from the template, and we log in to the VM in our OpenStack environment and run a load-generating process, so there is 100% CPU utilization on it. After a while, the data has been sent to the Zabbix dashboard, and we can show the data in Grafana too; it is just the same. Next, we use Tooz to run the scale test. First, we use a single process, one worker, to do the test, and we start the test. The test covers about 200 instances with more than 2,000 items that need to be polled, and it takes about three minutes. Then we change the workers to five and test again. We can see the new worker processes have joined the group, so five workers work together to poll them. Here is the test result: it takes about 32 seconds, which is about five times faster.
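The worker coordination just demonstrated, where pollers join a group and split the item set among themselves, can be sketched as simple hash partitioning. This is a minimal illustration under assumed names: real coordination via Tooz and ZooKeeper also handles workers joining and leaving the group, which this sketch omits.

```python
# Sketch of splitting a polling workload across N workers by hashing
# instance IDs, similar in spirit to coordinating pollers with Tooz +
# ZooKeeper. Names here are illustrative, not ZCP's actual code.
import zlib


def my_share(instance_ids, worker_index, worker_count):
    """Return the subset of instances this worker should poll."""
    return [i for i in instance_ids
            if zlib.crc32(i.encode()) % worker_count == worker_index]


# 200 instances split across 5 workers, as in the talk's scale test.
instances = ["vm-%03d" % n for n in range(200)]
shares = [my_share(instances, w, 5) for w in range(5)]
```

Because every instance hashes to exactly one worker, no item is polled twice and no item is skipped, which is why adding workers gives a near-linear speedup in the test above.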
Next, I will talk about the roadmap of ZCP. We want to support the new storage backend I mentioned before, Gnocchi, because Gnocchi is the newest backend for Ceilometer. We also want to integrate Ceilometer alarms with Zabbix alerts, so if we have some alarms in Ceilometer, we can get a trigger, or something like an alarm, in Zabbix. At last, we want to add some unit tests to ZCP. And here is the reference page. You can download the source code through this link, and you can tell us what you think about ZCP. If anyone has good ideas about ZCP's roadmap, please feel free to tell us. Thank you, everybody. I will take a few minutes to answer your questions. Thank you. Please. The first question: does ZCP just do the work of mapping projects to proxies and the other resources, or does it have other functions? ZCP does two things. The first, as I talked about, is mapping all these concepts, and the second is pulling all the metrics from Ceilometer to Zabbix; the main feature is how we gather data from Ceilometer through ZCP. The second question: do you have plans to introduce more features, like other resources, such as volumes or images, in ZCP in the future? I think with this question you are suggesting some new features for ZCP. Okay. For volume and other network metrics, I think if they are in Ceilometer, they can be transferred to Zabbix. We should think about which concept in Zabbix can map to a volume or a network in OpenStack. I think we can do that. Okay, thank you. Thank you.