OK. Good morning, everyone, and welcome. Let's begin. This is a presentation on FIWARE Lab and its monitoring. My name is Silvio Cretti and I work at CREATE-NET, a research centre in Italy. The other authors, who also contributed to this work, are Fernando, Pablo, Monica Spena and Miguel from the Universidad Politécnica de Madrid. Before starting, let me ask whether you are already familiar with FIWARE or FIWARE Lab. Anyone? No one? Wow, ah, OK. Perfect. So it appears that what I am going to tell you is quite new for you. Whether it is also interesting, I do not know; let's see at the end of the presentation, you will tell me. OK. This is the agenda of the presentation, or better, what is in the agenda and what is not. Of course, I will start with a quick introduction to FIWARE. Then we will move to FIWARE Lab, its architecture and its monitoring system, going from the requirements to the architecture, and then the issues we found and the solutions we put in place. At the end, I will show you a brief demo of FIWARE Lab and its monitoring system. What is not in the agenda: although we used tools from the OpenStack community, like Ceilometer, Ceilosca and Monasca, this is not a presentation about those tools, so I assume that you are at least a bit familiar with them. OK. What is FIWARE? I will give you the official definition, which unfortunately, I am sorry, contains some jargon, and then a more practical one, with some examples, which I think is more understandable. FIWARE in a nutshell is basically an OpenStack cloud plus a set of libraries, software building blocks, Lego bricks, call them as you want, that expose APIs in order to help the development of smart, innovative, context-aware and, in particular, data-intensive applications. The idea behind FIWARE is to create an ecosystem for innovation, helping developers, SMEs and entrepreneurs to basically materialize their ideas into applications and make business out of them.
There are five pillars that sustain FIWARE. The first one is FIWARE itself, the set of software components that cover general-purpose functions in different domains. Here on the right you see some of them: tourism, transport, smart energy, but also smart cities, for example. Of course, having a new technology, as in this case, is not enough: you also need to provide a playground where this technology can be experimented with and tested, and this is exactly what FIWARE Lab is. Since FIWARE Lab is the focus of this presentation, it will be detailed later on, so I will go on for now. Having a playground and a new technology is still not enough: if you have a new technology and you want it to be adopted by a broader set of developers, SMEs, et cetera, you need a programme that funds the adoption of this technology and puts on the table some incentives for its adoption. This is what FIWARE Accelerate is. FIWARE Mundus is a bit different: the idea here is to spread FIWARE all around the world. For example, last October I was in Toronto, Canada, to discuss a partnership about FIWARE with a Canadian institution there. We also have a small partnership here in the United States, as far as I know, related to US Ignite, GENI and GCTC, the Global City Teams Challenge. Last but not least, the iHubs provide local and regional FIWARE support. If you have ever come across FIWARE, you have probably heard about Generic Enablers. These are the building blocks, the Lego bricks I mentioned before, that cover general-purpose functionality and offer APIs to developers. They are based on open specifications and, given an open specification, you also need a reference implementation, first of all, but then maybe other implementations too. All of this, the specifications, the Generic Enabler reference implementations, the other implementations, the documentation, et cetera, is present in the FIWARE Catalogue.
In the last slides of my presentation there is a reference, the link to the FIWARE Catalogue, if someone is interested. Generic Enablers cover different aspects of the technology and are grouped into chapters: we go from cloud computing, where, by the way, OpenStack is, to data and context management, IoT services, advanced web UI, and so on and so forth. All of these form the so-called FIWARE reference architecture. I don't want to worry you with this picture, which is quite crowded, but let me just comment on a couple of things. The boxes are Generic Enablers, and the idea is that at the bottom of this picture are the so-called data sources: IoT sensors, but not only, also open data, for example. These sources provide data that are collected into the system through IoT gateways, CKAN for open data, et cetera, and in the central part of the picture you basically have data dispatching, data analysis, data processing, and so on and so forth. At the top of the picture you have the presentation layer: the portals, et cetera. This could be an example of the architecture of a platform based on FIWARE, in this case for a smart city. I think I have said enough about FIWARE. Let's move now to the core of this presentation, that is, FIWARE Lab. As we said, FIWARE Lab is basically the playground for FIWARE, offered to experimenters, developers and SMEs in order to experiment with their innovative applications. Here too I am giving two definitions, one more official and another more practical, more concrete. FIWARE Lab is basically a working instance of FIWARE where innovation takes place, and we have different stakeholders that play a role in it. On the left we have the infrastructure owners, the ones that provide resources to FIWARE Lab. Bear these owners in mind, because they are quite peculiar to FIWARE Lab: we don't have one owner, but many owners. We will come back to this point later on, because it affects the monitoring system.
Then we have data providers, as we said, that provide data. On the right we have, of course, the developers that want to use FIWARE Lab and all its resources for experimentation. And at the bottom, of course, we have the FIWARE technology providers, the ones that implement Generic Enablers, Specific Enablers, et cetera. OK, this is a more concrete picture of what FIWARE Lab is. It is, of course, a cloud infrastructure based on OpenStack. It is a multi-region cloud distributed all around Europe, but also in Latin America: we have a node in Mexico and another one in Brazil. And it is a public cloud. It is open and offered for free to experimenters and developers to play with. Almost everyone can use it; of course there are some rules, but more or less it is free. Each region depicted in this picture is governed, owned and managed by a different institution, and this, as I said before, creates some specificities around FIWARE Lab. The regions are, of course, federated in some way: there are some common rules, let's say a common identity structure in the end, the IdM, but generally speaking the regions are quite independent. They offer, of course, cloud resources in terms of CPU, disk, virtualization, et cetera, but they also offer other types of resources focused on the experimentation of smart applications, and you see a list here on the left: wireless access networks, sensor networks, open data, and so on and so forth. Regarding the numbers, the figures of FIWARE Lab: first of all, it has been, and partially still is, funded by the European Commission. We have at the moment 12 regions with around 3,000 cores, 10 terabytes of RAM and 600 terabytes of hard disk, and more or less 1,500 users and the same number of virtual machines. Is it a big OpenStack multi-region cloud?
Well, I would say no, but comparing it with the last survey from the OpenStack Foundation, we can see that it is not a small cloud either. Basically, the position of FIWARE Lab, as far as the different parameters are concerned, is in this sector, the light blue one, because, for example, if we consider cores, we have more than 1,000 cores, and if we consider instances, the same. So in the end we can position FIWARE Lab in the top 30% of the infrastructures that are using OpenStack at the moment. The same is true for these two parameters, the users and the compute nodes. Operating this beast requires a set of tools. I already mentioned one peculiarity of FIWARE Lab, the fact that different owners manage the different regions, but, basically as a consequence of this, we have another specificity: it is not a static infrastructure but quite a dynamic one. I mean, regions are joining and leaving FIWARE Lab quite frequently, and this is exactly because they are owned by different infrastructure owners, which could have many reasons to join, maybe new funding, or to leave, maybe because the funding has finished. So we developed tools for joining the FIWARE Lab federation, in terms of deploying a new node; we based this deployment on OpenStack Fuel, but we also made some customizations, and we contributed these customizations back to the community. Then, of course, this is not enough, and we need a process to join and leave the federation; for this we developed tools for federation management. And of course tools for monitoring, which I will leave for the third part of this presentation.
Connectivity management is another one; here we leverage a multi-domain VPN service offered by GÉANT, the backbone network in Europe offering broadband connectivity to all the academic and research institutions in Europe. And last but not least, we of course also have to put some services on top of the infrastructure-as-a-service layer, and for this we developed a tool for deploying Generic Enablers, Specific Enablers and other Future Internet facilities. OK, just to recap before moving to the monitoring of FIWARE Lab, this is the high-level architecture of FIWARE Lab. We have many regions, we call them slave regions, where OpenStack is installed with the canonical, standard components, but we don't have Keystone, for example, in the remote regions. In the master region, the central region, we of course have OpenStack again, installed with Keystone, Horizon and also Monasca. The fact that Keystone has been installed only in the central region has created a lot of problems in FIWARE Lab; I don't want to discuss this now, because it is a totally different story, but it created a lot of problems in the past. This picture allows me to introduce the monitoring of FIWARE Lab, because as you can see here we have Ceilometer installed in each region, but in the master region we have also installed Monasca. Monitoring FIWARE Lab: let's start with the requirements. As said, FIWARE Lab has some specific peculiarities, and these peculiarities derive from the different owners playing a role in it. FIWARE Lab is not a centralized infrastructure; it is decentralized in terms of the governance of the whole lab. As far as monitoring is concerned, this means that every infrastructure owner could have different requirements, different needs, different objectives.
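Since Ceilometer samples from every region ultimately land in the central Monasca, it may help to see the shape of a single Monasca measurement. The following is a minimal sketch: the field names and the millisecond timestamp follow the public Monasca API v2.0 metrics resource, but the helper function, the metric name and the dimensions are illustrative, not actual FIWARE Lab code.

```python
import json
import time

def build_monasca_metric(name, value, dimensions, value_meta=None):
    """Build one measurement in the shape accepted by Monasca's
    POST /v2.0/metrics: millisecond epoch timestamp, flat string
    dimensions, and optional value_meta."""
    metric = {
        "name": name,
        "dimensions": {k: str(v) for k, v in dimensions.items()},
        "timestamp": int(time.time() * 1000),  # milliseconds since epoch
        "value": float(value),
    }
    if value_meta:
        # Marshal non-string metadata values to JSON strings: the
        # metrics pipeline expects value_meta values to be strings.
        metric["value_meta"] = {
            k: v if isinstance(v, str) else json.dumps(v)
            for k, v in value_meta.items()
        }
    return metric

# Example: a CPU metric from one FIWARE Lab region (names illustrative).
metric = build_monasca_metric(
    "cpu.utilization_perc", 42.0,
    dimensions={"region": "Spain", "hostname": "compute-01"},
    value_meta={"flavor": {"name": "m1.small", "vcpus": 1}},
)
```

Tagging every measurement with a region dimension is what lets a single central Monasca keep the data of the different owners apart.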
Just to give you an example, it could happen that an infrastructure owner doesn't want to expose all the data collected in his region to the central region, or wants to collect different metrics with respect to the other regions, and so on and so forth. Hence multi-tenancy is a must; multi-tenancy here is not related to the end users but to the infrastructure owners, the administrators. The second and third requirements apply to any cloud infrastructure. Of course we need fault management, and we need it from different perspectives: that of the administrator, the infrastructure owner, but also that of the end user, who is of course interested in verifying that all the end-user functionality is working well. Root cause analysis is a nice-to-have in this case. Performance management is similar: we have to keep under control the resource consumption of our resources, the capacity, trend analysis, what-if analysis, and in order to do that we probably need to introduce some analytics techniques, complex event processing, MapReduce, and so on and so forth. The last requirement is again specific to FIWARE Lab, I would say, and it is related to the first one: because we have different infrastructure owners, which could also have different pre-existing monitoring systems, different collectors in some way, we should integrate them into the big picture of the architecture. Of course Ceilometer is a must, but we don't know which other data collectors there may be. This is a picture of the high-level architecture of the monitoring system we deployed. As said, we have the different regions, each one with the Monasca agent, Ceilometer and Ceilosca; we will go a bit deeper into the details in the next slides. Then, through the MDVPN mentioned before, the data flows from the remote regions to the central one, where Monasca is installed, and of course the Monasca API.
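The first requirement above, an owner not wanting to expose every metric outside his region, comes down to a per-region whitelist. The snippet below is a standalone illustration of that idea, not the actual Ceilosca code; all names in it are made up.

```python
# A per-region whitelist: only these Ceilometer metric names are allowed
# to leave the region and reach the central Monasca API (illustrative).
REGION_METRIC_WHITELIST = {
    "Spain": {"cpu_util", "memory.usage", "disk.root.size"},
    "Trento": {"cpu_util", "image.size"},
}

def filter_metrics(region, samples):
    """Keep only the samples whose name is whitelisted for a region.

    `samples` is a list of dicts with at least a "name" key, mimicking
    Ceilometer samples; a region with no whitelist publishes nothing.
    """
    allowed = REGION_METRIC_WHITELIST.get(region, set())
    return [s for s in samples if s["name"] in allowed]

samples = [
    {"name": "cpu_util", "value": 12.5},
    {"name": "network.incoming.bytes", "value": 1024},
]
print([s["name"] for s in filter_metrics("Spain", samples)])  # → ['cpu_util']
```

Keeping the whitelist in configuration rather than in code is what lets each infrastructure owner tune it without touching the collection pipeline.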
Here I didn't sketch the entire Monasca architecture, just the main building blocks: through the Monasca API the data is passed to the Kafka queue and then to the metrics database, in this case InfluxDB, but also towards other components, like, in this case, big data analytics based on Hadoop. In particular, with the objective to eat our own dog food, we use a Generic Enabler here called Cosmos, which is basically based on Hadoop. In the left part of this picture we have this box called FiHealth. What is this? You have to think of something similar to Tempest: we need to check the end-user functionality, whether everything works well in terms of creating a virtual machine, associating a network or an IP address, creating a volume, et cetera. This is what we developed in order to monitor all this functionality. The data collected by these continuous end-to-end tests is sent to the Monasca API again and then stored in the database, so as to have a complete picture of the whole FIWARE Lab monitoring. And finally, at the top left of the picture, we have the infographics, that is, the presentation layer and the graphical user interfaces. Going a bit more into the detail of the different regions: as I said, we have controller and compute nodes, but we also have the Monasca agent installed here. You could ask why. Well, because of the monitoring of specific processes and services, but also because of the needed integration, as I said before, with pre-existing monitoring systems: if there is a pre-existing monitoring system, through the Monasca agent we can send its data to the Monasca API. Then, another point: in the remote regions we have Ceilometer installed, as I said before, and we have Ceilosca. Ceilosca is used to get the data from Ceilometer and send it to the Monasca API. But we have something more in this picture: a couple of custom Ceilometer pollsters (and there are more) that we developed in order to collect data from sources not covered by the default Ceilometer data collection. OK, all this is what we did at the architectural level. Then, of course, we deployed this system into production in our lab. I would say that everything went well; we had some little problems with the installation of Monasca, but we solved them. What I would like to discuss a bit here, instead, is a couple of issues we found. Of course it is not rocket science, and there are probably other ways to solve these issues, but I will present what we did. Again, for the requirements I mentioned before, in FIWARE Lab we had to customize a bit the data collection and, in general, the information we were collecting from the different regions, and here we found a problem in Ceilosca. Ceilosca assumes that, in the metadata section of a given metric, the data is formatted as a list of key-value pairs, and the value part of each pair should be a string. In some cases we didn't have a string: as in the example in this slide, we had another key-value list instead, and this caused a string deserialization error in Ceilosca. The solution was quite simple: we just marshaled the value_meta before sending it to the Monasca API, and this solved the problem. There is a reference to the code we developed for this in the last slide. The second issue is, I think, a bit more important, and again regards Ceilosca. There is a configuration file, monasca_field_definitions.yaml, that provides some flexibility in handling dimensions and metadata, but it doesn't provide any flexibility in filtering the metrics. Remember that we need to be able to filter the metrics that should be transferred from Ceilometer in the remote regions to the central region. We solved this problem by developing a new storage driver in Ceilosca, implementing MonascaFilter, and of course the
corresponding configuration file of Ceilosca, adding a new item, metric_id, and under that item we put a sort of whitelist of all the metrics that should be sent to the Monasca API. This is more or less the last slide of my presentation; then I have a short demo. This is what we did, but we still have a lot of work to do: we would like to integrate more powerful analytics with Apache Spark; we should consider open-source dashboards like Grafana; root cause analysis, as I said, is a nice-to-have functionality; the monitoring of the logging is missing, and it is quite important to add it; and, because of our experience with Fuel, we would like to customize it in order to automatically deploy our custom pollsters, agents, et cetera. Last but not least, we want to verify whether it is possible to contribute the software we have developed back to the community. OK, these are the references: the first one is the link to the FIWARE Catalogue, where you can find all the Generic Enablers; the second and third are basically the code we developed in order to adapt Ceilosca, Monasca and Ceilometer to our specific needs. OK, I have a quick demo now, if it is working, just to show you that FIWARE Lab really exists and is not just something on paper. OK, I don't know if you can see this. What I'm showing you now is the infographics, basically the main page of the FIWARE Lab infographics. We can see here the number of users; community users and trial users are the ones you have to look at. And here you can see that this is a live demo, because there are two regions that are grey, which means they have not been sending data since a couple of hours ago, more or less. Here we have the current capacity of FIWARE Lab in terms of disk and public IPs. We have 12 regions; here are the institutions that are running FIWARE Lab at the moment, which is exactly what I said before: many, many different owners, which could create some problems. And if I go to a given node, I see
the capacity of that node. Or, for example, if I click on the cores here, I can see that the Spain node is the biggest one, and I can see the total cores in the Spain node: the virtual ones, the available ones and the ones that are in use. What I would also like to show you: this is the Spain node, so here we have an overview of the cores, the RAM used and the disk. Let me come back to this later on. Here is the calendar, basically, that expresses the status of the sanity checks. The sanity checks are the result of the FiHealth component I showed you before, the one that measures the end-user functionality. And here, for example, we have the list of all the compute nodes in Spain, with the measurements of CPU, disk, et cetera. What I wanted to show you again is the sanity check, because I think it is quite specific to FIWARE Lab. Just a moment. OK, we have another view here that is based on the status of the different nodes, and you can see, for example, that two nodes are grey, which means that there are no measurements from those two regions at the moment, and one, Sophia Antipolis, has some problems in the sanity check. What does the sanity check mean? The sanity check is exactly the check of the end-user functionality, which is also reported here, averaged and aggregated per region and per month. But if I go here, this is a quite simple interface, but I want to show it to you because, for each region, you see the tests that passed and the ones that did not pass. For example, we saw that Sophia Antipolis is red, and we can check why: this is the list of tests executed every few hours, and in this case, for example, you can see that these three tests, such as deploying an instance with a new network, failed, and here you can see why. So this is more or less what I wanted to show you. Another view is this one, if it is working: here we see the data in a different way. These are all widgets provided by a FIWARE Generic Enabler, and here you see the data aggregated by
region. Here again the hosts, the virtual machines, and this is another view about flavors, because one of the many problems, not related to monitoring FIWARE Lab, is, for example, keeping all the different owners in the federation synchronized and aligned in terms of flavors, images, and so on. I think this completes the presentation. I don't know if there are any questions.

[Audience] Network monitoring: I didn't see much about that.

Yes, of course we also have monitoring of the network. I would say different monitoring systems have been used on different nodes, from one system to another. I didn't show this in the demo, but it is also present in the monitoring system. It depends: in some cases the data goes from Ceilometer to Monasca, but in other cases, given this peculiarity, we use the Monasca agent in order to integrate the different monitoring systems, for example Nagios plugins, and from the Monasca agent we directly send the data to the Monasca API. So we basically have two different types of data sources to handle, and in the end everything goes, in any case, into the InfluxDB in Monasca. Any other questions? OK, thank you.