Okay, thanks everyone for joining. Today we will show you how to configure your OpenStack installation to charge your users, using open source components. So basically we will talk about Gnocchi and CloudKitty. Here are the points we will address today. We will start with a quick presentation of who we are; we are six people here today to explain this to you. We will explain the infrastructure you will use: some USB keys are being handed out to you right now, and you can keep them at the end of the talk. We will describe the various tools we will use, and of course we will install and configure them. So we will start by configuring Gnocchi, then CloudKitty, and then we will really use them to define a pricing policy and insert some data, to see how it's done. And there will be, we hope, some time for questions.

Who are we? This is just alphabetical order. The first one is Stéphane Albert from Objectif Libre, right here — raise your hand. He is the PTL and co-author of CloudKitty. The second one is France Bertelou from Objectif Libre too; she's at the end of the room, handing out USB keys right now. Julien Danjou from Red Hat is here; he is the PTL of telemetry and also the author of Gnocchi. Myself, Christophe Sauthier, I'm the CEO of Objectif Libre and also a co-author of CloudKitty. Brice Trinelle, right here in the middle of the room handing out USB keys, is an OpenStack expert from Objectif Libre too. And finally, Maximiliano Vinesio from WLU, a technical expert on Gnocchi and CloudKitty and the coupling of those tools, will give a talk tomorrow about that — more on that a bit later.

What is on the key you got? The key you just received contains two images: one for use with KVM and one for use with VirtualBox — actually, you might be able to use that one with VMware as well. It's a CentOS image using many RPMs from RDO. You log in as the root user, and the password for the root user is openstack.
And also, I'm really, really sorry: I'm French, and when I built this image I didn't notice that I did it using a French layout — my keyboard is AZERTY. So you'll have to make a bit of an effort to log in, because, as you may notice, there's an A inside "openstack", so you'll have to find it. Okay, once you do that, you'll be able to log in and change the locale, for instance with loadkeys us. But we'll be here to help you with that.

What's on the image? We thought it was not really necessary to install a whole OpenStack on the USB key — it would have to be a huge USB key, and it would be quite large and quite annoying to launch from your laptops. So we asked: what are the key components we need? We have a database already configured. We also have RabbitMQ on it. We have a memcached server. And we also have a Keystone, which is installed and bootstrapped, with one user called admin whose password is admin, and one tenant called admin too. On the USB keys you also have all the slides I'm presenting right here, so you can follow along from the USB key directly — it will be easier to copy-paste.

So, copying: you'll want to start by copying the images from the USB key, because otherwise it would be very slow. When you start your image, please give it at least two gigabytes of RAM, otherwise you'll have some issues. Two more things. When you launch your virtual machine, please set up some port redirection, because it will be much easier to use your SSH client instead of typing everything in a console. Here is the command line to launch your image with KVM and to redirect both the SSH port and the HTTP port. The idea is that you will SSH to localhost on port 10022, and it will be redirected to the SSH port of your guest image, for instance.
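The exact launch line is on the slides; as a hedged sketch, a KVM invocation with that kind of port redirection would look roughly like this (the image filename and the HTTP host port are assumptions here, not taken from the talk):

```shell
# 2 GB of RAM, user-mode networking, host port 10022 -> guest 22 (SSH),
# host port 10080 -> guest 80 (HTTP). "workshop.img" is a placeholder name.
qemu-kvm -m 2048 -hda workshop.img \
    -netdev user,id=net0,hostfwd=tcp::10022-:22,hostfwd=tcp::10080-:80 \
    -device virtio-net,netdev=net0

# Then, from the host:
ssh -p 10022 root@localhost
```

The same effect is achieved in VirtualBox through the NAT port-forwarding dialog described next.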
If you want to do that with VirtualBox, you can just click on Settings, then Network settings, then Port Forwarding, and do it straight away through the interface. If you have any issues, please raise your hand so that we can assist you. For instance, if right now you have issues launching the virtual machine, please raise your hand so that we can help you. Please, France, and all the others, I think we should go help people. So keep your hands in the air if you have issues launching, so we can help you with that.

Okay. There are a few things we noticed this morning, because we did a small rehearsal this morning — at the last minute, of course, but since we come from various parts of the world, it was easier to do it this morning. So you also have on the USB key a directory named missing parts. Please copy it to your virtual machine using the command line here. We just copy the whole directory onto the virtual machine, and we hope it will be enough. We're quite confident.

Does anyone still have issues launching the virtual machine? Please raise your hand. There are a few of you. We will give you enough time to do all the manipulations during this hands-on. I don't know if some of you have already done this kind of thing at previous OpenStack Summits, but usually in hands-on sessions like this, the speakers don't talk the whole time, so that people are able to do the various manipulations. After that, you should be able to log in. I will wait a bit before we continue past this point. Please keep your hand in the air if you have any issue getting connected. I don't know if there have been enough USB keys or not — we didn't know in advance the number of people that would attend this session, so we just brought 40 keys, I think. We thought it would be enough; apparently that's not the case. Sorry for that.
If you just arrived right now, please raise your hand so that we can help you and give you the information and so on. If you have any issue with the password, I remind you it is openstack — but I'm French, I'm using a French keyboard, so you might have to swap the A with a Q. Does anyone still have issues getting connected? Please raise your hand so that I can come and help you.

Okay, so for everyone having trouble with VirtualBox, we've got the dialog here. You need to set your network type to NAT. Then there is a button at the bottom, which is Port Forwarding — it's in French on my screen, but I'm pretty sure it's Port Forwarding in English. You need to click on this button, and then you can set the rules for the port forwarding. Basically you'll have two rules, one of which maps 10022 to port 22. Yeah, I don't know if we can zoom in — is it better now? Is everyone okay, or do some people still need help? Okay, good, I'll come and check. Okay, so apparently almost everyone should be fine right now.

So the first thing we will do is get the right credentials for using our OpenStack — which is really just a Keystone right now. In the root directory you have a small script called admin.sh that you can source, and it will give you the right credentials for your OpenStack. So source it — the first command here — and then you are able to manipulate Keystone. That's what we are doing here: we start by creating a project called service, as always. And that's it for the setup of our infrastructure.

So now we will start talking about the various components we are about to use, and Julien is going to talk to you about Ceilometer and Gnocchi.

I guess everyone knows Ceilometer by now. It's the OpenStack project we started a few years ago to collect usage data in OpenStack in general. So it's used here to collect data from OpenStack.
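Those two steps can be sketched as follows — a minimal version assuming the usual openstackclient syntax (the description string is made up):

```shell
# Load the admin credentials provided on the image
source ~/admin.sh

# Create the "service" project that will hold the service users
openstack project create --description "Service project" service
```

After sourcing, any `openstack` command runs with the admin credentials from the script.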
And we push that data into Gnocchi, so we have data to bill from. Everything in Ceilometer is now moved to Gnocchi for storage. We don't use Ceilometer's own storage anymore — I mean, it's not the recommended way to use it, because it's just too slow.

So there are various parts in Ceilometer, now that we've split it into several projects. Let's start with the API server, which is less and less used, since we don't use it to retrieve the information anymore. The collector is used to receive all the AMQP messages generated by the agents, which collect the information. There are two types of agents. One kind does polling — the central and compute agents. They regularly poll information from Nova, Glance, Cinder, Neutron, whatever. And there is another agent, the notification agent, which receives notifications from the various services such as Nova, Neutron, et cetera — these services send regular information that feeds data into Ceilometer, like IO usage, CPU, et cetera.

So there are metrics generated by Ceilometer — for example cpu_util for the CPU usage of a VM, or the number of bytes or packets read on the network or on the disk. Everything is pushed from Ceilometer to Gnocchi in this setup.

There are different types of metrics, which are important to know about, because they are not used the same way — for example when you do billing. Gauge metrics are absolute values, like the temperature in a room. Cumulative and delta metrics are counters that keep increasing, like network IO, which always increases and only sometimes goes back to zero — it's not an absolute value.

Samples are the basic unit of Ceilometer. Everything we generate in Ceilometer is called a sample. We generate a sample each time we measure something, and we send it to the collector, which then stores it into Gnocchi.
In a sample you have a lot of information: the date, the name of the resource being measured, the user and project, some metadata about it, things like that. We hit a few scalability issues a few years ago with this design of Ceilometer — that's why we started Gnocchi. The API provided by Ceilometer up to version 2 was very slow because of the way we stored the data. All the samples generated by the Ceilometer components were stored as-is in either MongoDB or an SQL database, and that was very slow because it was a lot of data. Most samples stored the same information in duplicate, over and over again. That is very handy because it's very detailed — for things like auditing it gives you great power — but it's a lot of data to store and to retrieve. So when you make requests on the Ceilometer API, it's often pretty slow, because the metadata is replicated on every sample and you can't index it: everything is free-form, because of the state of OpenStack components in general, which don't have any kind of schema for the metadata of their resources. So it didn't work, and we changed our approach.

We started Gnocchi two years ago, under what used to be the Ceilometer program, which is now called telemetry — that's why you start seeing "telemetry" everywhere: it now covers several projects, not only Ceilometer. Gnocchi is one of those projects. The idea behind Gnocchi is to implement a time series database which is scalable, as opposed to many time series databases which are not. It can use storage back ends which are known to be scalable, such as Swift or Ceph, to store the data. And it's used by Ceilometer as the storage for its samples. So it does pre-aggregation of the data.
So when you use Gnocchi, you say: okay, I'm going to need to store this for a month or a year, aggregated at some granularity — for example, I want data stored at one-hour granularity over a year, or every second over a month, things like that. You define these kinds of archive policies in Gnocchi, and they are then used to compute the aggregates as you feed Gnocchi with new measures. So it's way faster to store and to compute, and it's way smaller too, which makes it much more efficient to query — when you want to retrieve anything from Gnocchi, it takes very, very little time.

There are a few concepts in Gnocchi that you may want to know about. First, the archive policy I just talked about: that's the number of points you want to keep for a metric. You define a policy saying, for instance, I want one-hour granularity over a month and one-day granularity over a year — you can specify multiple definitions like that in one archive policy. You use one of the archive policies for each metric you create in Gnocchi, and it's applied as you feed data into the system.

Gnocchi is split in two parts. There is the metric storage, which is the file system in this workshop, or Ceph or Swift if you want a more scalable system. And there is the indexer, which indexes data such as the instances being run, things like that. There is a listing of resources — instances, volumes, networks, etc. All the resources from OpenStack are indexed in the indexer, which is either PostgreSQL, MySQL, whatever is supported by OpenStack. And they are modeled — they are not free-form anymore: there is a schema describing everything you need to know about the resources, which makes it very handy to build, for example, invoices based on this data.

So, CloudKitty — what's next? I'll talk quickly about CloudKitty and how it works, so you'll understand what's happening when we do all the configuration. This is a new project.
It was integrated in the Big Tent; the first release from the Big Tent is in Mitaka. And it's all Python, like every component of OpenStack. We can integrate data from Ceilometer and Gnocchi: if you have an old metric pipeline only using Ceilometer, we can query the Ceilometer API; we can also query Gnocchi and get the data from Gnocchi. And we can store data back in Gnocchi — so you can get data from Gnocchi, store data in Gnocchi, do all your calculations against Gnocchi, and have only one API.

And it's super modular. Every part of CloudKitty is basically a driver or a module, so you can create your own collection driver or storage driver. If you need to integrate with another identity framework — for example, if you don't want to use Keystone — you can create your own module and, I don't know, fetch from a database the list of tenants you want to apply rating on.

And the biggest part of CloudKitty is rating and pricing policy. You can add as many modules as you want: if you have some particular rating rules, you can write a piece of code to apply them on the data collected from Gnocchi, for example, as your own module loaded into CloudKitty. It's loaded at runtime, so you don't need to restart CloudKitty: the module is loaded and you can configure it on the fly, and all the configuration is sent back to every processing service, so you never have inconsistent data in your pipeline.

All the work we'll do today is on the HashMap module. There are two rating modules at the moment in CloudKitty: HashMap and PyScripts. With PyScripts, basically, you rate your data using your own Python code. HashMap is more of a high-level framework, so you can create a rating rule and apply it to your data without writing a single line of Python code.

So basically we have a way to model our data: we can create groups. A group is a set of calculations.
You want to match some metadata and some volume to some calculation, so you create a group — for example "instance uptime", if you want to do calculations based on instance uptime — and you place all your related rating rules inside it. You can also do volume pricing with discounts, and it will not interfere with your uptime calculation.

Then there is the service mapping, using the same service names as you will find in Keystone — compute, network, etc. So you match a service. For example — we'll see it in detail later when we create the rating rules — you want to apply a calculation on the compute service of OpenStack: you create a mapping on the compute service, and you can even apply a rating rule based on the volume, for example the number of instances.

Or you can match some field — that's a field mapping — and then you can match the metadata of your instances. For example, you want to create a rating rule that charges your users based on instance uptime, and apply a specific calculation depending on the flavor of the instance the user created. So you create a field mapping on compute, on the flavor field, set the flavor ID, and all the calculation will be based on that.

And then you can create thresholds. A threshold follows the same logic as a mapping, except you can create multiple levels, and it's really useful when you want to charge your users for volumes, for example, or network IO, because you may want to apply a discount. Let's say that if you create a volume bigger than 100 gigabytes, you get a 5% discount: you create a threshold saying that when you go past 100 gigabytes, this rate applies to your calculation, and you get your discount.

Okay, so we'll start the installation now, and we will install Gnocchi first. You will install Gnocchi, then we will install CloudKitty on top of it and create all our rating rules. So we will first install all the daemons of Gnocchi. Be careful with the command.
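As a back-of-the-envelope illustration of that threshold logic — the per-GB rate and the discount here are invented for the example, not CloudKitty defaults — here is the discounted price computed by hand:

```shell
# Hypothetical rates: 0.05 per GB, with a 5% discount past 100 GB.
volume_gb=120
awk -v v="$volume_gb" 'BEGIN {
    price = v * 0.05            # flat per-GB rate
    if (v > 100) price *= 0.95  # threshold discount kicks in past 100 GB
    printf "%.2f\n", price
}'   # prints 5.70
```

In CloudKitty itself you would express this declaratively as a HashMap threshold rather than writing the arithmetic yourself.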
As you can see, there is a capital C in the options: you have to use the local package cache of your image. If you don't pass the capital C, you will fetch all the packages from the Internet, so it might take a bit longer. So don't forget the capital C. Basically we will be installing the Gnocchi API, Carbonara, and the SQLAlchemy indexer. Did everyone type the command, or copy-paste it into your SSH session? Yes? I'll go through all the slides about Gnocchi, and I will stop later to see if anyone is a little behind on the configuration.

Then we install the client — it's pretty simple, it's the Python Gnocchi client — so we can insert data later. We have a small script that will insert data into Gnocchi, because clearly we can't wait one or two hours for some instances to push data into Gnocchi. So we will insert data using a script.

Now that we have all the daemons and the client, we want to reference them in Keystone, so every component can see them. I will go through all the commands, and we will wait at the end. So we create a service — basically we create a metric service, and we name it gnocchi, because we want to know the name of the service giving us metrics, in case we have a different one later. About the output at the bottom: you will not have the same one — not the same ID, at least — but it's an example of what you're supposed to get when you run the command.

Then you attach the endpoints to the service. Basically, when you make a request to Keystone, you fetch all the services, and for each service you get its endpoint — the entry point of the service, where you are supposed to send your requests. So: three endpoints, same URL. It's basically the same command; you change the ID, using the ID you got from the first command. When you create the service you get an ID: you copy that ID and paste it just after RegionOne on the second line.
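A sketch of those Keystone registration commands with openstackclient — the host and port here are assumptions (8041 is Gnocchi's usual default), and depending on your client version you may pass the service ID instead of the service type:

```shell
# Register the Gnocchi metric service in Keystone
openstack service create --name gnocchi \
    --description "OpenStack Metric Service" metric

# Same command three times -- one per interface, same URL
openstack endpoint create --region RegionOne metric \
    public   http://localhost:8041
openstack endpoint create --region RegionOne metric \
    internal http://localhost:8041
openstack endpoint create --region RegionOne metric \
    admin    http://localhost:8041
```

The IDs printed by your own commands will of course differ from the ones on the slides.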
That's the ID of the Gnocchi service. And it's basically the same command three times, except you change the type of endpoint — so you have one endpoint per kind of request you're making. Same command.

Okay, so that was the endpoint configuration for Gnocchi; now you need to create users. When you interface Gnocchi with Keystone — which is not the default configuration — Gnocchi needs to validate the token of the user: a request comes in to Gnocchi, Gnocchi goes to Keystone, validates your token, and validates your authentication against the OpenStack cloud. So you create a gnocchi user in the service project — the project we created on the first slide, where we put all the service users for every component of OpenStack. So we create a gnocchi user with the password "password". And lastly, we give the admin role to gnocchi: we need gnocchi to be an admin because it needs to validate tokens against Keystone. That's all for the Keystone configuration.

Once you're set with that, we need somewhere to store data for the indexing part — to connect the files with their information. Again, copy-paste the commands as you see them. Basically, we create a new database, gnocchi, then we create a user and grant it rights on the gnocchi database, so we can set it in the Gnocchi configuration. You've got the MySQL root password at the bottom. Again, for those who didn't change the keyboard layout: if you're on AZERTY, don't forget that A is Q, et cetera, and vice versa.

Now a quick look at the configuration, so you're not copy-pasting a file without knowing what you're doing. You don't need to type every line yourself: you've got the configuration file on the USB key, so you can scp it to the machine — or, if you don't have scp, go in over SSH, modify the file, and copy-paste what you've got on your machine into the remote one.
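Sketched out, the user and database steps look like this — "password" is the workshop password, not a recommendation, and the GRANT syntax assumes MySQL/MariaDB as shipped on the image:

```shell
# Service user for Gnocchi, with the admin role on the service project
openstack user create --project service --password password gnocchi
openstack role add --project service --user gnocchi admin

# Indexer database -- run in the MySQL shell (root password is on the slide)
mysql -u root -p <<'SQL'
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost'
    IDENTIFIED BY 'password';
SQL
```

The database name, user, and password are what you reference next in the Gnocchi configuration file.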
We're enabling debug because we might want to see what's going on inside: some people might make a configuration mistake, and you want debug enabled so you see clearly what the problem is — it will save us time. Next, the connection URL: you're referencing the MySQL database configuration you made before. That's what the indexer uses to relate the resources to the files you're creating — even if it's Swift rather than the file back end.

Storage: as I said, we're using the file back end, because I don't think you want to run Ceph on your laptop at the moment. And the base path. This is the default configuration — by default you will have the file driver and this file base path — but we show it on the slide because you might want to see what's going on inside. Yes — the Q is A.

The last part you need to paste: by default Gnocchi is not bound to OpenStack — you can use it outside of OpenStack — so by default you don't have any authentication in Gnocchi. What we're doing here is enabling the middleware pipeline that does the authentication against Keystone, and then you can talk to Gnocchi with Keystone integration and Keystone validation enabled.

Then, updating the database: like every component, when you use a database you need to apply migrations to create the tables, and do all the upgrades if you have an already existing database. So here we run gnocchi-upgrade, and it applies all the database migrations. And lastly, once your configuration is set and your database is ready, you can restart the Gnocchi service, and then you can make queries against the API, start storing data, et cetera. Does anyone have any question about the Gnocchi configuration, maybe?

With every component in OpenStack you need to define the different endpoint types. It has always been like that.
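Pieced together, the gnocchi.conf fragments described above look roughly like this. Treat it as a sketch: the section and option names follow the Gnocchi of that era, and the paths and password are the workshop's — double-check against the file shipped on the USB key rather than trusting this block:

```shell
# /etc/gnocchi/gnocchi.conf (sketch, assumed section/option names)
cat > /etc/gnocchi/gnocchi.conf <<'EOF'
[DEFAULT]
debug = true

[indexer]
url = mysql://gnocchi:password@localhost/gnocchi

[storage]
driver = file
file_basepath = /var/lib/gnocchi

[api]
middlewares = keystonemiddleware.auth_token.AuthProtocol
EOF

# Apply the migrations, then restart the API service (RDO unit name assumed)
gnocchi-upgrade
systemctl restart openstack-gnocchi-api
```

The storage section is shown even though it matches the defaults, simply so you know where the data ends up on disk.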
Because sometimes you want different IP addresses and networks — for example, you want all your admin traffic on a separate network, so you can see what's going on on the admin API. Then you've got the public one and the internal one: internal is basically inter-component traffic, and public is for public communication. But most of the time, all components will query the public API.

So the question is: do people use CloudKitty on the public API to make requests? Yeah, sure. You can use CloudKitty in your application, and we apply roles — so you can't configure CloudKitty if you are not an admin, for example, but you can use the storage API. We've got ways of creating reports and accessing the data stored in CloudKitty. So a user might want to get a report of the current total for the last two months, and you've got an API for that.

One thing we have: whether you're using Gnocchi, Ceilometer, or any other driver you can think of, you've got a common API. So if you use Ceilometer and Gnocchi at the same time and store data in CloudKitty — because we have storage drivers, a Gnocchi one and our own custom one — you can use the same API; it does not depend on the storage driver. For example, when you're using Gnocchi and you use our storage API, CloudKitty is basically doing all the requests to the Gnocchi API for you, without you knowing. So you don't need to learn how to use all the APIs of the OpenStack components — you only learn CloudKitty's.

Now let me talk to you about configuring CloudKitty, and then we can take a little time to see if everyone is at the same step. For CloudKitty, again, we're installing several components — fewer than for Gnocchi. You have two components in RDO: the API and the processor. Basically, the API handles all the requests from every user, and you can configure CloudKitty directly through the API.
So you fill in your configuration file once, and all the rest of the CloudKitty configuration is done through the API, not the configuration file. When you add a new server to CloudKitty, you don't need to do tricky stuff in the configuration file: the configuration is sent directly to the CloudKitty processors, and they can do their calculations. So you can put your API processes on your front servers and have the processors on some big machines, for example, because you need to do a lot of calculation. All the calculation is done on the processors.

So if a customer wants to... it's not working, right? Okay, plan B. All the calculation and all your data live on the processors, so if one of your API nodes is compromised, you won't expose data to the attacker. And every request for a calculation — for example, you can send a request to the CloudKitty API with an instance description and ask for a price, and you will get back what it should cost your user — is sent over RPC to the processors; the processor does the calculation and sends it back to your API node. So no data is handled directly on the API, and you can't expose other customers' data if you have a problem on an API node.

Okay, so there is a small part missing. You need to install — these are the missing parts, like Christophe said earlier. You can install them with yum, but they're not cached on the image, so in case we don't have Internet we put them on the USB key, and you need to install the package by hand. But before installing the CloudKitty dashboard, you need to install the OpenStack dashboard, which is Horizon. So you type pretty much the same command, except that at the end you set openstack-dashboard as the package to install.
So you have the missing parts, you will have Horizon installed, and then you can set up the CloudKitty dashboard, which does all the integration in Horizon — so you can configure CloudKitty directly in Horizon.

Actually, it's my fault, so I will just fix it. You know, sometimes you realize that you have something twice in your script. That's what happened this morning, so I removed it — but once I had removed it, I had removed it twice. So the installation of Horizon is missing, and you will have to type it. Before typing the RPM command here, you just have to type this. I'm sorry it's not on the slides already, but at the end you will see that you have all our email addresses — just drop me an email and I'll send you an updated version of the slides. Is that enough? It basically just installs the OpenStack dashboard from the cache we have on the USB key. And once you have done that, you can install the CloudKitty client.

Okay, so again, as before, we need to configure Keystone — the same thing as with Gnocchi. We create a new service called cloudkitty, of type rating, because we're doing rating: we apply calculations on data to get more data out, and then it can go into the billing system, which applies VAT and does the accounting. Same stuff — I won't stay long on this slide. We are doing the same thing as with Gnocchi, except the port is not the same one: it's 8888, which conflicts with other components using the same port, but we will find a way to change it.

Then we create a new role, and I will expand on that. We're creating a rating role, because when you want to charge your customers, sometimes you have tenants that you don't want to apply rating on. So when we want to mark a tenant — a project — for rating, we assign the rating role for CloudKitty on that tenant.
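A hedged sketch of that Keystone setup for CloudKitty — the URL is an assumption built from the port mentioned above, and as with Gnocchi the endpoint command is repeated per interface:

```shell
# Register CloudKitty as a "rating" service
openstack service create --name cloudkitty \
    --description "Rating service" rating

# One endpoint per interface, same URL (port 8888, as noted above)
openstack endpoint create --region RegionOne rating \
    public http://localhost:8888
# ...repeat for internal and admin

# Role used to mark which tenants CloudKitty should rate
openstack role create rating
```

The rating role is what the tenant fetcher looks for when deciding which projects to process.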
So when CloudKitty starts, its tenant fetcher — as we call it — asks Keystone which tenants have the rating role for CloudKitty, and then CloudKitty processes the data of all the tenants that have the rating role.

Again, we need to validate Keystone tokens on the API, so we create a cloudkitty user and add it to the service tenant with the admin role. Same thing as for Gnocchi: we create a database and grant rights to the cloudkitty user.

And now the configuration — more lines than for Gnocchi, but not that many, actually. Again we enable debug, because if you have some trouble we want to see what's going on inside. ks_auth is a Keystone auth section: we create a single section with all the Keystone information and reference it from every part that needs Keystone information, so you don't have the same data two, three, or four times in your file. It's the Keystone auth plugin: we're using Keystone v3 — you can use v2 if you want; the tenant fetcher can use both v2 and v3 — and we reference the data we created before: the cloudkitty user, with the password "password", in the service tenant. Then we reference that same section in the Keystone auth token part, used to validate on the API that a user is connected to your OpenStack cloud and is valid.
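The shared auth section described above looks roughly like this in cloudkitty.conf — a sketch assuming the option names of CloudKitty's Keystone auth plugin at the time; check the file on the key for the authoritative version:

```shell
# /etc/cloudkitty/cloudkitty.conf (fragment, assumed option names)
cat >> /etc/cloudkitty/cloudkitty.conf <<'EOF'
[DEFAULT]
debug = true

[ks_auth]
auth_type = v3password
auth_url = http://localhost:5000/v3
username = cloudkitty
password = password
project_name = service
user_domain_name = default
project_domain_name = default

[keystone_authtoken]
auth_section = ks_auth
EOF
```

The point of `auth_section` is exactly what the talk describes: every part needing Keystone credentials points at the single `[ks_auth]` block instead of repeating it.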
RabbitMQ, because we use RabbitMQ to send messages between the API and the processor. Then the database, because you want to store your data when a calculation is finished — and some of CloudKitty's configuration is stored in the database too. All the HashMap rating rules I talked about before are stored in the database; that's why you don't need to modify a configuration file. We don't want rating rules in a configuration file, because if some people from sales want to create rating rules, you have an interface in Horizon: you can easily give them access to the Horizon interface and they can create rating rules there — you don't want to give server access to your marketing or sales people, I guess. That's why you can create everything through the API and the Horizon interface, and everything is distributed to the API nodes. When you make a configuration modification on the API, it triggers a message that is sent to all the processors, and the configuration is only reloaded between calculations — so you don't get inconsistent results because your configuration changed in the middle of a calculation run.

About storage: as I said, it's all decomposed into different components, so you've got a storage driver. We're using a Gnocchi hybrid; we plan to have full Gnocchi support once we can create resources dynamically with the API. We can get data from other OpenStack components and would create resources in Gnocchi, but that wasn't in Gnocchi yet, so we created a hybrid solution that references the Gnocchi resources and then stores in CloudKitty only the data we can't store in Gnocchi. So you only have a small footprint, because you're not duplicating anything — the data stays in Gnocchi.

Then you've got your collect section: that's where you set the driver. The collector here is gnocchi, because we want to get data from Gnocchi. You can set multiple collectors if you want — if you want to get data from Ceilometer
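As a sketch of the collect section just described — the `[gnocchi_collector]` section name is an assumption about the config layout of that release, so verify it against the provided file:

```shell
# Collect section of cloudkitty.conf (fragment, assumed section names)
cat >> /etc/cloudkitty/cloudkitty.conf <<'EOF'
[collect]
collector = gnocchi

[gnocchi_collector]
auth_section = ks_auth
EOF
```

Note how the collector just points back at the shared `[ks_auth]` section for its Keystone credentials, as the talk explains next.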
For example, you can get data from Ceilometer and Gnocchi at the same time, or from your own driver: if you have some specific information that is related to some customers, and you want to apply rating rules based on what's in Ceilometer, what's in Gnocchi, and some specific rules of your own, you can do it. For the Gnocchi collector, clearly you need access to Keystone to authenticate against Gnocchi, so we reference the same authentication section as before.

The tenant fetcher, as I said, is the way we connect to OpenStack, but if you want to run CloudKitty outside of OpenStack, you can: you just create your own tenant fetcher, you will not be using Keystone, and you can use a database, for example, to fetch the tenant information. So in the Keystone fetcher we set the host section as before, and then the version we want to use to query all the data. Don't worry too much about what's at the bottom.

Once that's all set, we can create the database: basically, `cloudkitty-dbsync upgrade` creates all the tables and all the data you need. Then, and this can look a little bit tricky, we also need the storage, so there is a command, `cloudkitty-storage-init`. We are doing a migration on the configuration database, but the idea is that if you're using Gnocchi or Swift for the rating storage, there might be no database migration to do for it, so we initialize the configuration database and the rating storage separately; they can be different. You need a working CloudKitty to do this, so maybe I will wait a few minutes. If you face problems when you try `cloudkitty module list`, which is the command on the left, try something first: just restart the CloudKitty API service, because sometimes it is not reloaded correctly after changing everything.

OK, so we are nearly out of time, so I will rush through the last slides and explain to you what they are doing. Sadly we will not be able to create some rating rules together, so if you want more information, make sure to bump into me at the summit and ask me questions.
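The initialization and troubleshooting steps described above boil down to a few commands. The command names are the ones mentioned in the talk; the exact service name to restart depends on your distribution and packaging, so treat `cloudkitty-api` here as an assumption.

```shell
# Create CloudKitty's configuration tables in the database:
cloudkitty-dbsync upgrade

# Initialize the rating storage backend separately -- it may not be the
# same database (e.g. when using the Gnocchi hybrid storage):
cloudkitty-storage-init

# If `cloudkitty module list` fails after a configuration change,
# restart the API service (service name may vary by distribution):
systemctl restart cloudkitty-api
```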
We will also write a blog post, maybe on the Objectif Libre website, with updated slides and new images with fewer quirks, so you can try it again and again; and if you have any questions, don't hesitate to ask me anything. Just about the last slides: you will have a nice demo in two minutes, because we are out of time, where someone will show you what they are doing with CloudKitty and Gnocchi in their infrastructure and what you can do with it.

Basically, what the last slides show is how to list all the rating modules. With `cloudkitty module list` you list the rating modules: you've got hashmap, noop, and pyscripts. Noop is a dummy rating module; it just ensures that the data is formatted the way it is supposed to be. PyScripts enables you to create Python scripts and do the rating in Python. With HashMap, you model your rating rules using some kind of objects and you get a rating calculation. I guess a line got eaten during the slide preparation: you run `cloudkitty module enable hashmap` to enable the HashMap module. It's not on the slide; we'll update it and give it to you later. I guess we would also have reset the state in the CloudKitty database to be sure we reprocess all the data and redo the calculation based on it, but that will not be the case here. So I will leave Max to show you his demo of what he's got in his company, and again, if you need some information, don't hesitate to come and ask us.

We will use a bit of imagination right now and suppose that we have an architecture with Ceilometer agents pushing metrics into both systems, and a Horizon dashboard to manage our cloud, and we will see what we can do with all these services. This is a little bit different: it's our own dashboard, and it's not very ambitious in the front end and the user experience, but it's based on a vanilla dashboard, so you will be able to do this in your own dashboard. So we will go to the rating part, the rules section. What we have here are billing rules.
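The module commands just described, roughly as named in the talk (the exact CLI syntax may differ between cloudkittyclient versions, so check `--help` on your install):

```shell
# List the available rating modules (hashmap, noop, pyscripts):
cloudkitty module list

# Enable the HashMap rating module -- the line that was
# missing from the slide:
cloudkitty module enable hashmap
```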
These rules are based on the metrics that Ceilometer pushes into the Gnocchi system. When Ceilometer pushes data into Gnocchi you get samples, but you then have to create billing rules on top of them, based on your business rules. To do that, I will first disable the rating module to avoid some inconsistencies, then create a new rule. You have all the metrics that are pushed into your Gnocchi system; you select one of these metrics and create a business rule to start collecting that metric and turning it into a bill line item. In this case I will select this metric, then the billing unit, in this case gigabytes, and the aggregation function. So I manage the rules, create the new rule, and put in the cost. Once the rule is created, as you can see there, you have the cost per summarized amount of gigabytes per hour. That's a business rule; you can also create business rules like threshold rules or metadata rules, but we will just try this flat rule.

Then we go to the reports part and generate a new report based on those rules. OK, we will just create a new report. Perfect. We have all the metrics again that we pushed into the Gnocchi system and that we used to create this kind of business rule and bill line item; now we choose the metrics to generate a report, a billing report. In this case we will use `instance`, which is the metric for instance uptime. Then we choose the granularity, which is like the resolution of the metric, and the time span, which is the period the report covers: in this case seven days with a granularity of one day, so for each day you will have a point with the cost of the entire day, for seven days. Then we group the metrics: we will show the instance uptime for our whole cloud, but divide the report by project ID, then flavor ID, then instance ID, so we will have three different levels, each with its own value.
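The report being described (one cost point per day over a seven-day span, grouped by project, flavor, and instance) can be sketched in plain Python. The sample data and the rates here are invented for illustration; a real report would pull measures from Gnocchi at the chosen granularity and time span.

```python
from collections import defaultdict

# Each sample: (day, project_id, flavor_id, instance_id, uptime_hours).
# Invented data: two instances reporting over a 7-day time span.
samples = [
    (day, "proj-a", "m1.small", "inst-1", 24) for day in range(7)
] + [
    (day, "proj-b", "m1.large", "inst-2", 12) for day in range(7)
]

# Hypothetical cost per uptime hour, per flavor.
RATE = {"m1.small": 0.05, "m1.large": 0.20}

def daily_report(samples):
    """Aggregate cost at a one-day granularity, keyed by the three
    grouping levels from the demo: project, flavor, instance."""
    report = defaultdict(float)
    for day, project, flavor, instance, hours in samples:
        report[(day, project, flavor, instance)] += hours * RATE[flavor]
    return dict(report)

report = daily_report(samples)
# Cost of proj-a's m1.small instance on day 0: 24 hours at 0.05/hour.
print(report[(0, "proj-a", "m1.small", "inst-1")])
```

Summing the same dictionary over different key positions gives the drill-down views shown in the demo: per project, then per flavor, then per instance.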
You can filter on a specific flavor or a specific project if you want. Once the report is created, you can display it. Here I have another one already created, with more metrics; as you can see, the granularity in this case is 30 days, one month, with three months of time span, so you can go through the three months and see the total cost of each month. In this case we are showing March: you have the total cost over there, $75,000, or whatever currency you want, and then you have the partial cost of every one of the projects in your cloud, the admin project, the migration-test project, and so on, with the partial cost of every project. If you drill down you can see the breakdown per flavor, and if you go deeper, per instance, because we chose three different aggregation levels. You can also go to the previous month and see its total cost and the cost of every one of the projects, flavors, and instances in the same way.

One thing you can do with this kind of report is export it, to be imported by your billing system, for example, or in PDF format; or you can export it as CSV and load it into any application that supports the CSV format, like a spreadsheet.

Another thing you can do is create charts, which show the same things as the reports but in a graphic manner. So we will create a chart. The chart definition is similar: you have to add a name for the chart, select the metric the chart will be based on, in this case disk write bytes, then choose the granularity and the time span, the same that you chose in your report, and then an aggregation. In this case we will select just one level of aggregation, because the graphics become really difficult to read with more than one level.
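The CSV export mentioned above is just tabular report data; a trivial sketch of producing such a file for an external billing tool, with invented field names and figures, could look like this:

```python
import csv
import io

# Illustration only: dump a per-project monthly cost report as CSV so
# it can be imported into a spreadsheet or a billing system. The
# column names and values here are invented.
rows = [
    {"month": "2016-03", "project": "admin", "cost": 12000.0},
    {"month": "2016-03", "project": "migration-test", "cost": 63000.0},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["month", "project", "cost"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```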
Then we create the chart. Once we have several charts created, we have what we call dashboards, which are collections of these charts. So we will create a new dashboard, and we can select whether the dashboard will be seen by a user or by an administrator, for example. Then we add a group of charts; we will select several of them to have a big dashboard. Once you select the charts and add them to the dashboard, you can scale the graphics up and down and reorder them the way you want, so you can show things like CPU delta, or the cost of all the months in your cloud by flavor, project, etc. The graphics are also interactive: you can select and deselect a report or a flavor to have a better vision of what's happening in your cloud. This kind of dashboard is meant to show you, or to show other departments, maybe commercial departments, how the cloud is being used and how they are spending their money. So we will save this, and you can have a list of several dashboards with different kinds of graphics there. Another thing you can do is set a dashboard as the home page of the users, so every user, when they get into the cloud, sees the dashboard you want them to see: for example, you can show the users how they are spending their money in the cloud, how many instances they have, and how the instances are divided in terms of flavors or projects, etc.

I will just conclude. First of all, thanks to everyone here for attending. As you can notice, we haven't been able to finish, but like I said, for almost anyone here: just drop me an email, you have my address on the slides, and I'd be really, really happy to help you one at a time, one by one, to finish this during the week, so that I can explain to you how to use CloudKitty, now
that you have everything installed. What you have just seen with Maximiliano is the implementation of CloudKitty and Gnocchi at NubeliU, and he will be really happy if you join the talk he is giving tomorrow at noon. Tomorrow at noon, he and Stéphane are explaining together, in more detail, what they did at NubeliU with CloudKitty and Gnocchi and why they did it. OK, thanks a lot to everyone.