Hello, I'm Christophe Sauthier — thank you for being here to join us for this session about CloudKitty. Stéphane Albert is here too, from Objectif Libre like myself, and we're about to talk to you about this rating component for OpenStack. Before starting: who here had heard of CloudKitty before this morning, when Marco mentioned it? Has anybody tried it? Okay, we have some work to do. But we'll talk about that later. To start, let's talk a bit about Objectif Libre. We are a services company based in France. The company was founded in 2009, and over the last few years we have had about 30% growth every year, so we're doing quite well. I think this is the sixth OpenStack Summit we're attending, and the third as a sponsor, or something like that. We do a lot of integration, deployment and training. Many people are happy — the numbers are something like 98% of the people we have trained, about 3,000 so far, are very, very happy. We do a lot of research and development, since it's about 30% of our income, and we are currently 11 people with two branches: one in Toulouse, which is the headquarters, where I live, and one in Paris. Our core topics are clearly Linux infrastructure and everything related to OpenStack. We have quite a lot of expertise on various things, and currently OpenStack is clearly the main aspect we are dealing with. We are not just using OpenStack; we are involved in the community at many levels. Of course, we deploy and run clouds for some customers, we do consultancy and all the kinds of things you can imagine. We also participate a lot in the community by doing conferences, meetups and so on. And we develop and contribute — that's what we are about to talk about, because we are the main developers of CloudKitty so far, and we hope to see a lot of other contributors in the next few weeks.
So, just to say it right now: we are quite happy and quite excited, because one month ago CloudKitty entered the Big Tent. It is quite important for us, as you can imagine, that we are now an official OpenStack project — clearly the official project for rating and chargeback. I'm quite sure that's why you are here. CloudKitty has been developed since the beginning using all the OpenStack best practices. Stéphane will detail in a few seconds the architecture we had in mind when we started the project, and that we applied. We also integrate quite well with the rest of the OpenStack ecosystem we have to deal with. Of course, we have some interaction with Ceilometer, using its API to get metrics — but not only. We have a great integration with Horizon, which you are about to see during the demonstration. We use all the libraries you are supposed to use when doing an OpenStack project, and we are really, really highly modular: we have four levels of modularity, and we will discuss that in a few minutes. CloudKitty can be used at different levels. You start by fetching information from Ceilometer or another metric source, CloudKitty does its magic, let's say it that way, and at the end you can use the API or the CLI or whatever you want to fetch the information. In the admin view you can define the rating policy, and you can also analyze the cost information and the usage of your cloud by your users. If you're a user, you can see the history of the cost of the resources you used in the past few weeks or months. And you also have a predictive view of what you're about to be charged for the kind of instance or resource you're about to launch. So before launching an instance, you will see the price you will be charged for it.
And after that, if you want to extract information using the API, once again you can, so that you can have integrations and post-processing. But we will deal with that in a few seconds. As I said, there are different levels of interaction with CloudKitty. Depending on your interests and your role, you can use CloudKitty differently. If you're an IT manager, you might just want to be able to define the pricing policy for your customers, but you can also give your users a tool to predict the cost they will incur when using your cloud. If you're a cloud provider, you want to make money with your cloud, so you need a way to charge your customers — this is what is addressed by CloudKitty. Of course, it's multi-tenant, so you can charge your customers the way you want, and once your customers have been using your cloud, you get an analysis of that usage. And if you're a software editor — an editor of any solution that runs on the cloud — you can use CloudKitty too, because it has been designed to be highly integrable. So if you're editing software, you can just use CloudKitty as your rating and billing engine, even without OpenStack. It's quite easy like this. I will let Stéphane present the rest of this part and explain the different aspects of CloudKitty. Thank you. So, now that Christophe has given you a high-level view of how CloudKitty works, I will show you what's inside and how it's working. Basically, you want to do some calculation on your metrics — from Ceilometer, for example, and we will soon integrate Gnocchi support too. You will see all the steps CloudKitty goes through to make your data ready for billing. The first step is the tenant fetcher. It's modular: we added the tenant fetcher to be able to remove CloudKitty's dependency on Keystone. So, for example, the base tenant fetcher is the Keystone one.
It fetches all the IDs of the tenants that need to be rated, and it works with Keystone v2 and v3. Basically, it fetches all the tenant IDs for which a specific role is applied — the default role is the rating role — and CloudKitty will automatically fetch the tenant information and send it to the collector, which is the next step, to collect the data. The collector is responsible for retrieving and aggregating all the data. It's pretty simple: it polls the backend — Ceilometer, in the default configuration — every hour, for example, to see if there is new data and whether rating calculations need to be applied to it. It's highly modular, so you can have any number of collectors. You can use Ceilometer and Gnocchi at the same time if you want, for example, or you can create your own collector if you have some specific data in a database of your own — some information on your clients — and you want to apply specific rules based on that data. You can bring in external data. The next step is the rating step. This is where all the calculation is done. Basically, it's a list of modules — you can have any number of modules too — and it performs calculations on the collected and aggregated data you get from the previous step. You can set priorities on the different rating modules, so you've got a list and the processing is sequential, based on priority. You can have a first rating module, for example, that modifies some data so that further calculation can be done later. All the configuration is made directly through an API; you don't need to modify any file. Every module can expose its own API: if you create a rating module that needs a complex API to be configurable, you just create that API and CloudKitty will automatically expose it to end users and operators. And you can enable and disable a module on the fly from the API.
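To make the sequential, priority-ordered rating chain described above a bit more concrete, here is a minimal sketch. All class and method names here are illustrative only — they are not CloudKitty's real interfaces, just the shape of the idea:

```python
# Hypothetical sketch of a priority-ordered rating pipeline.
# RatingModule / process / priority are illustrative names,
# not the actual CloudKitty module interface.

class RatingModule:
    priority = 0
    enabled = True

    def process(self, frame):
        """Take a collected data frame, return it with rating applied."""
        return frame


class FlavorRater(RatingModule):
    priority = 10  # highest priority: runs first
    prices = {'m1.tiny': 1.5, 'm1.small': 2.0}

    def process(self, frame):
        flavor = frame['metadata'].get('flavor')
        if flavor in self.prices:
            frame['rating'] += self.prices[flavor] * frame['qty']
        return frame


class Discounter(RatingModule):
    priority = 5  # runs after FlavorRater

    def process(self, frame):
        if frame['qty'] > 15:
            frame['rating'] *= 0.9  # 10% bulk discount
        return frame


def run_pipeline(modules, frame):
    # Apply enabled modules sequentially, by descending priority.
    for mod in sorted((m for m in modules if m.enabled),
                      key=lambda m: m.priority, reverse=True):
        frame = mod.process(frame)
    return frame


frame = {'metadata': {'flavor': 'm1.tiny'}, 'qty': 1, 'rating': 0.0}
result = run_pipeline([Discounter(), FlavorRater()], frame)
```

The point is the ordering: a high-priority module can enrich or transform a frame so a later, lower-priority module can rate it, and disabling a module simply drops it from the chain without touching the others.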
So you don't need to go to all of your calculation nodes and restart the services. The calculation stays consistent, because new settings are only applied once the current calculation is done — you don't get inconsistencies because settings changed mid-calculation. And again, it's modular. I will detail some of the basic rating modules we've got in CloudKitty. The first one is HashMap. It's a pretty simple module, I would say. Its goal is to apply calculations based on the volume of your metric and on metadata: you can define mappings based on metadata or on volume. For example, you can collect data from your compute services and apply a calculation based on instance uptime, rated according to the flavor — which is metadata. With the default Ceilometer collector, you get all the data from the Ceilometer backend, and you can apply rules based on any metadata you get from Ceilometer. There are threshold calculations too, so you can apply levels. Thresholds are really useful when you're rating network volume, for example, or Cinder volumes: typically you want to change the price when a client buys a bigger volume, so you can apply discounts based on levels. Finally, all the calculations are organized into groups: you can gather the mappings you've defined into groups. For example, you can have a group for instance uptime, where you put all your uptime calculations, and another one for volume — we will see this later in the demo. The newest module, which we added a few weeks ago, is PyScripts. We had some demand for this. Basically, you can write your own Python script to do the rating: if you need some complex operation, you just write a piece of code, send it to the API, and it is distributed automatically to all your calculation nodes. And finally, the last part is the storage.
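As a rough idea of what a PyScripts-style rating script could look like, here is a sketch. The exact variables and frame layout CloudKitty exposes to PyScripts are an assumption here (a `data` list of frames with `desc`, `vol` and `rating` keys) — check the real PyScripts documentation for the actual contract:

```python
# Sketch of a PyScripts-style rating script.
# ASSUMPTION: the script receives collected frames in a `data`
# variable shaped like below; the real CloudKitty contract may differ.

data = [
    {'service': 'compute', 'desc': {'flavor': 'm1.tiny'},
     'vol': {'qty': 2}, 'rating': {'price': 0}},
    {'service': 'compute', 'desc': {'flavor': 'm1.small'},
     'vol': {'qty': 1}, 'rating': {'price': 0}},
]

# Flat hourly price per flavor, equivalent to two HashMap field mappings.
FLAVOR_PRICES = {'m1.tiny': 1.5, 'm1.small': 2.0}

for frame in data:
    if frame['service'] == 'compute':
        price = FLAVOR_PRICES.get(frame['desc']['flavor'], 0.0)
        frame['rating']['price'] = price * frame['vol']['qty']
```

This mirrors what a HashMap field mapping does declaratively, but as code — which is the point of PyScripts: anything too complex for mappings and thresholds can be expressed directly in Python.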
Its goal is to take all the data you've got as input and store it together with the rated data. The point of keeping the input is legal: you don't store only the final result, so you can tell your client "the final result is this value because the input was this." Again, there is an API on top of the storage — every backend gets an API. You can query it to get information for one tenant or all tenants, and you can apply filters, so you can easily generate reports to send to your clients. By default we support SQLAlchemy, and we hope to find a way to integrate with Gnocchi during the Mitaka cycle. And again, it's modular: you can create your own backend. For example, if you want to send your data directly to your billing system, you can create your own module that pushes the data there. That way you can have a common API for CloudKitty and your billing system: store the data directly in the billing system, and query it through CloudKitty, which will fetch it back from the billing system. And then you're done: your data is aggregated, you've got rates, you can directly show your client his consumption in Horizon, and you can show the price on the fly when he wants to create a new resource. Alongside CloudKitty, we're also providing and working on a report generator. Like Stéphane said, during the storage phase CloudKitty provides a way to interact with your billing system. But not all billing systems are able to accept requests through an API; sometimes you need to provide your billing system with a simple file, in the format they want. That's what you do with the report generator: we provide a tool that queries the API and lets you format the file the way you want. We can provide multiple formats, like CSV, and by the way, we can produce multiple files at the same time.
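To illustrate the report-generation idea — query the storage API for rated frames, then flatten them into a file your billing system can ingest — here is a minimal sketch. The field names (`tenant_id`, `res_type`, `rate`, `begin`) are illustrative, not the exact schema of CloudKitty's storage API:

```python
# Sketch: turning rated frames (as a storage API might return them)
# into a CSV report. Field names are illustrative placeholders.
import csv
import io

frames = [
    {'tenant_id': 'abc', 'res_type': 'compute', 'rate': 1.5,
     'begin': '2015-10-01T00:00:00'},
    {'tenant_id': 'abc', 'res_type': 'volume', 'rate': 0.9,
     'begin': '2015-10-01T00:00:00'},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=['tenant_id', 'res_type',
                                         'rate', 'begin'])
writer.writeheader()
writer.writerows(frames)
report = buf.getvalue()  # hand this file to the billing system
```

In the real tool the `frames` list would come from an HTTP call to the CloudKitty API, filtered per tenant and per period, and the same data could be serialized to other formats than CSV.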
The idea behind this tool is that we don't want to provide another billing system. We don't want to create another ERP, because all our customers and all the users of CloudKitty already have their own ERP, and we know very well that people won't change their ERP just to use CloudKitty. So instead of building a brand new tool that everyone would have to adopt, we said: let's make sure we can operate with their existing systems. In the end, we produce a report that you can give directly to a user if you just want to do chargeback, or feed to your own tool if you want to create the actual bill. And of course, it's highly modular here too. Now we're going to do a demo. Since we like to play games, it's a live demo — let's hope it goes well, but I'm quite confident. Yes, we'll see. Just before starting: in the slides that we will publish after the demo, you'll see we have added a slide with links to all the demos on YouTube, so if you don't have time to see everything right here, you'll be able to watch them on YouTube. And since the session is recorded, you'll also be able to find the links in the video recording. I've got a question: can you read what's on the screen, or do you want me to enlarge it? It's fine? Okay. This way. Is it better? Yes? Okay, good. So we'll create a basic rating rule to show you how it works. We go to the admin view, which is where you set all your rating rules. You've got a new panel named Rating, with the list of your rating modules. There you can see which rating modules are loaded in CloudKitty, and you can enable or disable them on the fly. We will mainly use HashMap, because I guess you don't want to watch me write Python for 10 minutes. So, enable the module. You can get more information on a module if you want — see its documentation, see its priority. And then we'll start creating a rating rule.
We want to bill our clients based on instance uptime. So we'll do... I can't read what I'm typing — I guess it's good. It's based on the compute service, which is Nova compute. Then we do mappings on metadata: we want a rule that bills a client based on instance uptime and the flavor of the instance, so we'll do two mappings based on the flavor. Live demo... I'm in the group view, sorry. So, Fields. A field mapping is a match based on the metadata of a resource. So yes, it's flavor — I was in the wrong tab. Flavor. Then we create two mappings. m1.tiny — in the wrong tab again — m1.tiny, and here we set the type to flat, so it's a base price for m1.tiny. You can use rates instead if you want to do percentages, but we'll do basic pricing: 1.5, for example. So it will cost 1.5 per hour, because in our configuration we are pulling and aggregating data every hour. And we create a new group that we'll call "instance uptime", to gather all these calculations. I'll do the same thing for another flavor: m1.small. No, that's it — 2. And again, I can put it in the group. Okay, that's fine. You can get information on what's in a group too, in the group view: you can see all the calculations that will be done for a group. Then I will show you how to define a threshold. Again, we create a new mapping — a service mapping on volume this time, so it's on the volume service in this case. A service mapping is based on the raw volume of your metric, where a field mapping is based on a metadata field of your metric. Yes, service mapping — sorry. It will be 0.9 per gigabyte, and we create a new group, so we don't apply this calculation on the... Type 0.1 — what? Type 0.1, okay. And now we want to apply a discount for our client. In the data we generated and presented, he's got two volumes: one is 10 gigabytes and one is 20 gigabytes. So we'll apply a threshold at 15 gigabytes.
So we'll have two different prices. 15 gigabytes; we apply a rate, so we do a percentage, and we apply it to our volume calculation. Okay. Now all our rating rules are set, and we'll go to the demo view to see the impact on the end user. The first thing I want to show you is instance pricing: you can get an overview of what the cost will be. For example, if I want to create a new instance — "test instance" — like I said, we set two rules, one for m1.tiny and one for m1.small. You can see there is a new box at the bottom called "rating information", with the price. If I change the flavor, it is updated on the fly, so your user can clearly see how much a new instance will cost him. There are two overviews, in a new panel called Rating. The first one — it's pretty empty at the moment — is the current total: the total for the current month, so your client knows how much he has consumed this month. The last one, sorry, is reporting. It's pretty slow — it's a DevStack on my machine, and there is a huge load of data. It queries the storage API and generates graphs. Here we get graphs for the current user, with spikes, and you can see a breakdown of your consumption — compute and volume here, since we've only defined two rules — so you can get an overview of your consumption. Let's go back for the last part of the session. Like I said, there are links to the videos; we've cut this demonstration into four parts. Well, we did a live demo, so it's not exactly the same one, but it's clearly the same idea. We have some ongoing evolutions, of course, for the next few months. The first one — and Stéphane already mentioned it a few times — is that we are working with the Gnocchi folks to get Gnocchi support inside CloudKitty. That will clearly be a huge step for improving the use cases you can address with CloudKitty and the integration with Ceilometer.
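The volume rating just demoed can be worked through numerically. The flat price is 0.9 per gigabyte with a threshold at 15 GB; the exact discount percentage wasn't stated on stage, so the 20% rate below (a 0.8 multiplier) is an assumed value for illustration:

```python
# Worked example of the demo's volume threshold rule.
# ASSUMPTION: a 20% discount (rate 0.8) past the 15 GB threshold;
# the demo did not state the exact percentage.
PRICE_PER_GB = 0.9
THRESHOLD_GB = 15
DISCOUNT_RATE = 0.8  # assumed rate applied above the threshold


def rate_volume(size_gb):
    price = PRICE_PER_GB * size_gb
    if size_gb >= THRESHOLD_GB:
        price *= DISCOUNT_RATE
    return round(price, 2)


small = rate_volume(10)  # below threshold: 10 * 0.9 = 9.0
large = rate_volume(20)  # above threshold: 20 * 0.9 * 0.8 = 14.4
```

Note this is a simplified model: the real HashMap threshold semantics (which mapping applies at which level) are richer than a single if-branch, but the effect on the demo's two volumes — 10 GB rated flat, 20 GB discounted — is the same.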
We also started to work on a new storage backend, to get better scalability. But the thing people tend to notice most is the graphical side, so we will continue to work on that. There are two things we are about to improve. The first one is reporting — we have many ideas in mind for improving it. We also want to help cloud providers, or the people in charge of the OpenStack deployment, to have an easier policy definition. As you just saw, it's not really complicated to define, but it can be a bit tricky if you're not used to it — which group do I need to pick, which name do I need to use — so we have something in mind which is quite simple, I think. And if you want to try CloudKitty, you can. We provide packages in repositories for Debian and Ubuntu, and for Red Hat too. But if you want to try it with DevStack, it's quite simple: you just put a couple of lines in your local.conf, you run stack.sh, and that's it. Please, if you have any questions, come and visit us: we have a booth, just between the two rooms, number T66. We have some CloudKitty stickers here, and some on the booth too, so we'll be really happy to give you some. We also have some design sessions: one is tomorrow at 2:40 in the Kotobuki room, and there's another one. Please attend — we'll be really happy to exchange with you, so that we can be more aware of what you're looking for, what you're expecting and what you need, because sometimes it's quite difficult for us to understand what you need. Actually, we will show more things in tomorrow's session, because time is pretty short for this one: we have screenshots of the new UI and of the collector, so you can see all the data coming into the collector and you don't have to guess what the fields will be.
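For reference, the DevStack setup mentioned above usually comes down to a couple of lines in `local.conf` before running `./stack.sh`. The plugin URL and service names below are written from memory of the CloudKitty documentation — double-check them against the project's current README before use:

```shell
# In devstack's local.conf — enable the CloudKitty plugin and its
# services, then run ./stack.sh as usual.
# (Plugin URL and service names should be verified against the
# CloudKitty docs for your release.)
[[local|localrc]]
enable_plugin cloudkitty https://github.com/openstack/cloudkitty master
enable_service ck-api ck-proc
```

With this in place, DevStack pulls CloudKitty, installs it, and starts the API and processing services alongside the rest of the stack.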
We have screenshots for that, because we have already decided to work on it, so we'll be more than happy to show you. And that's it — do you have any questions? Actually, we started to work on it more than a year ago, but since we're just a small company, it's quite difficult for us to get people to hear about it. But we are really happy that we've been able to enter the Big Tent, because we really think it may be a huge speed-up for the project. Sorry, I didn't repeat your first question — just for the record, the question was: what do you do if you want to rate only the network? Yes, you can do this. You've got all the metrics you have in Ceilometer, so you just need to enable them in your collector and say: I want network usage, I want floating IPs, or I want images if a client can create images. You can do whatever you want — if the metric is present in Ceilometer, you can rate it. If it's not present in Ceilometer, you can create a collector module, for example, that will pull an API and get the information for you. It will be stored in the storage backend too, so you don't have to do anything tricky, actually. By the way, if anybody here is working with an SDN provider or a storage provider, we'd be more than happy to write the collector, and we'd be happy to talk with you to integrate more and more data into CloudKitty — that would be terrific for us. More questions? The cycle? Yes, it's based on seconds. The question was: can you customize the billing cycle? Yes — there is a field in the configuration, so you can set however many seconds you want. Is that what you were looking for? Because you might want different billing cycles for different customers, right? Yes, I see. If you want different billing cycles per customer, that's not in the code at the moment.
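To give an idea of what "create a collector module that pulls an API" could look like, here is a minimal, purely illustrative sketch. The class name, method names, endpoint and frame layout are all hypothetical — the real CloudKitty collector base class and signatures differ:

```python
# Illustrative sketch of a custom collector pulling usage data from
# an external HTTP API. Names and shapes are hypothetical, not
# CloudKitty's real collector interface.
import json
from urllib import request


class ExternalAPICollector:
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def fetch(self, tenant_id, start, end):
        # Pull raw usage from the external service for one tenant.
        url = '%s/usage?tenant=%s&start=%s&end=%s' % (
            self.endpoint, tenant_id, start, end)
        with request.urlopen(url) as resp:
            return json.load(resp)

    def collect(self, tenant_id, start, end):
        # Normalize raw items into frames a rating pipeline could consume.
        return [{'service': item['service'],
                 'qty': item['quantity'],
                 'metadata': item.get('metadata', {})}
                for item in self.fetch(tenant_id, start, end)]
```

The split matters: `fetch` is the only part tied to the external service, while `collect` produces the uniform frames, which is what lets several collectors (Ceilometer, Gnocchi, custom) feed the same rating and storage steps.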
We are working on it, because when you're doing network rating, for example, you may want to do it for the whole current month. At the moment we apply the calculation on every cycle: if you set a one-hour cycle, your calculation is done every hour. You can create your own module to exclude some cycles, for example, or aggregate the data in the storage and query it back at the end of the month, but it's not out of the box at the moment. I think we maybe have time for just one more question. In terms of scalability, how do we see things? That's a tricky question — maybe not so scalable at the moment. We are working on a new storage backend, because we are currently storing data the Ceilometer way, which is maybe not the best. We want to use Gnocchi as a storage backend so we don't have to focus on how to store data: basically, we will have a new resource, which is a rated resource — a rated metric plus a reference to the original resource — so you don't have to duplicate the data sources. Just a side note, and we'll finish with that: we received an email last week from a company we met in Vancouver, telling us they are really happy with CloudKitty — we didn't even know they were using it — and they have been able to use it to charge many thousands of instances already. So we're quite happy so far. Maybe it's not ready for everyone right now, but it can already fit the needs of many, many people. Okay, thanks a lot for attending. If you have more questions, just come and talk to us.