So, good afternoon everybody. My name is Fabio Giannetti. I've been with HP Cloud for a couple of years; before that I was in HP Labs. I'm the tech lead for metering and billing, and what I'm going to talk about today is billing integration and bill generation for Ceilometer. I prepared this presentation with a colleague who unfortunately couldn't be here today. So, what I want to cover is the state of the art in OpenStack regarding metering and the billing aspect, and how we can use a billing integration framework for different use cases. Then I will talk about the solution we are proposing, which is called MONETA. Moneta means "coin" in Italian, which is why you see the coin imagery. And then I will go over the conclusions and next steps: what we're planning to do with the framework once MONETA is developed. So, you've heard about cloud, you've heard about the new style of IT and how this is different from traditional IT. With the new style of IT, you don't have a concept of racks and machines and servers; you really have virtual resources. So you now have the need to collect usage data about these resources, which can be spread across different infrastructure, and to reference that data back to a particular user, which could be a project or a tenant or a domain or however you want to structure things. On the other hand, a billing system is a little more complicated than that. It needs things like billable records: what is the amount of a resource that was used, say, on a daily basis. It also needs a concept of products and services, and it needs rate plans, something that says this thing costs this amount of money, where the price may be computed based on when you use it, for how long, in which region, and for what purpose.
And then there is a concept of accounts and subscriptions. Who is paying the bill? Who is this person who is going to pay, and what are they subscribed to? Am I subscribed to storage, to network, to compute; what are the resources or services I actually want to use? Currently there are no solutions that really bridge the gap between the metering aspect and the billing aspect. We are not talking here about creating a billing system for OpenStack. I think there was an attempt in the past called BillingStack or something like that. And we acknowledge that there are a lot of billing systems available in the world, new web-based billing systems and legacy ones like SAP, and those are perfectly fine. I don't think any customer wants to go and change that stuff. But we want to provide a framework so they can integrate their existing one and tap into the OpenStack world. So the idea is to present this framework; we are working on it, and I will give you the details. So, state of the art: where we are right now in the OpenStack framework. The metering and billing pipeline pretty much works this way. At the beginning you do collection: you retrieve all the information regarding the different instances and the different events that happen, and these are transformed into the concepts of samples and meters, which I'll come back to. In the second step you take these samples and meters, aggregate them, and transform the raw data into something meaningful: now you have a billable record. Besides the usage, you attach the subscription and the account that we discussed before. Then you go to rating, so these billable records now have a dollar amount associated with them: how much of that usage translates into dollars. And once you've done that, you can take all these entries and put them together into an invoice.
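The aggregation and rating steps just described can be sketched roughly like this. This is a minimal illustration only; the function names, meter names, and unit prices are invented for the example, not Moneta's or Ceilometer's actual API:

```python
from collections import defaultdict

def aggregate(samples):
    """Mediation: roll raw (meter, tenant, volume) samples into per-tenant usage."""
    usage = defaultdict(float)
    for meter, tenant, volume in samples:
        usage[(tenant, meter)] += volume
    return usage

def rate(usage, rates):
    """Rating: attach a dollar amount to each billable record."""
    return {key: round(volume * rates[key[1]], 2)
            for key, volume in usage.items()}

samples = [
    ("instance:hours", "tenant-a", 10.0),
    ("instance:hours", "tenant-a", 14.0),
    ("storage.gb", "tenant-a", 50.0),
]
rates = {"instance:hours": 0.05, "storage.gb": 0.01}  # invented unit prices

charges = rate(aggregate(samples), rates)
print(charges)  # {('tenant-a', 'instance:hours'): 1.2, ('tenant-a', 'storage.gb'): 0.5}
```

The rated entries are what would then be consolidated into invoice lines.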
The invoice is generated and sent for payment, and the charge is applied to the customer, who pays with whatever payment method they like. Ceilometer already provides the collection, and third-party billing providers, several of them, already take care of the rating and bill generation. So, who has heard about Ceilometer before? Oh, okay, well, okay, I can go faster on that. What is Ceilometer? Ceilometer is a telemetry system. What it really does is gather the data, and then do basic processing: it transforms events into samples and meters. Then it allows you to access this data through the API, so you can get reporting on, say, how much a tenant has used. It also allows you to reference projects, tenants, domains, and, in the future, hierarchical projects, through Keystone, which is the identity service. As I said, Ceilometer reads information from the infrastructure-as-a-service layer through eventing and/or polling. You can listen to things that happen on the message bus through Oslo, or you can actively instruct Ceilometer, through an agent, to go and fetch the data at a certain interval and report what happened. The same thing can happen at the PaaS level: you can have events, and you could even have polling if you want, although polling at the PaaS level would probably be really taxing. What Ceilometer produces in the end is two kinds of measures: meters and samples. A sample is really the translation of an event into a point in time, and the thing it measures is represented uniquely as a meter. In order to compute usage, you need both. So, what is the need? Why do we need an integration framework for billing systems? We identified at least three use cases.
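The eventing path mentioned above carries messages shaped like the common OpenStack notification envelope. Here is a stand-in sketch of that envelope built by hand; in real code a service would emit it with the Oslo notification library rather than constructing the dict itself, and the publisher, event type, and payload fields shown are illustrative:

```python
import json
import uuid
from datetime import datetime, timezone

def make_notification(publisher_id, event_type, payload):
    """Build a message shaped like the common OpenStack notification envelope."""
    return {
        "message_id": str(uuid.uuid4()),
        "publisher_id": publisher_id,   # e.g. "myservice.host1"
        "event_type": event_type,       # e.g. "myservice.usage.exists"
        "priority": "INFO",             # INFO notifications land on notifications.info
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }

msg = make_notification("myservice.host1",
                        "myservice.usage.exists",
                        {"tenant_id": "project-42", "volume": 12.5})
print(json.dumps(msg, indent=2))
```

A listener on the notification queue (such as Ceilometer's notification agent) would turn such a message into a sample against a meter.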
Many people, when you talk about billing, go straight to the public cloud, right? Because the public cloud is the obvious use case: you have customers on your infrastructure, and you need to know how much they use because you're going to bill them, and they pay for what they use. But the reality is that this spans across the board. In private clouds, billing is necessary to understand how much of the infrastructure is used by different projects and different systems. In the hybrid world, you want to know your internal cost versus the cost you are outsourcing to the public vendor, and you also want to understand historically what happened: how much was used, and which projects most frequently need to burst out into the public cloud. And the third case, which is very interesting, is the ability even for private vendors to allow third parties to tap into their extra capacity. We are seeing this happening: I have a certain amount of capacity that I am not using all the time, and I can expose it for third parties to use, but only for a certain time. So let's go through the first one, which is the simplest. If I have a hierarchical tenant structure in my company, say, with different projects or different business units, I really want to know how much each of them is using my infrastructure. One reason is cross-charging, which is the common thing: I want you to pay based on what you use. I also want to do planning: I want to understand which projects and which activities are using what, so I can do capacity planning and redistribution of resources. The second one is the hybrid scenario. A private company has certain resources and a certain usage level; when the usage increases, I need to burst out and go to the public cloud.
So now I am going to use or leverage some of the resources up there, and again I want to know the usage across the board: how much internal resource I've spent and how much in the public cloud. And I want historical knowledge of this, so I can plan better in the future. I can understand which projects I can repatriate because they don't burst that much, and which projects actually make more sense to live completely on the public cloud, and so on. The important thing here is that you want to associate cost with this, not only to pay for the part you have borrowed from the public cloud or the other infrastructure, but also to feed it into your financials, so you can understand what the cost was and do good planning based on that information. The third case is the ability for private vendors to sell or resell their extra capacity to external, third-party customers. When there are requests for extra resources coming from outside, companies can expose their capacity and allow third parties to use it. The complication here is even greater, because now you are going into the business of selling your own infrastructure to third parties, and for that you may need more than just a simple billing system: you need ERPs, CRMs and all that kind of stuff, which a company may already have, but connecting those with your own infrastructure is complicated. And as I said before, you cannot really go to a company and say: hey, if you want to sell cloud, you have to forget the way you bill and create a whole new way of billing customers. They are already used to their own systems and they want to leverage those. Moreover, there are companies that, when you do business with them, have their own way of receiving bills.
And if you want to do business with them, they dictate how you are going to bill them; these are big companies, so the way you can do the billing is very restricted and prescribed, and you cannot freely go and change it. The other thing we see is the ability for customers that run an infrastructure to resell it. If you are a telco, for instance, you may have your infrastructure and resell it to other customers, who can in turn provide something to someone else: I can build my own tiny slice of cloud on top of a big telco and then resell some of my infrastructure, or some of my storage, for instance, to other customers. So, Ceilometer talks in meters and samples, which a billing system doesn't understand at all. Ceilometer will expose this information, and I can collect it through the API, but the billing system doesn't know what to do with it; that kind of data is too raw and too low-level for any billing system to act on. So what the billing system really needs, and what we have done here, is an abstraction of the basic, foundational types of information needed for a billing system to work. I mentioned them before: accounts, subscriptions, products, and rates. This is the bare minimum that you need for a billing system to work; otherwise you cannot create a bill. So what is our solution, Moneta? Moneta is really the missing part. We already have Ceilometer, which does a good job collecting the data, and we have all these existing third-party billing providers: SAP, Zuora, and others, and some are brewed in-house. But there is nothing that connects, or easily connects, the OpenStack ecosystem with those billing systems. If you want that today, you need to do ad hoc integrations, and that's what we want to reduce or eliminate.
If you heard the talk this morning from our general manager, he basically mentioned that we are trying to make OpenStack not only open source but open in general, which means allowing third parties to easily integrate and keep whatever they currently have. This moves in that direction. So what is Moneta? Moneta consists of four major parts. The first one is a unified and simplified API for billing. You see those four elements (accounts, subscriptions, products, rates), and those are the elements exposed by the API. The API lets you simply say: I want to create an account; I want to attach certain subscriptions or activations to an account, meaning that in my account, which is connected to, say, a project, I am interested in using these three or five services. In this way we can relate the usage back to the account. There is also the concept of a product: a particular service maps to a product, and that product has certain rates, so that whoever sets up the pricing can go in, create these product entries, and say: for this particular product, if it's used from 6 a.m. to 5 p.m. it's this price, overnight it's less, and if you are a special customer you get a discount, and all that kind of stuff. We also provide an asynchronous mechanism to connect the requests coming from the external, OpenStack-facing API with the logic that performs the billing operation in the third-party plugin. So we have the ability to stop and retry, and the plugin owner can write code that asks us to save the state of whatever operation they have done; we will pass this state back in a callback later on, so they can continue whatever they were doing.
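The four API elements can be pictured with a toy data model like this. The class and field names are my assumptions for illustration, not Moneta's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Rate:
    price_per_unit: float
    start_hour: int = 0    # time-of-day pricing, e.g. cheaper overnight
    end_hour: int = 24

@dataclass
class Product:
    name: str              # the service this product maps to
    rates: list

@dataclass
class Account:
    account_id: str
    project_id: str        # the OpenStack project the account is tied to
    subscriptions: list = field(default_factory=list)

    def subscribe(self, product):
        """Activate a product for this account so usage can be related to it."""
        self.subscriptions.append(product)

# A daytime price plus a cheaper overnight rate (22:00-06:00, wrapping midnight)
acct = Account("acct-1", "project-42")
acct.subscribe(Product("compute.instance", [Rate(0.05), Rate(0.03, 22, 6)]))
print([p.name for p in acct.subscriptions])  # ['compute.instance']
```

Usage attributed to `project-42` can then be matched against the account's subscriptions and rated with the product's rate entries.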
The reason is that I may have a request, but the system I'm talking to is down, or there are several operations that need to be done, maybe with some approval from someone in the middle, so I may have to stop and retry or continue later on. We provide a mechanism to do so. We also have the ability to directly integrate with a usage aggregation system. As I said before, Ceilometer provides meters and samples, but those are not at the level of usage we need, so we can layer on top of it a usage aggregation system which collects the data and produces the billable records. And last but not least is the ability to have many plugins written and integrated, so that any vendor can take the Python interface, implement their side, and create their own way of talking to their system. And you have to understand that, while in an OpenStack environment we use REST and JSON as the main way of communicating between services, legacy billing systems can go all the way from ERP-like transactions through SOAP and XML and so on. So, this is a picture of the entire system when it's deployed. We will query Ceilometer to retrieve the meters and samples. These go to the usage engine, which collects and aggregates the data and transforms it into billable records. Then we leverage Keystone for authentication and authorization, but also for role-based access control, because clearly you need to be aware of who is trying to access your account and what they are trying to do with it. I don't want people to see accounts they are not allowed to. A very interesting case is the reseller case: even if I have a project under a domain and I'm the domain owner, if that project is a reseller project it doesn't mean I am allowed to see who the customers of that project are, because they are the customers of my customer; there is a reselling relationship.
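Querying Ceilometer for meters and samples, as in the deployment picture just described, might look roughly like this against the Ceilometer v2 REST API. The endpoint and token are placeholders, and the repeated `q.field`/`q.op`/`q.value` triples follow the v2 query-filter convention; treat the whole thing as a sketch rather than production code:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

CEILOMETER = "http://ceilometer.example.com:8777"  # placeholder endpoint
TOKEN = "keystone-token"                           # placeholder auth token

def build_query(project_id, start, end):
    """Filter samples to one tenant and a [start, end) time window."""
    return [
        ("q.field", "project_id"), ("q.op", "eq"), ("q.value", project_id),
        ("q.field", "timestamp"), ("q.op", "ge"), ("q.value", start),
        ("q.field", "timestamp"), ("q.op", "lt"), ("q.value", end),
    ]

def samples_for_tenant(meter, project_id, start, end):
    """GET /v2/meters/<meter> returns the samples matching the filters."""
    url = "%s/v2/meters/%s?%s" % (
        CEILOMETER, meter, urlencode(build_query(project_id, start, end)))
    req = Request(url, headers={"X-Auth-Token": TOKEN})
    with urlopen(req) as resp:
        return json.load(resp)
```

The usage engine would run such a query per tenant and time window and aggregate the returned samples into billable records.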
So I shouldn't be able to see what they are doing. I should see what my direct customer is doing, so I can bill them, but not the customers he is selling his stuff to. And the third-party system adapter is the piece that allows specific vendors to write their own plugins, so Moneta can communicate back with the third-party system, legacy or otherwise. We are going to develop a couple of those, and the plan is also to move this framework into our public cloud as a testbed, so we will integrate with one of those vendors that we can support. So what is Moneta, and what is it not? Moneta is a framework. The idea is to build the basic components so that third parties can help us, or can develop plugins, so we immediately have access, or easier access, to all these billing systems and can run billing operations against them. For the vendors, this is an easier way to connect to the OpenStack framework instead of having to develop point-to-point solutions every time: you integrate once and you're done. It consumes data coming from Ceilometer; we are not planning to change the way the metering is done. It's currently fully internally developed by HP, but we are looking, when it's ready, probably after the Kilo cycle, to release it, maybe at the beginning as a StackForge project, and later on, if there is interest, to have it incubated. So it's not open source yet, but that's what we plan. And it is not part of Ceilometer or the Telemetry program; I don't think that would be in scope for it. So, under the hood: how is Moneta developed? Because we clearly want to make it a first-class citizen in the OpenStack community, it is developed pretty much like any other OpenStack component. It already supports Oslo and all the goodies that come with a traditional OpenStack project, and we use Keystone for authentication and authorization.
We also have a set of filters in our WSGI pipeline. One of the most important is clearly the RBAC one, because we want to see what roles come in and what the user has access to or can do. Then Moneta splits into two parts. One part goes to the usage engine: it says, hey, now is the time to go and aggregate some information because I need to produce a billable record; in that case it goes and collects the samples and meters and produces the usage. We will support a combination of SQL or NoSQL storage, depending on what the customer's size and requirements are. But if you are doing queries, or you are creating an account or subscriptions, you go straight to the storage and you get a request and response. Account creation works asynchronously: while my account is being created, you tell me yes, your account request has been stored, you are good to go, come back and visit this link, and we will tell you when the third-party system has really enabled the account; if not, you will receive an error. We will tell you the state of your account as reported by the third-party system through our interface. Then there is the plugin manager, and there can be several plugins, one of which will be active and will perform the billing. That one is developed by a certain vendor, talks with its own subsystem, and does all its own stuff.
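The stop/retry contract between the framework and a vendor plugin can be sketched like this. The class and method names are illustrative assumptions, not Moneta's real plugin API; the point is that a plugin can hand back an opaque state blob plus a "call me back at" timestamp, and the framework later re-invokes it with that same blob:

```python
import time

class VendorPlugin:
    """Illustrative vendor plugin; names are assumptions, not Moneta's API."""

    def create_account(self, request, state=None):
        if state is None:
            # First call: the vendor system needs, say, an approval step,
            # so ask the framework to retry later, handing it an opaque
            # state blob and a timestamp for the callback.
            return ("retry", {"vendor_ref": "pending-123"}, time.time() + 3600)
        # Callback: the framework returned our blob untouched; finish up.
        return ("done", state, None)

def run(plugin, request):
    """Stand-in for the framework's retry loop: in reality the callback
    fires at the stored timestamp; here we call back immediately."""
    status, blob, _ = plugin.create_account(request)
    while status == "retry":
        status, blob, _ = plugin.create_account(request, state=blob)
    return status, blob

print(run(VendorPlugin(), {"account": "acct-1"}))  # ('done', {'vendor_ref': 'pending-123'})
```

The framework never inspects the blob; only the plugin knows how to interpret it, which keeps vendor-specific orchestration out of Moneta itself.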
The other interesting thing we have done is for the vendor plugin: we provide an API at the billing integration interface layer where the plugin can say, stop here, or retry here, instead of having to complete everything in one go. It can pass us a blob of data, and we don't mingle with it; it is data only the plugin understands, and we don't need to. We also take a timestamp for when the plugin wants to be called back: in an hour, two days, a week, a year. Then we just call the plugin back, passing this data, and the plugin can interpret it and decide what to do. This is particularly useful because, as I said before, one operation like creating an account may, depending on the vendor, result in several steps and orchestration, and we don't want to enter that business, since it is specific to every different vendor. So, conclusions and next steps. We believe billing systems are out of scope for OpenStack; there are several already and they are perfectly fine. They've been around for a while, and billing has been done for a long time. But we also acknowledge that billing is part of the cloud system, and cloud providers and operators do billing regularly. So the enablement should be a well-established interface, so that existing systems can easily be integrated, and you as a cloud operator or provider can either develop your own plugin for an existing system, or get a solution from a third party that has already developed the plugin, and you are good to go. The main goal is to bridge these two worlds and make them work better together, rather than having these point-to-point solutions. As I said, we are developing this and we are not done yet; we are in the middle of it. So what I would like is: please get in touch with me if you have use cases, if you are interested in participating in this, if you have
ideas or things that you want to do around billing. The reason is that we are trying to see if there is a community, or to foster some interest, so that we can start working together. Okay, I think I have plenty of time for questions, so thank you.

[Question] From the architecture you've shown, I think this relates more to Ceilometer than to Moneta: if we have an application providing a service and we want to charge on the use of that service, I assume we have to make the application provide some information through Ceilometer. What does an application need to do?

[Answer] There are two ways for an application to expose usage events. You can use Oslo notifications: with that library you can emit a notification to a particular queue, notifications.info, which is the notification bus. Pretty much every service can fire an event, the event goes to the notification queue, and Ceilometer, when it's turned on, is already listening to events happening in that queue. Of course, if your event is specific and is not one of the known events in the OpenStack world, you will need to write a small plugin in Ceilometer that understands that event and transforms it into a sample and a meter. So if Ceilometer doesn't understand your particular service, you write a small plugin that fetches that event and understands what it is. Or you can push your data through the API, for instance with a client like python-ceilometerclient. The event route, of course, is better, because then we can understand it natively.

[Question] How does this connect to BillingStack, I think it was called? And what kind of projects do you cover: every object of OpenStack?

[Answer] What do you mean by "every"? From the point of view of collecting data, this is all done in Ceilometer; Ceilometer already collects all this information, and it is the usage engine on top that takes that data and transforms it. In our public cloud we do this for all types of services, even for PaaS services; we create billable items for instances, for storage, for load balancing, for database as a service, and so on. We pretty much meter everything going on in the services we offer.

[Question] I would like some precision about how you communicate with Ceilometer: do you call the Ceilometer API, or do you use a dedicated RabbitMQ to push some messages?

[Answer] No, we pull. When we want to retrieve usage, we poll the API for a particular tenant: we say, for this tenant, for this time interval or window, give me the samples and meters associated with it.

[Question] Have you considered retrieving the data from Monasca instead of Ceilometer?

[Answer] Yes, definitely, that's a possibility. Right now we are targeting what is currently available in OpenStack; Monasca is still a StackForge project, and if it becomes an OpenStack project, then definitely, whatever makes sense, even if Monasca is more skewed towards monitoring.

[Question] What is this tool you are talking about?

[Answer] Monasca. It's monitoring-as-a-service, another HP-led project that is now in StackForge, I believe.

[Question] There are situations in production with massive datasets, for example in MongoDB or MySQL, where Ceilometer typically does not perform very well; it can easily accumulate many times the information, and in our public cloud, with thousands of instances, this quickly becomes a problem, especially for queries that do not parallelize or shard very well. Does Moneta have a special approach to that issue, or do you just crunch the data and wait?

[Answer] Well, the billing system doesn't really have that much data, because the raw data is condensed into billable items on a daily basis. If you subscribe, say, to five services, you will have five line items with the usage on an hourly basis; then you have five per day times 30, and that is pretty much the amount of data. And even then, those five times 30, by the time it goes into the bill, are consolidated into a single line, because you don't want to see "instance, instance, instance" five times; you want to see "instances: X hours at such-and-such per hour", and that's the amount you have to pay. Now, if you talk about the raw, high-precision data, the events or the polls that come in, that's in the Ceilometer domain; we pull that data out and do a map-reduce operation to transform it into the billable records. So the next steps for us: first, finish the project. We are halfway through, and we are integrating with one vendor. Then, depending on how much interest there is in the community, we may open source it as a StackForge project, and from there follow the OpenStack process: incubation, and after a while you become integrated.

[Question] Why do you favor developing it internally rather than in the open?

[Answer] Good question. The reason, I think, is that we want to bootstrap it at the beginning. We have an internal need to move to a solution like this, and in the past we poked the community, probably very early in the game: we are talking about a year and a half, two years ago. It was very early; OpenStack had different problems then than it does today. So one thing is that it was too early, but also I think that in order to attract some mass you need to have a starting point, and from that starting point we can evolve it and make it more interesting. There is also the other side of the coin: if you have something concrete, vendors can test it, and you can get some traction from their side, which is interesting too, because they definitely understand that OpenStack is becoming the way IT is delivered, and being able to plug into that with a reduced barrier to adoption will definitely steer them; whereas if it's just open for discussion, maybe it's "let's see what happens".

[Question] Is there any advantage in building Moneta relying on Ceilometer, instead of just consuming notification messages directly from the message bus and also calling the API of each component?

[Answer] The reason we don't do that is, as I stated before, that Moneta is a separate component; it is not meant to replace Ceilometer. If I grab the notifications or call the services myself, I'm eliminating, or circumventing, Ceilometer. Each of those tools has a particular, specific set of requirements, and I don't think it's good to have one component that does all of that in a single step. The other reason is that you really want to be able to save the intermediate values. I don't want to poll at the end of the month to understand the usage of the entire month for a user or a tenant; that's too late. I should have this data already available, so I can validate it, check it, and see everything. So I really want this fine-grained data coming in on a regular basis and collected by Ceilometer, or whatever telemetry component there will be in the future, and then this data is fed into the billing system; that's usually how it works. If you combine the two together, it is really risky: you don't have this data in the middle, and you don't have the separation of concerns where the telemetry system is making sure that every piece of information that has potential money associated with it is captured, transmitted, and securely stored. Because there is money involved, and I want to be able to go back: if there is a billing dispute, this data is what is going to help me either resolve the dispute or refund the customer. The customer may dispute the bill and say, you are charging me $500,000, what? And then you say, well, I'm sorry, I have here all these records showing that you used all this amount of stuff. It's very important that you have this data, because if you miss some of it you may miss revenue, but worse, you could overcharge your customer. I think with that, thank you very much, guys.