Okay, hello everybody. My name is Jeremy Bourdon. I'm CEO of Hedera Technology, and I'm here with Victor, who is our main developer for Hedera Cloud Manager.

Who are we? We are a French startup of 15 people, and we work on multi-cloud management and optimization. We have been involved in the OpenStack community since Diablo; that's why we work with OpenStack. Our product, named Hedera Cloud Manager, helps companies manage their infrastructures inside or outside the data center, and of course the product itself is deployed on premise, inside our customers' data centers. Who are our customers? They are big and medium-sized companies, like telecom operators, and many French agencies.

So I know what you are thinking: "oh no, yet another cloud management platform." I know there are a lot of management platforms on the market, but I will explain why we are different. First, like many multi-cloud management platforms, we are able to manage different cloud infrastructures, inside the data center, private or public. This is a basic feature for a multi-cloud management platform. But we provide more. Our customers don't only need to spawn VMs on private or public clouds; they need control and governance over their infrastructures. They need to align their IT processes with the platform that manages those infrastructures.
That's why they need governance and control. More than that, our customer, the IT operations team, reports to the business and needs to provide quality of service to the end users; they also report to the CFO, and the CFO needs to cut costs more and more, so we need cost-effective infrastructures. That's our way of imagining and building a cloud management platform: it is multi-cloud, it is governance, and it is quality of service and cost effectiveness.

Let's look deeper at the features. Our platform is a single pane of glass to manage private and public infrastructures. We have connectors to bare metal, VMware, OpenStack, and soon Microsoft SCVMM; on the public side we have Amazon Web Services, and soon Azure, OVH, which is a French hoster, and GCE. We provide a unified service management feature across all these infrastructures. We provide automation and orchestration, linked to external tools like Puppet, Chef, The Foreman, and Crowbar, whatever you use inside your data centers. And we provide optimization tools.

What we call service management is based on policy-based governance. We have policies for the build phase, which define the resources, from compute and network virtualization up to the middleware layer, and policies that describe the behavior I want during the run phase of my services.
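This split between build-phase resource policies and run-phase behavior policies could be modeled very roughly like this. All class and field names here are hypothetical illustrations, not HCM's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePolicy:
    """Build-phase policy: what resources an instance is made of."""
    name: str
    settings: dict = field(default_factory=dict)   # e.g. flavor, image, subnet

@dataclass
class BehaviorPolicy:
    """Run-phase policy: how the instance should behave once running."""
    name: str                                      # e.g. "scalability"
    settings: dict = field(default_factory=dict)   # e.g. min/max nodes

@dataclass
class Service:
    """A service is purely a template: resource plus behavior policies."""
    name: str
    resources: list
    behaviors: list

    def instantiate(self, instance_name: str) -> dict:
        # Merge every policy's settings into one concrete description
        # that an engine could hand to a cloud driver.
        spec = {}
        for policy in self.resources + self.behaviors:
            spec.update(policy.settings)
        return {"instance": instance_name, "spec": spec}

web = Service(
    "web-tier",
    resources=[ResourcePolicy("hosting", {"flavor": "m1.small"})],
    behaviors=[BehaviorPolicy("scalability", {"min_nodes": 2, "max_nodes": 10})],
)
print(web.instantiate("web-01"))
```

The point of the split is reuse: the same behavior policy can be attached to services hosted on completely different clouds.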
We have different behaviors, like availability, optimization, or scalability, and we bring together policies for resources and policies for behavior to create a service that anybody using our system can instantiate very easily.

For optimization we have three kinds of optimization. The first is quality of service for the end user, the end-user experience. The second is operational efficiency, to be sure that the infrastructure is healthy. The third is cost effectiveness. For each of these we add a set of algorithms specifically designed to improve these three characteristics. We have an advanced rules engine to make sure the behaviors run well and to ensure quality of service. We have anomaly detectors to support operational efficiency, for example to detect a bottleneck inside the data center, for storage or something like that. And for cost effectiveness, we have agnostic capacity management to make sure the private infrastructure is used well by the different solutions that we provide and pilot inside the data center.

So, after the slides, and I hope that was not too boring, it's Victor's turn to play the demo of our software. Thank you.

I have the honor of presenting our interface on this demo server. Let me try to make it a little bit bigger. What you see here is the administrator's view of our interface; in the administrator view you see everything. But because we provide role-based access control, you can have roles: for example, a marketing team that can only create and define services, or a particular department that might see only its own running instances and not others'. There may be quotas installed that give them only a part of the resources and make sure that they stay within that part. The administration happens down here.
Otherwise, we have three blocks: you define your infrastructure, so that Hedera Cloud Manager (HCM) knows it; you build the services; and then you run instances of these services. Just as a reminder: a service is purely a definition, like a template that serves for creating instances. It is purely descriptive, purely logical, and you tailor it to your needs.

I'll start with the instances, to show you what a running instance looks like. Here we have a service which is actually two VMs running openSUSE 13.1 on an OpenStack, which happens to be a SUSE Cloud 4. On the dashboard you already see one of our main points: the metrics that get retrieved. This dashboard is completely customizable; you see there are buttons for saving or editing it, which means I can show the graphs that I want to see there. You see the available memory and the idle CPU; if you look at the numbers, both instances don't do much, so they are relatively idle. You can of course show any metrics that you want, and here you see that in one graph you have, at the same time, the per-node metrics, like the available memory for one node and for the other, and also a statistical aggregate, like an average or a standard deviation, that you can define for the whole service.

Other widgets are possible too, like the rules that have triggered on the metrics, or warnings that have been raised for your nodes. In the configuration you see what software has been installed on the VMs through HCM; in this case we didn't install anything, because we didn't put a Puppet agent inside. But let me show you another service that happens to run on vSphere. There we did configure the VM itself and there is a Puppet agent installed; if I click on it, I see details such as which master it connects to. I could change that; here it connects to the HCM server, which is the Puppet master.
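The dashboard idea of showing per-node metrics next to a service-wide aggregate could be computed along these lines. This is a minimal sketch with invented node names and sample values, not HCM's implementation:

```python
import statistics

# Per-node samples for one metric, e.g. available memory in MB,
# one list entry per collection tick (values are invented).
available_memory_mb = {
    "node-1": [912, 905, 920, 918],
    "node-2": [880, 876, 890, 885],
}

# Align samples tick by tick across nodes, then aggregate:
# an average and a standard deviation for the whole service.
per_tick = list(zip(*available_memory_mb.values()))
service_avg = [statistics.mean(tick) for tick in per_tick]
service_std = [statistics.stdev(tick) for tick in per_tick]

print(service_avg[0])   # mean of the first tick across both nodes
```

On the dashboard, the per-node series and the aggregate series can then be drawn in the same graph, which is what the demo shows.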
Now, we use Puppet and we are quite happy with it, but as you will see, nearly everything in HCM has been built to be modular and extensible. So plugging in something like Chef, which you might prefer because you already do everything else with Chef, is entirely feasible.

Let me go back to our openSUSE service. In the resources I see that we have two VMs, and I can see the details of one. I can stop it; I can scale CPU or scale memory, which in the OpenStack sense is a change of flavor, a resize operation; I can migrate the VM to a different hypervisor; or I can just see which IP addresses they have and then, if SSH is open, connect over SSH or do whatever I want with the IP address.

In the monitoring section you see the list of metrics, and you can combine them to get more. Our goal is really to get as many metrics from the whole stack as possible. We can ask the hypervisors; we can ask the system running inside the VM; we can even get data from the applications, if they expose it in one form or another. For example, we can get statistics from Apache. That way we can create rules that combine metrics from the whole stack.

On the collection side, in this case we get our metrics directly from the VM through SNMP. But of course, connectivity between the HCM server and the VMs is not always a given. If there is no connectivity, what we can still do is connect to the controller node and ask Ceilometer for its metrics; that way we can have data about a VM through Ceilometer even without a direct connection to the VM. It is equally possible to get the data from something else: on vSphere we can ask vCenter for data about the VMs; in the Microsoft world we could connect to an SCVMM; we might even think of connecting to your Nagios system to collect the data that you want to have inside HCM.

We also have a nice graphical editor here to combine metrics in a way that gives you a real-time preview directly. For example, here you have the total memory, and because the data is already there, you immediately see what curve it would give you. This way you can have metrics for a node, or you can use statistical functions to combine metrics for the whole instance.

Now we have this data; we collected it all and we can visualize it. Great. Now, what are we going to do with it? We have several engines in HCM that work on these data. One is the anomaly detector, which allows you to look at the collected data later on and to identify peaks, to identify things considered an anomaly. You have to configure it a bit: you have to give it a periodicity, to let it know what the period of your workloads is. Then it can also do forecasting, and you can be informed when the forecast does not match reality; that will be considered an anomaly. In the rules part, we can define short-term actions. So we just got these metrics; now, what are we doing with them?
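The periodicity-plus-forecast idea just described can be sketched simply: learn a per-slot baseline from past workload periods, use it as the forecast for new samples, and flag samples that deviate too far. This is an illustrative reconstruction under stated assumptions, not HCM's actual algorithm, and all names and values are invented:

```python
import statistics

def forecast_anomalies(history, recent, period, tolerance=0.5):
    """history: past samples; recent: new samples; period: cycle length."""
    # Forecast = mean value observed at each position inside the cycle.
    baseline = [statistics.mean(history[i::period]) for i in range(period)]
    flagged = []
    for i, value in enumerate(recent):
        expected = baseline[i % period]
        # A relative deviation beyond `tolerance` counts as an anomaly.
        if abs(value - expected) > tolerance * max(expected, 1.0):
            flagged.append(i)
    return flagged

history = [10, 50, 55, 12] * 2        # two clean periods of CPU load
recent = [10, 300, 55, 12]            # a spike where roughly 50 was forecast
print(forecast_anomalies(history, recent, period=4))   # -> [1]
```

The key user input, as in the demo, is the periodicity: without knowing the workload's cycle, the detector cannot tell a daily peak from a genuine anomaly.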
We look at the policies that you defined; you can define them for the whole service or just for this instance. You can define, for example, that the free CPU is too low, that the available memory is too low, or maybe that Apache is getting too busy. So what are we going to do? We might spawn another node. If we are talking about metrics for just one node, we can scale up and change the flavor to get a stronger VM. We can also just send emails, call a web service, or go into your existing workflow manager and tell it to launch an action, so integration with your workflow managers is also possible. Here, for example, I can do something with one node, I can scale it up, and of course the opposite works the same way: if there is not enough happening, then we can scale down, we can take a node away. Our goal is always to make the most efficient use of your resources for the current situation.

You also see the events and alerts that have happened on the service, and the workflows that have been run. Sorry, this resolution is slightly different from the one I am used to.

Another thing we can do with our metrics is put them into a correlation, to see whether there is a connection. To give you a very easy example, here we see how much memory is available and what the CPU does.
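The rule-driven scaling just described pairs a metric condition with an action such as scaling out, scaling in, or notifying an external system. A minimal sketch of that decision logic, with invented metric names and thresholds rather than HCM's real rule syntax:

```python
def evaluate_rules(metrics, node_count, min_nodes=2, max_nodes=10):
    """Return the action a rule engine might take for one service."""
    if metrics["cpu_idle_pct"] < 10 and node_count < max_nodes:
        return "scale_out"   # busy: spawn another node
    if metrics["cpu_idle_pct"] > 80 and node_count > min_nodes:
        return "scale_in"    # idle: take a node away, but stay above the minimum
    if metrics["mem_available_mb"] < 256:
        return "notify"      # e.g. send an email or call a web service
    return "none"

print(evaluate_rules({"cpu_idle_pct": 5, "mem_available_mb": 900}, node_count=2))
# -> scale_out
print(evaluate_rules({"cpu_idle_pct": 95, "mem_available_mb": 900}, node_count=2))
# -> none (already at the two-node minimum kept for high availability)
```

The min/max bounds are exactly what the scalability policy contributes, so the engine never scales below the high-availability floor or above the quota ceiling.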
In our case there is a correlation: most of the time there is nothing to do, so there is a lot of CPU free and a lot of memory free, and that shows a nice correlation. Another thing you could put into perspective would be the CPU load or the free memory against the number of requests that your Apache, or your nginx, has to treat simultaneously.

To show you a completely different service, here are the hypervisors of vSphere. What we see here are the metrics that we got directly from vCenter for its hypervisors. Again, it shows you memory and CPU usage. You can define for how long these will be shown; here, in all cases, they have been shown for the last nine hours, but that could be any period you'd like.

So that's it for showing your running instances. Now I will show you how these instances get created, because after all, that is the goal of automation: you create your services, and from these services you can create many, many instances that all behave in the defined way.

First of all, you have to tell HCM what you actually have. We still do bare-metal deployment, so you can enter your different hosts and HCM can deploy bare metal on them, but these days, of course, the main interest is that you register your existing clouds. In this case we have an OpenStack, this SUSE Cloud, and we have a vSphere infrastructure. So here you see in one screen what is happening on your clouds, and this data is not just what HCM launches on them: we have asked vCenter and OpenStack directly how much is going on. So these are the total numbers for your clouds, and then, for the OpenStack for example, you can see which hypervisors we have and which VMs are running on them.

Now we come to the services that I have already mentioned so often. Services are composed of seven policies.
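The correlation view shown a moment ago in the demo, idle CPU against free memory, amounts to a Pearson correlation coefficient between two metric series. A self-contained sketch with invented sample values:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# A mostly idle service: when the CPU is idle, memory is free too,
# so the two series move together and correlate strongly.
idle_cpu_pct = [95, 90, 97, 20, 92, 15]
free_mem_mb = [900, 870, 910, 300, 880, 250]
print(round(pearson(idle_cpu_pct, free_mem_mb), 2))
```

The more interesting pairing mentioned in the talk, request rate against CPU load, works the same way; a strong correlation there tells you which metric is actually driving the load.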
I won't go through the details of all of them, but just to show you an example for the OpenStack case: all these forms are dynamic, which means that when I choose a field, in this case the OpenStack, I then see below it the values and fields specific to OpenStack. So in this case I choose the project, that is, the tenant; I choose the availability zone; and I choose the flavor. Each time I use this policy, it will create, in the nova availability zone, in the HCM tenant, a VM of type m1.small. I can also leave fields free; I can choose not to define the flavor, for example. Then I have to define it later: at each stage you have to define the things that have not been defined earlier on.

And how do these policies get combined into services? Or rather, here you see what a service really is. Let me show you the openSUSE service: you choose one of each policy. The idea of having your policies defined explicitly beforehand is that you can reuse them and mix them, because many of them are independent of each other. The scalability policy, or maybe the orchestration policy, doesn't even have anything to do with a particular cloud, so you can reuse them between services. Other policies are a bit more dependent on each other. Here you see the hosting policy, which defines the type of VM that runs; the storage policy, which in our case defines the master image used; and the network policy, which gives your VMs their subnets, chosen of course among the infrastructure that we have found inside your OpenStack. The scalability policy says how much you can scale: you could define that there must be a minimum of two nodes at all times, so there are at least two for high availability, and maybe up to ten, so as long as you have to scale out you can go up to ten. The system policy deals with assigning volumes and with the components that you want to install inside your VM, and the orchestration policy can pre-define metrics and rules that trigger.

So here you see that with this system our goal is really that you have all your clouds, all your services, and all your instances in one hand, that you have a global view of what is happening, and that you can be sure you can enforce the policies that you want.

I don't know whether we have time for questions; we are running out of time. We have a booth over there, it is E58, for the last 30 minutes, because the hall is closing, so be quick if you want to come by. We have some very cool pens to give you. Thank you for your attention. Don't hesitate to go to our website or our Twitter, and to contact us directly. Thank you. Thank you.