Let me introduce myself. I am Govinda Das; I work for Red Hat in the storage team. Today we are going to talk about how quickly you can open up your data center using an open source project called oVirt. Before going into oVirt (is it audible?), let me show you who the people behind it are. Here you can see the board members of oVirt: Red Hat, Cisco, Intel, all the big names. From that you can guess how important the project is. On top of the oVirt open source project, Red Hat has an enterprise-level product called RHV, Red Hat Enterprise Virtualization. As I mentioned, it's an open source project, so all of us, everybody, can contribute and try it, because it's open. There are certain ways you can contribute to the project: we have mailing lists, we have an IRC channel, and we use Bugzilla, so you can try the project, and if you find any bugs you can file a bug; we will be happy to resolve those things. We use Gerrit for code review, and of course it's a Git repo. Before going to the oVirt part, let me talk about some basic things that oVirt uses underneath. As you know, oVirt is a virtualization management tool, so it uses some virtualization technologies to meet that goal. libvirt manages both KVM and QEMU, and it contains three utilities: an API library, a daemon (libvirtd), and a command line tool (virsh). libvirt is quite effective and can manage a lot of hypervisors. So when you use them all together, a hypervisor, an accelerating agent, and a management library, you get a complete hypervisor solution. Let's take an example: let's create a VM on QEMU/KVM using the CLI. The command will look like this.
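The command itself isn't captured in the transcript. With virt-install you would typically pass flags like --name, --memory, --vcpus and --disk; underneath, these CLI tools all hand libvirt a domain XML definition. Here is a minimal sketch of building such a definition with the Python standard library (the VM name, sizes and disk path are illustrative, not from the talk):

```python
# Sketch: the kind of domain XML that virt-install/virsh hand to libvirt.
# The name, memory size, vCPU count and disk path are illustrative.
import xml.etree.ElementTree as ET

def build_domain_xml(name, memory_mib, vcpus, disk_path):
    domain = ET.Element("domain", type="kvm")           # a KVM guest
    ET.SubElement(domain, "name").text = name
    ET.SubElement(domain, "memory", unit="MiB").text = str(memory_mib)
    ET.SubElement(domain, "vcpu").text = str(vcpus)
    os_el = ET.SubElement(domain, "os")
    ET.SubElement(os_el, "type", arch="x86_64").text = "hvm"  # full virtualization
    devices = ET.SubElement(domain, "devices")
    disk = ET.SubElement(devices, "disk", type="file", device="disk")
    ET.SubElement(disk, "source", file=disk_path)
    ET.SubElement(disk, "target", dev="vda", bus="virtio")
    return ET.tostring(domain, encoding="unicode")

domain_xml = build_domain_xml("demo-vm", 2048, 2,
                              "/var/lib/libvirt/images/demo.qcow2")
```

You would feed XML like this to `virsh define` or to libvirt's `virDomainDefineXML` call; in an oVirt deployment, the VDSM agent drives this same libvirt layer on every host.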
Okay, so there is also virt-manager, which you can use to create VMs using a UI. Then the question is: if I can create a VM using a UI or the CLI, why do we need oVirt? Before going to that, let me introduce what oVirt is. As I said, oVirt is a virtualization management tool. It manages hardware, it manages storage, it manages networking resources, and underneath it uses KVM as the hypervisor. If you go to the architectural level, you can see the engine core, which is basically the oVirt engine code, written in Java. Then you have the hosts; on each host an agent called VDSM is running, which talks to the physical host and sends the responses back to the oVirt engine. oVirt provides different access controls: we have a user portal and an admin portal. The admin portal is meant for the admin, who is fully privileged to access all the resources. The user portal is specifically for control over VMs: a user can create VMs and manage them, but has no other control. And integration is very easy: if you want to integrate oVirt with any other application, we provide a REST API and SDKs (a Java SDK and a Python SDK). So what does the oVirt portal look like? This is the landing dashboard, as you can see. oVirt can manage multiple data centers; here is a data center, and within a data center you can have multiple clusters, and so on. Then there is the virtual machines portal. This is what it looks like: you can manage all the virtual machines here, and in the top right corner you can see all the available actions. And this is what the user portal looks like: you have access only to your VMs, and you can manage them all from this portal. Okay, so as I said, we already have virt-manager, so why do we need oVirt? The main reason is high availability.
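On the integration point: the REST API is plain HTTPS, so even without the SDKs you can talk to the engine with standard tooling. A sketch of building such a request with only the Python standard library (the engine host name and credentials are placeholders; /ovirt-engine/api is the API base path the engine exposes):

```python
# Sketch: constructing a request against the oVirt engine REST API.
# Host and credentials are placeholders; nothing is actually sent here.
import base64
import urllib.request

def build_api_request(engine_host, user, password, resource="vms"):
    url = f"https://{engine_host}/ovirt-engine/api/{resource}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")  # the API also speaks XML
    return req

req = build_api_request("engine.example.com", "admin@internal", "secret")
```

In practice you would use the Python or Java SDK instead, which wraps exactly these calls for you.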
You can't achieve high availability using virt-manager. There are a lot more features, but the basic thing is high availability. Okay, so if you go to the architectural level, there are three hosts, which you can see in the lower part. These three hosts are managed by the oVirt engine. When you deploy the oVirt engine, it basically runs inside a VM, and based on host availability that VM will migrate from one host to another, so you achieve high availability: your engine is always up and running. Let's say anything goes wrong with host one; then your engine will migrate to another host which is up and running. Okay, so this is a single-host environment: you have a data center, you have oVirt running, and you have only one host. Here you are not bothering about high availability; you can create multiple VMs, as your configuration allows. If you go to a multi-host environment, this is how it looks. Here the benefit is that some VMs are running on one host, and let's say something goes wrong with another host where other VMs are running; you can migrate your VMs from one host to another, so your VMs will always be highly available, with no downtime. And this is a multi-data-center environment: oVirt can manage multiple data centers, and of course within a cluster you can migrate your VMs from host to host. Okay, so now let's come to the features. What are the main features provided by oVirt? Obviously one of them is high availability. Another feature is live migration: you can migrate your VM between hosts with zero downtime. Then the system scheduler: the scheduler is always running and always checks the load balance of the VMs. Let's say you have a three-node cluster, and certain VMs are running on each node.
So let's take an example: one of the hosts is running with a high load. The system scheduler will keep checking and will try to maintain the balance; we'll go deeper into that on the next slide. Power saver: you can save power. At certain times you'll feel that not many users are active and the load is very low, so you can save some power: all the VMs on one host are migrated to the other hosts, that host is shut down, and your applications still continue. In that way you can save power. Maintenance manager: there will be no downtime for the virtual machines while a node is in maintenance mode. Image management: this is basically template-based provisioning, thin provisioning, and snapshots. You can create a snapshot of a VM in a certain state, so that whenever you restore the snapshot you get back the same state the VM was in when you took it. Monitoring and reporting: it monitors the VMs, guests, hosts, networking, and storage, and provides reports. We have an import/export feature, so you can import and export VMs and templates using OVF files. Then V2V: you can convert VMs from other providers, like VMware, to oVirt/KVM. So now let's come to high availability. If you look at the picture, the first host somehow goes down. What oVirt does is keep monitoring, and it checks and sees that this node has some issue and is not responding at all. So what about the VMs which were running on that host? What it will do is start those VMs on the other hosts, because we have shared storage where all the VM images are shared across the nodes; it will simply start all those VMs on the nodes which are up and running. Once the failed node comes back up, it can then migrate those VMs back to that node.
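The restart flow just described can be modeled roughly as follows; this is a toy sketch of the policy, not oVirt's actual engine logic, and the host and VM names are invented:

```python
# Toy model of the HA restart policy: when a host stops responding,
# its VMs are restarted on surviving hosts, which works because all
# images live on shared storage. Host/VM names are illustrative.

def restart_failed_vms(placement, host_status):
    """placement: {host: [vm, ...]}; host_status: {host: 'up'|'down'}."""
    survivors = [h for h, s in host_status.items() if s == "up"]
    new_placement = {h: list(vms) for h, vms in placement.items()
                     if host_status[h] == "up"}
    for host, vms in placement.items():
        if host_status[host] == "down":
            for i, vm in enumerate(vms):
                # round-robin the orphaned VMs across the surviving hosts
                target = survivors[i % len(survivors)]
                new_placement[target].append(vm)
    return new_placement

placement = {"host1": ["vm-a", "vm-b"], "host2": ["vm-c"], "host3": []}
status = {"host1": "down", "host2": "up", "host3": "up"}
result = restart_failed_vms(placement, status)
```

The key property is the one from the talk: every VM survives because any up host can open its image from the shared storage domain.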
Live migration means all the nodes are up and running, but I want to migrate some VMs from one node to another without any interruption: nothing should stop, whatever application is running should keep running, and even an I/O-intensive workload should not be interrupted. Everything should go on as before, while in between the migration happens from one node to the other. Next, the system scheduler. In the first picture you can see that one of the nodes is in a critical state; it has gone up to 90 percent load on its CPUs. What the system scheduler will do is try to balance the load between the hosts that exist in the cluster. It will move a certain amount of load to another node which has a low load; let's say the other node is at 30 percent. It will move some load over and maintain 60 percent and 60 percent. In that way the system scheduler maintains the workload. Then the power saver: once all the VMs have migrated off a node, you can shut it down, or you can move it into maintenance and do whatever is required. Another thing is the notification service. oVirt has a built-in notification service; we are using the Nagios framework for that. You can configure certain things, saying these are the critical things I have to monitor, so that if anything goes wrong I get a notification. Let's say something happens in my storage: if you are using Gluster, something happens to a brick or to a volume, whatever I am using. What I can do is configure my notification service for that. Or let's say my device is going to be full: I can put in a threshold, like warn me after 80 percent, so that I am notified early and can take action.
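The 90-percent/30-percent example can be sketched as a simple balancing rule; this is a toy model of the idea, not the real oVirt scheduler:

```python
# Toy model of the system-scheduler balancing described above: shift
# load from the most loaded host toward the least loaded one until
# they are within a tolerance of each other.

def rebalance(loads, tolerance=5):
    """loads: {host: percent_load}. Returns an evened-out copy."""
    loads = dict(loads)
    hot = max(loads, key=loads.get)
    cold = min(loads, key=loads.get)
    while loads[hot] - loads[cold] > tolerance:
        shift = (loads[hot] - loads[cold]) / 2   # move half the gap
        loads[hot] -= shift
        loads[cold] += shift
        hot = max(loads, key=loads.get)
        cold = min(loads, key=loads.get)
    return loads

balanced = rebalance({"node1": 90, "node2": 30})
# both nodes end up at 60 percent, matching the example in the talk
```

In oVirt the "load" being shifted is of course whole VMs moved by live migration, so the split is coarser than this continuous model suggests.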
So you can configure different notifications so that you get notified via events and via email; of course, for the email service you have to configure the SMTP server and so on. Okay, so these are the key things maintained by oVirt. The first thing is simplicity: it's very, very simple. You run one command, it installs within seconds, within a few more seconds you set things up, and your data center is up. Then stability. Then functionality: there are a lot of features which solve real problems. A product can't survive in the market, or do good business, if it never solves the customers' real problems, and oVirt has a lot of functionality. It has security: it inherits SELinux security. And of course it has a very large community; in the first slide you saw who the board members of the oVirt community are, so it has large community support. As I said, you just install the oVirt engine package, which does the installation for you, and run engine-setup; you have to provide only two or three inputs, like the database credentials and which database you are using, and you are done. You have the SDK and the CLI, and RESTful services, so if you want to integrate with another project you can do it easily. Simplicity also means usability: once you try it, the user experience is very, very simple. Everything is collaborative here; you can go and try it, since of course it's open source. As for stability: as you know, Red Hat has a downstream product from oVirt, RHV, Red Hat Enterprise Virtualization, so it's pretty stable. Now functionality: we have a lot of functionality, and I have put some of it here. Let's say disaster recovery, geo-replication, live migration: all of the features I talked about are available.
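The 80-percent storage threshold mentioned a moment ago boils down to a check like this; a sketch of the idea only, since the real notifier is configuration-driven:

```python
# Sketch of the threshold-based notification idea: compare usage
# against a configured threshold and emit a warning event. The values
# are illustrative; the real service reads its thresholds from config.

def check_capacity(used_gib, total_gib, threshold_pct=80):
    pct = 100 * used_gib / total_gib
    if pct >= threshold_pct:
        return f"WARNING: storage {pct:.0f}% full (threshold {threshold_pct}%)"
    return None

alert = check_capacity(used_gib=850, total_gib=1000)   # 85% -> warning
ok = check_capacity(used_gib=400, total_gib=1000)      # 40% -> no alert
```

The point of the threshold is the one made in the talk: you get warned before the device is actually full, while there is still time to act.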
So you can do disaster recovery. I want to take care of my data, the customer data. What I can do is this: India is my primary site, and I can install a secondary site in Europe, and I can schedule the data and push it to the secondary site. Then if any disaster happens here, my business should not stop; I can recover the data from the secondary site and rebuild my business. Security, as I said: it inherits the SELinux security features. Then the large community: you can see the commits happening regularly; from the last 10 days I can show you the commits that happened. This is how the community supports oVirt. For installation, Red Hat has another product called RHHI, Red Hat Hyperconverged Infrastructure. It's a UI-based installation and it's very, very simple; you guys can go and see it, and I have a setup at the booth you can look at as well. Okay, so, any questions? You mentioned the power saver feature. How does it work? Does it require manually shutting off the... Can you speak louder? Okay, is it audible? Yeah. You mentioned the power saver option, the power saver feature: when a node is inactive, what tells this node to shut down? How does it function? I mean, does it require a manual shutdown of the node, or how does it work? No. Whenever you are monitoring your infrastructure or application, you can see the utilization of the CPU or any high load on the system. So yes, of course it requires some manual intervention, because I don't know in advance when I'm going to save power; or you can schedule it. The way it works is that we have out-of-band management, so we control the power of the host. Whenever there is a policy that lets us migrate enough VMs out of one host, we can shut it down.
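That last answer, migrating enough VMs off a host before powering it down, can be modeled as a simple capacity check; an illustrative sketch only, since oVirt actually drives this through its scheduling policies and out-of-band power management:

```python
# Rough model of the power-saver decision: a host may be powered down
# only if the surviving hosts have enough spare capacity to absorb its
# load while keeping some headroom free. All numbers are illustrative.

def can_power_down(host, loads, capacity=100, headroom=20):
    """loads: {host: percent_load}. Survivors keep `headroom` percent free."""
    spare = sum(capacity - headroom - load
                for other, load in loads.items() if other != host)
    return loads[host] <= spare

loads = {"host1": 10, "host2": 40, "host3": 35}
low_night_load = can_power_down("host1", loads)             # 10 <= 40 + 45 -> True
busy = can_power_down("host2", {"host1": 70, "host2": 40})  # 40 <= 10 -> False
```

Actually powering the host off and waking it again then goes through the fence agents (IPMI, iLO, and so on), as the next answer explains.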
Now, we know how to start it again, because we control the out-of-band management via IPMI or iLO or whatever you have. You usually do it on a schedule: for example, at night you want to condense your workloads and save on electricity, and during the day you want to expand as much as possible. So it's a scheduling policy; apart from configuring the out-of-band management, which is usually a good idea anyway for fencing, there's nothing that needs to be done. Is it similar to VMware ESXi, I mean vSphere, the commercial version? Yeah, it's basically similar; whatever features you're mentioning, it's exactly the same area where vSphere competes. Okay, so obviously there will be competitors in the market. I mean, that is a licensed version, no? It's not free. Yeah, oVirt is free; it's open source. For the out-of-band, do you depend on the vendor's out-of-band, like iDRAC or iLO, or do you have your own? We support all the fence agents: there's a fence-agents package, and you can configure or modify your own, but it's the same package that, for example, Pacemaker, the high-availability stack, uses, and it is kept updated to support Dell and HP and whatever new generations they come up with. We have not invented our own. I think one of the main advantages of oVirt is that it's very much an integration project. We use existing functionality; we don't reinvent the wheel. We use libvirt, we use SPICE, we use the underlying storage features of Enterprise Linux, and in the same manner we use the fence agents, for example. Half an hour was really too little time for this topic, actually. I wanted to know two things. What kind of storage can I use? Can I use SAN, like a block device, or must I use NFS or something like that? So that's the first question. Second question:
When there are two things happening, one being the live migration you mentioned, and the other being when your KVM host, your hypervisor, goes down and the VM moves to another KVM host: in that migration, do we see any lag, any breaking? Like, if I have nginx running on one VM and that VM moves to another KVM host, do you see any downtime there? No, there is no downtime. Basically, when one of your nodes goes down, the general thing is that the VMs running on that node would no longer be functional. To overcome that, we have shared storage where all the VM images are shared across the nodes; basically, every node can access those VM images. You mean the hard drive part, the image, the qcow file? Yes. It will start whichever VMs were on the node that went down; it will start those VMs on the other node. Okay, so there will be no downtime. Okay, so if I ping a particular machine, a particular VM, will the ping keep going, or will there be a... Yeah, you can access the VMs. Okay, and what about the storage? Yeah, oVirt supports multiple storage integrations, like Gluster and NFS. Block devices also, block storage also? Yeah. Okay, and a last question: is there any project going on for Ubuntu? Because right now, as far as I know, it's for CentOS and RHEL. I don't think there's enough community interest in Ubuntu right now, but patches are welcome. We've had some parts, for example mainly on the guest side: Debian support for the guest agents. Right. That is something the community wanted, and so we did it. Yeah, because I tried to work on the VDSM part for Ubuntu, because right now my company has only Ubuntu; I mean, oVirt is the only reason why we are thinking of moving to CentOS or RHEL. Yeah, it's a good reason. Yeah, I mean, if I have some workaround or something, we will not move. So the way to do it is the community way.
First of all, send patches; and second of all, or even first of all, see whether there is mutual interest in the community. Maybe there are others who are quiet but are thinking, yeah, we want Ubuntu too; so join forces with them, and we'd be happy to support Ubuntu as well. We're now focusing on whatever is the priority of whoever contributes, but if there's demand for it, it will come. There's no technical reason not to support Ubuntu; there are only a few changes that are very specific. Right. How can I follow what is going on, like any deployment or development happening on that? devel@ovirt.org. We're basically mostly on the mailing lists and on IRC, so devel@ovirt.org and users@ovirt.org. Okay, thank you.