Hello everybody. I believe we can start. My name is Livnat Peer. I have been working at Red Hat for the past five, almost five years, and I'm here today to share with you why oVirt is a great choice when it comes to managing your KVM-based virtualized data centers. How many of you have heard about oVirt before? Wow, that's a good crowd. And how many of you are familiar with OpenStack? Okay, great. One last question: and VMware? Okay, so I have a good audience here. And I want to start with a short story. So five years ago, there was a startup named Qumranet, and they had many talented engineers. One of them is Avi Kivity, the founder of KVM. Qumranet had multiple projects, but KVM was obviously the biggest and most important one, and the focus of the company. The company also wanted to enable management of the KVM-based virtual machines, so they had to work on their own project for managing those virtual machines. Obviously, the KVM guys will try to convince you that the command line is great. Well, I'm sure everybody likes it, but it doesn't scale. Well, it does scale, but it's not user-friendly, I would say; it depends on the users, again. So at Qumranet, they built a management solution to be able to have higher-level orchestration operations like VM migration and scheduling of VMs (VM placement, if you're more familiar with that term) on top of KVM, obviously. For me, and this is the last personal part here, I joined Red Hat to work on open-source projects, and I joined Red Hat to work on Java. Part of the management solution we're going to cover today is written in Java; part of it is in Python. So I joined to work in Java on an open-source project, and I came to Red Hat, and my first task was to work on a closed-source project written in C#. Yeah: Red Hat had actually acquired Qumranet; it was one of the acquisitions they made at the time.
And my first task was to convert the code to Java and to make sure it's open source within the next few, I would say, years, as it turned out. Yeah, it took us some time, but we got there. So that's how oVirt started. And now we'll start looking into the bits. So what is oVirt? It's a centralized management solution focused on, or tailored for, KVM as the hypervisor. It leverages all the Linux good stuff, scalability and security, and it is open source. It's an open-source alternative to VMware's vSphere and vCenter. Who is behind it? We have all the big companies as part of the governance that we have. oVirt is supported by IBM, by Red Hat (I'm working for Red Hat), by NetApp, et cetera. oVirt is by default packaged for Fedora and CentOS, but there is community work currently to package it for Gentoo, I believe, Ubuntu and Debian. So you can choose your flavor of Linux, go to the website and see if it is supported. And just one last note: the Red Hat Enterprise Virtualization product is derived from the oVirt project. So what setups do we support? What kind of deployments are we expecting? Not the workload, this is still the deployment part. So, basic stuff, like one host with multiple virtual machines in one data center. A data center is a logical entity that serves as a container for all the resources required by the virtual machines, like storage and networks. A more interesting deployment would be a multiple-host environment with a cluster. A cluster is a migration domain in oVirt; that means all the virtual machines can migrate from one hypervisor to another. I'm going to use the words host, hypervisor and node interchangeably, if you don't mind; they are all physical machines. Actually, I have this pointer, I can do it like that. A more interesting setup would be to have multiple data centers, multiple clusters in each one, and VMs migrating between the different hosts in the same cluster. That would be a typical deployment of oVirt. Who is our audience?
What kind of users does oVirt support? We have three types. The most basic one is the sitting penguin. He doesn't do much; he only consumes his virtual machines. He can start and stop them. Just to make sure you know what I mean: this is the user portal he logs in to. He can see the virtual machines that were already allocated for him by the administrator. He can start and stop his virtual machines, and basically that's it. So it's quite a basic user. A more interesting user is the dancing penguin, who gets the self-provisioning portal. He can also create virtual machines for himself, based on a quota that was allocated to him by the administrator. So there is a quota for CPU, for storage, for memory; he gets a predefined quota and he can create his own virtual machines, also based on templates. Templates are something I'm going to describe a little bit later, but a template is essentially a static image plus a virtual machine configuration, an aggregation of the two, and it can be used for creating a virtual machine. And the last user, which we're going to focus on today, is the administrator: the administrator with a great toolbox. This is what he gets, the administration portal. He can manage technically all the resources. He can configure the network (he should configure the network) and the storage, create a VM, a template, whatever he wants. And we'll see what great functionality he has to play with. So those are the three types of users that we have in oVirt. When it comes to choosing your management solution, for me it's about three things you have to consider. One of them is simplicity. You don't want to spend your time configuring the management solution. You want to spend your time on capacity planning, on deployment, on the architecture of the deployment, whatever you want.
You don't want to waste your time on patching the management, because it's boring; that's why the management is there. It should already be ready. It should be customizable: if you want to have a specific management setup, you should be able to do that. But out of the box it should be as simple as possible. The second thing I would consider is stability. Nobody likes his workload crashing and burning. Nobody likes his database getting lost. Nobody likes his data disappearing. So stability is definitely one of the considerations when it comes to choosing your management. And the last one: functionality. Obviously you have to understand what kind of workload you have and what kind of functionality you need, and only then choose your management solution. Because you can go and choose a big, complicated management solution that would be too big or too complicated for you to use, while you could use something simple and get the same basic stuff that you actually need. So it's an important consideration when it comes to choosing your management solution. So let's see how oVirt meets those criteria, or what oVirt has to offer on each of them. Okay, when it comes to simplicity, let's start with installation. Installing oVirt is as simple as yum install, technically, if you have Fedora or RHEL or CentOS. You do yum install and then you run engine-setup. Quite simple; you have your setup up and running. So installation-wise it's quite comfortable. User interaction: everyone has his own favorite way to communicate with the application. It could be an intuitive UI, if you're using a UI. It could be a Python SDK or CLI if you're a Python enthusiast. And it could be a Java SDK if you come from Java; if you come from Java, you probably don't want to use Python, and the other way around. So you have all those choices in oVirt: an intuitive UI, a Python CLI and a Python SDK. You can obviously also go to the REST API, which is available.
If you really want to do all the programming stuff, you can work directly against the REST API. The next thing is oVirt Node. oVirt Node is just enough Fedora to be able to function as a hypervisor. Fedora has a lot of packages, and you don't necessarily want all of them if the machine is only going to serve as a hypervisor. So what we did in oVirt is we packaged just enough Fedora (the image size is about 170 megs) to be able to function as a hypervisor. That makes our life easier, and the user's life too. And when it comes to security, I think you know that the fewer packages you have, the better security you can get. So that's why we have oVirt Node. Configuration-wise: let's assume you already set up your environment, you can use oVirt and everything is fine, but now you want to tweak it. You have a centralized place where you can tweak all the configuration that is related to oVirt, which is a big benefit: you don't have to look for specific configurations. If you've used a management solution before, you know that you usually have to go to different consoles and different config files and look for things. In oVirt it is all in one place, one utility that you can use. Okay, there's also a quick start for those who are interested in doing a quick proof of concept. You can do yum install ovirt-engine and use the all-in-one plugin. The all-in-one plugin will deploy your management solution, your hypervisor, your database and your storage on the same machine. So that would be a quick start if you have a single machine and you just want to know what oVirt is; that would be a good way to start. You can then run engine-setup, connect to the UI, and there you go. Now, to make this session more interesting, I chose to do a live demo. What I did do in advance, to make sure that everything works, is this: I have my laptop, and on it I have a virtual machine running.
In that virtual machine I did the installation and the configuration, just to save us time. And in that virtual machine I'm going to use nested virtualization to bring up a VM within a VM, just so we can see it working without extra hardware. One more thing I did in addition to those two steps: I copied an image to the ISO domain. I'll show you why; I wanted to save time again, not because it's too complex. Okay, so this is what you get after the installation: you go to the web UI and you get the oVirt portal. You have a user portal, you have the administration portal, and you have reports. Well, reports are advanced functionality; I'm not going to demonstrate that. So we're going to the admin portal. What I'm going to show you is how quick it is to set up your own virtual machine. It's a little bit slow because of the nested virtualization I have here, so bear with me; it's not oVirt, it's just my laptop. So this is what you get once you're done with the all-in-one installation. You get a dedicated local data center. Storage, with an ISO domain: the ISO domain is the domain that holds all your images for the virtual machines, and you can boot from the images in it. Local storage: when I create a new disk for my virtual machine, it is stored on the local storage. Templates. One cluster by default, which is a local cluster. Hosts, which includes my own host because I used the all-in-one setup, so my host should appear here. And VMs. So I want to create a new VM. All I have to do is go to the virtual machines tab (and actually that's the default; I just played with it here), click New VM and give it a name: "live demo". You can give a description if you want. In this case, I'll go to the boot options. We're going to cover all of this a little bit later; I just want to show you how easy it is to start a VM. So, boot options: I want to boot this VM from a CD, not from PXE; I don't have PXE here. CD it is; I want to attach a CD.
I copied two images; one is just a tiny image of Linux, so the VM will start. After creating the VM, I could configure another disk and another network interface, but I don't want to do that right now; I just want to create the VM. So the VM was created. Here is the VM. I can start it. And then you can see here the status, which is "Wait for Launch". I'm not sure you can see it on the big screen, but this reflects the status. And in a second we're going to open a console to the virtual machine. So this is a console to your virtual machine. I'm going to boot the image, hopefully it will work, and we'll see the console there. So creating a virtual machine is as easy as that when you do a POC; it's quite easy to get started with. Booting Linux should be faster than that, I believe, but hopefully it will be fine. In the meantime, the UI comes up, and here we have SPICE. SPICE is an open-source protocol optimized for virtualization, for connecting to your virtual machine. It's not the purpose of this session, but, okay, the point is that you saw the VM. I don't want to spend more time; I believe you know what Linux looks like. So we have a running virtual machine. It's as easy as that. Just to show you the functionality that we have here before we move on: when you create a VM, in addition to choosing a name and the basic stuff, there are advanced options. What can you control? You can control high availability, for example. You know what, I want to make sure I have enough time, so I'll go over the slides, and then if we have time, I'll come back and show you some great stuff in the live demo. But I don't want to spend all the time on that. Okay, so live demo: checked, it worked. This slide is the backup in case the live demo didn't work, so we're fine. Okay, moving on to stability. So the first thing was simplicity.
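By the way, the create-a-VM flow I just clicked through can also be scripted. As a rough sketch (the element names follow the oVirt 3.x REST API conventions; the helper function itself is illustrative, not part of oVirt), the REST API takes an XML body describing the VM, which you could build in Python like this:

```python
# Sketch: build the XML body that oVirt's REST API expects when
# creating a VM (POST /api/vms). Element names follow the oVirt 3.x
# REST conventions; treat the helper as illustrative.
import xml.etree.ElementTree as ET

def vm_request_body(name, cluster="Default", template="Blank"):
    """Build a minimal <vm> XML document for VM creation."""
    vm = ET.Element("vm")
    ET.SubElement(vm, "name").text = name
    ET.SubElement(ET.SubElement(vm, "cluster"), "name").text = cluster
    ET.SubElement(ET.SubElement(vm, "template"), "name").text = template
    return ET.tostring(vm, encoding="unicode")

print(vm_request_body("live-demo"))
```

You would POST that body, with your admin credentials, to the engine's /api/vms endpoint; the Python and Java SDKs wrap the same calls for you.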
I hope you got the point that it's quite simple to get started with oVirt. The second thing is stability. Now, the fact that big companies are working on a project doesn't always mean that the project is stable and mature, but it definitely means that there are big companies that believe in that project, and that is something we can take out of the fact that we have so many big companies involved. In addition, we have the Red Hat Enterprise Virtualization product, which is based on oVirt, and it has customers. So there is an enterprise-grade management solution built on it, which for sure indicates stability. We have an open governance model, which is not about technical stability, but more about the stability of the open-source project. It's based on merit: if you come and you contribute enough, you become part of the community, and you can definitely get a position of influence in the project. And we have a regular release schedule. In the past two years we released every six months, but lately we've found that many features go into each release, so we are moving the release schedule to a three-month cycle, because we want our users to be able to get more features faster; the upstream moves quite fast. This is the last point: we have an active user community, many contributions from various companies, many features going in. We want to make sure we release in a timely manner, I mean quite often. So we have a regular release schedule, and we have a stabilization period upstream. I'm sure that if you contribute to an upstream project, you know that it sometimes tends to break. We are trying to make sure this doesn't happen in oVirt, so we have stabilization periods in which you cannot push new features; you can only stabilize what you already have. Those are short periods, though.
And community test days, where we have all hands on board with testing: everybody is testing somebody else's features, to get more stability. And we learn how to use the product that we are working on, which is great. Continuous integration: on each patch that is sent, there is a set of tests that is executed automatically, so Jenkins can give the patch a +1 in Gerrit if the tests pass. It's just technical stuff, but it shows you that we care about stability and we are doing our best to make sure it is not broken. On the active community of users: one other deployment in production I wanted to mention is Alter Way, which is a French hosting company. The reason I wanted to mention them is that they're hosting our continuous integration servers. So thank you; it's like a thank-you note. Stability-wise, the point is that we have all these mechanisms in place. We are doing our best to make sure everything is stable, and it seems like in the past few years we were able to have quite good releases with many features involved. So far, except for some exceptional cases, like "I was able to delete my storage domain, how can I recover from that?" and other such, I would say, funny use cases, it's quite stable. We didn't get any big complaints about stability. Functionality-wise, we cover two aspects in oVirt. There is the full-environment stuff, the general data center, and there are the VM-oriented mechanisms. One of the goals we had is to be able to configure everything within oVirt. Except for specific stuff, you don't have to go out to another console to manage things, and then to yet another console; you can do everything from within oVirt. It starts with adding a host to your environment. You want to scale your environment.
You want to add a host. All you have to do (I'll show it to you in the demo) is go to the UI, click the Add Host button, give the IP address and the root password, and everything is provisioned for you. So that's quite good, I believe. There is another option: an auto-registration mechanism. If you use the oVirt Node that I mentioned earlier, you can also use another flow for adding a host to your environment, in which the host registers itself with the management, and all you have to do is look at the fingerprint and approve the host to add it to your environment. So you control the hosts in the environment. You control the networking configuration. We currently separate the different users' networks based on VLANs. That's one aspect, but we obviously don't control the switches or the forwarding elements on the way, so you have to configure the switches yourself to set up a trunk or the specific VLANs that you want; this is not done within oVirt, obviously. We do have an integration with Neutron, which is the OpenStack networking component, and that also enables VXLAN and GRE tunnels. It's a work in progress; I mean, the integration is already there, and I'm going to mention it later on, but it's not one of the mature features that we have. So you can configure the network within oVirt. You can configure the storage: we support block device storage, and we support NFS, or generally any POSIX-compliant filesystem. Everything can also be backed by Gluster, of course, and the Gluster integration can be configured entirely within oVirt. We can see the screens later. Scheduling policies: out of the box, you get two policies for scheduling your VMs. One of them is evenly distributed, where your workload is spread evenly between the nodes, and the other one is power saving, where we try to reduce the number of physical resources we're using.
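To make the difference between those two policies concrete, here is a toy sketch of the two placement strategies. The Host structure and the selection functions are made up for illustration; oVirt's real scheduler works on much richer host state than a VM count:

```python
# Toy illustration of the two out-of-the-box placement strategies.
# The Host tuple is a made-up stand-in for oVirt's real host objects.
from collections import namedtuple

Host = namedtuple("Host", ["name", "running_vms"])

def evenly_distributed(hosts):
    """Place the next VM on the least-loaded host."""
    return min(hosts, key=lambda h: h.running_vms)

def power_saving(hosts):
    """Pack VMs onto already-busy hosts so idle ones can power down."""
    return max(hosts, key=lambda h: h.running_vms)

hosts = [Host("node1", 4), Host("node2", 1), Host("node3", 7)]
print(evenly_distributed(hosts).name)  # the least-loaded host
print(power_saving(hosts).name)        # the most-loaded host
```

Since oVirt 3.3 you can plug in your own Python scheduling policies; the real plugin interface is richer than this, with separate filter and weight functions.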
So those are the two out-of-the-box policies you get with oVirt, but the upcoming release (which was actually already released last week, oVirt 3.3) offers a pluggable mechanism for scheduling. So you can write your own scheduler in Python if you'd like, and then your own policies will be applied. You can also control quota and network quality of service. All of that is at the data center level. The other aspect is your virtual machine. Creating a new disk, attaching a disk to your virtual machine, attaching your virtual machine to a network: all the basic stuff can be controlled within oVirt. You can also mark a VM as highly available, for those special VMs that you really care about. You want oVirt to monitor the virtual machine, and in case of a crash, whether the hypervisor crashes or the specific VM crashes, you want it to be restarted on another node; you can specify that at the virtual machine level. And some fine-tuning: as I mentioned, we have a lot of things that are tailored to, and enhance or leverage, KVM specifically as a hypervisor. That fine-tuning can be done at the VM level. You can pass parameters directly to the Linux kernel, which is quite useful when you want to work with a virtual machine and do some fine-tuning. There is also advanced functionality. We have a history database and reports; we're going to see some of it. Well, not the history database, but there are built-in reports that you can get on your virtual machines, your hosts, your general data center statistics and where you use your resources. You have quota. And one of the things I like is that you can delegate permissions to users. So it's not just the administrator.
If you're an administrator and you want to delegate some permissions, you can create a role for that and have users associated with that role, so they can do just those specific things. If you have a storage administrator and you don't want him to mess with your virtualization resources, you can give him just the storage-related tasks, and that is all he'll be able to do, based on the role that you gave him. So, we have quota, which caps the users on CPU, memory and storage. Okay, Neutron and Glance integration. I saw that some of you, actually many of you, are familiar with OpenStack. We wanted to deliver some of the technologies that are available in OpenStack within oVirt, so we integrated with some of the components. It's still not mature and still being worked on, but in terms of networking and storage, we integrated with Neutron and Glance. For specific details you can always go to our website, or you can come and ask me later, specifically on the integration with Neutron; I can give you many more details. So we have that in place. VM live snapshot with RAM; I was in a session about it yesterday. So, taking a snapshot... maybe it's worth describing the VM life cycle before I get to the live snapshot stuff. You know what, let me pause: do you guys have any questions before I move on? Then I'll describe the VM life cycle just to make sure we're all fine. Yeah? What does oVirt use to do live migration, any extensions? What do you mean by extensions? It's whatever is available upstream. Yes, no tweaks: KVM and QEMU. The command line you saw at the beginning, that is actually something we generate. Until you get the Glance integration, what image formats do you support? I'm not familiar with that area myself.
My actual area is the networking stack; I've been working on oVirt for a long time, but I'm not familiar with storage. I do have someone in the audience who can answer. Itamar, can you stand up? What is the question? The question was about image formats, until the Glance integration is ready. So, for formats, we support raw and QCOW2 as image formats within oVirt, but you can use the virt-v2v tool to convert images from VMware, et cetera. Sorry, Glance is not my domain. Yes? What is the native way in which oVirt deals with networking right now, just VLANs and bridges? Okay. So the basic out-of-the-box stuff, yes, is VLANs and the Linux bridge, but we have the integration with Neutron, and there we can use VXLAN and GRE tunneling. And those are deployed by Neutron, by oVirt through Neutron; technically, yes. Without Neutron? No, we're deploying Neutron for you, basically. Actually, okay, the accurate flow is that you have to deploy the Neutron service yourself. We have a mechanism planned to do that, but for now you can take the upstream version, though there are specific versions that we support. So you have to deploy the service yourself, but all the agents on the hosts, oVirt deploys for you. It also configures and does the fine-tuning at the host level, so you don't really have to go and do it manually, like you have to do with plain Neutron. So we took Neutron and we tried to wrap it to make it easier to consume, because oVirt users are quite used to having everything prepared for them. They don't like to do a lot of bash scripting and such, so we wanted to make sure it's easy for users, and intuitive. So, except for the service, which we are working on (I assume that in a few months it will be deployed by oVirt as well), at this point you have to deploy the service, and the nodes we do for you. Okay? Yes? So, there is some community work to package oVirt for different Linux distributions.
I think somebody worked on Gentoo, for example. I know somebody was working on Debian. I'm not sure about Ubuntu; no Ubuntu yet. So we are working on that, but it's not available yet. As for the guest agent, that determines which flavors of virtual machines we support: basically all Linux flavors, I would say, and Windows, of course. Yes? Yes, I know: we are using oVirt in a production environment, and we have about 200 guest servers, all with SUSE Linux 10 and 11. It is really nice proof that it works, and I've seen the management, the REST API, the networking. Okay, so maybe I should take one last question, and then I want to move on, just to be able to cover the architecture a little bit. And you can always grab me after the session; I'd be happy to answer questions. What about quality of service in the network? I didn't hear you. In the network? Yeah, okay, yeah, of course. So currently there are two levels of capping. There is capping on the specific vNIC, to make sure a specific virtual machine doesn't take over all the bandwidth. And there is capping on the network level: for example, you have different tenants or different users with different networks, and you don't want one network to take over all the bandwidth. So for the guests specifically, we already have that: you can cap your vNIC traffic, okay? But the network level is something we are working on; it will be available in the next version, and there are already patches being worked on. So, was I clear? You understood the two levels that we have? Yes. InfiniBand? Yeah, okay, so that's a good question. Theoretically it should work, because we have somebody who is using InfiniBand who did some testing and wrote about it on the oVirt users list, but I didn't personally test it, so I don't know. There was some correspondence on that, though.
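To illustrate the two capping levels with a hypothetical example (the function and the numbers are made up; this is not how oVirt implements QoS internally): each vNIC is clamped to its own cap first, and then the whole logical network is held to an aggregate cap:

```python
# Hypothetical sketch of two-level bandwidth capping:
# a per-vNIC cap, plus an aggregate cap per logical network.
def effective_rates(requested_mbps, vnic_cap_mbps, network_cap_mbps):
    """Clamp each vNIC to its own cap, then scale everything down
    proportionally if the network-level aggregate cap is exceeded."""
    capped = [min(r, vnic_cap_mbps) for r in requested_mbps]
    total = sum(capped)
    if total <= network_cap_mbps:
        return capped
    scale = network_cap_mbps / total
    return [r * scale for r in capped]

# Three vNICs asking for 80, 200 and 50 Mbps, per-vNIC cap 100,
# network cap 150: the per-vNIC caps give 80/100/50, and then all
# three are scaled down to fit the 150 Mbps aggregate.
print(effective_rates([80, 200, 50], 100, 150))
```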
Just to add, on the quotas: there's a presentation on quality of service which will cover all of the quota capabilities of 3.3, at the KVM Forum. Yeah, so we have a number of sessions today and tomorrow; I'm going to mention that. But I want to cover the architecture a little bit, just to make sure we all know what we're talking about. The basic picture, at a really high level, is that you have the engine, currently written in Java. It's a centralized management solution, so this is the brain. And on each physical node we have an agent, which is VDSM, written in Python. On the client side, as I already mentioned, you have many flavors of clients you can use. Can you see it? You can't see it. Okay, so a little more in-depth on the architecture: we have the engine, and we're using a PostgreSQL database; it's all obviously open source. We support authentication: LDAP directories in general, and specifically we also support Microsoft Active Directory. We expose the API on top of the engine, and I mentioned the clients: we have the SDKs, the admin portal and the user portal that you guys saw. On the host we have VDSM. There are both hosts and nodes, to differentiate between a full-blown Fedora image and the trimmed-down oVirt Node version. On top of that, you can run Linux-based virtual machines and Windows-based virtual machines. We have a guest agent for your flavor of virtual machine, which enables us to do things within the guest: for example, to view the applications that are running within the guest, to have single sign-on for the guest from the user portal, and other functionality. The SPICE protocol is not formally part of oVirt, but it's an open-source project for connecting to your virtual machines; you can use SPICE with either Linux or Windows machines. And storage: we have local storage.
If you have a hypervisor and you want to utilize its local disk, you can do that by configuring local storage and using that. And we have shared storage, to be able to migrate a virtual machine from one host to another; the easy way is to have shared storage that is accessible from the different physical nodes. So we have shared storage. In the guest there are multiple virtio drivers for improving performance. I don't want to get into the details, but to improve performance for storage and networking, we also have the virtio-serial channel that our guest agent uses, and currently there's also a virtio-SCSI driver within the guest. All of this is to improve performance, so you get nice performance. Customization options: we have multiple ways you can customize oVirt to meet your requirements. One of them is at the agent level; we call the agent VDSM. VDSM uses libvirt, and libvirt uses KVM, for everything that is related to VM lifecycle management. We have a way for you to write a hook, a script you deploy on the host, which can manipulate the libvirt domain XML. If you're familiar with how it works: VDSM takes into consideration a lot of the parameters it gets, and at the end of the day it creates a domain XML which represents your virtual machine, and then libvirt launches the VM using KVM/QEMU. So you can go and manipulate that XML the way you wish. It's usually useful for testing new features that just got into KVM and are not leveraged in oVirt yet, when you want to give them a try; a hook would be a good way to do that kind of customization. An example of such a hook: we now have a watchdog emulated device within the guest. It's a feature that exists in KVM, and we leveraged it in the last release. It's like a hardware-emulated watchdog card.
There is a daemon within the guest that touches this card, and if it doesn't touch it for a configurable number of seconds, then the machine can reboot or stop; it depends on the action you provide to the watchdog. So, for example, before we incorporated this feature, we just had a hook, so you could use the feature even before it was part of the product. We also have a mechanism to extend the UI. For example: how many of you are familiar with Foreman? Okay. And Nagios? Okay. So let's look at this. Technically, there is a way to extend our UI to integrate with external components. For example, NetApp has such a plugin, so you can get statistics from the storage. This specific one is for Nagios: you can get statistics from the host, and the different events that happened on the host. There is a full tutorial on the website you can use to customize it and do your own flavor of the UI. Okay, so... yeah, good. I have one minute left. Maybe I'll take a few minutes to show you things in the demo, but first I want to make sure the general idea is clear: oVirt is tailored to KVM. I think that's the benefit you can get out of oVirt: it leverages KVM specifically, and if you don't need to manage multiple types of hypervisors, then you can simply use oVirt, which is quite tailored, easy to use and easy to consume. It provides a lot of functionality, a lot of advanced functionality; some of it I did not cover today, but you're welcome to come to our booth and see more of it. I think it's usable, and you have a good community to support you if you have questions. I find it a likeable project, but then, I've been working on it for five years, so I'd better feel like that. And I would love to get your questions now, I think.
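To make the hook mechanism from a few minutes ago concrete: a VDSM hook is basically a script that rewrites the libvirt domain XML before the VM starts. Here is a minimal sketch of what a watchdog-injecting hook does, simulated on an inline sample document with plain minidom instead of VDSM's real hooking API:

```python
# Sketch of what a VDSM-style hook does: take the libvirt domain XML
# and inject a device element before the VM is launched. Real hooks
# read and write the XML through VDSM's hooking module; here we
# simulate with an inline sample and xml.dom.minidom.
import xml.dom.minidom

SAMPLE_DOMXML = """<domain type='kvm'>
  <name>live-demo</name>
  <devices>
    <disk type='file' device='disk'/>
  </devices>
</domain>"""

def add_watchdog(domxml_str, model="i6300esb", action="reset"):
    """Append a <watchdog> device, like the pre-3.x watchdog hook."""
    dom = xml.dom.minidom.parseString(domxml_str)
    devices = dom.getElementsByTagName("devices")[0]
    watchdog = dom.createElement("watchdog")
    watchdog.setAttribute("model", model)
    watchdog.setAttribute("action", action)
    devices.appendChild(watchdog)
    return dom.toxml()

print(add_watchdog(SAMPLE_DOMXML))
```

A real hook would obtain and store the XML through VDSM's hooking module rather than a hard-coded sample, but the XML manipulation is the same idea.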
While I'm setting up the demo: you can get oVirt, by the way. If you just go to the download page, you can get it as a live USB image; it's as simple as plugging it into your laptop and trying it. It's in the Fedora repositories, and you can build it from source if you're really adventurous. And you are welcome to contribute, even if it's just by stating your opinion and saying what features you're missing. We definitely love getting feedback from the community, and we love working on features that the community asks for, so joining the community could help. Thank you, but I want to show you something else in the demo.

OK, so maybe just a quick coverage of what you get. This is the data centers, clusters, and hosts view. I don't have many entities here, because I didn't want to do too many actions before I was able to show you the demo, but there is a great, intuitive search in here. Yeah, and there's also auto-complete. So if you're looking for a host with specific VMs on it, or a specific template... let's do VMs. I specifically chose, I don't remember, I called it something like "live demo", so you can get... OK, so this is the host. There's a single host, so the search mechanism is not really that useful when you have a single entity, but it's quite intuitive, and there is also an auto-complete mechanism, which is really useful.

On each entity you can see the specific sub-entities. For example, for a host you can see all the virtual machines that are running on that host. On the networks main tab, for a specific network you can see general information: in which cluster it's available, which hosts have this network, which virtual machines are using this network (which is quite useful), templates, and permissions, meaning which users are able to use this network, et cetera. We have the storage; maybe just a new domain. You can see here that we support NFS. Well, it's a local data center, so there are only local interfaces, but there is also iSCSI. OK, so I think the idea is quite clear. I don't want to do any more of
this demo stuff, but I would be happy to get questions for the next, I don't know... technically we are out of time, but they won't kick me out, so I'm here. Yes?

[Audience: At some point in the past, at least, it was only possible to have one type of storage per data center. Is that still the case?]

Unfortunately yes, but we are working to make sure that goes away.

[Audience asks when to choose oVirt versus a cloud solution like OpenStack.]

OK, so that's a delicate question. Technically it depends on your workload, to tell the truth. You've probably heard the saying about cattle and pets. Are you familiar with that? OK, let me be the first to tell you it, and I'm sure I'm just the first; you're going to hear it many times. There are the pets: you like to give a pet a name, and you love it, and you handle it, and you care if something happens to it. Most applications these days are written in such a way that they have some kind of state; they are not stateless, which is a requirement when it comes to cloud. So if you're looking at pets, where you care about each virtual machine, then you go to oVirt. You can definitely configure virtual machines as highly available, and you have live migration to make sure you don't have any downtime on your virtual machine; that's definitely a benefit of oVirt. When it comes to cattle, you have many of them, you don't have a name for each one, and you don't care if one of them crashes; you simply launch another VM.

So it depends what kind of applications, what type of workload, you're going to run on your virtualized environment. If you don't care about your individual virtual machines, then you can go to a cloud-based solution; if you care about your virtual machines and you want them to keep state, then you can go to oVirt. It's not clear-cut, so you can definitely use oVirt if you have stateful virtual machines; there's no conflict there. Whether it also depends on scale, I honestly can't say; I'm not sure what scale OpenStack goes to, so I don't know. Technically, that's the formal answer, and I'm not saying any more than that. Yes? For
monitoring, there are a few aspects. There is host monitoring, and there is the virtual machine side. We have VDSM, which is an agent on the host, and the statistics related to the host, VDSM technically reports back to the engine.

[Audience asks how historical data is gathered.]

Are you talking about the reports I mentioned earlier? Okay, so we have a history database. oVirt uses PostgreSQL as its database, the database we use for day-to-day transactions, and then we have the history database, which aggregates, every minute, the statistics we gather in the regular database, the RDBMS. Based on that, we have an integration with JasperReports, if you're familiar with that, to generate reports; Jasper is also an open-source solution for generating reports based on the history database. You can also definitely make your own reports and customize the reporting tool. The history database schema is part of the formal API: it's not changing, it's backward compatible, which is also a great benefit with oVirt. We release every six months, and you do get backward compatibility; we don't break backward compatibility. It's one of the core things in our community, which is painful for us as engineers, but I'm sure as users you can appreciate it. So, does that answer your question? Any other questions?
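The one-minute roll-up the speaker describes can be sketched in a few lines. This is only a conceptual illustration of what the history (data warehouse) service does, bucketing raw samples into minute averages; the function name and the sample data are invented, and the real DWH does this in SQL against the PostgreSQL history schema, not in Python.

```python
# Conceptual sketch: roll raw per-few-seconds statistics samples up into
# one-minute aggregates, the way the oVirt history database aggregates
# data from the day-to-day transactional database every minute.
from collections import defaultdict


def aggregate_per_minute(samples):
    """samples: list of (epoch_seconds, cpu_usage_percent) tuples.

    Returns {minute_start_epoch: average CPU usage over that minute}.
    """
    buckets = defaultdict(list)
    for ts, cpu in samples:
        buckets[ts - ts % 60].append(cpu)  # floor timestamp to its minute
    return {minute: sum(vals) / len(vals) for minute, vals in buckets.items()}


# Five raw samples spanning two minutes:
raw = [(0, 10.0), (20, 30.0), (40, 20.0), (60, 50.0), (90, 70.0)]
print(aggregate_per_minute(raw))  # {0: 20.0, 60: 60.0}
```

Reporting tools such as JasperReports then query these pre-aggregated rows instead of the raw samples, which is what keeps the reports cheap even over long time ranges.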
[Audience asks whether oVirt integrates with directory groups.]

Yeah, okay, so we integrate with an external LDAP directory, so if you configure groups in your LDAP directory, then yes, it's fully integrated. So you can, for example, give a quota to a group of users, and then each user will get the same quota that you gave to the group.

[Audience: The same pool? Can I give this group of users 100 gigs total?]

Yeah, the formal answer is no. But when we implemented this feature, we actually thought about it; there are two ways you can give a quota to a group. There are two options: either you give it to each member of the group individually... yes, okay, he wants to kick me out... or there's the option to give it to them as a group, as one shared resource. We chose to go with the first implementation, so it is per user. Okay, thank you everyone, I'll be here around.
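The difference between the two quota semantics the speaker contrasts can be shown with a tiny model. Everything here is illustrative: the function names, users, and numbers are made up, and this only models the admission check, not oVirt's actual quota engine.

```python
# Two possible semantics for assigning a 100 GB quota to a group:
#   1. per-user (what the speaker says oVirt implemented): every member of
#      the group is checked against the full quota independently;
#   2. shared pool (the alternative, not implemented): the group's combined
#      usage is checked against one common pool.

def per_user_usage_ok(quota_gb, usage_by_user):
    """oVirt's model: each user may consume up to the full quota."""
    return all(used <= quota_gb for used in usage_by_user.values())


def shared_pool_usage_ok(quota_gb, usage_by_user):
    """Alternative model: all members draw from one shared pool."""
    return sum(usage_by_user.values()) <= quota_gb


usage = {"alice": 80, "bob": 60}          # GB consumed by each group member
print(per_user_usage_ok(100, usage))      # True: both users are under 100 GB
print(shared_pool_usage_ok(100, usage))   # False: 140 GB exceeds the shared 100 GB
```

The same usage pattern passes under the per-user rule and fails under the shared-pool rule, which is exactly why the answer to "can I give the group 100 gigs total?" is no.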