So, I work with ThoughtWorks as a DevOps person, and internally we adopted OpenVZ for our virtualization. The agenda: what are the different virtualization options available, why we picked OpenVZ, what limitations OpenVZ has, and how, despite those limitations, we can exploit the features it offers. Then questions.

Full virtualization has a fair bit of overhead in terms of performance: there is a virtualization layer that consumes quite a lot of CPU cycles as well as memory, and there is a limit on how many VMs you can spawn. Paravirtualization has similar limitations, though it is more forward-looking than full virtualization. The third option is OS-level virtualization. It is not complete virtualization; instead the operating system kernel itself is modified so that you can create multiple OS instances, but only of the same kernel. That means you can only run Linux, though you can run different distributions of Linux: Debian-based as well as RPM-based ones like CentOS, Fedora, SUSE, Scientific Linux, and so on. The thing about OpenVZ is that you can spawn a much larger number of VMs than with the previous two approaches. Another feature is dynamic resource allocation. Dynamic in the sense that whenever you want to change a container's resources, you just allocate them and the change takes effect immediately, without rebooting, without any disruption. You can go straight into the container and see what resources are available to it. So the stack is: hardware, a native OS installed on it, then the OpenVZ kernel, and the OpenVZ kernel lets us create multiple VPSes, virtual private servers, which are essentially Linux environments. Performance-wise there is only about a 1 to 3 percent overhead, so in that respect it is better than the others.
Processes in an individual VPS can exploit as much as the host can provide. Apart from that 1 to 3 percent overhead, everything else can be utilized by the individual VPSes; even a single VPS, if it needs to, can exploit the entire host's resources. There are limitations: you cannot run multiple operating systems, only Linux is supported as of now, though with multiple distributions. VPN capabilities are limited. One host runs multiple virtual private servers, so the host is a single point of failure and its capacity is limited. The feature set is fairly minimal, and there is the overhead of installing the OpenVZ kernel itself.

On dynamic resource management: each virtual private server is a completely isolated system. It has its own users, groups, processes, network, firewall, routing, and so on. The individual VPSes are efficient in utilizing resources. For example, where emulation is the option, the number of virtual servers you can have is limited: with, say, 96 GB of memory you can have at most 40 to 50 VMs. With OpenVZ you can spawn around 100 or 120 VMs, because of the fact that allocating a resource does not guarantee the resource is actually utilized. Even if I allocate 2 GB of RAM to a particular container, that does not necessarily mean the container is actually using it; it uses its fair share, and whatever it is not using can actually be used by the other VPSes. That is how you can scale up the number of VPSes under OpenVZ. The other thing is that you can create a VPS in a very short time: on a machine like my laptop, which has a Core i5 processor with 8 GB RAM, it takes almost a second to create one. Also, VPSes are created from templates: if you look at an OpenVZ container installation, there are predefined templates available.
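The overcommit arithmetic behind those numbers can be sketched roughly as follows. The 96 GB host and 2 GB per-container figures are from the talk; the average *actual* usage per container is an assumption made purely for illustration:

```python
# Rough sketch of the overcommit arithmetic described above.
# HOST_RAM_GB and ALLOC_PER_VPS_GB are the talk's figures;
# AVG_USED_PER_VPS_GB is an assumed value for illustration only.

HOST_RAM_GB = 96
ALLOC_PER_VPS_GB = 2        # what each container is promised
AVG_USED_PER_VPS_GB = 0.8   # assumed: containers rarely use their full share

# Without overcommit, the allocation itself is the limit:
strict_limit = HOST_RAM_GB // ALLOC_PER_VPS_GB          # 48 VPSes

# With OpenVZ-style sharing, real usage is the limit:
overcommit_limit = int(HOST_RAM_GB / AVG_USED_PER_VPS_GB)  # 120 VPSes

print(strict_limit, overcommit_limit)
```

With these assumed numbers the strict allocation gives roughly the 40–50 VMs quoted for full virtualization, while counting only real usage gives the 100–120 range quoted for OpenVZ.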
Previously you could create your own templates; right now they are decommissioning that part and instead giving you a set of predefined templates, for example for Fedora, for Scientific Linux, and so on. You can take a template, which basically contains a very minimal package set for installation along with the native kernel, and install from it. The beauty is that a template is essentially an individual machine whose whole filesystem has been packed into a tarball, so creating a container is similar to uncompressing a tarball and having your OS ready. That process requires a very small amount of time, and because of that you can scale your infrastructure and have a large number of VMs very, very quickly.

For a private cloud with OpenVZ, this means fewer physical servers running many small VMs, and, like I said, since the resources are shared we can do capacity planning accordingly. For example, on 96 GB of RAM you can allocate 2 GB to each individual container and still have 100-plus VMs. You do need to do capacity planning at some point or another, because if all the containers are working at high capacity they will break something sooner or later. We use monitoring tools, Nagios, Graphite and Graphios, for trending: this many resources are used, this many are free, so we can actually add this many more VMs. We do the capacity planning and trending and then provision the individual VPSes, so that whatever quality of service we commit to is retained.

So dynamic resource management is one part. The parameters you can configure for an individual VPS are pretty large in number; these are the UBC parameters. UBC, User Beancounters, is a concept specific to OpenVZ.
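Going back to templates for a moment, the "filesystem in a tarball" idea can be seen in miniature. This toy sketch packs a fake minimal root filesystem into a tarball and "provisions" a container by extracting it; the directory names are made up for illustration, not the real OpenVZ layout:

```python
import os
import tarfile
import tempfile

# Build a toy "template": a minimal root filesystem packed as a tarball.
# (Real OpenVZ templates are much the same thing at a larger scale.)
work = tempfile.mkdtemp()
rootfs = os.path.join(work, "rootfs")
os.makedirs(os.path.join(rootfs, "etc"))
with open(os.path.join(rootfs, "etc", "os-release"), "w") as f:
    f.write("NAME=toy-linux\n")

template = os.path.join(work, "template.tar.gz")
with tarfile.open(template, "w:gz") as tar:
    tar.add(rootfs, arcname=".")

# "Creating" a container is little more than extracting that tarball,
# which is why it takes about a second rather than a full OS install.
container = os.path.join(work, "private", "101")
os.makedirs(container)
with tarfile.open(template) as tar:
    tar.extractall(container)

print(open(os.path.join(container, "etc", "os-release")).read().strip())
```

The extraction step is the whole provisioning cost, which is the reason container creation scales to large numbers so cheaply.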
There are twenty-odd different parameters; they were available previously and still are, but to reduce complexity the project introduced vSwap-enabled kernels. With those, you just configure swap and physical memory, that is, RAM, and all the remaining parameters are handled by the kernel itself: whatever processes are spawned, the kernel decides how much memory it has and how much it can give to each process. But you can still configure things manually: how much memory a particular container needs, how much memory it is guaranteed, and how much it can be allocated at most. That flexibility is there.

There are features like checkpointing, and you can do live migration. You can keep pre-built templates: if you know you need a certain kind of server pretty frequently, you create a template for it with some predefined packages installed and just keep it shut down, so it costs nothing but disk space; whenever you want, you can start spawning VPSes from it as and when required. Live migration between hosts is very easy; it is like pausing a movie for a second or two, and then it is live again. There is that small lag of a second or two because initially the container's files are rsynced to the other host, with SSH keys of course; then the old VM is shut down and deleted, and at the same moment the VM becomes active on the other host.
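That guaranteed-versus-maximum idea is the barrier/limit pair that each UBC parameter carries. Here is a toy model of the semantics, not the kernel's actual accounting; the numbers are made up:

```python
# Toy model of an OpenVZ-style barrier/limit pair for one resource.
# Real UBC accounting happens in the kernel; this only illustrates the
# semantics: up to the barrier is guaranteed, between barrier and limit
# is best-effort (depends on the host having spare capacity), beyond the
# limit the request is refused and the fail count grows.

class Beancounter:
    def __init__(self, barrier, limit):
        assert barrier <= limit
        self.barrier, self.limit = barrier, limit
        self.held = 0       # currently allocated
        self.failcnt = 0    # what you see in /proc/user_beancounters

    def alloc(self, amount, host_has_spare=True):
        new = self.held + amount
        if new <= self.barrier or (new <= self.limit and host_has_spare):
            self.held = new
            return True
        self.failcnt += 1
        return False

bc = Beancounter(barrier=512, limit=768)
print(bc.alloc(400))   # within the barrier: guaranteed -> True
print(bc.alloc(200))   # best-effort zone, host has spare -> True
print(bc.alloc(300))   # would exceed the limit -> False
print(bc.failcnt)      # 1
```

With a vSwap kernel you effectively set only RAM and swap and the kernel derives the rest; the manual per-parameter tuning above is what the older, pre-vSwap setups required.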
Another customization you can do: there are servers under heavy use, for example your build servers. Whenever a developer checks in, a build is triggered, and with multiple triggers happening, maybe 30 or 40 times, your build server is busy writing data and logs. For those particular servers you can mount an SSD, so all that activity happens on the SSD, which is pretty fast. There are servers which need reliable storage, so on those particular VMs you can mount a SAN. All this sharing of resources between the OpenVZ containers, the VPSes, is easy.

OpenVZ has its own kernel, but you can install it on any host OS. For example, I have Fedora on my laptop: I can install the OpenVZ kernel, boot into it, and start using these features. There is nothing that restricts you, basically; it is just another piece of hardware with another set of virtual servers on it. You just need to configure the IP addresses and bind them; one VPS can have multiple IP addresses, and vice versa.

As for how much you can host: it is limited by the resource-sharing capacity, but allocated resources and actually used resources are two different categories, and that, together with trending on the individual hosts, determines how many you can spawn without crashing your OS every few hours. Each individual VPS does have an upper bound: one virtual server will always have, say, four gigabytes of memory and one processor of a given capacity, and it is not going to overrun that allocation. The point I am making is that you can have a large number of VMs within the same host capacity, as long as they are not all running at their maximum at once. I can repeat that part of my talk if needed.
Yes, because what I heard recently, probably this month or last month, is that they are decommissioning the Debian support bit by bit, and in future they will probably not have it. Right now it is supported at the kernel level; we are even running VMs on the Debian kernel, but support is limited, so if you get stuck on something you probably will not get help, you are on your own.

About these parameters: I may not know your exact problem, but each individual container has these parameters, accessed per container ID, and against each parameter there is a fail count. It is a diagnostic: if all the fail counts are zero, everything is working fine; when anything goes wrong, you will see the numbers grow. It is not always the same problem, but we have seen the fail counts increase when something happens. For example, even when you have allocated more to the VMs than the host's actual capacity, things can be fine until some of the VMs hit peak time, say 10 to 5. We had allocated VMs to individual projects; suddenly all of them started working, and the host started running at very high capacity. At that point the fail counts started increasing: parameters like kmemsize, privvmpages, oomguarpages, physpages and vmguarpages start failing, because what has been allocated and what can actually be used get stuck against each other; the container has the allocation but cannot use it. The failures pile up one by one and eventually take the container down. I have not seen them take the host down, but that is still a possibility, which is why I listed it as one, because ultimately these resources come from the host.

That is also how we do capacity planning: we monitor each host for its capacity, for example RAM, CPU, I/O, and a number of other factors. Then we see how many CPU cycles and how much RAM are left, and accordingly we add a VM or migrate one to another host. That is how we maintain the cloud. The distributions can differ: we mainly run two or three flavors of CentOS, and there are others, but I don't recall them all. I just read the blog post saying they are not maintaining it further; we can still run it, but if we get stuck on some problem we probably will not get help, that is the point. Somebody posted the same thing under my talk, so do mention it.

Which version are we running? The OpenVZ kernel 2.6.32; from the 042stab versions onwards it is pretty stable. Previously it was not vSwap-enabled, and those were the times when we actually struggled a lot to stabilize things. From the vSwap-enabled kernels onwards you don't have to care about the individual resources; the kernel itself takes care of them. We use templates and template caches for routine use, for the servers we know are going to be replicated multiple times: just make a template, put it in the template cache, and you can use it anywhere you want.

If it is a capacity problem, it is probably a kmemsize issue: just monitor the host's kernel memory size, how much it is allocating to the individual processes, 100 MB or more. I haven't seen a container take the machine down, but yes, sometimes a container goes down. Is there a cutoff on the number of VPSes? You can always check, for the application you deploy inside a VPS, what its minimum requirement is; if the allocation is not fair, it will probably give you problems, because the application starts consuming more memory than the container can supply, and that can take the container down with it. Recovering takes almost two or three minutes. On one host we have around 100 to 120 containers; the host itself has 96 GB of RAM and 32 cores.
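The fail counts discussed above live in /proc/user_beancounters. A minimal sketch of reading them might look like this; the sample text is made up but follows the layout of the real file (uid, resource, held, maxheld, barrier, limit, failcnt):

```python
# Minimal parser for the failcnt column of /proc/user_beancounters.
# SAMPLE is an abbreviated, made-up stand-in for the real file; in
# practice you would read the file itself on an OpenVZ host.

SAMPLE = """\
Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize        1835008    2097152   11055923   11377049          0
            privvmpages       12000      25000      65536      69632          3
            oomguarpages       8000      16000      65536      65536          0
"""

def failed_counters(text):
    """Return {resource: failcnt} for every counter with failcnt > 0."""
    bad = {}
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0].endswith(":"):   # "101:" starts a container block
            fields = fields[1:]
        # data rows have: resource held maxheld barrier limit failcnt
        if len(fields) == 6 and fields[-1].isdigit():
            resource, failcnt = fields[0], int(fields[-1])
            if failcnt:
                bad[resource] = failcnt
    return bad

print(failed_counters(SAMPLE))   # {'privvmpages': 3}
```

A check like this, wired into Nagios, is essentially the "any fail count above zero means someone goes and looks" policy described here.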
We take Nagios alerts very seriously. Even at the warning stage somebody has to look, because it is a quality of service: we are not using the VMs for ourselves, somebody else is using them, and we are liable to provide that much uptime. So even if there is just one fail count, an admin person goes and looks at what the problem is; then things do not escalate to the VPS level, and from the VPS to the host level. You do not have that much time: one container going down is not a big issue, but if it takes 120 containers down with it, that is a very serious issue. And it is not only about alerts on the host; it is always good to have monitoring on every VPS as well. We have two layers of monitoring: one on the host itself, the host hardware, and another on the individual VPSes, and we monitor them closely. Every week we review what has happened and why particular spikes are there; you need that level of granularity, and only then can you plan for excess capacity. What do we run on them? Mostly servers: web servers, database applications, CI tools and integrations, repositories, and all the other usual stuff. Is the application supposed to know which way it runs inside? You can split your scope, as in the previous draft, see what the parts are doing, so you can actually overlay the parts and then you can