So my name is Tony Gargi. I'm from the IBM Development Lab in Boeblingen. I've been working for about three and a half years now on KVM. My focus here is, I'm the Systems Management Architect for KVM cross-platform within IBM. So I'm sort of a bridge between what we do in the open-source space and how our products use it. And my focus here is going to be pretty much the different open-source solutions which are available to manage KVM. Okay, so please do ask questions whenever you wish to. Very briefly, the agenda. I'm going to talk very briefly about why our customers are using KVM in a cloud environment. Then I'll be talking about the various KVM management, data center management and cloud management solutions available today. And then I'll be going through a couple of customer scenarios, customers who have built little clouds and big clouds and are using KVM today, giving a little bit of feedback on what experiences they've had, and finally come to conclusions. So what are the four main reasons why our customers, or why people typically, use KVM? These are the main four reasons here. So I'll start off with security and get a little bit into detail on that. Then I'll go into the performance aspects of the hypervisor. And then of course cost is not unimportant, especially for customers using VMware. And finally, and this is going to be the main focus of this presentation, I'll be talking about the various virtualization and cloud management solutions out there to manage KVM. Okay, so let's talk about security first. If you're deploying or running a cloud, whether in a completely intranet environment or in a public cloud environment, you very often run into the situation that you need to isolate your customers.
So for example, if you're lucky enough to have Coca-Cola and Pepsi-Cola as your customers, you really don't want the virtual machines, when running on the same host, to be able to access or share the same data. So you need to isolate those virtual machines. And one of the ways, not the only way, but one of the ways to ensure such isolation is, for example, using SELinux. I'll show you in the next slide how it works. The point is it provides you this mandatory access control security. And so if you have SELinux enabled, to be precise, if it is enforcing, then you can ensure that those virtual machines are not able to access each other's data. Of course, you also need to add other aspects like storage security and network security, but SELinux gives you a very good start. The main thing which I really want to point out is that both the primary distros, starting with RHEL 6.2 with KVM, and SLES 11 SP2, have the EAL4+ certification. And this is not just a checkmark; if you look at the documents describing how they got it, they give a very precise configuration of everything you need to do in order to get such a certification. It's always very advisable to go and configure your systems in a similar fashion. So this is extremely important. And KVM was one of the first open-source hypervisors which got this certification, actually last year. So how does SELinux work? Let's say this is your hypervisor here, your KVM hypervisor, a part of the Linux kernel, and you have three virtual machines on it. And let's just assume that for whatever reason you had some code running in one virtual machine which attacked this particular virtual machine and embedded itself in it.
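To make the isolation mechanism concrete: sVirt (discussed next) gives each guest a unique pair of SELinux MCS categories, so one qemu process is denied access to another guest's disk image. Here is a small Python sketch of that labeling idea. The label format shown is the real sVirt convention, but the allocation logic is a simplified illustration, not sVirt's actual code:

```python
import random

def svirt_labels(vm_names, seed=0):
    """Assign each VM a unique MCS category pair, roughly the way sVirt
    labels qemu processes (e.g. system_u:system_r:svirt_t:s0:c57,c86).
    Simplified sketch: real sVirt allocates categories inside libvirt."""
    rng = random.Random(seed)
    used = set()
    labels = {}
    for vm in vm_names:
        # Draw category pairs until we find one not used by another guest.
        while True:
            pair = tuple(sorted(rng.sample(range(1024), 2)))
            if pair not in used:
                used.add(pair)
                break
        cats = "c%d,c%d" % pair
        labels[vm] = {
            # The running qemu process gets svirt_t with these categories...
            "process": "system_u:system_r:svirt_t:s0:" + cats,
            # ...and its image file gets svirt_image_t with the SAME categories,
            # so only that process may touch it.
            "image": "system_u:object_r:svirt_image_t:s0:" + cats,
        }
    return labels

labels = svirt_labels(["coca-cola-vm", "pepsi-vm"])
```

Because the two guests end up with different category pairs, SELinux denies either qemu process access to the other guest's image even though both run as the same Unix user.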
Now you want to ensure, and this is exactly what SELinux does in combination with sVirt, that this particular virtual machine is not able to attack the other virtual machine, and that it is not able to attack the host OS. This is basically what gives you the isolation. What SELinux enablement also gives you is that this virtual machine is then not able to write to the virtual machine image of the other virtual machine, and vice versa. So these are the very basic, high-level aspects of how SELinux works. In addition, we also have the auditd daemon, which allows you to log all libvirt interactions. So whenever you have started a virtual machine, or deleted it, or changed its attributes or whatever, all those interactions are logged. And very often our customers require that any actions done with virtual machines are logged in terms of which user ID did it, when, and what actions were executed. For example, we have a bank for whom it was extremely important to use a hypervisor which had EAL4+ certification and which also had an auditing facility available, so it could trace all the events. So this was just one aspect of security. Another reason why KVM is being used is because it really provides excellent performance. I won't go into all the standard virtualization metrics which are typically available, but I'd like to pick out two real-world examples. These were two different scenarios, both running KVM. In the first case it was one virtual machine on a KVM host, a RHEL 6.4 host, running Microsoft Exchange Server.
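Before moving on to performance, a quick note on those auditd records: libvirt lifecycle events land in the audit log as structured records that can be machine-parsed for exactly the who/when/what tracking the bank required. A sketch follows; the sample record is representative of a VIRT_CONTROL line, not captured from a real system, and real records carry more fields:

```python
import re

# A representative libvirt VIRT_CONTROL audit record (illustrative only;
# real records include timestamps, session IDs, SELinux context, etc.).
record = ('type=VIRT_CONTROL msg=audit(1382456789.123:42): pid=1301 uid=0 '
          'auid=500 msg=\'virt=kvm op=start reason=booted vm="billing-vm" '
          'exe="/usr/sbin/libvirtd"\'')

def parse_virt_record(rec):
    """Pull out who (uid/auid) did what (op) to which VM from a record."""
    fields = {}
    for key in ("uid", "auid", "op", "vm"):
        m = re.search(r'%s="?([^"\s]+)"?' % key, rec)
        fields[key] = m.group(1) if m else None
    return fields

info = parse_virt_record(record)
```

Here `auid` (the login user ID) is the interesting field for audits: it survives `su`/`sudo`, so it answers "which user ID did it" even when the action ran as root.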
And so if you look at this axis here, it contains the number of users which were simulated: 2,000, 4,000, 12,000 and 20,000 users. And there's a typical industry-wide acceptance criterion that if you're running Microsoft Exchange Server, a SendMail transaction should complete within 500 milliseconds. That's the general industry-wide acceptance rate. And you will see, if you look at all these blue bars here, it was always way below the 500 millisecond limit, and this was while running one virtual machine. The red line here shows the CPU utilization, and as you can see the CPU utilization was extremely low, so it left a lot of room for growth if you wanted that. That is the case of one virtual machine running on a host. This particular graphic here shows the case of multiple virtual machines running multiple instances of Microsoft Exchange Server. For every 4,000 users we added another virtual machine, actually a pair of virtual machines. And it basically shows you there, too, if you look at those blue bars, that you always had a response rate in which 95% of all SendMail transactions were below the 500 millisecond mark. Again, the red line shows the CPU utilization, which also left enough room for growth. By the way, this was very comparable to the results which VMware, vSphere 5.0, had. The detailed results are available here for KVM and here for VMware. They were very comparable, and it also shows that even for Windows applications KVM has excellent performance. Let me show you just one more piece of performance-relevant data, again something from the real world. In February this year there was a TPC-C result published by IBM. This was using RHEL 6.4 and our database server, DB2. And of course we were running on a pretty powerful machine, but what we saw was really the highest TPC-C result ever published for a virtualization technology.
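The acceptance criterion quoted above, 95% of SendMail transactions completing within 500 ms, is easy to express programmatically. A small sketch with made-up latency samples (the numbers below are illustrative, not from the benchmark):

```python
def passes_sla(latencies_ms, limit_ms=500.0, quantile=0.95):
    """True if at least `quantile` of the samples are at or below limit_ms,
    i.e. the 95th-percentile-under-500ms criterion used for Exchange."""
    if not latencies_ms:
        raise ValueError("no samples")
    within = sum(1 for v in latencies_ms if v <= limit_ms)
    return within / len(latencies_ms) >= quantile

# Made-up samples: 96 fast transactions, 4 slow ones -> 96% within limit.
samples = [120.0] * 96 + [900.0] * 4
```

Note the criterion is a percentile, not a mean: a handful of slow outliers is acceptable as long as 95% of transactions stay under the limit.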
So there too, if you just take these two examples, it shows that even in real-world scenarios KVM is really fantastic. And that is an important argument, in addition to the security argument, for customers using KVM in cloud environments. The third argument why many customers use KVM is to save costs. And this is typically either saving costs versus the usage of VMware, or saving costs versus using Hyper-V. The way you can read the chart is this: the three-year cost analysis which we did was based on the purchase of normal software licenses, whether it's RHEL or management software, plus three years of support, and then taking the total costs and comparing them. So if you had a workload of 100 virtual machines that was 100% Linux, the savings were anything between 20 and 50%, whether versus VMware or versus Hyper-V. If you were taking a mixed workload, that is, let's say 50 Linux machines and 50 Windows machines, again we had significant savings versus VMware; versus Microsoft we started losing the benefit, because if you're running Hyper-V, you tend to get your Windows licenses pretty much for free. And if you only had a Windows guest environment, 100 virtual machines, yes, we were at a disadvantage versus Hyper-V, but versus VMware we were still pretty much cheaper. This is using very normal, standard, supportable products, whether it's RHEL or the management services. You can further reduce the cost if you start, let's say, using CentOS and other open-source tools like oVirt, which I'll come to a bit later. So those were the three main reasons for using KVM. By the way, this is an organization, the Open Virtualization Alliance, and it was announced on the first day of this conference that it is now a new Linux Foundation collaborative project.
And the objective of this particular virtualization alliance is to increase the overall awareness of KVM, to bring developers and consumers of KVM together to foster an ecosystem (I'll be talking later on about what we mean by an ecosystem), and to encourage interoperability. It has lots of members, 250 and still counting. These are the five governing members: HP, IBM, Intel, Red Hat, and NetApp, which recently became a governing member. So there's a lot of focus, not only from Red Hat and from IBM, but also from others, to really make a push here. Okay, I'm going to skip this chart because I assume everybody knows how a KVM hypervisor is structured. I think I'll take this one out too. So, summarizing: because of the security aspects, performance aspects and price aspects, KVM is really a very good and natural fit for cloud environments. It's not only SELinux; you also have cgroups, which allow you to put certain soft and hard limits on how many resources, how much CPU, how much memory, a particular virtual machine can get. It's scalable and economical, as you've been seeing in the previous charts. But in order to make KVM successful, it's not only about the hypervisor. You need lots of things around it. You need management solutions around it. You need an ecosystem around it. You need software which is certified to run on it. Because until all this is available, customers will just not make a decision based only on the characteristics of the hypervisor. And so, therefore, I want to now talk about these other aspects which are extremely important to customers. Okay, what I'm going to show here is an attempt to categorize or group the different types of management solutions which are available today for KVM. I'll start at the bottom. You have the different hypervisors. In this particular circle, or ellipse, you have what we call the data center virtualization managers.
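On the cgroups point: in practice you rarely write to the cgroup filesystem by hand for guests; libvirt exposes per-VM CPU and memory limits through tuning elements in its domain XML and maps them onto cgroups for the guest's qemu process. A sketch of generating those elements (the share and limit values below are arbitrary examples):

```python
import xml.etree.ElementTree as ET

def tune_domain(xml_src, cpu_shares, mem_hard_limit_kib):
    """Add <cputune>/<memtune> elements to a libvirt domain definition.
    libvirt maps <cputune><shares> to the cpu cgroup's relative weight
    (a soft limit) and <memtune><hard_limit> to a memory cgroup hard cap."""
    root = ET.fromstring(xml_src)
    cputune = ET.SubElement(root, "cputune")
    ET.SubElement(cputune, "shares").text = str(cpu_shares)
    memtune = ET.SubElement(root, "memtune")
    ET.SubElement(memtune, "hard_limit", unit="KiB").text = str(mem_hard_limit_kib)
    return ET.tostring(root, encoding="unicode")

domain = "<domain type='kvm'><name>guest1</name></domain>"
tuned = tune_domain(domain, cpu_shares=512, mem_hard_limit_kib=2097152)
```

This is exactly the soft-versus-hard distinction from the talk: CPU shares only bite under contention, while the memory hard limit is enforced unconditionally.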
What are their typical properties? They're optimized for longer-living virtual machines. Everything is pretty much centralized, and most of the focus is on a centralized API or a centralized GUI. Examples, and I guess everybody knows examples from there: if it's from VMware, it's vCenter; if it's from Microsoft, it's System Center 2012 for Hyper-V. For KVM, it's typically oVirt/RHEV-M, oVirt being the upstream project and RHEV-M being the product from Red Hat. And here at IBM, we also have Flex System Manager, which provides pretty much the same functionality as the others. On the right-hand side, you have these cloud infrastructure services. The property of those cloud infrastructure services is that they tend to manage virtual machines which are short-living. So perhaps high availability is initially not important for them, because if something fails, they just start up a new virtual machine somewhere else. They're very often decentralized, which allows them to scale to a very high degree, and they're really centered around automation. So they're not very strong on GUIs, for example. And I've deliberately drawn this in an overlapping fashion, because we are seeing that infrastructure services like the ones here are moving into this space, and virtualization managers are moving into that space. So there is an overlap, and there is a trend on each side to go after exactly the other's focus areas. Then you have the cloud managers at the top. An example of a cloud manager is the vCloud Suite from VMware. There are other cloud managers, for example from Platform Computing, or the IBM SmartCloud family. Their focus is pretty much on business services, so things like metering and billing and getting the money back for usage of these systems, but the focus is also very much on having extremely rich image management functionality, and things like that. So this is where I put the cloud managers.
And please try to keep this picture in mind when we talk about the individual solutions here. Before I start, I just want to mention one thing. Today KVM comes with an out-of-the-box management tool called virt-manager. I don't know if any of you folks have used it; it's not always easy to use. So IBM has been working on a new project called Kimchi, out on GitHub. The objective was always to offer a web-browser-based, extremely simple, low-end management tool to manage a very limited number of KVM boxes. So you can do very basic functionality: create and delete virtual machines, attach disks, attach networks, and then open up a VNC session to them. This is a very new open-source project. It's been published on GitHub, and the latest release came out last week. Please have a look at the website there. Okay, but now let's get into what we call the data center virtualization managers, and talk about solutions here for KVM. I would say the main open-source project available in this space is oVirt. The latest release, 3.3, came out in September. So if you folks know what vCenter is, oVirt is the closest to vCenter. It not only allows you to manage the life cycle of virtual machines, but it also allows you to deploy the hypervisors, to attach storage, to attach networks, to do live migration, to do live storage migration, and things like that. And the latest release, 3.3, has also started its integration with OpenStack components. This means that if you have images in the Glance component, you can import them into oVirt. If you have networking services in Quantum, now called Neutron, you can consume them in oVirt. So a certain amount of moving towards the other one's space, as I mentioned, is already happening there. It also offers services which are important in a data center, for example high availability.
So if a host fails, and oVirt has determined that the host has failed, it will start up those virtual machines on a different host. It also offers support for live snapshots and for live storage migration, things which are important in traditional data centers. So this is very briefly what oVirt is. And while oVirt is the upstream version, just like Fedora is upstream and RHEL is downstream, RHEV is the Red Hat product which productizes oVirt and makes support available to customers. So typically enterprise customers purchase RHEV from Red Hat, although I think IBM also sells RHEV as a proxy to our customers too. In fact, we have a number of customers who combine IBM hardware and IBM storage with RHEV, so there's a cooperation there. These are just some of the important aspects of RHEV. Pretty much everything which is available in RHEV is already available in oVirt, and there's typically a five-to-six-month period until a new RHEV release comes out after a new oVirt release. Just to give credit, this is a Red Hat picture, sourced from Red Hat. It shows the architecture of RHEV. It supports today both the RHEL hypervisor and the RHEV-H hypervisor, which is part of the RHEV product. It contains the libvirt library, and it also contains an agent on each of these hosts. It then has a central management server, which is a JBoss application. This puts its data into a PostgreSQL database and can also use different directory services. And it offers all the functionality which I showed you on the first chart, either via an admin portal, a CLI shell or a REST API. It also gives you a specific user portal, so that certain users can do certain functions. I should mention it also offers VDI support via the SPICE protocol. So this is the RHEV architecture. SUSE also has solutions to manage KVM. They are not as rich in functionality as RHEV.
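Before moving on to the SUSE tooling, the high-availability behavior just described, restarting a failed host's guests elsewhere, can be sketched as a simple replanning step. This is a deliberately simplified illustration, not oVirt's actual scheduler (which also weighs CPU, memory, affinity and cluster policies):

```python
def replan_on_failure(placement, failed_host, ha_vms, free_slots):
    """placement: vm -> host; ha_vms: set of HA-flagged guests;
    free_slots: host -> spare VM slots. Returns the new placement,
    with None for guests that stay down."""
    new_placement = dict(placement)
    slots = dict(free_slots)
    for vm, host in placement.items():
        if host != failed_host:
            continue  # unaffected guest keeps its host
        if vm not in ha_vms:
            new_placement[vm] = None  # non-HA guest is not restarted
            continue
        # Pick the surviving host with the most spare capacity.
        target = max((h for h in slots if h != failed_host),
                     key=lambda h: slots[h])
        if slots[target] <= 0:
            new_placement[vm] = None  # nowhere left to restart it
        else:
            new_placement[vm] = target
            slots[target] -= 1
    return new_placement

placement = {"vm1": "hostA", "vm2": "hostA", "vm3": "hostB"}
result = replan_on_failure(placement, "hostA", {"vm1"},
                           {"hostA": 0, "hostB": 1, "hostC": 2})
```

The important precondition from the talk is "has determined that the host has failed": real systems fence the dead host first, so the same guest can never end up running twice.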
So running on a SLES KVM hypervisor, you have SUSE Studio, which allows you to build appliances. With the help of SUSE Studio you can say, okay, give me this operating system, it could be SLES or RHEL, and combine it with a certain application, some LAMP stack for example. It creates the virtual machine appliance for you and then allows you to either deploy it in AWS or in some other specific cloud. Or, of course, it just generates the image for you so that you can deploy it yourself later on. Then there's SUSE Cloud 2.0, which is OpenStack-based; it doesn't only contain OpenStack but also components from Crowbar and others. And then you have SUSE Manager, whose focus, as far as KVM goes, is provisioning of virtual machines and limited metrics, but it mainly has to do with patch management. That's the focus of SUSE Manager. Okay, so there have been a number of presentations, yesterday, today and probably tomorrow as well, on OpenStack. I'm going to go very briefly through it from a KVM perspective, and then later on from an IBM perspective, on what we're doing for and with OpenStack. It was mentioned, I think yesterday, that about 1,000 developers have contributed to OpenStack already. By the way, the latest release is the Havana release; it came out last Thursday or Friday, and in six months the next release is going to be called Icehouse. So OpenStack contains a number of key components: Nova, which allows you to provision virtual machines, large clusters of virtual machines; Neutron, which provides networking services; Swift and Cinder, which give you object store and block storage support; Horizon, which is a GUI; Glance, which is an image registry; and Keystone, which provides authentication and authorization for the different aspects. Starting with the Havana release, we now also have two graduated projects: Heat, which is the OpenStack orchestration component,
and Ceilometer, which gives you metering. Those two are the new components which graduated in the Havana release. I won't go into all the details, but what I do wish to mention here is that most of the interaction with the KVM hypervisor takes place in the nova-compute component. nova-compute has a libvirt driver, and this libvirt driver then interacts with one or more instances of the KVM hypervisor. For OpenStack, there was a user committee questionnaire six months back. This was the result of about 220 responses, and I included it here to show you that the primary hypervisor of choice for people using OpenStack today is KVM. There are other hypervisors too, but as you can see, they're in an extremely small minority. In fact, when OpenStack is being built, when code is being integrated, there is very extensive testing which is done mainly and primarily on KVM. So you could assert that KVM is the best-tested platform for OpenStack today. This is a brief heat map: when IBM started working on OpenStack, we started looking at aspects where we thought OpenStack is good enough, where we thought it should be improved, and areas which are not applicable to OpenStack. The things it's really good at are role and authentication management, VM provisioning and VM image construction. We believe it has missing capabilities in monitoring, capacity planning and service orchestration. And there are areas where we think OpenStack doesn't play a role, like license management and patch management. And so in our IBM activities, we are trying to improve, within the community, aspects which we think are missing in OpenStack from an enterprise perspective, just the same way we did a few years back with Linux. So let's take the area of compute; this is the Nova component. IBM teams are adding high availability enhancements.
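To make Nova's role concrete before we go through the contributions: provisioning a guest boils down to a REST call against the Compute API, whose body the libvirt driver on a KVM cloud ultimately turns into a libvirt domain definition. A sketch of building that request body (the image UUID below is made up; in practice you would send this with an authenticated HTTP client such as python-novaclient rather than assembling JSON by hand):

```python
import json

def boot_request(name, image_ref, flavor_ref, min_count=1):
    """Build the JSON body for an OpenStack Compute API v2 server-create
    call (POST /v2/{tenant_id}/servers). imageRef names a Glance image,
    flavorRef a Nova flavor (CPU/memory/disk sizing)."""
    if not name:
        raise ValueError("server name is required")
    return json.dumps({
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            "min_count": min_count,
        }
    })

# Hypothetical image UUID and the small standard flavor "1".
body = boot_request("web01", "70a599e0-31e7-49b7-b260-868f441e862b", "1")
```

Note how the pieces of the component list reappear here: the image comes from Glance, the sizing from a Nova flavor, and the token that authorizes the call from Keystone.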
Again, remember the picture of where we started off, where on the right-hand side you had cloud infrastructures moving towards supporting data centers. This is also what customers are asking us: they would like to use OpenStack, but they would like to use it in the data center, and so they're missing high availability aspects. They would also like to see richer resource scheduling. We helped initially in making the scheduling architecture more flexible, and now we're also making it capable of plugging in other schedulers. We're making live upgrade contributions, so that you don't have a very disruptive upgrade when you're moving from, let's say, Grizzly to Havana. And then we're also enabling our IBM systems, Power systems, whether it's PowerVM or KVM on Power, as well as our software, like DB2, to be used with OpenStack. We're also doing a lot of internal testing and validation, much of it automated. In the networking area, we're supporting the version 2 APIs; previously Nova had its own network component, and now most of that has moved over to Quantum, or actually it's now called Neutron. We've also enabled Cinder drivers for GPFS, which is a highly scalable storage subsystem from IBM, and we're also supporting the SVC and V7000 products. In terms of shared services, which are basically used by all the other components, we've added support for LDAP and Active Directory. And of course we're doing general OpenStack contributions, specifically in the areas of translation and QA. So every time a patch goes in, a complete OpenStack environment is created on a VM and gets tested; only if the patch passes there is it accepted. So these are areas where we are working, community-facing, to improve OpenStack for everybody.
We also have a whole bunch of products; I won't go into details here, I just want to show you the different products which we have to manage KVM. For our IBM software portfolio, KVM is a tier-one platform, which means that a very high percentage of our software is supported to run as guests on KVM, on RHEL guests, on SLES guests and on Windows guests. This is part of the ecosystem which we are pushing for. Because the customer says, okay, I'm convinced the hypervisor is good, I'm convinced that one or the other management solution is good, but if I'm going to run this particular piece of software from IBM, have you certified that it runs on a KVM hypervisor? And if you tell them no, then it doesn't really help all that much that you have an excellent hypervisor. Okay, one of our products, PureFlex, supports KVM. It does a lot of management functionality: power on, power off, live migration, delete; I won't go into that, we can talk about details later on if you want. SmartCloud Entry is something which I do wish to talk about. This is a product which allows you to manage appliances and flavors for KVM. It uses OpenStack underneath the covers, to be precise it will be using OpenStack Grizzly; right now it's currently in beta. First of all, you can define VMs in terms of flavors, which are of course backed by OpenStack Nova flavors. It allows you to do approvals and expiration of virtual machines. Very often, especially in a cloud environment, everybody's creating virtual machines, but nobody's really stopping or deleting them, so you just get a huge number of virtual machines. And it also gives you a simplified out-of-the-box experience: all the OpenStack controller components, the so-called manage-from components, come packaged in one virtual machine.
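The expiration idea mentioned above, reclaiming guests that nobody ever deletes, amounts to a lease check over the VM inventory. A simplified sketch (illustrative logic and dates, not SmartCloud Entry code):

```python
from datetime import datetime, timedelta

def expired_vms(leases, now):
    """leases: vm -> (created, lease_days). Return the VMs whose lease
    has run out and which are candidates for an expiration workflow
    (notify the owner, then stop or delete)."""
    out = []
    for vm, (created, lease_days) in sorted(leases.items()):
        if created + timedelta(days=lease_days) <= now:
            out.append(vm)
    return out

now = datetime(2013, 10, 23)
leases = {
    "dev-vm":  (datetime(2013, 9, 1), 30),   # lease ran out 2013-10-01
    "prod-vm": (datetime(2013, 10, 1), 90),  # lease runs to 2013-12-30
}
```

Combined with the approval step (a human signs off before a VM is created at all), this closes the loop on cloud sprawl from both ends of a guest's life cycle.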
And then it comes with additional software to deploy the necessary OpenStack agents, the different plugins or Nova agents, into your individual hypervisors. So that's coming very soon. The other tool which we have from IBM, which also uses OpenStack underneath the covers as an infrastructure service, is SmartCloud Orchestrator. I showed you the previous chart, where we think OpenStack is missing some things. So in SmartCloud Orchestrator we have enriched OpenStack with an image management tool. This is best compared to, let's say, a code repository like GitHub. It allows you to manage different versions of the images. It allows you to catalog which image version is running on which virtual machine; it's extremely rich there. It also allows you to build multi-tier applications: if your application consists of, let's say, two web servers, three databases and a load balancer, it combines all these different appliances and deploys them on different systems. For orchestration, it has a lot of plugin support, for plugging in orchestration of Cisco networks and Juniper networks, for example. And it also has support for monitoring and for backup and restore. So this is a chosen example of how, while on the one hand IBM is contributing towards improving OpenStack for everybody, we are also using OpenStack within a number of our products, and we're enhancing certain features which are then only available in our products. Today SmartCloud Orchestrator supports both VMware and KVM, and support for PowerVM is coming next month. In terms of OpenStack, and I don't know if the same chart was shown by Mac Devine this morning, when we talk about OpenStack we pretty much concentrate on infrastructure as a service. What we're seeing, however, is that things are moving up: developers are more interested in platform as a service.
That is, what they would like to do is develop an application by plugging in components, like give me a single sign-on API, give me a database API, and combine them to build applications. And this is where IBM and Pivotal have now created this Cloud Foundry collaboration, which is going to be the core of our platform-as-a-service infrastructure. BlueMix is the IBM add-on on top of Cloud Foundry; you can go to ibm.com/bluemix to get more information. So this sort of shows you where our focus is: as I said, OpenStack for infrastructure as a service, and Cloud Foundry for platform as a service. Now I want to talk about a couple of use cases. Of course we're talking about Linux server consolidation and cloud computing; these are all different use cases for KVM, along with VDI support, hosting of virtual appliances, managed service providers and multi-hypervisor environments. This chart just shows you very briefly, industry-wide, customers using KVM: the Google Compute Engine, for example, and the HP public cloud are using KVM. And this is a list of IBM customers today who are using KVM, and it's just a small subset. There's a URL here where you can get many more details on the IBM customers using KVM today. So I'm going to skip, in the interest of time, a couple of slides here, and just show you one thing. This was a customer, a communication service provider in China. What he had was different systems: he had an IBM Power cluster, and he had x86 clusters. And the problem he was having was with deploying the servers and deploying virtual machines on the servers. What we suggested to him was to use SmartCloud Entry together with RHEL KVM for the x86 part. And this helped him, first of all, to utilize his servers much more. This was more of a Linux consolidation story.
And it also helped him reduce the time for deployment of his servers. Mac this morning talked about SoftLayer, and so I wanted to give you a few details on what SoftLayer is all about. This basically gives you, very briefly, the technology capabilities of SoftLayer. This is IBM's public cloud, perhaps best compared, politically incorrect as that may be, with AWS and others. So first of all, in SoftLayer you can provision dedicated bare-metal servers, with very rich characteristics you can choose from, and with corresponding billing and time-to-provision. What you can also do is get a cloud computing instance, so you can provision and ask for virtual machines. You can also have a bare-metal cloud computing instance, which is a variation of this. And last but not least is this particular element: you can ask for your own private cloud, hosted within SoftLayer today. Officially, that private cloud is based on XenServer and CloudStack. I would not be surprised, and this is not an announcement, if in the future such a private cloud could also be OpenStack-based. So you could say, okay, give me, with a very few and limited number of API calls, my own private OpenStack-based cloud, with, for example, KVM hypervisors. This is a link to the IBM success stories book which I mentioned earlier. There's also some more information from IBM on KVM. It's not just sales information, although that's good too; you also have a lot of links to technical information, best practices for KVM, best practices for security on KVM, and things like that. And so, coming to an end: KVM is an open-source alternative, an extremely powerful open-source alternative. And we think it's a much better choice, not only for lower cost and higher performance. It has a rich ecosystem.
And there are now also lots of virtualization management tools available for it which were perhaps not available two years back. So with that, I hope I didn't put everybody to sleep before lunch. Any questions? [Audience question, inaudible] I'm sorry, I would need to check on the details of that. I would not expect that it is more than five to seven percent, but I would need to check; if you can leave me your details, I can get back to you on that. Okay. Any other questions? Perhaps I was too fast, because I started off so late. Okay, that's not the case. Thank you very much.