OK, let's get started. We're going to talk about provisioning software-defined servers using HP Cloud OS. We'll explain what software-defined servers are, how we use OpenStack to provision them, and some of the use cases. Then we'll show a quick demo of a couple of minutes and open it up for questions. The idea was to develop an OpenStack solution for provisioning software-defined servers, covering both virtualization and physicalization, with a unified dashboard. Most OpenStack solutions today are based on virtualization: we create, provision, and run VMs. This project instead uses the Nova bare metal driver to provision physical instances. We'll go into detail on what a software-defined server is, how we use the Nova bare metal driver, and how we built a unified dashboard that represents both the physical servers and the virtualized servers. So why software-defined servers? As we've seen in some of the keynotes, there's a lot of focus on software-defined networking and software-defined storage; software-defined servers follow the same idea. The number of internet applications keeps increasing, and the data on the internet is diverse: tweets, photos, media files. Ten years back we didn't have the diversity of data we see today, and as we go forward, the kind of data the internet produces keeps changing. The number of devices keeps increasing as well.
By 2020 we'll have around 30 billion connected devices and around 8 billion people, and the internet of things keeps changing; it's not always the same kind of workload. You can't use the same virtual machines for processing all types of data; you need specialized servers to do specialized jobs. There can be data that is storage-intensive, or data specific to a Hadoop workload, but for everything we end up creating VMs, and much of the time we utilize only about 20% of their compute power. The VMs we create are not really meant for things like internet-of-things data, so there is a mismatch between why we create VMs and how they actually get used. That is the problem this new style of IT tries to solve. Today, the more data we have, the more VMs we create, and as a result data center cost keeps going up, to the tune of tens of billions of dollars, with power consumption on the order of 8 to 10 nuclear power plants. The power consumed by data centers today is roughly that of 2 million US homes, and by 2020 it is expected to triple. So it is not a good idea to create generalized virtual machines irrespective of the applications and workloads we want to process. And what is a workload? It can be anything: a Hadoop workload, or a workload for processing images, movies, files, or media streams.
Most of the time we create VMs based on the x86 architecture, and we don't utilize their full compute power. For example, for a storage-intensive application we end up creating a huge VM whose compute capacity never actually gets used. So processor cost goes up and there is unnecessary wastage. Also, some applications are mission critical or require specific hardware to run, and virtualization always carries a performance penalty because everything goes through a hypervisor layer. The point is that creating VMs for every kind of application is not an ideal solution: some applications need specific hardware or specific configurations to run, and you can't generalize everything. So what is a software-defined server? HP Moonshot is an example of a software-defined server, based on a federated architecture. In the Moonshot architecture we have a chassis holding from 45 up to 180 servers, which gives savings in space and power along with 80-to-1 scaling, 8x efficiency, and a 3x innovation pace. The innovation pace matters because the development and release cycle of a conventional blade server is one to two years, while internet applications change every three to six months.
If you want to build servers that are faster to market and cater to specific internet applications, Moonshot servers can target new application types as they appear, with a release cycle faster than conventional blade servers. Moonshot servers are tailored to specific workloads: these servers are optimized, for example, some for running web applications, some for Hadoop, some for databases or message brokers. HP works with its partners to understand what applications they run and what hardware those applications require, and the Moonshot cartridges are optimized for those particular workloads. Because of this there are savings in dollars per year, and since the cartridges are very compact and share a single chassis, there are obvious savings in space and power as well. So what does the world look like with Moonshot, with software-defined servers? Workloads are data intensive and run on energy-efficient CPUs. The servers are specific to the workloads, which makes them very efficient, and cost becomes proportional to CPU utilization: if you are just storing data, you don't need a server with a high-power CPU, so there is a cost saving, and extreme scale-out becomes possible.
The hyperscale environment is also simplified, because Moonshot is a federated architecture: components like power and cooling are shared on board, since everything sits on a single chassis. My colleague will now take over and explain how we use Cloud OS, which is based on OpenStack, to provision software-defined servers on Moonshot. Thanks, Adarsh. Am I audible? Hello everybody, I'm Mahalakshmi, from the HP Cloud business unit. Adarsh took us through what software-defined servers are, what the HP Moonshot system is, and why we need them at all. Before going into detail on how we provision software-defined servers using HP Cloud OS, a quick question: how many of you here have heard about the HP Moonshot system? Okay, good. So let's see how we use HP Cloud OS to provision software-defined servers. First, what is HP Cloud OS? It is an enterprise-grade, OpenStack-based cloud platform with several additional and very interesting features, the first being simplified delivery. Most of us here have experienced the difficulty of installing OpenStack: many nodes, many components, many configuration files, and it is a real pain to install today. I think it will be addressed by TripleO going forward, but simplified delivery is one big value HP Cloud OS brings to the OpenStack community: you can bring up a cloud in a couple of clicks and have it running in minutes rather than hours or days.
HP Cloud OS has an installation dashboard that automates the configuration and installation of all the OpenStack and Cloud OS components. Another interesting feature is enhanced lifecycle management: based on the workload we want to provision, we can select different resource pools, grouping resources from a public cloud as well as the local OpenStack cloud. For example, to bring up a three-tier application with a web tier, app tier, and database tier, I can choose to place the database tier in my local cloud and the web and app tiers on a public cloud. As of today, HP Cloud OS supports AWS and HP Cloud Services as public cloud environments from which resources can be drawn. There is also a designer with which you can create a topology to suit the application workload you want to deploy: you drag and drop the various components and configure them to your requirements. You create the topology once but can use it as many times as you want, retaining the configuration across as many cloud services as you need. That's the brief on HP Cloud OS. Now coming to Moonshot: I saw many hands go up on the Moonshot system, so how do we enable Moonshot with HP Cloud OS? As I said, Cloud OS is based on OpenStack, and Moonshot is a bare metal server, so obviously we need the OpenStack Nova bare metal component, which enables provisioning of bare metal servers. We used Nova bare metal from the Grizzly release to enable Moonshot on Cloud OS. This integration brings in the unique features of OpenStack and Cloud OS as well as Moonshot.
That's the main idea. On the next slide, what you see is a high-level component architecture of HP Cloud OS for Moonshot. The light blue boxes are HP components and the dark blue boxes are OpenStack components, though we had to customize the OpenStack components to make them work with Moonshot; they are not used as is, because supporting a bare metal server like Moonshot has some specific requirements. The workload topology document is the design document created on HP Cloud OS to bring up a multi-tier workload topology. Using that document, the Cloud OS provisioning engine redirects requests to the appropriate components for storage, network, and compute, so that elemental provisioning is handled by OpenStack. With bare metal, as of today, only compute is supported: Quantum, now called Neutron, has no role to play here, because it is integrated with bare metal only from Havana onwards. You may have heard this in yesterday's session on Nova bare metal provisioning given by the Ironic team. We have the Glance repository, the Nova controller, and the Nova bare metal compute host. On the compute host we have the PXE configuration and the IPMI driver, which is the power management driver that manages the power and console of the bare metal servers. We initially tried to use the IPMI driver as is, but because Moonshot requires additional support for double bridging over IPMI, we could not use it without changes.
We had to update the IPMI driver to support double bridging. Double bridging means there are two levels of addressing for the Moonshot cartridges: first I address the chassis, from the chassis I query for all the cartridges, and then I redirect the specific operation to a cartridge. Every power management operation, like power on, power off, or getting the power status, is addressed to the chassis, and the chassis redirects it to the appropriate cartridge. That's the IPMI side. This slide details the specific customizations we had to do to make the Moonshot servers, which are basically bare metal servers, work with the Nova bare metal component. As of today there is no support on the UI, that is on Horizon, for bare metal servers; that is one main thing we did, but I'll start with installation and configuration. HP Cloud OS as of today supports only virtual environments: I can create topology-based virtual environments or elemental virtual instances, but there is no support for bare metal instances as such. With the Moonshot integration we bring bare metal cloud support to HP Cloud OS. We had to include the Nova bare metal component alongside the existing Cloud OS components and create install scripts for it, plus install scripts for the various OpenStack packages required for bare metal. There are also specific requirements on third-party open source packages, such as ipmitool on Linux, so we created scripts to install the Debian packages and some startup scripts. Those scripts are what actually enable bare metal provisioning with HP Cloud OS.
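To illustrate what double bridging looks like at the tool level, here is a minimal sketch of how an updated driver might assemble an ipmitool invocation that bridges through the chassis to a single cartridge, using ipmitool's transit (`-B`/`-T`) and target (`-b`/`-t`) bridging options. The helper name and the default channel and address values are illustrative assumptions, not the real Moonshot addressing scheme.

```python
def build_ipmi_command(chassis_ip, user, password, cartridge_addr, action,
                       transit_chan="0", transit_addr="0x20",
                       target_chan="7"):
    """Build an ipmitool invocation that double-bridges a power command
    through the chassis manager to one cartridge node.

    Channel/address defaults here are placeholders, not the actual
    Moonshot addressing values."""
    return [
        "ipmitool",
        "-I", "lanplus",        # RMCP+ session to the chassis over the LAN
        "-H", chassis_ip,
        "-U", user, "-P", password,
        "-B", transit_chan,     # transit channel: first bridge hop
        "-T", transit_addr,     # transit address (the chassis manager)
        "-b", target_chan,      # target channel: second bridge hop
        "-t", cartridge_addr,   # target address of the cartridge itself
        "chassis", "power", action,
    ]
```

The returned list could then be run with `subprocess.check_output`; a power-status query for cartridge `0x82` would be `build_ipmi_command("10.0.0.5", "admin", "secret", "0x82", "status")`.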
Coming to Quantum: as of today, Quantum does not play a major role in bare metal provisioning, but some services, like the Quantum DHCP agent running on the HP Cloud OS node, conflict with the dnsmasq service on the bare metal compute host. The dnsmasq service is the one that actually provides networking for bare metal: we run dnsmasq on the Nova compute host, so the worker node is the one that manages networking. I run a flat network on the compute node, which takes care of allocating the IP address, and this is where the conflict with Quantum DHCP comes in. We have to stop Quantum DHCP so that the IP address is served from the compute node, and we have scripts for doing that. Then it comes to provisioning. There were a couple of major things we had to do to enable provisioning of Moonshot servers. The first is the IPMI driver: we had to update it to support double bridging, as I explained. Also, the default ipmitool that ships with Ubuntu 12.04 is an older version; the earliest version that supports double bridging is 1.8.12, and that is what we have been using for IPMI power management. There is also a PXE configuration file, and we had to update the PXE configuration with some settings specific to Moonshot servers. Then flat network support: HP Cloud OS did not have flat network support, so we added a single flat network in order to support bare metal. And this is a very interesting feature, auto-enrollment of the cartridges. With the Nova bare metal provisioning framework, there is a command to register each and every bare metal node.
It is a single command per node, and a Moonshot chassis has 45 cartridges; I cannot go and execute a command for all 45 cartridges by hand, which would be manual, error prone, and time consuming, and an HP Moonshot chassis can have up to 180 cartridges, which makes it even worse. So we have a script that takes the IP address and credentials of the chassis, auto-discovers all the cartridges along with the required configuration, like CPU, memory, disk, and the cartridge addresses, and registers them automatically. All of this is automated in the install scripts: when we give the details of the chassis, everything comes up, and you are ready to provision the bare metal Moonshot servers based on the workload requirements. Most of the items on Nova bare metal and Quantum we have already covered, but one important piece of customization I have not said much about is the self-service dashboard. As I mentioned, Horizon has no specific support for bare metal instances. If you are familiar with the OpenStack wiki for the Nova bare metal provisioning framework, there are a few specific configurations required for bare metal: images have to be specifically prepared for use with bare metal, because you cannot use a VM image for a bare metal instance, and similarly the flavors need specific extra specs, so you cannot use the VM flavors with bare metal instances. There are more interesting things as well. With a VM you can access the console through noVNC, but with bare metal that support is not there.
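The auto-enrollment flow just described can be sketched as a small loop: given the chassis credentials and a list of already-discovered cartridges, generate one `nova baremetal-node-create` call per cartridge instead of typing them by hand. This is an illustrative sketch, not HP's actual script; the argument order follows the Grizzly-era bare metal CLI, and the dictionary field names are assumptions.

```python
import subprocess

def enroll_cartridges(chassis, cartridges, dry_run=True):
    """Register every discovered Moonshot cartridge with Nova bare metal.

    'chassis' holds the management IP and credentials used for IPMI power
    control; 'cartridges' is assumed to be the output of an earlier
    discovery step that queried the chassis for each node's CPU, memory,
    disk, and PXE MAC. With dry_run=True the commands are only built."""
    commands = []
    for cart in cartridges:
        cmd = [
            "nova", "baremetal-node-create",
            "--pm_address", chassis["ip"],    # power ops go via the chassis
            "--pm_user", chassis["user"],
            "--pm_password", chassis["password"],
            cart["service_host"],             # Nova bare metal compute host
            str(cart["cpus"]),
            str(cart["memory_mb"]),
            str(cart["local_gb"]),
            cart["pxe_mac"],                  # provisioning NIC of the cartridge
        ]
        commands.append(cmd)
        if not dry_run:
            subprocess.check_call(cmd)
    return commands
```

Run once per chassis, the loop replaces 45 (or up to 180) hand-typed registration commands with one invocation.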
Even with the provisioning framework, there is no way to bring the console onto the self-service dashboard, but we updated it so that the console of a bare metal server can be accessed via the self-service dashboard: anyone can log in and access their instance's console directly from a browser. That is one specific capability we brought in. The other is the IP address of the created bare metal instance, which is actually allocated by the compute host. Horizon normally has no way to reach the compute host, because the compute host is on a private network while the self-service dashboard is on the public network, but we customized the Horizon code to fetch the IP address from the specific compute node and make it visible on the self-service dashboard. We did this because there is otherwise no interface to find out where your actual instance exists. That covers the customization and updating of HP Cloud OS to make it work with HP Moonshot systems. Now for the next steps. We are looking into a synchronized release cadence so that we can support the various releases of Moonshot cartridges: as cartridges are released, we will stay aligned with those releases. This customized product, specific to the HP Moonshot system, is proposed to be available to all Moonshot customers along with the Moonshot chassis. And whatever changes we make in the process of integrating HP Cloud OS with Moonshot, we are planning to contribute back to the community, because as we productize we are seeing many issues and many enhancements that are very specific to Moonshot.
We plan to contribute it back to the community so that we can harness the power of the community again. We are working closely with the Ironic team on improving the Moonshot provisioning experience, and there are many enhancements we are pursuing with them. As the Ironic team mentioned in yesterday's session, we need advanced networking support for bare metal servers to work alongside other compute types; we are still at a very primitive level on networking because Quantum, or Neutron, is not involved, so we plan to backport that support or move to later releases to accommodate it. That is on the roadmap. The product we are working on, HP Cloud OS for Moonshot, is very specific to Moonshot, but we also plan to support x86-based bare metal servers, because there may be requirements for workload-specific or compute-intensive servers; we plan to support those alongside HP Cloud OS for Moonshot. Before I move on to the demo, I want to take this opportunity to thank the Ironic team and the community for helping us past all the hurdles we hit; otherwise the development phase would have been much slower. So let's move on to a quick demo. What you see is the install and administration dashboard for HP Cloud OS. This is the dashboard that takes you through the installation of the Cloud OS components and the OpenStack components. HP Cloud OS is delivered as an ISO from which you bring up an admin node; the admin node is the controller for the installation. Once the controller, the admin node, is installed — sorry about that.
Let me start again. The admin dashboard can be accessed at the highlighted URL. There are a couple of prerequisites to complete before installing the HP cloud environment. In the prerequisites we set the firewall configuration, the network configuration, and the time configuration, which is needed to keep the machines running the OpenStack components in sync. Once those configurations are set, we go on to the nodes. The nodes here are the various VM instances used to bring up the HP Cloud OS components and the OpenStack components. What you see are the discovered nodes; they are shown in the deployed state now, but when a node initially comes up it is unallocated, and then it gets allocated, meaning it is ready for installing any component of OpenStack and Cloud OS. As of now I have only two nodes used for deploying Cloud OS: one is the cloud controller and the other is used for the region. Now let's see how the cloud and the region are created. Manage Cloud is where you actually create the cloud and select the various nodes, such as the cloud controller, storage controller, or network controller. There is a button at the top right, Create Cloud, which lets you choose a single-node or multi-node deployment based on your requirement. We click Create Cloud to see how the configuration for the deployment works. What you see is how we create the cloud: you have a cloud controller, a network controller, and a storage controller.
If you have available nodes, you can select one of them, and there are some specific configurations required for bare metal, especially the network configuration. We can select a single node for all three controllers or select multiple nodes. Then we go to the attributes. These are the attributes required to configure the flat network on the bare metal host: we take inputs for the network interface, the DHCP IP range, and whatever else is required, and these are passed from the network controller on to the Nova compute host. That is how cloud creation happens. Next is region creation. If you looked at the cloud creation, there was no configuration for a specific region; this is how you create one, with its own separate configuration and region-specific controllers. You have a compute controller, and you can have as many compute nodes as you desire. In the attributes section you will see a compute type of bare metal; that is how the bare metal support comes into the picture, and we support both Moonshot servers and the standard server, which is an x86-based server. I'll take a couple more minutes on the demo. Now we move on to cloud administration. Since the cloud was already created, you see the attributes already in use: we have the same controller for all the components here, cloud, network, and storage, and you can see the compute controller and the nodes, with different nodes used for the cloud and the compute. That was the installation part. Now what you are seeing is the cloud administration environment.
This means I am logging into the actual HP Cloud OS. The Cloud OS self-service dashboard is based on OpenStack Horizon. I think we are running out of time, so: once we sign in, it takes us to the project summary page. Most of the tabs on the left side should be familiar, but there are a few new ones like topologies, documents, and resource pools. Here in the images tab is a specific image created for bare metal. Under access and security, there is no specific requirement for security groups, but we do need a default security group for Nova bare metal to work. There is also the network we created internally: a single flat network that I can use for provisioning. Then we move on to the topologies, or the documents. You may have seen topology design at our demo booth: we drag and drop components and create a topology. I'm going into the designer now. This is the Cloud OS designer, with various components: server, volume, network, load balancer, router, and so on. With respect to bare metal, only the server component is useful today, because bare metal does not yet support volumes, and Quantum does not come into the picture for bare metal until Havana. We had created a topology design document; now let's go to the instances, just to see the customizations we made to launching instances. We are on the instances tab; it's becoming very slow. You are seeing the launch instance form. If you look at it, some tabs present on the standard Horizon dashboard have been removed, and the image you see is the bare metal image we created. And there are no default flavors.
We had to remove the default flavors because they are all for VMs; that's a customization we made on launch instances. Now we are on the images tab, where you can see some additional image types: the deploy initrd image, which is the RAM disk image, and the deploy vmlinuz image, which is the kernel image. These are the two images that have to be attached to the OS image in order to enable it for bare metal provisioning. Moving on quickly, this is the create image tab. You see two additional inputs here, the kernel image and the RAM disk image; these have to be created beforehand, before you create the OS image, so that you can link the OS image to them here. Now the flavors tab. I have a flavor already created, and you can see three additional inputs: the CPU architecture, the kernel image, and the RAM disk image. These have to be attached to the flavor so that the Nova scheduler selects an appropriate bare metal node for creating the instance; based on the image and the flavor, the node is selected by the Nova bare metal scheduler. Next is the instances tab. You see the IP address of the instances; this IP address is allocated by the compute host, and we have surfaced it here. One final thing I wanted to show is the console: the bare metal console is now on Horizon, so you can access your console directly from the self-service dashboard. And this is one additional feature we added specifically for Moonshot chassis support: we take the chassis inputs, the IP address and credentials, so that the cartridges can be auto-discovered and enrolled at the back end. So that ends the demo.
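The image and flavor preparation shown in the demo can be summarized in code. The sketch below builds the glance and nova CLI calls that link a kernel and RAM disk image to an OS image and tag a flavor with the bare metal extra specs. The property names (`kernel_id`, `ramdisk_id`, `cpu_arch`, `baremetal:deploy_kernel_id`, `baremetal:deploy_ramdisk_id`) follow the Grizzly-era Nova bare metal wiki; the image IDs and helper name are placeholders.

```python
def baremetal_prep_cmds(os_image, flavor, cpu_arch,
                        kernel_id, ramdisk_id,
                        deploy_kernel_id, deploy_ramdisk_id):
    """Return the CLI calls that make an image and a flavor usable for
    Nova bare metal scheduling. The IDs are assumed to come from earlier
    'glance image-create' calls for the vmlinuz and initrd images."""
    return [
        # Link the OS image to its kernel and RAM disk images
        ["glance", "image-update", os_image,
         "--property", "kernel_id=%s" % kernel_id,
         "--property", "ramdisk_id=%s" % ramdisk_id],
        # Extra specs the bare metal scheduler matches on
        ["nova", "flavor-key", flavor, "set", "cpu_arch=%s" % cpu_arch],
        ["nova", "flavor-key", flavor, "set",
         "baremetal:deploy_kernel_id=%s" % deploy_kernel_id],
        ["nova", "flavor-key", flavor, "set",
         "baremetal:deploy_ramdisk_id=%s" % deploy_ramdisk_id],
    ]
```

Executing these once per image/flavor pair mirrors what the customized create-image and flavors tabs do through the dashboard.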
We can open it up for some quick questions. The question is: how do we differentiate between a bare metal server and a software-defined server with respect to the Moonshot chassis? Is that correct? Okay. Basically, the Moonshot servers are called software-defined servers, and the term comes from the various workloads supported on the Moonshot cartridges. In a Moonshot cartridge, everything other than the software-defined portion, the CPU, memory, and disk, is shared across the chassis. The cartridge on which you provision a bare metal instance is customizable, with specific configurations for specific workloads. As Adarsh mentioned, one cartridge can support a Hadoop or big data workload, another dedicated hosting, another graphical or video streaming applications. The configurations are still hardware, but they are very much tuned and tailored for a specific workload. What you provision and what is given to the cloud service is the software-defined portion; the rest is shared. That means you pay only for the software-defined portion, the CPU, memory, and disk required for your workload. The selection of a cartridge for provisioning happens based on the workload type: you do not ask for a specific cartridge, you ask for a specific workload, and we take care of selecting a cartridge at the back end that is tailored for it. Maybe we can take the rest of the questions offline. Yes, that's correct. Any more questions? Can you be a bit more audible?
Okay, so as of today the whole chassis is allocated, but we are looking at customizing that on a per-tenant or per-compute basis, or using host aggregates to segregate or aggregate the chassis or the cartridges for specific access control. Right now the controller node and the compute nodes are not on the Moonshot; they are outside. The controllers are outside the chassis, and the admin node I described is also outside the chassis, so they are not part of the Moonshot system yet, but we are working on that: we want to put Cloud OS on Moonshot itself. That will be coming next. Okay, thank you all for joining.