Good afternoon everyone. My name is Tomasz Poszkowski, I'm a Cloud Solution Engineer at Intel, and today's lightning talk is about Ironic, how we can leverage it for bare metal workloads, and what the future looks like with Intel RSD.

So why is Ironic important? In every modern data center, bare metal provisioning is essential: whenever you deliver new hardware to your data center, or deploy something new, you need bare metal provisioning. Ironic has the advantage that it offers fast and repeatable deployments, and easy scale-up and scale-down with the help of Nova. Ironic, together with Nova, has full support for multi-tenancy, and with the Nova scheduler and the Placement API we have a really good data-center-wide intelligent resource orchestrator. There has been real advancement in the scheduler area, so we can safely say that Ironic together with Nova offers the best intelligent resource orchestration on the market. And of course there is a lot of tooling, processes, and software running on top of the OpenStack API, and everything that was developed for VM instances works for bare metal instances as well.

Why would people want to deploy their workloads directly on bare metal? The first and most obvious reason is no virtualization overhead: they want to extract maximum performance out of their resources and have latency as low as possible. Another reason is that with virtualization we very frequently hit the noisy neighbor problem; on bare metal this problem is shifted to a place where you can manage it better. Also, a lot of hardware introduced to the market recently offers very low-level optimizations that can be accessed only when running directly on bare metal. Over time those solutions are extended to offer full support for virtualization, but that is not necessarily
happening in the early stages. And of course a lot of hardware has issues with virtualization: I don't know if any of you knew that QEMU had a limitation where a VM could not have more than 256 cores. It was fixed recently, but before that it was not possible.

There was a bunch of improvements in the integration between OpenStack and Ironic in the last year, moving us further toward feature parity between VM instances and bare metal instances. The biggest were the networking improvements, so right now we can safely say the two are almost equal, with small differences. There is also a lot of work happening right now on routed networks, boot from volume, and other things that are extremely important for intelligent resource orchestration in the data center: mainly affinity and anti-affinity, and dynamic RAID configuration, which is not yet in Nova for the bare metal use case.

For what specific use cases can one use bare metal instances? The first one is virtualization. I hope everyone is aware of TripleO, and that's exactly this use case: we have an undercloud, which is built with the help of Ironic, and then we have bare metal instances on which we deploy OpenStack itself, called the overcloud in TripleO terminology.

The second is running containers directly on bare metal. As I said, if you want to get maximum performance out of your compute resources, going to bare metal is the perfect case, and as Kubernetes offers its own scheduler, you can put many applications on a single bare metal cluster and make sure they all work well. With the recent advancements in Kubernetes related to resource isolation, you can also be much safer with respect to the noisy neighbor problem.

Next is big data and analytics. Everyone probably knows Hadoop and Spark, and everyone knows how CPU-hungry this
kind of application is. Hadoop and Spark really like to live as close as possible to storage, and running directly on bare metal lets you take advantage of direct storage protocols like NVMe, which is not really possible with virtualization.

Finally, HPC. HPC is the kind of environment that sees a lot of legacy deployments, and it's very hard to convince those scientists that they should migrate to the cloud, because they are used to their tools, and a lot of those tools simply don't work inside a VM. With bare metal and Ironic we can mimic their old environment while adding some of the features that clouds offer, and this is a great example of building a bridge, showing people how to migrate from a legacy environment to the cloud. HPC is also really about many different hardware accelerators, and running without virtualization helps them extract more out of the hardware resources they have in their data centers.

Historically, bare metal provisioning has not been fast. When you compare the time needed to deploy a VM instance versus a bare metal instance, there is a significant difference: a VM takes seconds while a bare metal instance takes minutes. One of the areas we identified is delivering images to the bare metal instance. As it works right now, the image is delivered from a centralized point, and when you deploy, say, 100 nodes at the same time, you simply overload your north-south capacity and put a lot of stress on the centralized storage where you keep the images. We figured out that by employing a peer-to-peer network like torrent we can remove a lot of stress from those components. An experiment that we did together with Mirantis, on a one-gigabit network with a one-gigabyte operating system image and 15 nodes provisioned at the same time, showed that we can cut provisioning time in half, which is a significant advancement; we also removed a lot of
stress on other parts of the infrastructure.

The second thing is that simply booting a bare metal instance takes time, because when you provision a machine it needs to be powered on, go through the PXE boot, and load the kernel and RAM disk, and then the Ironic Python Agent downloads the image, writes it to the hard drive, and reboots. Our answer to that problem is that, at the expense of additional power consumption in your data center, you can reduce the provisioning time by keeping all unused servers already booted into the Ironic Python Agent RAM disk. Then, when someone requests an instance, the agent simply downloads the image, writes it to the hard drive, and reboots. This cuts the instance deployment time in half. Going even further, we can remove the last reboot with kexec. Kexec is a feature that lets you replace one running kernel with a different one, so when you're deploying Linux instances you can take advantage of this model. In this model, bare metal instance deployment time is even smaller than VM deployment time, because you simply write the image to a hard drive that is dedicated to a single machine, so there is no drive sharing, and you just reload the kernel and you are done.

Why is Intel investing so much in bare metal provisioning? Because we believe the future lies in bare metal resources in the data center. Intel is actively pushing development of the Intel Rack Scale Design (RSD) platform, which tries to answer many of the problems that data center operators are reporting nowadays. It is simply disaggregated: you have compute, storage, and network, and whenever you want to deploy a workload, you are not trying to fit it into existing infrastructure; instead, you build the physical infrastructure on demand. You select CPUs, memory, and storage, compose the nodes, and deploy the workload. Rack Scale Design is built on top of the standard Redfish
API, so everyone can leverage that API, and we also have integration between OpenStack and Intel Rack Scale Design. Sorry, I ran out of time; are there any questions?

Okay, so just quickly finishing: we did a POC of integration between OpenStack Ironic, Nova, and Intel Rack Scale Design, and deploying a workload on Intel Rack Scale Design is as easy as launching an instance on OpenStack: you simply select an image, select a flavor, and you are done. If anyone knows how much effort it takes to put all the hardware resources into the Ironic database, with Intel Rack Scale Design you can forget about that, because you simply provide a single Redfish endpoint URL and credentials and you are done; all the information about the hardware is automatically populated into Ironic when the node is composed, and the node in Ironic is created and provisioned. If anyone is interested in hearing more about Intel Rack Scale Design, we have a demo booth at the marketplace where you can see it in action, and if anyone is interested in OpenStack on Kubernetes in action, we have that as well. So thank you very much.
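[Editor's sketch] The "single Redfish endpoint and credentials" enrollment flow described above can be sketched with openstacksdk. This is a minimal illustration, not code from the talk: the endpoint URL, system ID, credentials, and cloud name are hypothetical placeholders, and the actual API calls are shown only in comments since they require a live cloud.

```python
# Hedged sketch of enrolling a Redfish-managed node in Ironic.
# All values below (address, system ID, credentials) are placeholders.
node_kwargs = {
    "driver": "redfish",
    "driver_info": {
        # A single Redfish endpoint; Ironic pulls hardware details from it,
        # so there is no manual per-node hardware data entry.
        "redfish_address": "https://rsd-pod.example.com:8443",
        "redfish_system_id": "/redfish/v1/Systems/1",
        "redfish_username": "admin",
        "redfish_password": "secret",
    },
}

# Against a real cloud this would be roughly (openstacksdk):
#   import openstack
#   conn = openstack.connect(cloud="mycloud")
#   node = conn.baremetal.create_node(**node_kwargs)
# and launching a workload is then an ordinary image + flavor boot:
#   conn.compute.create_server(name="bm-1", image_id="...", flavor_id="...")

print(sorted(node_kwargs["driver_info"]))
```

The `redfish_address`, `redfish_system_id`, `redfish_username`, and `redfish_password` keys are the documented `driver_info` fields of Ironic's `redfish` hardware type; everything else here is assumption.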