Good afternoon everyone. My name is Carmelo Ragoza and my name is Tariq Eli. We both work at SAP and primarily focus on bare metal as a service, so we're going to present the work we've been doing in this context. First we'll give a quick overview of how this fits into SAP's overall plan of moving to OpenStack, and then the presentation will focus mostly on bare metal as a service. In terms of audience, we think the type of work we've been doing can be beneficial for other companies or enterprises that share the same kind of requirements. So we'll go quickly through some background on SAP and what we call the converged cloud plan, and then show the current in-house solution that we developed and are using in production. Then we'll focus on the enterprise requirements that are relevant for us and that we went after in our OpenStack evaluation.
After that we'll go through our findings from that evaluation, how we actually integrated Ironic into our bare metal solution, and finally some of the future work. Currently at SAP there are lots of different clouds, and they are kind of silos: self-contained, working on their own. But there is already a plan in place, and we are moving towards a single solution. OpenStack is at the bottom of it, managing all the different types of hardware, and on top we have the various applications: some use PaaS solutions, some call OpenStack directly, and some go through higher-level platforms. Our bare metal work fits in here because some of those applications need this type of solution. So as I said, at SAP we are moving to this solution, which we call the converged cloud. Maybe some of you have already heard about SAP HANA. Quickly: HANA is an in-memory database that lets you load all the customer data into memory, and from there you can do analytics and other interesting things. HANA mostly works with large RAM footprints, on the order of terabytes of RAM. So in order to have HANA working in a cloud setup, there are two different aspects we took care of. The first is what we call the HANA cloud cell, which is the infrastructure point of view, where we have standardized the different types of hardware and how they have to be combined. One part is the compute, which is pretty straightforward.
There are different switches, and you can have different types of networks: some for management and some for the actual customer networks. The compute nodes are highly dense in RAM because, as I said, typical HANA deployments are in that order; and then we have a separate storage rack where all the storage is kept. This is what we use to provision HANA landscapes, and it allows us to have optimized performance, scalability, reliability, and security. The other aspect is the software that lets you deploy on this infrastructure. It's called the Cloud Frame Manager (CFM), and it basically allows you to manage the life cycle of your HANA landscapes. You have a customer that needs what we call a frame; in this frame, which is made of one or more nodes, you need to deploy HANA. Our in-house solution allows you to do that, and it manages all the low-level resources: storage, networking, servers, and so on. This is great and it works fine, but of course there are some limitations in the long run. So when we started to think about moving forward, we said, okay, let's look at whether we could have something that is open.
Rather than staying purely in-house, we started to think about what kind of requirements we would need if we wanted to go in that direction. The first one, of course, is the ability to manage your bare metal infrastructure: servers, network, storage, and so on. We have that in-house, but we want something that is open, and this links to the second bullet point: if we want to bring in new vendors, at the moment we have to write new software that does the configuration for each new vendor. The more vendors we want to use, the more of that we have to write and maintain. On the other hand, we don't want to be locked in to a single vendor. So we want to detach from that and rely on something that evolves on its own: the vendor itself writes that part, and you just leverage it. The other requirement was open APIs, open interfaces that are widely accepted and used, so we can simply rely on those. Another important aspect is multi-tenancy. Multi-tenancy in the cloud, whether with VMs or bare metal, is kind of easy: you give a box to just one user. But there is also multi-tenancy in terms of networking. When you provision a server, you want to be able to use a tenant network; you want those networks to be separated for each tenant in your landscape. That was really important for us, and you achieve it by using things like VLANs, VXLANs, and so on, so whatever we were looking at needed to have that type of feature. We also needed network reliability. We have reliability in terms of servers and in terms of storage, but we also wanted it in terms of the network.
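To make the tenant-network separation requirement concrete, here is a minimal sketch, not our production code, of the request body you would send to Neutron's network-create API (POST /v2.0/networks) to get a per-tenant VLAN network; the physical network name and VLAN ID are purely illustrative.

```python
def tenant_vlan_network(name, physnet, vlan_id):
    """Build a Neutron network-create body for a tagged VLAN network.

    Each tenant gets its own segmentation ID, so traffic for different
    tenants stays separated on the wire.
    """
    return {
        "network": {
            "name": name,
            "provider:network_type": "vlan",
            "provider:physical_network": physnet,
            "provider:segmentation_id": vlan_id,
        }
    }

# Example: a dedicated VLAN 101 network for one tenant (values made up).
payload = tenant_vlan_network("tenant-a-net", "physnet1", 101)
```

The same shape with `"vxlan"` as the network type covers the VXLAN case; only the segmentation ID space changes.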
Network reliability here means that you have multiple networks bonded together, so that if one fails you are still able to carry on your provisioning. Finally, we wanted to support multiple deployment models: you can have local deployment with local boot, you can have NFS root, and so on. These were the requirements that were important for us. When we started to look at what we could use, OpenStack was of course one of the most wanted candidates, and in our evaluation we realized that out of the box we could already have things like active support for multiple vendors: each vendor that comes into the OpenStack community and wants its hardware supported does the development of its own plug-in, so that ticked the box for us. There are standardized APIs, of course; that's accepted in OpenStack, it's there, and it works fine for us. Multiple deployment models are there too. We didn't have local boot, since CFM, our in-house solution, does NFS root, but we had plans to add local boot, and Ironic supported that out of the box, so it was a good thing for us to be able to use that feature right away. The other thing which is very relevant, very important: we wanted an active community, because you can have the best solution possible, but if you don't have support behind it you're going to be stuck many times. And we found that the community in OpenStack is great; we've been actively engaging with it on new features, bugs, and so on. There is just one example over there: a bug report in IPA, the Ironic Python Agent, with a patch committed within a few hours, which is great.
If you think about it, you find something that doesn't work, you talk with somebody who knows what to do, and it just gets fixed. The last bullet is the ability to run Ironic in standalone mode. Why? Because our solution is already working and in production; we cannot bring in all the functionality that is in OpenStack right away, we want to do it gradually. So the ability to use just a single component was quite important for us, because right now we have a solution into which we've integrated Ironic, and it's working, it's out there. In our evaluation we then came up with a number of findings. The first was RAID support: we needed RAID capabilities, because during provisioning CFM configures the node with the exact RAID setup that you need. Although at the time it wasn't there for Ironic, it was nice to know they were already working in that direction. Then multi-tenant networking: up to now Ironic has only supported flat networks, so there was no possibility of using VLANs. Although Neutron supported that, there was no integration between Neutron and Ironic. But we've been working with the community, and that shortcoming we turned into a feature; we'll talk about that later on. Another thing, for example: at the moment Ironic doesn't support NFS root. It's something we are working on, and perhaps in the future.
We may contribute it upstream. One comment about the logging: it can be a bit difficult at times, as you need to go through many files, but that's the way it is. Staying on logging, another aspect for us was that we wanted to integrate the OpenStack components with CFM, which has its own logging component. If we trigger a provisioning workflow, we wanted to make sure we can pull all the relevant logging entries from the OpenStack components into the CFM workflow, so that if something goes wrong, instead of going off to dig through log files, you can see what happened in CFM as well. So this point refers to integrating the back-end OpenStack logging capabilities into the CFM logging capabilities. As Carmelo said, with many log files it can be difficult to find out where the exact issue is, but integrating that into the CFM logging lets us relate log entries to the specific provisioning transaction we triggered from CFM. The last bullet is that at the time of our evaluation there was no hardware discovery, but this has since come up as another nice feature in Ironic, and it's already there. Now, how we went about the integration: as I said, because we didn't want to integrate all the OpenStack components right away, we went for integrating only Ironic, and only in standalone mode. The way we did it was by wrapping the Ironic API calls so that they could be used in CFM asynchronously. That's because when CFM gets a request to create a frame, it creates a kind of plan.
That plan consists of many different tasks, and because we wanted to be able to create a task for Ironic, we needed this kind of approach. For the other things, like discovery, networking, and image provisioning, we just kept what we had developed in CFM. So discovery is done through CFM: CFM has a discovery script, and once you boot the node this script starts and collects node information. We then use that node information from CFM, which invokes Ironic to create a node with the information just collected. Once we get a reply back with the UUID, we store that UUID in the CFM database so it can be used later for any other operations. Pretty simple, nothing fancy there. The other part is the deployment. Once we kick off the deployment, we use the Ironic Python Agent to do the bare metal node configuration, so things like creating the disk configuration and writing the image to the disk. After that, in order to do the node customization, we use cloud-init: CFM dynamically generates the config drive and user data for cloud-init. So this is what we've been doing so far, and as I said, it is already implemented, integrated into our solution, and used in production. As future work, we've been working with the community, as I was saying earlier, on the Neutron and Ironic integration. This is something that apparently many companies, or anyway many OpenStack users, wanted to have: the ability to use a tenant network when provisioning a node.
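The config-drive generation mentioned above can be sketched roughly as follows. This is a minimal illustration, assuming nothing about CFM's actual implementation: it lays out the two files cloud-init's ConfigDrive datasource reads, openstack/latest/meta_data.json and openstack/latest/user_data. In practice the resulting tree is packed into an ISO 9660 image labelled config-2 and attached to the node.

```python
import json
import os
import tempfile

def write_config_drive(root, node_uuid, hostname, user_data):
    """Write the minimal config-drive tree that cloud-init consumes."""
    latest = os.path.join(root, "openstack", "latest")
    os.makedirs(latest, exist_ok=True)
    # Instance metadata: identity of the node being customized.
    meta = {"uuid": node_uuid, "hostname": hostname}
    with open(os.path.join(latest, "meta_data.json"), "w") as f:
        json.dump(meta, f)
    # Free-form user data, e.g. a #cloud-config document.
    with open(os.path.join(latest, "user_data"), "w") as f:
        f.write(user_data)
    return latest

# Example with made-up values, written to a temporary directory.
drive = write_config_drive(
    tempfile.mkdtemp(), "1234-abcd", "hana-node-01",
    "#cloud-config\nhostname: hana-node-01\n")
```

Here the node UUID would be the one returned by Ironic at node-create time, and the user data would carry whatever per-frame customization CFM generates.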
We have been doing that work in the community jointly with Arista, and it is now in Mitaka. Of course, we also need to integrate with Keystone for identity and authorization. This is mostly because, if you think about the converged cloud plan we showed at the beginning, this work will go into the whole SAP infrastructure; that means we are going to use more OpenStack components, so we will need to integrate with Keystone, which will serve that role in the context of the converged cloud. The other thing we've been working on is discovery. We had our own discovery, but we want to leverage the Ironic Inspector one, so we've already been doing some internal work to integrate our own discovery and the Ironic one, using a single ramdisk that can decide which one to use at provisioning time. We've also been looking at Nova in terms of scheduling: availability zones, affinity and anti-affinity, and so on. This is because we need a way to have a bit more control there. The reason is that HANA can have multiple nodes, and depending on the customer requirements you may need to push specific HANA instances into a specific rack, keeping them within the same rack; so we've been looking into Nova to use some of those capabilities. And finally, the use of NFS root: as I was saying, we've been working on integrating it with Ironic, which could be done by decoupling the deployment and booting operations. At the moment, when you do a provisioning in Ironic it is just a single operation; if we are able to decouple that, we can do the boot from NFS root, and from there it takes care of itself. We have something that is almost there, and eventually we might decide to push it back to the community. And I think, yeah, I can take questions if there are any.
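The kind of placement control mentioned above can be expressed through Nova's scheduler hints. As a sketch, assuming a server group has already been created with an affinity policy (all names and UUIDs here are made up), the boot request body would look like this:

```python
def boot_request(name, image_ref, flavor_ref, group_uuid):
    """Nova POST /servers body that asks the scheduler to place the
    instance according to a server group's policy: affinity keeps the
    group's instances together, anti-affinity spreads them apart."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
        },
        # Scheduler hint tying the instance to the server group.
        "os:scheduler_hints": {"group": group_uuid},
    }

# Example: co-locate a second HANA node with its group (values made up).
req = boot_request("hana-node-02", "image-uuid", "flavor-uuid", "group-uuid")
```

Rack-level placement specifically would still need something finer-grained than host affinity, which is part of why we are looking at what Nova can offer here.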
I think, yeah, you need to use the microphone.

Hi, great to see Ironic in action. Could you please elaborate on the RAID configuration: do you do it during the deployment itself or some time in advance? And do you have a uniform RAID configuration, or does it vary per instance?

So we do it during the provisioning workflow. That's where our requirement was a bit different from Ironic's, because Ironic does that in the cleaning phase, and we want to do it during the provisioning phase. We have some development ongoing where we are still using the target RAID config to specify what RAID setup you want, but we have a custom hardware manager in IPA that actually triggers it during provisioning. We also wanted the capability to specify different types of RAID depending on the workload, because different workloads might require different RAID levels. So we're still using some aspects of Ironic, but rather than relying on cleaning-based RAID configuration, we're doing it during provisioning. Thank you.
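The per-workload RAID selection described in the answer can be sketched as a mapping onto Ironic's target_raid_config structure. The logical_disks list and its fields follow Ironic's actual schema; the workload names and the RAID level chosen for each are invented for the example.

```python
def target_raid_config_for(workload):
    """Map a workload type to an Ironic target_raid_config structure.

    The per-workload RAID levels here are illustrative only: a
    database workload gets mirroring (RAID 1), scratch space gets
    striping (RAID 0).
    """
    raid_level = {"database": "1", "scratch": "0"}[workload]
    return {
        "logical_disks": [
            {
                "size_gb": "MAX",        # span the available space
                "raid_level": raid_level,
                "is_root_volume": True,
            }
        ]
    }

cfg = target_raid_config_for("database")
```

A custom IPA hardware manager would then read this structure and apply it while the deployment is running, rather than waiting for a cleaning cycle.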