Hello, good afternoon. I think it's time to start this presentation, so welcome to this talk about empowering Ironic with Redfish support. My name is Bruno Cornec. I work for Hewlett Packard Enterprise, based in France, in Grenoble, a nice city near the mountains where we can do a lot of stuff. I've been doing a lot of stuff around open source and Linux for the last 23 years, and I've been interested in OpenStack for the last two and a half years, particularly around bare metal deployment, because I've been involved with disaster recovery and things like that. So I'm interested in the combination of software and hardware, and I work in a nice place where we receive customers to do proof-of-concept workshops and interact with them around new technologies. As part of that, I've interacted with some customers who are also interested in using Ironic for bare metal deployment, and that's how the idea of looking at what we could do to improve Ironic by adding Redfish support came to our minds.

A couple of definitions: I guess everybody in the room is familiar with the notion of a REST API and JSON as a file format, so the only thing that may be of more interest to the audience is the Redfish part. But it's not necessarily that slide which is interesting for you; it's more this one. I will talk about what we at HPE do around our Redfish implementation, which is a bit larger than the standard itself, but the goal of what we want to do with Ironic is really to work on the standard part of Redfish, not the extensions that we also have in our solution.

The goal of providing Redfish-based interfaces for managing servers is to give you an infrastructure-as-code possibility for really managing your servers: using standard APIs, using standard approaches such as what we have in OpenStack, to interact with your hardware platforms through what most servers provide to you today, which is a BMC, a way to do out-of-band management of your platform. So it's simple
to understand: everybody working on programming inside OpenStack can program the Redfish interface of the management board of the server very easily. It's based on HTTP communication and a REST API; it's very simple; it uses JSON, everything that you know; it can be secured using HTTPS. So everything that you like and know about how you communicate between modules inside OpenStack is something that you will also do with your server, using the Redfish interface.

So what is Redfish, concretely? It is a standardization effort which was started and pushed a lot by HPE originally, with a couple of other actors in the market (Dell, Emerson, and Intel were at the origin of the standard), and it has been pushed into a working group of the DMTF to become a management standard, and hopefully the future management standard of all hardware platforms. It was started a couple of years ago already; version 1.0 was published in August 2015, and we have had minor revisions to fix a certain number of missing or incorrectly described elements in the standard, up to the 1.0.4 version which was published this August on the DMTF website.

The DMTF is not only publishing the standard itself, as a document describing how you interact with the platform to manage it, but also all the schemas supporting how Redfish represents the notion of infrastructure, and I will give you more details on that a bit later on. It also provides a mock-up, so you can have online access to a fake Redfish-based interface which allows you to browse through the different fields that are available and get an idea of how you can interact with the platform.

The goal is really to be an IPMI replacement. IPMI has a certain number of limitations and probably shows its age. Redfish is REST-based and secured, much more in line with what we do to control platforms these days, and we hope it will become the future management path for most platforms. Having HPE and Dell behind it means that already today,
when you buy a server, one out of two servers in the world is already Redfish compliant. So that's already something that you have and can play with on, well, a lot of platforms that have shipped, and we hope that most manufacturers will follow.

So the goal is to have a representation of your hardware environment at large: not only the server itself, but also where the server is. If it's a blade in a chassis, if it's, for us, a Moonshot cartridge inside a chassis, if it's in a rack, you want information not only about the machine itself and its components, but also about the links between that machine and the rest of the environment in the data center. So there is the chassis management part, with multi-node support as well as mono-node support.

It has also been designed to provide OEM extensions, such as what has been done in the past: the possibility to work on the standard and provide as much information as possible as part of the standard, but if some manufacturers have some additional values, some additional information or actions that can be performed on the platform, to give customers access to those through the OEM extension part of the standard, and to work with the other manufacturers in the DMTF working group to put them in the standard if we think it makes sense to standardize access to that information.

The goal is to have a real standard across manufacturers, accessible through a set of tools, but also without any tool, just using the REST API and the HTTP server which is answering on the platform. So the first version was published in August 2015; development is ongoing, with the latest version in August 2016.
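Because it is all plain HTTP plus JSON, you need no special tooling even from code. As a small illustration (the document below is a trimmed service root in the shape the DMTF mock-up returns for GET /redfish/v1; a real client would fetch it over HTTPS from the BMC), locating the top-level collections looks like this:

```python
import json

# A trimmed Redfish service-root document, as returned by GET /redfish/v1.
# Shape taken from the DMTF mock-up; real BMCs add many more fields.
SERVICE_ROOT = json.loads("""
{
    "@odata.id": "/redfish/v1/",
    "RedfishVersion": "1.0.0",
    "Systems": {"@odata.id": "/redfish/v1/Systems"},
    "Chassis": {"@odata.id": "/redfish/v1/Chassis"},
    "Managers": {"@odata.id": "/redfish/v1/Managers"}
}
""")

def collection_uri(root, name):
    """Return the URI of a top-level collection (Systems, Chassis, ...)."""
    return root[name]["@odata.id"]

print(collection_uri(SERVICE_ROOT, "Systems"))   # /redfish/v1/Systems
```

From there a client simply issues further GETs on the returned URIs, which is exactly what the mock-up lets you rehearse without hardware.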
We have had four minor versions since the publication of the standard. Again, not only is the standard published as a document, a 100-plus-page document describing what we mean by management inside the platform, but all the supporting schemas are also available, in JSON, OData, and XML formats, and the mock-up is available too: you can download it, put it in a container or in a VM, and start playing locally with access to a simulated Redfish environment.

There are a lot of videos, white papers, and FAQs available from the URL mentioned below and on the dmtf.org website. Typically (I don't know for the other manufacturers), for us, with the iLO 4 type of management processors that we have on our systems, Redfish has been available since firmware version 2.30: you go to the management IP of your server, you add /redfish/v1, and you have access to the Redfish interface and can start exploring it. We also have it on the Moonshot, for the small cartridges.

So what do you want to do with Redfish?
Typically what you were already doing in the past with IPMI, but we will go much further in the future. Of course, you have information about how the server behaves: health information, temperature information, fan information. You have inventory information on the system, so serial numbers and an inventory of the various hardware components inside your machine. This is, I would say, the GET part of the standard.

Then you have the possibility to perform actions on your server, and of course for an Ironic purpose what is interesting is to have power control of the system: the possibility to power on, power off, and reboot the machine, and to change the boot order, so the possibility to boot from a specific device, either the next time or by default. You can also set a certain number of thresholds and alerts on the system, and you have access to logs on the system and some basic operating system information as well.

In the latest version there is also virtual media control. On most management boards, generally, what people can do is boot through a medium which is in fact on your own system or somewhere on your network, and you give remote access to that boot environment to your server using the management components on your machine. That is also something you can configure programmatically on your server, to give it the possibility to boot from that medium: what we had on iLO, and what other manufacturers have as well, but in a standard way.

Finally, you also have access to the possibility of managing the information of the BMC itself, so its network settings and its user accounts on the machine. All that gives you an idea of the features. There are something like 400 parameters that you can get through the standard, 100 that you can set through the standard, and manufacturers can go way further because they have more fine-grained control of the platform.
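The "set" side mentioned above is mostly small JSON bodies. As a sketch, these helpers build the two payloads an Ironic-style consumer needs most: the body POSTed to the standard ComputerSystem.Reset action, and the Boot override PATCHed onto the system resource to force, say, a one-time PXE boot. The field names follow the published schema; the helper names themselves are just for illustration.

```python
import json

def reset_payload(reset_type="ForceRestart"):
    """Body for POST <system>/Actions/ComputerSystem.Reset (standard action)."""
    return {"ResetType": reset_type}

def boot_override_payload(target="Pxe", enabled="Once"):
    """Body for a PATCH on the system resource to change the next boot device."""
    return {"Boot": {"BootSourceOverrideTarget": target,
                     "BootSourceOverrideEnabled": enabled}}

# Power the machine on, then make the next (and only the next) boot use PXE.
print(json.dumps(reset_payload("On")))
print(json.dumps(boot_override_payload("Pxe", "Once")))
```

With `enabled="Continuous"` the same PATCH changes the default boot device instead of the one-time override, which covers the "either the next time or by default" case above.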
So for HPE, we have more items that you can control through the OEM functionality. We provide a certain number of tools to give you access, which can be PowerShell-based or Python-based (which is probably more interesting for you as an OpenStack community), and we also have a command line interface tool, called hprest, which is based on the Python interface and allows you to script, for example inside shell scripts, interaction with the Redfish interface of your machine.

What is also pretty interesting in the Redfish standard is the data model which has been created around it to represent the hardware platforms that we want to model. The entry point on your management board is always the /redfish/v1 URI, and from that entry point you can navigate the graph of entry points describing the system, the chassis, the manager, and the various actions that you can perform on the system, as well as navigating through links that are put inside the standard to allow you, for example, from a system, to find the reference to its manager and navigate to it. So it's really a graph of information, not a tree. And it gives you access to all the low-level information that you would like to get, especially from an Ironic perspective: we want to be able to interact with the platform and get a certain amount of information to generate some configuration files, for example, automatically.
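Because the model is a graph rather than a tree, a client discovers resources by following @odata.id links. A minimal sketch of that traversal, over an inline sample document trimmed to the shape shown in the demo:

```python
def iter_links(resource):
    """Yield every @odata.id reachable inside one Redfish resource document.

    Redfish resources form a graph: links may point back to nodes already
    visited (e.g. System -> Manager -> System), so a real crawler must keep
    a 'seen' set of URIs instead of assuming a tree.
    """
    if isinstance(resource, dict):
        for key, value in resource.items():
            if key == "@odata.id":
                yield value
            else:
                yield from iter_links(value)
    elif isinstance(resource, list):
        for item in resource:
            yield from iter_links(item)

# Trimmed ComputerSystem document: links to its processor collection,
# its manager, and its chassis, as in the DMTF blade mock-up.
system = {
    "@odata.id": "/redfish/v1/Systems/1",
    "Processors": {"@odata.id": "/redfish/v1/Systems/1/Processors"},
    "Links": {"ManagedBy": [{"@odata.id": "/redfish/v1/Managers/1"}],
              "Chassis":   [{"@odata.id": "/redfish/v1/Chassis/1"}]},
}
print(sorted(iter_links(system)))
```

A full crawler would GET each yielded URI, feed the response back into the same function, and stop at URIs it has already fetched.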
So let me try to demo something, and for that, old guys need glasses. OK, I didn't really know what I would get in terms of network access, so let's do it this way.

If you go to the DMTF website, you will see that there are a certain number of mock-ups available today: one for rack-mounted systems, one for blade systems, and another one for the OCP platform. What I want to show you in this short video is what you can do with the blade system interface, and it's the same for the other types of systems. So you have the mock-up, an access that you can also create online. This one is for blade systems, which means that you have multiple servers inside a single enclosure, and from the main entry point you can see, through the standard, that you have a certain number of systems (which is a collection), with chassis and managers linked to it. If you go into the Systems entry point, you can see the different blades. Here we have four blades attached to that chassis, and you see that the Systems entry points use the serial number of the blade to create an entry, but you can do what you want as a manufacturer.

Then if you go into one system, you have access to the various pieces of information: the name of the system; the system type, whether it's a physical system or a virtual system, partitioned or not partitioned; other types of metadata as well, like manufacturer, model, SKU, serial number (you name it, you get it); and a certain number of status fields. All of that is really part of the standard; there is nothing here which is HPE-specific, and this is a mock-up from the DMTF, so it's a representation of how manufacturers implementing the standard should support the platform. A lot of those fields are optional. Here, for example, you see the processors: the number of processors, the processor family, the memory available, and the nature of the processors, which is again a collection. And if you click on one of the processors in the
collection, you can look at what type of processor you have on the system. And at the end you have the links to the chassis and the manager of your system. So if we go back a bit further up in the list, we can have a look. OK, and at the end you have the actions, such as the possibility to reset the system.

So if we go back to the processors, we have the collection; we see we only have one processor in this blade system; we can click on the entry point for that CPU, and then we have information on the CPU. So it's really a tree-with-links approach: at some levels there are links which can branch you to different entry points in the standard, and the mock-up is pretty well done, because you can really navigate through the links in it and it's pretty easy to discover what is in it.

Remember, a lot of fields are optional, and that's my complaint with regard to the standard itself: I would really like manufacturers to agree on what should be mandatory in terms of fields. Typically, for example, the MAC address of the first NIC is not mandatory, so when I look at that in the HPE case, I have to go into the OEM part to find the MAC address of my server, which is a bit of a pity, because I think manufacturers could agree on a single representation for at least the first MAC address of the server. But that's the way it is now, and you have to have people discuss around the table and agree on what really needs to be in the standard and what
needs to stay apart because it's really specific to a particular vendor.

You have a representation of the storage as well. Here we have one storage entry which is represented by multiple disks, two SATA drives in this case, and you see the three-terabyte type of disk that you have in the system. So that's how it works. We will now jump back to the Systems to look at another one, just to be sure that I don't lie, and again you see the same type of information for another blade: name, system type, manufacturer, etc. All the same type of information: BIOS version, memory, processors, and links to the manager and the chassis.

If we now go back to the main entry point, we can get information on the chassis itself, which is hosting the different blades, and here you will see that we have a certain number of members, five members in our collection, corresponding to the four blades and the enclosure itself. If we go into the blades this time, we get a different type of information: thermal information, power information on the system, and links to the systems themselves through the management board of the enclosure, and again links to the manager of the system and the enclosure itself. We can go back through them, so if you parse that data structure, you have a lot of ways to reach the different elements you need to interact with in order to get all the information you need. Here, for example, on the thermal aspect, you get the thresholds and the current temperature of your enclosure, or of your blade in this case, plus the fans.

So that was for the enclosure itself; now we can go to the management part of the systems. Again, we have five managers, one per blade, and if we go on each blade we can see the type of manager. This is a generic BMC, but if you were on an HPE server you would see an iLO; if you were on a Dell server you would see an iDRAC type of controller. You have
the firmware revision of the management board, not of the system itself, and you have access to the configuration of the network card of the management interface. We will go into it, and you have links again to the manager for the chassis itself and the manager for the server itself, so you can click on the server and go directly to it, or go back into the manager, the way you want. Here we have a collection with just one element, which is the configuration of the BMC of that server, and you see the network configuration, which is the most interesting part in this case: the MAC address (there are different notions of MAC address: it could be the fixed MAC address set at install time, or a MAC address that you can overwrite if there is a need), and then you have the IP information about the system. OK, so that was to give you an idea of the standard itself.

When we had that information inside HPE, we saw in our solution centers that it could be very interesting to use that metadata to enrich and run Ironic with it, to make things easier. Ironic is one possibility, but we can also work with other deployment tools, because this is generically interesting for different deployment tools. In order to make it easy for Ironic to consume that data, the idea was to say: let's work on a low-level library that will help people access the data model that Redfish is proposing, and expose it in a format which is easy to consume from a Python perspective. So we worked with a certain number of people to create the python-redfish library that you can find on GitHub, which we started working on in 2015, and since then we have made different evolutions. It uses a certain number of Python components to make our life as easy as possible, and you currently have version 0.3, which is what we think is ready for proof of concept.
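To give an idea of the kind of consumer we have in mind for the library, here is a sketch of how a power interface could sit on top of it. This is not the proposed driver code: the real one would derive from Ironic's driver base classes, and the FakeRedfishClient below is a stand-in invented for the sketch so it runs without a BMC or the library installed.

```python
class FakeRedfishClient:
    """Stand-in for a library connection to one BMC (invented for this sketch)."""
    def __init__(self):
        self.power_state = "Off"

    def get_power_state(self):
        return self.power_state

    def reset(self, reset_type):
        # A real client would POST {"ResetType": reset_type} to the
        # ComputerSystem.Reset action and wait for the state change.
        self.power_state = "On" if reset_type in ("On", "ForceRestart") else "Off"

class RedfishPower:
    """Sketch of a power interface: maps Ironic-style calls onto Redfish resets."""
    def __init__(self, client):
        self.client = client

    def get_power_state(self):
        return "power on" if self.client.get_power_state() == "On" else "power off"

    def set_power_state(self, state):
        self.client.reset("On" if state == "power on" else "ForceOff")

    def reboot(self):
        self.client.reset("ForceRestart")

power = RedfishPower(FakeRedfishClient())
power.set_power_state("power on")
print(power.get_power_state())   # power on
```

The point of the sketch is the shape: the library owns the HTTP and data-model details, and the driver layer only translates between Ironic's vocabulary and Redfish actions.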
So it's in a state where we can consider using it for developing a driver inside Ironic, because it provides a certain amount of information; maybe not everything that is needed yet, but we want to extend it as we see the needs from the Ironic driver perspective. Of course, Ironic for us is the first consumer of that library, but it's a Python library; it can be used by different types of tools, and I will cover another one at the end of this presentation.

So again, it's usable for proof of concept right now. It gives you back some information around BIOS and power management. We have made some demo scripts, and we have a client tool that I will demonstrate in the next video. We are testing it both with physical hardware, ProLiant servers and Moonshot cartridges, and with the DMTF mock-up, to be sure that it works on different types of platforms. And from one firmware version of a physical platform to another, there are already differences in the way the Redfish standard gives information back to you. So it's already work: we started with version 0.95 of the standard, and we had some adaptations to do when we moved to 1.0. I'm working on the packaging of the project itself to make it available. Right now I'm more of an RPM guy, so you have RPM packages; we want to work on the deb packages as soon as possible.

And here is what is most interesting for you right now, so let's jump to the other video to show you where we are. OK, so here I have my package version, which is between 0.3 and 0.4. There is an online help. You can interact with the different BMCs through a config parameter, and then you can get information on the manager, on the chassis, and on the system itself, which mimics what I showed you during the Redfish mock-up exploration.
We are completely aligned with the data model that Redfish is providing. Here, for the purpose of the video, I just have one system, one blade, which is a physical one, referenced in my configuration: the URI I will use, and the login and password I need to access it. This may not be as secure as you would like; that's the way it is right now. So we are communicating using the insecure option, to not have to deal with HTTPS authentication, and we can get information on the blade, on the system itself.

The video cuts the time: it takes a bit of time to dialogue with the BMC, to get all the information and bring it back, and what it brings back is currently this type of information. Nothing fancy: we find exactly the same stuff as what you get through the mock-up, except that this time it's a physical, real server, not just a mock-up. You have access to the same type of information, except that we just give you an extract of the data structure that we have in memory, to show that it's working and that we have at least a tool which can be useful for doing a bit of inventory. But that's not the goal: the goal is really for us to test the library which is underneath, and ensure that it works correctly. And it shows a chassis linked to the server and a manager linked to the server; so it's a blade server, with one enclosure around it. And as you can see, no Ethernet interface was found as part of the standard, and we have made an extension, a special handling of that case, because we needed to get the MAC address anyway. So we made a special extension to say: OK, if it's a ProLiant server, then we need to look in the OEM part to find the MAC address. I don't like that, you don't like that, but that's the way it is right now.
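The fallback just described can be sketched like this. The standard field names are real; the OEM path ("Oem" -> "Hp" -> ...) is illustrative only, not the exact iLO layout, and in a real service EthernetInterfaces is a linked collection to be fetched separately rather than the inline members shown here.

```python
def first_mac(system):
    """Return the first NIC's MAC address, preferring the standard location.

    Falls back to a vendor OEM branch when the standard EthernetInterfaces
    data is absent, which is the workaround described in the talk.
    """
    nics = system.get("EthernetInterfaces", {}).get("Members", [])
    if nics and "MACAddress" in nics[0]:
        return nics[0]["MACAddress"]
    # Vendor-specific fallback; this OEM path is invented for the sketch.
    return system.get("Oem", {}).get("Hp", {}).get("HostMACAddress")

standard = {"EthernetInterfaces": {"Members": [{"MACAddress": "aa:bb:cc:dd:ee:ff"}]}}
oem_only = {"Oem": {"Hp": {"HostMACAddress": "11:22:33:44:55:66"}}}
print(first_mac(standard), first_mac(oem_only))
```

This is exactly why making at least the first MAC address mandatory in the standard would matter: the fallback branch is per-vendor code that every consumer has to carry.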
So it's a workaround, and I'm bugging my colleagues inside HPE so that they change their minds about it, but it seems to be a difficult discussion. Similarly with regard to the storage: there is nothing appearing as part of the standard, it's again in the OEM branch, which may be more acceptable for storage, because we have our own RAID controller, so it may be a bit more difficult to standardize. Theoretically it should be, but anyway.

With the same command line interface you can make a query on the chassis, and using the same library you will get some of the information that you have in the chassis: the Redfish version, manufacturer, model, chassis type, etc., and information linked to the chassis itself, which is temperature and fan type information. And the last one is the possibility to interact with the manager of the blade, which gives you information on the network configuration of the blade and the firmware version of the iLO. So here we see it's an iLO 4 with the right firmware version, and you have the information on the IP address, IPv6 address, etc.

OK, so I hope I will not spend too much time, because I'm not the expert; the experts are in the room, in fact. If you have questions about Ironic, you look at all those people in the first row, you ask questions, and you will get the right answers.

To make it simple, the goal of Ironic is really to manage bare metal deployment inside OpenStack. It can be used as a Nova driver or standalone, and personally I like the standalone version of Ironic a lot, because it makes my life very easy: I don't need to deploy a full OpenStack environment to test the driver, so that's nice. In fact, somewhere, the goals of Redfish and the goals of Ironic are pretty aligned, because we want a neutral way to interact with the hardware: Ironic for the deployment, and also for the description of the hardware platform and a bit of inventory of it. And the project uses
drivers to do abstraction of the hardware, and that's where we want to play.

With regard to the combination of Redfish with Ironic: as I said, we are using standalone Ironic for our tests right now, and we use Bifrost to do that. Ironic already has drivers to interact with some BMCs: iLO, DRAC, IPMI. Currently there are two modes for the driver approach: one which is PXE-based, and the other which is agent-based, with IPA. There is a proposed spec, mentioned at the bottom of the slide and updated thanks to the work of Julia yesterday, proposing how we plan to add support for Redfish inside Ironic. The idea would be, depending on how the specification work on the evolution of the drivers goes, to provide at least, as we see it right now, two drivers: one for PXE support, one for agent support. We could derive them pretty easily (and I will probably not code them myself, so it's always easy to say when you are not coding) from what we have in the iLO drivers, pxe_ilo and agent_ilo, because the structure there is straightforward. And we will have to provide a certain number of additional pieces: a new module, hopefully called redfish, which would (again, it's a proposal) use the python-redfish library that we have developed, to benefit from the gathering of information and the Python data structures that we are providing, to import those data structures and do the job that needs to be done with them directly.

And of course we need to work on tests and documentation. Hopefully the availability of the mock-up will help us a lot with the simulation of the environment and the capability to do CI/CD correctly for Redfish without having to deal with the physical hardware; however, testing on the hardware platform is always nice, and I need to talk with people to understand how the Ironic community, dealing with bare metal, handles those aspects
around how to interact with physical platforms as part of the test environment, so that we can also ensure that it works for a certain number of hardware platforms. And each time people make firmware updates, we will have to look at it again, because there may be disruptions or problems in the way they implement the standard. So the mock-up is nice, but the mock-up does not evolve a lot; firmwares will evolve much more than the mock-up. So we will have to deal with hardware platforms at some moment in time: again, something for which I need help from the community.

Playing with Ironic and Bifrost and trying to set my environment up, I found a certain number of interesting challenges with that infrastructure; I created some bug reports, and I'm still trying to understand how all of that works. So again, the experts are here; I'm just here to give you an idea of where I think we should go as a community. In particular, I found it puzzling that there is not really a sysadmin-type guide. For someone like me, who has a strong Unix/Linux sysadmin background, trying to understand what it does, why there are two kernels that you need to pass as parameters, and an instance, there are a certain number of questions for which I did not immediately find easy answers. So what I will probably work on, in fact, is a training for our internal people inside HPE, to show what Ironic really is and how it works: a step-by-step guide for standalone Ironic, outside of the rest of OpenStack, because I think adding it as part of OpenStack masks a certain number of things, and I would like people to really understand how it works, step by step. That's probably the type of contribution I will be able to work on with some of my colleagues for the community.

The other thing we are working on right now is this approach. The idea is to say: OK, we have a lot of information available through different
types of tools, and in production, customers today very often use a configuration management database, or what was called in the previous presentation an ERP, to store that information and to ensure that operators have access to the right information when they need to measure the impact of a problem: you want to replace that switch, what is the impact on the rest of the infrastructure, etc. So they need that type of tool, and there is nothing in the OpenStack project providing CMDB as a service right now.

So we said it would be nice to have such a tool working with the same approach as Ironic: having drivers on the south interface to talk with the management interfaces of hardware platforms, or standards such as Redfish, or different types of tools that can provide inventory information about your platform. It would just be a matter of writing a driver to communicate between how the data is managed by that platform and how the data is managed inside the tool that we call Alexandria, in the middle, which is a librarian of sorts, maintaining all the types of information in coherency. That tool could also have drivers on the left-hand side to talk to a CMDB, and we have in our solution center an open source CMDB called iTop; if you don't know it, I really encourage you to look at it, because it's great. The data model of iTop is a parameter of the tool, so you can do everything you want with it, and you can write synchronization scripts for it very easily. So it would allow us, once we have the data model of the hardware platform in memory, to push that data back and forth with a CMDB and to keep the CMDB up to date, because the main problem we have today is that when you are creating hundreds of VMs and deleting hundreds of VMs per minute or per hour, I don't know,
You need you need to keep your cmdb up to date completely automatically So so there is really a need to to branch those type of information together And and then from those cmdb information You could have on the north side other type of drivers to talk to your deployment tools or Whatever type of tool you want to use here I mentioned ironic it could be a Docker platform It could be what we had in alien and HLM or it could be a network management platform because it's not Restricted to just servers here. We are talking about architecture in general and so really the idea is to have that type of break in the middle able to communicate to different type of drivers north and south and In fact west as well and having on under on as an input the possibility to answer on an API so that you can Drive it through an API as usual our our use case for for this is really to say okay, we have a Data center. We want to put a new server in place. We rack it. We power it We bring we plug the management board on the network We have the configuration of the account and and the network information for that we collect Information through the manager BMC to put the data inside the CMDB with a certain number of parameters and then we can use redfish to Once we have created the item so the configuration item inside the CMDB we can use a redfish to put all what is the automatically Parcable by redfish inside the CI and It's help us keep currency between the physical Platform and what we have in the CMDB which is already a nightmare when you do that manually and then you can say okay That machine I want to deploy it using ironic and with that type of image and again That's something you could put in the CMDB extract from the CMDB You could configure ironic automatically with those parameters and generate the deployment of the server from the CMDB if that's how you want to to work And you can have so the way it's working. 
You can have regular maintenance of the communication between the CMDB and Ironic, to keep the information up to date in both directions, because the CMDB is able to know who is the master of the information. So that's also something we are working on, but we would need help.

What I can also show you is what we have done up to now. This one I don't necessarily need to narrate, because it's written already. We have an environment where we want to have all the components, so we start our Alexandria layer, which is running on a port and answering on its API. Version 0.1, bear with us; it's just to show that it's possible, not to pretend it's working completely. Now we start a Docker container which is running our CMDB platform, and we will show that it's working, with the port redirected to 80, so we will be able to interact with the CMDB tool in a couple of seconds. OK, so it's running; it's still running; that's nice.

Now we can go into our browser, we look at the iTop platform, and we will see the welcome banner. It takes some minutes to start; that's OK, it's linked to the configuration inside the container, nothing fancy. That's the tool we are using in our solution center to manage the 400 servers that we have, plus network equipment, storage systems, etc. It also does ticketing and things like that. We see that by default, when you create a new iTop instance, it comes with a demo environment, and you have four servers available by default inside the tool.

Now we can pretend we have a new server, using the Redfish simulator. We start another Docker container which contains the mock-up of the Redfish environment, so it's a new server running somewhere, with its interface available on port 8000. We can check that that Redfish platform is working.
So we have a Redfish server, which is in fact not a physical one, just a mock-up, and we can have a look inside the model to look at the systems. Here we see that the system is named 1. If we go into system 1, we have the same information as we had before. Now, using the Alexandria API, we can communicate with the tool to say: I want to add that new server inside my CMDB, and I want you to make a Redfish query and import some of the data that you find through Redfish into the CMDB automatically. So we do the POST, and the API seems to be happy. Of course, it's a video; if it did not work, I would not show the video. Now if you go back into the iTop environment, you see that a new system has been created in the CMDB, coming from the Redfish query. You see the serial number, which is identical to the one we had inside the Redfish mock-up; you see the 16 GB of RAM, which again matches the mock-up; and the computer name, which is my computer. So you see exactly the same data between what has been created inside iTop and what you can read directly from the Redfish mock-up.
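Walking the mock-up's model down to "system 1" follows standard Redfish collection links: every collection lists its members as `@odata.id` URIs. A minimal sketch, using a trimmed collection payload in place of a live GET against port 8000:

```python
import json

# Trimmed payload as the Redfish mock-up might serve it for
# /redfish/v1/Systems: members are referenced by @odata.id links.
SYSTEMS_COLLECTION = json.loads("""
{
    "@odata.id": "/redfish/v1/Systems",
    "Members@odata.count": 1,
    "Members": [{"@odata.id": "/redfish/v1/Systems/1"}]
}
""")

def member_paths(collection: dict) -> list:
    """Walk a Redfish collection and return its member URIs.
    A CMDB importer would then GET each URI to build one CI per system."""
    return [m["@odata.id"] for m in collection.get("Members", [])]

print(member_paths(SYSTEMS_COLLECTION))  # the single mock-up system
```

The same traversal works for chassis, managers, or network interfaces, which is what makes a generic importer possible.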
So really, the goal is to provide an easy way for operators to automate the management of their information through a CMDB, as long as the CMDB is clever enough to provide a flexible data model and APIs so that you can control it from the outside. There are a certain number of links related to what we covered during the presentation, and that's what I had for you today. Any questions? Not too hard, please.

[Audience] Yes, have you seen cases where, since everybody doing Redfish is kind of doing some OEM commands, you were able to get that back into the mainline, and not OEM-style?

Yes. Since the start of the work on the standard, there have been some moves from the OEM area to the standard area. Not back and forth, but fed back: a certain number of elements have been agreed upon by the different actors of the standard and put into the standard, which were before in the OEM branch. But I think it's a bit slow in that regard. The real problem for me is that most of the fields in the standard are tagged as optional, and I really would like to see some of them made mandatory, even if they are empty, at least to force the manufacturers to provide a certain minimum set of information through the standard, which is not yet the case. So I don't know how that will evolve, but I also welcome suggestions, because I can talk with the HPE representatives inside the DMTF group to take input from the Ironic community, saying: OK, for us to work correctly,
we would need that type of metadata always part of the standard. So let's discuss that: give feedback, so that I can push it to the DMTF group and see whether we can make them move and improve the standard to better support what we need in terms of information.

[Audience] This happens while writing the driver: how are you planning to deal with these optional things, if something optional is not returned? I mean, you can probably influence HPE, but as more vendors start showing up, how do you plan to mitigate this problem in the driver that you are going to write?

I think the only mitigation is to force my BIOS developers to really put something in the MAC address of the NIC, because without that, there is no way the standard is useful. What we want is a single way to access one piece of information, which is, for example, the MAC address of the first network card, and that should work across all the manufacturers' implementations. So the only way is the standard. The standard is providing that information right now, but some implementations at the BIOS level do not fill that field, because they have some things which are not correctly handled, etc. So there are discussions on that, and what we need is to put more pressure on the hardware manufacturers, myself included, so that we use the right fields in the standard, and in the standard part, not in the OEM part, at least for all of what we need to work correctly. I mean, if we have some fancy hardware piece in our platform that only we provide, and we provide a Redfish way to access it in the OEM part, I'm perfectly fine with that.
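The optional-field problem can be sketched as a defensive lookup of the first NIC's MAC address, with invented payloads standing in for two vendors' implementations. `MACAddress` and `PermanentMACAddress` are both properties of the Redfish `EthernetInterface` schema, and both are optional, which is exactly the difficulty described above.

```python
def first_mac(ethernet_interfaces: list):
    """Return the MAC address of the first NIC that reports one, or None.
    Because MACAddress is optional in the Redfish schema, a driver cannot
    assume any vendor fills it in; it has to probe each interface and
    fall back gracefully (or refuse to enroll the node)."""
    for nic in ethernet_interfaces:
        mac = nic.get("MACAddress") or nic.get("PermanentMACAddress")
        if mac:
            return mac
    return None

# One vendor fills the field in; another leaves it out entirely.
good = [{"Id": "1", "MACAddress": "12:44:6A:3B:04:11"}]
bad = [{"Id": "1"}]
print(first_mac(good))  # 12:44:6A:3B:04:11
print(first_mac(bad))   # None -> the node cannot be managed reliably
```

Making such fields mandatory in the standard would let a driver delete the fallback logic and treat a missing MAC as a vendor bug rather than an expected case.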
That's no problem. But if there is something which is available across the different hardware manufacturers, it needs to be part of the standard, and it would need to be mandatory. I don't see anyone today disagreeing on the fact that you have at least one NIC per machine that you want to deploy, because if you don't have that NIC, I don't see how the machine could work. So requesting one NIC with one mandatory MAC address is not a big ask. But yeah, that's where we are. Thanks.

[Audience] Is there a standard way to discover which set of OEM extensions is supported by a given BMC, or will we have to do crazy discoverability logistics?

To my knowledge, there is nothing standardizing the OEM part. What is in the OEM part is left to the hardware manufacturers to do what they want with. It's exactly the same as SNMP: you had an entry point, and below that entry point you were managing your data the way you wanted. Here it is the same: you can do what you want in the OEM part. But from an open source perspective, with my open source hat on, I don't want to deal with OEM. We should never ever have any handling of the OEM branch; we should ignore it. There is no reason for us to use it.
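That "ignore the OEM branch" policy is easy to express as a filter. A minimal sketch over an invented payload (`Oem` is the actual name of the vendor-extension object in every Redfish resource):

```python
def strip_oem(resource):
    """Recursively drop every 'Oem' branch from a Redfish payload,
    keeping only the standard properties -- the ignore-OEM policy an
    open source consumer such as an Ironic driver could apply."""
    if isinstance(resource, dict):
        return {k: strip_oem(v) for k, v in resource.items() if k != "Oem"}
    if isinstance(resource, list):
        return [strip_oem(v) for v in resource]
    return resource

# Invented payload: Oem sections can appear at any nesting level.
payload = {
    "SerialNumber": "437XR1138R2",
    "Oem": {"Contoso": {"SecretKnob": True}},
    "Status": {"State": "Enabled", "Oem": {"Contoso": {}}},
}
print(strip_oem(payload))
```

Whatever survives the filter is, by construction, portable across vendors; anything that disappears is exactly the data the standard would need to absorb.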
So either we have everything we need in the standard way of accessing the data, or we feed back to the hardware manufacturer, saying: you miss that, and we cannot work with your hardware platform because you miss that. Does that sound reasonable?

[Audience] Yes, that would be great. But if they can't do that, I'd much rather at least have a standard way of discovering what they do support, and not have to guess.

Yeah, but the problem is that if they could agree on how to model it, they would put it in the standard instead of putting it in the OEM part. It's because they disagree on how to represent some information that they put it in the OEM part: they want to handle it their own way. For example, we have our HPE OneView tool, which does deployment of systems, etc., and it uses these OEM branches, and you have a lot of them everywhere, to get information specific to us. That's fine, but not for an open source program. We need to have access to the standard part and have everything we need in that standard part. So I really think, as a community, we need to gather all the information that is really needed by Ironic, and make a request to the standards body saying: OK, for Ironic to work correctly with a Redfish driver inside Ironic, we need that type of information, and we need all manufacturers to commit to providing it. That's the way forward, and the pressure will come from customers, as usual. Thank you very much.
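Gathering the fields Ironic needs could start as a simple conformance check against each vendor's implementation. The `REQUIRED` list below is purely illustrative: the real list is exactly what the community discussion proposed above would produce.

```python
# Hypothetical checklist of standard Redfish ComputerSystem properties
# a bare metal driver would depend on; the actual set Ironic requires
# would come out of the community discussion with the DMTF.
REQUIRED = ["SerialNumber", "PowerState", "Boot", "MemorySummary"]

def missing_fields(system: dict) -> list:
    """Report which required standard fields a vendor's implementation
    leaves out or empty, so the gap can be fed back to the manufacturer
    (or the DMTF) instead of being papered over with OEM data."""
    return [f for f in REQUIRED if not system.get(f)]

# Invented partial implementation: two fields present, two missing.
partial = {"SerialNumber": "437XR1138R2", "PowerState": "On"}
print(missing_fields(partial))  # ['Boot', 'MemorySummary']
```

Run against every vendor's BMC, such a report is the concrete evidence ("you miss that, we cannot work with your platform") the answer above asks the community to collect.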