Hello team — good morning, afternoon, or evening depending on where you are. I am Bharath, one of the presenters today. I'm an architect on the ODIM team at HPE, and with me is Shiva Charan. I'll go through a brief introduction to ODIM: its architecture, how it looks in deployment, and its build dependencies. Then Shiva will walk through the build procedure — not actually building, but running through the instructions — and show a demo of ODIM in action, adding and removing resources. You can ask questions in the chat window while the presentation is going on; either Shiva or I will try to answer them. So let's get started. ODIM stands for Open Distributed Infrastructure Management. It's what we call a resource aggregator. In this slide we have the upstream clients and users — these could be cluster frameworks, resource managers, service managers, or other off-the-shelf or custom management offerings. ODIM sits between these upstream clients and the actual physical hardware at the bottom, and the way we do that is by exposing a standard Redfish interface, shown by the two-ended blue arrow. If an integration is needed between the upstream clients and ODIM, the integration adapter plugins do exactly that. What you see in green is the native interface for the upstream clients, and below that is ODIM. You could say it offers a hardware-as-a-service interface, and it implements a pure Redfish interface. At the top we show the Redfish interface, and in the box below it we have the standard Redfish/DMTF services like account, event, and aggregation. Aggregation is a new service that was brought into the standard by our team this year.
You can refer to the 2020.2 spec and you'll find details of this aggregation schema; sessions, composition, and the others have been in Redfish even before that. If you go a little further down you see the abstraction layer — that is the plugin layer. Southbound we also expose a Redfish API, so between ODIM and the plugin it is Redfish, but depending on the device interface below the plugin it could be IPMI, Redfish, or any other proprietary interface. All right, so that's the general architectural introduction to ODIM. Next is the Redfish model — for those of you who are familiar with it this is old news, but for those who aren't, this is how Redfish models its schemas. You have the service root, /redfish/v1, which is a constant for all Redfish implementations, and under it the task service, session service, account service, event service, registries, and schema service. Task is for when you request something of Redfish and it gives you a task handle to track the status of your request; it's also possible to get notifications via the event service. Sessions are for maintaining sessions when you log in to Redfish; you obviously need an account for that, and the account service helps you create one. Similarly, events are for publishing events from the devices. Now, ODIM is actually a manager of these managers, so each of these iLOs, ILOMs, or iDRACs — depending on the vendor — is modeled under a collection resource.
So you have Systems, which models these servers; under it is the ComputerSystem schema, and then different sub-schemas for specific information about storage, Ethernet, log services, processors, memory — too many to list on one screen. Similarly, the systems are hosted in a chassis, so you have the Chassis collection, and the power and thermal metrics come from the Chassis schema. Then you have the Managers collection, which refers to the BMC in the case of, say, Dell, HPE, or IBM x86 equipment. Managers host things like log services, the network protocol (for example, whether you connect over HTTPS or SSH), and the Ethernet interfaces. The BMC can have a different Ethernet interface from the system — both of them, as you can see, have Ethernet interfaces — and then there's virtual media and so on. The information on this screen is from the DMTF, so it's their IP; I've put the source here if you want to go through it — the full document is available at that link. Then I'll go on to the software architecture of ODIM. At the top we have the API service, which in turn calls the different services. Where you see µS, that stands for microservice, and SVC for service. So aggregation is a service, while system, fabric, and manager are all collections, and the account, session, and event services are also microservices. Then we have the task service. Internally we use Redis as our database — we just use it as a document database, storing the JSON schema objects in it. We also use Kafka as a message bus, and Consul ships with this release.
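To make the model concrete, here's a rough sketch of the Redfish URI layout just described. The service root and collection names follow the DMTF schemas; the sub-resource lists are only the examples mentioned above, not the complete set.

```python
# Illustrative sketch of the Redfish URI layout described above. The service
# root /redfish/v1 is fixed for every Redfish implementation; collection and
# sub-resource names follow the DMTF schemas. Lists here are not exhaustive.
service_root = "/redfish/v1"

resource_tree = {
    "Systems": ["Storage", "EthernetInterfaces", "LogServices",
                "Processors", "Memory"],                  # per-ComputerSystem
    "Chassis": ["Power", "Thermal"],                      # metrics live here
    "Managers": ["LogServices", "NetworkProtocol",
                 "EthernetInterfaces", "VirtualMedia"],   # BMC-level resources
}

def collection_uri(name: str) -> str:
    """Build a collection URI under the service root."""
    return f"{service_root}/{name}"

print(collection_uri("Systems"))  # /redfish/v1/Systems
```

Note how both Systems and Managers carry EthernetInterfaces — the system NIC and the BMC NIC are separate resources, as mentioned above.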
Below we have the compute plugins, which bridge between ODIM and the BMCs — that's a generic name; you could substitute iLO or iDRAC and so on. There's also a fabric plugin, which currently supports Ethernet fabrics. The storage plugin is meant for external storage, that is SAN storage; it's not currently supported because Swordfish itself is undergoing changes. If you look to the right you have the description of the interfaces: the dark blue arrows are HTTPS interfaces and the red arrows depict the message bus interface. All the API and collection microservices interact with the devices via the compute plugin when needed. There's a red arrow from the plugins to the message bus: events — say BMC events from an iLO, or fabric events from the Ethernet fabric — are sent to Kafka, and the event service pulls them from there; depending on the subscriptions, we forward them to the end users in the client layer. Note that we say "forward" because not all events go to all consumers — depending on the events a particular client has subscribed to, only those events are forwarded to the user via the event service. The aggregation service is also shown with a red interface because there are some events you couldn't have subscribed to in advance: when you initially add a node, there's no way you could have subscribed for that particular node, so whenever a node is added, the aggregation service generates a resource-added event. All these events are Redfish-based. The task service also publishes to the bus, because if you send a request that creates a task, you can get a notification on task completion.
You don't have to keep polling — that's why that interface exists. The blue interface is the HTTPS interface we already covered. When we add a system, the request goes from the aggregation service via HTTPS to the compute plugin, which discovers the device and adds all the topology information to the database; the acknowledgement is sent back over HTTPS to the end user. That's the rough flow. Are there any questions in the chat? Okay, then I'll proceed. This is a more detailed diagram showing the interfaces. This box represents the ODIM Resource Aggregator, which exposes an HTTPS connection — you'll note that all the connections are HTTPS, not HTTP, and we use server-side authentication with certificates for all the APIs. The API service in turn connects to each of the services: event, fabric, aggregation, task, and the others. Most of them also connect to the Redis database to fetch and store data. For example, when the aggregation service adds a device, it stores all the topology information — and by topology I actually mean the whole Redfish schema — in Redis, so that subsequent fetches are served from the Redis cache rather than going to the plugin every time. Here you see the compute plugin and fabric plugin, indicated by the green arrows, publishing events to Kafka, and Kafka in turn sends them to the event service. The aggregation service also has a green arrow, indicating that it creates an event from the ODIM manager layer when a BMC is added. And the task service publishes to Kafka too, because when a task completes it sends a notification out through Kafka.
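As an illustration of what travels over those red arrows, here is a hypothetical sketch of the kind of resource-added event the aggregation service publishes when a node is added. The field names follow the standard DMTF Redfish Event schema; the version string, message ID, and resource ID are placeholders, not taken from ODIM's actual output.

```python
import json

# Hypothetical sketch of a Redfish-style "ResourceAdded" event, similar to
# what the aggregation service publishes on the message bus when a new node
# is added. Field names follow the DMTF Redfish Event schema; the concrete
# values (version, MessageId, resource ID) are placeholders.
event = {
    "@odata.type": "#Event.v1_4_0.Event",
    "Events": [{
        "EventType": "ResourceAdded",
        "OriginOfCondition": {"@odata.id": "/redfish/v1/Systems/<system-id>"},
        "MessageId": "ResourceEvent.1.0.ResourceCreated",
    }],
}

# The event service matches each subscriber's EventTypes filter before
# forwarding, so only clients subscribed to ResourceAdded receive this.
payload = json.dumps(event)
print(event["Events"][0]["EventType"])  # ResourceAdded
```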
All these microservices communicate with the plugins over an HTTPS interface, denoted by the red color, so that's understood to be the interface between the microservices and the plugins, which eventually talk to the devices. I think that's about it for the software interfaces. The last thing I'll speak about is tools and third-party dependencies. We used Ubuntu 18.04 as the development platform, and it's also the platform this primarily runs on. I say "primarily" because we build Docker images — Kubernetes support is planned for a future release, but Docker is in this one. If you go to this site you'll find information on how to build the Docker images and deploy them; your runtime environment could be any Linux. We've never tried it on WSL, but other Linux platforms should work well. The primary tool is Go; we use 1.13.7. We've also used a few bash scripts for creating the Docker images and so on, and the tools include Make and OpenJDK — OpenJDK is actually only used for keytool, not for any other part of the code — plus Docker CE and Docker Compose for creating the images and deploying them. The third-party components we use are Kafka 2.5.1 for the message bus and ZooKeeper 3.5.7, which is for Kafka itself — we don't have a direct dependency on it — and likewise Consul, which is used by Go Micro. Go Micro is the framework we use for inter-microservice communication within the product. For the plugins themselves we use the HTTPS interface rather than Go Micro, the idea being that it's up to whoever writes a plugin: if they want to choose some other language, we don't want to bind them to Go — they have the independence of writing it in Python, Java, or whatever language they see fit. Redis is also there as the backend store.
We use both in-memory and on-disk DB storage. At this point, if you have any quick questions I can take them — yes, there is a question, asking what it takes for ODIM to support a given BMC. Okay, Saad — for the BMCs it depends on the firmware versions and the vendors; there will be variations across both. We have provided a Generic Redfish Plugin as a template, and with minor modifications you should be able to support any BMC that has Redfish support. Otherwise the plugin has to be more involved: if the device speaks IPMI or SNMP or anything else, the plugin will have to talk to ODIM northbound using the Redfish API and to the device in its proprietary protocol. That's why we put in the plugin layer — so we can adapt any kind of device. As I said before, compute and network (that is, fabric) plugins exist. Compute we've provided as a sample, with the source code at the URI below; have a look at it — I think it should be easy to customize to your requirements, especially if you're just talking about BMCs. Does that answer your question? Thanks a lot. Shiva, if there are no other questions, we can proceed with your part. Yeah, I'll stop sharing. Okay, let me start sharing my screen — let me know once you can see it. Yes, I can go to the top of the screen; I'll just highlight the URI so you know where we are. So, as mentioned in the slide deck, this is the same repository where we have all the source code and the steps to install and use it. For my part of the demo I'll go through the deployment steps and a couple of use cases where we add a resource, delete a resource, and — based on the subscription we create — see the events arriving. I'll use a predefined client. The first part of the demo looks at the deployment itself.
As part of the prerequisites there are a couple of packages we need to install, and one of the major prerequisites is to have Ubuntu 18.04 as the operating system for all of these actions. The first step is to download and install Ubuntu 18.04, and the next couple of steps cover installing the prerequisite packages for ODIM RA, such as Make for the build process, the JDK, and Docker. In our environment all the microservices run as Docker containers, and the third-party components I just presented — Redis, Kafka, ZooKeeper, and Consul — also run as containers. To run them as containers we have to install Docker. Along with Docker we also install Docker Compose, a tool that brings up all our dependency containers — and any other container deployment we need — with a single command. The next step is to add the user to the docker group so they can perform all the Docker actions, then check whether the Docker service is running so we can continue with the installation. If there is a proxy environment, we add the proxy to the Docker proxy setup and make sure Docker picks it up, and then we restart the VM just to make sure the Docker services are enabled and pick up the latest proxy configuration. After the prerequisites are installed, we jump into the installation of ODIM RA itself. ODIM RA and the Generic Redfish Plugin are both installed as part of this process. Part of the prerequisites are the default configurations: a certain set of ports needs to be open for ODIM RA and the Generic Redfish Plugin to run. We can also change the default configuration — the steps for how to do that are mentioned at the end of this page. For now, these steps use the defaults.
The first step is to clone the repository. Next we ask the user to set an FQDN — say, for example, odim.local.com — for REST API usage and HTTPS communication, and to export the FQDN and the host IP where ODIM RA is being installed. We add that FQDN to /etc/hosts; since we're on Ubuntu 18.04, we ask the user to add it at that specific path. The next step is certificate generation: as we saw in the architecture explanation, we use certificates for HTTPS communication, and we create the certificates so that ODIM can contact the plugins using them. Once we create the certificates, we make sure to append the root CA certificate to the resource aggregator file. Then we generate the server certificates themselves using the FQDN we asked the user to set, and the next set of certificates is for Kafka. We run the copy-certificates script, which copies whatever certificates were generated into a specific folder and makes sure ODIM picks them up from there. The certificate paths are listed below: this is the path for the ODIM certs, the Kafka certs we generated sit beside it at this path, and for the Generic Redfish Plugin this is the default path. Then we go back to the root folder of ODIM and run the `make all` command, which builds each of the services and creates all the needed containers: the ODIM RA containers, the Kafka container, the ZooKeeper container, the Redis container, the Consul container, and the plugin container.
As part of the `make all` command we also make sure all the prerequisites are built and pushed into the containers, the configuration is set, and Redis has the default prerequisite entries loaded. For the Generic Redfish Plugin container — even though the service will be running inside it — we still have to add the plugin into ODIM to start using it. The user can also choose not to use the Generic Redfish Plugin and use their own plugin instead, if they have specific resources of their own to add. Once the `make all` command completes, we verify whether the individual microservices are running. Each of the services is tagged with `svc` in the name of its binary, so we can grep for `svc` and check whether, for example, the API service, account service, and aggregation service are running, and so on for each of the others. There's a default log path, which comes up in the build steps, where we can check for issues if any of the services fail to come up. The default configuration path for ODIM RA is /etc/odimra_config, where there's a config file with the default set of values; if the user changes it, they should restart the services so they pick up the latest config. The plugin has its own config folder, and these two are the default ports on which ODIM RA and the plugin can be reached. The log paths are as below: ODIM RA logs go under /var/log/odimra, and the Generic Redfish Plugin logs go under their own plugin directory in /var/log. We've also noted the steps for log rotation: if the user wants it, they go to the /etc/logrotate.d folder, create the necessary config, and set up a cron job to rotate the logs as they wish.
The default credentials with which ODIM RA and the plugin run are as mentioned here; if the user wishes to perform any REST operations, they'll have to use these two sets of credentials, one for the plugin and one for ODIM RA. There's also a readme under the aggregation service that talks about how to add a plugin and how to add a BMC into ODIM. The next set of instructions covers changing the default configuration in /etc/odimra_config: the user can browse to this path and open the config file; there's a defined description for each parameter, and if they wish to change any of them they can do so and restart the services. The rest of the config file parameters are here. At the end we restart the Docker containers so that the ODIM RA services pick up the changed config. Any questions so far before I move to the next part of the demo? Sure — there's another question from Saad; I'll answer this. He asks whether there is any KPI or performance list available. What exactly do you mean by KPI — server metrics, or ODIM itself? Server metrics are available from the Chassis schema, as far as temperature and CPU utilization are concerned, currently. Telemetry we will add later — it's a very large amount of data, so we haven't done it in the first phase; we'll pick it up subsequently, and then all those metrics will be supported: server, storage, and network. For storage there are currently no plugins, so I don't think anything is available there; network — that is, fabric — KPIs will have to come from the fabric plugin; and server metrics, as I said, we'll add support for in the next one or two phases. Sorry, there is no list currently, but if you could just leave your contact information, we'll get back to you.
Maybe you can mail me — okay, yeah, thanks, we'll get back to you, Saad. Thanks a lot. Shiva, we can continue with the demo. Yeah, for the next part of the demo let me switch to the next tab. I'll be adding the plugin and a BMC, and we'll watch the corresponding events arrive on the client side. As part of this demo you can see that ODIM currently supports two types of authentication: X-Auth-Token and basic auth. A user can choose to create a session token and use that token for subsequent operations, or use basic auth, providing the username and password in the header when requesting a specific operation from ODIM. The tab I'm showing is a basic REST client; any programmatic client or a curl command would work just as well. The first step I'll showcase is creating a session. ODIM has a standard set of URLs, documented under each of the services and actions, and we can follow those to perform any action. The API reference also describes which parameters to give, in what format, which are mandatory and which are optional. If we follow that guide, we can pick up all the URLs ODIM exposes and perform all the actions. As the first step, let me create a session. As we saw in the readme, the default credentials ODIM ships with are admin and a specific password. ODIM also has password requirements that have to be met when we create a user. When we create a user we also specify the user's role, and each role has a certain set of privileges that determine whether the user can only view certain information or also perform certain actions, and so on.
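The two authentication modes just described can be sketched roughly as follows. The session endpoint and the X-Auth-Token header are standard Redfish SessionService conventions; the host, port, and credentials here are placeholders for your own deployment, not values from the demo.

```python
import json

# Hedged sketch of the two ODIM auth modes described above. The endpoint and
# header names follow the standard Redfish SessionService; host, port, and
# credentials are placeholders for a real deployment.
host = "https://odim.local.com:45000"  # assumed ODIM RA API endpoint

# Mode 1: create a session, then reuse the token from the response header.
session_request = {
    "method": "POST",
    "url": f"{host}/redfish/v1/SessionService/Sessions",
    "body": json.dumps({"UserName": "admin", "Password": "<your-password>"}),
}
token_headers = {"X-Auth-Token": "<token-from-response-header>"}

# Mode 2: basic auth — username/password sent in the header of every request.
basic_headers = {"Authorization": "Basic <base64(user:password)>"}

print(session_request["url"])
```

In real use you would send `session_request` with an HTTP client and read the X-Auth-Token from the response headers; basic auth skips the session step at the cost of sending credentials on every call.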
When we create a user, there are three default roles provided by Redfish, and the same are added into ODIM by default. We can use any one of the three roles — Administrator, Operator, or ReadOnly — without needing to define a set of operations for that user, or we can create custom roles with a specific set of privileges if we wish. For now I'll use the default user, which has the admin role with all privileges. Let me go ahead and post the request for the session. When we create a session, the response says the session was created, and the X-Auth-Token is present in the response header; the user takes that token from the header and uses it for further operations. First, before adding anything, I'll show you that currently we don't have any servers added into our system: we use that same X-Auth-Token in the header and do a GET on Systems — you can see that the Members list is currently empty. Before we add the plugin or a server, we want to monitor a certain set of events. If a user has a client that monitors all the events and performs operations based on certain events, we can provide that client as the destination when we create a subscription. I have a small client running which simply takes the events that come to it and dumps them to the console. So the first step is the event subscription. As part of the event subscription, we're using basic auth for authorization, and in the request body there's a Destination parameter: the user gives the client address — the final destination to which all the events will be sent from ODIM, which in turn receives them from the BMC. The next is EventTypes.
If EventTypes is empty, we subscribe to all the event types that ODIM supports; if a user wishes to subscribe to a specific set of events, they can do that too. We also specify OriginResources as Systems and Managers: any resource change that happens under the Systems collection or Managers collection will trigger an event sent to the destination provided here, and SubordinateResources set to true means that any change to, or event from, a resource subordinate to the Systems or Managers collections will be sent to the same destination. Let me go ahead and create the subscription. As you saw in the architecture diagram, for certain operations we respond with tasks: if an operation takes more than a standard amount of time, we respond with a task URI, where the user can monitor the task and verify its status. When we do a GET on the task returned in the response, we can see the body that was given for the operation, all the information about it, and its status — currently you can see it's Completed — and we can verify the response of the operation via the task monitor. If we click on the task monitor link and do a GET, we get the response. Okay, it looks like there's already another request — let me verify that. I may have clicked twice when I was creating the subscription. If I do a GET on this, we can see it's the same subscription-creation request, with the same destination and origin resources.
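Putting together the subscription fields walked through above, a partial sketch of the request body might look like this. Property names follow the Redfish EventDestination schema; the destination URL is a placeholder for wherever your event-listener client runs, and a real ODIM request may require additional fields not shown here.

```python
import json

# Partial, hedged sketch of the event-subscription body described above.
# Property names follow the Redfish EventDestination schema; the destination
# is a placeholder, and a real request may need additional fields.
subscription = {
    "Name": "Demo subscription",
    "Destination": "https://<client-host>:<port>/listener",  # your event client
    "EventTypes": [],            # empty list = subscribe to all supported types
    "OriginResources": [
        {"@odata.id": "/redfish/v1/Systems"},
        {"@odata.id": "/redfish/v1/Managers"},
    ],
    "SubordinateResources": True,  # also match events from child resources
}
body = json.dumps(subscription)
print(subscription["SubordinateResources"])  # True
```

POSTing this body to the subscriptions endpoint returns a task URI, as the demo shows; repeating the same POST yields the "resource already in use" conflict seen next.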
As we saw, there was a conflict: since we clicked twice — I think there was a delay when I was clicking — the second attempt says the resource is already in use, meaning we had already created the same subscription with the same destination and the same other information. Let's get to adding a plugin. We add a plugin using aggregation sources: if we do a GET on AggregationSources, we see there are currently no members under it. So let's start by adding a plugin as an aggregation source. As you can see, for all these operations I'm using basic auth instead of creating a token — if the token expires I'd have to create it again. As part of adding a plugin, we provide the host and port of the plugin and the username and password for it. I'm adding the Generic Redfish Plugin that we deployed as part of the deployment process — as I mentioned earlier, even though its service is deployed, we have to add it into ODIM to use it. We mention the plugin type as Compute, a plugin ID under which ODIM stores the plugin information, and the type of authentication ODIM has to use to contact the plugin, which is either basic auth or token-based authentication on the plugin side. When we perform the POST operation, similar to the event subscription, we get a task; if we retrieve the task, we get its status and the associated information. And when we do a GET on the task monitor link, we see the final status of the task, which is 201 Created: the resource is added with plugin ID GRF and the details we supplied. The location of that particular resource is part of the response header; if we take that and do a GET on it, we can retrieve the plugin resource.
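For reference, a hedged sketch of the add-plugin request body described above. The exact field layout varies across ODIM releases, so treat this as illustrative only; the host, port, and credentials are placeholders for the deployed Generic Redfish Plugin.

```python
import json

# Hedged sketch of the add-plugin request body described above — the exact
# field layout varies across ODIM releases, so treat this as illustrative.
# Host/port and credentials are placeholders for the deployed GRF plugin.
add_plugin = {
    "HostName": "<plugin-host>:<plugin-port>",  # where the GRF plugin listens
    "UserName": "admin",
    "Password": "<plugin-password>",
    "Links": {"Oem": {
        "PluginID": "GRF",                 # ID under which ODIM stores the plugin
        "PreferredAuthType": "BasicAuth",  # or token-based auth
        "PluginType": "Compute",
    }},
}
# POSTed to /redfish/v1/AggregationService/AggregationSources; ODIM responds
# with a task URI, and the task monitor eventually reports 201 Created.
body = json.dumps(add_plugin)
print(add_plugin["Links"]["Oem"]["PluginID"])  # GRF
```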
Now that we've added the plugin, let's do a GET on Managers — the token has expired, so we get a 401 Unauthorized; we create a session token again (the token expiry time can also be configured in the config file) and perform the GET. We see there are two managers: when we deploy ODIM, by default we get one manager, which is the ODIM manager itself, and the other is the plugin acting as a manager. This is the plugin manager ID, and we can see that its status is Enabled. As part of adding the plugin we also verify whether it's running, and the status is updated when you do a GET on this particular manager link; if at any point the plugin is down, the status will show Disabled. Similarly, if we do a GET on the other link, we have the ODIM manager with the details we provided in the config file. Since we added the plugin, and our subscription has Managers as an origin resource, on the client side we receive a ResourceAdded event with the URL of the new manager — the plugin. The user can then choose to perform operations based on this event — say, kick off a certain set of actions once the plugin is added; that monitoring happens on the client side. Since the plugin doesn't have any systems as such, only a manager, we got only a Managers resource-added event. Now let's move on to adding a BMC. Similar to adding the plugin, we provide the resource information and the plugin ID through which we want to add the BMC; I'm again using basic auth here. When we perform the POST, we again get a task ID and do a GET on the task. This will take a couple of minutes: as part of adding a BMC, ODIM first validates the credentials via the plugin, and then, also via the plugin, fetches all the subordinate resources under that BMC.
It fetches all the systems, managers, and chassis, and every link present under each of those parent URLs. We traverse all the links returned by the BMC, fetch all the information, and store it in our inventory until we reach the leaf nodes. Say, for example, we do a GET on Systems: we get multiple systems, and for each system a number of processors, each processor's information, each drive's information, each network adapter's information, and so on. We also make sure there are no repeated links, so we don't keep doing the same GET on the same URLs. As part of saving this information we also create certain indexes in Redis for search and filter operations. When I did the first GET, the task was at 15 percent; now you can see that the task is completed — status OK and 100 percent. To get the final response of the task, you do a GET on the task monitor: it says the resource was created with the information the user provided, and the location of the resource is given in the header. If we do a GET on that resource, we get the resource information — the host name, the username used, and the plugin ID it runs through. If we go to the client we were running, we can see two more events have come in, both ResourceAdded: one for the system under the BMC and one for the manager of the BMC. And if we go back to the Systems collection under ODIM, we see the added BMC's system information is present; if we GET this particular system, we get all the resource information under that BMC, and we can traverse each of these links to find each individual resource's information.
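The add-BMC request described above can be sketched similarly to the add-plugin one. Again this is illustrative — the field layout varies across ODIM releases — and the host and credentials are placeholders for a real BMC; GRF is the plugin ID under which we added the Generic Redfish Plugin.

```python
import json

# Hedged sketch of the add-BMC request body described above — illustrative
# only, since the field layout varies across ODIM releases. Host and
# credentials are placeholders; GRF is the plugin ID we added earlier.
add_bmc = {
    "HostName": "<bmc-ip-or-fqdn>",
    "UserName": "<bmc-username>",
    "Password": "<bmc-password>",
    "Links": {"Oem": {"PluginID": "GRF"}},  # discovery is routed via this plugin
}
# POSTed to /redfish/v1/AggregationService/AggregationSources; ODIM validates
# the credentials via the plugin, walks every link the BMC exposes down to the
# leaf resources, and stores the result (plus search indexes) in Redis.
body = json.dumps(add_bmc)
print(add_bmc["Links"]["Oem"]["PluginID"])  # GRF
```

Removal, shown next in the demo, is the inverse: a DELETE on the aggregation-source URI returned here, which also clears the discovered inventory from the database.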
The next step is removing this resource. Let me do a GET on aggregation sources: we have two aggregation sources, and if we GET each of them — yes, this is the BMC we added. If we perform a DELETE on this aggregation source, we delete the BMC from ODIM and in turn delete the DB entries we have. Similar to the other operations, we do a GET on the task service to find the status of the task — it's completed — and a GET on the task monitor to see the result. Once we've deleted the resource, it's no longer present with us: if we do a GET on Systems once again — sorry, let me open it again — we have empty Members, and on the event client side we get a ResourceRemoved event with the URL of the system we removed. So that's the demo for a couple of these use cases with ODIM. Thanks a lot, Shiva. I'd request the audience to ask any questions they have. There's one more query — something about actions like provisioning.
So, is this sort of a provisioning tool as such? If the BMC and the server are already up, we just help you manage them. If it's a bare-metal system, then a client layer — some provisioning or workflow engine, which could also be driven by Ansible — can kick off provisioning. ODIM sits a layer below that; it's bare metal as a service only. So we don't do OS provisioning ourselves, but we can help you do it: for example, changing the boot order so that the server boots one time from OS media at a remote location, the install goes through, and then it restarts with the old boot order pointing to the local disk. So we don't do provisioning, but we do provide the hardware lifecycle, event management, and so on; we help you select the server.

What can happen is that from the top layer, the composition layer, you search for the server configuration you need for your workload. Say you're introducing a new service and you need dual-CPU or four-CPU systems with so much RAM and so much storage: all this infrastructure would already have been added, and that's why we have the Redis database with all the infrastructure details, so you can query us and then kick off your workflow, where you install the OS and claim that system for your use case. Do you get the point? Any other questions? Very few questions, so I think things are all very clear; let's give it some more time and see if any questions come up.

Bharath, there's a question in the chat — not the Q&A tab but the normal chat. From Prakash, about StarlingX? I am not aware of that; are you aware of it, Shiva? I'm sorry, but I've never heard of StarlingX. So, Prakash, I had a quick look at it, and StarlingX could be a client to ODIM. One thing that maybe was not very
clear in the presentation is that ODIM, since it's based purely on Redfish, can support hardware servers, network fabric, and storage across vendors. So to compose a node you can use ODIM, and maybe your StarlingX can be northbound to ODIM, select a server, and do all the things it does — it has support for Ceph and Keystone and so on as enablers, and it gives you fault management, configuration management, and all that — so it can be a client to ODIM. ODIM sits at the bare-metal layer, as we said at the beginning of the presentation, but the exact interworking I'm not sure of, because I have not explored StarlingX.

Our website is wiki.odim.io, and we are with the Linux Foundation. As for Ironic, MAAS, and similar projects: LF Edge is maybe a potential partner, but right now we don't work with them. You could say we are a bare-metal-as-a-service implementation, but those other projects can work on top of us: we interface with the hardware and provide event management, selection, and the composition service, while Ironic and the rest do the detailed OS provisioning. We don't go up to the VIM layer or the software; we work only at the bare-metal layer, across hardware. So you can have a data center composed of a mix of HPE, Dell, and other vendors, and with a plugin for each type you can work seamlessly, and we provide only Redfish APIs to the northbound. So Ironic, MAAS, and other adapters don't have to care which vendor or sub-vendor it is; they just treat it as a Redfish device and interact with that. Thanks a lot. I think that site was in the presentation, but I'll type it in; here is the link for the
project. We are looking at partnerships with various organizations — currently CNTT and Open Compute and so on — but I wouldn't say we have a relationship with them right now.

On server configuration: Redfish is REST-based, so if you want to interact with the server you have to send a Redfish request, and we accept JSON payloads. If you're thinking of YAML, I think you would need some kind of converter plugin that translates from YAML to Redfish. Any other questions from anyone? Again, for server configuration we only provide Redfish, so YAML to JSON, as I said, has to be done through some adapter; we only provide Redfish APIs with JSON payloads. And that's the point: it's one payload that works for both HPE and Dell, which is what makes it standards-based and interoperable. But we don't have YAML support, only Redfish API support: you form a Redfish request and send it, and we will respond. If you can convert from your YAML request to that JSON payload, then we'll respond, and it makes no difference to us whether it's HPE or Dell. Is that all, Prakash? Or, Prakash, you can share your email with us and we can get back to you if you have more questions.

Okay, there's one more question from Prakash: do you support the Kubernetes Cluster API? These days everybody asks. We are adding support for Kubernetes, but I don't know what you mean by the Kubernetes Cluster API specifically. We are currently implementing ODIM itself on Kubernetes, so that you can have a multi-node cluster running ODIM, and the API service we discussed will be exposed to the external world. The rest of the services — the microservices — don't provide an interface to the external world even
today, when they run on Docker. So you route your request through the API service, which forwards it to the various microservices that do the work and return the response.

As far as roles are concerned, we support standard Redfish profiles, accounts, and roles, and there is a provision to add OEM roles as well: the framework is there, but the default install will not have any. So whoever customizes can add their own roles and accounts, with roles and privileges, and thereby control what a user can or cannot do. Do you have any questions on this?

A project in CNCF, Metal3 — okay, is that the project you're associated with? So that's where you come from, the bare-metal side. From what I understand, having a quick look, Metal3 is about provisioning Kubernetes on bare metal, right? But as I said for the previous question, ODIM sits below that: ODIM lets you select those bare-metal servers and manage them from the hardware perspective, hardware eventing and all that. When you do a composition — again part of Redfish — you select the server and get a reference back to it, and on top of that you could use Metal3, which seems to provision Kubernetes on bare metal, similar to what I said for Ironic and those other projects. So Ironic or even Metal3 could act as a client, in the sense that the hardware lifecycle is done by ODIM, while the software lifecycle — whether that's a Kubernetes deployment on bare metal, or a VIM setting up VMware or other virtualization software and building VMs on top — is the client layer for us. We are only helping you with bare-metal composition across vendors. So it looks to me like Metal3 can sit on top of ODIM,
select which server it needs for the workload, and then deploy Kubernetes itself, as a bare-metal implementation. Is that clear, Prakash? Sorry, I can't pronounce your name. We began around 70 minutes back, 6:30 Indian time, so we've been at this for about one hour and ten minutes. Any other questions? It's getting quiet again. Yeah, team, just polling again for any questions you might have.

Joseph, can you type your question in? Joseph Bravo? Hello. Hi, hello. Great, great. I was looking for the mute button and it was not available. So how did the call go? I am Joseph from AMI; we have been working with you on ODIM. Yes, I know that. So what's the percentage — is it done? Yeah, we just finished about 15 minutes back, so we're seeing if people have questions. Okay, that is good. So Joseph, as he said, is from AMI, and we've been working with them on the composition part, where they act as a northbound client to us and take our services for composing nodes for different workloads. Thank you. And will this session be available as a recording, so I can see what was presented? Yes, I think all registered attendees have access to the recordings. All right, okay, thank you, thanks a lot; you can mute me again, I'll go back.

So, team, I think I've been calling for questions, so do you have any questions for us? Otherwise I think we can end this event. Anyone who joined late, please make use of the recording that will be available after the event. Thanks a lot, and I think, moderator, we can end the call.