Good afternoon. Thank you for attending my presentation this afternoon, toward the very end of the summit; I think this may be the last session in this room. So thank you. I'll be presenting on Catching Consistent Redfish in the Deep Blue Data Center. My name's Richard Pioso, and I'm with Dell Technologies. A little bit about me: I was formerly a contributor to the Ironic Bare Metal project, where I added support for our Dell Technologies PowerEdge servers. I worked specifically on the iDRAC driver and also made contributions to the redfish driver. We're going to cover several topics. First, I'll give a brief overview of Redfish and identify a particularly thorny problem, which I refer to as the consistency challenge. Then we'll look at a solution to that challenge, Redfish interoperability profiles. They enable you to express your needs, in other words, your opinion as to what's needed. We'll then look at a tool that can process one of those profiles and exercise it against a Redfish service implementation to determine whether that service meets your needs. To dive even deeper, we'll examine some other conformance tools that the Redfish Forum makes available. Finally, I'll describe the DMTF interoperability lab, which is a community testing facility. Before we continue, I'd just like to ask: how many of you are familiar with Redfish? Just raise your hand. That's great, the vast majority. How many are actually using Redfish in production? Awesome. That's great. Thank you very much. So let's move on to the overview. Most of you will know this already: Redfish is an industry standard for managing converged, hybrid IT and the software-defined data center. It's REST-based with JSON payloads. The model is data-driven, and it's schema-backed yet human-readable. The Redfish data model describes a broad range of technologies and features.
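To make "REST-based with JSON payloads" concrete, a GET on a system resource returns a JSON document along these lines. This is a sketch modeled on DMTF's public Redfish mockups; the identifiers and values are illustrative, not taken from any particular implementation:

```json
{
  "@odata.id": "/redfish/v1/Systems/437XR1138R2",
  "@odata.type": "#ComputerSystem.v1_13_0.ComputerSystem",
  "Id": "437XR1138R2",
  "Name": "WebFrontEnd483",
  "SystemType": "Physical",
  "AssetTag": "Chicago-45Z-2381",
  "PowerState": "On",
  "Boot": {
    "BootSourceOverrideEnabled": "Once",
    "BootSourceOverrideTarget": "Pxe",
    "BootSourceOverrideMode": "UEFI"
  },
  "Status": { "State": "Enabled", "Health": "OK" }
}
```

Each resource like this complies with a versioned schema (here, ComputerSystem), which is what makes schema-based comparison by tooling possible.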
That means that most of the model, the vast majority, is not required, which can be confusing and challenging for those who use Redfish. Why is so little of it required? What kind of standard is this? How do you make it interoperate? What I learned while using Redfish, lending support, and actually contributing to the implementation of a Redfish interoperability profile for Ironic is that the authors of the standard wanted it to be broad so that it could support many needs, and they've left it up to the organizations supporting their devices to define what's required. So basically, Redfish implementations are expected to support only what's most applicable to their product. Due to the wide range of options available, users and organizations require methods to convey baseline expectations for what is implemented in the Redfish services they consume. The solution to that challenge is Redfish interoperability profiles. Redfish has independent versions for each of its roughly 100 schemas. There's a common question that's asked: what version of Redfish do you support? Well, that question isn't very meaningful. There is a protocol version, which does have some meaning, but that's typically supported; it's the schema versions that matter. So Redfish interoperability profiles offer a standard format for specifying Redfish support needs. The Open Compute Project, OCP, defines several profiles. There's a base profile, which applies to all OCP-conformant products, and a server profile, which applies to OCP server-conformant products. Others are in development for other classes of equipment, such as rack-mounted power distribution units. I've provided a link to where you can find those profiles. As I mentioned earlier, I contributed to an Ironic Redfish interoperability profile.
And it defines a profile to express the requirements for a system to integrate with Ironic's redfish device driver, and there's a link to the Git repository that hosts that profile. So let's dive down and take a look at a Redfish profile document. Redfish profiles are also JSON documents. They're based on resource and property requirements; a resource here corresponds to one of those schemas, or at least is expressed as conforming to a schema. The format allows for easy comparison to a retrieved Redfish payload, which is typically a resource that complies with its schema. You can build profiles on the shoulders of existing profiles; as I mentioned, there's a base profile from OCP, and its other profiles are built on top of that. The profile document allows specification of the Redfish protocol, features, actions, and registries. A Redfish interoperability profile is a versioned document. Once a version of a profile is published, it should not change; if new requirements or fixes are needed, you create a new version. Versions are in the form, which I think most of you are already familiar with from other software, major.minor.errata, for example, 1.4.2. So here's a picture of a Redfish profile document structure. At the top, we have the profile information, which expresses protocol requirements. In the middle are a bunch of resource requirements, one per resource that the user or organization needs. And at the bottom is a set of registry requirements. Again, each section is a JSON object. Resource, schema, and registry objects follow the names of the defining schema, for example, EthernetInterface, or the perhaps more familiar ComputerSystem. The property-level requirements are nested within the resource requirements and are named to follow the defined property name, for example, AssetTag and SpeedMbps. So here's an example of profile info and protocol requirements. You'll see at the very top there's a schema definition.
So there's actually a Redfish interoperability profile schema itself, and that first line says that this particular profile conforms with that schema. Then there's a name for this particular interoperability profile, which is the OpenStack Ironic profile, and then a version number, purpose, et cetera. Toward the bottom, the author has specified the protocol requirements: the minimum version and also query parameter support. You can also express discovery support. And finally, at the bottom, you can build a new profile on the shoulders of existing profiles, as I said. It's sort of a glorified include, as in #include in C and C++ or import in Python. So this fleshes out the resource requirements section that was presented on the previous slide. The example here is for EthernetInterface, which, recall, matches the name of one of the Redfish schemas. So it's organized, again, by schema name: EthernetInterface matches the EthernetInterface schema. And a profile can include a resource requirement for any schema of interest. The resource-level read requirement sets the fundamental need for the resource, and then there are property-level requirements embedded within it. You can specify a minimum schema version, required URI patterns, and action requirements. So here we see that there's a read requirement on EthernetInterface, and there are particular properties that are required. And then here we drill down into the property requirements for individual properties. In Ironic, there's a need for BootSourceOverrideEnabled, and specifically a minimum set of supported values: Disabled, Once, and Continuous. It's critical in Ironic. And then there's also the related BootSourceOverrideMode, which specifies the boot mode, UEFI, for example.
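Putting the pieces described so far together, a minimal profile might look like this. It's a sketch, not Ironic's actual profile: the field names follow the published Redfish interoperability profile schema, but the profile name, versions, and the referenced base profile are illustrative assumptions:

```json
{
  "SchemaDefinition": "RedfishInteroperabilityProfile.v1_0_0",
  "ProfileName": "ExampleIronicStyleProfile",
  "ProfileVersion": "1.0.0",
  "Purpose": "Illustrate profile info, protocol, and resource requirements",
  "Protocol": {
    "MinVersion": "1.0"
  },
  "RequiredProfiles": {
    "OCPBaselineHardwareManagement": { "MinVersion": "1.0.0" }
  },
  "Resources": {
    "EthernetInterface": {
      "ReadRequirement": "Recommended",
      "PropertyRequirements": {
        "MACAddress": {},
        "SpeedMbps": { "ReadRequirement": "Recommended" }
      }
    }
  }
}
```

Note how the "glorified include" appears as RequiredProfiles, and how each resource and property requirement is a JSON object keyed by its schema or property name.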
And then there's BootSourceOverrideTarget, where you specify the device that you want to boot from. Ironic uses these properties to specify the boot source in a hardware-independent manner. Boot sources can be very complicated in that they're implemented in different ways across commercial off-the-shelf hardware and BIOS. This is an abstraction where you can say, just boot once from this particular source, or, at the end of the deployment, boot forevermore from this source. So it's a very important set of properties from Ironic's perspective. Again, as you can see, the property requirements are expressed in a JSON object that follows the property name. Properties not listed, and there are a lot of them, have no requirements at all, and an empty object defaults to mandatory: if you just have an open brace and a close brace, it means the property is mandatory. A write requirement is used to specify PATCH support. Other terms can be used to express expected values returned on GET, allowed values to PATCH, and conditions for usage based on comparisons with other properties. So here's an example of an action requirement, in this case, resetting a computer system. You'll see the same pattern as the other ones we've looked at: it's organized by the action name within each resource, so it aligns with the actual schema and with the actual resource itself. This allows for specifying parameter requirements and supported parameter values. And finally, in terms of a profile, we have the message registry requirements. These are, again, organized by registry name. Multiple registries can be specified, and you also have the ability to include OEM registries. Messages can be listed with individual requirements as needed, and an empty object, as before, indicates that a message is mandatory.
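The boot-override properties, the reset action, and a message registry requirement from these last few slides could be expressed along the following lines. Again, this is a hedged sketch: the nesting and terms such as MinSupportValues follow my reading of the interoperability profile specification and should be verified against it, and the specific values are illustrative:

```json
{
  "Resources": {
    "ComputerSystem": {
      "PropertyRequirements": {
        "Boot": {
          "PropertyRequirements": {
            "BootSourceOverrideEnabled": {
              "ReadRequirement": "Mandatory",
              "WriteRequirement": "Mandatory",
              "MinSupportValues": ["Disabled", "Once", "Continuous"]
            },
            "BootSourceOverrideMode": {},
            "BootSourceOverrideTarget": {}
          }
        }
      },
      "ActionRequirements": {
        "Reset": {
          "ReadRequirement": "Mandatory",
          "Parameters": {
            "ResetType": {
              "MinSupportValues": ["On", "ForceOff"]
            }
          }
        }
      }
    }
  },
  "Registries": {
    "Base": {
      "MinVersion": "1.0.0",
      "Messages": {
        "Success": {},
        "GeneralError": { "ReadRequirement": "Mandatory" }
      }
    }
  }
}
```

The empty objects (BootSourceOverrideMode, Success) show the default-to-mandatory convention described above.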
So now that we've seen how you build up a profile, let's talk about the tool that operates on the profile and uses it to exercise the device under test, a Redfish service. That's the Redfish Interop Validator. As I mentioned earlier, it's a tool to determine whether a Redfish service meets your needs, whether it's aligned with your opinion as to what the Redfish service should provide. It's a Python 3 open-source tool that checks a Redfish service against a profile, and it's publicly available on GitHub. Installing it is very easy: just clone it, change directory into the tool, and do a pip3 install. Running it is also very simple: you just enter the command there and provide the values needed; those are highlighted in red. Then you inspect the results. By default, HTML results are produced in the logs subdirectory. Here's what one of the test reports can look like. The top shows the tool information: where it comes from, the version, the date, and how long the run took. Toward the bottom, it gives information about the system under test: the root service version and the URI of the system under test. Finally, the profile tested is identified in the bottom section; in this case, it was the OpenStack Ironic profile. Additional information is provided by that test report. You can get per-URI results; in this case, it was the processors. There's an ability to toggle the payload display, and the test results per property are shown in the middle. We see the ProcessorArchitecture read requirement failed in this case, the Status read requirement passed, et cetera. And then finally, other errors and warnings are listed at the very bottom.
So now that you've expressed your opinion and determined whether the system under test is aligned with it, you can go deeper and conformance-test the implementation and the service itself. The DMTF Redfish Forum provides additional tools for this; they're also Python 3 open-source tools. There are two of them that I'll cover today, very briefly. One is the Redfish Service Validator, which validates that all resources in a service match the standard schema definitions. It queries the service, retrieves each resource, and compares it against the actual schema. As before, it's very easy to install and run, and the slide shows where the HTML report is produced. I'd say that's the middle level; we're working our way from top to bottom in terms of testing. The top is your opinion, or your organization's opinion. The next level down is to see whether the resources a service provides conform to the standard. And the level below that is the actual protocol level, which is foundational, the base on which all the rest is built. That's the Redfish Protocol Validator. With it, you can make sure the service conforms to the protocol requirements, such as HTTP header usage. So it's much, much lower down the stack. Now let me tell you, if you're not already aware of it, about the DMTF Redfish Interoperability Lab that the Redfish Forum makes available to its community. As I said, it's a community testing lab, hosted by the SNIA Innovation Lab. Members of the DMTF Redfish Forum can submit Redfish-enabled equipment to be tested at the lab. In addition, the DMTF Redfish Forum conducts plugfests every two or three months at the lab. Those plugfests allow the Redfish Forum to see how different implementations interpret the specification.
Of course, the spec is written by humans and then interpreted by humans, so you have to determine whether the implementers are actually interpreting it the way the authors intended, and if not, those differences have to be reconciled. That's one of the goals of the plugfests. They range from running conformance tools to stepping through typical client workflows, and the results, as I said, are fed back into the standard and the tools to address interoperability concerns. I should point out that the Redfish Forum does not certify, qualify, or give badges to implementations, because the whole point is to support a large array of equipment, and the forum is not in the business of defining the requirements for those different types of equipment. That's what Redfish interoperability profiles offer. With three slides left, I've collected a bunch of references that you can use to learn more about all of this. They're listed here. There's a large collection of open-source DMTF GitHub projects, and they're available there. Then there are three different categories of tools. One is the Redfish client tools, another is the conformance tools, of which we've covered three today, and then there are development tools. The client tools include libraries for Python and C, as well as what they call a Tacklebox, which is a set of client utilities. The conformance tools are, working from the bottom up in this case, the Protocol Validator, the Service Validator, and the Interop Validator. There are also use case checkers, which test typical workflows. And finally, for those implementing services, there are a couple of development tools: one is a mockup server, for testing a client, and then there's a mockup creator.
And that's all I have in terms of the formal presentation. I'd be happy to entertain any questions you may have. Any questions? All right, well, thank you so much. I appreciate your patience and attendance at the very end of the summit. Thank you.