Hey guys, so my name is Tzu-Mainn Chen, I'm a software engineer at Red Hat, and today I'm going to talk briefly about multi-tenancy in Ironic. So quickly, what is multi-tenant Ironic? Well, Ironic was originally designed for admin-only use. Multi-tenant Ironic simply means we allow non-admins to have selective API access to particular nodes. What's the motivation? Well, I work with an initiative called Elastic Secure Infrastructure, or ESI, which is an initiative started by the Mass Open Cloud, or MOC. It works on several projects, but one question I was trying to answer was: if you're an organization with a bare metal cloud, what do you do during periods of inactivity within your bare metal cloud? Well, one answer that was proposed was that multiple organizations collaborate together and donate hardware into a single bare metal cloud, which then has hardware belonging to multiple owners. In that cloud, owners have exclusive use of their nodes, but owners can also choose to allow lessees to gain temporary access to one of their nodes. So you can imagine if you're a research institution with a bare metal cloud and you share your hardware with other research institutions, you still have exclusive use of your hardware whenever you need it. But during periods of downtime, you can donate use of your hardware to other research institutions, and during moments when maybe you need a little bit more, you might be able to find that hardware from other organizations. So when ESI decided to go ahead with this project, we talked to the upstream Ironic dev team about how we might implement this in Ironic. And they were very helpful in helping us come up with a series of implementation steps and writing up a spec. So the implementation is actually pretty straightforward. The first thing we did is we updated nodes to have owner and lessee fields; actually, Ironic nodes already had the owner field, but it was purely informational.
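As a rough sketch of what setting those fields looks like (assuming a recent baremetal CLI; the project IDs are placeholders), an admin can assign an owner and a temporary lessee to a node like this:

```shell
# Set the owner of a node to a project.
openstack baremetal node set --owner <owner-project-id> node-0

# Set a temporary lessee on the same node.
openstack baremetal node set --lessee <lessee-project-id> node-0
```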
Next, we exposed the owner and lessee fields to policy. In OpenStack, policy is just a set of rules that determines whether a user's API request is going to be accepted or rejected. So by exposing the owner and lessee fields to policy, we could add two new rules, is_node_owner and is_node_lessee: is the person making this API request the owner of this node, or the lessee of this node? Policy rules can be chained together. So with those two rules in place, we can now do things like update the bare metal node update policy to allow the API request to go through if the requesting user is an admin or is the owner of the node. Similarly, for bare metal node set power state, we can say that this API request is okay if the user is an admin, or the owner of the node, or the lessee of the node. By default in Ironic, the policy rules still shut off API access for non-admins. So in order to enable this, you have to modify your policy file, but that's a very, very straightforward process. And really, that's most of the implementation; that's really the bulk of it. There are a few additional details we worked through. One was, we discovered that if you want to provision a node using standalone Ironic as a lessee, you need to be able to update the node's extra and instance_info fields. One way of doing this would be to grant non-admin users the ability to update the node's fields, but you probably don't want to grant a lessee the ability to update any arbitrary node field. So we added two additional update policy rules specifically for the extra and instance_info fields. Another thing we did is we exposed the node owner and lessee to associated bare metal port operations. So now non-admins can view the bare metal ports associated with their nodes, and depending on the policy, they can also manage them: create them or do other things with them. And then we also added node allocation owners.
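As a sketch of what the policy file changes look like (rule and policy names here follow the multi-tenant Ironic spec; check the defaults for your Ironic release before copying), the overrides might be something like:

```yaml
# Custom rules matching the requesting project against node fields.
"is_node_owner": "project_id:%(node.owner)s"
"is_node_lessee": "project_id:%(node.lessee)s"

# Allow node owners to update their own nodes.
"baremetal:node:update": "rule:is_admin or rule:is_node_owner"

# Allow owners and lessees to change power state.
"baremetal:node:set_power_state": "rule:is_admin or rule:is_node_owner or rule:is_node_lessee"
```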
So what this means is, if you're a non-admin and you create an allocation, then you are now the owner of that allocation. And the allocation conductor, when it matches you with an Ironic node, will only check Ironic nodes that you either own or lease. So with these changes in place, we tested this out with Metalsmith, the client-side Python library for provisioning Ironic nodes. We used a specific policy file, which you can see linked here. And we discovered that it just works for node owners and lessees, with no other changes needed for Metalsmith, which was kind of a validation of our approach. So I'm just going to take a quick step back and see how this work fits into the ESI hardware leasing system design. The proposed hardware leasing system principally consists of three services. One is a leasing service, which is kind of a new thing for us, which I'll talk about a little bit in a bit. There's a bare metal service, Ironic, and a networking service, Neutron. And there's two main workflows going on here. One is a leasing workflow, where owners can offer up their nodes into the leasing service, and then lessees can go into the leasing service, see what nodes are available, and lease nodes for a period of time. Once the lease begins, the leasing service simply tells Ironic: can you set the node's lessee to the person leasing this node? And when the lease is over, Ironic will unset the lessee and clean the node. The other workflow is the provisioning workflow, where owners and lessees can take the nodes that they own or lease and perform various provisioning actions. They can provision them using Metalsmith. If they have an external provisioning service, they can connect their nodes to that external provisioning service using Neutron. They can power cycle the node, they can perform other actions. One thing I want to emphasize here is that the leasing workflow and the provisioning workflow are very, very distinct from each other.
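The allocation-matching behavior can be sketched in plain Python. This is a simplified model I'm writing for illustration, not Ironic's actual conductor code: the idea is that a non-admin's allocation only matches nodes whose owner or lessee is the requesting project.

```python
def candidate_nodes(nodes, allocation_owner):
    """Return only the nodes the allocation owner may claim.

    Simplified model of the multi-tenant allocation check:
    a non-admin's allocation matches only nodes they own or lease.
    """
    return [
        node for node in nodes
        if node.get("owner") == allocation_owner
        or node.get("lessee") == allocation_owner
    ]

nodes = [
    {"uuid": "n1", "owner": "proj-a", "lessee": None},
    {"uuid": "n2", "owner": "proj-b", "lessee": "proj-a"},
    {"uuid": "n3", "owner": "proj-b", "lessee": None},
]

matches = candidate_nodes(nodes, "proj-a")
# proj-a owns n1 and leases n2, so n3 is excluded.
```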
So if you're interested in the multi-tenant Ironic work for provisioning, but you don't need a complicated leasing workflow, say you either want owners to designate lessees themselves, or you don't have any need for lessees at all, that's totally fine. You won't need any of that, and you can still take advantage of multi-tenant Ironic. Just some quick notes about some of the additional development we did that isn't directly Ironic-related, for this hardware leasing service. One is we added support for the Cisco Nexus switch to the networking-ansible ML2 driver, because the Cisco Nexus switch is what the MOC uses. We also found it useful to add some simplified user commands. What that really means is that we extended the OpenStack CLI to combine multiple OpenStack commands together, and discovered it's a really simple process to do so. So at the MOC, we created this really simple leasing service where, as I said, owners can offer up unused nodes for specified time periods. Lessees can claim unused nodes, and the leasing service will take those leases and tell Ironic to deal with them. So what's next? First of all, we're currently deploying this hardware leasing system on a trial basis within the MOC. We're going to get some trial users to test it out, and the expectation is that their feedback will drive further development. Another thing we're working on is node attestation with Keylime. Keylime can talk to a piece of hardware's TPM module and attest the node, make sure it hasn't been tampered with, which is something that I think a lessee will be very interested in knowing. We'd also like support for non-admins to be able to boot a node from a volume, which is something we need in our case because of our requirements. In order to do so, we just need to expose the node owner and lessee to associated bare metal volume target and connector operations, so that if you own or lease a node you can create volume targets and connectors for it.
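Assuming the standard baremetal CLI, the volume operations a lessee would need look roughly like the following (all identifiers here are placeholders):

```shell
# Register the node's initiator so a volume can be attached to it.
openstack baremetal volume connector create \
    --node <node-uuid> --type iqn \
    --connector-id iqn.2020-07.org.example:node-0

# Point the node at a block storage volume to boot from.
openstack baremetal volume target create \
    --node <node-uuid> --type iscsi \
    --boot-index 0 --volume-id <volume-uuid>
```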
After that, we just need to add additional update policies for specific fields: one is updating node capabilities, and the other is updating a node's storage interface. This is code that we've actually already written; we've tested that it works, and we're just waiting for it to be reviewed. Finally, there's something that's a little bit longer-term, which is the FLOCX bare metal marketplace. The idea here is: think of ESI, but in a marketplace model. An owner would not only offer up a node, they'd also set a price for use of the node, and then a lessee would come to the marketplace, see what nodes are available and at what price, and then actually purchase use of the hardware for whatever time they need. This started as a graduate student project at Boston University, and if you're interested in more details, there's a link right there. If you're interested in further information, there's an ESI presentation and a demo at this YouTube link. If you're interested in our documentation or code, we have a repository. And if you'd like to talk to us on IRC about ESI, we can be found on freenode in the MOC channel. And that's my entire talk. Thank you.