Hi, I'm sorry about this, we don't have much time left, so we'll go through these slides quickly. I'm Dave, Dave Chen from Intel. Today I'm going to walk through image metadata in Cinder. I worked with Travis and Duncan on planning this topic, so thanks, guys, for joining.

This is today's agenda. First we'll talk about the motivation, why we want to do this, and introduce some use cases that show why this feature is useful in Cinder. Then we'll go through the work we have done in Horizon and Cinder, give a conclusion, and walk through a concrete example of how it works.

This is an issue I actually picked up from the OpenStack community. From it we can see the kind of problem a user may face: he has a bootable volume and just wants to boot an instance from that volume. But sometimes he wants to boot that instance in a different virtualization environment, so he needs to change some properties, for example switching the NIC model from virtio to e1000. Currently you can't do that, because there is no supported way to specify this metadata, or we could say these properties, on the volume in Cinder.

The background here is that a Cinder volume actually holds different kinds of metadata: image metadata, volume metadata, and admin metadata. When we create a bootable volume, the image properties, what we call image metadata, are copied from the Glance image into Cinder's bootable volume. This image metadata serves the same purpose as the metadata in Glance: it is used to control VM booting, scheduling, and things like that. That image metadata is consumed in various scenarios; we'll talk about the details in the following slides. Now, the problem I just mentioned can actually be solved easily by setting the hw_vif_model property on the Glance image.
Then we can create a new bootable volume from that image. But the issue is that if you have already done work on an existing bootable volume and have a lot of data stored in it, you cannot use that data anymore. This is a big problem if you have a lot of work invested in that volume. So the problem here is that you need some way to update the image metadata on a bootable volume, or to attach custom semantics to it, for example a custom OS shutdown timeout that controls how long to wait for a graceful shutdown before a guest is powered off.

Currently, what we can do in Cinder is use the commands shown here to change volume metadata, and this is not what we're looking for. The first command can be used to create or update volume metadata, and the second command can be used to delete volume metadata. But what we want is the same capability for image metadata, so any user can modify it to match their specific requirements.

Someone may argue that there are alternatives, such as Nova flavors, or controlling this through the Glance image. Yes, Nova flavors can do part of this: you can configure extra specs on a flavor, Nova will consume them, and it will know what to do at boot time. But after digging into the source code, we found that Nova will actually override that customization from the flavor with the image metadata. So using Nova flavors alone is not enough. The other approach is to just use Glance. Glance maintains the image, and we do have the capability to modify the image metadata there.
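As an illustration of the distinction between the two metadata stores, and of the operations the proposal adds alongside the existing volume-metadata commands, here is a toy sketch. All names here are illustrative, not the real Cinder data model or client API:

```python
# Toy model of a Cinder volume's two metadata stores (illustrative only).
class Volume:
    def __init__(self):
        self.metadata = {}        # plain volume metadata (what today's commands touch)
        self.image_metadata = {}  # properties inherited from the Glance image

# Today's commands only operate on volume.metadata; the proposal adds
# the equivalent create/update/delete operations for image_metadata.
def set_image_metadata(volume, **props):
    volume.image_metadata.update(props)

def unset_image_metadata(volume, *keys):
    for key in keys:
        volume.image_metadata.pop(key, None)

vol = Volume()
set_image_metadata(vol, hw_vif_model="e1000")
print(vol.image_metadata)  # {'hw_vif_model': 'e1000'}
unset_image_metadata(vol, "hw_vif_model")
print(vol.image_metadata)  # {}
```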
But as we just mentioned, we want to keep using an existing bootable volume; we don't want to lose the data stored on it, right?

We have talked a lot about image metadata, so you may want to know where all this image metadata is actually consumed. Why do we want to do this? Why is it needed? Here is a concrete example. The first use case is the watchdog. The watchdog is a feature you can use to control guest behavior: it keeps an eye on the guests and carries out a specified action when it observes a failure. These actions include powering off the instance or pausing it. From this piece of code, you can see that Nova first looks up the value from the flavor extra specs, but if the same property is defined in the image metadata, it overrides the behavior defined by the flavor.

This is the overall flow of booting an instance. When you click a button in Horizon, it runs an instance for you through functions such as run_instance, build_instance, and spawn. The last step generates the libvirt XML config file. This file is produced by Nova, and libvirt uses it to launch the instance. The image metadata is one of the sources for building that libvirt XML: Nova reads the properties defined in the image metadata and puts their values into the XML, so they control what happens when you launch an instance.

Another use case is host filtering. When filtering hosts, the scheduler decides whether a host can be added to the candidate list, right? It checks properties like the architecture, hypervisor type, and VM mode, and if the host matches those requirements, the compute node is added to the candidate list. This is the reason why we need the capability to modify properties defined in the image metadata.
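The override behavior described above can be sketched as follows. This is a simplification of Nova's actual logic, though `hw:watchdog_action` (the flavor extra spec) and `hw_watchdog_action` (the image property) are the real property names:

```python
# Simplified sketch of the precedence rule described above: the image
# property, when present, overrides the flavor extra spec.
def effective_watchdog_action(flavor_extra_specs, image_properties,
                              default="disabled"):
    action = flavor_extra_specs.get("hw:watchdog_action", default)
    # Image metadata wins over the flavor setting.
    return image_properties.get("hw_watchdog_action", action)

flavor = {"hw:watchdog_action": "reset"}
image = {"hw_watchdog_action": "poweroff"}
print(effective_watchdog_action(flavor, image))  # poweroff (image overrides)
print(effective_watchdog_action(flavor, {}))     # reset (flavor applies)
```

This is exactly why setting the flavor alone is not enough: whatever is baked into the image metadata, and therefore into the bootable volume, takes precedence.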
You can see that it actually reads the values of architecture, hypervisor type, and VM mode from the image metadata.

As we just mentioned, when a guest is going to shut down, Nova allows some time for a graceful shutdown before powering it off. Obviously this should be configurable. By default the time is 60 seconds, but you should be able to set the value you want instead of just using the default. So all we are looking for is some way to update the image metadata on a bootable volume to match a user's specific requirements.

This is the overall process, and it's our proposal for both Horizon and Cinder. In the first step, suppose a user wants to set some metadata. They would do that through Horizon, and Horizon will invoke the corresponding APIs in Cinder. Cinder then checks whether the user has permission to do that. We define this module as property protections, and we split it into two parts: the first part is role-based access control, and the second part is policy-based control. We'll go into the details in later slides. It looks up the user's roles from Keystone to find out whether the user has access rights to that property. Of course you can do this in Horizon, but you can also use the Cinder client. We provide commands analogous to the ones used to modify volume metadata: the first command here can be used to update or create image properties, that is, image metadata, and the second command can be used to delete them.

For the Cinder API, we propose to define some new endpoints. This first API is used to create and update image metadata: you pass both the name and the value of the image metadata in the request body, and you should expect the same data back in the response body. The second API can be used to delete metadata you no longer want.
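The host-filtering check can be sketched like this. It mirrors the idea behind Nova's ImagePropertiesFilter in simplified form: the image's architecture, hypervisor type, and VM mode are compared against the combinations a compute node advertises:

```python
# Simplified sketch of image-property host filtering (not Nova's code).
def host_passes(supported_instances, image_props):
    """supported_instances: list of (arch, hypervisor_type, vm_mode)
    tuples advertised by a compute node. A property the image does not
    set acts as a wildcard."""
    wanted = (image_props.get("architecture"),
              image_props.get("hypervisor_type"),
              image_props.get("vm_mode"))
    return any(
        all(w is None or w == s for w, s in zip(wanted, supported))
        for supported in supported_instances)

caps = [("x86_64", "qemu", "hvm"), ("x86_64", "kvm", "hvm")]
print(host_passes(caps, {"hypervisor_type": "kvm"}))  # True
print(host_passes(caps, {"hypervisor_type": "xen"}))  # False
```

A host that supports a matching combination joins the candidate list; otherwise it is filtered out, which is why being able to edit these image properties on a bootable volume matters for scheduling.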
It marks the metadata as deleted; you can check that in the database if you like. For this API you just pass in the name of the property, and you should expect an empty body with a 200 response code. That means the deletion was successful.

Next we're going to talk about property protections. Why do we need this in Cinder? You know that Glance already has role- or policy-based access control for image properties, so why do it again in Cinder? Because the bootable volume is actually created from the image, and with this feature we can change image properties inherited from Glance. If we don't check the user's rights, a user could also upload the bootable volume back as an image, and that would violate, for example, billing policies attached to the image. This is the reason why we need property protections in Cinder as well.

So how do we do this in Cinder? We propose to provide two types of property protections: the first is role-based, and the second is policy-based. This is a sample configuration for role-based property protections. We use it to protect specific metadata by listing, for each property, the permitted operations, such as create and read, together with the associated roles. And this is a concrete configuration: we use it to control access to a property like billing_code. From this configuration, a user who has the role admin can do whatever they want to that metadata: create, update, delete, anything. But if you only have the guest role, you can just read the property.

Here is the configuration needed if you want to use this. The first step is to configure cinder.conf: set the property protection file to property-protection.conf, set the rule format to roles, and copy this file into the /etc/cinder/ directory. Then create the roles; they should match the roles defined in Keystone.
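To make the role-based scheme concrete, here is a small checker that evaluates rules in the spirit of the configuration described above. The file format follows Glance's property-protections style, where section names may be regular expressions matching property names; the `x_billing_code` rules are the sample from the slide, and the function names are mine:

```python
import configparser
import re

# Sample role-based rules: admin may do anything to x_billing_code*,
# guest may only read it.
SAMPLE = """\
[x_billing_code.*]
create = admin
read = admin,guest
update = admin
delete = admin
"""

def allowed(conf_text, prop, operation, user_roles):
    """Return True if any of the user's roles permits the operation."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    for section in cfg.sections():
        if re.fullmatch(section, prop):  # section name is a property regex
            permitted = {r.strip()
                         for r in cfg.get(section, operation, fallback="").split(",")}
            return bool(permitted & set(user_roles))
    return False  # properties with no matching rule are denied

print(allowed(SAMPLE, "x_billing_code", "read", ["guest"]))    # True
print(allowed(SAMPLE, "x_billing_code", "delete", ["guest"]))  # False
print(allowed(SAMPLE, "x_billing_code", "delete", ["admin"]))  # True
```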
And then you can run the command to update or create that metadata.

Policy-based property control is basically the same as role-based property control, but it's built on oslo.policy. From this configuration you can see that, besides defining the actions for each property, we also define a policy for each action, and we define those policies in policy.json so that oslo.policy can check whether the user is allowed to do that. This is a sample configuration for os_shutdown_timeout. From this configuration, you can see that we define policies like cinder_create for the create and read actions, and we define these policies in policy.json, as shown in the description column.

These are the steps needed to use policy-based property control. The first step, similarly, is to set the protection file to protection-policy.conf, set the rule format to policies, and copy the file to the directory. Then create the policies: first create a user, assign them roles such as guest or admin, and define protection-policy.conf appropriately. You also define the policies in policy.json, which is consumed by oslo.policy. And then you can just run the command to update the image metadata.

This is what we want to do in Horizon. It makes things easier: you just click a few buttons in Horizon, and everything else is done behind the scenes. On this slide, you can see we just click Update Metadata for the bootable volume. A window pops up in which you can type the name of the image metadata and set its value however you want. This capability has actually already been provided in Horizon.
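To illustrate how the policy-based variant ties property actions to policy rules, here is a toy evaluator. It supports only `role:<name>` atoms joined by `or`, a tiny subset of real oslo.policy syntax, and the policy names are made up for the example:

```python
# Toy policy evaluator (a small subset of oslo.policy rule syntax).
def check_rule(rule, user_roles):
    # Supports only "role:<name>" atoms joined by "or".
    return any(atom.strip() == "role:%s" % role
               for atom in rule.split(" or ")
               for role in user_roles)

# Hypothetical policies for the os_shutdown_timeout property, in the
# spirit of what would live in policy.json.
POLICIES = {
    "create_os_shutdown_timeout": "role:admin",
    "read_os_shutdown_timeout": "role:admin or role:guest",
}

def allowed(action, prop, user_roles):
    rule = POLICIES.get("%s_%s" % (action, prop), "")
    return check_rule(rule, user_roles)

print(allowed("read", "os_shutdown_timeout", ["guest"]))    # True
print(allowed("create", "os_shutdown_timeout", ["guest"]))  # False
print(allowed("create", "os_shutdown_timeout", ["admin"]))  # True
```

In the real proposal, the rule strings are evaluated by oslo.policy rather than by hand, which is what makes the policy-based variant more expressive than plain role lists.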
But we should also be able to create properties that are not listed here, meaning you can customize properties for your specific requirements.

Let's put it all together. First, the user wants to set some property, like the OS shutdown timeout. We can just do that in Horizon. Horizon invokes the Cinder REST APIs, and Cinder stores this value with the bootable volume. Then, when Nova boots an instance, this property, this image metadata, is copied from the volume into Nova's tables, specifically the instance system metadata. Later, when the user wants to shut down the guest, Nova checks how long it should wait for a graceful shutdown before powering it off. So here it will wait 18 seconds before shutting down the guest.

This is the conclusion. First, we want to change Horizon to expose the metadata: besides some default properties, you can customize whatever your requirements call for. We're also going to change the Cinder client, so you can do this by typing Cinder commands. We propose some REST APIs to update this image metadata. And we're going to provide property protections to avoid policy violations. That's what I wanted to talk about. Any questions?

[Audience question about when the properties are read.]

Yes, it reads them. When Nova is generating the libvirt XML, it will read these properties.

[Audience] What are the chances of getting this feature in? It's a nice feature; I don't understand how important it is, but what are the chances of getting it into the next release, into Liberty?

[Audience] Dave, isn't the code already ready? Isn't it just an issue of getting the blueprint approved for merge?

Yeah.

So we essentially need all of your support to get it merged, I mean, to get the blueprint approved. It would help if you say you need it. The code is ready, and Dave pretty much had it ready for Kilo itself, but we never got the "let's do it" thumbs-up. Any other questions?
Thank you.