Hello everyone. My name is Balázs Gibizer, and I'm here to talk about a small tool that visualizes the placement resource view. As you most probably know, OpenStack started collecting the resource view in a separate service called placement, and today it already tracks the CPU, memory, and disk utilization of the compute hosts. But in recent releases we added a lot of other resources and features to placement, like tracking bandwidth and tracking Cyborg accelerators, and with that we added a lot of complexity to the resource view stored in placement.

When you want to query the placement resource view, you end up using the OpenStack CLI, because there are resource provider subcommands in the OpenStack CLI for that. You have to go and list the resource providers, check the inventories, check the traits, check the aggregates. You also have to know what the consumptions in placement are, so you have to know who the consumers are. Consumers tend to be servers, so you list the servers, remember their UUIDs, go back to placement, and query the allocations of those servers. That is a lot of commands to remember, and the output of these commands refers to UUIDs, so you have to correlate between command outputs. When I used these, I always ended up drawing a picture by hand. Then I realized that the picture could be drawn by a tool, not just by me on paper. So I ended up implementing something that visualizes the placement information, and it ended up being an OpenStack CLI plugin that you can pip install, called osc-placement-tree. It provides two new OpenStack CLI subcommands: resource provider tree list and resource provider tree show. The tree list command gives you the whole resource view of the overall cluster, while tree show limits the output to a single provider tree, which is basically a compute node.
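The manual workflow described above might look something like the following sketch; the resource provider subcommands come from the osc-placement plugin, and the UUID and name placeholders are just illustrations.

```shell
# Manual workflow with the osc-placement plugin: walk the providers one by one.
openstack resource provider list
openstack resource provider inventory list <provider-uuid>
openstack resource provider trait list <provider-uuid>
openstack resource provider aggregate list <provider-uuid>

# To see consumptions you first need the consumer (server) UUIDs...
openstack server list
openstack resource provider allocation show <server-uuid>

# With osc-placement-tree installed, the same view comes from one command:
pip install osc-placement-tree
openstack resource provider tree list
openstack resource provider tree show <compute-node-name>
```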
This tool emits Graphviz-formatted text output that you can pipe into any tool that understands Graphviz, and you get a picture. What does this picture look like? This is the output from a two-compute-node DevStack which has bandwidth configured on both compute nodes, and one of the compute nodes even has an FPGA from Cyborg. So this is already a pretty complex picture. Let's zoom in. This is one compute node from this two-compute-node DevStack. All the boxes are resource providers. The top box is the compute resource provider, and below it there are child providers, because we need more fine-grained control over who reports which resources. If you zoom in even further, this is the compute root provider. It provides inventories of different resource classes, like disk, memory, and vCPU, and the output contains the used, total, and reserved amounts of these resources. It also shows the traits the compute node has; these are compute capabilities. It would show aggregates if any aggregates were defined, but I don't have those here. And of course it shows the name of the compute node and the UUID of the resource provider.

What is more interesting on this picture are the arrows that show the relationships between providers and consumers. The solid arrows are parent-child relationships between the root provider and the child providers, and the dashed arrows are consumptions. If you go down on this picture, this is a child provider of the compute node that provides the bandwidth. But what is more interesting is the bottom of the picture: that is a consumer, in this case a server booted by Nova. There are two arrows coming out of it, representing two kinds of consumption by this consumer. The one on the left goes to the grandchild bandwidth provider; the one on the right goes up to the root.
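Turning the emitted text into an image is a standard Graphviz pipeline; the output format and filename here are just examples, and `dot` comes from the Graphviz package.

```shell
# Render the Graphviz text output of the tool into an SVG image.
openstack resource provider tree list | dot -Tsvg -o placement-view.svg
```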
Both arrows are annotated with the amount of resources this consumer consumes from that specific provider. So that is the structure.

OK, what is this good for, and why did I build it? I had to troubleshoot a lot of resource allocation problems in my environment, and as I said, I always ended up drawing pictures, so I wanted to automate that. It also helped me understand how placement models resources and how placement models consumption. And last but not least, as a Nova developer, I had to understand how we test the complex placement interactions. We have functional tests both in Nova and in placement that set up pretty complex scenarios, and those test cases tend to describe what they are doing, but I wanted to verify that the description is correct: that what the fixture says it will do is actually what it does in placement. So I just ran this tool against the database created by the test case and compared the description in the test case with the actual picture I got from the tool.

What is it not good for? If you try to run this against a really big deployment, especially the openstack resource provider tree list command, it might fail on the client side, because it queries everything out of placement, and today placement doesn't have a single GET API for this. So the tool has to iterate through a lot of API calls, gather all the information in memory on the client side, and then generate the graph. I tried this against a 100-compute-node deployment, and it takes 20 seconds on my laptop to generate the picture. That's still OK, but I wouldn't run it against a thousand-node deployment; that would most probably blow up your client. What you can do instead is use the tree show command, because that limits the output to a single compute node, identified by the name of the compute. That is always safe to run, because a single compute can't be that big. So, am I missing something? If you find something missing, tell me.
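To illustrate the structure described above, here is a toy sketch (not the tool's actual code) of how such a view maps onto Graphviz DOT: providers become boxes, parent-child links become solid edges, and allocations become dashed edges annotated with the consumed amounts. All names and amounts are made up.

```python
# Toy sketch: emit Graphviz DOT text for a tiny placement-like view.
# Solid edges: parent -> child provider; dashed edges: consumer -> provider.

def to_dot(providers, parent_of, allocations):
    """providers: {name: {resource_class: (used, total)}};
    parent_of: {child_name: parent_name};
    allocations: {consumer: {provider: {resource_class: amount}}}."""
    lines = ["digraph placement {", "  node [shape=box];"]
    for name, inv in providers.items():
        # Show each inventory as "CLASS: used/total" inside the box label.
        label = name + "\\n" + ", ".join(
            f"{rc}: {used}/{total}" for rc, (used, total) in inv.items())
        lines.append(f'  "{name}" [label="{label}"];')
    for child, parent in parent_of.items():
        lines.append(f'  "{parent}" -> "{child}";')  # solid parent-child edge
    for consumer, per_provider in allocations.items():
        lines.append(f'  "{consumer}" [shape=ellipse];')
        for provider, resources in per_provider.items():
            label = ", ".join(f"{rc}: {amt}" for rc, amt in resources.items())
            lines.append(
                f'  "{consumer}" -> "{provider}" '
                f'[style=dashed, label="{label}"];')
    lines.append("}")
    return "\n".join(lines)


# A two-level tree: a compute root with a bandwidth child, plus one server
# consuming from both, mirroring the picture described in the talk.
dot = to_dot(
    providers={
        "compute1": {"VCPU": (2, 8), "MEMORY_MB": (1024, 16384)},
        "compute1:NIC:eth0": {"NET_BW_EGR_KILOBIT_PER_SEC": (1000, 10000)},
    },
    parent_of={"compute1:NIC:eth0": "compute1"},
    allocations={
        "server-A": {
            "compute1": {"VCPU": 2, "MEMORY_MB": 1024},
            "compute1:NIC:eth0": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000},
        }
    },
)
print(dot)
```

Piping the printed text through `dot -Tsvg` would render the same kind of boxes-and-arrows picture the tool produces.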
I'm gibi on Freenode, and I am here at this summit and at the PTG. And of course, you can go and fork the repo and send me pull requests. It's not under the governance of the Foundation right now, because it's just Chris Dent and me who have contributed to it, so it was easier to keep it as a GitHub repo. But if there is enough interest, I can move it under the Foundation.

I already know some of the things that are missing. For example, placement aggregates today are just UUIDs in the picture, but I could draw arrows to represent them visually. Also, there are relationships between consumers: if you resize or migrate a server, you will have two separate allocations, one on the source host and one on the destination host, and those would just be two separate boxes on this picture. I should correlate them as well, to show that these two consumptions belong to the same server being migrated at the moment. And also, as I said, I use this a lot in the functional tests of Nova and placement. Today I have a patch proposed to placement that enables dumping this resource view as a .dot file at every step of a placement functional test run, so you can actually step through the evolution of this picture as the test progresses. You can enable it with an environment flag. I also have a snippet of code that you can copy into the Nova functional tests to get the same, and I plan to propose that patch against Nova to automate this. But if you have anything else, please tell me. Thank you.