Okay, is everyone here? Good morning everyone, it is great to be here in Berlin today to present to you about tracing in OpenStack, a new step forward. Here is some information about us, the speakers. This is Mr. Vinh. So hello everyone, I'm Vinh, I'm a core reviewer of OpenStack and an organizer of the Vietnam OpenStack user group, and currently I'm working in Vietnam. Hello everyone, my name is Hieu, I'm also an organizer of the Vietnam OpenStack user group and the organizer of the OPNFV user group in Vietnam, and right now I'm a software engineer, specializing in cloud, at Viettel, a cloud provider in Vietnam. And the last one is me: my name's Yen but you can call me Iris. I don't officially work in the field, but I have a huge interest in OpenStack; this is the first time I have ever been to this Summit and I want to learn more about OpenStack.

Okay, and here's the agenda of what we are going to talk about in this presentation. The first topic is tracing in OpenStack with OSProfiler. Next are the ways to add trace points to an OpenStack project, then the supported backend drivers and trace databases. We will also talk about common configurations and storing traces with Jaeger tracing, and we have also added a demo and a Q&A session.

You can see here a representation of tracing in OpenStack with a simple command, `openstack image list`. I'm sure that everyone knows this command. You can see the duration of this request, roughly 24 milliseconds, and that we can look at and trace a lot of things here, such as WSGI calls and DB calls, across the services Keystone and Glance. Later on I will show you more of what OSProfiler can trace. Okay, and here is some information for those who don't know what OSProfiler is.
It is an OpenStack library which can be used to analyze service dependencies, to optimize performance and latency, and to analyze root causes. The information that OSProfiler provides to the user includes the dependencies between OpenStack services, the call duration in each service, in each function and each method, and DB calls; it also logs errors and exceptions. You can see on the slide what OSProfiler can trace: WSGI HTTP requests, DB calls, and SQL requests. These are usually disabled by default because they generate a lot of traffic, but if you want to trace these requests you can turn them on. OSProfiler can also trace RPC calls and driver calls, and in the case of Nova and Cinder it can trace the calls of vendor drivers as well. And here are the projects that OSProfiler already supports: the core projects, including Nova, Keystone, Neutron, Cinder, and Glance, and also some other projects such as Ironic, Glare, Heat, Magnum, Zun, and the others you can see on the slide.

I think you may wonder how OSProfiler works. Imagine that in OpenStack we have a lot of services, such as Nova, Keystone, and Cinder. Assume we have a request that travels through OpenStack services A, B, and C: a function in service A calls function B in service B, function B calls function C in service C, and at the end the request returns to service A and then to the user with the information. You can see here that I am drawing some trace points. Those trace points are how OSProfiler works: at each trace point, OSProfiler collects information such as the time consumed from the start to the end of the function, some arguments, events, and the result of the function, then sends it to the notifier, which stores the information in the trace database.
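The trace-point mechanism just described can be sketched in a few lines of plain Python. This is a simplified illustration, not osprofiler's actual code: the names `collected_spans`, `trace_point`, and the three service functions are all hypothetical stand-ins for the real notifier and trace database.

```python
import time
import uuid

collected_spans = []  # stands in for the trace database behind the notifier

def trace_point(name):
    """Decorator mimicking a trace point: record start/end time and the
    result of the wrapped function, then 'send' the span to the collector."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            span = {
                "trace_id": str(uuid.uuid4()),
                "name": name,
                "started_at": time.time(),
            }
            try:
                result = func(*args, **kwargs)
                span["result"] = repr(result)
                return result
            finally:
                span["finished_at"] = time.time()
                span["duration_ms"] = (span["finished_at"] - span["started_at"]) * 1000
                collected_spans.append(span)  # hand off to the notifier
        return wrapper
    return decorator

# A request travelling through three services, as in the slide:
@trace_point("service_c.function_c")
def function_c():
    return "data"

@trace_point("service_b.function_b")
def function_b():
    return function_c()

@trace_point("service_a.function_a")
def function_a():
    return function_b()

function_a()
for span in collected_spans:
    print(span["name"], round(span["duration_ms"], 3), "ms")
```

The innermost call finishes first, so its span is collected first; service A's span, covering the whole request, is collected last with the largest duration.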
So, about the trace points. We have many ways to add a trace point to an OpenStack service; actually we have five ways, but here we only show you two. The first one is to add it manually, with a call at the start of the function and another at the end; you can see here the point name and any key/value pairs that you want to log to OSProfiler. With the decorator, you only mark the start of the function and OSProfiler will do everything for you. There are more ways to add trace points to your OpenStack service, which you can see in the documentation: you can trace a whole class, and even use a metaclass.

Currently OSProfiler supports a lot of backend drivers: messaging with RabbitMQ via oslo.messaging by default; Ceilometer, storing straight to Panko (actually this one will be deprecated in the future); and Redis, MongoDB, Elasticsearch, and LogInsight. The last one I want to mention here is Jaeger tracing, which was done in the Rocky cycle.

The next thing I want to talk about is the configuration. If you want to enable OSProfiler in your OpenStack, you just add this section to your service configuration, for Nova or Glance for example, with a few options: `enabled` marks that OSProfiler is enabled for this OpenStack service; if you want to trace your SQLAlchemy calls too, you enable that as well (by default it is disabled, because it generates a lot of traces); the HMAC key makes sure that only an authorized client can send trace requests to your OpenStack service, and I will talk about this later; and the connection string specifies which driver we want to use and where we want to store the traces, in this case Redis. For this configuration, I have the simple command that I showed you earlier.
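The `[profiler]` section being described looks roughly like this. The option names (`enabled`, `trace_sqlalchemy`, `hmac_keys`, `connection_string`) are osprofiler's real oslo.config options; the key value and the Redis address are placeholders:

```ini
[profiler]
# Enable OSProfiler for this service (Nova, Glance, ...).
enabled = True
# Also trace SQLAlchemy calls; disabled by default because it is noisy.
trace_sqlalchemy = True
# Secret key(s); a client must present one of these to start a trace.
hmac_keys = SECRET_KEY
# Which driver stores the traces; here a Redis backend.
connection_string = redis://127.0.0.1:6379
```

The same section goes into each service's configuration file (nova.conf, glance-api.conf, and so on), followed by a service restart.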
With `openstack image list`, you can see I add the `--os-profile` argument with the HMAC key here, and besides the output you are familiar with, OSProfiler prints a trace ID (or a span ID for the OpenTracing-based drivers; in this case it is Jaeger tracing). You can use this ID to search, in this case, in the Jaeger UI dashboard, and if you want to generate an HTML representation of this trace, you can type this command in your CLI. I will show you an example: this is the trace of the image list I showed you earlier, like this one. At this point OSProfiler collects information about your service, such as the query, the method, and the path to your service, and also the trace duration in milliseconds. This is a request span in the Glance WSGI.

So, back to the presentation. We can also store traces with Jaeger tracing: just change the connection string here to the URI of the Jaeger agent, and I will demo this for you. In this demo I will show you the config, and how to create a trace and view it in Jaeger tracing. I have a virtual machine here. With this simple command, just press enter and wait a moment. It shows the same information as I showed you earlier, and you can see in Jaeger, yes, there is a new trace here. Click on it, and the information shown is the same as in the HTML you saw earlier. There is a lot of information you can see here, and the duration of the calls, that is, of the requests in each service. I will show you a query, just a quick query. Yes, there is a DB statement here, and you can see the complete query, for you to debug or do root cause analysis. It will also show some dependencies; I'll show you that later. As for the configuration, you only have to add this section to your service configuration, which enables OSProfiler for you; after that, restart the service and you can do anything you want. So, back to the presentation. Okay, here is what we will talk about next.
Okay, so we have already demoed the process of generating a trace, viewing the result in HTML, and viewing the result via the Jaeger backend. But if you want to deep-dive or do further analysis, we have also implemented a feature to represent the result as a .dot graph, as shown on the screen here. In the previous demonstration we also showed you the DAG, the directed acyclic graph, which shows the dependencies between the microservices in OpenStack. As shown on the slide, you can see, for example, from the Nova API to the Nova conductor to the Glance API. These are the dependencies between the services in the graph, and the numbers state the total requests that we collected via tracing after adding the HMAC key.

You have seen that we need to manually configure the HMAC key and enable OSProfiler on each service. So, for scenarios where you want to automatically benchmark your OpenStack cloud, the community has implemented a feature to integrate with the Rally benchmarking framework: you enable the scenarios, add the HMAC key, run the benchmark scenarios, and capture the output from OSProfiler. Here is the reference to the Rally documentation on how to enable profiling with OSProfiler via Rally.

And some of you may ask: what is this HMAC key that we keep using? You can use the HMAC key as a secret key to identify which client sent the request, and which chain of services is enabled for tracing in your OpenStack. For example, you can enable the chain from Nova through Glance to Keystone via HMAC key A, and enable the chain from Nova to only Keystone via HMAC key B. When a client uses HMAC key B, they only see the traces from Nova to Keystone, as you configured. So OSProfiler uses the HMAC key just as an identifier.
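The HMAC-key mechanism can be sketched with Python's standard `hmac` module: the client signs the trace context with its key, and only services configured with the same key accept and continue the trace. This is a simplified stand-in for osprofiler's real signing code; the function names and the trace fields are illustrative.

```python
import hashlib
import hmac
import json

def sign_trace(trace_info, key):
    """Client side: sign the trace context with the HMAC key."""
    payload = json.dumps(trace_info, sort_keys=True).encode()
    return hmac.new(key.encode(), payload, hashlib.sha256).hexdigest()

def accept_trace(trace_info, signature, configured_keys):
    """Service side: continue the trace only if the signature matches one
    of the keys configured for this service (its hmac_keys list)."""
    payload = json.dumps(trace_info, sort_keys=True).encode()
    return any(
        hmac.compare_digest(
            hmac.new(k.encode(), payload, hashlib.sha256).hexdigest(),
            signature)
        for k in configured_keys
    )

trace = {"base_id": "abc123", "parent_id": "def456"}
sig = sign_trace(trace, "key-b")

# A service configured with key-b accepts and continues the trace...
print(accept_trace(trace, sig, ["key-b"]))   # True
# ...while one configured only with key-a ignores it.
print(accept_trace(trace, sig, ["key-a"]))   # False
```

This is why a client holding key B only sees spans from the services where key B is configured: everywhere else the signature check fails and no span is recorded for that trace.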
And for future development, we firstly want to support more backends for the trace database. Right now, as of the Rocky cycle, we only support Jaeger tracing through the OpenTracing client, but in the future there are other trace databases we could support, such as Zipkin from Twitter or LightStep. And to reduce the time spent manually configuring OSProfiler, we want to adopt mutable configuration from the Oslo project, for changing the configuration without restarting the OpenStack service: you just change the configuration, send a signal, and the service reloads it automatically. For continuous tracing, we want to trace automatically, without typing the command, in integration with mutable configuration. And the last thing, which we adopt from Zipkin and Jaeger, is implementing sampling, by probability, rate, or adaptively, to reduce the overhead. We have already done some benchmarking to calculate the overhead of integrating OSProfiler into an OpenStack cloud: the total overhead can vary from 5% to over 15% for the memory footprint or the CPU. With a sampling rate, or a configured probability of how we capture and filter the trace results and spans, we hope we can reduce the total overhead. And as said on a previous slide, by default we disable tracing via SQLAlchemy because it can generate a lot of traces and cause too much performance overhead.

And yeah, that's all for our session. Any questions? Yes, you can go to the microphone.

I have a question about enabling this in a production environment. You have an HMAC key specified in the config, but that is a secret and nobody knows that secret. Is there any overhead with just enabling one secret?

You can enable as many secrets as you want, and with each key you generate traces only for the services that have that key; you do not have to care about the rest.

So there's no overhead for normal requests?
Yeah, actually it does have overhead.

Even when the HMAC key is not specified, for a normal request, there's still overhead?

Yes.

Okay. Thank you.

Basically, we inject some data into the request, at the WSGI, HTTP, or DB level. The overhead here comes firstly from the number of traces and secondly from the size of the request, because based on the HMAC key we need to inject a trace ID into the request, and this is the reason for the overhead on OSProfiler performance. Yeah.

Would it be possible to use this as a way to do SLA monitoring of calls into the OpenStack framework, or would the overhead be too much? Sorry? Would OSProfiler be a tool for monitoring latency in API calls, so that you can enforce self-set service level agreements about how fast the infrastructure should respond, or is it too heavyweight for this purpose?

No, it's not a tool for monitoring, nor for service level agreements, because if you enable OSProfiler for SLA purposes, it adds overhead and so it affects your SLA itself. By the way, OSProfiler's name comes from profiling; that's why we call it OSProfiler, and then after discussing with community members and with other communities like OpenTracing, we saw that we could follow the way of Zipkin or LightStep. So basically OSProfiler is just for benchmarking and root cause analysis. And you can see here how it can be used for root cause analysis. For example, in the demo here, if the request travels from the user all the way down to the Keystone database here and returns the response to the end user, we mark it in green. But if something happens, for example a database timeout, a lost connection, or a bug in your code, the last trace displayed here will have a red color. So for any request that causes you trouble, if you want to debug or find the root cause, you can look at the trace and find the last span in the result.
It will show you the information about the request, and about the exception inside here, the exception here. But this one is a good request, no problem. Okay.

Is there any integration plan with external tools? For example, if you have some middleware that calls OpenStack and is already OpenTracing-compliant, you could follow a request everywhere, into OpenStack and back, and have the complete workflow, not only the OpenStack part.

Currently we do not have anything like that; however, in the community we have seen one person do something like that: a request is sent from another system to OpenStack and traced with OpenTracing, inside OpenStack it is traced with OSProfiler, and it returns back outside with OpenTracing. However, we do not have it yet; it is just something the community has done.

Okay. Any questions? Thank you. So you will have around ten minutes for lunch. Thank you for attending this session. Thank you.
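To make the sampling idea from the future-work discussion concrete: a head-based probabilistic sampler, as used by Zipkin and Jaeger, decides once at trace start whether a request is recorded, so unsampled traffic pays almost nothing. This is a simplified illustration with hypothetical names, not osprofiler code.

```python
import random

class ProbabilisticSampler:
    """Head-based sampling: decide once, at trace start, whether to record
    a request; all downstream spans follow that decision, so the per-request
    cost for unsampled traffic is a single random draw."""

    def __init__(self, rate):
        assert 0.0 <= rate <= 1.0
        self.rate = rate

    def should_sample(self):
        return random.random() < self.rate

sampler = ProbabilisticSampler(rate=0.1)  # trace roughly 10% of requests
random.seed(42)  # fixed seed so the sketch is repeatable
sampled = sum(sampler.should_sample() for _ in range(10_000))
print(sampled)  # roughly 1000 of the 10000 requests
```

Rate-based and adaptive sampling refine the same idea: instead of a fixed probability, the sampler targets a fixed number of traces per second, or adjusts the probability based on recent traffic.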