I would like to introduce myself and Andrei Kurilin. Andrei Kurilin is currently the PTL of Rally, so he is leading this project, and I was the initial author of Rally and led it for a few years. I'm Boris Pavlovic, if someone doesn't know that. First of all, I would like to ask a question: do we already have some Rally users or contributors here? Who is using Rally currently? OK, cool. And did someone contribute something? Yeah, cool. OK, but I'll repeat for those who don't know what Rally is. It tries to make software testing great again, and it tries really hard, and there are some really good results already. Basically, it's both a tool and a framework: it allows you to write simple plugins and then, using this framework and a YAML format, combine these plugins into very complicated test scenarios. So you write simple code, then you combine it into complex scenarios and get the results that you need. And then Rally allows you to work with these results: generate reports, compare results, or take the results out of Rally and push them into other systems, like Elasticsearch, and do whatever else you are interested in. It also allows you to build health checks of different services based on these results, checks that are more sophisticated than just some kind of unit tests.

This is a very old picture, but it's still good. There are a few parts of Rally that are pluggable, and there are the components of Rally. First of all, there is the deployment part. Usually, most people are using production-like or pre-production environments, so they already have environments and just use those existing deployments; they don't use Rally to deploy the stuff. However, it's possible.
Then there is the verify framework, which allows you to wrap your unit-test-based frameworks like Tempest under Rally, have a simple, unified interface for all of them, and store results for the long term: comparing them, tracking how they change over time, and so on. And there is the benchmark, or task, framework built into Rally that allows you to combine these plugins together and create different test scenarios. And you can write exporters, or just use the built-in reporting mechanism that allows you to generate reports.

This is the new version of the Rally task format. As you see on this slide, there is a version two here. You can specify such simple things as a title and a description, so you can understand what this task was about. Then you have subtasks, which are a list of subtasks that you are going to run. You can think of each subtask as a test scenario that you want to run. This test scenario has a title and description and other things as well. And it has this keyword called workloads, which allows you to specify different scenarios that will be run one by one or in parallel. Parallel execution is still in progress, but it will let you run Rally scenarios in the same context, which gives you more flexibility here. So this new format is simpler than the previous one, and it's more explicit, and it allows us to continue work on the framework so that we can run multiple different scenarios in the same context as well. There are a few more features related to that.

Then you have Rally task reports that you can generate with one command from any results that you have. And they are rich: it's one HTML file that contains multiple inner pages, you could say. It has an overview where you can see a list of the subtasks that you have run, and then you have detailed information about what each subtask actually ran, its results, different charts, and so on.
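To make the version-2 task format described above concrete, here is a minimal sketch. Rally accepts task files in YAML or JSON; this example builds the JSON form in Python. The scenario name `Example.do_something` and its argument are illustrative placeholders, not real Rally plugins, and the runner settings are just typical values.

```python
import json

# A minimal Rally task in the version-2 format: a title/description at the
# top, a list of subtasks, and each subtask carrying one or more workloads.
# Scenario and runner names below are illustrative, not real plugin names.
task = {
    "version": 2,
    "title": "Sanity check of the demo cloud",
    "description": "One subtask with a single workload",
    "subtasks": [
        {
            "title": "Simple workload",
            "description": "Runs a hypothetical scenario 10 times, 2 at a time",
            "workloads": [
                {
                    "scenario": {"Example.do_something": {"size": 1}},
                    "runner": {"constant": {"times": 10, "concurrency": 2}},
                }
            ],
        }
    ],
}

# Serialize to a file that could be passed to `rally task start`.
text = json.dumps(task, indent=2)
print(text.splitlines()[1])  # -> '  "version": 2,'
```

A file like this would then be launched with `rally task start task.json`, and the HTML report described above generated afterwards with `rally task report`.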
After that, there is the Rally verify report. This is the framework, as I said, built around unit-test frameworks, and Andrei can tell more about this. So, this is the new report for the verification component. It was written from scratch. It allows you to generate a report for several verification results at once, to compare them, to filter by test status (success, fail, skip), and to generate this report for any number of results; it's not limited now.

OK, so about the project itself. It started in Havana, so it's already a few years old. Since the beginning, we have had more than 350 contributors from almost 80 companies, which is great. And the latest user surveys show that about 25% of deployments have Rally, even though it is just a testing tool, not something that produces direct value, which I think is great. And a lot of people are thinking about adding it to their deployments, which is great as well. So we have adoption. And I would like to pass the microphone to Andrei. He's going to tell more about what we did in this release and what the goals for the next one are.

OK, thank you, Boris. So in this release, in Pike, we spent time making Rally a more generic framework. That includes several tasks. We made the Rally verification component more generic: originally it was designed to simplify launching Tempest, but now you can write a plugin for another unit-test-based framework. For example, it can be some unit tests for Kubernetes, for Docker, or for another system. And we unified our Rally task framework to support different platforms; there is a PoC for Docker plugins that contains several simple scenarios, a context, and so on. So now Rally is suitable for launching checks against whatever platform or application you want. We spent a lot of time on it, and finally we succeeded. While we were preparing Rally to check different platforms, we were also looking at how we can manage plugins and so on.
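The verification workflow described above is driven by a handful of CLI commands. A typical sequence with the Tempest verifier might look like the following; the exact flags are from Rally's CLI as we recall it and should be checked against the current documentation.

```console
$ rally verify create-verifier --type tempest --name my-tempest
$ rally verify start --pattern set=compute
$ rally verify start --pattern set=compute
$ rally verify report --uuid <uuid-1> <uuid-2> --type html --to ./report.html
```

The last command is the part that is new here: one report built from several verification results, with comparison and status filtering inside it.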
So we are planning to split Rally into several repositories. The idea is that it becomes possible to easily install Rally plugins as simple Python packages, which also simplifies managing the requirements for these plugins. Basically, the problem is that if we put all the clients (OpenStack Python clients, Docker clients, Mesos clients, other kinds of clients) in one place, there will be a huge number of requirements, which makes Rally a very heavyweight tool, which you don't like. Instead, you can install just the Rally core functionality, like pip install rally, which will work very straightforwardly because it doesn't have any heavyweight dependencies. And then you can install Mesos plugins, or OpenStack plugins, or Docker plugins, or Kubernetes plugins using the same pip install command. And Rally will automatically discover all this stuff, so you don't have any configuration pain here.

Thank you, Boris. Also, we made great improvements in the cleanup. Previously, Rally cleanup was awesome, but it wasn't ideal: it tried to clean up all resources in specific tenants, which is not suitable for production clouds with real tenants and real users. Now Rally filters only those resources which were created by a specific task, so even one Rally task will not remove resources belonging to another Rally task. So you can use Rally with existing users without fear that it will remove your VMs and so on.

So that was about Pike, and here is the plan for Queens. We plan to focus on making Rally more user-friendly for operators. That includes cleanup improvements: again, we made great progress in the previous release, but we need to implement more features, such as disaster cleanup. If for some reason Rally failed, or someone killed the Rally process and so on, we need a way to remove all the resources left behind in your cloud. We have all the mechanisms for it and just need to implement a simple command to do it. It also includes postponed cleanup.
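The split installation described above would look like a normal pip workflow. At the time of the talk the plugin package names were still being settled; `rally-openstack` below is shown as the likely example, and the other names are hypothetical.

```console
$ pip install rally                # lightweight core only, no cloud clients
$ pip install rally-openstack     # OpenStack scenarios, contexts, cleanup
$ pip install rally-docker        # hypothetical Docker plugin package
```

After installation, Rally discovers plugins from the installed packages automatically, so no extra configuration step is needed.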
So you would be able to launch some specific task, then analyze the resources, figure out why they failed or something like that, and only then remove them, and your cloud will stay clean enough. And since we are trying to make Rally unified for all platforms, we need to extend the cleanup mechanism to support different platforms and the different resources of all those platforms. And we are planning to look at Elasticsearch; Boris will talk about that.

Yeah, so for the last year I started working more in a DevOps, operational role, and I tried to use Rally on some production environments. And I started understanding that the DevOps approach is a bit different from the developer approach, and what works well for developers doesn't always work for DevOps. So we are going to fix some mistakes that we made in Rally. One of them is this: in production or pre-production, everybody on the DevOps side is already storing this kind of data in some time-series storage or in Elasticsearch. They will bring up Elasticsearch, put the data there, put some Kibana or Grafana on top, or they already have everything they need and just want a few extra metrics. So there will be a big effort here to make a backend for Rally that stores data in Elasticsearch instead of SQL. And the data will be stored in such a way that you can aggregate it in whatever custom way you actually need, or put some alerts on top, and so on: all the things that DevOps guys are doing, made simple and straightforward out of the box, without any hacks on top. So yeah, this is going to be a great effort. And if we succeed and see that the SQL backend is not that popular anymore, then maybe we will have two modes of Rally, with the same framework: in one case a one-time run that gives me a report, for quick tests; and in the other, more like Rally as a service, where you just run Rally and it stores data in Elasticsearch.
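At the time of the talk this Elasticsearch backend did not exist yet; the command below is only a sketch of how such a feature could surface through an export-style CLI, with the exporter type, flags, and URL all being assumptions rather than a documented interface.

```console
$ rally task export --uuid <task-uuid> --type elastic \
      --to http://elastic.example.com:9200
```

The point is that once results land in Elasticsearch, aggregation, dashboards (Kibana, Grafana), and alerting come from the existing DevOps toolchain rather than from Rally itself.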
So you have an API that you can query from other systems, or you can take the data directly from Elasticsearch and push it to the other places that you need. OK, next slide. As we presented, task framework version two includes subtasks, and right now it is possible to launch workloads one by one, serially. But we want to improve it and enable launching these workloads in parallel, so the result is more similar to real load, to some chaos, something like that. And as I mentioned previously, we made Rally more unified, and now we need to implement plugins for other platforms. It is already possible; we just need to work on these plugins. Basically, that's it.

OK, so we need some help. First of all, someone should hire Andrei, because his work at Mirantis is ending. Yeah, but he's a cool guy; you can get a reference from me if you need it. And as well, like any open source project, we really need support from the community: making reviews, fixing some bugs, doing some commits, making new plugins. Or at least sharing feedback: if you're facing some problems, or you're forced to do a lot of hacks and customization on top of Rally to run it, just share that, and we'll try to fix it and make it simpler to use. Sharing feedback is a very important part here, and it's not that hard: you can just send an email to the mailing list or ping someone specific, and that will work as well. OK, so there are some links here. If you grab the presentation, you can see here where the source code, documentation, bug tracker, and other things that may be very useful for you are. And basically, we are moving to the Q&A part. I'll keep this slide up for a while. OK, good. So, questions. Does anybody have some questions?

Sure. Regarding the future improvements for different platforms, can you go into a bit more detail on that? I mean, let's say I'm using the Tempest part of Rally for testing. Can you introduce yourself first?
Sorry, I do QA for Oracle OpenStack. So I use the Tempest part of Rally for my testing. But let's say we move on to Kolla on Kubernetes, and I want to do some testing that is not Tempest-related. How does that work? Would you like to answer? OK. Originally the verification component was designed to simplify launching Tempest, and we had a lot of code for this task. Then we realized that a lot of things in the verification component don't relate to Tempest at all. So we made the verification component more pluggable: you can write a plugin that describes how to launch a specific tool, and then our reporting mechanism, saving to the database, and comparison will all work for it.

I can add a bit more here. Basically, every company takes some kind of unit-test framework and puts their integration tests inside it. We saw that many companies have their own Tempest-like frameworks, with a bunch of tools around them, and then some job in Jenkins and other things trying to glue all this stuff together. Instead of doing that, you can make a plugin in Rally which will just trigger your job to set up, clean up, and run the tests. And it will store the results for you, and it will do the reporting and all the other stuff that is common to all kinds of unit tests. So that was the idea. Yeah, that answers my question. Thank you.

OK, cool. Any other questions? OK, sure. Can you go to the microphone, please? I'm a QA engineer at Midokura. There was some plan to integrate Shaker with Rally as well; I have been a bit out of the loop lately. Is this still true, or what's the situation? I think, if we have enough resources, in the next release we will work on this task, because Shaker is a good tool, and data plane testing is very important; it's a highly desired feature. So yeah, we want to include it in Rally, to integrate with it. We did make some changes in the framework itself, so scenarios can return custom data that is aggregated afterwards.
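The core of the verifier-plugin idea described above is an adapter around some existing test runner: launch it, then translate its raw output into a unified result structure that the common reporting can consume. This is a self-contained sketch of that translation step only; the output format, status names, and function are illustrative, not Rally's real API.

```python
import re

# Matches unittest-style result lines such as "test_name ... ok".
# The line format and the status vocabulary here are illustrative.
LINE = re.compile(r"^(?P<test>\S+)\s+\.\.\.\s+(?P<status>ok|FAIL|SKIP)$")

def parse_results(output):
    """Translate raw runner output into a {test_name: status} mapping."""
    results = {}
    for line in output.splitlines():
        m = LINE.match(line.strip())
        if m:
            status = {"ok": "success", "FAIL": "fail", "SKIP": "skip"}
            results[m.group("test")] = status[m.group("status")]
    return results

raw = """\
test_create_server ... ok
test_delete_server ... FAIL
test_resize_server ... SKIP
"""
print(parse_results(raw))
```

A real plugin would wrap this kind of parser with the set-up, run, and clean-up hooks for the specific tool; the unified mapping is what makes comparison and HTML reporting work the same way for every framework.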
And this capability is required to do data plane testing, because it generates extra data: for example, if you are doing some pings or sending traffic, then you have bandwidth, latency, and other parameters that you would like to report. And we have support for these things in reports as well. But we are still not there with the plugin that does the Shaker work, so that's still a work in progress.

OK, any other questions? No? OK. I hope that was good. So thank you for your attention. Thank you. Have nice days at the summit. Bye-bye. Bye.