Okay, sure. Boris, would you introduce yourself, please?

Hi, yes, sure. My name is Boris. I've been working on OpenStack for more than three years. I started as a scientist at the Institute for System Programming of the Russian Academy of Sciences (ISP RAS), where we were working on a very interesting project: an HPC cloud, for high performance computing. We had to implement PCI passthrough in Nova; maybe you've heard about SR-IOV, where VMs are able to work directly with devices, without virtualization. This was very critical for our case, because without it the overhead of OpenStack was like 90 percent for our cloud, so it just didn't work well. So I developed the part of the code that allows you to use PCI devices directly inside VMs, and I tried to contribute it back to OpenStack. I would like to say a very big thanks to Russell Bryant, who at that point was the PTL of Nova, and to another contributor who helped me a lot with getting involved in the community. While working on this code I saw that there were some gaps in Nova, especially in the DB part: a lot of requests were not optimal, and I started refactoring from the bottom layer upward. I did a lot of patches, and Mirantis saw that in Stackalytics and they just called me to work for them.
So I moved to Mirantis, where I got quite a large team to teach to contribute upstream to OpenStack, and I was pretty flexible in choosing the tasks we would like to work on. We concentrated mostly on Oslo: helping remove oslo-incubator and split everything into libraries. One of the large libraries done by my team at that point was oslo.db; I would like to thank Roman Podoliaka and Victor Sergeyev, who worked very hard on this task as well. We unified all the DB code across all the projects, which removed a lot of lines of code in each project and sped up the movement of dropping oslo-incubator in favor of the Oslo libraries. As you know, now we don't have any oslo-incubator code, and that is very, very awesome. During this work on OpenStack I saw a lot of issues with architectural decisions; there were performance problems, scalability problems and so on, and it was really hard to explain upstream that your code that does some magic with the DB, or some magic change in the code, actually helps in all the cases, because OpenStack is very flexible: you can configure it in different ways for different purposes, and then it's absolutely unclear what to do. So I thought that we need a framework for measuring performance, so I can propose a change and tell somebody else: okay, here is the performance without this change, here it is after, and then they will be able to repeat the same experiment. This is why I started the Rally project, like two and a half years ago, I think. And we started it a bit differently: it's an out-of-the-box solution for testing, so you can just install and run it.
You don't need to take care of how to process results, how to generate reports, how to run it, how to verify that your input tasks are proper, and so on; we try to keep everything together and make it very pluggable. And by testing I mean all kinds of testing: we try to unify in a single framework all possible testing strategies, functional, performance, scale, regression, load, longevity, capacity and so on. There is one that is missing, HA, but we are working on it, so you will be able to do HA testing using the same framework and the same plugins.

How does it work? Test cases are designed as files: the plugins are Python code, and in YAML files you specify a combination of four types of plugins that are used as building blocks. First you have a scenario plugin, which is a set of user actions: boot a VM, wait until it boots, delete the VM and wait until it's deleted. That's one iteration of the scenario. Then you have a runner plugin: this thing generates load by calling the scenario plugin multiple times with the arguments that you pass to it. So you can call the scenario 10 times in parallel, for some total number of iterations, and it will keep 10 scenario iterations running in parallel. And then there are context plugins: if you compare it with a unit test framework, those are the setup and teardown.
So: prepare the environment, and after the load is done, clean up the environment. We can create users, set up roles, set up quotas, add some servers, do some other stuff, because, as I said, OpenStack is very hard to test and you always have a lot of steps before you can actually generate the load: you need to create users, set them up properly, and so on. This separation into scenarios, contexts and runners allows you to reduce the amount of code. We have just a few runners and that's enough; then we have a bit more context classes, for all the resources that work together to build the environment; and the scenarios themselves are pretty simple, they just do their job. So we have really good interfaces here. The last one is the SLA: a set of success criteria based on the results, such as no failures, or some amount of expected failures, which enables negative testing. For example, if you would like to test quotas, you start 10 VMs but can only start 5, so you expect that five will fail, and you can express this without writing new plugins, which is great. All plugins accept arguments, like what flavor to use, what load to generate, or what amount of users to create, and these parameters go into the YAML files where you combine everything together into a test case. This is great because developers can work on the code plugins, which are building blocks for QA engineers, who don't need to write code: they just combine the blocks in YAML in the way that they need for their tests.
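As an aside, the combination of the four plugin types described here can be illustrated with a minimal Rally-style task definition. Rally accepts both YAML and JSON task files; it is shown here as JSON parsed from Python. The scenario and field names (`NovaServers.boot_and_delete_server`, the `constant` runner, the `failure_rate` SLA) follow common Rally conventions, but treat the exact names as illustrative rather than authoritative.

```python
import json

# A Rally-style task: one scenario combined with args, a runner,
# a context, and an SLA. Field names are illustrative.
TASK = """
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {"flavor": {"name": "m1.tiny"}, "image": {"name": "cirros"}},
      "runner": {"type": "constant", "times": 100, "concurrency": 10},
      "context": {"users": {"tenants": 2, "users_per_tenant": 3}},
      "sla": {"failure_rate": {"max": 0}}
    }
  ]
}
"""

task = json.loads(TASK)
workload = task["NovaServers.boot_and_delete_server"][0]

# The runner keeps 10 scenario iterations in flight until 100 total are done.
print(workload["runner"]["concurrency"])   # 10
print(workload["sla"]["failure_rate"])     # {'max': 0}
```

For the negative quota test mentioned above, the SLA block would instead allow an expected share of failures, e.g. `"failure_rate": {"max": 50}` when five boots out of ten are expected to fail.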
So great. What we are working on now is making it possible to use Rally for other projects, for other open source solutions, and we are really close to that point. It will still be as simple to test OpenStack as it is now, but it will be simple to test other platforms as well.

Coming out of the Newton Design Summit, what are, you know, the three highest priorities for Rally for Newton?

One really big one is an improved cleanup mechanism. We would like to have disaster cleanup, which means: if something goes very wrong, for example Keystone goes down or something happens in the middle of testing, we currently don't have a proper way to clean up all the resources. During Mitaka we refactored a lot of code in Rally to unify the naming of resources, so we will be able to delete them with a brand new instance of Rally once you fix your cloud. This is very important for production environments. Another one is the restructuring of Rally's OpenStack plugins. This is required to reduce the code duplication between the scenarios and contexts that we have now, and to allow people to test multiple API versions with the same plugin. So one plugin for booting a VM will work with different versions of Nova, and users won't see the difference: it will pick the proper version that exists in the cloud and use it to run the proper commands to perform the action. Another topic is the scalability of Rally itself. We did a lot of work on this in Mitaka as well, and we are going to finish it in Newton. Basically, you'll be able to run tasks that run forever, until your disk is full. This is required by another very important task, distributed load generation, which allows you to generate load from different servers; the target number is to have 100 servers generating load. The hard part of this is that we need to store all these results very
optimally, because there are a lot of results, and making the system handle this requires the scalability changes that will be put in place. And there is a fourth topic that is very interesting as well: trends reports, that is, historical performance data reports. You run the same task over time and you would like to see how the durations and errors change over time. This is asked for by the OpenStack community and by a lot of companies, because they run Rally periodically and there is currently no way to generate such reports. The same goes for reports comparing the results of two runs.

Does that comparison of results, including the report generation, include the OpenStack configuration itself? So you can compare, for example, different options for, let's say, Neutron or Nova, or the hypervisor, or various other options which are available?

This is a really good question. The answer is no. The thing here is that when we started Rally, we were thinking about a very portable solution, so it won't be vendor-specific, you will be able to use it everywhere, and it should be easy. So we have a criterion of not using anything other than Keystone credentials: you provide Keystone credentials, and based on that you get all the information from the cloud that you can get via the API and perform all the tests. Until we have some kind of config DB, that is, configuration-as-a-service for all of OpenStack, with an API that returns us the configuration options of the different services, we won't be able to do that. Another option, since Rally won't be OpenStack-specific, is the ability to go via SSH to the different servers and collect this information. However, in any case it would be very vendor-specific: DevStack keeps its configuration in one place, Mirantis OpenStack in another, other distributions in a third or fourth place, and so on.
Yeah, so it's a very hard task until we have configuration-as-a-service, which would be a very good thing for OpenStack, but it's not supported well.

I understand. Unfortunate, but it would be a really good design decision. Understood, thank you. There are multiple themes which have evolved in OpenStack over the last several releases: scalability, resiliency, manageability, modularity, interoperability, security, user ease of use. Which of those themes is the Newton work on Rally addressing, or is it even more in the background?

We are definitely working on scalability problems. We did a lot: we refactored the whole code of Rally so it all uses streaming algorithms. Streaming algorithms are nice because they can process chunks of results and they need a constant amount of RAM. The only thing left is the DB migration from the old schema to the new one: instead of storing everything in one big blob field with one request to the DB, we will store a lot of small chunks over time, and then everything will be done and we'll get real scalability. Another thing we would like to do is to store chunks of zipped data rather than chunks of raw data. Based on our experiments, this will reduce disk usage about 20 times, which is a lot, so you can run way more things. Great. Okay.
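The two ideas described here, consuming results chunk by chunk with a constant-memory streaming aggregate, and storing each chunk compressed instead of raw, can be sketched as follows. This is a toy illustration, not Rally's actual code; the chunk layout and field names are assumptions.

```python
import json
import random
import zlib

class StreamingStats:
    """Constant-memory aggregate over iteration durations."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.min = float("inf")
        self.max = float("-inf")

    def add(self, duration):
        self.count += 1
        # Incremental mean update: no need to keep all samples in RAM.
        self.mean += (duration - self.mean) / self.count
        self.min = min(self.min, duration)
        self.max = max(self.max, duration)

stats = StreamingStats()
stored_chunks = []            # what would go to the DB, one row per chunk
raw_bytes = packed_bytes = 0

random.seed(0)
for _ in range(100):                       # 100 chunks of results
    chunk = [{"duration": random.uniform(1.0, 3.0), "error": None}
             for _ in range(1000)]         # only one chunk in memory at a time
    for result in chunk:
        stats.add(result["duration"])
    raw = json.dumps(chunk).encode()
    packed = zlib.compress(raw)            # store zipped chunks, not raw JSON
    raw_bytes += len(raw)
    packed_bytes += len(packed)
    stored_chunks.append(packed)

print(stats.count)                         # 100000
print(raw_bytes > packed_bytes)            # True
```

The exact compression ratio depends on the data; the 20x figure from the interview applies to Rally's real result payloads, not to this synthetic example.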
So, about manageability: we are working on the refactoring of the OpenStack plugins, as I said before. The point is that we will make it way simpler to write OpenStack-related plugins, and this is a very important topic, because in some places it has become a mess, especially with the requirement of testing multiple API versions, where we don't yet have a clear solution for everybody. And we are working on modularity as well. During the last two releases we have been splitting the framework from the plugins; we have almost finished this work, and we are improving the plugin framework so it will let you make a new repository which can be installed with pip, like just a Python package, and Rally will discover all its plugins and work well. Once we finish the work on the plugin framework and on splitting the OpenStack plugins from the Rally engine (that is another task: splitting Rally and OpenStack), we will be ready to move all the OpenStack plugins into a separate repository. This will make the Rally core way lighter, because you won't need to install all the Python plugins and all their requirements, and there will be less code, which will make it simpler for newbies to understand how Rally is organized and how it works. It will also let us scale up the process, because we can have a separate community for the OpenStack plugins and a separate one for Rally core functionality. Very good. And then, if we add new platforms, like Mesos, Kubernetes, Docker, anything, they can have their own repositories, and you won't need to install the requirements for all these projects every time, only the ones you need. Great, thank you.
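Rally's actual discovery machinery is more involved, but the core idea described here, a light core that finds plugins contributed by separately pip-installed packages, can be sketched with a self-registering base class. All names below are hypothetical.

```python
class Plugin:
    """Base class: any subclass becomes discoverable on import."""
    _registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Plugin._registry[cls.__name__] = cls   # auto-register the plugin

    @classmethod
    def get(cls, name):
        return cls._registry[name]

# These classes could live in a separate pip-installed package
# (e.g. a hypothetical rally-plugins-openstack); importing that
# package would be enough for the core framework to see them.
class BootAndDeleteServer(Plugin):
    def run(self):
        return "booted and deleted a server"

class ConstantRunner(Plugin):
    def run(self):
        return "ran scenario N times with fixed concurrency"

print(sorted(Plugin._registry))
# ['BootAndDeleteServer', 'ConstantRunner']
print(Plugin.get("BootAndDeleteServer")().run())
```

With this shape, the core ships only the `Plugin` base and the lookup logic, while each platform's plugins (OpenStack, Kubernetes, Mesos, ...) can live in its own repository and be installed on demand.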
Well, the final question I have is kind of a follow-on. To the extent you can say, or feel comfortable saying: what will be the themes you are planning to address in the Ocata time frame?

So, Ocata: I think Ocata will be all about HA testing and monitoring, and Rally-as-a-service, a complete solution where you can run Rally as a service, submit tasks to it, and get an interactive mode, a simpler way to design your test cases and plugins. As well, there will be support for more platforms, like Kubernetes, Mesos and so on, and we'll have plugins for testing these platforms too. And we will also work on the splitting: I think that only in Ocata will we start splitting the core code of Rally from the OpenStack plugins, because before that point in time we don't actually need it, not before we introduce other platforms. So these are the core Ocata things to do.

Great, thank you very much for your time, greatly appreciate it. Thank you. Thank you.