Welcome to the first talk of the sixth EasyBuild User Meeting. This talk will be by Vasileios Karakasis from the Swiss National Supercomputing Centre (CSCS). He will be giving us an introduction to ReFrame, a talk titled "Writing powerful HPC regression tests with ReFrame".

Thank you, Simon, for the introduction, and thanks to the EasyBuild community for hosting us. Some of you have certainly seen a similar presentation at previous EasyBuild meetings, so I'm going to give some updates, and for those that don't know what ReFrame is, I'm going to go over it again. I'm not going to focus so much on code listings in this presentation; I'm going to stay a bit more high level, since we now have the tutorial for all the details. Just before starting, you can see here the links for the GitHub project, the documentation, as well as the ReFrame Slack, and I would ask those that are planning to attend the tutorial to join the Slack channel. Since a couple of months ago we also have a Twitter account, so make sure to follow it if you want to get more updates.

So, as I said, I'm the group lead of the Scientific Computing Support group at CSCS. Our key mandate in the group is to provide the scientific computing environment to the scientists; we are mostly on the application level rather than on the system level. And there has been this question, which many of you have faced: how can we ensure that the user experience is unaffected after a system upgrade or a small change in the configuration? We all know that an HPC software stack is quite complex and sometimes very fragile, and so are scientific codes, so we need a way to abstract how the testing is done and make it easier and more attractive, let's say, for the application people to write tests for the system.

Then there are several concerns about how testing can be made sustainable. How do you keep it consistent? I don't want one group, for example, to write tests one way and another group a different way, and then, when the system gets upgraded, it's a communication mess; it's much better if there is a single framework that we can all stick to and be consistent with. That also brings better maintainability of the tests: we don't want each test to carry all the details about how it interacts with the system; we want to abstract the common part away into a single place, so that the tests are independent of it. Portability is another big thing: we want an easy way to migrate the same tests to other systems, or, if we set up a small cluster like we did for the ReFrame tutorial, we don't want to rewrite all the tests. And of course automation is quite important, as is efficiency: we want an easy way to run the tests, and we don't want to wait for ages. All of these are challenges for testing.

And let's admit it: testing, from software testing to system and HPC application testing, is not something very attractive, because it really requires the same level of engineering as writing the code itself.
But we should definitely admit that tests are much less attractive to write, and I'd bet on how many of you have really done test-driven development, instead of jumping in and writing the features first and then saying, okay, I might have to write a test. The problem, and I see it in myself as well, is that the value of testing is seldom visible in the short term, as opposed to features. Testing also has several levels: it's not just the unit tests that you might run for your application (ReFrame itself, for example, is unit tested); especially for applications you need higher-level tests, like integration tests, which you cannot necessarily achieve with unit tests of your code. Automation is another big thing: initially you say, okay, I will run my tests every time I change something, but as a project grows, automation becomes really essential. And of course we all know that testing can never be complete; we will never have tests that really cover everything, unless we do real software verification and things like that.

But that's true in general. HPC adds another dimension: system testing. HPC has, as you all know, multiple interacting components, from the hardware to the software: multiple compilers and programming environments or toolchains, multiple MPIs, multiple libraries, multiple applications that we need to support, multiple architectures, and even multiple clusters. And what is quite important in HPC, what makes it different from other types of environments, is that it is about high performance, and high performance does not go well with portability. So problems might arise anywhere, and both functionality and performance are quite important: you can have a very fast code that does not compute what it should, which matters especially in a scientific environment, and at the same time you do need performance and you need to ensure that performance is retained across upgrades.

Most of you have come across all the intricacies of the software stack. Here I've tried to visualize it a bit, in a very simplified view: you can see all the different interacting components that might affect how an application runs. There can be a problem in a driver that can really screw up an application, and then you have the compilers, the different runtimes, everything. If you start writing individual, ad hoc tests for each of those things, it really becomes unsustainable.

So, what has been, until recently, the HPC system testing landscape? Either there was no or minimal testing: whenever there was an upgrade, we would just go ahead with the upgrade and let users discover problems, open tickets, and we would solve them on the fly. Or it was something minimal, like manual testing, where each member of the staff (especially if you have large groups, which is not the case at most centers) maintains their own set of tests and just works on those. Other solutions have been very ad hoc, site-specific frameworks, where the tests are usually tied tightly to the site configuration.
And sometimes you end up with lots of unnecessary test code per application, which carries high maintenance costs and can also lead to low test coverage, because you cannot just easily write new tests or maintain them; essentially you'd have to copy-paste. This has started to change in recent years. One of the solutions is ReFrame; later on I will show a couple more frameworks that are worth looking at.

What we're aspiring to with ReFrame is a generic HPC testing framework that allows you to write portable HPC regression tests — not entirely portable, but essentially quite portable — that are easy to move from one system to another, and we're going to see that in the tutorial. These tests are written in a high-level language: they're written in Python. It's not really necessary to know Python very well to write them, and this also gives you all the flexibility that a modern, high-level programming language like Python offers for writing, organizing, and maintaining your tests. One key aspect is that the framework abstracts away the system interaction details: we don't want anything in the test that ties it tightly to the system, because we want to be able to port tests to different systems with minimal changes. Along with that, we don't want to pollute the test with code that is not related to the logic of the test itself; that helps maintainability. And ReFrame provides an environment and a runtime for running the tests, so that they can run efficiently and take advantage of parallelism and concurrency; you don't have to monitor the scheduler yourself and things like that, the ReFrame runtime takes care of all that in a better way.

We started almost five years ago, a bit less than five years, and we had some concrete design goals, because we had really faced the problems with an in-house regression testing suite based on shell scripts. We had faced the problems of maintainability and portability; the tests were tightly coupled to the system, and the whole team struggled to do even simple bug fixes that were not related to the tests themselves. So productivity was a key goal: we wanted to be able to write tests easily. Portability, as I said, was another one: we had three systems to maintain at the time — now we have two, or three, clusters again — and we wanted to move tests around. We also wanted speed and ease of use, because it's not just our team running it; operations also run a subset of the tests. We wanted to make it easy for them: they don't want to write tests, and they don't necessarily want to bother with the applications, so we wanted to provide a nice and easy command line interface for them to run the tests, inspect any errors, and then contact us. And of course robustness: since we're talking about testing, we couldn't design a testing framework that wouldn't test itself — that would really be an oxymoron — so we started with a test-driven design. ReFrame already has quite high test coverage, and nothing gets in if it's not well tested and if there are no unit tests around it.
So this is a bit of the timeline. We started in March 2016, so it's almost five years. By the end of that year we moved it into production, along with an upgrade of Piz Daint, our big supercomputer, that we had back then. Then we decided to go public; at that time we hadn't even named the framework ReFrame — I think we named it internally PyRegression or something like that. In April 2017 we went public, initially just publishing each release on GitHub, and in 2018 we moved the development completely to GitHub; since then everything is happening on GitHub. And we have already gained some popularity — I'm going to check the stars after the talk, so if you haven't starred the repo, please do. This is the documentation readership from last month: we have around 150 unique users reading the documentation, mostly US-based and in Europe, so the project is being used, or there is at least interest in it, all over the world.

Now some key features. We support cycling through programming environments — in EasyBuild terminology, toolchains — so you can associate a test with the toolchains that you want to test, and ReFrame is going to cycle through them using the same test; you don't have to write a specific test for each toolchain. You can adapt and fine-tune your test per toolchain, but you don't have to write a complete new test by copy-pasting another one; that's something we never wanted to do. We have support for different workload managers, parallel job launchers, and any combination of those, so you can use Slurm plus mpirun, Slurm plus srun, PBS or Torque with mpirun, and so on. For module systems we support Tmod, from Tmod 3.1 up to Tmod 4, as well as Lmod, and all of this is abstracted away: the tests never know which module system they're using; that's system-specific.

We have invested a lot in sanity and performance checking, so there is a kind of mini-language that lets you write your sanity and performance checks, and extract the values you want, in Python code, without having to deal with complex parsers and state machines. You extract the data you want from the output and you work on it as you would in a normal Python function, and there are lots of predefined functions to help you. There is also support for test factories, or, as we call them, parameterized tests, where a test can be parameterized over several parameters and then, from a single test class, you can automatically generate as many test combinations as you want; that's quite powerful. We support container runtimes, so you can have ReFrame launching containers to run your tests in. There are test dependencies, so you can structure your tests such that one test depends on another; that's quite useful, especially if you have lengthy compilation processes and then just need to run a bunch of tests that reuse the build. All of this, including the garbage collection of the different test resources, is handled by the runtime. Of course we support concurrent execution of tests, and there are also run reports and performance logging, which is quite important: you can log the performance of your tests through different channels.
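Going back to the parameterized tests mentioned a moment ago, a minimal sketch using the new 3.4-style `parameter()` built-in might look like the following; the test body, executable, and parameter values are illustrative assumptions, not an actual CSCS test:

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class MessageSizeTest(rfm.RunOnlyRegressionTest):
    # One test variant is generated automatically per parameter value
    msg_size = parameter([1024, 4096, 16384])

    def __init__(self):
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.executable = './my_benchmark'           # hypothetical binary
        self.executable_opts = [str(self.msg_size)]
        self.sanity_patterns = sn.assert_found(r'Result', self.stdout)
```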
The other good thing is that we have really invested in the internals as well, so our internal APIs are quite clean; that allows easy extension of the framework's functionality, and people can come in and develop or extend the framework easily. Overall, the architecture is layered. On top you have the user-facing, let's say thin, layers: on the right you see the regression test API, which is the way you write your tests, and on the left is the frontend, which is how you run the framework; it eventually loads the tests, passes them to the runtime, and the runtime runs them. The thing is that those layers don't speak directly to the system: there are several abstractions in between, implemented as plugins that can be swapped out, or new plugins can be added, which makes the framework quite extensible without having to touch anything else, as long as you implement the required interfaces.

In order to run the tests, the framework uses a kind of pipeline: every test has to go through the different stages that I show here. The test is initially set up, its stage directory is created and all its resources are copied there; then the test is built; then the test is run, meaning it is submitted, for example, to the job scheduler, or launched on the local system; then there is the sanity and performance checking; and finally the cleanup of the test's resources. Now, the good thing about this design is that you can easily adapt it to handle dependencies, as well as concurrency across the different stages. For example, in the serial execution policy, we assume that the setup and build phases are quite short; then you have to submit your job, and the serial policy will simply wait until the job finishes before moving on to the next test. But with this kind of design, you can instead suspend the test as soon as its job is submitted, keep it in some internal queues, move on to the next test and proceed with its pipeline, and then recover the suspended tests as their jobs finish and resume their pipelines. That's how the asynchronous execution policy works. And this can be extended in other phases too: for example, we could extend it so that the build phase happens remotely, and then you have even more parallelism there. The nice thing is that all of this complexity is handled by the framework.

The configuration of ReFrame essentially has three basic sections, and this is where all the system specifics go: it's where you define your systems and your environments — the toolchains that you want to check — and how you associate them. Technically it's a big JSON object, which is stored either in a JSON file or in a Python file. So, I'm going to jump into an example. Here is an excerpt from an actual configuration for Piz Daint; we're going to see all of this in detail in the tutorial. Here, for example, you have a system: you give it a name, and the hostnames that identify that system, so that the framework can match the host it is running on and pick up the right configuration entry.
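For orientation, the kind of configuration entry being walked through here (and continued below) might look roughly like this minimal sketch in the ReFrame 3.x Python format; the names, hostnames, and options are illustrative, not the actual Piz Daint entry:

```python
site_configuration = {
    'systems': [
        {
            'name': 'mycluster',                   # name used to select this system
            'descr': 'Example cluster',
            'hostnames': [r'mycluster-login\d+'],  # regexes matched against the hostname
            'modules_system': 'tmod',
            'partitions': [
                {
                    'name': 'gpu',
                    'descr': 'Hybrid GPU nodes',
                    'scheduler': 'slurm',
                    'launcher': 'srun',
                    'access': ['--constraint=gpu'],  # options needed to reach this partition
                    'environs': ['gnu'],             # environment names defined below
                    'max_jobs': 10,
                },
            ],
        },
    ],
    'environments': [
        {
            'name': 'gnu',
            'modules': ['PrgEnv-gnu'],
            'cc': 'cc',
            'cxx': 'CC',
            'ftn': 'ftn',
        },
    ],
}
```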
Then you have the module system used for that system, and then you start defining the different partitions. These have nothing to do with scheduler partitions; they are just the way you present the system to ReFrame. Here, for example, we define the partition of the GPU, or hybrid, nodes, which uses Slurm plus srun, and here is how you get access to that particular partition. Here you also see the environments that you want to test on that partition; these are just names that are defined later, in a different section of the configuration. You can even define the container platforms that are valid for that partition; for this one we have Singularity, and this is how you're going to get Singularity.

Then you have a section about the environments, where again you just give each one a name. An environment in ReFrame is just a collection of modules and environment variables; it may not even have any modules or environment variables set, in which case it's just the environment you're running in. You also define the compilers, so that ReFrame knows how to compile simple tests on those systems. Regarding the way it builds things: it's not ReFrame's purpose to do what EasyBuild or Spack do; it just has the necessary machinery to build the code for some tests. And then you define the different programming environments.

Now, here is really the simplest possible test. I'm not going to go into more complex tests in this talk, but I want to give you an idea of how a test looks. Essentially it's a Python class that is specially decorated, so that ReFrame picks it up, instantiates it, and treats it as a test. You don't have to decorate every class: you can have your own class hierarchies if you want to structure your test suite, and only the decorated classes are picked up as tests, so it's really quite flexible. Then you need to specify for which systems in the configuration this test is valid; this is a list, and by '*' here we mean any system. The same goes for the programming environments. This means that ReFrame will simply skip the test if you try to run it on a system, or with a programming environment, that the test does not support. You specify the path to the test's source, which here is a simple C program; ReFrame will automatically figure out which compiler to use based on the file extension. You could even specify a directory here, in which case it will auto-detect whether it's a Make, CMake, or Autotools project and invoke the corresponding configure and make steps. But again, you can also configure all of that explicitly.

And here is how you ask ReFrame to check the test for sanity. This is essentially a lazily evaluated expression that says: assert that this regular expression is found in the output of the test. It is lazily evaluated, meaning it's not evaluated when the test is instantiated, but later, in the sanity phase of the pipeline. And this can be arbitrarily complex: you can use a functional style of programming with the framework-provided functions, or you can write your own functions to process the output, do any post-processing you want, and then decide whether the output is sane or not.
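The sort of minimal test being described might look like this sketch, close to ReFrame's own "hello world" tutorial example; the file name and regex are illustrative:

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class HelloTest(rfm.RegressionTest):
    def __init__(self):
        self.valid_systems = ['*']          # valid on any system...
        self.valid_prog_environs = ['*']    # ...and any programming environment
        self.sourcepath = 'hello.c'         # compiler inferred from the extension
        # Lazily evaluated in the sanity phase of the pipeline
        self.sanity_patterns = sn.assert_found(r'Hello, World\!', self.stdout)
```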
That's a very simple example I'm showing here, and this is how it runs. You just pass the path of the test — there is also a predefined path where ReFrame searches for tests, but here I'm passing one explicitly — and then you pass the `-r` option, which basically says: now run the test. If you are familiar with Google Test, the output has a similar look and feel. You see that it runs the test on the current system and the partition that it picked up — that's the generic system and its default partition, which come from the default configuration that ReFrame ships with; it allows you to run on literally any system, with limited functionality of course, if you don't configure it. I'm not going to go into more detail on how the tests work, since we'll have the tutorials; here is the link to them. Today we're going to do the 3.4 release, along with the tutorials that we're going to show from tomorrow.

Moving on: performance logging, which is quite important. ReFrame can log the performance of tests through several channels: regular files, syslog, or Graylog. This is from a server we have at CSCS that listens for and accepts Graylog records; ReFrame sends its records to that server and we can then visualize them, so you have real performance data across time, as I'm showing here. That's quite nice, because you can create dashboards, and you can also correlate failures across different tests, which I find quite useful.

This is a bit of our own setup and how we run it. Our continuous testing works through Jenkins. Daily, we pull the latest master from the ReFrame repository, so as to always get the latest developments and tests, plus a private test repository. Jenkins then logs in to the different systems that we're testing and launches ReFrame; ReFrame runs and sends data through Graylog to Elasticsearch, and then we can visualize it as I showed you before. We have several categories of tests, which we identify by tags — in general you can associate tags with ReFrame tests so that you can easily select them from the command line. We have tests that exercise the programming environment functionality and the HPC software stack, and a sub-selection of those are the maintenance tests, run by the operations people before and after maintenances. We have some benchmarks, and generally we have lots of tests that are used across systems; I know some of you that have ReFrame in production may have even more. It really streamlines the process of upgrades, because once you put it into the process and everything is green, you can be quite confident that you're not going to make a mess when you give the system back to the users. And it has really proven itself: last year we had some bumpy upgrades with Cray, with lots of regressions that we had to send back to the vendor for fixing.

Here is a bit of the ReFrame test suite; you can find all these tests online, and there are lots of them that you can just pick and adapt to your needs.
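As a hedged sketch of how the performance references and tags just mentioned fit into a test (the numbers, patterns, and tag names here are made up):

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class StreamLikeTest(rfm.RegressionTest):
    def __init__(self):
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.sourcepath = 'stream.c'        # hypothetical benchmark source
        self.sanity_patterns = sn.assert_found(r'Solution Validates',
                                               self.stdout)
        # Performance variables extracted from the test's output
        self.perf_patterns = {
            'triad': sn.extractsingle(r'Triad:\s+(\S+)', self.stdout, 1, float),
        }
        # Reference tuple: (value, lower tolerance, upper tolerance, unit);
        # here: fail if more than 5% below 25000 MB/s, on any system ('*')
        self.reference = {
            '*': {'triad': (25000, -0.05, None, 'MB/s')},
        }
        # Tags let e.g. operations select a subset: `reframe -t maintenance -r`
        self.tags = {'benchmark', 'maintenance'}
```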
And these are a couple of sites that I know are using ReFrame, some of which we have collaborated with. NERSC are using ReFrame for their software stack validation, performance testing, and benchmarking, and they also do CI/CD, but with GitLab, to continuously test their software stack. Similarly, the Ohio Supercomputer Center have a comparable setup. And there are other centers using ReFrame, and certainly more that we just keep finding out about.

Another nice example is that you can use ReFrame for integration tests of your code. For example, here is an electronic structure library developed by some people at CSCS and in Switzerland more generally, and they are using ReFrame inside their GitHub Actions to run some integration and validation tests. Essentially, the GitHub Action pulls the tests from the repository and runs the validation tests with ReFrame. The nice thing is that the same tests that are valid for that VM on GitHub are used for running on a developer's laptop, or on Piz Daint — there is no need to rewrite them; it's the same tests with some slight adaptations, everything in a single place.

So, there is a small community forming around ReFrame. A couple of days ago we had 66 members in the Slack channel, and now we're at about 70-plus, because people are joining for the tutorial. There is also an effort, a hub I've put up here, for sharing ReFrame test repositories: if you want to share yours, just give me the URL and I'll fork it under this GitHub organization, where I'm planning to group the different test repositories together, so that people can see what other sites are doing. There are already a few there — apart from CSCS there are a couple of other efforts — but it's still really under development. I know that several centers are quite reluctant to share their tests, but if you ever want to publish them, just let me know in the Slack channel so that we can collect them there.

For those of you that were at the EasyBuild User Meeting last year: we have made lots of progress since. First of all, around May, or early June, we released ReFrame 3.0, which had some breaking changes; the test syntax is much better now, I think, and the configuration was completely revised and rewritten — that's why the old configuration format is no longer supported, especially from 3.4 on. There have been improvements in the installation, in the asynchronous execution policy, and in test dependencies; we added reports at the end of each session, and support for module collections. We have a nice utility function that basically crawls the module system for matching modules — it's like Lmod's spider, but it also works with Tmod — and then you can parameterize your test over all of those modules; that's quite nice. There is better verbosity, and there is the new syntax introduced in 3.4 for how you can parameterize tests, how you can compose them, and how you can dynamically expand the parameterization space — that's quite cool, it's really fresh. And I know Kenneth is going to kill me for this last feature: we also support `spack load` for loading modules.
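The module-crawling utility mentioned a bit earlier can be combined with test parameterization roughly like this; a sketch based on the documented `find_modules` helper, with the application, regex, and executable as illustrative assumptions:

```python
import reframe as rfm
import reframe.utility.sanity as sn
from reframe.utility import find_modules


@rfm.simple_test
class GromacsSmokeTest(rfm.RunOnlyRegressionTest):
    # One variant per (partition, environment, module) combination found
    module_info = parameter(find_modules('GROMACS'))

    def __init__(self):
        system, environ, module = self.module_info
        self.valid_systems = [system]
        self.valid_prog_environs = [environ]
        self.modules = [module]
        self.executable = 'gmx_mpi'
        self.executable_opts = ['--version']
        self.sanity_patterns = sn.assert_found(r'GROMACS version', self.stdout)
```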
And ReFrame is not the only tool, as I told you at the beginning; there are other efforts that started more or less in the same period. One is buildtest — on Friday, at noon, there is going to be the talk from Shahzeb about it — and the other I would mention is Pavilion 2, another tool that is being developed. Those two tools use YAML files for defining tests, instead of Python. If I had to summarize the differences in a single sentence: ReFrame looks at things more from the application and performance point of view, developed top-down, whereas I see those tools as coming more from the system side, developed bottom-up. There are merits in all three tools, and you can go and try them all; I think it's good to have more tools and an exchange of ideas.

Now, what's next? The rough roadmap for the near future is that we're really working towards enabling users to write test libraries and composable tests. No matter what you do, unless you only want a smoke test, a test cannot be entirely generic; the more serious a test is, the less generic it can be. With ReFrame you can already minimize the effort needed to port a test, but with this feature we're planning to make it easy to create libraries: we're going to add some syntax elements so that you can declare, for example, that certain fields are required, and whoever uses that generic library is forced to specialize them somehow for their system. That way, at least the generic part can be shared and common to everybody. You can follow that project under the link here.

Another nice thing we're working on — there is already a pull request and we're hoping to merge it soon — is having ReFrame generate dynamic pipelines in GitLab and have GitLab run the tests. Then you get a very nice integration if you're using GitLab for your CI/CD: you take the tests that you can run manually and, in a single step — something like a `reframe --ci-generate`, very similar to Spack's `spack ci generate` — ReFrame generates the actual GitLab jobs from the tests you give it, and GitLab runs them for you. You use the same tests, and ReFrame runs them for you through GitLab; that's quite nice. And we're planning some improvements in the runtime for increasing concurrency.

Touching base with the developers among you: we have adopted a fixed release cycle, so we have a dev release every two weeks and a stability release every six weeks. We have been following this train model for about half a year now: no matter what, the release is going to be made, so whatever feature is ready gets in, and whatever is not is postponed to the next one. We're going to stick to that, as well as to semantic versioning. Everything is on GitHub if you want to see what's going on; there are the different projects and the releases, so you can see what is scheduled for the upcoming sprints. And big thanks to the core dev team — here are the five people with their GitHub accounts. But I should make a disclaimer: we're not full time on this, it's an open source project, and we have plenty of other things that we're doing.
So issues might be late to catch the release train and might get spilled over to subsequent sprints, and since we're all CSCS people here, priorities might change based on our needs. But we encourage contributions: if you like the tool, and you find bugs or even want features, contributions are more than welcome.

And that basically concludes my talk. Essentially, ReFrame is quite a powerful tool that allows you to test an HPC environment continuously. You write the tests in a high-level language, it helps you achieve portability across HPC platforms by abstracting away the system details, and it has a powerful runtime. For help, there is the mailing list, the Slack channel, and GitHub for bug reports. And finally, some logistics about the tutorial: it's on Tuesday, Thursday, and Friday. Please respond to Victor's email by sending your SSH public keys, so that he can enable access to the virtual cluster we have set up. And when you join the ReFrame Slack, there is a channel for this tutorial (the EUM'21 tutorial channel), so join that too, so that we can take any questions about the tutorial, even offline. That's all; I'm happy to reply to any questions.

Thank you for the talk. We've got the first question ready for you; I will just ask the first person to unmute.

Hi. So the question is sort of two parts, but the main part is about the tests: when you have a test that's non-deterministic, is it possible to specify something like a central value and then some bounds, plus or minus? So if I have a test where the answer is supposed to be five, within plus or minus one half?

Yes, yes, you can do that, both in performance checking and also when you're doing sanity checking. We do have such tests — I don't have them in the presentation, of course, but I'm going to show this in the tutorial as well; let me go back just a second. We have some tests, the GROMACS test I think, and some other applications, where we extract the energy output reported by the program. Here, in the sanity patterns, you can compose different functions or write your own: we extract the value and use a function called `assert_bounded`; we compute the absolute value of the difference between the value we get and the reference, and we make sure it is below a threshold. So yes, you can perfectly do that. And in the performance patterns it's even easier: you say, this is your reference value, and mark the run as a failure if you're beyond, for example, 10% of that.
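A sketch of the bounded sanity check just described might look like this; the GROMACS invocation, log file, pattern, reference energy, and tolerance are all illustrative assumptions:

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class GromacsEnergyCheck(rfm.RunOnlyRegressionTest):
    def __init__(self):
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.executable = 'gmx_mpi'
        self.executable_opts = ['mdrun', '-s', 'benchmark.tpr']  # hypothetical input
        energy = sn.extractsingle(r'Potential Energy\s+=\s+(\S+)',
                                  'md.log', 1, float)
        ref = -3270799.9          # illustrative reference value
        tol = abs(ref) * 1e-3     # accept a 0.1% deviation
        self.sanity_patterns = sn.all([
            sn.assert_found(r'Finished mdrun', 'md.log'),
            # Equivalent to |energy - ref| <= tol, evaluated lazily
            sn.assert_bounded(energy, ref - tol, ref + tol),
        ])
```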
And the second part: I'm thinking specifically of Gaussian, where the output is kind of messed up, because they want to give you some fancy quote at the end every time it runs something. So the output format can vary quite substantially from run to run, which makes it very difficult if all you have is a regular expression parser for the output.

So that's the beauty of it: there isn't just a parser. You can write your own regular expression, but in cases like the one you describe, if you try to capture everything with a regular expression, it's going to be a monster. What you can do instead, because this is fully programmable, is have a simpler regular expression that gets you a list of things back, and then you process that in Python code and find your way out by really programming it. You can write your own function: you have ReFrame extract some of the data in a coarse way, it passes that to your function, and your function does the further analysis and decides whether the test passes or not. So it's really in code — you can write normal Python code to do your scientific checking, use ReFrame's utilities to get the extracted data, and then process it as you wish. It's really very, very flexible.

Okay, thank you.

So if anybody else has any questions: in Zoom, at the bottom, there's a button called Reactions, which allows you to raise a hand, just as Kenneth has done.

Yes, I have a question actually. So my question was, Vasileios: to what extent have you seen different HPC sites, or people, sharing tests among each other, and how would they get organized to do that?

Yeah, that's something that is still hard, because some centers have performance data in their tests that they don't want to share, or they have to get approvals for sharing tests and things like that. Whoever I talk to agrees it would be a great idea if we started sharing tests, but whenever I say, okay, just make something public and give me the URL so that I can fork it under the hub, it's not yet happening to the extent that I'd like it to. And towards that direction: admittedly, if you write a more complex test, at some point you will have to have things specific to your system. That's where the idea of a composable test library comes in. Take the GROMACS test, for example: there are some components that are not system-specific — the input files, the expected output — and you can write those in such a way that whoever takes that library is required to specialize it. You can even do that today, but not in a strict way that would force the users of your base test to be disciplined in how they extend it. But we have a lot of tests public, and I think most centers just take ours and adapt them.

Okay, thanks. Yeah, I see some other people raising hands. Yes, I'll pass over to the first person; I'll ask you to unmute.

Yes, hello. I have a question about parameterized tests. For instance, for the OSU micro-benchmarks, you would like, in a single run, to be able to specify multiple sizes for the messages, and a range for the latency or bandwidth that you allow. Is that possible? Because the previous test was not that simple on that side.

You mean using the parameterized test decorator? Yes, for a single run — because obviously, if you make multiple runs, you set a message size value for each of them, as if you repeat the test. Okay, so with parameterized tests, even with the new enhancement, each parameter value is going to be a new test. But there you can use dependencies: you can have the part that builds the OSU benchmarks, for example, in a single test that the others depend on, and then the run tests that are submitted will run really fast.
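As a rough illustration of that build-once, run-many dependency pattern (the test names, directory layout, and patterns are assumptions, loosely following ReFrame's OSU tutorial example):

```python
import os
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class OSUBuildTest(rfm.CompileOnlyRegressionTest):
    def __init__(self):
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.sourcesdir = 'osu-micro-benchmarks'  # hypothetical local sources
        self.build_system = 'Autotools'
        self.sanity_patterns = sn.assert_not_found(r'error', self.stdout)


@rfm.simple_test
class OSULatencyTest(rfm.RunOnlyRegressionTest):
    def __init__(self):
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.depends_on('OSUBuildTest')     # reuse the single build test
        self.sanity_patterns = sn.assert_found(r'^8\s+\S+', self.stdout)

    @rfm.require_deps
    def set_executable(self, OSUBuildTest):
        # Point at the binary built in the dependency's stage directory
        self.executable = os.path.join(OSUBuildTest().stagedir,
                                       'mpi', 'pt2pt', 'osu_latency')
```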
But then I have the impression, from what you described, that you would still end up with multiple executions of the benchmark, one per message size. My question is more whether it's possible to make one full run, which internally tests multiple message sizes, and then define as a parameter that, for a subset of the message sizes, you expect a certain value with some tolerance.

Ah, yes — this you can do even right now, because in the output, essentially, you can have multiple performance patterns. We're going to see it in the tutorial, where I have the STREAM benchmark and I'm extracting from the output the values for copy, triad, etc. In the same way, you can extract the value for each of the different sizes and have references for each one of those — if I'm getting what you're trying to do.

Yes, that's it. But then I was thinking whether there is another way to abstract it, maybe over input parameters — so in this case the message size — and then, for a set of inputs, to provide the expected values. I'll go over the tutorial, but I have the impression it's not there yet.

We can discuss this in the channel, in the chat, but I think you can do it, if I understand correctly, because you can parameterize your test with any set of parameters, and it's up to you how you generate the list of parameters that you pass; so you can generate combinations of message sizes and so on. I don't fully understand what exactly you need to parameterize, that's the thing. Perhaps we can discuss it, but I believe it's doable.

Okay, thanks.

I'm going to interrupt the questioning there. We've opened the breakout room, so for the next few minutes, if you wish to ask Vasileios questions, please enter the breakout room. This is just to allow us time for the next speaker to set up, ready for the next talk. We'll start in about eight minutes' time.