Thank you, Kristen. Thank you, Parisa. Thanks, everybody, for joining me today. I'm going to start with an overview of the Linux kernel development and testing philosophy. Understanding that philosophy will help you get a better sense of kselftest's place in the development and validation of the Linux kernel. Then we can dive into kselftest itself and how it can be used for kernel validation. I'll also give a brief overview of writing a new test and how it can be added to the kernel selftest suite to be run by default. All right. So why do we test? Testing is an integral part of software development. If anybody took a software engineering course, they talk about design, development, unit testing, integration testing, and so on. Whether we call it unit, developer, regression, or integration testing, the name really doesn't matter; at the end of the day the goal is to release software to users without regressions. Why is that? Because it makes good money sense: it costs more to debug and fix customer-found problems. It's the same with the Linux kernel. We are looking to release kernels that have no regressions, or very few regressions, and good quality code. The Linux kernel testing philosophy is the same as its development philosophy, which is that it is developer and community driven. The Linux kernel community relies on the community and on users to test workloads, the various configurations that the kernel supports, and the supported hardware, finding and reporting bugs. The community does a lot of that: developers find problems in code that they might be using but not necessarily developing or writing in that particular subsystem.
Testing is more important in this model, not just for Linux but for all open source projects in general, because this is not locked-down software where every parameter is strictly controlled, with a distinct development phase, integration phase, and testing phase. In Linux kernel development, developers keep adding new features, enhancing existing features, and sending bug fixes, and they all go in together. In closed source projects the variables are tightly controlled; not so in any open source project, and especially not in the Linux kernel, which supports 24 architectures, 300-plus sub-architectures, and several configurations. So it becomes very important for the testing philosophy to match the development philosophy, and what that means is developer and community driven testing. Just a little bit on the Linux kernel release cycle itself. It's a time-based model, not feature-based. That means we don't agree on a set of features that go into a release; we work toward getting features completed in the time available. If a feature is ready, it'll go in. If it's not, it will wait for the next release. Since releases come out about every eight to ten weeks, if a feature is not ready it might as well wait for the next one; it's right around the corner. And it is a continuous and parallel development model, testing included. So how does overall Linux kernel testing and validation work? For writing tests, there are two in-kernel frameworks: kselftest and KUnit. If you haven't watched it, a couple of weeks ago Brendan Higgins presented a webinar on KUnit and the role it plays in kernel testing. Please check it out; it's on the YouTube channel.
For testing, developers use kselftest, KUnit, and other tools, and the same goes for regression testing: regression testing gets done using kselftest for sure, and you can do regression testing with KUnit while you are booting the kernel and so on. All of these are used; it's continuous integration testing. I also want to touch on the other tools that get used. We have webinars already up on the YouTube channel on static analysis tools such as sparse, smatch, and coccicheck. You use these tools to do static analysis of the code to make sure that we are finding problems using various methods, static analysis as well as dynamic analysis. And you have probably heard about syzbot and fuzzers; that type of testing happens continuously. So let's take a look at where all of this runs. As an example, I'm running the latest release candidate, 5.12-rc6, and I'm going to be switching to 5.12-rc7 very soon; it came out Sunday. That's the system I'm using to do this presentation. I like to self-host, so I keep moving to the latest releases as they come out, all of the RCs. Developers do test RCs on their development systems. And then we have several continuous integration rings: KernelCI, the 0-day boot and performance testing ring, and 0-day build testing. These are all community driven; this is not any one company doing it. There is also Linaro's test farm, and there is a buildbot that runs build tests on, I think, about 53 configurations, last I checked, and about 150 different combinations of those. Just take a look at the buildbot; it's impressive to see how many tests get run. And there's Hulk Robot. In fact, Hulk Robot found a problem in my patch last week in linux-next, before it went into mainline, which was a very timely fix.
So all of these integration rings play a crucial part in finding problems in linux-next and in other developer repositories. Sometimes, when a reporter reports a problem, they also include a patch, which is nice. As for what is tested on these rings: you can check out the kernel repositories involved. Kernel developers request the ring administrators to add their repositories to the test rings, and they get added that way. Once repositories get added, they are pulled and tested each day. These rings also test the stable release candidates each week, and they report results to the stable release mailing list. I included links to the repos and to the active releases here; you can check out which releases are actively worked on. Several stable releases come out approximately every week, and that testing happens for each; if you were to watch the stable release mailing list, you can see that activity. So what can you do if you would like to participate in the development process? You can start with being a user. One of the first steps to get your feet wet is to start using Linux on your development or test system, and then run basic boot tests, user tests, and basic sanity tests. If you are self-hosting like I am, running 5.12-rc6 as I mentioned, you will automatically test various scenarios, because you're actually using the system yourself in your normal mode; you'll be testing all of the things I identified on this slide. Does networking work? Does Wi-Fi work? Are you able to SSH into remote systems? Are you able to rsync large files from another system, or within the same system? If you happen to be doing backups on your system, you'll very likely be using rsync to do so.
If you are downloading files or git cloning, you are testing all of those aspects. And if you are playing videos, or listening to a webinar like this one, you're testing both audio and video. So it's useful to self-host; it helps you stay on top of things, and it is what the community calls helping yourself to help others. What I find is that when I am upgrading to a new distribution, if I have been on top of the integration, running all the RCs, I find problems ahead of time. I know I can report the problem, I can fix it, and I know I will be able to test the fix as well. Another aspect of validating your kernel, if you happen to be running it on your development system, test system, or your own system, is looking for critical error messages. Has anything new popped up? Do you have a new error message that you haven't seen before, or a new critical message? I'm talking about the messages you see when you look at dmesg at the critical, error, and warning log levels. You can also look for any new panics or traces that showed up. It's possible that a new error message gets added to existing code; that means we chose to report an existing error in a new message. So not all new error messages are concerning, but you have to pay attention to them. All right, this is a good time for me to take any questions. Do you have any questions at this time? All right, so let's start talking about kselftest itself. It is a regression test suite for kernel developers and users, a mix of user-space C programs and shell scripts. Some shell scripts and C programs depend on kernel test modules. You'll find those under lib/ in the repo; you will see most of them are named test_something. So you'll find a few of those.
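The log-level checks just described can be done with commands along these lines. This is a sketch; the flags are from util-linux `dmesg` and systemd `journalctl`, so adjust for your distribution, and run them on your own system rather than taking the exact invocations as canonical:

```shell
# Show only emergency/alert/critical/error messages from the kernel ring buffer.
sudo dmesg --level=emerg,alert,crit,err

# Same idea via the journal: kernel messages (-k) from the current boot (-b)
# at priority warning and above (-p).
journalctl -k -b -p warning
```

Comparing this output before and after moving to a new RC is a quick way to spot a message you haven't seen before.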
What the shell scripts do at times is load the test module, run the test, and then unload the module, and the same is true for some of the C programs: they depend on a kernel module to exercise parts of the kernel that you have to load a module to get into. For things like system calls, you can write a user-space program that does open, close, and so on, and test several other system calls that way, but in some cases test modules are necessary to exercise kernel code. It is a mix of white box and black box tests. It includes some unit tests as well as functional tests, and it also has hardware-dependent tests: for some driver-specific shell scripts, you will have to load the module and have the hardware present to test. There are also stress and performance tests; I'll get into those in a little bit. Okay, so we talked about how kselftest is a mix of white box, black box, and functional tests. The goals for kselftest: it's a developer-driven test suite, which means that as developers add new features, they write tests for them. So feature, functional, and regression tests are the focus of kselftest. When developers find a bug and add a fix, they will also add a regression test for it. So it's also bug-fix focused: when you fix a bug, adding a regression test helps verify the fix and makes sure the bug doesn't show up again unnoticed; if it shows up, we know about it. And then there are the subsystem tests; there are several subsystem tests that you'll see in the directory. The tests are organized and grouped into directories for each subsystem, feature, or API: for example networking, timers, syscalls, vm, you name it; there are several subsystems in that directory. Each subsystem has subtests underneath, and those in turn have individual test cases as well.
What it is not: it's not workload or application testing. The focus is on increasing both breadth and depth of coverage. Breadth means covering, for example, all the flags of a system call: if a system call takes flags, options, or other arguments, you test with those different arguments; that's breadth coverage. As far as depth goes, you go down into the various code paths that can be covered under a feature, as well as across the subsystems. That is the focus here. I'll quickly talk about how the kselftest patch flow works. All of these subsystem tests have dependencies on subsystem trees, because if a new feature gets added to, say, the memory management subsystem, or networking, or a new system call, the test that goes with it has a dependency on that feature; the tests and features go together. Since the test depends on the feature, and developers work on features and tests in parallel, it makes perfect sense to have the tests and features go together through the subsystem tree. So you will see a lot of patches going into kselftest that I don't handle; the maintainers for those subsystems submit them directly through their trees. Features and tests get reviewed and accepted upstream at the same time, via the subsystem trees. This model makes it easier for maintainers and developers, because if tests had to go separately into a kselftest tree while the feature went in separately, it would become difficult to coordinate all of that. So the easier thing to do is to have tests and features go together. The goal really is to make it as easy as possible to add new tests; make it easier for developers, so that they don't have to track their features and tests separately and do the work separately.
What does kselftest contain? Let's go through the components a little bit. There is the kselftest framework, which is common infrastructure for building, running, and installing tests, and then there are the individual tests themselves. The kselftest framework covers the common aspects: building, running, and installing tests, plus reporting test results and the interfaces to report them. The goal is that developers can focus on writing tests, while the common framework takes care of build, run, and install, which is common across all tests. This is what a typical test looks like. There is a main test, which is a target; it could be the mm test, or the timers test, or breakpoints, one of those. That is the directory where the target resides, under tools/testing/selftests; I'll show you where the source resides in git in a little bit. Then usually there are subtests under that, each subtest could have another test underneath, and then there will be several individual test cases. In some cases, a test could contain up to 100-plus individual test cases; the breakpoints test is a good example of that, and a few others such as timers also have a lot of test cases underneath them. A little bit more on the kselftest framework itself. There are C interfaces for reporting test results; you can find those in the kselftest.h file under tools/testing/selftests. There are pass, fail, and skip counts for a test: for example, if you have 100-plus individual test cases, how many of them passed, how many failed, and how many were skipped because of unmet dependencies. The common run framework provides wrappers to do the counts and report the results, because there are lots of shell scripts involved as well, and shell scripts cannot use the C interfaces for reporting test results. So we have a wrapper on top that counts the number of tests that get run.
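As a minimal, self-contained sketch of that convention (this is my illustration, not the in-tree wrapper, though the skip value 4 matches the `$ksft_skip` convention many in-tree scripts use):

```shell
#!/bin/sh
# Sketch of the exit-code convention kselftest wrappers rely on:
# 0 = pass, 4 = skip, anything else = fail.
ksft_skip=4

run_one() {
    # Run the given test and map its exit status to a result line.
    if "$@"; then status=0; else status=$?; fi
    case "$status" in
        0) echo "ok: $1" ;;
        "$ksft_skip") echo "skip: $1" ;;
        *) echo "fail: $1" ;;
    esac
}

needs_root_test() {
    # Skip gracefully instead of failing when a dependency is unmet.
    [ "$(id -u)" -eq 0 ] || return "$ksft_skip"
}

always_passes_test() {
    true
}

run_one always_passes_test
run_one needs_root_test
```

Run as a normal user, the second test reports a skip rather than a failure; run as root, it reports ok. That graceful-skip behavior is exactly what you'll see in the demo later.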
As the tests run, we count the passes and fails, and there are hooks for the shell scripts to return error codes, skip codes, or pass/fail codes, so that the wrapper catches them and reports the appropriate result. Individual tests can also use the test harness. The harness provides a runtime environment for multiple test cases and compares results with expected results. For example, say you make a system call with certain flags: what is the expected result you're looking for? Say you're testing error conditions, and you are expecting a particular error from the system call. Using the harness, you can specify what is expected when that test case is run, and the result will be compared with the error number. If we don't see that particular error number from the system call, the test fails, because we're looking to test the different behaviors of a system call with various user inputs. For reporting results, we use the Test Anything Protocol (TAP). It is a simple text-based protocol that we use for reports, so that parsers can parse the output and present it in a good format. The raw output is human readable, but if you want nice graphs and parsed results, that is possible using this protocol. We talked about how it's all Makefile-based: the build, run, and install interfaces are all Makefile-based, the run script is Makefile-based, and the install and packaging tools are all driven through Makefiles. I will show you some examples of how to run these tests in just a little bit. The kselftest default run includes all targets that are slated to run, and the kselftest Makefile has all of those targets listed in it.
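Here is what TAP-style output looks like. The shape follows what kselftest prints, though the results in this sample are made up for illustration:

```shell
# Emit a tiny report in the TAP format kselftest uses: a version line,
# a plan line (1..N), one "ok"/"not ok" line per test, optional "# SKIP"
# directives, and "#"-prefixed diagnostic lines for parsers to ignore.
emit_tap() {
    echo "TAP version 13"
    echo "1..3"
    echo "ok 1 selftests: size: get_size"
    echo "not ok 2 selftests: timers: posix_timers"
    echo "ok 3 selftests: breakpoints: step_after_suspend_test # SKIP"
    echo "# Totals: pass:1 fail:1 skip:1"
}

emit_tap
```

Because every line starts with one of a few fixed keywords, a parser can turn this into counts or graphs without any guesswork, while the text stays readable as-is.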
There are some stress tests as well. Some tests, for example CPU hotplug, memory hotplug, and the pstore test, support both default and stress modes. The default mode is what runs during the default run; if you want to run a test in stress mode, you have to do that separately, at the individual test level. Running as root offers the best coverage, because some of the tests require root access to be able to test a particular feature. And running mainline tests on stable kernels, if possible, gives you good coverage, for the reason that each release we keep adding new tests; new tests get added, new test cases get added, and sometimes new functionality gets added. However, running as root has its downsides, just like running mainline tests on stable; I'll get into that in a little bit. Running as root definitely offers the best coverage, but watch out for the side effects. And in some cases, if you use a virtual machine, you can run these tests safely. So this is what it would look like: you can run kernel selftests on the same system, on top of the 5.12 kernel, and in the second picture I'm showing the latest mainline selftests being run on the stable releases. You can also install it and run it on a target: you can either build the selftests and run them on the same system, which I'll show you in just a bit on my own machine, or you can build everything and take it to another test system. All of the test rings do that: they build, or cross-build, and copy the tests to the test system to run them. So when does this model not work? Running kselftest from mainline on a stable release doesn't work for bpf, and for some other tests, but bpf for sure.
bpf requires a tight coupling between the kernel version and the test version, so watch out for that. And stress tests could change the system state, like I mentioned with running as root. Some tests, like hotplug and pstore, support both modes, but some tests could reboot the system, for example if a panic test runs, or put it into suspend. If you don't run them as root, you're safe; but if you are planning to run them as root, just make sure you watch out for any side effects. The better thing to do is to experiment first in a virtual machine. We are constantly balancing developer and user requirements. Users want to run all the tests and get a feel for release validation: did kselftest run cleanly? Developers run the specific targets for their subsystems. kselftest, being a developer test suite, favors developer use cases; in other words, it supports developers more so than users, and we keep balancing the two. There is also driver coverage, error path coverage, and improving the common framework and infrastructure. As for increasing coverage: as you add more tests, it takes longer for the entire test suite to run. Coverage is a priority; when we started out we were looking at how long a run takes, but we decided that coverage is the priority and we don't worry about the timing as much, because you can select a subset of tests, and I'll show you how in just a little bit. Error path coverage is good to have. I encourage test writers to think about increasing error path coverage, and this is also an opportunity area for new developers: look at the coverage and add tests to increase error path coverage.
Driver coverage is challenging, especially with hardware-specific testing; it is difficult for everybody to have access to all the hardware. So the default run generally focuses more on areas like ioctls: even when there is a driver test, it tends to focus on verifying error paths on ioctls and so on. I won't go through this slide in detail, but you can find all of the branches, next and fixes and so on; next contains the upcoming merge window content and fixes contains the current RC content. You can find the sources under tools/testing/selftests, and the documentation is in Documentation/dev-tools/kselftest.rst. So how does building tests work? There are several options. You can build the tests directly from the main Makefile, in the kernel source root directory, by running the kselftest-all target. The silent option suppresses the Makefile build messages, so it's a good one to use. Then, using the TARGETS Makefile variable, you can select tests: you can build just one test, or more than one, like I'm showing here; this particular command will build the timers and size tests. When a kselftest build runs, it installs the headers first. This is because you want to run the kernel selftests for a kernel release with the headers from that release; that's why it installs the headers first and then builds the tests themselves. It supports cross-compilation and relocatable builds. And if a test fails to build, we have lots of tests to build, right, so it will keep going, so that it can build the rest of the tests and you can run them.
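The build options just described map to make invocations like these. They must be run at the top of a kernel source tree; the targets and variables are the ones documented in Documentation/dev-tools/kselftest.rst:

```shell
# Build every selftest target; -s suppresses the make build messages.
make -s kselftest-all

# Build only selected targets via the TARGETS variable.
make -s kselftest-all TARGETS="timers size"

# Equivalent, driven from the selftests directory itself.
make -C tools/testing/selftests TARGETS="timers size"

# Relocatable build into a separate output directory.
make O=/tmp/kselftest kselftest-all
```

The first build also installs the kernel headers from the tree, which is why the tests end up matched to that kernel release.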
There is also the kselftest dependency checker, kselftest_deps.sh. What it does is run on the test or development system where you are planning to build kselftest, and check: do we have all the libraries? Some tests require additional libraries to be installed, for example the capabilities library, or the fuse library for the fuse test. So this will check whether all of the dependencies are met on your test system, or whichever system you are building kselftest on; it's a good one to run. It can also check a single test if you want: does my vm test, or the fuse test, build on this system? I'm not going to go through too much of this, but that's how it works. As for running tests: you can run them from the main Makefile in the kernel source root directory with the kselftest target. That will build the tests if they aren't already built, and then run them. You can also select a subset of tests to run, just like the way the build works; run works the same way. We talked a little bit about stress testing: stress tests aren't part of the kselftest default run. If you want to run them from the main Makefile, you can do so; the run_hotplug target will run the hotplug tests, and run_pstore_crash will run the pstore ones. The timers have some destructive tests, too; they have been split out into a separate destructive set, because running them could change the time on the system. They can be run, but you have to choose to run them and be aware of the side effects on your system state.
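The default and stress-mode runs described above look like this in practice. Again these are run against a kernel source tree, and the hotplug/pstore target names are the ones in tools/testing/selftests/Makefile in recent kernels:

```shell
# Build (if needed) and run the default set of selftests.
make kselftest

# Run only selected targets.
make TARGETS="timers size" kselftest

# Hotplug and pstore stress tests are run explicitly from the
# selftests directory; they are not part of the default run.
make -C tools/testing/selftests run_hotplug
make -C tools/testing/selftests run_pstore_crash
```

Keeping the stress and destructive tests behind explicit targets is what makes the default run safe to use on a machine you care about.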
Reporting comes in two modes, detailed versus summary. Detailed mode gives you reporting on the individual test runs themselves. Developers favor detailed mode, as it helps them debug problems; of course they want to know how individual tests are doing or how a particular test case is behaving. Users, on the other hand, want just the summary of the entire test run, to keep things at a higher level. kselftest supports both. If you pass summary=1, it will display just the summary; I'm going to do a demo of this in just a little bit, so you'll see the difference. For installing, you can do that from the main Makefile; all of these kselftest- targets work from the main kernel Makefile, and that's the only place they can be run from; they funnel from the main kernel Makefile into the selftests Makefile. You can clean up after building if you no longer want the tests, and make gen_tar installs the tests and generates a nice tar file for packaging. gen_tar does need to be run in the selftests directory, though. So I guess we can do a quick demo. I'm going to stop sharing and then share my terminal. Do you have any questions at this time? Sure, there is one question that came in through the Q&A chat. Okay, let me check that. Yes, you do have to have a full kernel source tree on the system to run even one kselftest; in fact, you should have a kernel built as well. I will actually show the case where I have a kernel built in my repo right now. So I will share my window here. Can you see my terminal? Okay, so I do have a kernel built here. I want to run a test, so I am building one test here, a simple test for the demo, and this will build just that one test.
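The summary, install, clean, and packaging steps just mentioned map to invocations like these (kernel source tree required; the INSTALL_PATH and FORMAT values here are just examples):

```shell
# Run with one-line-per-test summary output instead of full detail.
make summary=1 kselftest

# Install the built tests to a staging directory (absolute path).
make -C tools/testing/selftests install INSTALL_PATH=/tmp/kselftest

# Clean the selftest build artifacts.
make kselftest-clean

# Package the tests into a tarball; run from the selftests directory.
make -C tools/testing/selftests gen_tar FORMAT=.xz
```

The install and gen_tar paths are what the test rings use to cross-build on one machine and run on another.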
In an earlier session, when I was playing with this and ran a kselftest build, I ended up installing the headers; that's why you're not seeing the header install here. Otherwise, the first time you run a kselftest build, you will see the kernel headers being installed. Okay, so we have that. And how does clean work? I'm just going to do a clean, and you can see that it removed the test it just built, right here. Then I'm going to show you run: run and build, since we cleaned this up, should actually build the size test and then run it. So you will see that it built size and then ran it, and you'll see these totals. This is the Test Anything Protocol output I was talking about: every diagnostic message in here starts with this pound sign, so that it is easier to parse. It will say "ok" if everything worked; "ok" is a keyword for parsers in the Test Anything Protocol. So it ran the test, and it gave the totals, the runtime memory report and what's in use, with the TAP header at the top. The wrapper picks all of this up and then reports it. This test has only one test case; that's why you are seeing "1.." here, the main test and then one test case. So let's see what we get if we run this in summary mode. So, summary mode; you can see the difference. It has suppressed all of that detail, the "ok", the size runtime, all of those totals, because the goal for this mode is simply: did the size test pass? If a tester or a user just wants to know whether the size test passed this time around, summary mode is very useful for that. So now I'm going to clear my screen, just so that we can run another test here that I want to show you.
This one is an example of a test with several test cases. It will run about 110 test cases and give a breakdown of what it is doing, with a test description; for example, writing a watchpoint, and it shows you what it's writing. It'll run these 110 tests and then tell you how many passed and how many failed, the totals for that. It also has a step-after-suspend test, if I remember correctly, which would be "not ok"; okay, so yeah, right here. That one requires root access, so it's just saying: I'm going to skip this test because you are running as a normal user, and I'm going to run the tests I can run. Any questions on this reporting at this point? While you think about questions, I'll also run this in summary mode to show you the difference. This one does have a lot of sub test cases, so you will see the difference very clearly in how it reports at a higher level. It's saying: I couldn't run the step-after-suspend test because you're not running as root, I'm skipping that, and it'll say skip here. And then it'll also say it has run the breakpoint test; it won't tell you how many test cases there are, but it'll run them. All right. So I want to quickly show you how you can also run the dependency tool I talked about. This is the tool you can run to see which tests will build on your system. It will parse the tests, look at all the libraries you have installed, and tell you whether the dependencies are met or not. I haven't run this in a while, so bear with me if it fails to run, but this is what it'll show you. For the vm test, I want to know: hey, do I have everything? So it'll tell you that the vm test depends on all of these libraries, and then it'll check and tell you whether those dependencies are met or not. Let me try timers while we are at it.
So it figured out what the timers test needs, and it will tell you whether those dependencies exist or not. I think I might not have a few of them; let me see. I'm going to pick one test. These are all different tests right here, one per directory. We ran this test just now, breakpoints. The breakpoints directory has a couple of different tests: breakpoint_test is the one that gave you all of those hundred-plus test cases, and it also has an arm64 variation because of architecture differences. And then you saw the step-after-suspend test, the one that didn't run. Okay, let's see if I can quickly find a test with missing dependencies, so it can show you that case. Maybe that one. Yes, missing libraries; I couldn't remember which test uses the fuse library, and I don't have it on this system, so it tells you: hey, you need to install that first. All right, so I think my demo part is done at this point. You can see the number of tests we have here, and each area keeps expanding; every release, each area gets more tests added. Sometimes we have whole new tests coming in, such as resctrl; there is a resctrl test that I have in linux-next right now, with changes going in this time around. So I will stop sharing now and switch back to the presentation. Okay, so any questions so far? Okay, I see a question: do kselftests run from user space via system calls, or can tests be written in kernel space? The answer is twofold. For system calls, it's definitely user space, because all of the syscalls and ioctls are run from user space. But when a test requires kernel assistance, when something needs to run in kernel mode, that's when we use the model where a test case uses a kernel module.
Let me show you — that's a very good question — a test that fits that. I'm going to go to my window again. For example, this: I know the lib tests load modules. This one depends on the kernel test_printf module; that's how some tests exercise kernel code. If you go into lib, you will see the test modules — there are lots of them, but you can see printf right there; that is the one triggered by the shell script. That's how it works in some cases: if a developer thinks they can run a test better with a kernel test module, they write one. It is a regular kernel module. It does test_number, test_string, test_pointer, and so on — when you load it, it runs all of those tests. Those are all the tests that run when the shell script loads the module. Next question: is there a way to test hardware configuration using this? Yes, that would be one use for it. Let me see if any examples come to mind. You could exercise hardware, but you do need the hardware, unless you can somehow mock it in your test module. So yes, this is one way you could do it. I'm switching back to my presentation now — are you seeing my slides again? Yes? Great. So, contributing new tests. I'll talk quickly about contributing new tests, now that you have seen how it works and have seen a few tests here. When contributing a new test, pay attention to reporting pass, fail, and skip conditions; you have seen how skips, passes, and fails are reported in the wrappers as well as in individual tests.
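The module-driven pattern described above can be sketched as a small wrapper script: load the kernel test module (its cases run at load time), then inspect the kernel log. This is only a sketch of the pattern, not the actual kselftest lib wrapper; `test_mod` is a placeholder module name, and exit code 4 follows kselftest's skip convention:

```shell
# Wrapper pattern for module-driven selftests (sketch; "test_mod"
# is a placeholder, not a real kselftest module name).
ksft_skip=4   # kselftest convention: exit code 4 means "skipped"

run_module_test() {
    mod="$1"
    # Need root and the module present to run the kernel-side cases.
    if [ "$(id -u)" -ne 0 ] || ! modprobe -q "$mod" 2>/dev/null; then
        echo "SKIP: cannot load module $mod"
        return "$ksft_skip"
    fi
    # Loading the module ran its test cases; results are in the kernel log.
    dmesg | tail -n 20
    modprobe -r -q "$mod"
    return 0
}

run_module_test test_mod || true
```

Skipping (rather than failing) when the module can't be loaded is what lets the same test run gracefully on kernels built without the test module.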
Clearly identifying the pass, fail, and skip conditions with clear messages is very useful for users. A test should say why it failed, of course, and why it was skipped — very clearly, if it is skipped because a feature cannot be tested, perhaps because the kernel does not support that feature. The reason this matters is that we want to run mainline tests on stable releases, and a stable kernel might not have a given feature. We want to run every test we possibly can on that kernel version, skipping gracefully as needed when the kernel does not support a particular feature. In some cases it is root versus non-root: you saw that one of the breakpoints test cases needed to run as root, and since I was running as a normal user it simply said it couldn't run that test. The goal is really to run as many tests as possible given the configuration and features the kernel supports, and skip the others gracefully. Okay, how do you hook into the framework? You add your new test to the kselftest Makefile, which has a list of targets. We keep them in alphabetical order, top to bottom, so that when a new test gets added — or another test comes in through a different subsystem tree — the Makefile doesn't become a merge conflict. If I have time, I will show you what the targets look like in the Makefile. Once you add the target, you are mostly set, but you will also need a test Makefile under your test directory. What it does is provide enough information for the common layer to know which programs to run, which to copy when installing, and which ones need to be installed.
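The pass/fail/skip conditions above are communicated through exit codes; the constants below mirror the ones defined in `tools/testing/selftests/kselftest.h`. A minimal sketch of the root-versus-non-root check, with the uid passed as a parameter so the sketch is deterministic regardless of who runs it:

```shell
# Exit-code convention used by kselftest (mirrors kselftest.h).
KSFT_PASS=0
KSFT_FAIL=1
KSFT_XFAIL=2
KSFT_XPASS=3
KSFT_SKIP=4

# Decide pass vs. skip the way a root-only test would,
# printing a clear reason for the skip.
check_root_or_skip() {
    uid="$1"
    if [ "$uid" -ne 0 ]; then
        echo "SKIP: this test needs root"
        return "$KSFT_SKIP"
    fi
    echo "ok: running privileged test"
    return "$KSFT_PASS"
}

check_root_or_skip "$(id -u)" || true
```

Reporting a distinct skip code (rather than a failure) is what lets wrappers and CI systems tell "couldn't run" apart from "ran and broke".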
The framework also generates an overall run script. If you run kselftest on your build system, that's not a problem — it just runs. But if you are planning to install it and run it on a different system, you need an overall run script, and the framework can create that for you. The individual tests need to tell the common layer which test scripts to copy and what kind of tests to emit. For example, if your test loads a module, some of that is handled by the common layer, but you have to tell it which test to run and how to run it. Then there is a configuration file: the config file specifies the kernel configurations the test depends on, so you can specify configurations for individual tests there. My recommendation is to leverage the framework as much as possible — it makes it easy to run tests. If, for some reason, a custom build is necessary for a test, there are several Makefile targets that can be overridden, but override them only if it's absolutely necessary. I have a question I'll take now: is there a web link describing the process of becoming a tester? I'm thinking the question is about the Linux kernel itself, not the Linux Foundation. There is no formal process; you can look in the kernel repository for documentation — the kselftest documentation and the KUnit documentation, for example, both tell you how to run those tests. My suggestion would be to play with both of those tools on your system and start from there. That's how you can get involved.
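As a concrete sketch of the per-test Makefile described above: it names the programs and scripts for the common layer and then pulls in the shared build/run/install rules from `lib.mk`. The `mytest` names below are placeholders; the variable names follow the existing selftest Makefile conventions:

```make
# tools/testing/selftests/mytest/Makefile (sketch)
TEST_GEN_PROGS := mytest          # built from mytest.c, run by default
TEST_PROGS := mytest-wrapper.sh   # scripts installed and run as-is
TEST_FILES := test-data.txt       # extra files to copy on install

include ../lib.mk                 # common build/run/install rules
```

The accompanying `config` file in the same directory would then list the kernel options the test depends on, one `CONFIG_...=y` (or `=m`) entry per line.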
As far as what you can do to help: obviously, run kselftest on your development and test systems — or whatever systems you want to run it on — to validate the kernels; release candidates keep coming in. Write new tests, because existing tests can always use enhancement, and the driver area definitely needs help — if there were a model for mocking in kernel space and exercising drivers that way, it would be very useful. And reviewing tests, and testing the tests, is of course also very helpful. Any other questions at this point? If we have a bit of time, I can show you what the Makefile looks like. Back again. The selftests Makefile has all of these targets. The reason we keep them in this order, as I explained, is to avoid merge conflicts on this file. You can see we go from A through Z — maybe we're missing some letters, but we have roughly 70-plus tests right now (there are some conditionals here), and each test can have several test cases and sub-tests underneath, so you are looking at a large number of tests. Good question: when to pick KUnit versus kselftest? The answer is that KUnit works very well when you want to test large chunks of the kernel, or parts of the kernel that might not have any user-triggerable interface or API. Kselftest works very well when you have system calls — like seccomp right here — or any API you want to test to make sure we are not breaking the kernel ABI and API.
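The alphabetical target list in the top-level selftests Makefile looks roughly like this (abridged; the full list is around 70 entries — check `tools/testing/selftests/Makefile` in your tree):

```make
# tools/testing/selftests/Makefile (excerpt, abridged)
# Targets are kept in alphabetical order so that new tests arriving
# through different subsystem trees don't collide in merges.
TARGETS = breakpoints
TARGETS += capabilities
TARGETS += cpu-hotplug
# ... roughly 70 more targets, A through Z ...
TARGETS += vm
TARGETS += zram
```

Adding a new test area is then a one-line `TARGETS += yourtest` in its alphabetical slot, which is exactly why merge conflicts stay rare.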
One example where KUnit would work well is actually the lib module I was showing you — test_printf. If you have to write a kernel module to be able to run a test, KUnit becomes a good option, because KUnit runs at boot time, or the KUnit script can trigger the kernel tests directly. The second part of the question was: what is a common example of a driver test? There are a couple here under the drivers directory — I think the GPU test is a good example; this is an i915 test, and it might have come about as a bug fix. I will show you the one that I wrote, because I was dealing with kernel bugs: I wrote a shell script to exercise the paths of this driver that ran into problems — I had been running into bugs and recording them. I maintain the usbip driver, so I use this test whenever a patch comes in for me; it tests various combinations, and it has a user-space tool within the kernel tree as well as the kernel driver itself. What happens often with this driver is that you have situations where user space tries to do something after the driver is unloaded. So I have a sequence of commands I run from user space to exercise the driver and make sure we don't panic — that if a user-space command comes in after the driver is removed, it exits gracefully. That's one example of a script I wrote to exercise bug fixes, and I still use it as a regression-test mechanism. Any other questions?
Let me see if there are any questions in the queue... well, since we don't have any, I can show you a few more things — the stress-test modes I was talking about, starting with the CPU hotplug test. We can look at the config at the same time. The config tells you what this CPU hotplug test depends on: you have to build the kernel with CONFIG_NOTIFIER_ERROR_INJECTION for it to work. And if you look at the Makefile itself — this is the one I was talking about, and why it's important. The test is a shell script, so why do I care to have a Makefile when make does nothing? This is the reason: it tells the kselftest common install layer, "I need this program copied if the user is installing this test." And this file right here, lib.mk, right under the selftests directory, is where a lot of the common framework lives — building, running, emitting tests. This particular test is an example of where emitting tests comes into play: it has two modes to run. In full mode it runs the full test — meaning you're yanking CPUs out. With CPU hotplug you don't want to run the full stress-test mode by default; your kselftest run would hang, which is why it is excluded from the default run and kept as a separate mode. That's a good example of a stress test. There is another, similar stress test: memory hotplug. You will see similar characteristics in that test as well, because both are hotplug tests. The memory hotplug test requires memory hotplug and all of these kernel options to be built into the kernel.
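The config fragments for these two hotplug tests list the options the kernel must be built with. From the demo, the cpu-hotplug one contains CONFIG_NOTIFIER_ERROR_INJECTION; the memory-hotplug entries below are of the shape I recall and should be verified against `tools/testing/selftests/*/config` in your tree:

```
# cpu-hotplug/config (as mentioned in the demo)
CONFIG_NOTIFIER_ERROR_INJECTION=y

# memory-hotplug/config (options of this shape; verify in your tree)
CONFIG_MEMORY_HOTPLUG=y
CONFIG_NOTIFIER_ERROR_INJECTION=y
CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
```

If the running kernel lacks these options, a well-behaved test reports a skip rather than a failure, per the skip conventions discussed earlier.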
If you look at this test's Makefile, you will see that it also supports a full-test mode and a default mode where it only removes a couple of memory modules. The shell script for it has a -r option that controls how many to take offline. The default mode runs with a ratio of 2: it looks at the number of memory modules and takes 2% of them offline, so that running this test doesn't impede the kselftest run itself. Then there are architecture-specific tests. The powerpc area has lots of tests, all powerpc-specific — you can see several underneath. Similarly, arm64 has a bunch of tests underneath. Another architecture area you will see is sparc: they have tests for their ADI feature and its driver, so that's a driver test. So you have a combination of various tests. A good example of a feature test is sysctl, which also has a config file. Let's see what it's testing: it's a shell script that invokes calls to exercise sysctl, checking various conditions — sysctl write strict, and so on. Take a look at all these tests — it's another great way of learning what the kernel does. Seccomp is a good example: it has a benchmark test. Like I mentioned, we have some benchmarks, and this is one example hiding here. And the watchdog test is a driver test — this is the one I was referring to that makes a lot of ioctl calls; you will see a bunch of ioctls. So we have a mix of those tests. Any other questions? If not, I will switch back to the presentation.
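The ratio behavior described above — offline only a percentage of the available memory blocks in the default mode — comes down to simple arithmetic. A sketch (the default ratio of 2 follows the transcript; the minimum-of-one floor is my addition for the sketch — verify the details against the memory-hotplug script itself):

```shell
# Compute how many memory blocks to offline for a given ratio:
# ratio 2 means 2% of the blocks, with a floor of one block,
# so the test exercises hotplug without impeding the whole run.
blocks_to_offline() {
    total="$1"
    ratio="${2:-2}"   # default ratio: 2 (percent)
    n=$(( total * ratio / 100 ))
    [ "$n" -lt 1 ] && n=1
    echo "$n"
}

blocks_to_offline 200 2    # -> 4
```

In full stress-test mode the script would instead walk all the blocks, which is exactly why that mode is kept out of the default run.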
All right, so that's where you can help — there is more work to be done, and new things constantly come up. Because of the way new patches go into the kernel, at times tests don't build correctly, or they don't skip the cases they should skip, or their error messages could use improvement when they fail or, especially, when they skip. Some tests do a good job of all these things, but others don't fail gracefully: if a test really needs root access, it won't clearly tell you why it isn't running. So there are constant improvements we can keep making. Any other questions, or any other demo I can do with the time we have? So thank you for joining us today. We hope this will be helpful in your journey toward effective and productive participation in open source projects. We'll leave you with a few additional resources for your continued learning. Thank you so much for your presentation and your time today. Thank you, everyone, for joining us. As a quick reminder, this recording will be posted to the Linux Foundation's YouTube page later today. Thank you so much again — we hope you will join us for future mentorship sessions. Have a wonderful day. Thank you.