Welcome everyone. I hope you've had a great time so far. I'm very excited to be here, and we're going to look together at some testing practices and lessons I've collected over the years.

Let me start with a little about myself. My name is Eddie, and I've been working at Red Hat for the last five years as a software engineer. My development career started with no tests at all: I didn't know what a unit test was, and the QA team had never tested anything automatically, although they wanted to. So I had to start learning about this myself and practice it little by little, alone. I consider that one of my big steps forward, because it really helped me do my work better and faster, and since then I've kept trying to improve in this area.

We'll start with some basic definitions of testing, just so we're in sync, and later we'll jump into practices I consider useful.

So what is testing? To me it's pretty basic. It allows me, and the company producing the software, to increase quality. It helps development itself, because we stop looping between trying things in the lab and going back to the editor. And it also acts as a kind of documentation: it expresses how the software or application you created is meant to be used, and it shows you whether that usage makes sense or not.

There are probably more levels of testing than the three I show here, but let's focus on these, because they are the main ones. At the unit level, the developer writes a little bit of code and either writes the test before, which is TDD, or writes the test after; either way, the loop between tests and code is pretty short, from a few seconds to a few minutes. Integration tests are where we start testing components together, to check that they interact correctly, that our API makes sense, and that there are no other problems.
And eventually we reach the system tests. Conventionally these were the QA department's domain, but lately, in many places, they already start in development, and the end-to-end tests are shared between development and QA. The points I'm going to talk about should fit all of these levels pretty well; if something is specific to one of them, I'll try to mention it. I intentionally did not pick principles that fit only one level.

Now some test definitions: what does a test look like when you write it? It has a setup; it has an exercise, which is the operation you are trying to check; it has a verification of that operation's outcome; and it has a teardown, where you clean up or do whatever else is needed to continue on to the next test. The setup and teardown together have another name: the fixture. The exercise and the verification are the test body.

An assertion is the check, the comparison between the expected and the actual result. Assertions usually show up in the verification step, but they can also appear in the setup and the teardown, because they sometimes help us stop the test from continuing when the environment is not ready or the given requirements are not met. Some test frameworks even raise different errors for a problem in the fixture versus a problem in the test body itself, so we understand that it's not the thing we're trying to test that is broken, but the environment we built for the test to run in.
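The four phases can be sketched in plain Python; the Dog class here is a toy stand-in for whatever you are testing:

```python
class Dog:
    """Toy stand-in for the system under test."""
    def __init__(self):
        self.running = False

    def run(self):
        self.running = True

    def rest(self):
        self.running = False


def test_dog_runs():
    dog = Dog()         # setup: build what the test needs (the fixture)
    dog.run()           # exercise: the single operation under test
    assert dog.running  # verify: compare actual result to expected
    dog.rest()          # teardown: leave a clean state for the next test
```

In a real framework, the setup and teardown lines move into fixture hooks, as we'll see later with pytest and Ginkgo.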
The last two definitions are skipping and focusing. Skipping is simply choosing which tests not to run. It can be explicit on the runner's side: the person running the tests chooses, for example, to run only the tests he cares about, or only the tests that fit a specific setup. It can also be explicit on the author's side: the test author writes inside the test that it should not run, for example because he knows there is a bug, or the feature is not implemented yet, so it gets skipped. We'll talk about some of the problems with that in the next slides.

The last kind is conditional skipping at runtime. This can happen when, for example, the test you are running doesn't fit a Windows environment: you check which operating system you are on, and if it's Windows, you skip. This is actually very dangerous, and you should be careful, because you may see the run come out green when you did not actually run the test. At the very least, check what was skipped.

Focusing is the opposite. I found it useful when I was trying to debug or develop a specific portion of the code: you run only a specific test and try to make it pass, or exercise whatever you wanted to do. One thing is important here: to be able to do this, the tests themselves need to be isolated, meaning they should not depend on each other. If they do, a test may fail just because some other test did not run. So be careful not to write tests that are not isolated; we'll come back to this in the next slides.
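Here is how a runtime skip might look in pytest; the test name and the reason are invented for illustration:

```python
import sys

import pytest


# Conditional skip at runtime: the mark carries a visible reason, and
# pytest reports the test as "skipped" rather than silently green.
@pytest.mark.skipif(sys.platform.startswith("win"),
                    reason="relies on the POSIX path separator")
def test_posix_separator():
    import os
    assert os.sep == "/"
```

For focusing, pytest lets you run a single test with `pytest test_file.py::test_posix_separator` or a subset with `-k <expression>`; and to avoid the green-but-skipped trap, `pytest -rs` lists every skipped test with its reason at the end of the run.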
So let's start with the practices. I'm going to present some points and give examples in Python and in Go. In Python there is basically pytest; I think no one, or very few people, are using anything else. In Go there are many frameworks; in this case I'm going to give examples in the one called Ginkgo.

My favorite first principle actually comes from my days as a support engineer: if you think something is broken, you should always see it fail first, and only when you think you've fixed it can you say, okay, it passes. Writing tests is the same. You want the test to fail first, because that means it actually checks something, and you intended it to fail. Now you can fix the problem, adjust the production code, or adjust your environment so it passes. If you don't do this, you may end up with tests you wrote that are running and will always pass. It happens to me once in a while, because I don't always follow my own principles, so be careful about it; I really recommend you always exercise this. If you practice TDD, this is embedded in the workflow, because you write the test well before you write the production code and watch it fail.

Next: fixtures versus test body. I spoke about this a little earlier; we have an interest in separating the setting up and tearing down, the given requirements, from the actual thing we are trying to check. Here's an example in Python: we create a dog, we tell him to run, and then we check that he is really running. The creation of the dog is embedded inside the test, so if creation fails we will get an error and think the dog cannot run, when actually he cannot be created. What we can do is move the dog's creation, and its teardown, into a fixture.
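A pytest version of that refactor might look roughly like this (the slide's actual code may differ):

```python
import pytest


class Dog:
    """Toy stand-in for the system under test."""
    def __init__(self):
        self.running = False

    def run(self):
        self.running = True


# Embedded creation: if Dog() raises, the failure is reported as if
# the dog cannot run, when really the dog could not be created.
def test_dog_runs_embedded():
    dog = Dog()
    dog.run()
    assert dog.running


# Creation moved into a fixture: pytest reports a setup *error*,
# distinct from a test *failure*, if construction breaks.
@pytest.fixture
def dog():
    return Dog()


def test_dog_runs(dog):
    dog.run()
    assert dog.running
```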
In pytest you can create a setup and a teardown this way, and then the test only includes the execution, the dog's run, and the assertion that he is running.

The next point is something we also raised before: making sure that tests have no dependencies on other tests. To allow that, we need to be good testers, good citizens, and clean up after ourselves, not leaking the state of some object or environment into the next test. Here is an example in Ginkgo; it may be a little messier than pytest, but I hope you'll bear with me. We have a dog that is created globally, and in the setup, the BeforeEach function, we tell him that he is hungry. In the test itself we give him goulash to eat and check that he is no longer hungry. The problem with this test is that we don't know the dog's resulting state: making him hungry, or telling him to eat, may have all kinds of side effects, so the next test to run against this dog, even the same test again, may behave unexpectedly.

So it would be nice to have the dog's state cleaned; I chose to have him reborn. One option is to add another setup fixture, run before the test starts, that tells him to be reborn; once he is reborn the dog's state is clean and we can continue. But there is still a problem: we have no control over what happens in other tests. If someone writes a test that uses the dog and doesn't call this reborn function, you will have a problem. So if we want to control it and make sure everything is fine, we should simply clean up after ourselves: in this case, we add the dog's rebirth at the teardown step.
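In pytest, the same clean-up-after-ourselves pattern is typically a yield fixture; the Dog class and the reborn method here are a sketch mirroring the Ginkgo example:

```python
import pytest


class Dog:
    def __init__(self):
        self.hungry = True

    def eat(self, food):
        self.hungry = False

    def reborn(self):
        # Reset all state so nothing leaks into the next test.
        self.hungry = True


# Shared across tests, like the globally created dog in the Ginkgo slide.
SHARED_DOG = Dog()


@pytest.fixture
def dog():
    yield SHARED_DOG      # the test body runs here
    SHARED_DOG.reborn()   # teardown: always leave a clean state behind


def test_dog_eats(dog):
    dog.eat("goulash")
    assert not dog.hungry
```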
In Ginkgo that is the AfterEach hook, and then any test can use the dog again with a clean state. This is true, by the way, for databases and anything else you might want to revert to its previous state and clean up, so that the next test does not assume anything.

Next: keeping assertions visible. This is about where the assertion should be executed; here's an example in Ginkgo as well. If we assert inside helpers, then when we read the test itself, at the test-body level, we don't see that anything was asserted: the assertion is buried deep inside that helper, here the writeCode function. This is a simple example, but if you have twenty calls inside and the last one asserts, you have big trouble exposing where it happened, why it happened, and what happened in between; you have a real problem collecting data when it blows up. So it is useful, if you can, to avoid that and move the assertion to the test body, where it is visible. With Go there is a simple solution: instead of asserting inside the helper, the helper just returns the error, and we assert on that error in the test body. If it were pytest, we could catch an exception raised by the helper and then assert whatever we wanted in the test.

Next is traceability. Traceability is about finding out, when there is a failure, where it was and what it was: understanding, and collecting, as much information as possible. In this context, we want the assertion to expose everything that matters.
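Keeping the assertion in the test body is itself a traceability aid. A Python sketch of the helper point (the writeCode helper and its strings are invented for illustration):

```python
# Buried assertion: a failure here points deep into the helper, and
# the test body gives no hint that anything was checked at all.
def write_code_buried(developer):
    code = f"{developer}: print('hello')"
    assert code.startswith(developer)  # hidden from the test body
    return code


# Visible alternative: the helper only produces a result (or raises),
# and the assertion stays in the test body where it can be seen.
def write_code(developer):
    return f"{developer}: print('hello')"


def test_write_code():
    code = write_code("eddie")
    assert code.startswith("eddie")  # assertion visible at a glance
```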
That means having a good test name, good names for the variables being asserted, and their context, and the framework should present it all in a nice format so we can read it. It should also be very easy to understand where exactly the test failed, because a test may contain multiple asserts, and we want to know which one fired.

Another point which may help you trace a problem is having a correlation between the IDs used in the test and the IDs used in the logs of your system. For example, I work a lot with virtual machines. If I have a test that exercises a virtual machine, and that virtual machine has a unique ID, a name prefix or something, then we want to be able to trace it in the system logs as well, so it's important to include it in the test logs too. And we should obviously collect all the information when a test fails: have some reporter that goes to the system, collects the logs from all the components, and gathers them in one place so we can examine them.

On logs, there is another small point that you may find useful; we found it useful a lot of the time. You can inject a label or a marker into your system's logs from the test. Then you can look in the system's logs and see that at this point the test started, because it injected a record, and when it finished it injected another one, which helps you understand that these logs belong to the test itself.
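A minimal, self-contained sketch of the ID correlation and the injected start and end markers; all names here are invented, and a real system would log to its own files rather than an in-memory buffer:

```python
import logging
import uuid
from io import StringIO

# Capture the "system" log in memory so the sketch is self-contained.
log_buffer = StringIO()
handler = logging.StreamHandler(log_buffer)
handler.setFormatter(logging.Formatter("%(message)s"))
log = logging.getLogger("demo-system")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.propagate = False


def start_vm(name):
    # Stand-in for the system under test; it logs the same unique ID.
    log.info("system: starting vm %s", name)


def test_vm_boot():
    # Unique, greppable ID shared by the test and the system's logs.
    vm_name = f"test-vm-{uuid.uuid4().hex[:8]}"
    log.info("TEST BEGIN test_vm_boot vm=%s", vm_name)  # injected marker
    start_vm(vm_name)
    log.info("TEST END test_vm_boot vm=%s", vm_name)    # injected marker
    return vm_name


vm_name = test_vm_boot()
```

Everything between the two markers, filtered by the unique name, belongs to this test run.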
The next one is one of my favorites. In unit tests it's usually not a problem to create resources and destroy them on each test. But if you are working on end-to-end tests, and you have some heavy-duty resources, like creating a database or a virtual machine, then you will sometimes want to reuse them across multiple tests rather than creating and destroying them every time. Here's an example in pytest: you have a virtual machine as a fixture, and it is used in two tests, a connectivity test and a console test. The way you see it now, it creates and destroys the virtual machine for each test. But we can make a very small change: just add a scope to the fixture, in this case module scope; in practice it's very easy. Then the two tests that use the same fixture will reuse the same virtual machine: the first one to run will trigger the setup, and only after the last one runs will the teardown, the deletion, be called.

Next, continuing on failure. I was involved in a project where, when you ran a suite of tests, it stopped at the first failure. That is sometimes useful, mainly when you are trying to debug something, or maybe when you run locally and want to understand what's going on. But if you are running on a CI, you will probably want to run all your tests, and then, for the ones that failed, try to understand whether there is any correlation between them. So you have an interest in running them all.
The next term, xfail, expecting failure, is mainly used in Python testing; I did not see it used in other places, but I found it very useful when I was developing in Python, and I think it's useful for anyone. We may have a test exercising a feature that is not implemented yet, or a test someone wrote to reproduce a bug whose fix is not there yet. We still want to run the test, but state that we know it is failing, reference the bug, and declare the specific error we expect. This is how it looks: in this example Slack is making us coffee, because usually that's the only thing it doesn't do yet. We create the Slack client and tell it to make us some coffee, but we know this is not implemented yet, so we just wrap it as xfail and say what the problem is.

Related to xfail, and to the skipping we spoke about earlier: be careful with tests like this one, a test about a dog talking. If we mark it as skip, it will always be skipped; the body will never run. But if we mark it as xfail, telling the runner that we expect it to fail with this specific exception, it will always run. So it is important to try not to leave dead tests, code that never executes, in your test code base.

Parallelism is a big subject; we could probably give a full lecture about it, but just be careful. When you test the same system, you can run multiple tests in parallel against it. That may save you some time, but on the other hand it may also introduce some randomness into the execution, because you may have collisions and you need to take care of them. You need to balance run time against random failures.
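The skip-versus-xfail contrast, sketched in pytest; the Slack class and the bug references are made up for illustration:

```python
import pytest


class Slack:
    """Toy client; coffee-making is the one thing it cannot do yet."""
    def make_coffee(self):
        raise NotImplementedError("coffee is not supported yet")


# skip: the body never runs -- effectively dead test code that can rot.
@pytest.mark.skip(reason="BUG-1234: dogs cannot talk yet")
def test_dog_talks():
    pass


# xfail: the body always runs; pytest reports XFAIL while the expected
# error occurs, and flags the test once the feature starts working.
@pytest.mark.xfail(raises=NotImplementedError, reason="BUG-5678")
def test_slack_makes_coffee():
    Slack().make_coffee()
```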
Randomization and logic: be careful not to put a lot of random input into your tests, because we usually write tests around something we expect deterministically; with random input the test may behave differently on each run and cause other problems. And limit the logic of a test: it should be pretty simple, not complicated with if statements.

The last thing is some anti-patterns, I would say. One of them is stopping on failure: this is good when you are debugging something, but it is usually not that good when you want to see the whole picture. Another is cleaning up in the setup and not in the teardown. We talked about why this is problematic: we don't know what will happen to the next test, so if we do that, it's half a solution, not a full one. And that's it.