I think it's recording now. OK, so this talk is about mastering the art of open source testing. Do you all do testing here? All open source testers, yes? You contribute to Fedora? I use Fedora 18. Oh, OK, that's great. You? I did one laptop trial. Oh, OK, that's nice. I do commercial testing. You? I'm scared. Ah, OK, that's great.

So, what's the time now? 2:40. OK, I'll try to finish as soon as possible so we can have some discussion about testing afterwards, and you can ask me questions about testing in open source and how it differs. This is my content for today: changing the mindset towards testing, then a flexible model to support your testing activities with different tool sets, and finally the conclusion.

First, changing the mindset towards testing. Why do we have to change our mindset? Because testing today is completely different from what it was 20 years ago; software has changed a lot over that time, especially in open source. We don't have requirements, we don't have any other documents; all we can do is look for bugs. But "look for bugs" is very narrow, and it's hard to even say what counts as a bug. A goal like "confirm the software works as expected" is just as vague: we have to know what "expected" means, and in open source that's hard to find. Even in commercial projects it's hard to find. If you look at the graph, the bugs we find through scripted testing, whether driven by test cases, by requirements, or by the developer's point of view, are all sitting inside the box.
But if you look, there are other areas outside of that, and most of the bugs sit outside the box, so those are the ones we have to find. Because what we are looking at here is the product itself; we are not looking at the human factors associated with the product. That is the key area we have to concentrate on.

One more thing: for me, testing is a process of gathering valuable information, and that information can vary a lot. It is also a process of learning. How do you learn? You can explore the application and run experiments on it; once you run an experiment, you can observe the results and analyze them. That way you learn far more about the product than by going through test cases, requirements, and other artifacts driven by the developer. As part of this, you have to ask questions. There is a lot about any product you can explore, and you can find interesting things. The main question is: what is the product for? What problem does it solve? Then: what does it do? What kind of data does it process? What does it depend on: which platforms, which dependencies? How will it be used by the users, why should they use it, and what will they use it for? These are the questions we have to answer through our testing. And notice that the most important things all sit outside the requirements, because we don't know how the end user is going to use the application. The other important thing is that we also have to accommodate the people who are involved in the project.
For testers, for example, we have to have features for testability, such as log files, and ways to monitor the progress of certain transactions. We have to accommodate all of that, and we also have to think about the marketing future of the application.

Now, a flexible model to support your testing. We can explore the application, and we could keep exploring for years, but on its own that still won't give us usable information. One model I find very interesting, and use quite a lot, is this: I take a small part, a feature, and I learn it through exploration. Then I create test charters for it. Once I have the test charters, I can create test ideas for each charter, covering whatever is available in the feature. Then I execute them and analyze the results to see whether the product is usable for the end user or not.

Here, exploring and learning are really one process, but there are structured ways to explore. One of them is touring, a model developed by Michael Kelly. The purpose of touring is to understand how the system or a feature works, to build a mental model of it so you can make other people understand how the system works, and to create a list of the controls and components you come across. When you go on a tour, you will see a lot of things. Take Mozilla's preferences component as an example: you will see General, Security, Privacy, and the features attached to each panel. You can record them, draw a diagram, take a screenshot, or make a mind map. Whichever way you choose, record them so you can create test ideas from them later and execute those once you are in that final stage. These are the aspects of the software you can tour. The first one is the feature tour.
In open source the feature set is quite big, and you cannot tour all the features; there are simply too many. So usually you choose one feature you are most comfortable with, go over it, and find out what components are available, what functionality they provide, those kinds of things. Then complexity: look for the top five most complex parts of that feature. If something is complex, ask why it is complex, what it consists of, and how it is built. Ask questions and record them in whatever format you like.

Next, the claims. What does the product claim to be? What does it say it does? Again with the Mozilla preferences component: it claims to let you customize controls, options, and add-ons. Does it really help us do that? Those are the claims you have to check. Then configuration: you have to identify where the application stores the changes you make. For example, in the preferences component you can set the home page to Google.com. How is that saved? How does changing it affect the home setting, the privacy settings, and the other application settings? You have to check that.

Next is the user tour. Here you list the top five users: who is going to use the product, and how. For example, take bash, or a terminal; I did a small example of this. If you ask who will use it, there are novice users, system admins, programmers, other computer personnel, and students. Each of them has different requirements for the terminal itself. Some of them just run commands like ls and cd to move around, while some programmers will execute shell scripts. They have different requirements.
When you list those requirements, you can easily see: these are the users, these are their requirements, and this is the kind of relationship they are going to have with the system. That is the important point: whatever system you test, it has relationships with the people involved. We have to find the problems in those relationships, rather than only looking for problems in the system, and find out how a problem can affect those relationships.

Then the testability tour: look for the features that can help you speed up or support your testing. That tour helps you find those features. Then scenarios: find five or six real scenarios of what the user is going to do. For example, on a Fedora system, users will look for apps; how are they going to do that? They will use the search feature in the menu to find what they want. You have to find those kinds of scenarios and list them. All of this is used to understand how the system works; it is not about going into depth on every feature. It is about understanding how the system works, and whether it lets other people understand how it works. One thing in open source is that we have to confirm to the user whether the action they took succeeded or not; we have to notify them one way or another. Does the system do that? It is a small thing, but it improves the usability of the system.

Next, variability: you can change a lot of things. For example, in Firefox, small changes can affect the whole application. Just clearing the Firefox cache is one small variable.
Yet it has an effect on the whole application. Try using some features that are already cached in the Firefox browser, then try them in private browsing mode and see how that part behaves. Interoperability is quite important, because most systems support it: we export data and we import data, and we have to pay close attention to that. Data itself is also interesting, because most systems consume a lot of data, process it, and output it. The questions are: what kind of data does the system process, and how does it react to invalid data? Another important question is whether the application already ships with predefined data. Those are the things that help you understand the system well.

The final tour is structure. Structure tells you what the product is built with. Do I have any understanding of the underlying architecture, and how can I use that understanding to build my testing?

That was the overview: it tells me what components are there, and from there I create test ideas for all the test items I have listed. Here you can see I used the Structure, Function, Data, Platform, Operations mnemonics to identify the test items and to find test ideas for each item. For some of them I have open questions. For some, even where I don't have much experience, I just note the quality characteristic, like compatibility or usability, so I can ask about it or look into it in general later. It is not extremely hard work; it is just using these techniques to find the items, and from the items you find test ideas you can execute. This is one of the examples I prepared for today: a Telegram mind map, just for test ideas. Unfortunately, can you see this?
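The tours just described appear to follow Michael Kelly's FCC CUTS VIDS touring mnemonic. A minimal sketch of keeping touring notes organized by tour; the example observations about a preferences component are illustrative assumptions:

```python
# Touring notes keyed by the FCC CUTS VIDS tours described above.
# The recorded observations are illustrative examples only.
TOURS = [
    "Feature", "Complexity", "Claims",                    # FCC
    "Configuration", "User", "Testability", "Scenario",   # CUTS
    "Variability", "Interoperability", "Data", "Structure",  # VIDS
]

notes = {tour: [] for tour in TOURS}

def record(tour, observation):
    """File an observation under one of the known tours."""
    if tour not in notes:
        raise ValueError(f"unknown tour: {tour}")
    notes[tour].append(observation)

record("Feature", "Preferences has General, Security, Privacy panels")
record("Claims", "Claims to let users customize controls, options, add-ons")
record("Configuration", "Does the home page setting persist across restarts?")

total = sum(len(v) for v in notes.values())
print(total, "observations recorded across", len(TOURS), "tours")
```

Restricting the keys to the mnemonic's tours catches typos early and makes it obvious which tours still have no observations.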
Okay, so these are just test ideas for all the components, just for the Settings part. I took the state where, when you go into Settings, it blocks you from sending messages and so on. Unfortunately I cannot enlarge the picture, I'm sorry. But it gives you a completely different view: from the tour I know a lot of the test items, for those test items I created test ideas, and I can see how they depend on each other. It is just a small mind map, but it is very useful, because in open source testing you don't want any heavy process or document filling. Even if you come back after three weeks, you can look at it and remember: okay, this was the thing I was doing. It is really useful.

The other thing is test charters. A charter is kind of like a test case, but it is not. If you have 30 to 45 minutes, you can use a test charter to run your session. Basically it says: explore the target, where the target is the part of the application you are going to test; with resources, meaning the tools, techniques, and data you use, perhaps data sets you have already prepared; to discover some kind of information. It gives you a mission so that you will not deviate from it. Normally when you test, you drift away from your mission: you start testing one thing, move on to other items, and miss the first one. A charter gives you complete control and one area to focus on, and it is amazing. This was developed by Elisabeth Hendrickson, and I use it a lot in open source testing.

To give you an example of a test charter, take the profile editing page here. Looking at the test items, there are two text boxes, you have the email address, and you have the choose-file and save options, right?
But the point is not to go over them one by one; the charter says: explore the whole profile editing feature with whatever data you have. So you may say: I want to do injection and security vulnerability testing. Fine, but what resources do you have? "Security vulnerability testing" covers a lot, but when you see there are text fields, you can do injection testing. So: I am going to explore profile editing with injection testing to discover security vulnerabilities. What this gives you is something broad, but not too broad, and also not too narrow. You can spend 25 to 45 minutes on it, and while you are testing you can take notes, which is useful when you come back or want to share with somebody else.

It's a great approach. We tried it with Mozilla; we had a program. It worked quite well because there were three open source testers plus the developers working on the project. Rather than creating test cases and uploading them to MozTrap, we said: let's try test charters. It worked very well, because a session takes 30 to 45 minutes, one person can concentrate on one part, and they can create a lot of test ideas and test items along the way. It was very helpful.

Have a look at this charter: explore editing profile with injection attacks to discover security vulnerabilities. That is just one angle on security; you can run a lot of other tests. For example: explore the editing feature with a valid first name, last name, and email address to discover whether the basic functionality of the feature works. You can create ten different charters like that to cover the whole feature.
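The charter format above ("explore <target> with <resources> to discover <information>") is mechanical enough to generate; here is a minimal sketch, where the concrete example charters are assumptions taken from the profile editing example:

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """One exploratory testing charter, per the template above."""
    target: str       # which part of the application to test
    resources: str    # tools, techniques, or data to use
    information: str  # what you hope to discover

    def __str__(self):
        return (f"Explore {self.target} with {self.resources} "
                f"to discover {self.information}")

# Several charters covering the same feature from different angles.
charters = [
    Charter("the profile editing page", "injection attacks",
            "security vulnerabilities"),
    Charter("the profile editing page",
            "valid first name, last name, and email address",
            "whether the basic functionality works"),
]

for c in charters:
    print(c)
```

Keeping charters as data rather than prose makes it easy to see at a glance which targets and techniques are covered and which are not.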
If you take test cases instead, they are at once too broad and too narrow: you would need 30 to 40 test cases, and it is not useful at all. With charters it is also a form of exploration. A lot of learning happens, so when another contributor comes in and needs help or has questions, you can easily answer, because you have a very good understanding of this editing profile.

Then planning. Whatever activity you do, one thing I have found is that if you don't plan, you can't track and measure your testing effort. When I started, I would just test and test, and most of the time, even if I found and reported two bugs, I felt I had spent too much time for only two bugs; it did not feel like much of a contribution. So I decided to plan, even if only for five or ten minutes. Basically, I explore the application and create test charters first, even if it takes one or two days. After that I come back to execute. When I execute, I always keep a time box: today I am going to execute only for 30 minutes, only one charter. I keep session notes: which charter it is, and the test notes, for example that I explored this area and it is fine or not fine, and that there is another part of the area I did not have time for today. All of that goes into the test notes. There are also the views: the things I see while I test, the things covered by the charter. For the profile editing page, I see the first name, the last name, the email address, the change option, and that you can upload various details. All of those items go under the views.
The risks would be, again, SQL injection, because there are a lot of text fields, and the upload options: what happens if the user uploads more than 10 GB of data, how is it going to react? Next, the bugs: whatever bugs I find, I put in my session notes, and after the session is finished I add them to Bugzilla or whatever bug reporting system. And when you do this kind of session, you always end up with questions; I put them under the questions section, and then I can ask them on IRC or Telegram or wherever.

Then debriefing. This is one of the things I find most interesting, because whenever you do a session, it is always good to look back at the session and the session notes to see where you fell short and what areas you can improve. It may sound easy, but when you start doing it, it is not: obstacles come up, you deviate from your charter a little, and so on. So it is always good to look back.

So this is my summary. For me, testing is gathering information, plus checking already known facts about the system. They go together, but checking known facts is something you can do with machines: machines are good at checking things, while we are good at gathering information and coming up with test ideas, which machines cannot do. Next: plan all your testing activities. That is crucial if you want to keep track of them and measure your testing effort. Also, this is one style I follow, but there are a lot of other testing styles; experiment with them and find out which ones work for you, because there is never only one right way. Experiment with different styles to see which one suits you. That is important.
The final point is to love testing. If you don't love testing, you cannot do the job well or keep making it better. So spread the love. Thank you. If you have any questions, you can ask me.

What do you work on testing, mostly?

I mostly take Mozilla's preferences component, and I maintain those plans as well. Basically, I started testing two years back. At first I ran test cases, and I was not happy with that. Then I decided to stop running test cases and just test, but I was not really organized: I tested one part, moved to another, had ten components, and still was not happy with myself. Then I started using some kind of process. First I decided to test just one component, but I don't have the memory to hold all the items in that component in my head, so I started using mind maps. Then I started using the different mnemonics to break down those components, created test ideas, and kept them. Then, on a Saturday or Sunday, I would execute those test ideas and report the bugs. That is basically what I do now.

I have a question, since this is open source testing: have you come up with any process you could publish? Because this is basically your personal approach to planning and performing testing. Have you found a way to publish your plans for other people to use?

Yes, I already send those plans to people in the community, especially in the Mozilla community. We have had two meetings already. In the Mozilla community we train testers, and we give them these ideas as a rough plan.
What they basically do is take what they like and what suits them, because one approach is not going to work for everyone, and it requires a lot of effort, especially the planning stage. Executing is the easiest part; before that there is a lot of planning, and you have to explore each component. For me it is easy now, because I have already explored and planned Mozilla's preferences component, so it is all in place. If some change happens, I add a little to what already exists, create test ideas for it, and run them. That is the simple part. But if you are coming in new, it is a lot of effort; you need at least a month to get there.

What I am wondering is whether you have a store, or a wiki, or a mailing list or something where you can keep these test plans, so that when new people arrive you can say: hey, we have this plan for this component, and this is where we are trying to go.

The thing is, in open source that is hard. On the wiki I try to add as much as possible, but we don't have much space for including these methodologies and such; we have very basic stuff. But we are getting there; that is our goal. Because there are a lot of testers; there is no shortage of testers. The problem is they are not efficient enough at testing. Most of them are stuck on test cases; we have the one-and-done kind, and they are stuck there. We have to somehow bring them along, because that is not testing, that is just checking. We tell them: just set up the automation framework and run it; you don't need anything else for that. Then come and do actual testing. We don't need any more manual test case execution or one-and-done runs; let the machine execute the known checks, and you do the testing. That is where we are going. But it is hard, because every day a new contributor comes in and goes straight to the test cases, and we have to pull them out of there.
It is quite hard. Yes, that is mostly a problem we have been running into for a long time. I agree. We do get quite a lot of people saying: yeah, we are just going to do testing. We have a page that says: hey, you can do these things. But we don't have much of a focus on exploratory testing, and we don't have these testers' knowledge written down anywhere. You can tell people to run some of our test cases or do update testing, but we haven't really had anything along these lines.

Yes, there are many testers who test efficiently, but they don't have this kind of documentation, because they each have their own experience. They have everything in their heads and they just do it. The problem is they don't have any document they can share with others. About two months back we had a kind of meetup where we shared this with the developers. The developers were so happy that they could even add their own test ideas to the whole plan. That was amazing. Then I thought: OK, I have it, let's share it with other people. The important thing in meeting a community at a conference is learning from others; if you have been testing for some time, you always have some input to add to the whole testing process.

OK, when I plug this one in, it won't go black; I can only see this screen. You see this one? So this is how my plan flows. First come the general test ideas. Every week I have a planning session, and once the planning session happens, it feeds into the test plan, the coverage, and the risks. Give me a second, I have to switch this. See, first everything goes through the touring phase. When a new feature comes in, it first goes through touring, then through the phase of identifying the test items. That goes into the ongoing plans: first it enters the ongoing plan, and from there it goes to the test plan and coverage.
That is the part where we create test charters. From there I go to the daily sessions: I create and execute some of them, and the results go into the issues and the session sheets. That is how my process runs. I also have a pre-existing set of heuristics for how we find bugs, drawn from past history and past bugs. For test areas: usually, if it is a text field, one of the things we do is SQL injection, and we also check what happens if you put in more than the limit, say more than 10 characters, and what happens if you put in 10,000 characters. I already have that set of test areas, so if I see a test item that matches one of my test ideas, I just pull it in and execute it right away. That way one test charter is done. Each test session has a test log, and at the end of the test log I do a debrief on what I did; after that it goes into the session records. So I have a set of documents from which I can choose whatever I need.

This is the basic idea I have. It is not perfect; it takes a lot of time, and I need to find something better, but I have not had time yet. That is what I am going to work on for the next one or two years in open source; let's see how it goes. It is a bit like an agile methodology: it doesn't really follow any particular framework, but it is agile in spirit.

Is this process very open source specific? It seems like something you could apply to any testing. Well, you can apply it to open source, but yes, you can apply it anywhere. Again, it suits my style; that doesn't mean it is going to suit you. You have to find your own way. There is some time wasting in there, though I don't see it that way, but if somebody else comes and changes it, that would be great. And if you share it with me, I would love that.
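The text-field heuristics just mentioned (injection strings, character-limit boundaries, the 10,000-character case) can be kept as a small reusable data set. A sketch in Python; the limit of 10 characters is an assumed field maximum from the example above, and the injection strings are generic illustrations:

```python
# Reusable test data for a single text field, based on the
# heuristics above. MAX_LEN = 10 is an assumed field limit.
MAX_LEN = 10

def text_field_test_data(max_len=MAX_LEN):
    """Return injection strings and length-boundary values for a text field."""
    return {
        "sql_injection": [
            "' OR '1'='1",                    # classic tautology
            "Robert'); DROP TABLE users;--",  # statement-terminating payload
        ],
        "boundaries": [
            "a" * (max_len - 1),  # just under the limit
            "a" * max_len,        # exactly at the limit
            "a" * (max_len + 1),  # just over the limit
            "a" * 10_000,         # far over: 10,000 characters
        ],
    }

data = text_field_test_data()
print(len(data["boundaries"]), "boundary values,",
      len(data["sql_injection"]), "injection strings")
```

Because the set is keyed by heuristic, matching it against a new test item (as the talk describes) is just a lookup: a text field pulls in both keys, a numeric field might pull in only the boundaries.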
I know my company is currently moving to more graphical test planning. We work with the developers, even with the architecture team, where we just draw generically how everything is supposed to interact, because obviously in reality all the fields move around, so if you explicitly write that into your test cases it doesn't work well, because it doesn't translate into the web view.

Right, and that's why this is not a test case. In a test case we mention every field, but in a test charter we don't mention every field. A charter stays at a broad level; it is not about the particulars. That is the part I like; that is why I use test charters, otherwise I would not.

The thing is, drive-by testing tends to be rather shallow: somebody tests something once in a while, and then all the little alternatives start popping up. Then you realize: wait a minute, we have the same exact problems elsewhere, and nobody has spotted it or acted on it, like debugging VTE. It is just big; the problem in open source is the size.

Yes, open source projects are big. With drive-by testing, I see a problem in my file manager or in my wallet, right? I find open source so big as well, so I just choose one component. That is the best way. Yeah. But as I was saying, sometimes you find that two components are related in non-obvious ways, and a problem in one is probably also a problem in the other. I mean, everything apart from the main path now, I think, is a risk. Oh, I think we should probably leave it at that.