Hi everyone, thanks for joining this session of APMCon, and welcome. I have Corina with me, who is joining us straight from Romania, and in this talk we are going to learn a lot from her, because she is going to walk us through how testers add value to the organization, and not just that, but also to ourselves. So this is going to be something very interesting, with deep insights which we could all use in our work. Before we start, I'll introduce a little bit of Corina's background. She has 14 years of experience, and apart from work she blogs, records tutorials, and creates courses. A fun fact: she also creates comics about all sorts of interesting problems and challenges, and your favorite memes can be found in those comics, covering the humorous side of doing agile and transforming towards agile. So yeah, it's going to be an interesting talk, so without further ado, we'll start. Corina, the stage is yours. Thank you so much. Hello, hello, how are you all doing? Good morning, happy Saturday. I hope you have your coffee, I have mine, so we are all prepared for this talk, and I'm going to start sharing my screen. So, as I was already introduced, I am Corina, and throughout my almost 14 years of experience and career I have worked on a lot of projects, on different types of projects, in different domains, in different setups, and of course in different companies. Based on that, I've gathered some, I hope, useful information in this keynote to help you realize how much value you, the tester, bring to the organization, and of course by organization I mean the company that we work for. The title should have also said "and most importantly to ourselves", but it was a bit too long, so you get the point. We should always consider ourselves first, and then, you know, the company and whatever else we're involved with.
And I'm going to start with an image that I really, really like, one second. This is a photo I actually took in an airport a while back, when we were still easily allowed to fly. Why am I using this photo? Well, if you've ever seen these two airplanes, you know what I'm talking about. Here in front we have a Boeing 747, which is deemed by many, like myself, the queen of the skies. It's a majestic bird, it's really beautiful, and of course it's massive, absolutely huge. It's the same for the one in the background, which is the Airbus A380. These airplanes are made up of so many small parts, probably millions of parts. In order for these planes to fly, every single part needs to be assembled correctly. Everybody who works on assembling them follows certain procedures so that the airplane is what it's supposed to be: a bird that flies people safely from one place to the other. Of course, if certain parts are not properly assembled, the airplane either will not fly, or the services offered to the customers will not be of the quality they're used to. So there might be some issues with the quality the customers receive. And it's the same with an organization, if you think about it. The organization is made up of parts, let's say, and the parts here are the people who work for the company. These people are grouped in teams, the teams are grouped in departments, and so on. For a company to be successful, each of these departments, and everybody involved in the company, needs to bring their A game; they need to provide the best work they can in order to help the company achieve its targets. So it's up to all of us to make the company successful. But why would we, who probably are not owners in the company, actually be interested in the success of the organization? Well, let's break it down like this. Let's say that we have the teams which are working on the product.
The product is then sent to the customers, and the customers will of course try to use it. If they are happy with the product, what does that mean? It means they will generate some revenue for the organization. That revenue might come from buying the product, from paying a certain subscription fee for it, from seeing ads, or whichever other way. If they are happy with the product and they're using it, the organization will receive that revenue. If for some reason the quality of the product is not what customers wanted, they will not use the product anymore and they will go to the competitors. In that case, the revenue is lost and the competitors will receive it. Now, if the organization is receiving this income from happy customers, how does that reflect on you? Well, first of all, it's going to pay your salary, which is pretty obvious, right? If the company doesn't have money, it cannot afford to pay your salary, and that's really bad. We don't want that. But apart from that, if the organization is highly successful, if the customers are thrilled with the product and using it a lot, then the more revenue comes in, the more the organization might be willing to share some of that revenue with the employees. Some companies have reward or benefit schemes where they might give you bonuses based on how well the product has performed. So it's in your best interest to work on the product and provide quality work, so that the organization will in turn be successful enough to reward you for it. But how do we testers really contribute to the success of the company? Well, it all starts with the job description. Whenever we want to get hired at a company, we will see a sheet or a job ad where the roles and responsibilities are defined.
So the company is going to tell you, hey, we are looking for someone with maybe this many years of experience, who does mobile testing and uses these particular tools, for example. When you read that job description, you already know the minimum you can contribute. This is basically what you are expected to bring to the table; this is what they are looking for in the person who's going to fulfill that role. But of course, you shouldn't restrict yourself only to that job description. You have your own personal experience using similar products, or software in general. You might have some product experience at the beginning, because you might have worked on something similar, and as you progress in the organization, while working on the product, you will gain a lot of experience. Based on that experience, you can contribute more and more to the new features developed for that product. And apart from that, you have your own personal traits that you can bring to your work, and one of those traits, of course, is creativity. You can come up with all kinds of ideas, and you can propose so many things to implement to help make the product a success. Now, I'm going to let you think about this for just one second: your role is not to break the software. How many times do we hear testers say, oh, I want to break the software, I want to find all the bugs, I want to destroy it, I want to show the developers what a crappy job they did? Well, this happens quite a lot, and we need to think about what that actually means. What does it mean for a product to be easy to break? It means it has very poor quality. It means it is not currently releasable, and it means a lot of time needs to be spent on improving it or fixing it before getting it to the customers.
Which means the product will be delayed in generating revenue. So that's not the way to think about it. The way I would suggest you think about it is that your role is to help achieve, first of all, and then improve the quality of the software. Because what we want is quality software; we don't really want crappy software. Sure, we are excited when we find those really, really interesting bugs, but the more bugs we find, the less releasable the product is. However, if we actually work to build quality into the product, we can deliver it faster, which means getting revenue faster, which means we are all happier, sooner. And where does our work actually start? What is the first thing we can contribute with? Well, of course, we have the requirements, which define what product we need to implement, or what new features we need to implement for a product. Whenever a requirement is created, the process is as follows. We have the stakeholders, the company owners or somebody in upper management, who have a vision, an idea regarding what they want to give the customers. Then we have the product owners or business analysts who will translate that vision into the requirements. Those requirements will be seen by the teams in meetings like groomings or plannings, depending on how you're working. Usually, if you're working in agile, you might have these grooming meetings where you will refine those requirements. But when those requirements come to you, they are in a very, let's say, high-level form. They're not very specific. They're not like, oh, this particular cache needs to store this particular data. It's going to be like, oh, we want a certain product and we want the customers to use it like this, and so on. So it's high level, and it's not technical. During these grooming and planning discussions, you have a lot of input that you can provide so that the requirements can be refined.
Based on your experience with the product, with similar products, or in general, you can provide suggestions and ideas to make sure that the product is interesting to the customers, or that the requirements make sense considering what the product is. So this is the first step: whenever we are defining the requirements, whenever we have the initial discussions about them, that is when you can get involved. Once those general meetings are done, the development team, before they start working, might have some technical meetings where they will actually define the product architecture and dig deeper into these requirements to understand what they mean from a software perspective. So, OK, the initial requirements say the customer needs to have a screen where they're doing certain things. The technical teams will then translate that into what it means from a code perspective, from the perspective of the framework they're using, of the setup, of the architecture, and so on. These meetings, which you should take part in, will help you first understand what the implementation and the architecture will be, so you will gain some insight into how you can test in the future. But at these meetings you can also come up with ideas and refinements for the architecture. It might not seem the case, but trust me, because I had a product I was working on where QA was not involved in the architecture from the start, and three days before the release, when the feature first came into testing, the first thing we noticed was that the development team didn't consider a certain, let's say, branch of using that product. They had to redo the architecture in a very short time, and this was really risky for the release. We were even thinking about delaying it. So that's really not a good idea.
Another thing to consider is that because you're involved really early in these discussions, you already have some ideas regarding certain scenarios that you are going to test. Knowing about the internal workings of the product is going to help you identify scenarios that you otherwise wouldn't think about. This is going to help you see more than just the requirements, to actually spot possible bottlenecks in the frameworks or tools being used for the implementation. And of course, these meetings are very useful for refining the requirements further and, very importantly, for adding testability. Because in many cases, when you're doing automation, especially automation for mobile, you might be lacking some testability features. Maybe you are not able to easily grab certain elements that you want to interact with. But if you're discussing these things in the technical meetings, developers can actually include what you need in the development process. By the time your feature is there for you to test, it will be easier to test, because you provided feedback to the development team, before they started working, on what you need them to implement for your testing. So this is a win-win situation. Go there, participate in these meetings, and have a good discussion with the developers. And don't ever start working, and encourage the development teams not to start working, until the requirements are clear. If the requirements are not clear, developers will implement what they understand out of those requirements, and you will test what you understand out of them. If not everybody understands the same thing, it's not going to be the product that you actually want. There are going to be a lot of discussions after development has finished and testing has started, and there might need to be rewrites.
And that's going to, of course, delay the product's release. The faster you realize the requirements are unclear and the faster you provide feedback, the faster the product will be delivered to the customers. So again, it is in your best interest to make sure that the requirements are clear for you and for everybody else. And you have a voice: because you know the product and you have experience, you can speak up and say, OK, these requirements don't make sense, let's discuss them further. Always make sure you have them written down in the proper form. The requirements should define what the product needs to be; if the requirements are unclear, how is the product going to be? After you are done with the requirements and you have started either testing or creating your test cases, you can think about the test cases you're writing as actually being the requirements themselves. Why am I saying this? When you're creating the test cases, you're basing your scenarios, of course, on the requirements. But the requirements don't really tell you every single scenario that you're thinking about. They're telling you, as I said, high-level aspects of how the product needs to work. When you're testing, though, you're considering every single negative case and every single corner case the product can be used in. And by having these test cases written down, anytime in the future when somebody comes and asks, oh, do you know how the product is supposed to work in this particular scenario or the other one, you can easily just refer to your test cases and help them out. It happened to me quite a lot that product owners who actually wrote the requirements came to me months later and asked, you know, how did we actually implement that, or how is this particular flow supposed to work?
And in case I didn't know at the time, I could easily just look into the test cases and provide the information they wanted. So test cases are the most complete set of requirements you can actually have, because they are based on the original requirements but refined quite a lot. Think about that. And of course, if you're testing and the requirements are not met, well, first of all, challenge the development team to implement what is in the requirements. Because sometimes I have seen, unfortunately, some, let's say, lazy developers who just don't want to follow the requirements. Or there are cases when, because of certain limitations in the frameworks or tools we're developing with, the result is not going to be 100% what the requirement says. In that case, what you need to do is address the problem with the business analyst and the product owner to see what you can do about it. If there is no way to implement what is required, then of course the product owner needs to understand that, and the requirements need to be updated to reflect how the product will actually behave. It's always important to keep the requirements in sync with the actual product we're delivering to production. Otherwise, if you go back later and look at the requirements, you will have no idea whether what was implemented matches the requirements or not. So as a tester, when you get to the phase of actually testing the software, you have the power to provide feedback to the product owners: is the feature exactly what was required or not? And, as I said, you can help the requirements become more clear and specific. Now that we are done with the requirements, let's move on to the bugs. I know that many people love finding bugs. I used to do that also, but I realized that writing a bug report can be quite time consuming.
So I prefer not to find bugs, and don't get me wrong, I don't want to hide the bugs. I just want no bugs to be present. That's the difference. And how can we achieve that? Well, we can prevent the bugs from occurring. I'm going to refer to a certain situation where we have, let's say, a UI, which can of course be a mobile UI or a web UI. In the UI, customers can input all kinds of data, for example by filling in certain forms. When they're filling in the forms, they might enter all kinds of characters, and some of those characters might not be, let's say, allowed to end up in the database. The first layer of bug prevention here is input validation on the UI layer. So when you're testing the UI, you can check that data that is not supposed to go into the backend cannot get past the UI, and that errors come up whenever you try to input that invalid data. Of course, if for some reason the data got past the UI, because there was some glitch in, I don't know, the network or something, you also need to make sure that the backend rejects it. The data should not end up in the database, no matter what. So you can check on these two levels: the front end, for the different errors, and then the backend, where you can test the API separately and make sure that the API rejects the same data. The UI should be the first layer; in most cases, if you implement validation on the UI, the data will not get past it. But you need to make sure, and therefore you need to check the backend layer also. And here you have the input of telling development: if you find that some validation is not in place for the data, you can just tell the developers to actually implement that validation.
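A minimal sketch in Python of that two-layer idea, with an invented whitelist rule and function name for illustration: the same rule the UI enforces must also be enforced server-side, because a direct API call can bypass the UI entirely.

```python
import re

# Hypothetical whitelist for a "name" form field: letters, digits,
# spaces and a few punctuation marks, 1 to 64 characters. Anything
# else (e.g. script or SQL fragments) is rejected outright.
ALLOWED = re.compile(r"[A-Za-z0-9 .,'-]{1,64}")

def validate_name(value: str) -> bool:
    """Server-side check: True only if the input is safe to store.
    This must run even when the UI has its own validation, because
    a direct API call can skip the UI layer entirely."""
    return bool(ALLOWED.fullmatch(value))
```

A UI test would then assert that an error message appears for invalid input, while a separate API test posts the same payloads directly and asserts the backend rejects them.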
Because if you don't, and you find the bugs later, who knows how many bugs there can be, you know? Maybe you will find only one bug, but who knows, maybe you will actually run into some catastrophic situations because this validation was not in place. So it's better to prevent the data from ever getting to a place where it can actually cause harm. And as I said, OK, we like to find bugs, but our role is not necessarily to find bugs. We don't go to work every day only to find bugs; there is so much more we can contribute. As I said earlier, we can start with the requirements. Instead of spending that much time finding bugs, let's actually make sure we don't find them, by preventing them. Then we can spend that time doing something more interesting, as I'm going to show you in the next few minutes. And let's talk about the hot topic of the moment, of course: automation. Here I'm not just referring to testing automation, I'm referring to automation in general, because especially if we are working in large companies, we have everything from automated deployments to automated metric validation to automated code consistency checks, you name it. There's a lot of automation in place, especially in large companies which deal with a lot of data transactions, or which need to deliver their product to the customers frequently and constantly. So these are some things you can consider looking at. And because there's automation everywhere, of course there should be automation in your testing also. In many cases these days, test automation is part of the job description, part of the roles and responsibilities. Why do we need automation in the first place? Well, it's not just because we want it; there is a good reason for it. It helps with easy and repeatable automated regression.
So think about it from this perspective. You are working at a large company. You have hundreds of features implemented over the past years. Every time you're making a release, you need to make sure the product is still working properly. You cannot have somebody who, for years, will check the same feature over and over again manually. You cannot say, OK, this particular person is going to manually redo the test scenarios that they did five years ago; there's no point in doing that. If you have automation, you can easily recheck the features you developed years or months before, simply by having a CI job, in Jenkins for example, start the tests, and there you go. You just need to analyze the results, and that's it. If you find any issues, you can address them easily. If you don't find issues, well, that's really, really good; you can push that feature to production. So there's nothing stopping you from running these tests once you have them in place, and you can run them even before a release happens. You can run them continuously, so that you get feedback faster on something that's not working, and you can implement the fixes faster. And whenever you have automated tests, you need to spend the time required to actually make them reliable. Don't just create a test and drop it afterwards. Make sure you create tests that are reliable, that can easily be run at any time, and that don't have random failures when they run. I know that random failures are quite frequent and we have so many issues with those tests, but if you spend enough time creating the right tests, you can then easily run them anytime, and the fact that they have reliable results will give you more confidence in your automation and will validate the features you're working on more easily.
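One common source of those random failures is a fixed sleep that is sometimes too short. A polling helper like this hypothetical `wait_until` (a sketch, not from any particular framework) makes a test both faster and more reliable:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout`
    seconds pass. Unlike a fixed sleep, the test moves on the moment
    the application is ready, and only fails after the full timeout."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(interval)

# Example: wait for a (simulated) background job to finish.
calls = {"n": 0}
def job_done():
    calls["n"] += 1
    return calls["n"] >= 3  # "ready" on the third poll

assert wait_until(job_done, timeout=2.0) is True
```

The same pattern underlies the explicit waits in most UI automation libraries; writing tests this way is what keeps a suite runnable at any time without flaky results.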
So make sure to spend enough time to get it working properly. And now I'm going to go into a little bit of another, let's say, area of automation, namely automation that we're probably not doing as part of our roles and responsibilities, because many times there are dedicated teams that deal with this type of testing. I'm going to start with security testing. Apart from the test cases you would normally write for the product you're developing, you could also include a set of security tests, even automated ones, in your regression suite. There are certain, let's say, minimal and most frequent security issues that pop up in a product, and you can at least identify those, create some automated tests for them, and include them in your regression suite just like any other automated tests. When you run your regression suite, because these test cases are included, you would constantly also validate that you don't have any security flaws. Of course, you might have dedicated security teams dealing with this, but maybe sometimes they just don't have the time to look at your product, or maybe you need faster feedback, or maybe it's just something you can easily do, and why not do it if you have it there? Looking at these security issues is going to help you, again, understand more about how your product works, and it also helps you further validate the feature you're going to release. Another type of non-functional testing is, of course, performance testing. Whenever there are automated tests already created by the performance teams, we should also understand a little bit about how to run those tests and maybe how to do some basic result analysis.
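Going back to those recurring security checks for a moment: one cheap, automatable example is verifying that every response carries basic protective HTTP headers. The header list below is a hypothetical minimal set, not a complete policy:

```python
# Hypothetical minimal set: header name -> required value
# (None means any non-empty value is acceptable).
REQUIRED_SECURITY_HEADERS = {
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": None,
    "Content-Security-Policy": None,
}

def missing_security_headers(headers: dict) -> list:
    """Return the names of required security headers that are absent
    or carry the wrong value, so a regression test can assert == []."""
    problems = []
    for name, required in REQUIRED_SECURITY_HEADERS.items():
        value = headers.get(name)
        if value is None or (required is not None and value != required):
            problems.append(name)
    return problems
```

A regression test would fetch a page or API response and assert `missing_security_headers(response.headers) == []`, so a deployment that silently drops a header fails the suite immediately.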
Because, again, if we have these tests, we can run them any time we need and look at the results ourselves instead of waiting for another team to take a look. This gives us a little bit of independence in our testing. And again, we could probably include these tests in our regression too, because as we keep developing new feature after new feature, sometimes performance will degrade because of that. If we get fast feedback by running these tests ourselves, we can fix any issues as soon as we find them. Additionally, in some cases there are bugs which only occur under high load in an environment. Unfortunately, some of these bugs are only found when your software is already in production. If you had these automated performance tests and started running them in the environment where you normally test, you could probably find these bugs earlier, instead of finding them in production. It's not always the case, of course; it depends on how much load you're generating. But if you think about it, when you are testing in, let's say, a test environment, there's not that much load. It's either you, your team, or maybe two or three other teams working there, maybe not even at the same time. If you generate load using performance tests, you simulate more closely the way the software actually works in production, so it's easier to pick up on bad behaviors earlier, rather than directly in production. And of course, another thing we should consider is accessibility testing. This is something we could easily do, because there are a lot of tools out there that can check your software with just a click. You don't have to do a lot of things; you don't even need to read all the specifications, because accessibility testing has certain documentation attached to it, where the different accessibility issues are documented.
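Returning to the performance idea: even without the dedicated team's tooling, a rough load sketch like this can reveal latency degradation between releases. Here `request_fn` is a stand-in for whatever call your product actually exposes, and the helper names are invented:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure_latencies(request_fn, requests=50, workers=5):
    """Fire `requests` calls through a small thread pool and return
    the sorted per-call latencies in seconds. Comparing, say, the
    95th percentile run-to-run flags performance regressions early."""
    def timed_call(_):
        start = time.monotonic()
        request_fn()
        return time.monotonic() - start

    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(timed_call, range(requests)))
    return sorted(latencies)

def percentile(sorted_latencies, pct):
    """Simple nearest-rank percentile of an already-sorted list."""
    index = min(len(sorted_latencies) - 1,
                int(len(sorted_latencies) * pct / 100))
    return sorted_latencies[index]
```

Running this in the regular test environment is exactly the "generate load yourself" idea above: it is nowhere near production-scale traffic, but it is far more concurrency than a few manual testers produce, so load-only bugs can surface before release.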
And then you have these tools which, based on that documentation, will just validate everything for you. In some cases you include something from the tool in your own code, or you have add-ons you can run in the browser, for example; you just start the checks and then look at the results. Wherever you see an accessibility issue, you can discuss it with the developers, so they can implement whatever is missing from an accessibility point of view. And of course, when you're discussing requirements in the initial phases, you should also have accessibility in mind, and you should encourage developers to implement at least a minimum set of accessibility features in your product, based perhaps on an analysis of what the customers are facing. Maybe you have certain demographics or some kind of information about your customers, and then you know which areas of accessibility you should focus on more than others. So this is very important to keep in mind. Now I'm going to talk about releases. Whenever we're going into a release, we, the testers, are the ones who know the most about what is going to be released. Developers work on their small parts independently, and usually they don't see the big picture. They don't know everything that is going into the release; they don't know everything about the product being released, just their small bits. Because we're testing the whole thing, we see everything and we know everything. So here, again, we can contribute by coordinating a release, and trust me, this is both fun and a little bit stressful. I have done this a lot of times, and I found that it's better to have the tester coordinate the release, because, as I said, we have so much knowledge about what is going on, and we can ask certain questions or raise certain issues really, really fast.
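Before going deeper into releases, one aside on those accessibility tools: even the simplest automated check, flagging images without alternative text, can be sketched in a few lines. This is a toy illustration, not a replacement for a real checker such as axe:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect <img> tags that have no (or an empty) alt attribute,
    one of the most common automated accessibility findings."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if not attributes.get("alt"):
                self.missing.append(attributes.get("src", "<no src>"))

def images_missing_alt(html: str) -> list:
    """Return the src of every image lacking usable alt text."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.missing
```

A regression test would fetch the rendered page and assert `images_missing_alt(page_html) == []`; the dedicated tools do the same kind of rule-based scan across the full accessibility documentation.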
The first thing we should do when coordinating the release is check that we are going to production only with the features we agreed to take to production. Usually, when we start a sprint or a certain time period where we're working on a release, we agree on what we want to go into the release, and we write this down in some sort of documentation. Once we are ready for the release, we generate the release notes, which tell us what is actually going into the release. These release notes can either be automatically generated from the code, which is the best case, or the developers will generate them manually by looking at the commit list, which is of course more error prone. Either way, you need to make sure that what you have in the documentation, the agreement on what we need to release, is reflected in the release notes, so that we're not putting something into production that we're not supposed to. This is really important, especially if you're dealing with companies under certain regulations, where they are either supposed to release something by a certain time or not supposed to release something before a certain time. So it's very, very important not to take something to production you're not supposed to. And of course, afterwards you need to check that only complete items go into production, meaning items that you have already signed off on as QA, that don't have any missing pieces, and that don't have any critical bugs going into production with them. So here, again, release notes and commits should be checked. And because you're coordinating the release, you know exactly how much time you need for testing, or at least you can estimate at a high level how much time you require.
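That comparison between the agreed scope and the generated release notes is easy to automate. A minimal sketch, with invented ticket ids and function name:

```python
def release_scope_diff(agreed, release_notes):
    """Compare the agreed ticket list against the generated release
    notes. Returns (unexpected, missing): `unexpected` items appear in
    the notes but were never agreed on (they must not go to production);
    `missing` items were agreed on but are absent from the build."""
    agreed_set, notes_set = set(agreed), set(release_notes)
    return sorted(notes_set - agreed_set), sorted(agreed_set - notes_set)
```

Run as part of the release checklist, this turns the manual "read both lists side by side" step into one assertion that both result lists are empty, which matters most under the regulatory constraints mentioned above.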
If management tells you, OK, for this particular release we want to assign three days for testing, and you know that, no, we need three weeks, here is your chance to actually fight for the time you require for testing. Your goal is to make sure that the product you're releasing is of the best quality and, most importantly, that it doesn't break anything and there is no risk involved in delivering that code. So you need to spend all the time you need in order to do proper testing. Here, again, you can discuss with management and come up with arguments regarding why you actually need that time. Why do you want to spend three weeks on testing? And trust me, having conversations with management and actually pointing out the reasons why you need this is going to go a long way. It's going to make them more open to your suggestions, and you might actually receive the time you need for testing. So I encourage you to talk to them and explain why it's risky to spend just three days on testing, and risk is always a word they will be sensitive to. If it's risky to release it, well, we don't want that. We want to make sure we're not breaking anything. Another thing is that when you're coordinating the release, you should make sure the release starts on time. You have all kinds of communication channels, like Slack maybe, and you should make sure that everybody's there on time by pinging them if they are not, and posting in the Slack channels things like, OK, we're going to start in 15 minutes. Starting on time is very important for two reasons. First of all, you always have a time slot allocated to releases, and especially if you're working in large companies, those time slots are very important, because if you exceed your slot, certain other processes might start and there might be mayhem. So discipline is very, very good here.
And it's also very good, let's say mentally, to have that focus: okay, from 12 p.m. we're starting the release and we're only going to be there until two. So mentally, you know exactly what to be prepared for. And of course, in the release, everybody needs to know their roles. You need to know that you're going to do the testing. Everybody needs to know who is going to do the deployment, who is going to check, I don't know, the logs after the deployment, who is going to check certain metrics after the deployment, and so on. Once you make sure that you are not missing any of these roles, the release can start, and you know for a fact that you are checking for everything and that everything is in place. And of course, you need to have everybody present at the release. It would be a very bad idea to have key people missing. For example, if the person who's supposed to do the deployment is not there, you're not going to start the release, of course. So everybody who needs to be involved should be there. Again, this comes back to discipline, but yeah, it is what it is. Another important thing to consider as testers is customer feedback. Many times, whenever the customers are unhappy, they're going to create tickets in your system telling you that the application is slow, or that they don't know how to use it because there's no documentation, or maybe that a feature you're developing is actually not needed, or that something is very difficult for them to use. Based on this feedback, you actually know where you have issues with the quality of the product which are not necessarily software related; maybe it's just that you didn't really know what the customers wanted. Based on the feedback, you can create proposals to address it and to make the customers happier by addressing whatever their pain points are.
So if you see that they're saying, okay, this particular area is difficult to use, you can actually create the tickets, development will implement the changes and release the new features to the customers, and then, if it's easier for them to use, they will be happy. And again, happy customers bring in revenue and make the company happier, and it's very easy to do. You just need to listen to their voice, because after all, theirs is the most important voice. Even if, inside the company, when you're creating a feature, you think that they are going to use it in a certain way or that it will be useful for something, you might have no idea how they will actually use it. So the customer voice is always the first thing to look for when it comes to feedback. Another thing to consider is development. You have developers close by, even if only on Slack, but you have them available and you can see how they're implementing the features. You can see what tools they are using and how they're using them. This will help you with your testing, and it will also help you improve the way you work, because, for example, maybe they have some automated scripts which they're using to perform tasks they would otherwise have to do manually, and maybe they can share those scripts with you so that it's easier for you to do your job too. You never know what good information you might get from them. So it's always a good idea to keep in touch with them, to discuss with them, and to let them inspire you to be more tech oriented, because I know that many times testers are a bit afraid of tech, let's say. They're afraid of looking into the code, but you don't need to know all the code. You don't need to look at everything.
They can tell you the most important things, or when you're testing, they can actually suggest certain scenarios that you can test, because they are very aware of some limitations in the tech they're using. And by discussing with them, you will also become aware of the same things. And whenever you're presenting a problem or a bug to your developers, in order for them to know that you're doing a good job, you need to present the problem in detail and tell them all the information that you have. Give them exact steps to reproduce, give them screenshots, give them logs, give them everything you can in order to make sure that they can easily reproduce it. The more they realize you know what the bug is and that you've tested properly, the more confidence they will have whenever you report a bug to them. Now coming to you: it's very important to include your achievements and to tell your managers about them, especially if you have yearly or monthly appraisals and so on. Tell them the progress you've made, tell them the great things you've done. Sometimes they just won't notice these things or they won't be aware of them. Tell them, tell them, tell them, and kind of sell yourself to them. Make sure they are aware of what an amazing job you are doing. Also, don't get stuck. Don't work for years on the same thing, on the same project, on the same exact scenarios. Make some changes: try different projects, try different tools, try different techniques, learn new things. It's only going to help you in the long run, because you will be able to tackle more projects, different projects, you might get promotions, and you might also get the opportunity to be hired, maybe at a company you've always wanted to work for, maybe in a job that you weren't able to tackle before because you didn't have this knowledge.
So make sure you're always growing your knowledge and always learning new things. Don't burn out. Don't just work for eight hours straight without taking any breaks. Clear your mind and, most importantly, rest your eyes away from the computer, otherwise you're going to end up with glasses like somebody here did. Make sure your personal time is your personal time and your work time is your work time. Don't mix the two. Don't always do extra hours. Don't keep adding fatigue to yourself, because the more fatigue you add, the worse it's going to get in the future, and it's going to be really difficult for you to bounce back once you have reached that level of exhaustion. Try to separate things: work is work, personal things are personal things, and always take your days off. Travel, go out, see, I don't know, flowers, see animals, something like that. Just go out and make sure you spend the days off and not on work. And as a conclusion, what I want to suggest is to set high goals, because high goals mean you're going to learn a lot of things while achieving them. Take care of yourself physically and mentally, and of course, always be amazing. And thank you very much. Okay, I think we can go to the Q&A session. How should we do this? We should probably pause the sharing. Okay, so let's see some more questions. Do you think it's possible to push review activities in a bottom-up way? Maybe you want to explain a little bit what you mean here. I'm going to go to the next question. How can QA make sure incomplete code is not pushed to production? Well, you can check the commits. You can check the list of commits, and you can actually see whether a commit refers to a user story which you know you haven't finished testing yet. Whenever commits are made, they should have a reference to the user story, maybe the user story number or something like that.
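That commit check could be sketched as a small script. This is a minimal sketch under assumptions not stated in the talk: a hypothetical ticket-ID format like PROJ-123 in the commit messages, and a hardcoded commit list standing in for real `git log` output.

```python
import re

# Hypothetical ticket-ID pattern, e.g. "PROJ-123"; adapt it to your tracker.
TICKET_RE = re.compile(r"\b[A-Z]+-\d+\b")

def untested_commits(commit_messages, tested_stories):
    """Flag commits that reference a user story not yet fully tested,
    and commits that carry no story reference at all."""
    flagged = []
    for msg in commit_messages:
        tickets = TICKET_RE.findall(msg)
        if not tickets:
            flagged.append((msg, "no story reference"))
        for ticket in tickets:
            if ticket not in tested_stories:
                flagged.append((msg, ticket + " not fully tested"))
    return flagged

commits = [
    "PROJ-3: fix login redirect",   # fully tested, fine
    "PROJ-7: add export button",    # still under test, should be flagged
    "refactor helpers",             # no story reference, should be flagged
]
print(untested_commits(commits, tested_stories={"PROJ-3"}))
```

In practice the messages would come from `git log` on the release branch; they are hardcoded here only to keep the sketch self-contained.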
And if you see that there is a commit for that particular user story that you know you haven't fully tested, you can easily say, okay, Mr. Developer, this is something that is not complete, so it shouldn't go to production, and discussing with them will make them remove that particular code. Also, I'm not sure what MR review means, but maybe you can give some details. Okay, how to handle situations where management pressures you into conditional sign-offs. We should not really give in to pressure. Management really likes to pressure us into doing things, but what management doesn't like is risk. Risk is a word they do not like. Whenever they're pressuring you into doing something, whenever they're saying, oh, we need to go to production, you need to sign off on this, you need to present the risk associated with that. If there's a risk of having a critical issue in production, they will not want you to hurry up with your sign-off, for sure. If, let's say, the risk is just that some label will not be properly rendered in production, then it's their responsibility. Always be covered when you're doing such a sign-off: make sure their manager is aware of what they're asking of you, or maybe just send an email in which you're saying, okay, based on your request this is my feedback, and if you want to go to production it's your choice, but I personally don't feel like we should. Make sure they're aware of all the risks, and afterwards it's not really up to you; it's their decision whether they want to go live or not. Okay, I don't know how much time we still have for questions. I think we have five more minutes. All right, all right, okay, good. So let's see: what strategy should we follow when the requirements are not properly documented? Talk to your product owner and your business analyst.
It's always important to have a communication line with the people who are writing these requirements. Tell them that if the requirements are not clear, the product is not going to be clear, because nobody will really know what they need to implement. Everybody will implement whatever they think they need to implement, but the result is really not going to be what they expect. So try to convince them that, you know, the requirements need to be adjusted. It's always important to communicate. I think this is something people don't really see the full value of. It's so easy to tell someone, look, this is not working properly. If they know about it, they can fix it. Sometimes they just don't know about it, and the more information you provide them, the easier it will be for them and the more open they will be to actually address that problem, okay? Good. How to cope with a lack of requirements, or poor requirements? I think I just answered that. It's the same: always make sure you communicate with the people who can actually write these requirements for you. Okay. Let's see. Any more questions? Okay. Is asking the PO to do an acceptance criteria test a good practice? I don't know if you could actually trust, let's say, the PO to do that. Product owners don't really have that much technical experience in general. I've only worked with probably two who are aware of, let's say, technical stuff. When it comes to testing, I mostly trust testers to do testing. I don't even trust developers to do that, usually, unless you give them a very clear test case; I mean, they should only deal with the development side of it. Why? Because maybe they don't understand the entire context. Maybe they don't see everything that you as a tester see. Maybe they're not aware of certain things, and they don't think like us, for sure. Product owners think from their own perspective. They see the high-level picture, and they don't really see some other things.
Some tests, yes, they could implement, but mostly I would recommend having testers do these tests and stay in touch with product owners too. There's one aspect, for example: when it comes to demos, usually, from what I've seen, testers are the ones doing the demos. You don't even have product owners doing demos, because they don't know everything. So it's up to the tester to do that. And why? Because we are the ones who have the most experience with the product. So yeah. Thanks, Corina. I think there is one more question I would read out for you. The question is from Anil. He's asking if, as QA, we should also get involved in MR review. I think what he means is the code review, the pull request or merge request. So should we also get involved in that kind of review? Yes. Yes. It's a good idea, whenever a code review is done, whether it's done by a tester or by a developer, for testers to see it. Sometimes when you see the code review, you might realize certain test scenarios that you are lacking. You don't need to know all the code, and for sure you will not understand all of it. But sometimes you do understand enough of it to realize, okay, there's a potential bug there, or there's a potential test scenario there. So yes. And make sure you also include developers in the code review for your own code, because you will learn a lot from their feedback. So yeah. Yeah. Thanks. Code review is my personal favorite tool: preventing a bug saves more time than actually finding it, and it saves a lot of time for me by not even needing to test for it. So would you also share something along the lines of how a tester can prepare themselves to have very healthy code reviews over time? So the first thing you can do is analyze the code yourself before you're putting it into the code review. You can use, for example, if you're using IntelliJ, the integrated analysis tool there.
So you can just run a code analysis on your code and you will see all kinds of improvements you can make yourself. Some of them you are not even aware of, either because you didn't know about them or because you just didn't see them when you were writing the code. Doing this first code analysis yourself is going to help you improve the code before you're getting it to the developers or to the other testers to look at. And then, of course, based on the feedback that you're getting, you might learn new things. And very importantly, do not take it personally; do not ever take it personally. Code review is for the code. So if somebody says, okay, this code is not going to work properly, it's okay: it's not going to work properly, but we can update it. Now we know that it won't work, we can update it, and we can make sure that in the future we're not going to write something similar. So it's a learning process. Yeah, that's true. And how about reviewing the other devs' code? How can we as testers prepare for that? It's a good idea to do some research, to, let's say, at least know some minimal development. If, let's say, they're doing Java, we should at least know some basic Java in order to be able to look at that. And of course, if we are also writing tests in Java, it's a good idea for us to know basic Java, because then we can actually write proper code ourselves. So try to learn about the programming language or frameworks they're using, and you can also just grab them and ask, okay, can you please explain what this code is doing, so that I can understand it? Again, it's a matter of communication and discussion with your developers. They will be happy to answer, for sure. And if they're not, maybe they're not exactly at the right job, you know. Yeah, so thanks a lot, Corina.
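The self-analysis step Corina describes, running an inspection on your own code before sending it to review, could be sketched with Python's standard `ast` module. This toy check, flagging functions without docstrings, is only a hypothetical stand-in for the kind of inspections an IDE like IntelliJ runs; it is not her tool.

```python
import ast

def functions_missing_docstrings(source):
    """Return names of functions in `source` that lack a docstring --
    a toy stand-in for an IDE's automated code inspections."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and ast.get_docstring(node) is None
    ]

sample = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''
print(functions_missing_docstrings(sample))  # ['undocumented']
```

Running a check like this before the review takes care of the mechanical issues, so the human reviewers can focus on logic and test scenarios.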
Just to add to your points, they were very interesting: this is actually my favorite too. I also started learning this by reviewing the reviews, basically the comments, seeing how other developers review each other's code, and that's how I started picking up the nuances of the language much faster. So it's a very interesting thing to do, to prevent bugs rather than just finding them. It was a very interesting talk. Thanks, Corina, for coming in all the way from there, and early in the morning too. I'm sure the attendees had a good time with you.