Okay, going live. Hello everyone, I am Sanjay Gupta, and I welcome you to Sanjay Gupta Tech School. This is day 89 of the Salesforce Learning Bootcamp and the third part of the Salesforce QA sessions. I have Irwin with me. Welcome, Irwin, to the platform. Irwin will deliver a few more conceptual talks and explain a few more things related to QA. I hope you have gone through both the sessions he has already delivered, and if you are facing issues or have questions related to those, you can ask them in today's session. If you have any question about the session you're watching live, just ask in the chat. I hope you are enjoying the sessions Irwin is delivering. He is basically sharing the insights he has learned so far in real QA practice, and I really appreciate the effort he is putting in for the community, for those who are struggling with how to become a QA or how to prepare for a QA interview. These sessions will surely help them. Okay, with that note, let's go to the next slide, and I would like to ask Irwin to introduce himself, since there will be folks who joined the session for the first time today. So Irwin, please, talk a little bit about yourself.

Thanks, Sanjay. Hi everyone, Irwin this side. I am a Salesforce QA, a certified administrator, a defence podcaster, a certified nutritionist, and a person with Australia. My professional journey began when I became a nutritionist. After that I joined an MNC, where I was a national ambassador coordinator. In 2020 I joined another company, where I worked as a workforce experience specialist and got exposed to Salesforce. That's when the fascination with Salesforce began in me, and eventually I gave it my first shot and learned Salesforce.
And finally, I'm practicing as a Salesforce QA at Blueprint Advisory at the moment. Talking about experience, it's one-plus years for me as a Salesforce QA, and nine-plus years as a nutritionist and a person with Australia. Recently I launched my military podcast, which goes by the name Pritching Docs.

Awesome, Irwin. You are doing lots of different things together, I can see, and I'm really impressed with your profile. I know you personally, so I know your journey, and as we already discussed and planned, next week you will be sharing your journey a little, so it can inspire the many folks who are searching for jobs in the Salesforce ecosystem, or dreaming about how they can start their Salesforce journey. I know your journey will motivate lots of folks to learn and try their luck in this technology, right? With that note, moving to the next slide: if you have any doubt you want to discuss with other folks, in January this year I started this Telegram group. The group is purely for learning, no promotions, nothing. Almost 3,000 folks are connected in it, learning by asking questions and sharing information with each other, so if you want to become part of this group, you can join. And if you go to the next slide, you will see the timeline of this bootcamp. This bootcamp is about to end now: week 27 is there, which is QA related, and next week will also be related to QA. I can see there is a mistake in the weeks: I moved week 27 to 28 but didn't modify the number, so this is actually week 28. In week 29 we'll have a couple more QA sessions, and then we'll do some more interview sessions. After that, I would say maybe week 31 or 32, I won't have much more content.
So this phase of the bootcamp will be completed, and then I'm planning to start new bootcamps related to different clouds. Okay, if you want to follow all of that, just follow Sanjay Gupta Tech School on YouTube, LinkedIn, Instagram, and Telegram. Those are on the next slide, and all the important links are available in the video's description. Next slide: please share a review or feedback on how you're finding the sessions Irwin is delivering. I will surely pass that feedback to Irwin so he stays motivated and can see that what he's doing is benefiting the community. With that note, I hand the session over to Irwin so he can discuss types of testing today. Over to you, Irwin.

Thanks, Sanjay. Okay guys, today's topic is types of testing. Before we dig in: we have heard the term testing many times, whether it's about software engineering, cars, gadgets, or other automobiles. But what is testing? Testing is basically where we execute the functionality and look for whatever defects or bugs are there. That's when testing comes into the picture. Now, what differentiates the types of testing from each other is how much we are covering and what our aim is. One type of testing might target some important functionalities, while in other scenarios we might test the entire thing that has been developed, along with its integrations, of course. So without any further ado, let's move ahead and start with unit testing. Unit testing is basically a method that is more or less practiced by developers.
When they develop a functionality, they go ahead with unit testing, where they test a particular piece and consider it as an individual unit, right? Just for the sake of it, let's say I have this org, and suppose I have Test Cases as the object. The moment my test case has the status 'FT/QA In Progress - Has Defect', it will create a new defect over here. But there's a condition: these three fields should have a value. If either of them is empty, our defect record will not be populated. So over here, the unit testing would be checking that every single criterion has been met. Let's say I just change the status and save it. It says that the defect title cannot be left blank when there's a defect detected. Now if I fill that up, say with 'testing defect title', and save it: before that, you can see there's just one defect; if I save it, what do you find? Another defect is populated. So this is what unit testing is all about. I'm just checking the one component here: that this status can only be marked when I have these values populated.

Irwin, sorry to interrupt. Can you please zoom in? The font size is very small. Oh, yes, please. I hope this is better. Yeah, a little bit more. How's it now? Yes, it is better.

Okay, coming back, I hope this point is clear. This is what we mean by unit testing: we check one particular module that has been configured by the developer, and the developer more or less wants to know whether it is working or not. We tested it for the defect title. Let's move ahead, change the status once more, and save it. Now let's make sure we don't have a value in the Priority field.
Now if I try to mark it with the status 'FT/QA In Progress - Has Defect' and save, it will again hit a snag and say that priority cannot be left empty when there's a defect. And if I populate the field with a value and save, what do we see? Another defect has been logged and successfully populated. So that's where unit testing comes into the picture. It is done in the initial phase, and the main aim is to find the defects or bugs so that they don't arise in the long run or the near future, because as we know, if defects come up later, they might affect the following or related functionalities as well.

Moving to the next slide: what's functional testing? Functional testing is basically where we are concerned with the fact that the actual result is at par with the expected result. In other words, whatever has been configured matches the requirement we received from the customer. So what we do is compare the output with the requirement received, and when the comparison is made, it should be at par, with no deviation or digression from the desired output. Moving back to the org: let's say I'm logging a defect through this Log a Defect flow. It's a demo defect; I mark it as P1 priority with Major severity, it's in Draft, the ticket number is DCKT1, the actual result is 'desired', the expected result is just 'expected', and I upload a file from here, a test case. Now when I save it, what happens? No defect was logged in here; it's still three. But if I move to the next tab, the Defects tab, I see another defect was populated, number 0017. And over here, what you see is that no values have been populated.
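The single-component rule from the unit-testing demo above can be sketched as a small unit test. This is a Python illustration under assumed names; the status label and field names are stand-ins, not the actual org's API names.

```python
# Unit-test sketch of the demo rule: a child defect is created only when the
# status is the "has defect" status AND defect title, priority, and severity
# are all populated. All names here are hypothetical.

HAS_DEFECT_STATUS = "FT/QA In Progress - Has Defect"
REQUIRED_FIELDS = ("defect_title", "priority", "severity")

def should_create_defect(test_case: dict) -> bool:
    """The unit under test: decide whether a child defect gets created."""
    return (
        test_case.get("status") == HAS_DEFECT_STATUS
        and all(test_case.get(field) for field in REQUIRED_FIELDS)
    )

# Unit testing exercises this one component in isolation:
blank_title = {"status": HAS_DEFECT_STATUS,
               "defect_title": "", "priority": "P1", "severity": "Major"}
fully_filled = {"status": HAS_DEFECT_STATUS,
                "defect_title": "testing defect title",
                "priority": "P1", "severity": "Major"}

assert not should_create_defect(blank_title)   # blocked, like the blank-title save
assert should_create_defect(fully_filled)      # a new defect gets populated
```

The point mirrors the demo: one criterion at a time, checked on the single component, before anything else is wired in.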
So here, in functional testing, the main step is to make sure that when we populate a defect through this flow, the defect is populated properly with the values that were fed into the fields at the time of creation. This is more or less a deviation from what we actually expected to configure and get as an end result. Also, when we do this, it is more or less black-box testing: we are not concerned with where the flow is breaking or how the flow is actually built, we are only concerned that it should work as expected. Quickly, another example: let's take this action, Log a Defect, again. Let's take the ticket number as TICK1, the severity as Minor, the priority as P3, the status as Draft, 'desired' as the actual result, just 'expected' for the expected result, and 'demo title' for the defect. We have populated values in all the fields present here. Now if I save this record, what do you see? The number changed; it's now three plus. We have another defect logged, defect number 18. That's the successful, desired outcome of the functionality configured on the basis of the requirement received. So that's where functional testing comes in: it is only concerned that the actual result is at par with the expected result. I hope this makes sense, and if I am going a bit fast, please let me know.

Moving on to the next one: integration testing. In integration testing, we have different modules populated. In unit testing, those modules were executed separately, or in technical terms, in isolation.
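The functional check just described, comparing the actual saved record against the values fed in, can be sketched as a black-box test. The `log_a_defect` function below is a made-up stand-in for the flow, not a real Salesforce API; the field names are illustrative.

```python
# Black-box functional-testing sketch: feed values into the "Log a Defect"
# action, then verify the saved record matches the input field by field
# (actual result at par with expected result).

def log_a_defect(store: list, **fields) -> dict:
    """Stand-in for the flow: save a defect record and return it."""
    record = {"number": f"D-{len(store) + 1:04d}", **fields}
    store.append(record)
    return record

defects = []
submitted = {"title": "Demo defect", "priority": "P3",
             "severity": "Minor", "status": "Draft", "ticket": "TICK1"}
saved = log_a_defect(defects, **submitted)

# Functional testing: compare the output with the requirement received.
for field, expected in submitted.items():
    actual = saved.get(field)
    assert actual == expected, f"{field}: expected {expected!r}, got {actual!r}"
```

Note the black-box stance: the test never looks inside `log_a_defect`; it only compares what came out against what was required.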
In integration testing, what we're doing is clubbing those modules together, having an integration among them, and executing them in a continuous manner. We also call it string or thread testing. The intention is to make sure that even when we perform those functionalities consecutively, we get our expected result. As you can see in the last bullet point, for example: when you work on the login screen, you enter the correct username and password, you hit login, you enter the mailbox, and then you move to the trash box. These are three different functionalities or modules of a software, and we just want to make sure everything works equally well when we perform them in a sequence, or consecutively, right? Let's get back to this test case again. For instance, I log out; we don't need these anymore. I start from here: I have logged in to my org, and now that I'm in, the first thing that comes up is my landing page, which as of now is just Setup. I leverage the App Launcher and navigate to my application, QA Wisdom. Then I move to Test Cases and click New. My expectation, as usual, is that a dialog window pops up with a title, which for us is 'New Test Case'. Let's put over here: 'verify that the user is able to log in successfully'. That's what we have done so far, right? Then the prerequisites. We will touch on test cases in our next session, by the way; this is just to help you understand what integration testing is. Let's say the following should be configured beforehand: the login page, two fields, which are username and password, and the third point, the login action.
You can call it the login button, right? Coming to the steps, I populate different steps, say one through six. Those are our steps for execution. Moving on to the ticket number: let's say it's ticket number two. For the status, we mark that it's in functional testing or QA. For the expected result, I'm not filling in much as of now, just noting that it's the expected result. The test result we are not touching, because we haven't tested it yet, and there's no defect as of now. So we save it, and we have our test case; that's the next thing we verified. Now, to actually get a defect on it, we'll try to change the status. As we checked earlier, it should hit a snag, and it did: the values in these fields are mandatory. If I feed one field with a value and save, I get the error for the other two; if I repeat that, I get it for the last one. Then I fill that too, and in that case I am able to save, and we have a fresh defect populated. So what did we do? In a chain, consecutively, we performed various functionalities, and we made sure everything is working as per expectation and nothing is breaking down. That is what integration testing is all about: to integrate, to perform the modules in a sequence, or consecutively. I hope this makes sense.

Moving on to the next one: smoke testing. When we talk about smoke testing, let me give you an example. When there's a fire at one place, the aim is always to get our valuables out of the place where the fire actually broke out. So when we talk about smoke testing, we are basically concerned with the major or most important functionalities, which we test in a quick round of testing.
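The integration chain Irwin just walked through, log in, create a test case, change the status, get a defect, can be sketched as modules exercised consecutively. `FakeOrg` and its methods are invented for illustration; they mimic the demo, not any real Salesforce interface.

```python
# Integration-testing sketch: three modules that each pass unit testing in
# isolation are now executed in sequence, and we assert the whole chain
# still behaves (the login -> mailbox -> trash idea). All names are made up.

class FakeOrg:
    def __init__(self):
        self.logged_in = False
        self.test_cases = []
        self.defects = []

    def login(self, user: str, password: str) -> bool:      # module 1
        self.logged_in = (user == "qa" and password == "secret")
        return self.logged_in

    def create_test_case(self, title: str) -> dict:          # module 2
        assert self.logged_in, "must log in first"
        tc = {"title": title, "status": "Draft"}
        self.test_cases.append(tc)
        return tc

    def mark_has_defect(self, tc: dict, defect_title: str):  # module 3
        tc["status"] = "FT/QA In Progress - Has Defect"
        self.defects.append({"title": defect_title, "parent": tc["title"]})

# Integration test: perform the modules consecutively, in one chain.
org = FakeOrg()
assert org.login("qa", "secret")
tc = org.create_test_case("Verify that the user is able to log in successfully")
org.mark_has_defect(tc, "Fresh defect from the chained run")
assert len(org.defects) == 1 and org.defects[0]["parent"] == tc["title"]
```

Each module could pass on its own; the integration test only adds value because it checks the hand-offs between them.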
As you can see in the second point, it is clearly stated that it is the process of verifying that the important features are working well and there are no showstoppers in the build being tested. We make sure the most important functionalities are not breaking down at the time of UAT; at least they are tested beforehand. Beforehand, when I say this, I mean before handing the functionality over to the customer or end user to test. It can also be defined as a quick, short regression of major functionalities. I'm not calling it regression testing, I'm just saying it's a quick, short regression. If I navigate back here: for me, the most important functionality would be that if the status is 'FT/QA In Progress - Has Defect' and all three of these fields are already fed with values, then when I change the status, I should have a new defect logged. That's my major functionality as of now. And we see that the defect has been successfully created with all the values we had. Of course, we don't have the actual result yet, because the defect that has been created has not yet been picked up by the developer from our bucket, right? So that's where smoke testing comes in; this was just one example of it. Let's say another case: I log a defect again from here. The title is 'test', the priority is P3, the severity is just Minor this time, the status is Draft, the ticket number is TIC4, 'actual' for the actual result, 'expected' for the expected result, and I upload a file, this image looks quite colorful though. I save it, and we don't get any defect here. This is among the most important functionalities for me based on my requirement, but I didn't find any. Now if I move over here, I do find a defect, but there are no values populated, whereas we populated values over here.
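A smoke suite in the sense just described, a handful of quick checks on the showstopper paths only, could be sketched like this. The two check functions are placeholders for the real verifications from the demo; their names and the runner are assumptions for illustration.

```python
# Smoke-testing sketch: run only the critical "showstopper" checks, fast,
# before the build goes anywhere near UAT. The checks below are stand-ins.

def check_defect_created_from_test_case() -> bool:
    # Critical path: a status change on a fully populated test case
    # must log a child defect. (Placeholder result for the sketch.)
    return True

def check_defect_carries_field_values() -> bool:
    # Critical path: the logged defect must carry the values entered.
    return True

SMOKE_CHECKS = [
    ("defect created from test case", check_defect_created_from_test_case),
    ("defect carries field values", check_defect_carries_field_values),
]

def run_smoke() -> str:
    """Quick, short regression of the major functionalities only."""
    failures = [name for name, check in SMOKE_CHECKS if not check()]
    if failures:
        raise SystemExit(f"Showstoppers found, stop the build: {failures}")
    return "smoke passed"
```

The suite stays deliberately small: it is build verification, not the deep end-to-end pass that regression testing does later.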
So now two points of major concern for us: one, no defect got logged here, even though we configured it so that, when created from this particular test case, it should have taken values from this record as the parent record. Two, even though it got created separately, it should have had all the values. So that's what smoke testing is all about: making sure that all the major, important functionalities of a complete requirement are working perfectly fine. To go a little theoretical: smoke testing is also known as build verification testing or confidence testing. When we say confidence testing, all we mean is that we are not going end to end, we are not deep diving at that moment, but we are making sure that the important or highlighted functionalities, the ones that can be a big showstopper for us, are at least working as expected.

Right, moving on to the next one: regression testing. Regression testing is the point where we are checking each and every part of the functionality while deep diving inside it; it is re-executed end to end. To be more precise, regression testing is usually done when, let's say, we are working on a partial sandbox as of now and we move the functionality from partial to full copy. That's where regression testing comes into the picture, because we'll be making sure that every functionality is working as expected and nothing is broken, because if something broke, we'll have to fix it again. Before we jump on or navigate to our org again, let me share a quick point with regard to smoke testing.
Smoke testing is also the part where we check the functionalities which might have been hampered because of a new functionality that was configured. That's the main aim of having smoke testing done: so that those major functionalities are checked. What we saw was that we created a flow, Log a Defect. Sorry, here's another zoom-in, more focused. Okay, so moving on to regression testing. Thank you, Sanjay. When we are talking about regression testing, at this moment we are not concerned about any one functionality in isolation anymore. We have logged in, and we are considering it all one functionality altogether. What we're concerned about is that our whole flow, creating a test case, then logging a defect through it, and, once every single field is fed with a value and the status changes to that particular status, getting our new defect, should work well. It will be a quick repetition, but that's what is done when we are testing in the regression phase. So let's say the test case title is just a title for this session, the ticket is ticket five, these are just the prerequisites, the status so far is just functional testing or QA, the test steps are the steps of execution, the expected result is what is expected out of the functionality being configured, and the test result we get when we test the functionality, right? I just save it, and voila, our record is ready. Moving on, we mark the status again, save it, we get the error, we enter a test defect title, we save again, and it's still working. The scenario, basically, is that we have deployed it from partial to full copy; we're not in the same org anymore. And now if I again fill a field with a value and save, it's still working.
And now when I save it, we get a defect. So the aim, which was to make sure that even after deployment our entire functionality works as expected, has now been checked, and the best part is that everything is working well. That is the entire functionality so far. Coming to the flows separately: for logging a defect, we have two flows. If I log a defect through this one: ticket six, a test defect title which is 'demo', we set the priority and the severity as Minor, we take the same priority as before, we mark the status as Draft, we fill an expected result, basically whatever the requirement says, and we save it. We have another defect here, which is right: the functionality is working as expected. So again, we are in the regression phase, checking each and every thing; it's just that the previous case was more or less the integrated functionality we were testing, while this is a separate one, where we test a separate flow altogether, which we pick, and we create the defect manually. Talking about the other flow: if we do this again, we give it a priority and a severity, we mark the status as Draft, we take the ticket number as ticket A7, we fill the actual result and the expected result, I upload a file again, the colorful one, mark it done, and click Save. Nothing happens. Now, if I move to the Defects tab, I see another defect has been populated successfully, but again, it's a failure. So the concern now is that, while moving or deploying it to another environment, our flow broke and did not work well. That's where regression testing has helped us, and this is how the different kinds of testing help us through our entire QA phase, until the time we hand the functionalities over to the customer. Also, regression testing can have, you can say, sub-parts.
Let's say top-to-bottom, bottom-to-top, or sandwich testing, right? If we cover just these particular testing types, we have in a way covered everything. When we talk about sandwich testing, we are making sure that everything, from the most important functionalities to the least important ones, is covered successfully. The point is that when we are regression testing, we leave no stone unturned; we cannot skip anything. When I say skip anything, I mean the functionalities, features, or coverage that we had in the initial phase of our testing in our own org, which is usually the QA org. Some take it as the partial org, some take it as the developer org, sorry.

Moving on to the next one: user acceptance testing. This is the most crucial phase after regression testing, because this is the time when our customer comes into the frame. The customer starts testing what they requested from us. They verify it first, and if the functionality works as expected, an acceptance is received from there. The main aim of user acceptance testing is so that we at least get to know whether there was any sort of deviation from the functionality, on the basis of the requirement received from the customer. Quickly navigating back to my org: let's say Sanjay is that customer. He asked me for a flow over here that should create a defect whenever it is populated through this very flow, and it should reflect over here. For the time being, we are assuming this flow is working quite well. Had it been the case that, when I ran the flow, the newly created defect reflected over here, Sanjay would have given me a green flag, stating that it is at par with his expectations.
But considering the current scenario of this flow: one, it logs a defect separately, without linking it to this test case; two, when the test case is created and the defect is created, there's no value in it. So that's a deviation. The deviation comes in because there's no linking between the defect and the test case. The latter part, where there were no values populated even though we populated the values in these fields, turns out to be a bug. We'll be touching on what a bug is, but just for context: when the customer raises a bug while doing UAT on the functionality, that's called bug leakage. What bug leakage is, we'll check in the next session. So that was it for today's session, which was types of testing. Okay, any questions?

Yeah, I don't see any questions so far, because I think it was pretty clear. You explained each and every type, theoretically and conceptually, and you did some demonstrations as well, so I think it was a great session. Guys, if you have any question, you can ask in the chat so Irwin can answer live, and if you have a question while watching the recording later, you can ask in the Telegram group; maybe I can take those questions to Irwin and he can answer. There is one question, Irwin, from Sudhir: sanity testing is part of regression testing, is that right?

Okay, Sudhir, I would say that in a way you can say sanity testing is a part of regression testing, but not entirely. When we talk about regression testing, we test each and every functionality as a whole while deep diving into it. When we talk about sanity testing, it's a quick regression test, and when we do a quick regression test, we just pick up some functionalities.
We make sure that the ones which might have been hampered because of some other functionality's configuration or deployment are working fine. That's the fair bit of difference between them. And just for clarity's sake, sanity testing is also often confused with smoke testing. Smoke testing and sanity testing are two different testing types: in smoke, we test some important functionalities, the major ones, whereas in sanity, we also tackle the smaller, slightly less important functionalities, and we make sure they also work as expected.

Right, there is one more question, from Prashant. He's asking: please add some more real-time testing scenarios in upcoming sessions. Sure thing, Prashant, you will get that. Okay, A.G. is asking: how do you create a test plan for Salesforce testing? Okay, A.G., my take on this would be that when you create a test plan, make sure you create it based on the functionalities being developed. There are different approaches people follow. Suppose there's QA one: he or she might populate a test plan based on the objects they deal with. There's QA two: they might create test plans based on their epics. Then there might be QA three, who might base their test plans majorly on their epics, followed by their objects, and then the stories and the subtasks. Having test plans populated is basically to make sure you have a bifurcated version of the test cases that shall be covered for a particular functionality or object. I hope you got the answer; if not, please let me know specifically what you are asking, because this is the basic practice of creating a test plan, if that's what your question was. Okay, I don't see any more questions, so I think, Irwin, we can wrap the session here. Thank you so much for sharing all this knowledge with the community.
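The epic-then-story-then-object bifurcation Irwin describes for a test plan could be captured as a simple nested structure. Every name below (the epic, stories, objects, test cases) is invented purely to illustrate the shape.

```python
# Sketch of a bifurcated test plan: epic -> stories -> objects/test cases.
# All names are hypothetical examples, not a real project's plan.

test_plan = {
    "epic": "Defect management",
    "stories": [
        {
            "story": "Log a defect from a test case",
            "objects": ["Test Case", "Defect"],
            "test_cases": [
                "Verify a defect is created when the status is set to "
                "'FT/QA In Progress - Has Defect'",
                "Verify title, priority, and severity are mandatory",
            ],
        },
        {
            "story": "Log a defect via the standalone flow",
            "objects": ["Defect"],
            "test_cases": ["Verify all entered field values are saved"],
        },
    ],
}

# A quick coverage roll-up across the whole plan:
total_cases = sum(len(s["test_cases"]) for s in test_plan["stories"])
```

Whichever axis a QA bifurcates on first, objects, epics, or epics followed by objects and stories, the point is the same: the plan makes the intended coverage per functionality explicit.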
And tomorrow we'll be having one more session, so folks can learn a few more things related to Salesforce QA tomorrow as well. Okay, with that note, I want to end the session. Thank you, Irwin, for sharing your knowledge with the community; I appreciate your time. Thanks, everyone. Okay, see you guys tomorrow, same time. Bye.