Welcome everyone to the session "Free Your Frontend Client from the Backend Cage" by Shivani Gaba. We are glad that Shivani Gaba could join us today. Thank you so much. Hello everyone, thanks a lot for coming to my session on freeing your frontend client from the backend cage. I'm super happy to be here and to have you all here. Before I start, a little bit about myself. My name is Shivani Gaba. I'm a Lead QA Engineer at New Work SE. I'm originally from India, and currently I'm working in Germany. And I'm a person fond of discussions. I love discussions, so please do catch me up on Gmail, on Twitter, on LinkedIn, or even in the Hangout session later on. Let's catch up later, definitely. When it comes to passion, testing is my passion. I love testing; it brings out a charm in me, makes me so happy, gives me so much energy. But there are also times when testing makes me like this: very crazy, almost about to bang my head against the computer. And I'm pretty sure this situation is relatable to all of you; you must have faced a situation when testing did this to you. So why did I face this problem? Mostly while testing very complicated systems. There were systems with multiple APIs involved, and when I wanted to test, for example, only my frontend, it still depended upon those APIs. It's very hard to reproduce some bugs, to test some systems, to test the edge cases, and due to this, a lot of tests were also flaky. And that's why I thought it would be very interesting to tell you my story: to share my experience with you, to tell you about the products that I worked on, the problems that we faced, the solutions that we found, why our tests were flaky, and so on. But then I thought it would be even more interesting if we explore something together.
An application that we all are familiar with, so we can explore it a bit, understand its architecture, try to test it together, and see where we get stuck, what automation problems we are having, and what solutions we can find for those problems. So let's explore something together. The application that we'll explore is a weather-check application. It's a small application that I created specifically for the demo of this talk. At the end of the talk, I'll give you the source code of this application and all the mocks that we'll create. So this is the application that we are going to test. Let's see. It's a simple weather application. Let me enter a city, say, Kolkata, and we get the weather. You can see on the left side: okay, it's scattered clouds, it's 33 degrees Celsius. And on the right side, there is some specific information like the temperature, the wind speed, humidity, sunrise, sunset, and so on. Let's see how this application actually works. Let me open the console tab and make the request again. We can see that when I clicked on "get weather", it made an API request to this system, which is the OpenWeather system, basically. So this is a third-party system. When we make a request to this system, it gives us this response with different parameters, and then we render those parameters and display them on the UI here. That's how this application works. Now let's try to test this application a bit. Let me enter another city, let's say Hamburg, where I am based. So it's sunny here. I'm super happy that it's sunny here; usually it's not that sunny. So it's working fine, right? I get the Hamburg weather. Let's try something invalid now. Let's say I enter Selenium. Okay, it says: sorry, we cannot find the city.
It would be surprising if there were a city called Selenium, though, but good that it's working fine. Let's try other things. For example, let me try some special characters, an invalid string. It still says: sorry, we cannot find the city. Fair enough. Let's try some numbers now. Okay, it seems like it takes the weather of the postal code this number belongs to, which is currently the city that I am in, Hamburg. So this also seems to work fine. Now that we have tested some of the cases, let's think about automation. Let's say I want to automate this case: if I give this city or postal code, I get this weather. How would I do that? It's super hard, right? The weather keeps changing all the time. I would have to first get the data from the API in my test, then display it and check that it's displayed fine. But that's only an end-to-end thing, right? What about other cases where I want to test different fields? Is there a better solution for that? Or even forget about automation; think about the different cases that we want to test. For example, I want to test that there's a thunderstorm here and not sunshine. How would I do that? Either I have to find a city where there's a thunderstorm, which would be like searching through the database forever, or I just pray that there's a thunderstorm in Hamburg, which I really don't want. So how do we test cases like this, which are very hard to reproduce? Or consider the case where this API is giving us an error; it's giving a 500. How would our application react to it? How would we test cases like that? The root cause of all these things that we discussed is basically the dependency of our frontend on the backend. So let's try to understand this a bit more deeply and find a solution for it. What we saw right now was that there's a client, which in our case was the frontend of the weather application.
It made an HTTP request to the server, which was the OpenWeather API server, and the server gave the response back. This response was then manipulated and displayed on the client. To test and automate a lot of cases in a stable way, what we don't have is control over this part; we are too dependent on the server side. What we need to solve this problem is to control the API. And how do we do that? We have the current setup, where the client talks to the OpenWeather server. We would basically break this connection and create another connection with a mock server. This mock server would be under our control; it would be a server that we set up. With this mock server, we can give a stub response. "Stub" here means that we preset the response: we tell the server what response we want. And this response could be anything: a valid response, an invalid response, or even an error response from the API. Now that we know what to do, the tool we'll use to do it is WireMock. Why I chose WireMock: basically, it's very easy to set up and configure, as we'll see in a while. It is open source. It can be used standalone or embedded in your own application; we'll explore the standalone version today. And it has a very active community, so when you're stuck somewhere, you're not alone; there are people to rescue you, definitely. But before we move further, I have a general disclaimer that I'm a fan of the approach over the tools. So if I want you to take one thing from this session, it is the approach: how to mock, and how to isolate your frontend and test it properly. For the tools, we can choose anything that suits our requirements and our products. So let's see how this tool and this technique can help us remove our dependency. What we need to do first is create a mock server. So in the pom.xml, I've added a dependency on WireMock.
And then there's a jar file that I've added here, and to start the server, we just need to run this file. That's it. You can see here that the server has started on port 8080. That's all we need to do for the server setup. The second step is to create the stub response that we saw earlier. How do we do that? Basically, let's just copy the exact same response that our real server is giving us, and let me add it to the files. Let's call it current-weather.json. So this is the exact same response that I've just copied. Then we can write a function saying: okay, I want to stub for this URL, and I want to return a response. We want the response to have the status 200, and we want it to have the body file that we just created, which is current-weather.json. So once again: we are stubbing for this weather endpoint, and we would have a response of 200 with this body file, which is exactly the same as our server response. And if I run this function now, you can see it's a success. Let's just confirm it once. So instead of this, let's say localhost:8080; that's where our mock server is. We can see that on localhost:8080, where our mock server is, if I call this Kolkata endpoint, it returns the stub that we have just created. So our mock server is done, our stub is done. Now what we want is for our application to talk to this mock server instead of the real server. How we do that is basically very simple: we just need to change this so that instead of talking to the OpenWeather API, it talks to our server at localhost:8080. I've also created a dropdown on the UI, but that's purely for demonstration purposes. So you can see here on the UI, I've created this selector for the mock server and the actual server.
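The two steps above, starting the server and registering the stub, can be sketched with WireMock's Java DSL roughly like this. This is a minimal sketch, not the talk's exact code: the endpoint path /data/2.5/weather is my assumption based on the OpenWeather-style demo, and the body file is expected in WireMock's __files directory.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class WeatherStub {
    // Start WireMock on port 8080 and register the stub,
    // mirroring the two steps from the demo
    static WireMockServer start() {
        WireMockServer server = new WireMockServer(8080);
        server.start();
        // Any GET to the weather endpoint now returns 200 with the body
        // copied from the real server into __files/current-weather.json
        server.stubFor(get(urlPathEqualTo("/data/2.5/weather"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBodyFile("current-weather.json")));
        return server;
    }

    public static void main(String[] args) {
        start();
    }
}
```

Pointing the frontend at http://localhost:8080 then makes every weather request hit this stub instead of OpenWeather.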
And I've now made the mock server selected by default, for easy demonstration later on. So now our mock server is ready, our stub response is ready, and we also have this dropdown where we can change the servers. Let's see if it actually works. If I go to the actual server and open the console tab... yeah. So if I put Kolkata: we can see that with the actual server, the request was made to the actual server, which is OpenWeather. But if I make the request with the mock server now... sorry, this is one quirk of the application, sorry for that. So if I now select the mock server: Kolkata. We can see that it has made the request to localhost:8080 instead of the real server. Now we have made this connection to the mock server, and our application can properly talk to it, and we don't need to depend upon the actual server. Our request gets whatever response we stub here; this is basically coming from the stub that we created. And now it's super easy for us to even automate these cases, because any time I make the request, it gives me the exact same response that I have put in the stub. So now let's try to automate this case. I don't need this anymore. For automation, I have already prepared this: the locators and everything, and we have the enterLocation method. So what we just did manually, we will now automate: we'll send the text for the location, we'll click to check the weather, and we'll try to verify the description on the left-hand side. So I have this getWeatherDescription. In the base test we have the drivers, which we can just open and close. So let's write some Selenium code. It's weatherApp.
And what we want to do is weatherApp.enterLocation, and we'll call it, and then assert that the weather that we are getting (let's put an empty string for now) is weatherApp.getWeatherDescription. So we want to verify that when I enter the location, I get this particular weather. And what is this? We are verifying this field, basically. So let's do it. And if I now run it: the value that we mocked is there, and our test has passed. This test was easy to write because we were controlling what we want to see on the UI, and we checked for that particular field itself. Had it been the real server, it could have been any response the real server was giving us, so it would have been difficult for us to test this UI in isolation. I've tested it for one field; depending upon our business scenario, we can test multiple fields, multiple cases, and so on. So now that we have learned how to remove the dependency, which helps us test our system in isolation and even automate the cases easily, I would like to go over different scenarios where we can use this technique. Scenario number one: difficult-to-reproduce scenarios. A lot of times we have to set up a lot of things before we even start testing our scenarios; it's like a jigsaw puzzle or a Rubik's Cube, where we have to do a lot of things beforehand. For example, consider the thunderstorm case. How would we test that there's a thunderstorm? Again, we would have to either find a city where there's a thunderstorm, or pray that there's a thunderstorm somewhere, which we don't want. But now we have mocks, so we can actually create the thunderstorm in the file and then test it. So let's see how to do that. If I go here, let's just copy-paste this file and call it different-weather.
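The automated check described above could look roughly like this in Selenium. This is a sketch only: the WeatherApp page object, its element ids, the frontend address, and the mocked description text are all assumptions standing in for the demo's actual code.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class WeatherDescriptionTest {

    // Hypothetical page object wrapping the demo app's locators;
    // the ids here are assumptions, not the real app's markup
    static class WeatherApp {
        final WebDriver driver;
        WeatherApp(WebDriver driver) { this.driver = driver; }

        void enterLocation(String city) {
            driver.findElement(By.id("location")).sendKeys(city);
            driver.findElement(By.id("get-weather")).click();
        }

        String getWeatherDescription() {
            return driver.findElement(By.id("description")).getText();
        }
    }

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("http://localhost:3000"); // assumed frontend address
            WeatherApp weatherApp = new WeatherApp(driver);
            weatherApp.enterLocation("Kolkata");
            // Stable assertion: the expected text is whatever we put in
            // our own stub file, not live weather data
            assert weatherApp.getWeatherDescription().equals("scattered clouds");
        } finally {
            driver.quit();
        }
    }
}
```

Because the stub always returns the same body, the assertion never depends on the real weather, which is what makes the test stable.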
And now we want to test for a different weather, which is thunderstorm. So let's put in "thunderstorm", and trust me on the icon: it's the 11d one, I've seen it earlier. I hope it works. So now we would have this thunderstorm weather, and let's also change the temperature to something lower, say a minus value, just to have a different temperature. And now, instead of stubbing the current weather, we'll stub this different weather in the mocks. So let's go here, let's just copy-paste this, and say different-weather. We can add the city name check later on; let's first verify manually that it works. And we want the different-weather.json. So let's run this. We need the driver. We can see that the stub is successful. Now if I go back and check here, we can see that instead of cloudy, it's now a thunderstorm in Kolkata, and we have the different icon and also the different temperature. So let's test: we are on the mock server, let's do Kolkata, and it's a thunderstorm here now, we can see, and it's minus 19. So we can do any case that we want here. Now that we have a thunderstorm easily, we can even automate this case: I want to test for thunderstorm, for sunny weather, for minus temperatures, for anything, which is in fact very hard to test with the real systems. And then we can automate these kinds of cases the same way we did here. More ideas to test here: we can set up any body response that we want, not only changing to thunderstorm or changing the icon code, but any field in the body response. We can test complicated cases like we did for the thunderstorm just now, we can check edge cases of our functionality, and so on. Scenario number two: invalid responses. Bugs, our favorite word, right? It brings a charm to our eyes when we see bugs. But what if these bugs are not coming from our system but from the systems that we depend upon?
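If you'd rather not keep a second body file, WireMock also accepts the body inline, so the thunderstorm variant could be sketched like this (fields trimmed down; the path and the exact field names are assumptions based on the OpenWeather-style response, not the demo's full JSON):

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class DifferentWeatherStub {
    static WireMockServer start() {
        WireMockServer server = new WireMockServer(8080);
        server.start();
        // Edited response: description "thunderstorm", OpenWeather's
        // thunderstorm icon code 11d, and a negative temperature
        server.stubFor(get(urlPathEqualTo("/data/2.5/weather"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"weather\":[{\"description\":\"thunderstorm\",\"icon\":\"11d\"}],"
                                + "\"main\":{\"temp\":-19}}")));
        return server;
    }

    public static void main(String[] args) {
        start();
    }
}
```

Either way, any weather condition we want to test is now one edit away instead of depending on the real sky.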
How would we check our frontend for that, if the bug is in the API and we don't have any contracts for it? How would we do that? Again, we have our solution here: mocks. From now on I have pre-recorded videos to show you how the system reacts to the mocks, and I'll just play those videos. So let's look at the invalid response. This is the same function that we had, the status was 200, and we had the valid body. Now we will change the body to be invalid ourselves. We have this temperature-minimum field and this temperature-maximum field, which are integers right now, and I'll change them deliberately to strings: ABCD and EFGH. So now I've changed the data; this is invalid. And I'll remove the mandatory fields, which are sunset and sunrise. So I've removed those fields; now this file is invalid. And if I run the function, we can see that it's running; it creates the stub. Let's check if the stub is created. We can see that, yes, the temperature minimum and maximum are there as ABCD and EFGH, and there's no sunset and sunrise. So the changes are reflected. Now let's test on the mock server: Chicago. We can see that we have NaN as the temperature. So we have found bugs in our system for the case where our API gives us incorrect data. And for the sunrise and sunset, it shows an invalid value. So we can find these kinds of bugs easily with the help of mocks, which is not possible with the real system. We can get them fixed with our developers, and then we can write automated tests to check that these kinds of things don't happen in the future. Similar ideas to test would be: in the body file, not only removing or changing fields but adding some extra fields and seeing what the response is. We can even corrupt the format; for example, I can have XML instead of JSON. And not only the body: we can even change the headers, like having missing headers, not passing some headers.
What would happen then? We can give some invalid headers and see what happens in our application, on our UI. So all these cases can be easily tested with the help of mocks. Scenario number three: API responsiveness. It's 2022, right? We are all so impatient. We don't want to wait five minutes for a weather report; who would do that? We want everything quickly. But what would happen if the API that we depend upon is slow? How does our frontend react to it? How do we automate these kinds of cases? We cannot make the real API slow in our test, right? But we again have the solution of stubbing, where we just add some delay, and then we can test it. Let's see how. This was the function that we had earlier; I'll now add a fixed delay of, let's say, 9000 milliseconds. And when I run this function, it creates a stub which gives the response, the same current weather, but only after 9000 milliseconds. So we can see that it's loading; it's still not loaded on the UI. And if I check it in my application, select the mock server, give Chicago: you can see that on the left side there's a loading animation, and the request is pending in the console. So that's a good indication. And after the 9000 milliseconds, the report is there. So we can test these kinds of cases easily, because we can add delays and see how our application responds, which in our case was good, because we had good feedback for the user. And then we can automate these kinds of cases easily too. Scenario number four: errors and faults. What would happen if the API that the frontend depends upon gives a 500, or it's not available, or there's any other error like a 404? How would we test that? Again, we have our solution ready; we use the stub here also.
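The delayed stub described above maps onto WireMock's fixed-delay setting; a minimal sketch, with the same assumed endpoint path as before:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class SlowWeatherStub {
    static WireMockServer start() {
        WireMockServer server = new WireMockServer(8080);
        server.start();
        // Same successful response, but WireMock holds it back for
        // 9 seconds so we can watch the frontend's loading state
        server.stubFor(get(urlPathEqualTo("/data/2.5/weather"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBodyFile("current-weather.json")
                        .withFixedDelay(9000))); // milliseconds
        return server;
    }

    public static void main(String[] args) {
        start();
    }
}
```

Because the delay lives in the stub, a test can assert that the loading indicator appears and that the result eventually renders, deterministically.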
So we have this function which we created earlier, and we'll just change the status from 200 to something else and see what happens. We can see that we have a status 200, and I'll just change it to 500, and I'll remove the body file because we don't need it. So I'll run the function, and we can see it's running. If I go and check, we can see that our mock is there; it's 500 here, so the stubbing was successful. And if I go to the application, select the mock server, give the city: oh, our application crashed. This was an error that we couldn't have tested otherwise. So we can cover these kinds of cases very easily with mocks. And then we can tell our developers: hey, this is the problem that we are seeing. We fix it, and then we can write automated checks that test for these kinds of cases too. Similar things that we could test are the different error codes: all the 5xx ones, like 503 Service Unavailable. We can check the client error codes: what would happen if there's a bad request, if it's unauthorized, if it's not found. We can even check non-error codes, different kinds of codes. And not only codes: we can check for faults too. What if the backend is faulty, it's having connection resets, we cannot even connect to it, or it's returning some random data? All these ideas would be super easy for us to test now. Scenario number five: API not ready. A lot of the time it happens that the frontend is ready, but the backend is still under construction; they are not ready, or they are slow. So what do we do in this case? Should the frontend just wait for the backend to be complete and only then test their part? Or do we have a different solution? Definitely we do: what we did just now. So let's take an example.
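Both the error-status case and the transport-level fault case can be sketched as stubs; the paths are assumptions as before, and the second, forecast-style endpoint is purely hypothetical, added only to show faults side by side with error codes:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.http.Fault;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class ErrorWeatherStubs {
    static WireMockServer start() {
        WireMockServer server = new WireMockServer(8080);
        server.start();
        // Plain error status: the weather endpoint now answers 500
        // with no body, as in the demo
        server.stubFor(get(urlPathEqualTo("/data/2.5/weather"))
                .willReturn(aResponse().withStatus(500)));
        // Transport-level fault on a second, hypothetical endpoint:
        // the connection is reset before any response arrives
        server.stubFor(get(urlPathEqualTo("/data/2.5/forecast"))
                .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));
        return server;
    }

    public static void main(String[] args) {
        start();
    }
}
```

WireMock's Fault enum also covers variants like malformed or empty responses, which is one way to simulate the "random data" backends mentioned above.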
Let's say in this application we have a new module showing which user is logged in; so we have a user picture and a name. And let's say this profile team is not ready. To test this on our frontend, what we should do is basically mock the response the profile API would have given us. Then we can test different kinds of cases: what happens if the profile API returns long names, names in different languages, different pictures, invalid things, and all the cases that we have discussed before. So basically, we are free to use this technique even when our backends are not ready or they are slow. We have now seen how, in different kinds of situations and scenarios, the technique of mocking can help us: when we create stubs, our frontend is freed from the cage of the backend, and we can test our free system in isolation. You might wonder whether the things that we just saw are only for web apps, because we were testing the frontend and mocking the backend. There's a clear-cut answer: no, definitely not. No matter what your system under test is, be it a frontend application, a mobile application, or even some microservices: if it depends upon internal or even external third-party dependencies, like OpenWeather in our case, then to test that system in isolation, what we need to do is create a mock server and stub these responses, so that our system is free from the backend cage and we can test it in isolation. If we look at the bigger picture, what we saw was a UI and a microservice which we were calling, and to test the UI, we mocked this microservice. In reality, in a lot of systems these days, the architecture is like this: there's a UI, there's business logic, and there are a lot of microservices depending on each other.
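A stub for the not-yet-built profile API could be sketched the same way. Everything here is hypothetical, since this API exists only as an idea in the talk: the /profile path, the field names, and the example values are all assumptions.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class ProfileStub {
    static WireMockServer start() {
        WireMockServer server = new WireMockServer(8080);
        server.start();
        // The profile backend does not exist yet, but the frontend can
        // already be driven against the agreed response shape; swap the
        // body to test long names, other scripts, broken avatars, etc.
        server.stubFor(get(urlPathEqualTo("/profile"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"name\":\"Shivani Gaba\","
                                + "\"avatarUrl\":\"https://example.com/avatar.png\"}")));
        return server;
    }

    public static void main(String[] args) {
        start();
    }
}
```

This is what enables the parallel development mentioned later: the frontend team tests against the stub while the profile team builds the real service.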
So let's say we want to test this microservice in green. What would we do? We would identify the dependencies it has and mock them, and then we can test this system in isolation. That's how the bigger picture looks. A lot of the time when I'm talking about this technique, I'm asked: does this replace end-to-end testing, so that we don't need end-to-end testing at all? No, no, dear friends. Please don't use a sword to stitch clothes; there is a needle for that. Let's use the correct tools and techniques for the correct things. For example, if I want to test the end-to-end scenarios, the user flows, I cannot escape end-to-end testing; we definitely need to do end-to-end testing. This technique of creating stubs and testing systems in isolation is there so that we can test our systems easily for different kinds of cases, but it does not replace end-to-end tests. So let's summarize the pros and cons of this technique. We saw that we can test our systems in isolation. We can reduce the dependencies that we have. We can do parallel development, like when the profile API was not ready but we could already have our frontend there. Mocks are very reliable, because we have put them there: we know what they will return, and they're under our control. They're very fast; we don't need to go to the actual systems or to the third party. We can test complicated and edge cases with their help, and we can even test delays, like the 9,000-millisecond delay we added. And we can have very stable automated tests, meaning that we have put the reliable things there ourselves, and every time we want to check anything, we can be sure that we're testing the right thing, so our tests are not flaky due to a changing backend response. And if I talk about cons: there's a maintenance overhead. The more mocks you have, the more systems you need to maintain.
It would be hard to keep track of and maintain them, and the complexity increases with time; they become very hard to manage. And sometimes we also overmock. If my system depends on, let's say, 20 other systems, sometimes people try to overmock things and mock everything the system depends on. We don't need to do that; let's not overmock. We need to identify the complex dependencies that we need to get rid of and then test our systems. So these are certain cons, and knowing where to draw these lines. More cons I don't know of, so if you know any, please do let me know, and then we can discuss those too. To summarize my presentation: we learned how and why to set up mock servers. We learned how to stub different responses: valid responses, invalid responses, error responses, and so on. This let us test our system in isolation, so our application was actually free from its dependencies; there's no cage anymore. We saw the different benefits of mocks. And at the end, we learned how to check the weather. So next time you see a weather application, I hope you remember me and this talk, and you think about how we could test different kinds of cases with the help of this technique. With this, I sign off. Thank you so much. For reference, this is the weather application that we have created and the code for the mocks that we have created; I'll also share that with the organizers later on. So please feel free to use it, explore it, and learn more. Thank you so much. Thank you, Shivani. That was really insightful. And we have a couple of questions. The first question is: how do you ensure the response JSON schema structure and data types are in sync? Yes, Naresh, very good question indeed. There should be contract testing in place for that. Without it, it's very, very hard to make sure that our systems, the actual one and the mocks, are consistent.
So there's a technique called contract testing, which basically creates a contract between the provider and the consumer, which in our case were the backend and the frontend. They agree to a contract saying: we should have this header, we should have this body, and in this case we should return this kind of thing. And any time there's a change from the provider side, the consumer is notified. So our mocks should basically refer to these contracts, and then we wouldn't have these kinds of problems. The next question we can take from Bhuvan Kapoor. How do you maintain testing in isolation along with service upgrades and optimization? If I understand the question correctly, it's along similar lines: how do you make sure that your mocks are always up to date and not getting invalidated? I would give the same one-line answer: contract testing. If your contracts are up to date, then if anything is changed on the provider side, the contract should already break, and then you would also know that your mocks are getting out of date. So first the contract breaks, and then your mocks. Okay, next question. How do you ensure your mock data is in sync with the agreed contract for the endpoint? I think all the questions point in the same direction, maybe with just different wordings: how do we make sure the mocks are up to date, that they're actually exactly the same as what we expect in production? And the answer to all of them is contract testing. Otherwise it would be very hard, and it would be an overhead for us to maintain and always check ourselves, and it could even lead to problems like testing against the wrong thing in the future if we don't have contract testing in place. Okay, next question maybe. Hi Shivani, how do you mock and stub with 50-plus fields? Any smarter way to handle it?
They also wanted to ask about challenges I have faced and solved in mocking APIs for dynamic pages. I can tell you first about the problems that I faced; there were real challenges, especially when we had a very complex system with many things involved. And the thing I talked about, overmocking: we did overmock. We had 20 different dependencies and we were trying to mock all of them, and that led to overhead. The learning was that we don't need to mock everything, but select what is important for the business scenarios. So these kinds of challenges I've faced a lot. And how do you mock and stub with 50-plus different fields? I don't see why that would be different from what we just did, or from mocking five fields. It's a JSON or XML or any kind of response, so it's just a matter of having the right body. If we know how the response should look, putting in a response with five fields or with 50 fields doesn't make a difference, in my opinion. Thank you, Shivani. Thank you so much, everyone.