it's almost over. Anyone feeling like that? She's tired. She cannot go. She'll need to be here. Sorry. You'll have to keep it to yourself. Yeah. But how have the two days been? Interesting. Sorry. Informative. You can take back good ideas — and also things that will not work for you; that is equally important. We still have a few minutes before we start officially. I thought till that time, if no one is having coffee, at least we'll have some conversations to wake up. Maybe. Which has been the most interesting session so far? You said it was informative — which was the most informative session for you? Very different. No pun intended. Absolutely. Good. Good. Anyone else found any really interesting sessions? No. Hopefully this will be your most interesting session. Almost end with a bang, right? Do you want to start? Wait for one more minute. Officially wait for the clock to tick 4:15. Mic is on. Yeah. So I guess we'll get started this time. That was when she was a QA. Yeah. And now she's a developer. I'm Anand Bagmar, with ThoughtWorks for six and a half years. Been doing various different types of testing for quite some time now. I enjoy testing — whatever involves testing or building a quality product, I jump in and get involved. That's enough about us. We've got Twitter IDs up on the screen. The slides will be available on SlideShare. Video is being taken, so don't bother writing down what is on the screen; of course, it will be available to you. Other contact information can be found on the About Me page as well. And of course, we are still around for the rest of the day today to have more conversations following this. So today we'll be talking to you about sharing our pain. We are really glad you are here in big numbers, because the more people we share the pain with, the quicker it subsides, right? That's what we hope to do.
We want to share what we have gone through as challenges and how we have potentially overcome them as well. That is just our perspective of it. That said, why are you here on a Saturday evening? Yes, you have paid for the conference. But why else are you here? What do you expect from this session? Excellent. So I'm hoping we'll be able to hear some of your solutions as well to the challenges that we have faced or are still facing. Okay? Anything else? Any other expectations? What is Protractor? I don't know anything about it. I'm sorry to burst your bubble — we'll not be talking about what Protractor is. We'll just introduce it briefly. Yeah. But once you look it up, the documentation is fairly good, and after that you'll be able to apply what you hear today. Okay? Anything else? Good question. WebDriver is already there — why do you want Protractor as well? We will be covering those aspects in some level of detail. Sure. Sure. Why Protractor? Why should I leave Java and learn something new? There have to be some good reasons to do that, right? So yes, in some ways, we will be talking about that for sure. Anyone else? Any other expectations from the session, so that we are clear about whether we'll be able to meet them or not? Okay? Great. What we are going to do is share a case study of a project that we are working on currently, and as part of that case study, we'll be sharing some of the learnings that we've come up with. This case study is about optimization — workforce optimization for any industry. You can, for example, think about an automobile industry having to optimize how their sales people need to be spread out in a region to generate more sales. Think about it in that way. Okay? So the domain could be anything, but we are going to assume it's the automobile industry. The product is supposed to be able to cater to any geography, but of course, we started by focusing on North America.
And soon our project is evolving into supporting other regions as well. So, optimizing the workforce in different regions — which also brings up questions like localization. Not yet, though; so far we are a little bit away from that. This optimization is going to be churning big numbers, and it's going to be shown in very visual ways: a lot of maps, charts, and in some cases big tables as well. So given the volume of data and the nature of the visuals that are shown, it's not suited for small devices. Our supported browsers are the four big browsers: Safari, Firefox, Chrome and Internet Explorer — thankfully, 11 plus — and only on larger screens, so laptops and desktops. That is the use case for this particular product. A little more understanding of the product: this is a single page application, as is the nature of sites these days. It is built in Angular, using D3 and Google Maps, and because of all this there are also a lot of Ajax components to it. The architecture is very straightforward. Everything that you see in the browser is the application. It talks directly to the database to put information in or to retrieve information. There's a small layer in between, of course, to talk to the database, but that is just a trivial wrapper of sorts. All business logic is implemented in this front-end application itself. Now is the time for a big disclaimer. I admit openly in front of everyone: this is the first time I'm working in JavaScript on an Angular application. It is my second attempt at trying to use Protractor, and so far it's going okay. I don't know about Nikita. In fact, for Nikita it was an even bigger turnaround, because her prior experience has been with a completely different type of application: mobile applications, native applications. Ask her about the different iOS and Android type devices and she'll be able to talk about them at length.
So it's a completely different type of application that we are working on. So why Protractor? Given these constraints, why Protractor? It just does not make sense, right? A completely new technology — why did we select it? It's important to understand certain things about the team dynamics to see why we chose this. We work in the agile way. For those of you who might not have realized, we are from ThoughtWorks; of course, all our projects follow the agile way of execution. In agile, the team owns quality. It's not the QA's responsibility. That's why we have a developer up here who talks equally passionately about the testing side of things. If the team owns quality, that means it's not just one set of team members who is going to do it. Everyone needs to contribute effectively — that includes our product owners, BAs, developers and QAs. That's one. Second, we strongly believe in the value the test pyramid brings to the table — what value it brings to the team. We believe in the quick feedback cycle it gives us, and we also believe that the NFRs are a core part of your testing activities. Enabling the test pyramid on the team — doing effective types of non-automated testing and even more effective types of automated tests to get quick feedback — is essential for us to deliver a quality product in a quick release cycle. Every iteration we should be ready to go live. The team composition itself is very interesting. We have 10 developers and two QAs on the team. Except for one developer who knew Angular before, everyone is a full-stack developer mainly focused on server-side applications. Excellent at designing software, at building software — but everyone is now focused on building this application in Angular. That is what the team composition is like. Now, given these constraints, we looked at various different options. Someone mentioned over here: we already have WebDriver. Why did we look at Protractor?
When you choose any tool set or any tech stack, you have to consider a lot of dynamics. The context in which you would be choosing your tool set, your tech stack, is very important. Given this context — given this type of application architecture, the kind of visuals and all that — as a tester, as a QA on the team, my first choice would have been: okay, let's use Cucumber-Ruby or Cucumber-JVM, for example. I love that tech stack, that tool set. It's beautiful. It's very easy to use. Anyone can use it. But the question that came to my mind was: do I really need that BDD layer? Why am I adding that complexity on top? Though it's a wrapper, what am I getting by introducing that complexity? The product itself is quite complex; I don't need anything else on top of it. So, okay — but that is an option; let's think about it. There was, of course, WebDriver.js: since it's a JavaScript stack and all the code is in JavaScript, why not just use JavaScript for the tests? Now, the first option — Cucumber-JVM or Ruby — we shot down very quickly, for two reasons. One, we don't need the overhead of an additional BDD wrapper; we don't have that kind of requirement or that kind of collaboration. And second, it's a completely different language from what the developers would be using to build the product functionality. That would mean they have to switch IDEs to understand what tests are there. It might mean different build configurations to make sure your tests are running from the same code base. We definitely do not want our tests to be in a separate code base from our product code. So very quickly, we knew that JavaScript was the option to go with. And of course, WebDriver.js is great for that. But when we started reading about Angular itself, Protractor came into the picture — and Protractor is the recommended tool set to use, from an automation perspective, for Angular applications. We'll talk a little bit more about that as we go ahead. Yeah.
In Java, you perform an action and you see the output of that action, and the output is exactly what you're expecting. It all happens in one line, even before the next line runs, and it's easy to read and understand. But when it comes to JavaScript, the power of JavaScript lies in its asynchronous nature. You fire off a task that needs to be done, and once the task is completed, you get a promise back with the result of that task. So you try to map the synchronous onto the asynchronous. What we want is a sequential set of test steps to be executed, written in a language whose power lies in asynchronous execution, while keeping it easy to understand. That was a big challenge for us: taking a synchronous sequence of steps into an asynchronous world. The other aspect over there is to understand what we really mean by E2E tests, right? It's the functional UI automation tests that we're writing at the top of the pyramid. And these are not just granular validations — I open a browser, I validate one action and its behavior. These are user journeys that we are really trying to identify and automate, which means it's not just one step; it's potentially a whole workflow that we are trying to execute, and we want to do assertions and validations along the way in that process. So this synchronous mapping in an asynchronous world becomes even more challenging when we have that kind of user journey to deal with. There is also a practice we follow very diligently: before any code commit happens in Git, the developer or the tester — whoever is updating code, whether it's product side or test side — should be running all the tests locally before that code is pushed to Git. So it's very important for us to be able to run the tests on a local machine before we say, yes, this is good to go into Git. Another aspect was trying to use PhantomJS, because the CI environment was not controlled by us.
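To make this concrete, here is a plain-JavaScript illustration (our own sketch, not the project's test code) of how chaining promises turns independent asynchronous steps into the sequential user journey the tests need:

```javascript
// Hypothetical sketch: each test step is asynchronous and returns a promise.
// Chaining with .then() is what turns async steps into a sequential journey.
function step(name, log) {
  return new Promise(function (resolve) {
    // setTimeout stands in for a real async action (click, HTTP call, ...)
    setTimeout(function () {
      log.push(name);
      resolve(name);
    }, 10);
  });
}

function userJourney(log) {
  // Without this chaining, all three steps would fire at once
  // and could complete in any order.
  return step('login', log)
    .then(function () { return step('transferFunds', log); })
    .then(function () { return step('logout', log); });
}
```

Protractor hides most of this behind its control flow, but the underlying model is exactly this kind of promise chain.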
It is some other technical support team which manages environments. Another option for running in headless mode — we heard Dave's session yesterday — is to use Xvfb and then run the actual browsers headlessly on top of it. But what that meant was really working a lot with those teams to set up Xvfb and everything to get the browsers configured correctly. So we thought, okay, let's try to avoid that, and see if we get the same kind of feedback from PhantomJS — why not? Then we actually found out, by chance, why our tests were failing in certain cases. When we investigated more deeply, we figured out this problem actually happened in older versions of Selenium WebDriver — that's what the forum reports said. And then we looked into it: okay, we are using the latest gulp-angular-protractor; which version of Selenium WebDriver is it really using? And that was pretty old. So that's where the complexity started coming into the picture. We went into a wormhole, essentially — an endless spiral of managed dependencies and packaged dependencies, one inside the other, one plugin over the others. The next point: libraries either write their own components or move to some third-party libraries — think multi-series columns and fancy charts and stuff. Accessing those elements in a unique way was a problem for us. Now, the power of Protractor actually lies in why Angular recommends Protractor. When you tell Protractor, "this is my element, get me the text from it," it will ensure that all the pending promises and HTTP requests have completed, and only then give you back the handle — so you can be reasonably sure that the element is visible on the screen before you do an assertion. So a lot of boilerplate code — wait for this element to be visible, wait for that particular element when it needs to be visible, just to be sure that I am on the right page — is taken care of by Protractor.
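As a rough illustration of the boilerplate Protractor saves you from writing (a hand-rolled sketch, not Protractor's actual implementation — Protractor's waitForAngular does something smarter by tracking Angular's pending $http calls and $timeouts), a poll-until-true wait might look like this:

```javascript
// Hypothetical polling helper: resolve once condition() returns true,
// reject with an error after timeoutMs. This is the kind of code you
// end up hand-writing for every element when the tool does not wait
// for the page to settle on your behalf.
function waitFor(condition, timeoutMs, intervalMs) {
  return new Promise(function (resolve, reject) {
    var start = Date.now();
    (function poll() {
      if (condition()) return resolve(true);
      if (Date.now() - start > timeoutMs) {
        return reject(new Error('timed out after ' + timeoutMs + 'ms'));
      }
      setTimeout(poll, intervalMs);
    })();
  });
}
```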
It does that fine-grained waiting by handling promises effectively — but only for the Angular aspects of the application, not the non-Angular ones. So switching between an Angular context, where Protractor is really helpful and gives us back confidence where it can assist us, versus a non-Angular context, where you have to write the boilerplate code yourself, and making the two work together in the same test suite, was a challenge, given our view of the system. You need to understand the logic of the system: where do you break off? Where do you go back to the model? What executes in Angular, and what do you have to drive yourself? This is one of the challenges that we faced. The next challenge is around visualization. One of the modules shows its data in map form — for example, a country-level view visualizing whatever work was done. If you drill into it, you get different values: at the state level you might get more information, versus the county level, and so on. Automating these maps was something that was simply not possible for us. Trying to query for information and check whether the value was there — and there is also a lot of metadata on each particular state — meant that trying to capture that information and then actually assert on it in a test was not workable in the first place. So what we have is not only a data problem; it is also a visual problem. One option would have been some sort of visual sanity check: take a baseline image, compare against it, and see whether it matches. Unfortunately, here our product itself was a bottleneck: every time you load the view, it generates the data points afresh, so even the visualization changes, and trying to assert the functionality that we have on maps was something we were not able to do.
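Coming back to the Angular versus non-Angular switching for a moment: the turn-off, act, turn-back-on pattern can be wrapped in a small helper. This is a hypothetical sketch; browserLike stands in for Protractor's global browser object and its ignoreSynchronization flag:

```javascript
// Hypothetical wrapper around Protractor's ignoreSynchronization flag:
// turn Angular synchronization off, run the non-Angular steps (e.g. a
// login page), then restore the flag so Protractor's waiting works again.
function withSynchronizationOff(browserLike, action) {
  var previous = browserLike.ignoreSynchronization;
  browserLike.ignoreSynchronization = true;
  return Promise.resolve()
    .then(action)
    .then(function (result) {
      browserLike.ignoreSynchronization = previous;  // restore on success
      return result;
    }, function (err) {
      browserLike.ignoreSynchronization = previous;  // ...and on failure
      throw err;
    });
}
```

Restoring the flag in both the success and failure paths matters; forgetting it in the failure path is how one flaky non-Angular step poisons every Angular test that follows.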
And given the multiple browsers and different combinations that we run in, the resolution also changes what you see visually. So, before we move on to the next section: there are a lot of people in the room who have worked on Angular applications and are working with Protractor, or have worked with it. Any other challenges you have faced using it? Yes, you are right. It is there — we were just not able to use it successfully with our chaining of methods and promises around that. But you are right, it does have provision to set breakpoints. It is. Absolutely. You are spot on. But given that our tool of choice is now Protractor, how do we ensure we are able to automate the functionality that we have in the product? Yeah, absolutely — which may very well be the case; in our case we just chose to go with the XPath way of interacting with elements. Yeah, so the challenge is in terms of locating elements in different browsers — it works in one case and not the other — and that is what he is sharing. Fair. So let us move on to the next thing: what are the different things we tried to do to overcome some of these challenges — automated, but not using the functionality provided by those plugins. That said, as of Thursday evening, we have the latest version of WebDriver, 2.53, which is not yet available in the latest version of Protractor. It is there in master, but not released. Which means the latest version of Firefox — which requires the latest WebDriver — we are not able to use; we have to hold back on that Firefox upgrade. So we still have the problem, but it is not as bad as before. Next: we have a local database copy which has the data in it, and we ensure that we use the same thing for our local, CI and QA environments.
What that gives us is that we do not have to end up writing different kinds of test data — different test data for each environment. The only things that differ are the environment-specific configuration: like, say, the URL you are accessing — which host your application is on, on CI versus the QA environment. On the test data, just to talk a little bit more about the type of test data we mean: we are trying to optimize a workforce, and there can be different types of workforces. The workforce size itself can vary a lot; it can go as high as a million people in the workforce, for whatever reasons. So it is a really huge database, and there is no way we can really seed that as part of a test data setup. The types of sales people, and all the metadata related to executing a piece of functionality and validating it — that is the essential part, and that is the part we keep the same across the different environments we want to run the tests in. So there are two types of changes that happen, right? One is actual schema changes, and for those, on any deployment we run migration scripts to make sure the schema is updated. Now, if the core data itself is changing — you are right, that part has to be managed manually. That said, the core data set doesn't really change that often. We clean up the data left over from our last test execution at the start of a run; we do it that way because we think it is better to be able to see the state the application was left in after our tests ran. One of the reasons we did that was that since this was an inherited product, we didn't know all the functionality at once. So having the state of the application after a run really helped us in understanding the application as well, especially during the initial days.
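The split described above — shared test data, environment-specific configuration — can be sketched as a small lookup module. The environment names and URLs here are made up for illustration:

```javascript
// Hypothetical per-environment configuration: the test data stays the
// same everywhere; only settings like the base URL differ per environment.
var environments = {
  local: { baseUrl: 'http://localhost:9000' },
  ci:    { baseUrl: 'http://ci.example.com' },
  qa:    { baseUrl: 'http://qa.example.com' }
};

function envConfig(name) {
  // Fail loudly for a typo'd or unconfigured environment name.
  if (!environments[name]) throw new Error('Unknown environment: ' + name);
  return environments[name];
}
```

A test run would then pick the environment the same way the browser is picked — from an environment variable — while everything else stays identical.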
What we also ended up doing was coming up with certain utilities, on top of Protractor and in general, that helped us overcome some of these challenges and get more information about what is happening during execution. We have the code on GitHub, and we can quickly go through some of the utilities in it. Is this visible? The first and foremost is the Protractor configuration file. Protractor's GitHub repository has an extensive reference configuration file which documents all the possibilities and the different flags that you can use with Protractor. The learning I had was: look at that first and see what you can use. Most tools will already give you certain things, and you don't have to write boilerplate code for them. So read about the tool and see what it gives you, to get the most out of it. The first thing was using the capabilities option that Protractor gives. We have a bunch of capabilities for the different browsers that we support — Firefox versus Chrome versus PhantomJS — depending on the different things that we want to do on different browsers, and the way we change the browser on the fly is by an environment variable. Using that capability, driven by an environment variable, helps us run our tests against whichever browser we need. The next thing that helps: Protractor gives a good capability of running different test suites, which means if we have four modules in our application, we can divide our specs, our test cases, along those lines and say: these are my test cases for module one. So if I know a change has been made only in that module, I can run only that specific subset of all the tests we have. Since this is something Protractor already gave us, we used it to our benefit: for each module we have an explicit suite that runs it, and we also have something like a common or sanity suite so that if we want to run all our tests, we run all our tests.
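A minimal sketch of what such a configuration might look like — the file layout, suite names and the BROWSER variable are our assumptions for illustration, not the exact project config:

```javascript
// protractor.conf.js — a minimal sketch.
var capabilitiesMap = {
  chrome:    { browserName: 'chrome' },
  firefox:   { browserName: 'firefox' },
  phantomjs: { browserName: 'phantomjs' }
};

exports.config = {
  framework: 'jasmine',

  // Pick the browser via an environment variable, defaulting to Chrome:
  //   BROWSER=firefox protractor protractor.conf.js
  capabilities: capabilitiesMap[process.env.BROWSER || 'chrome'],

  // One suite per module, so a change in one module can be verified
  // by running only its specs:
  //   protractor protractor.conf.js --suite module1
  suites: {
    module1: 'specs/module1/**/*.spec.js',
    module2: 'specs/module2/**/*.spec.js',
    sanity:  'specs/**/*.spec.js'
  }
};
```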
This is a sample application, hence you'll just see one example. The next thing we did was use a Jasmine screenshot reporter for Protractor, which helped us in getting reports. The only tweak we made over the reporter — if you look at the output directory on the left — is that we've appended a date-time stamp to each and every folder that's there. We do this only locally, so that whenever, say, I'm writing a test or running tests multiple times locally, I have a history of the previous executions, whereas on CI we only run it once. The reason this is very important: given the challenge we faced about not being able to set breakpoints at the correct place and debug effectively, we had to resort to another option — taking effective screenshots whenever we want, so we can see the state of the application at that point in time, and also doing a lot of console logging about the actions and verifications that are happening. That enables us, if a test fails, to quickly trace back through the logs and understand what has gone wrong without having to rerun the test — especially important if it's a flaky test, which very often happens at the E2E level. So there were two purposes we were trying to serve with this approach. Like Anand was mentioning, we also have embedded screenshots: every time a test is executing, at each and every step where we want a screenshot, we have provided a utility which takes a browser screenshot and gives it any name that we want — say, the screen that you are on or the action that you are performing. What helped us was appending a sequence number, which in a way lets you see what the execution was and the flow in which your test ran. Now, you might wonder: why do I really need the sequence numbers? Okay.
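The per-run folder naming mentioned above amounts to stamping the report directory with the run time; a hypothetical sketch:

```javascript
// Hypothetical sketch of the per-run report folder naming: append a
// date-time stamp so every local run gets its own folder and earlier
// screenshots are kept as history instead of being overwritten.
function reportDirFor(base, when) {
  // ':' and '.' are not filesystem-friendly everywhere, so replace them.
  var stamp = when.toISOString().replace(/[:.]/g, '-');
  return base + '/' + stamp;
}
```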
The sequence numbers still don't help us 100%. As we have learnt more about Protractor and Angular, we still see the screenshots appearing in different orders, and this is, again, due to us not having correctly handled promises in all the places. What happens with promises is: if it's not an async call, the method executes immediately; async calls are fired off, and the promise is fulfilled only when the response comes back. So, based on the sequence of screenshots, the execution flow might not really be the same in all cases. That is something we have fixed in most cases, but we still struggle with some aspects of it. The other way the numbering in this screenshot utility helps us: if, for example, I go to the homepage multiple times in my test execution, I may end up taking a screenshot of that particular homepage multiple times. I don't want to deal with questions like: do I need to create a new file name, overwrite the existing file, and so on — so just using a unique counter in front of every screenshot I take makes it very easy. What we also have is a page object model, where things which are common to all pages — like accessing elements, which actually uses Protractor APIs — are all in the base page, and all you have in the actual pages of your domain is something domain-related, like "get me the main image" or "get me the title of this page". Something that — since we are on this page — where did that go? Alright, here it is.
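The unique-counter naming just described can be sketched as a tiny factory (the function and file names are hypothetical):

```javascript
// Hypothetical sketch of sequence-numbered screenshot naming: a counter
// held in the closure guarantees unique, order-revealing file names, so
// revisiting the same page never overwrites an earlier screenshot.
function makeScreenshotNamer() {
  var counter = 0;
  return function name(label) {
    counter += 1;
    // Zero-pad so files sort correctly in the report folder.
    var seq = ('000' + counter).slice(-3);
    return seq + '_' + label + '.png';
  };
}
```

Each test run creates a fresh namer, so numbering starts from 001 every run.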
Something that — remember — was a problem for us was Angular versus non-Angular. The thing that Protractor itself provides is ignoring synchronization, which means Protractor will not wait for promises to be completed — like the Ajax calls or HTTP requests the page makes — but gives you the handle directly. That really helps us in automating the non-Angular aspects, like the login page that we had on our screen. So when we come to the login, we ensure that Protractor synchronization is turned off; then we do the login action that we want; and then we turn it back on and wait for Angular to be ready, so that we can use Protractor's capability and not have excessive sleep statements in our tests. This was a big learning for us. When we started implementing, we started off with the login page, where this works beautifully, directly out of the box. The minute we went to the Angular page — why is the test failing now? We didn't know why the login test was failing when the transition happened, and that's when we started really understanding more about what can be done to handle it. The last thing: so this is the screenshot utility we were talking about. Every time we expect we are on some page or other, we say: this is my utility, take a screenshot, this is the name — and it ensures that it appends the right number depending on the counter that is running. So all we have in our spec is just: what page it is, what you want to get out of that page, what the expectation is, and whether you want to take a screenshot of what is there. Another thing we are doing is capturing the JavaScript console logs that are there on the page, and we do it in the after hook that runs. This spits out a result something like this: if you see console error, it gives you a JSON of all the console errors that you see in the browser, the console information, and the console warnings. This might be a
good thing to have, and you can actually also go and assert on it. For example, if you have some entries in console errors, that means something has gone wrong — not something your assertions are catching, but something that has gone wrong in the application which you have not tested. So we can also go one step further and add assertions on the console errors themselves, which say: if this is not empty, fail my test here, explicitly. Having these also embedded in reports is something that helped us, and this can be done at the individual action level itself. What we are doing right now is, at the end of the test, just printing it out, but it's very easy to add assertions at specific actions as well. Let me add here: one of the things the console warnings especially helped us with was when we were trying to run the tests in and around the module which has maps in it, on different browsers, and we were clueless as to why it was working on one browser and not another. There was a certain thing — a WebGL layer, I think — that we were using on top of Google Maps, and only when we looked at the console warnings did we realize that PhantomJS was not able to load it at all. This actually helped us figure out that these tests are something we cannot run on PhantomJS; instead of trying to figure out what went wrong, we gave up on PhantomJS and started running our map-based tests on Firefox. So some level of meta information is also helpful, if you look a little deeper into not just your application logs but also the browser logs that are there. I think we've covered almost all the utilities that we had; we covered everything over there. Apart from this, we had to create utilities — reusable components, rather — in our framework to handle the various different types of functionality that our application gave us: things like chart functions and a CSV loader. Why? Because we had certain types
of files we wanted to validate — the CSV data that was downloaded. Modals and alerts: because it was an inherited product, built as an MVP for that matter, in some cases we had JavaScript alerts showing up, and in some cases we had custom modals — there could even be layers of modals. So how do you really interact with those effectively? We built utilities around that which any page could call to achieve the correct validations. File upload was something very tricky, because, especially running in headless mode, how do we really interact with the different browsers and the type of file upload dialog that came up? Now, because we had control of the product code as well, what we did is refactor the product code to expose the action that is triggered after selecting a file from that modal, and trigger that action directly by executing JavaScript, passing it the file path that would have been passed as if it were done from the browser itself. So we sort of tweaked the product functionality itself to help achieve end-to-end automation; otherwise we would really have had no clue how to handle it for all the different browser and OS combinations. Likewise for file download: given a certain set of data, I want to download it as a CSV file. Clicking on the download button is easy, but in some cases, based on the browser configuration, it will automatically download to a specified directory, or it will throw up a modal again saying: where do you want to save this file, what file name do you want? We did not want to get into those kinds of complexities — making sure our test environment is exactly the same every time — because someone can come and change the browser configuration on any machine, right? It's a CI machine used by various teams as well. So we bypassed that. What we did in the case of file download is: depending on what needs to be downloaded, we save that file as an expected data file, and whenever we go to
that particular screen as part of a test execution, we do whatever actions are needed, we manually scrape the data from the screen, and we ensure the downloaded file that we had saved earlier is exactly the same. That's where the CSV loader also came into the picture: we scrape the data from the screen and compare it with what the CSV file expects, instead of actually doing the file download. It saved a lot of effort in terms of how we could validate that functionality. Locators are also very important. Yes, given it's Angular and Protractor, we can interact with the Angular elements directly using models; that's great for certain types of forms or screens. But especially in the case of complex nested structures, it mattered. Say there is a table: there is a header row and, of course, a main body where all the content is; the header row has column names which come from the model. Now I want to select a specific row for a specific column. How do I really do that? Because of ng-repeat and the model, I am not able to get to that element directly, and even if I can, it's going to be really complex — trying to iterate over the rows, or some complex XPath manipulations that I would need to do. So what we did instead: get into the product code, modify the HTML, and add custom attributes. Thanks to Angular, I can very easily add, in double curly brackets, whatever model name or attribute I want, and then, very easily, from my test I am able to locate specific elements directly. I can say: go to row number 5 — which is based on the index — and select the column named "first name", and directly I can access that element. It made accessing specific elements very, very easy, by using custom locators. Now, the most important reason we came across these solutions is that we started spending time in learning. Don't just jump to implementation — learn about the tech stack that you are really using. We started spending time in understanding what JavaScript is — learn that — understand what Angular is, what Protractor is. The
The documentation for all of these is really good; we had just not spent enough time with it to solve our problems. We go straight into solution mode without understanding the problem, so learning was very important. At the end, did we really solve all our problems, because we spent time learning and built custom solutions? That is what you would think, right? Of course not. There is still a long way for us to go: we are still on this learning journey, still trying to implement certain things, and a lot of complexity remains. Maps are still on our to-do list; how do we automate functionality on maps? A lot of suggestions came in when I put out a blog post and requests on LinkedIn and Facebook to find out whether people have done this type of automation, and we have to spend time investigating those suggestions in more detail to see what, if anything, will work for us. Then there are reports. One thing we did not really show is that we get only very basic reports right now. When a test runs, it simply says what the name of the module is, how many specs ran, how many failed, and what the screenshot for each spec was. Not really very meaningful. So if any test fails in CI, we have to go and look at the console logs, which we have made very verbose, find where the error happened, look at the set of screenshots that were taken before it, and trace back to understand what was really going on. The other aspect we are getting better at is where to really put assertions. Typically I would say: never put assertions in your page objects. Page objects are dumb; they do not, and should not, know what the business functionality is. They should just be getting information from the page or setting information on it. But in some cases, and it is bad practice on our side right now, we have had to put in some assertions and expectations to
ensure that the callbacks run correctly when the promises are fulfilled. So there is some work to be done on the framework design to make that better. Another aspect: because we are automating user journeys with this tool set, there is not just one assertion per test; there are a ton of validations happening implicitly and explicitly as part of a journey's execution. What we really want is that when the first expectation fails, we do not stop the test at that point if it is possible to proceed; we continue with the other validations as well. For example, in a banking project I need to log in before I can transfer funds. If I am not able to log in, there is no point proceeding with the test; that is a hard assert. But if I am able to log in and transfer funds, and only a label I am seeing is different, or some other minor thing fails, I can still continue with the other validations and then, at the end of the test, capture all those validation failures and fail the test with the full list of errors that happened during that one execution. Implementing soft-assert functionality is something we want to do to make our tests much richer in the feedback we get out of them. So there are a lot of things we still want to do, and we will get there sooner or later. There are certain references we have listed out here that you can look at. The sample project we showed (we did not actually run the tests) is the GitHub project at the last link; Nikita built that as a sample application with tests around it. You can use it to get started with your own Protractor framework: it has the Protractor config, gulp tasks, and all the basic utilities we showed just now set up. Definitely refer to it, clone it, or fork the repo.
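The soft-assert idea described above can be sketched in a few lines: hard failures still throw immediately, while soft checks are collected as the journey proceeds and reported together at the end. This is a minimal sketch, not Protractor-specific:

```javascript
// Minimal soft-assert sketch: record failures instead of throwing, then fail
// once at the end of the test with the complete list.
function SoftAssert() {
  this.failures = [];
}
SoftAssert.prototype.check = function (condition, message) {
  if (!condition) this.failures.push(message); // record, but keep going
};
SoftAssert.prototype.assertAll = function () {
  if (this.failures.length > 0) {
    throw new Error('Soft assertion failures:\n  ' + this.failures.join('\n  '));
  }
};
```

In the banking example, a failed login stays a normal hard expectation, while label checks after a successful transfer go through `check`, with `assertAll()` called once at the end of the spec.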
If you have other utilities that you would like to contribute for others as well, please do send in a pull request; that would be most helpful for us too. With that, given that Naresh is already breaking down the walls, we are really done here. Naresh, we have time for questions though, right? So, do we have any thoughts, questions, or suggestions on what we could have done differently or better? One thing was debugging; we will definitely look into that, it is on our list. Anything else we could look at? There is a mic, you can take this. The question is: is it not better to take screenshots only when expectations fail? Absolutely, but many times, especially in functional automation, an expectation fails not just because of the last action you did, but because of the way the execution has been flowing. So at important points in time we do not take the screenshot automatically; we explicitly say, after this action, take a screenshot. That way we know what state the application is in, and we build the trace of events accordingly. That is what helps us in debugging as well: I potentially cannot know why this expectation failed, but if I look at the trail of events, it might help me. It is for that purpose that it helps us. Next question, repeated for those who did not hear it: why is it bad practice to put assertions in page objects? A page is a dumb object. All it really knows is how to interact with the set of elements it represents, how to get information from there, and how to put information in. It has no business logic about whether something is right or wrong; whoever is calling the page object knows what is supposed to be right or wrong, and that is where the expectation should be. It is simply about having a good framework architecture, with the right level of abstraction in place; it helps maintenance and scalability as well.
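To illustrate that answer: a "dumb" page object only gets and sets page state, while the spec owns the expectation. The model names and selectors here are made up for illustration; `element` and `by` are Protractor globals:

```javascript
// Sketch of a page object with NO assertions: it only interacts with its
// elements. Locators are illustrative, not from a real application.
var LoginPage = {
  login: function (user, password) {
    element(by.model('credentials.user')).sendKeys(user);
    element(by.model('credentials.password')).sendKeys(password);
    element(by.css('button[type="submit"]')).click();
  },
  getWelcomeText: function () {
    return element(by.css('.welcome')).getText();
  }
};

// The expectation lives in the spec, which knows what "correct" means:
// expect(LoginPage.getWelcomeText()).toEqual('Welcome back!');
```

Because the page object carries no notion of right or wrong, the same `getWelcomeText` can serve many specs with different expected values.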
Sorry, can you repeat that part? In the page object you handle the promise and then return it? Fair, I agree with that, and that is the direction we are moving in: we are understanding promises better, understanding which APIs return promises and which are actually synchronous, and we are refactoring and evolving the framework so that a promise is handled in the place where it is really being called. The sad thing is that the handling of promises then becomes really messy: chaining of sorts from wherever you are calling it, so it looks like really ugly if-else-if-else kind of code, if you have seen those types of code. So yes, there are some trade-offs to weigh there, but I appreciate that idea; it is something that can be done. Can you use the mic please? [The question, roughly: the questioner's company is a small mom-and-pop shop rather than a big product company, and as a QA he wonders whether we spend too much effort chasing this brave new world just because technologists are fascinated by new things. What is the point for him?] Fair point, but I think that is a completely separate topic: whether to automate at all in the first place, and how much technical complexity to take on. Absolutely, and it comes down to the context of why we want to automate and what tools we want to use for automation. That is why we shared the context of our application and our team earlier, to ensure we are doing the right thing based on that context. But fair point; slightly different topic, yes. You had a thought? Fair point, we should look at that, thank you. Any other thoughts? Does that work with Protractor? With Protractor, another good tip, excellent. So I think the walls have been broken down. Thank you so much for being here.