So, have you guys heard about MoEngage? No? Let me give you a brief introduction. We provide a solution that helps our customers re-engage and retarget their own customers. Am I audible at the back? OK, fine.

Let me explain how this retargeting works with a simple, funny example that happened with a friend of mine. My friend Nitin recently got married; his wife's name is Nisha. They both invited me over for dinner. In the middle of the meal, Nisha's phone beeps, and something on it takes all her attention. She goes, "Yes, I got it!" The moment Nitin decides he should celebrate her happiness, his phone beeps too. Can you guess what messages they each got? I think most of the men can. Nisha got a message saying, "Hey Nisha, that favorite dress you selected is 40% off, right now." So Nisha got a lucrative offer she could not refuse, and Nitin, obviously, got the projected bill. That is how re-engagement and retargeting work, and we help our customers re-engage their customers. So if you are one of those business owners who wants this kind of help with your product, come talk to my team. And if you are one of those who pays the bills, just look at the man sitting next to you; you will be fine.

Coming to the topic: today we are going to talk about the release status analyzer. I am keeping it short and simple: why, what, and we will end with Q&A. To explain the why, I need a few answers from you. Before a release, I am assuming all of us do some level of automated testing: you run those tests, or there is some automated pipeline that runs them, and there is feedback from that.
But tell me: if you have ever been the gatekeeper, or the DevOps person given charge of taking the release out, you have had to analyze those test reports and give the verdict. What is the minimum time you would take to analyze everything and say whether the release is good to go, and if not, why not? Any rough estimate? A day? Yes. If you have a very big product with around 30 to 40 test suites, then for every suite you have to dig into the test, read the report, come back, and open the next report. If you have a modularized system you can distribute the work, with different teams looking into their own feedback, and then you collect that feedback into some Excel sheet or email and carry it forward. But in this process we are missing one very important factor: we do not have transparency across the teams. That is why you are the person who always ends up doing the test analysis, and your test feedback is not accessible to everyone. Why does that matter? Let's take a real scenario; let's go to the demo directly and see.

I'm sorry, this happens. It's not coming up. Is the slide moving at least? No, it's stuck, it's frozen. This is what happens when we ship without testing: we see bugs in production. Can we show it on the laptop directly? Try switching the screen. OK, will you guys help me with this? If you have internet on your phone, open P00J4.github.io and we will take it live that way while I shut down and restart; that's what we do. It's coming back. Good, it was frightened of shutting down. Who said the Mac is the best? Only Apple. Are you all on the same page? Do you see a screenshot saying "test feedback", showing some metadata?
Scroll down and you will see the screenshot: an Android test, an iOS test, Firefox, Chrome. Compare this screenshot with your traditional reporting. What was missing there? You only knew which tests ran. You were not aware of the metadata: on which machine, and against which code, the tests were run. That is why nobody could work with the results except the person who wrote the tests. Now look at the fields in that screenshot. The first field at the top is the environment, and then the tag, the code version basically, against which the tests ran. That version tells you: these tests ran against this code, on this environment. So now a developer whose build is broken becomes automatically curious: did my build break this test? He will look into it himself; you no longer need to sell that bug to him. Further down you see direct links to the detailed reports, the screenshots, and the console output. Right now you cannot click, it's just an image, but in the demo I was trying to open it so we could see it directly. So the dependency for analyzing the feedback has shifted from that one person who makes the call to every individual on the team: developer, QA, DevOps. Anybody can see and process that information to answer whether the release is good to go, and if not, why not.

Let me try the demo one last time; no juggling, I'll go straight to it. Since I shut down, I need to restart my processes to show you; I had the tests enabled locally, I'm sure. All my services are up, we can see. I was talking about this comparison with this slide; I will not enlarge the design, so that we do not get into problems again.
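To make those fields concrete, here is a minimal sketch of my own (not the actual tool's code) of a test run writing out the kind of metadata shown in the screenshot. The environment variable names, the `feedback/` directory, and the function name are all illustrative assumptions:

```python
import json
import os
from datetime import datetime, timezone

def record_run_metadata(suite, status, report_url, screenshot_url, console_url):
    """Write one test run's metadata to a JSON file a dashboard can aggregate.

    ENVIRONMENT and CODE_VERSION are assumed to have been exported by the
    deploy job that provisioned the machine and checked out the code under
    test (both names are illustrative, not the tool's actual contract).
    """
    metadata = {
        "suite": suite,                    # e.g. "Android", "iOS", "Chrome"
        "status": status,                  # "passed" or "failed"
        "environment": os.environ.get("ENVIRONMENT", "unknown"),
        "code_version": os.environ.get("CODE_VERSION", "unknown"),
        "report_url": report_url,          # link to the detailed report
        "screenshot_url": screenshot_url,  # link to failure screenshots
        "console_url": console_url,        # link to console output
        "finished_at": datetime.now(timezone.utc).isoformat(),
    }
    os.makedirs("feedback", exist_ok=True)
    with open(f"feedback/{suite.lower()}.json", "w") as fh:
        json.dump(metadata, fh, indent=2)
    return metadata
```

With records like these saved per suite, a dashboard only has to read one small file per test run to show who ran what, where, and against which code.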
So, as I was saying: in the traditional way, no metadata was attached, but if you look at your screen now, the metadata is attached. Come on, mouse, work. Okay, let's give it up; the demo has decided not to work, demo blues. If I were able to open my slide I could show you that, but that's fine.

The deeper point of this tool goes beyond the tool itself: if we can bring agility into the test feedback process too, in such a way that every single person can understand your test reports, the problem is halfway solved. That's number one. Second, you are not depending on that one person, and a lot of manual effort is reduced. So, I'm sorry for the demo, but I just want to open the floor for questions; you have seen the screenshots, so please ask, and I can tell you how you can utilize this for your work. Do you find this kind of feedback more interesting than your traditional kind? Or is anybody doing it some other way, using special tools that give feedback like this? Yes. So, one problem there: the gentleman says that whenever some test fails, they have an emailer integrated which notifies a particular person, and he or she fixes it. The problem here is this: assume you work in a fashion where releases go out every day, the slot is fixed for 5 a.m., the nightly tests start, and they break. There is nobody else looking into it. The DevOps person taking the release live still has to reach out to ask: did only the web tests fail? Can I still take the Android native build? The dependency is still there, because that email he is talking about went in a silo to only that one developer. The other people are not notified about it, so if that person has not seen it, the others are still unaware of the fact.
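To illustrate that silo problem: instead of a private email to one developer, the per-suite results can be collapsed into one shared summary that directly answers the 5 a.m. question. A small sketch of my own (the function and suite names are illustrative, not from the actual tool):

```python
def release_readiness(suite_results):
    """Collapse per-suite pass/fail results into one shared answer.

    suite_results maps a suite name to "passed" or "failed", e.g. as the
    nightly jobs would report them (the names here are illustrative).
    """
    failed = sorted(s for s, status in suite_results.items() if status == "failed")
    return {
        "good_to_go": not failed,  # ship everything only if nothing failed
        "failed_suites": failed,   # what broke overnight
        "shippable_suites": sorted(set(suite_results) - set(failed)),
    }

# The 5 a.m. question: "did only web fail -- can I still take Android?"
summary = release_readiness({"web": "failed", "android": "passed", "ios": "passed"})
```

Here `summary["good_to_go"]` is `False`, but "android" and "ios" still show up under `shippable_suites`, so the release owner gets the answer without chasing the one developer who received the email.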
You can configure that, but again, coming to your point: I am not talking about an ideal world where everything goes smoothly, which is true. The idea is not about tests failing or passing. The idea is about giving a voice to your tests: showcasing your tests in such a way that everybody feels encouraged to look at them, just like you monitor your production data. If you are a product manager, or in a similar role, you monitor what your clients are doing on some kind of dashboard; why don't we think about tests in exactly the same way? I have always seen, from my own experience, that the people who wrote the tests remain the only actors who check what failed and whom to reach out to. In this fashion we can actually plug the data in: if you look at the screenshot, there is also a "responsible" panel, and we can add the responsible names there simply by plugging in the commit author names from GitHub.
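One way that "responsible" panel could be populated, sketched with plain `git log` (my own illustration; the real tool may gather this differently, and the function name and tag arguments are assumptions):

```python
import subprocess

def responsible_authors(repo_path, last_stable_tag, current_version):
    """List the unique commit authors between the last stable tag and the
    version under test: the first people to ask about a brand-new failure.

    Plain `git log` is enough; the tag names are whatever your deploy job
    uses (this is an illustration, not the tool's actual implementation).
    """
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%an <%ae>",
         f"{last_stable_tag}..{current_version}"],
        capture_output=True, text=True, check=True,
    ).stdout
    authors = []
    for line in out.splitlines():
        # De-duplicate while keeping first-seen (most recent commit) order.
        if line and line not in authors:
            authors.append(line)
    return authors
```

Feeding those names into the dashboard next to each failing suite is what lets anyone say "let's reach out to these authors" without first asking the test owner.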
In short, every single person can look at it and say: on this machine, against this code, this test has failed, and these are the responsible commit authors, so let's reach out to them. It benefits a whole range of people, from QA and DevOps to the CTO and the product manager; the dependency is reduced and it brings more transparency across all the teams. The developers are happy too: if tests are flaky, QA looks into it, and developers are not bothered with failures that are not from their code; and QA, in turn, is being heard. So it's a win-win situation for all parties, and as an organization it's a win-win for me as well. There is also a live feed, which is not shown in this screenshot: while the tests run, everyone can see that the run is in progress, so the confidence builds up that there are n tests standing before a release goes out.

[Audience] From the release management perspective, the main thing is that we do have a release implementation plan, what you are referring to as the pipeline kind of stuff, and it is essential that you have different environments. I think the problem is pretty common in lower environments, and that's the main challenge, that's what stops the flow; in the higher environments it is much less, obviously. But as you go from QA to SIT to the higher environments, the pattern changes, so in my opinion this is very useful. It really happens; it's not so uncommon, actually.

OK, thank you.

[Audience] What I am not totally clear about: is the real novelty of the tool in the visualization, or in the way it aggregates the results of all the different tests? Is that the intelligence you are referring to?

Yes, the way it aggregates all the details. In the future we could also segregate this visualization by release number: against this release, these were the tests and this was their state. So it's about aggregating the results and putting them into a feedback format.

[Audience] And if I may ask, how does the tool actually do it? Is it very customized logic, or is it generally applicable?

Yes, it's actually a simple, small thing. When the idea came up, I took it to my CTO; at that time I was working at Goibibo, and we had this problem. We had an enormous number of tests, and he was never able to understand what all the teams were doing: are the teams actually writing enough tests? Are the teams actually engaging with the QA team and sorting things out faster? There was a lot of heat in his mind, and that's when he said we should bring in something that creates this transparency, and that is when we looked into a solution. We were already using Jenkins, and we did not want to create our own server and client application just for this. We found one plugin, the eXtreme Feedback plugin, which did exactly one job: it showed the job name, the test name basically, and whether it passed or failed. So I dug into its code base and figured things out. If you look below the screenshot, there is a description I have written, just scroll down: there is a job, and some sentences describing the fields, the environment and the code version against which these tests run. There should be one particular job, and I'm sure all of you have something like it, which deploys code on some server; there you are already giving the information "deploy this code on this machine". So those are the two items I need. I trigger that job; it deploys the code and in turn triggers all the tests, passing those two details on to them. So each test knows the code version number and where it is run, and as it runs it keeps saving that data. Beyond that, the individual tests have their own metadata as well: they know where
the screenshots are placed, and they know where the reports are placed, right? So I just aggregated all of that onto one screen, so that anybody can quickly click and look at the details. Thank you. It's a simple, small effort; it was not simple at the time, it looks simple now, but it's a small effort that I feel can bring agility. So the talk is not just about a tool, but about understanding that giving value and importance to your test feedback is as important as the importance you give to your post-production analysis. Am I making sense?

[Audience] Maybe another question: do you think this has brought a change to how your teams work?

Yes. It started from exactly that pain point. We went from zero automated tests to automating things: we had web tests, and then we created mobile tests, which was already a very, very difficult challenge. And the moment we felt the happiness of "we wrote the tests, now everything will be caught", the very moment something went wrong, everybody pointed a finger at us: go and check. Then we had to check, find the people responsible, go back to those guys, and ask them. Sometimes we ended up making mistakes ourselves; we might have run the tests against the wrong version, and then the developer says, "it's not my code, what do I do?" If we could reduce that heat this way, that was the initial point, and it brought us to a level where everybody is able to utilize it.

There is one more field in the screenshot, on the right side at the end: the last stable tag. You see the last stable version, and that's a pretty interesting one. It gives DevOps clarity: if something goes wrong in production and they want to revert the release, it tells them what versions they can revert back to. So it's a win-win situation for every one of us; that is where we wanted to head.

So if you have an enormous number of tests, I suggest you bring in some kind of system. Some paid tools have their own systems to display these results for you directly, but this is an open source effort, so maybe you can give it a try and utilize it the way I have described; the information is also there. And if you feel there is a bug, you can post it to me.

I hope you enjoyed the video, and I'll see you in the next video. Have a good one, bye!