All right, good morning. Ready to start. Just to clarify, this is marked as a beginner session, and we're hoping people want to focus on beginner material, but if it turns out people want more advanced topics, we're happy to shift gears and get into them.

So Vivek and I worked together at Hike Messenger. How many people have heard of this company, Hike? All right, that's pretty cool. Both of us worked at Hike, where Vivek was leading the QA and test automation team, and he basically set up the entire test automation infrastructure. So it was cool to work with him on this presentation, where we'll be sharing some of the lessons we learned at Hike, but also over the last 15 years of doing CI, and trying to summarize all of that.

I happened to be on the team that built the first CI server. Is anyone even aware which was the first CI server? CruiseControl. No? All right, so CruiseControl was the first CI server ever built, back around 2001, 2002, and I happened to be on the team that built it. It really started as a cron job that someone came up with, and that's how it ended up becoming a server, and then lots of things happened after that: Martin Fowler wrote about it, and it became really big. If you can just move to the next slide, please.

How many people use some kind of a CI system at work? Quick show of hands. All right, awesome, pretty much everybody, so this diagram should be no surprise to you. Basically you have a bunch of engineers, developers, and testers checking stuff into version control. It could be any kind of version control; I'm assuming Git is the most popular one that most people are using, but SVN, or CVS if anyone's still using that. You check code into your version control, and you have your CI server, which is essentially
monitoring your source control. As soon as a new update comes in, either a pull request or a commit, the CI server takes the latest code, builds it, runs a bunch of tests, packages it, does some analysis on it, runs various other steps, and finally publishes some kind of report to let people know the state of the last change pushed to version control: is it good, is it bad, should people continue building on it or not? That's, in a nutshell, the idea behind CI, continuous integration, and again I'm hoping everyone's familiar with this concept. We're going to go deeper into what it means, what the different stages are, and what some of the challenges are.

So if we move on: has anyone heard of andon? One person. All right. Where does this concept of CI come from? Software people usually copy things from somewhere else, and we copied the concept of CI from somewhere else too. Has anyone heard of the Toyota Production System? Right, all the quality folks usually have, because it's one of those things that everybody talks about in the quality world.
It's kind of the state of the art if you're talking about quality. The concept of CI actually comes from andon. Andon is a Japanese word. Think of a typical assembly line, like in the Toyota Production System: there's an assembly line on which the cars move, and each person on the line is doing something to the car. Someone's fitting a piece of equipment, someone else is tuning a piece of equipment. It's a big assembly line, the cars are moving at a steady pace, everyone does their piece of work, the next person does the next piece, and it keeps moving on.

If someone spots that something is wrong, for example they notice a drop of oil on the floor, something unusual, there's a cord overhead. They pull that cord, and it stops the assembly line. Then they pull everyone together and say, look, there's something unusual here, what happened? They do something called the 5 Whys, a root cause analysis of why this occurred. Hopefully they find the root cause, fix it, and then let the assembly line move forward. That's the concept of andon, which is what inspired people in the software world to say we need a similar system in software.

So what's the assembly line equivalent in software? Different developers and testers are pushing in their changes, right?
That's moving at a steady pace toward production, and what we have is a CI system we've set up, which analyzes these changes as they come in. If something is not right, it stops the production line, which basically means: stop, you can't keep pushing more changes; this is not going to go forward until we fix it. Unfortunately, a lot of people just fix it and move on. The idea is actually to understand why it occurred, so you can hopefully mistake-proof it and it doesn't happen again in future. That's the real idea of CI and how you improve quality with it: not just fixing the build and letting it go, but doing a root cause analysis and mistake-proofing so the same thing doesn't happen again. That's how you're building quality into the process rather than leaving it until the end. Make sense so far? Everyone's with me? All right, cool. So that's the stop-the-production-line culture. If you go back to your company, you may hear someone describe CI as "we want a stop-the-production-line culture," and hopefully you'll now be able to explain what that actually means.

All right, with that, let's look at a real-world example of what a CI system looks like, at least in our case. Probably all of you have seen this in your own respective companies, but here's a quick demo. Vivek, over to you.

Thanks, Arish. I'd like to start with a real-world example that we implemented at Hike. CI starts whenever a developer pushes code to your version control system. As soon as you make a commit to the main branch, or raise a PR against the main branch, a webhook configured on the version control system sends a ping to your CI system, and then all the stages running on the CI system are displayed back on the version control side. This is specifically good.
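The webhook hand-off just described is worth making concrete. Below is a minimal sketch, in Python, of the decision a CI server makes when a push-event payload arrives. The field names (`ref`, `commits`) follow GitHub-style push events; the watched-branch list is an assumption for illustration, not Hike's actual configuration:

```python
import json

# Branches whose pushes should trigger the full pipeline.
# Which branches to watch is a per-team choice (assumed here).
WATCHED_BRANCHES = {"refs/heads/main", "refs/heads/master"}

def should_trigger(payload: dict) -> bool:
    """Decide whether a push-event payload should start a CI run.

    Inspects the 'ref' and 'commits' fields of a GitHub-style push
    event; a push with no commits (e.g. a branch deletion) is ignored.
    """
    if payload.get("ref") not in WATCHED_BRANCHES:
        return False
    return len(payload.get("commits", [])) > 0

# Example payload, trimmed to only the fields we inspect.
event = json.loads('{"ref": "refs/heads/main", "commits": [{"id": "abc123"}]}')
print(should_trigger(event))                          # push to main: trigger
print(should_trigger({"ref": "refs/heads/feature"}))  # other branches: ignore
```

In a real setup this check sits behind an HTTP endpoint registered as the webhook URL; Jenkins, for instance, ships plugins that do exactly this filtering for you.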
So here you can see the number of checks running on the PR, along with their status, and one thing to note here: the "Merge pull request" button stays off until all the checks have passed, similar to the andon line we discussed. One of the best things we've done here is running a few benchmark tests, which are the performance tests you see here, plus some end-to-end integration tests. So as soon as a developer pushes a PR to the main branch, this is the status we see on the PR.

This is running our regression tests on a device lab. In the first stage we saw the PR being pushed to the version control system; from there the request went to the device lab, where all the integration tests run. Once all the tests are complete and all the validations are done, the CI system pushes all the feedback back to one place, your PR, because that's the easiest place for the developer to find all the feedback. Here the end-to-end tests failed, and the report is pushed directly to your PR, which blocks your pipeline. Why? Because it found a crash, with a number of tests failing. So you can see how many tests failed because of your particular code changes, and you can directly see that the app crashed with your changes before they reach the main branch.

Once you find failures and crashes on your PR, there's a link which has all the reports: information about the PR the report was executed against, the build ID, and all the test info. If you see a test that crashed, you click on that test and it gives you a video of the running test, including your test logs and device logs, so you can find at exactly what point in time you hit the crash. The idea over here is to give traceability of the runs.
So it gives you a basic idea of how your runs look: what devices they ran on, how many times a test was retried on a particular phone or on a different phone. This gives you an idea of how running end-to-end tests on the PR works, and what the reporting structure looks like.

So far, everyone with us? We just wanted to give you a real-world example. We'll now jump into a demo and show you step by step what actually happens, and deep dive into it. But so far, is this similar to what you're doing at your work? How many people actually push results back onto the PR itself? A couple of folks. If you're not doing that, we'd certainly recommend you set it up that way, because then there's one place your engineers can go and look at things, instead of having to go to multiple different places. And for whoever's approving the pull request, it becomes much easier to have all the information readily available on the pull request itself. It's just more efficient for people to work in this manner. GitLab, GitHub, Bitbucket, all of these have plugins that allow you to integrate with whatever your CI system is and push your results back. When we show the demo we'll deep dive into some of these things; we've also built some other interesting things like a PR risk assessor, and we'll demonstrate that in a minute. Before we start the demo, just one question: how many of you are running your end-to-end integration tests on the PR?
A few folks, okay. So we'll quickly start with a demo of how we've built an end-to-end pipeline for both Android and iOS. We've created two pipelines here, one specifically for Android and another for iOS. We'll start with Android; this is just the Blue Ocean plugin for Jenkins, which gives you a nice view of the pipeline.

Here I have all my Android code on my local machine. Can you guys see it at the back? There are multiple strategies you can follow for executing your CI: running on every commit, running on every PR; it all depends on the strategy you're following. This demo runs CI on each and every commit. So here we'll just try to add one file. This is the code that we have; I'm not trying to make any real changes to the code right now, the idea is just to show how a change in the repo triggers the CI. So I'll add one test file, make some changes, and add the changes. Here it made a commit to the main branch, and now I push. Over in Jenkins, this is the pipeline that got triggered as soon as the change was pushed to the main branch. This is how the CI pipeline looks; I'll take some time to go through each single stage. First is the checkout code.
It depends on how you're running your CI: if it's a PR, you take the PR and merge that code on top of master, and that's what happens here. Then you compile the code, because you can't take chances. One issue we found was that the source code was completely compilable, but once you remove a few IDs from the source code, your test code starts throwing compilation errors. So this is where you check compilation of your whole project.

Then comes the stage where we test everything. There are a few stages we've added here. One is static code analysis, which we'll talk about later. Next you run the unit tests. Once these validations pass, it moves to the next stage, where it collects data from all the static code analysis we've done, creates a report, and pushes it back to your PR. Once all of these pass, it builds your application, which can be either an APK or an IPA. Once the build is successful, we run the integration tests. At this point the build is only a testing build, a so-called test build, not a build for all the architectures, since Android has multiple, like x86 and ARM, plus any number of flavors. Here we're just building for testing, for integration purposes.

Then it moves to the integration and perf test stage, where we run integration tests, which are real Appium tests running on the pipeline, and we also run performance tests. Just a note here: these are not CPU, memory, and battery tests. These are what we call application performance tests, for example a few P0 metrics like app launch: how much time does the app take to launch?
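A perf gate like the app-launch check can be sketched as a simple baseline-plus-tolerance comparison. The metric names, baseline numbers, and 10% tolerance below are all illustrative assumptions, not Hike's actual values:

```python
def perf_gate(baseline_ms: float, measured_ms: float, tolerance_pct: float = 10.0) -> bool:
    """Pass if the measured metric is within tolerance of the baseline."""
    allowed = baseline_ms * (1 + tolerance_pct / 100.0)
    return measured_ms <= allowed

# Hypothetical P0 metrics; baselines would come from the last good main-branch run.
baselines = {"app_launch_ms": 800.0, "first_frame_ms": 350.0}
measured  = {"app_launch_ms": 1100.0, "first_frame_ms": 360.0}

failures = [name for name, base in baselines.items()
            if not perf_gate(base, measured[name])]
if failures:
    # In the real pipeline this status is pushed back to the PR and blocks the merge.
    print("PERF GATE FAILED:", ", ".join(failures))
```

The design choice worth noting is the tolerance band: comparing against an absolute number makes the gate brittle on noisy devices, so gating on "baseline plus N percent" is a common compromise.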
Any code change you push to the main branch must not increase that threshold, so it's checked here, and if anything fails, your PR gets blocked. The last part is reporting: it collects all the data from all the stages and pushes it back to your GitHub, GitLab, or whatever other version control you use.

Now, just for demo purposes, we've skipped committing some of the steps, because building the application would take time, so we've taken a prebuilt binary directly here. The integration and perf tests are over; I'd like to trigger them once more. This is a real device we're running our Appium tests on in CI. Right now, for demo purposes, we're running on a single phone; later on we'll come to the challenges we faced and what's next on this. This is just a basic calculator app running the Appium tests, and once the stage is over, you can see it in the log: it finishes, and the reporting is done at the same time. If you go over to your GitLab and click on the commit, you see all the statuses being pushed in real time. A simple way to navigate is to go straight to Jenkins from your version control system; that's where you land from the link. This was for Android; I'd like to show one more sample, for iOS. Let me start it. So, just like Android,
we have one more iOS test app over here. This is again the repo I've cloned on my local machine, and just like before I'm adding one more file, just to see whether it triggers the iOS pipeline or not. You commit back to your main branch. Here you saw a few pre-commit checks run; we'll discuss later why we've used those. I'll directly show you the pipeline. Similar to how the Android pipeline got executed, this is the iOS pipeline we've created. You just say run. The stages are almost the same; the underlying tools may differ. For example, on Android it's Lint, and the equivalent for iOS is OCLint. And for tests, here you run all your XCUITests, and you run your Appium tests here too. As soon as it reaches integration and perf, it starts running tests: it started a simulator for iOS and started doing some actions. Again, it's a sample calculator application; it just does one plus two equals three and verifies it on the UI. Once it finishes, you get the status here, and the same has been pushed back into the version control system.

So, any questions on how we ran this and how we set it up? Yep? In the demo? Okay, so yeah, we skipped that stage just for the demo. No, you wouldn't skip stages in the real world; we've done that only because compiling takes a while, and there's no point sitting here waiting for a compile. Ideally none of these should be skipped; if a stage is skipped, it basically means something's wrong. Right.
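Both pipelines run the same ordered stages with fail-fast behavior: once a stage goes red, everything downstream is skipped. A toy sketch of that sequencing follows; the stage names mirror the demo, and the lambdas are stand-ins for the real Gradle/xcodebuild/lint/Appium invocations:

```python
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[Tuple[str, str]]:
    """Run stages in order; mark everything after the first failure as skipped."""
    results, failed = [], False
    for name, step in stages:
        if failed:
            results.append((name, "skipped"))
        elif step():
            results.append((name, "passed"))
        else:
            results.append((name, "failed"))
            failed = True
    return results

# Stand-in steps; a real CI server shells out to build and test tools here.
demo = [
    ("checkout",           lambda: True),
    ("compile",            lambda: True),
    ("static analysis",    lambda: True),
    ("unit tests",         lambda: False),   # simulate a failing stage
    ("build APK/IPA",      lambda: True),
    ("integration + perf", lambda: True),
    ("reporting",          lambda: True),
]
for name, status in run_pipeline(demo):
    print(f"{name}: {status}")
```

Real CI servers usually run reporting even after a failure (a "post" or "finally" step) so the red status still reaches the PR; the sketch keeps it simple.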
The idea is that all of these should be green. We simply skipped it on iOS because it takes a lot of time to compile the code; for Android we did run it.

Yeah, good question. His question is: if I've made a change, can we target a specific set of tests to run for that change? That's something we'll discuss in the strategies for speeding up your build and getting feedback early, so just hold on to that; we will cover it.

Someone had a question here: how long does it take in real-world examples? For this particular setup, we were able to give feedback back to the developers, including running around 175 to 200 tests, in under 30 minutes. And when we expanded our mobile device lab infrastructure, including both in-house devices and cloud providers, we were able to bring it down to 20 minutes. So within 20 minutes of any change being pushed to your main branch, you get the report for it. Hold on to the related question; it's too early to jump into the difference between real devices, cloud providers, and simulators or emulators. We'll discuss that in a later section. All right. Yep?

I'm just interested to know what sort of scalability you're working with, in terms of developers contributing, and having this as one more thing slowing down a build. What sort of success have you seen it working with? Does it work with 10? I'd be looking at it to work at a scale of 70 to 80 people contributing, and adding another 20 to 30 minutes and a potential blocker is going to be a bit of a hard sell, despite the fact.
It's kind of what we're after, though. So his question is basically about scale. At Hike we were about 120-ish engineers working on this, with a fairly large setup in terms of test infrastructure and things like that. Again, there were specific reasons why everyone bought into this; we'll talk about the why of CI in the next few slides, what convinced people and how it actually helps them. Your question is valid: there will often be pushback on things like this, and that's where we'll try to cover what will hopefully help people understand and see the value in it.

Another question: when you run your CI, what do you take into account? Just functional testing, or also performance and other tests that would be applicable? We will cover that, thank you. So far, I think, let's move on, because there's a lot more interesting stuff and we're just scratching the surface. Honestly, we were supposed to finish this part in 10 minutes, because there are a lot more interesting things in terms of what challenges you'll run into and what decisions you'll need to take: when do you run on a simulator, when do you run in the cloud, how do you distribute the work. Those are the kinds of details we want to get into. So just hold your questions for now. The demo made sense? So far so good? All right, let's look at this.

So, stepping back: if you look at how an overall CI system looks, this diagram gives you a zoomed-out view. The demo only showed you maybe 10 or 20 percent of it; there's actually much more to a CI system. If we step back all the way to what's happening locally on a developer's or tester's machine: essentially, they're coding a feature.
As they code the feature, they're obviously running unit tests and component tests, and they're running static code analysis on their machine. Usually you can integrate with your IDE so it starts giving you static analysis feedback as you develop. Then you run the subset of functional tests relevant to the changes you're making: if you're working on a feature, you know it's going to touch this piece of functionality, so you run those functional tests locally on your machine.

Once all of that looks good, you raise a pull request. Once you raise a pull request, static code analysis on the changeset is triggered. Note these steps are not parallel; what's in that box is sequential, these two are sequential. Static code analysis on the changeset you made is also done on the server. You may ask why we do it again on the server if the developer is already doing it locally: it's to keep a consistent version on the CI server, to make sure you can see static analysis results across all the builds. We run all the unit tests, because again, the developer may or may not have run them; ideally they should, but on the CI server we run everything. Then it's ready for code review. We also do some automated code reviews, and we'll show you how, but then there's a manual code review where someone looks at it, and if everything looks good, they merge the pull request into your master, or trunk, whatever you call it.

Once that's done, we trigger the build on the main line, on the trunk, which is what essentially goes into production. That then runs the component tests,
the sanity tests, and, after instrumenting your code, all kinds of dynamic code analysis; then it runs your functional test cases. If that's successful, it triggers, in our case, a nightly regression, which runs the entire regression suite, does a full static code analysis of the whole codebase, and runs benchmark tests, performance tests, and detailed dynamic analysis across the entire product line. Then, if everything is successful, it produces your APK, or whatever your artifacts are, certifies them as consumable artifacts, and puts them into a repository; in our case we were using Artifactory for storing all of this. From there, anybody can take the latest build produced nightly, and this is also what would actually get shipped into production. The nightly builds build for each architecture and do all of those kinds of things, so effectively every night you have a certified build that is ready to go.

This is not the end of it. We've actually gone further: you can automate the rest of the pipeline, where you can upload the build to the App Store and things like that (there are different ways of deploying), start rolling it out to 1% of your users, monitor the data coming in from there, and if everything looks good, roll it out to the rest of the users. So the entire pipeline is automated. We'll stop here, because beyond this point we're getting into the continuous deployment stages rather than CI. So far, this is a zoomed-out view of what a CI system looks like in a typical workplace.

What are our functional tests?
We have three different levels of functional tests. We have acceptance tests, which are again driven through Appium or other tools. Then we have workflow tests, which take a whole workflow, not just a piece of functionality; for example, a new user comes on board and walks through the entire onboarding process. And then there are UI tests, which cover things like offline mode and online mode: if someone goes offline, how does it look; if you change the orientation, how does it look? All of those we categorize as functional tests.

Question: where are the UI tests for the server? We're only talking about the client side here; similarly there's a server-side build going on in parallel, and that's where what we call component tests live. Notice we're calling them component tests: an API test would fall under component tests on the server side. Hang on, let them complete, then we'll come to you.

How many PRs do we get every day? On average about 50 to 55, maybe more than twice that at peak. Hang on, we'll come to that. What's the point? There's a lot of importance here, right? It's like saying, I have JavaScript validation on my client side, so I'm not going to do any validation on the server side; let these people send stuff to the server. No, you can't do that. What if someone forgot to check in a file? Your test will fail, right? If it doesn't have a dependency, why are you writing it? In 15 years I've not seen a class that just sits outside and has no implication for any other part of the application. This seems very hypothetical, but I'll cut you off now because I don't want to get into an argument. Someone else had a question; you'll have to speak up louder. What if the PRs are dependent? How can two PRs be dependent on each other?
Yeah, so there's a back end and there's a UI, and the changes span back-end and UI. That's fine; we'll talk about that. When you have back-end and UI changes, typically the back-end builds go fast, and the UI builds run after that. There's a blue-green deployment on the server side that we do, so the latest build from the server side is what this CI points to. We have a slide later which discusses that; that is before merge. You'll be talking about before merge? Yeah, of course, because if an API has changed and you've changed your UI code to react to the new API, then obviously if the server side is not available, your build will fail, so they have to be in sync.

52 PRs per day, and if the first PR fails, the other 51 are blocked, right? So how do you manage that? Until the first one goes through, these 51 can't merge? That is the basic premise of CI: stop the production line. Why is this failing? You want to do a root cause analysis and try to understand it, and even if 51 PRs are blocked, that's okay, because you want to make sure this mistake does not happen over and over again. What a lot of companies do is take the failing change out of the line and let things go, but then you'll see the same behavior consistently recurring. What you want to do is address it: if there's a breakage, you do a root cause analysis on it. It's okay to block. What you'll notice over a period of time is that your number of breakages becomes a lot less, because everyone knows that a breakage will block everybody else.
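The merge gating being described, every required check green and the main line itself green, reduces to a small predicate. A sketch, with the set of required check names as an illustrative assumption:

```python
# Illustrative set of required checks; real names depend on your pipeline.
REQUIRED_CHECKS = {"static-analysis", "unit-tests", "integration-tests", "perf-gate"}

def can_merge(check_results: dict, main_is_green: bool) -> bool:
    """Allow a merge only if main is green and every required check passed.

    check_results maps check name -> 'passed' / 'failed' / 'pending'.
    """
    if not main_is_green:
        # Stop-the-line: while main is red, nothing else goes in.
        return False
    return all(check_results.get(name) == "passed" for name in REQUIRED_CHECKS)

print(can_merge({c: "passed" for c in REQUIRED_CHECKS}, main_is_green=True))
print(can_merge({c: "passed" for c in REQUIRED_CHECKS}, main_is_green=False))
```

GitHub and GitLab implement this natively as "required status checks" / "merge checks"; the sketch just makes the rule explicit.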
Yeah, so we have to finish this process much earlier, before release? I think so. We're saying a lot of this needs to happen on the local dev environment, and if people don't do it there, it gets caught here. What you don't want is a position where people are just chucking in code with no way to verify the quality of what's coming in. We want to make sure that whatever gets into the main trunk, your main branch, is always in a deployable state. Sorry, we can't take too many questions or we won't get to the core of the topic. I'm happy to, but you know how it is.

So that runs locally; that runs here, when you do the merge. But we've also made some changes where we may run part of it before we do the merge as well; it depends, and that's where we'll get into more details. It depends on what kind of change you've made: you don't need to run Appium tests for every change that is made. That's how you speed up your build, and that's where we'll talk about CI strategy and things like that.

All right, so the question was asked earlier: why bother? How many people have this question? Otherwise we can skip this. Nobody? A few people. All right, we'll try to cover this very quickly. If you jump to the next slide: how many people have seen this in your company at some point in the last 12 years? Perfect. What about the next one? Not many people, but you often see that things are not in sync across everyone, and that becomes visible too late, and by that time you don't have enough time to fix it, so you just kind of say, we'll fix that next release.
Let's move on. And that causes a lot of different kinds of problems. I could talk about a lot more reasons, but the mindset we're trying to create here is essentially: get feedback as early as possible, so we can fix things as soon as possible, and we're not building on top of broken things, which makes it very hard to fix stuff later. Detect that something is wrong as early as possible; that's what people talk about as the whole shift-left mentality, pulling things as early as possible. Instead of waiting for people to finish the entire feature and then integrating, why not start integrating pieces of the feature as early as possible? Instead of waiting to run performance and regression tests until the entire thing is done, why not start running them as early as possible?

The whole idea is: do things early, get feedback early, so people can course-correct and have enough time. Once you start doing this, you notice that because your feedback cycles are improving, your overall delivery time is reducing. You're catching things much faster, you're getting feedback much faster, and you're getting people to collaborate more effectively. You also see a big reduction in waste, for example two different sets of developers trying to change the same file in two different directions and then having to roll back. Especially if people are raising 50 PRs every day, a lot of this feedback comes in early and people collaborate much more effectively. And finally, you'll see the overall quality of your product going up, because you're building quality in and not waiting till the end, right?
So that's, in a nutshell, why we think this is important. Now let's go back and deep dive into some of these things. How much time do we have left? 20 minutes? 15 minutes. All right. Just to recap the demo, this is what the pipeline looks like. We're going to talk specifically about static code analysis and the commit risk advisor, and how they help the various stakeholders involved in taking the code to production.

The first one is static code analysis. I found it very useful in reducing my PR review time. Moreover, it makes me a better developer by automatically detecting areas which need refactoring; it also detects duplicate code and looks out for code smells, so it basically helps maintain high-quality code. Running this locally with every code change, I'm getting continuous feedback, and that's the end goal of why we're setting up the whole CI: shift left. Instead of waiting for someone else to review and give you feedback, get it right now.

Right now we're using SonarQube, but there are a lot of tools we've integrated on the Android front: one is PMD, one is Infer from Facebook, and there are other tools on the market that you can just plug into your CI system to get a report. My suggestion: it's better to take the rules from all those tools, integrate them in one place, and keep them in Sonar, so Sonar keeps track of everything and all the rules are defined in one common place.

There are also pre-commit hooks you can define, and a pre-commit hook can reject code that does not meet the rules when you're committing. You can also integrate some of these static analysis tools with your IDE, so you get the feedback there; for example, WebStorm has pretty good inspection built in, and maybe Visual Studio Code doesn't
have as good, right? So what you can do is get plugins that allow you to do this. In some cases you may not have a plugin for the IDE, but you can have command-line (CLI) tools for things like this. Sonar, for example, you can actually run from the CLI, and that will run all the rules locally before you actually commit. You can make it a habit for developers to do this regularly so they don't get a surprise at the last minute. Again, this whole idea is to build the shift-left mentality: try and get feedback as early as possible instead of waiting till the end, by which time you would have done a lot of work and then have to do a lot of rework because of it.

So going back to this commit risk advisor. As previously discussed, what we are doing here is collecting all the feedback from the various static analysis tools and various other analyses, and this is something that we push back to Git: which modules and files have been changed, what the metadata of those files is, what the complexity is, what the code coverage is. The idea here is that whenever you submit your PR for review, the reviewer looks into this, and it basically helps in identifying how much time you want to spend on this particular PR and how much risk this PR will bring into your main branch. So this is how it looks, and the best thing you can get from here is the Sonar report; it gives you a wonderful report.

So we have pretty much covered what a basic CI infrastructure looks like from a very high level. Is everything clear till now? Any questions? Any questions? Let's get into the next section because I think that's the important part. So yes, whatever was marked as component tests:
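To make the commit risk advisor idea concrete, here is a rough sketch of the kind of scoring such a tool might do: combine per-file complexity, coverage, and churn into one number a reviewer can glance at. The weights, field names, and file names here are all made up for illustration; the real advisor described in the talk aggregates its own set of signals.

```python
# Illustrative risk score for a PR: higher complexity and churn raise
# risk, higher test coverage lowers it. Weights are arbitrary.

def file_risk(complexity, coverage, lines_changed):
    """coverage is a fraction in [0, 1]."""
    return (complexity * 0.5 + lines_changed * 0.2) * (1.0 - coverage)

def pr_risk(changed_files):
    """changed_files: list of dicts with the metadata the CI collects."""
    return sum(
        file_risk(f["complexity"], f["coverage"], f["lines_changed"])
        for f in changed_files
    )

pr = [
    {"path": "payments.py", "complexity": 12, "coverage": 0.4, "lines_changed": 80},
    {"path": "utils.py",    "complexity": 3,  "coverage": 0.9, "lines_changed": 5},
]
print(round(pr_risk(pr), 2))  # → 13.45
```

Even a toy formula like this gives the reviewer the triage signal the talk describes: a heavily changed, poorly covered, complex file dominates the score, so the reviewer knows where to spend their time.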
those are API tests. So this was for mobile, and when it goes on to a server, the API tests... No, unit tests are unit tests; component tests are API tests for server-side code. For Android or iOS it would be a component that you're essentially testing in isolation. You're not actually bringing up the whole app, so you'd use, for example, Robolectric for Android, and then you'd run that particular component, that particular activity for example, and run tests against it, right? So if you're a server-side developer, you're essentially running those API tests locally, and then you're also running them here on your pipeline; this is your entire set of API tests.

People talk about API tests and contract tests, and they differentiate between them, right? So there are API tests and there are contract tests. Contract tests reside on the client side, in terms of how it consumes an API, so those tests you would also run on the respective builds of the consumers. We call those component tests too; we are using slightly generic terminology to say these are all different kinds of tests that fit into your component tests.

So all the things we have talked about come with a lot of challenges, and at every moment you have to make some decisions. Because we are running a business, the first decision we have to make is how much we are going to invest in the system. So there comes the cost, which has a lot of parameters: how much you want to invest in building a mobile infrastructure of your own, whether you want to go to cloud infrastructure, whether you want to use Jenkins or go with a cloud CI like CircleCI or Travis CI, and so on. It covers a lot of things. And how do we reduce it?
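The consumer-side contract test distinction made above can be sketched as follows. This is a minimal illustration, not a real contract-testing framework (tools like Pact do this properly): the endpoint shape, the field names, and the stubbed response are all invented for the example. The point is only that the client encodes the fields and types it depends on, and checks provider responses against that contract.

```python
# Sketch of a consumer-side contract test: the client declares the
# fields and types it relies on, and verifies a provider response
# (stubbed here; in practice recorded or fetched from a test
# instance) against that contract. Schema is a made-up example.

CONTRACT = {"id": int, "name": str, "active": bool}

def satisfies_contract(response, contract):
    return all(
        key in response and isinstance(response[key], typ)
        for key, typ in contract.items()
    )

stubbed_response = {"id": 7, "name": "alice", "active": True, "extra": "ok"}
assert satisfies_contract(stubbed_response, CONTRACT)       # extra fields are fine
assert not satisfies_contract({"id": "7"}, CONTRACT)        # wrong type, missing fields
```

Because the contract lives with the consumer, running this on the consumer's build (as the talk suggests) catches a provider changing its API out from under the client, before integration.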
Sorry, but is cost a concern for people in their companies? Only one person? Two? Yeah, cost, time: these are all parameters you have to balance out, right? You want to pay a certain amount of money for a certain frequency of feedback. So when it comes to cost, what kind of things could you play around with to reduce it? Just one answer: save your resources, which basically includes how many machines you are consuming while running tests and for how long, how much device usage you are doing on cloud labs, and how much usage you are doing in in-house labs.

How do we handle it? I would like to talk about two main things: one is test priority, and the other is CI strategy. On test priority, I think a gentleman asked a very good question about which tests to run when. So this is where prioritization comes in. For each PR, you analyze the PR, you analyze which are the risk areas, which are the areas this code can affect, and here comes test prioritization. You can always handle it using annotations or a proper package structure, you can use tags or things like that, but that depends on people's intelligence to categorize things correctly. We can also do fairly simple but intelligent things. For example, if a test failed in the last build, does it make sense to run it first? Or should you go through the same order of all your tests and run it whenever it's supposed to come, right? Just by simply reordering your tests based on which tests failed last, you can speed things up and reduce your cost.
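The "failed-last-build first" reordering described above is a few lines of code. A minimal sketch, with invented test names, assuming the CI can hand you the set of tests that failed in the previous run:

```python
# Stable-sort the suite so tests that failed in the previous build run
# first; original order is preserved within each group, so the rest of
# the suite runs in its usual sequence.

def reorder_failed_first(tests, failed_last_build):
    failed = set(failed_last_build)
    # False (failed) sorts before True (passed); Python's sort is stable.
    return sorted(tests, key=lambda t: t not in failed)

suite = ["test_login", "test_search", "test_checkout", "test_profile"]
print(reorder_failed_first(suite, failed_last_build={"test_checkout"}))
# → ['test_checkout', 'test_login', 'test_search', 'test_profile']
```

The total runtime is unchanged, but on a red build the pipeline breaks much earlier, which is exactly the feedback-time saving the talk is after.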
Now, back in 2006, I built an open-source project called ProTest, which stands for "prioritized test," and which essentially looks at a bunch of different parameters to dynamically prioritize your tests and execute them in that order. The kinds of things we would do: we would look at which tests failed last and prioritize them; if you change a particular file, you build a dependency graph, and then any test that depends on that file runs first, before you run the other tests. These are all standard tools available these days, so you can basically get your prioritization done with them, but you can even write them very simply yourself. You can also categorize based on the time a test takes: if two tests have the same priority, you might want to run the test that runs faster before the one that runs slower. Once you do these basic three or four things, you will notice almost a 50% drop in the feedback time when things go wrong. Obviously, if everything works fine, then it's going to take the same time, but when things go wrong, you start breaking out much faster.

We'll catch up offline; I've been timed out. All right, yeah, we ran out of time. Over time. All right, so we will be around, we will answer more questions, we will hang out here. Sorry, we took too many questions and ran out of time, but we'll be around and we'll answer more of these. The slides will go up, so you should be able to see the rest of them. With just a little bit more time we had just started on the most interesting pieces. This session didn't serve the purpose of beginners? That's what we were doing all this time; now it's advanced topics. So far, what we've covered was all beginner stuff, right: why CI, what's its purpose, how do you do it. You guys have to decide, right?
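As a closing footnote, the dependency-graph plus duration prioritization described for ProTest above can be sketched like this. The test names, the dependency map, and the durations are all invented for illustration; a real tool would derive the graph from imports or coverage data and the durations from past runs.

```python
# Sketch of dependency-based test prioritization: tests whose
# dependencies include a changed file run first, and among equals the
# faster test runs before the slower one.

DEPENDS_ON = {
    "test_auth":   {"auth.py", "session.py"},
    "test_cart":   {"cart.py", "pricing.py"},
    "test_search": {"search.py"},
}
DURATION = {"test_auth": 4.0, "test_cart": 1.5, "test_search": 0.8}  # seconds

def prioritize_by_dependencies(changed_files):
    changed = set(changed_files)
    def key(test):
        affected = bool(DEPENDS_ON[test] & changed)
        # Affected tests first (False sorts before True), then by speed.
        return (not affected, DURATION[test])
    return sorted(DEPENDS_ON, key=key)

print(prioritize_by_dependencies(["pricing.py"]))
# → ['test_cart', 'test_search', 'test_auth']
```

Combining these signals (last failure, dependency impact, duration) is what produces the roughly 50% feedback-time drop on red builds that the talk mentions: the tests most likely to fail run first, so the pipeline stops early.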