Welcome, everyone, to the next session, "Build fast, ship faster, and iterate better," presented by Deepika Upadhyay. Just one note: if you have any questions during the session, please use the Q&A section. Great, thanks. Cool, let's get started. As stated, we are going to discuss the build and release process and the learnings I have gathered working with a large open source project. I work as a software engineer on Ceph storage. With that, and without further ado, let's get started. First, let's take a look at the problem. To frame it as a story: we have Alice and Bob, two developers who decide to collaborate on a project. They start working, get the basics in, and decide to make their project open source. The code is open, users find the project interesting, and they start using it. With the adoption of the project by users, Alice and Bob start getting complaints and issues: "the code is not working in my environment," "the code is breaking," "the code is not compiling," "the code is buggy." Alice and Bob have only finished the first stage of the project; they are still trying to figure out how to make sure these problems are not repeated, and what they have to do for that. So they seek guidance and identify what they need to work on. They decide they want some kind of standardization and some way of maintaining code quality, so that the project they are working on becomes scalable and others can also study and contribute to it. They want metrics for code quality, and they want the project to be less buggy by the time it reaches the users.
So they want some way of identifying and catching those bugs early in the development process, and they want to test in the environments in which their users actually run the project, so that issues tied to different build environments are kept out of the picture. This is what they find they need, and while addressing these needs, they read about continuous integration and deployment pipelines, often discussed under the banner of DevOps. What is it, actually? Let's look at the definition. It is a coding philosophy, a set of practices that drives development teams to implement small, incremental changes to the code base. The whole philosophy states that you make incremental changes to the code base so that, if issues arise, those changes have been thoroughly tested for them, and if something is missed, you can go back to a known-good version. That's why we use version control in these repositories. While developing, we also want these small changes to be tested on various platforms, with the team validating them with all the thorough testing they can think of. That's what the continuous integration process does. The second thing we want to ensure is that these changes reach the customer in a good manner: we want them built and packaged in an automated fashion. We don't want developers to spend their time on these processes, so we employ automation here, in such a way that tested changes are consistently built, packaged, and shipped to the customer's or user's environment. This leads to an overall improvement in software quality, and that's the purpose of a CI/CD pipeline. So, as I said, this is the pipeline.
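As a minimal illustration of "going back to a known-good version" with version control, here is a generic Git sketch (not specific to any particular project; paths and messages are invented for the demo):

```shell
# Throwaway repository with a good commit followed by a bad one.
rm -rf /tmp/ci-demo && mkdir -p /tmp/ci-demo && cd /tmp/ci-demo
git init -q .
git config user.name demo
git config user.email demo@example.com

git commit -q --allow-empty -m "good: initial state"
echo "regression" > feature.txt
git add feature.txt
git commit -q -m "bad: change that broke users"

# A bug report comes in: revert the bad commit to restore the good state.
git revert --no-edit HEAD
test ! -e feature.txt && echo "back to the known-good version"
```

Because every change landed as its own small commit, the bad one can be undone surgically instead of rolling back the whole project.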
We often see dedicated roles related to DevOps, but as I see it, it is more of a mindset that every person on the development team should have, because the active developers identify the shortcomings best. It is everyone's role to contribute to this whole ecosystem actively. Of course, with dedicated roles the shortcomings could be identified even better, but I think it's primarily a mindset every individual should have. If the project is in its initial stage, we worry: should we invest time in developing this pipeline? Is it worth it? I'll say it is worth it, for the points listed here. Basically, a stable project equates to greater customer satisfaction and a better user experience; if the user experience is better, your word of mouth travels, and your project gets used and adopted well. Production will be less buggy, and less buggy production means better developer productivity and less downtime for users, which means you are on the right track from the very start. And if you want to be a little risky with experiments on the code, you are doing that in a small, calculated fashion, so the cost of experimentation drops considerably; the pipeline becomes more forgiving of you and your risks. As stated, continuous integration doesn't guarantee that your project will be bug-free, but it tends to ensure that bugs are identified early, and it creates an ecosystem in which, once bugs are identified, we have the tools to find them more easily. That's why we have to continuously feed back into the pipeline, and that's why I would urge everybody to invest in a continuous integration and deployment pipeline. What are the essentials such a CI/CD pipeline should have? Let's take a look at that part.
First of all, I'll quote the philosophy where it all begins: leave the campsite in a better state than you found it. That simply encapsulates that you have to feed back into the whole process: if and when you identify a shortcoming, a buggy piece of code, something that hurts production, you go back and address how testing for that point can be incorporated. That's where it begins. Now let's look at the tools and design philosophies you can incorporate. The first is the stable-branch architecture. The whole point is that we use a development branch, master. We want to make changes to the code base, so we open all changes against master and land them there first; a second feature is added or enhanced, and we add those changes to master too. Master is continuously under test, and as things go on we find bugs in these features, so bug fixes are again added to master. Once we know a feature has stabilized, we cherry-pick those changes: picking up those commits and adding them to a stable branch. You keep continuing the same process, stabilize a feature and then add it to the stable branch, and at a certain point we release this stable branch. That's what stable-branch adoption means. The second is using feature toggles. Basically, we let users adopt a feature without worrying about what happens if the feature breaks: we add a feature toggle so that if the feature misbehaves, it can be turned off and you revert to the old behavior. If the feature is working fine, we ship it with the toggle on, and in later stages, once the feature has been adopted by more customers, we can remove the toggle.
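The stable-branch flow described above can be sketched with plain Git (branch names are illustrative; `git init -b` assumes Git 2.28 or newer):

```shell
rm -rf /tmp/stable-demo && mkdir -p /tmp/stable-demo && cd /tmp/stable-demo
git init -q -b master .
git config user.name demo
git config user.email demo@example.com

# Cut a stable branch at the last release point.
git commit -q --allow-empty -m "v1.0 release point"
git branch stable-1.0

# Development continues on master: one stabilized bug fix, one unstable feature.
echo "fix" > fix.txt && git add fix.txt
git commit -q -m "bugfix: stabilized on master"
FIX_SHA=$(git rev-parse HEAD)
echo "wip" > wip.txt && git add wip.txt
git commit -q -m "feature: not yet stable"

# Only the stabilized commit is cherry-picked onto the stable branch.
git checkout -q stable-1.0
git cherry-pick "$FIX_SHA"
ls    # fix.txt is here; wip.txt stayed behind on master
```

The stable branch only ever receives commits that have already been proven on master, which is exactly what makes it releasable at any point.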
The third thing is code reviews and pair programming. Then version control: you can use Git, SVN, Mercurial, any version control of your choice. Then build artifacts and automation: tools like Jenkins and GitHub Actions can help automate the whole process of compiling the code and building packages, so we can offload that work to these automations. Then general testing: once your code is in, you should make sure general testing is performed, beginning with unit tests, integration tests, API tests, regression tests, and performance tests. These are the essentials we want to bring into our pipeline. How do we do that in Ceph? Let's take a look. In Ceph, if a pull request is opened against the master (development) branch, we track it in our project tracker, tracker.ceph.com, which uses Redmine as its tool. We track a bug or a feature there so that nobody duplicates work already done, the feature is tracked, and we can recognize a similar failure if we see one in the future. So we track the pull request in Redmine, and while the pull request changes are being reviewed by the reviewers, we simultaneously have checks running for certain things. What those things are is covered here; for tooling we use Jenkins as well as GitHub Actions, which we'll cover shortly. We perform static checks, verifying that the code quality is up to the mark and the styling guide is followed as per Ceph's guidelines; we check that the commit carries a Signed-off-by line, so we know who the owner of the code is; and we do build testing, ensuring the code changes are not breaking any tests, along with API tests and unit tests. All of these are employed on the PR itself: when a pull request is opened, these checks must succeed for the changes to merge.
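As one concrete example of these checks, a Signed-off-by gate can be written as a small script like this (a sketch of the idea, not Ceph's actual CI code; the demo repo and commit are invented):

```shell
# Fail if any commit in the given range lacks a Signed-off-by line.
check_signoff() {
  for c in $(git rev-list "$1"); do
    if ! git log -1 --format=%B "$c" | grep -q '^Signed-off-by: '; then
      echo "commit $c is missing a Signed-off-by line" >&2
      return 1
    fi
  done
  echo "all commits signed off"
}

# Demo: one commit created with `git commit -s`, which appends the line.
rm -rf /tmp/signoff-demo && mkdir -p /tmp/signoff-demo && cd /tmp/signoff-demo
git init -q .
git config user.name demo
git config user.email demo@example.com
git commit -q -s --allow-empty -m "docs: example change"
check_signoff HEAD
```

In a real pipeline the range would be something like `origin/master..HEAD`, so only the commits on the PR branch are inspected.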
Once these checks pass and the reviews are approving, we have a Jenkins trigger: pushing the pull request's branch triggers Jenkins, and build artifacts are produced by a Jenkins job. What this Jenkins job does is build everything from scratch: all the packages for the various distros our users use, as you see, CentOS 8, Ubuntu, Windows, and we keep track of how the builds are looking in the Jenkins dashboard. Once the packages are available, we do the integration testing in teuthology. We use a testing framework of our own called teuthology, and with the packages just built we create scenarios similar to what our users do. If we want to stress-test the input/output of our project in a network-intensive environment, we can do that; we can introduce network failures and perform all kinds of operations, long-duration runs and short-duration runs. These jobs pick up different distros, CentOS 8, RHEL, and others, from the packages just built. As you see, there are some failures; we always record those failures, and if they are not related to the code change, the PR is merged. If they are, the whole cycle of updating the PR and running all these tests repeats. Once the PR is merged and the feature has stabilized, we backport those changes, bringing them to the stable branches, performing the whole testing process on the stable branches as well, and collaborating using Trello and the project tracker to get the feature into the stable branch. Once these features are in and it's time for shipping, we update the release notes, mentioning the notable changes, and the packages are built again, going through the whole process, with checks from the team leads on how the tests are looking. We finally release the containers as well as the packages on the Ceph mirrors. That's how the
life cycle of a code change looks in a large project like Ceph. Now let's go back to the slides and some of the utilities that come in really handy for large-scale projects. The first thing I would suggest using is make. Make is really good if you are working with C/C++ projects, or any project with shell-script-based operations, and you don't want to redo the redundant work of compiling everything every time. We use CMake, which is a kind of wrapper over make: it generates Makefiles to do the same work, but in a more project-independent, environment-independent manner. I think we are short on time, so I will just mention the remaining utilities. GitHub Actions is a really good tool: it deploys a container, and based on events such as a pull request being pushed, opened, or synchronized, we can run checks on the changes. An example from our code base: if a pull request's branch is outdated, the GitHub Action will flag it with a needs-rebase label. It does this when a pull request is opened, synchronized, or reopened, and based on the API responses it checks and updates whether a rebase is required. Jenkins is another really good utility; like GitHub Actions, it provides the workflows, and all the builds we saw in the previous slides are Jenkins-based. It's very robust, and good community support is there. Now, to summarize what you should be looking for and how large-scale projects incorporate a build and release process, I'm going to suggest these dos and don'ts. Use version control. Have a project tracker: you can use Redmine or any other project tracker, and even GitHub issues work for the initial period, but track the issues and track the features. Have peer reviews; the condition could be at least two peer reviews. Documentation and styling standards should be adopted. And of course, do not work again
and again on the build artifacts and everything: those pipelines should always tend toward automation. Ensure that if the code is breaking, you have monitoring in place to tell where the failure is coming from; we use a utility called Sentry, and you can read more about it. Slightly off topic, but incorporating telemetry, tracking how users interact with the functions in your project, also helps: you can ask users to agree to sending those metrics, and that will improve how you identify user adoption, among other things. Nightly builds and testing: all those tests we saw could run as nightly builds to check that there are no failures and everything is on track, and if there are failures, we identify the point at which they start failing. Then, once you have this enormous pipeline in place, it should evolve to support the increasing workload, so the thought process of constant evolution and constant feedback should always be there. Then use stable branches and feature toggles, as we mentioned, and if you ever find manual or repetitive tasks, always look to automate them, and always try to make the developer experience smoother through this process. As for challenges, we are working on Kubernetes-based environments, but I think we are out of time, so I'll drop this; these are general tools that you can use. Any questions? Okay, I don't see any questions in the questions section. If you have any, we still have two minutes, so feel free to put them there; if not, you can use your last minute to wrap up the session. Feel free to reach out to me; my contact details are here as well. Okay, I don't see any questions, so if any question comes to mind, you can move on to WorkAdventure, a virtual platform where you can meet Deepika and talk about these topics. I want to thank you very much for your presentation, and that's it. Thank you.