Okay, so hi everyone, let's get started. There is a lot of ground to cover, so sorry if I speak a little fast. Again, my name is Joel Travieso. I'm a senior Drupal engineer at Four Kitchens. You have my contact information there below, and you'll have it at the end anyway. Welcome to my presentation: continuous integration and other tricks to reach heavy automation.

This presentation is basically about sharing some experiences, code, and ideas about stuff we have been including in our continuous integration workflow for a year or two, on a project we have been working on during that time. It's a pleasure that our colleague, who is actually the product owner, is in the audience today. We are very proud of what we have achieved in that time in our continuous integration workflow, particularly using CircleCI in this case, and we wanted to share some of the cool stuff we are doing as part of that.

One of the first challenges we faced when we started building the pipeline that allowed our devs to focus on just the development part of the process, and not on how that code is transported, was a problem we were facing with Pantheon multidevs. Basically, we were using a workflow where we created a multidev environment for each pull request we opened in GitHub. The problem was that every time we merged one of those pull requests into the master branch, we were left with a leftover multidev in Pantheon. At that time, and I'm not sure if that continues to be the case, there was a hard limit of ten multidevs, so once you reached it you basically couldn't build any additional multidev environment, and all our CircleCI builds would fail from there on. So that started to be a very annoying problem.
It could escalate very quickly. So the first attempt at a resolution was basically going to Terminus and trying to find something to include in the config.yml file that could tackle that process. There is actually a command in the Build Tools library to remove multidevs associated with closed pull requests in GitHub, but it wasn't working at that time, so we took a more custom approach. We had everything we needed, basically, using that same Terminus Build Tools library. You can list all the multidevs you have, and it gives an output like the one you are seeing on the screen. In the first column you can see the names of the multidevs, which are named after the pull requests they represent. The naming convention we are using is basically "pr", then a hyphen, then the ID of the pull request that corresponds to that particular multidev. So basically we were able to tie each multidev to a particular pull request in GitHub.

Now, we didn't know which of those pull requests were already closed or merged and which were still active, so for that we needed to communicate with the GitHub API. The script we wrote was basically executing that first command we see above, parsing the output to get the ID of the pull request associated with each multidev, and with that ID querying the GitHub API. That gives us the whole information about that particular pull request, including whether or not it was still open, and in case it was closed, which is the last line, running another Terminus command to delete the multidev, because it's no longer useful. You can see it on line 17 there, well, more like line 19: that function get_pr_object, which is what we are seeing here.
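The cleanup flow just described might look something like this sketch in Bash. It is not the project's actual script: the variable names (TERMINUS_SITE, GITHUB_REPO, GITHUB_TOKEN) and helper names are assumptions, and jq is assumed to be available for parsing the API response.

```shell
#!/usr/bin/env bash
# Sketch of the multidev cleanup: list pr-* multidevs, look up each pull
# request's state on GitHub, and delete the multidev when the PR is closed.
set -euo pipefail

# Extract the pull request ID from a multidev name like "pr-123".
pr_id_from_multidev() {
  echo "${1#pr-}"
}

# Ask the GitHub API whether a pull request is "open" or "closed".
pr_state() {
  curl -s -u "ci-bot:${GITHUB_TOKEN}" \
    "https://api.github.com/repos/${GITHUB_REPO}/pulls/$1" | jq -r .state
}

cleanup_multidevs() {
  # List multidev names and keep only those following the pr-* convention.
  for env in $(terminus multidev:list "$TERMINUS_SITE" --field=id | grep '^pr-'); do
    id=$(pr_id_from_multidev "$env")
    if [ "$(pr_state "$id")" = "closed" ]; then
      terminus multidev:delete "$TERMINUS_SITE.$env" --delete-branch --yes
    fi
  done
}

# Only touch the network when invoked with --run, so the helpers stay testable.
if [ "${1:-}" = "--run" ]; then cleanup_multidevs; fi
```

The split into small functions is only so the naming logic can be exercised without credentials.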
That get_pr_object function is basically a curl call, a GET request to the GitHub API. You can see we are using credentials for GitHub, and we will be using a lot of credentials to communicate with many of the services we use. Basically, you can set those credentials up in the CircleCI interface, in the environment variables screen, so they populate from there to anywhere you need them. And that's the way we solved that problem. It has run pretty smoothly up to now.

Below, it's not very visible there, but you can see a very quick description of the GitHub API. There are a lot of endpoints: endpoints for pull requests, issues, commits, reviews, and many others. It's very versatile, so look it up; it gives a lot of opportunities.

Now, some of the challenges you face when you want to get to extreme automation in your process. One of the most important is the difficulty of prioritizing this kind of work. The problem we just described was a very difficult one, something we really needed to address, but not all problems are like that, so sometimes it's really difficult to get priority for this kind of work. Also, clients sometimes fear unexpected consequences. With this same solution we were describing at the beginning, we faced an issue where, from time to time, the script would remove all multidevs, not just those whose pull requests were closed. That stays in the mind of the client sometimes, and it creates some problems when you want to extend the automation in a process.
Of course, there is always a lot of resistance to change: don't touch the whole thing if it's not failing. And there is the problem of over-automation, which is a real problem, because automation is usually very fun, so sometimes you find yourself automating just for the sake of automating, just because it's fun, and you really have to look at whether or not you should be spending that time on automation.

This is one of those cases where we struggled to get priority assigned to the problem. Basically, our Git flow consists of a master branch and a develop branch, which is a kind of intermediate release branch in the middle, and we have all our pull requests pointing to that develop branch. We release on a timely basis: we don't release when we get to a particular level of completion, we just release every week or every two weeks (sometimes we change that cadence), no matter what we have by that point, as long as we have something. So basically we use the same release branch for all releases, and all our pull requests point to that release branch, not to the master branch.

One of the things we wanted to make sure of is that no branch is ever pointing to the master branch; all of them should point to that develop branch. What we do for that is, again, query the GitHub API, asking for the base and head branches of each pull request, and make CircleCI fail on every build where those conditions are not met: basically, if the base branch is master, the head branch must be develop. We also included an exception: if you include the word "hotfix" in the title of the pull request, we allow it, because we identified that we would have some cases where we actually want to merge a pull request directly into the master branch, not let it go through the release process.
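The check just described could be reduced to a small gate like this. It is only a sketch: fetching the base, head, and title from the GitHub API is omitted, and every name here is illustrative.

```shell
#!/usr/bin/env bash
# Decide whether a pull request's branches respect the flow:
# anything based on master must come from develop, unless the
# title marks it as a hotfix.
set -euo pipefail

check_branch_policy() {
  local base="$1" head="$2" title="$3"
  if [ "$base" = "master" ]; then
    # Allow master-based PRs only from develop, or when flagged "hotfix".
    if [ "$head" != "develop" ] && ! echo "$title" | grep -qi "hotfix"; then
      return 1
    fi
  fi
  return 0
}

# In the CI step, a failure here should fail the whole build, e.g.:
#   check_branch_policy "$BASE" "$HEAD" "$TITLE" \
#     || { echo "PRs must target develop"; exit 1; }
```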
And here is a similar kind of solution, coming from a similar problem. Basically, we merge that develop branch into the master branch every week or every two weeks, but the develop branch is the one that feeds the dev environment in our Pantheon site, so as long as we don't have a pull request coming off that develop branch, our dev environment in Pantheon doesn't get updated. So basically, every time we merge the develop branch into master, we need to make sure that the next time there is a difference between both branches, a pull request is created from the develop branch to master.

So what we do is, again, communicate with the GitHub API; you can see the quick script there. It is different because in this case it's not a GET request, it's a POST request, because we are sending information: the title and the body of the pull request, and the head and base branches of the pull request. We are not actually committing anything; we are not actually changing anything. This request will basically fail either in the case that there is no difference between the branches, or in the case that there is already a pull request open between those branches, but it will not make the whole CircleCI build fail. So it's very safe to just call it, and we call it only in the cases where we are deploying to the dev branch, which are the only cases where we actually need to recreate that pull request from develop to master. So we don't call it always; we call it with a reason, at the right moment.

One of the questions that comes to mind when dealing with this is where to run these advanced automation steps. One of the options is the one we have been talking about all along.
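Before moving on: the develop-to-master call described above might be sketched like this. GITHUB_REPO, GITHUB_TOKEN, and the title and body strings are placeholders, not the project's real values.

```shell
#!/usr/bin/env bash
# Recreate the develop -> master pull request. GitHub rejects the POST when
# the branches are identical or such a PR already exists, and "|| true"
# keeps that harmless, which is why it is safe to run on every dev deploy.
set -euo pipefail

# Build the JSON payload with title, body, head, and base, as in the talk.
release_pr_payload() {
  printf '{"title":"%s","body":"%s","head":"%s","base":"%s"}' \
    "Release: develop into master" \
    "Opened automatically by CI after the last release merge." \
    "develop" "master"
}

create_release_pr() {
  curl -s -u "ci-bot:${GITHUB_TOKEN}" \
    -X POST -d "$(release_pr_payload)" \
    "https://api.github.com/repos/${GITHUB_REPO}/pulls" || true
}

if [ "${1:-}" = "--run" ]; then create_release_pr; fi
```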
That is making it part of your continuous integration and delivery platform, in this case CircleCI, but you could be using Travis or Jenkins or anything else. The obvious advantage of that is having a unified vision of everything you're doing every time you build an environment. If you have stuff in places like Pantheon Quicksilver or Acquia Cloud Hooks, you have automation steps happening on two different fronts, so making it part of your continuous integration config, that YAML file, kind of unifies the process, which is good. The good part of using cloud hooks, of course, is consistency across the board: stuff gets done every time there is a code deploy or a code merge, and it doesn't depend on whether or not CircleCI is the trigger of the event. Of course, there is another difference to consider, and that's that they don't have the same context. You don't have the same information available in those two places, so sometimes it's a deal breaker: you can't just use either of them.
You must choose one of them. One of those examples is communicating through Slack, the classic communication where your build fails and you get notified that your build failed. In this case, in our project, we have a different approach: we don't just send a notification to a general channel saying the build failed, we notify the particular user who is the author of the pull request whose build failed, so it doesn't get as annoying as it is when you just send notifications to the general channel.

Obviously we're using CircleCI 2.0. You can see there we're using the `when` attribute on a step, which is an attribute that allows us to react to the failure case, among others; in this case, the failure of the whole build. What we are doing in the script: at the beginning we are pairing GitHub usernames to the IDs of those same users in the Slack workspace; then we are requesting the pull request object, getting the author of that particular pull request, and sending a POST request to the Slack API with the ID corresponding to that same author and a message for that person in Slack. You can see there the function that does that: it basically sends a POST request to the Slack API with the text of the message you want to send, and the channel, which in this case is the ID of a Slack user.

The Slack API, again, is also very wide; you can do a lot of stuff with it, it's very versatile. You can handle channels, files, conversations, groups, private messages, even stars. So look it up if you're interested in that.

The Jira API is pretty similar. You can see the canonical form above, and below an example of getting all the comments associated to a particular issue.
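Going back to Slack for a moment, the per-author failure notification could be sketched like this. The username-to-ID pairs, the token name, and the helper names are all hypothetical.

```shell
#!/usr/bin/env bash
# Notify the pull request author directly in Slack when the build fails,
# instead of spamming a shared channel.
set -euo pipefail

# Pair GitHub usernames with Slack user IDs (sample values only).
slack_id_for() {
  case "$1" in
    octocat) echo "U024BE7LH" ;;
    hubot)   echo "U0G9QF9C6" ;;
    *)       echo "" ;;
  esac
}

notify_author() {
  local github_user="$1" text="$2" channel
  channel=$(slack_id_for "$github_user")
  [ -n "$channel" ] || return 0   # unknown author: skip quietly
  # Slack's chat.postMessage accepts a user ID as "channel" for a direct message.
  curl -s -X POST \
    -H "Authorization: Bearer ${SLACK_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"channel\":\"${channel}\",\"text\":\"${text}\"}" \
    https://slack.com/api/chat.postMessage
}

# In config.yml this would run in a step guarded by `when: on_fail`.
if [ "${1:-}" = "--run" ]; then
  notify_author "${2:-}" "Your build failed: ${CIRCLE_BUILD_URL}"
fi
```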
The issue ID is provided as a parameter. It's again a curl call, a GET request to the Jira API, with, of course, the information to authenticate against the Jira API and the ID of the issue, to request the correct issue and, in this case, all the comments attached to it.

This gets used in our project inside this script, where basically every time CircleCI builds a pull request, it posts a link to that build in the ticket whose ID is part of the title of the pull request. Basically, if you have a pull request titled "IC-501", it searches for a ticket named like that and posts a link to the build there, so you have a very straightforward connection between your tickets and your builds. What it's doing is, again, getting the pull request, getting the issue ID from the title, and sending a message, which happens in the last line, invoking a function that basically comments on the ticket in Jira. And this is that function. Here you don't actually get to see what we are posting; it's in that $text variable. What we actually do is not just post the link, which is the most important part; we also post some important information, like statistical tracking of how many times the build has failed, who is on top of reviewing the pull request, and stuff like that. So it can get very wide; it's not just about posting a link. And remember, you can also use the Jira fields: you can create custom fields on issues in Jira, so you can use those fields too.
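A sketch of that ticket-commenting step. The ticket-key pattern, the Jira host, and the credential names are assumptions, not the project's actual values.

```shell
#!/usr/bin/env bash
# Post the CircleCI build link as a comment on the Jira ticket named in the
# pull request title.
set -euo pipefail

# Pull a ticket key like "IC-501" out of a pull request title.
ticket_from_title() {
  echo "$1" | grep -oE '[A-Z]+-[0-9]+' | head -n1 || true
}

comment_on_ticket() {
  local ticket="$1" text="$2"
  # POST /rest/api/2/issue/{key}/comment adds a comment to the issue.
  curl -s -u "${JIRA_USER}:${JIRA_TOKEN}" \
    -X POST -H "Content-Type: application/json" \
    -d "{\"body\":\"${text}\"}" \
    "https://example.atlassian.net/rest/api/2/issue/${ticket}/comment"
}

if [ "${1:-}" = "--run" ]; then
  ticket=$(ticket_from_title "$PR_TITLE")
  if [ -n "$ticket" ]; then comment_on_ticket "$ticket" "Build: ${CIRCLE_BUILD_URL}"; fi
fi
```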
You don't necessarily have to use the comment thread for actually putting in this kind of information.

So this is another fun thing we do: we have a kind of automated changelog building included in our CircleCI workflow. What we do, basically, is that every time CircleCI builds, if it hasn't been done before, we make a commit from the virtual machine where CircleCI is building. That commit is basically adding the title of the pull request into a changelog.txt file, so at the end of a release we have a changelog.txt file with the titles of all the pull requests listed. You can change that to whatever you want; we just use the title, but you can include as much information as you want.

Of course, the first part of that is checking that the file doesn't already contain that line, because otherwise you would be in an infinite loop, adding that same line again and again. If it's not included, what we do is basically make the change, commit it, and push it, because your CircleCI build runs in a virtual machine that has a local repository just like the one you have on your computer, so committing in the virtual machine is basically the same thing as committing on your own machine. So that's the way that works, and it's pretty fun. We also have some tags to skip this step, because sometimes it's convenient not to include a particular pull request in the changelog; we include those tags in the description of the pull request, and they come along with the pull request object when we request it. And there are also tags to alter the content of the line we add to the changelog.txt file.
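The changelog step might look like this sketch. The duplicate check is the grep at the top; the file name, commit message, and variable names are illustrative, and the tag handling is omitted.

```shell
#!/usr/bin/env bash
# Append the pull request title to changelog.txt, but only once, then commit
# and push from the build VM's local clone.
set -euo pipefail

append_changelog() {
  local title="$1" file="$2"
  # -x matches the whole line, -F treats the title as a fixed string:
  # if it's already listed, do nothing, so the build never loops forever.
  if grep -qxF -- "$title" "$file" 2>/dev/null; then
    return 1
  fi
  echo "$title" >> "$file"
}

commit_changelog() {
  # The CI virtual machine has a normal local repository, so plain git works.
  git add changelog.txt
  git commit -m "Add pull request title to changelog [skip ci]"
  git push origin "$CIRCLE_BRANCH"
}

if [ "${1:-}" = "--run" ]; then
  if append_changelog "$PR_TITLE" changelog.txt; then commit_changelog; fi
fi
```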
And one last example. This opens up a little bit of our migration process, which is probably content for a totally different talk. We have a legacy site that can be subdivided into different subsites, so what we are doing in the migration process is migrating each of those subsites independently into a migration environment, and there the content moderators go to adapt that content for the new site. Once that process is completely done, we run another migration from that particular migration environment into the live environment, where it all comes together.

One particularity is that we are using an approach where we don't have the configuration files imported in every environment; we have them imported only in environments other than dev, test, and live, so that we don't get accidental direct migrations from legacy into the live environment, for example. So creating a migration environment is not just creating a multidev off the dev environment; it's making sure the right configuration files are imported into that migration environment.
So what we use automation for here is this: if we create a pull request and we include a tag in that pull request with the name of the migration environment, the name we want to give to that migration environment, then instead of treating the pull request as a normal pull request and creating a "pr-<number>" multidev, we create a different kind of multidev, with a different set of configuration files and a different name. So the next time that first script we were talking about runs, the one that removes the multidevs of closed pull requests, it doesn't remove the multidevs associated with migration environments. I don't have the code available for this, I couldn't wrap it up in time, but basically that's the idea. It's something that's still in progress, but I think it's going to work well.

So that's all the shiny stuff I have for you today. I think the bottom line of the presentation is sparking ideas and giving a sense of how much we can automate, and sometimes how much time and effort we can save on a daily basis by automating all the things. So it's only left for me to invite you to join us in the contribution sprints on Friday, fill out the survey, and reach out to me. Let me see how much time we have. Oh, we have a lot of time. Yeah, we're on time, actually. So we don't have a lot of time for questions now, but I'll be around, so if you want to reach out, reach out to me here, or go to the Four Kitchens booth, which is a fun place to go, by the way, or reach out to me online at those endpoints. So thank you very much.