Hi. Let's just get started. So I'm Derek, and this is Diego. We both work at Pivotal. I'm a software engineer; Diego is an engineering lead. And we're here today to talk about pipelines and how you should write your pipelines, or at least how you should try to. So our agenda for today: we'll do a brief introduction to continuous integration and continuous delivery. We'll talk about some things you need to have before you start doing this at home. We'll do a very brief description of what Concourse is. If you don't know what Concourse is, this won't be enough, but it will remind us of a few concepts. And then we're going to jump straight into a demo. We'll be writing a pipeline here today. We'll do a recap of the principles that we talk about while doing the demo, and then we'll open for questions. Cool. So, just to define the terms we'll use today. We talk a lot about continuous integration, and when we talk about it, we are talking about the practice of building and testing our application on every commit. So every time someone pushes code to your repository, something should happen in your pipelines. And CD, or continuous delivery, is an extension of that: it enables your team to release your software more quickly, in a more sustainable and automated way. To some extent, the goal of both CI and CD is to make releasing software boring. Because in this case, boring is good. You don't want surprises when you're pushing code to production. You want it to go smoothly, in a reliable process. Cool. So what do you need before you start doing CI and CD in your organization? You need to have an automated, or at least automatable, build. You need to be able to run your builds from the command line, compile your binaries, and deploy your software. You also need version control, especially if you plan to use Concourse or anything like it.
And more importantly, you also need your team to buy in. CI and CD are not only about tools for delivering software; if your team doesn't want to do it, there's no point in trying. All right. Why Concourse? We chose Concourse as our tool for continuous integration and continuous delivery because it treats your pipeline as a first-class resource. The pipeline configuration itself is a YAML file that you can commit and push to your version control. It also gives you reproducible and debuggable builds: each Concourse run happens in an isolated container, and given the same set of inputs it will, in general, produce the same set of outputs. It's also extensible. But you don't extend it through plugins or some configuration in your UI; you extend it through custom resources. And in a nutshell, this is what a Concourse pipeline looks like. This is a really simple one. On the left, we have the input resources. Resources are the things your pipeline uses that come from the outside world. The green box is a job. A job contains a set of tasks, and each task will consume these inputs and may or may not produce an output. So on the right side you can see that the binary is the output of the build job. Just a disclaimer for those who are already familiar with Concourse: in this talk, we are assuming there are people in the room who are not familiar with Concourse, so we'll sometimes stop to give some context and explain what the Concourse abstractions are. Yeah. And that's all we had for slides, so we're going to jump straight into some YAML. This can cause anxiety in some people, but stay firm. The pipeline we're building will have four jobs: build, deploy to staging, test, and release to production. And Diego will be driving it. So, cool. As Derek mentioned, we are going to code the pipeline.
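For readers following along without the slide, the anatomy just described (input resources on the left, a job containing tasks, an output resource on the right) might look roughly like this in pipeline YAML. The names, URL, and bucket here are illustrative, not the ones from the demo:

```yaml
resources:
- name: source-code            # input: comes from the outside world
  type: git
  source:
    uri: https://github.com/example/app.git
- name: binary                 # output: where the job publishes its result
  type: s3
  source:
    bucket: example-builds
    regexp: app-(.*).jar

jobs:
- name: build                  # the green box: a job containing tasks
  plan:
  - get: source-code           # each task consumes inputs...
  - task: compile
    file: ci/compile.yml       # hypothetical external task file
  - put: binary                # ...and may produce an output
    params:
      file: compiled/app-*.jar
```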
And the goal of this pipeline is to exemplify six best practices of continuous delivery. So we are going to build this pipeline and call out the best practices as we go. So this is our YAML file. I'm sure that you all love YAML, all of you in the room. Very excited about YAML. So we are going to create a Concourse pipeline. In Concourse, all the pipeline information is in a YAML file, and that YAML file is nice because you can commit it and push it to Git, so you can keep all the configuration for your pipeline. So what do we have here? We have a resource. It's already in here because there is no point in typing it out. We are just consuming this project from Git: it's the Spring Pet Clinic. It's a project that's already there, so we can just use it. And we already have a container here. This container is not a Concourse notion, not some special property; it's just here because we are going to use it. So, first thing, we are going to create a job to get started. In Concourse, we do this by creating a job that we're going to call build. And that's it, right? That's how we get started. So let's try to set this pipeline now. Do you want to explain the fly command line, and why I typed this? Yeah, you interact with Concourse through the command line; you cannot upload your pipeline using the UI. For some reason, Concourse decided that everything would have airplane-related names, so fly is the CLI. This set-pipeline command sets the pipeline. -l loads variables from a file; in this case, it will be loading the secrets. -p is the pipeline name. And the rest is the configuration of the pipeline. I hope you can all see my screen. So I'm going to fire this, and there you go. What's happening is that our command line is talking to a Concourse server that we have deployed somewhere. We are creating a pipeline, so nothing exists yet.
And what the CLI is saying is: since we are creating this pipeline, this is the diff between what we already have and what we are creating. So here we are just creating a new job. Ah, there's an error here. We forgot to get the resource. Yeah, we need to get the resource in here. There you go. Live coding, and we already made an error. That's going to happen a lot. Right. So what we are doing here now is saying: there is this job, and we want this job to consume this repo. That's the only thing we're doing right now; we are just instructing Concourse to do that. So let's try that again. There you go. So we created our pipeline. Let me go back here. Let me put this in here. There you go. Hopefully you can still see my screen. This is the pipeline we just created; it's called cf-summit. And there you go: this is just a job called build, with a Git resource reading code from GitHub, right? When we create a pipeline, the pipeline is paused by default. That's the way Concourse works, so we need to unpause it. There we go. So we are going to introduce our first best practice now. And our first best practice is: every time I push a change, I want that change to propagate through my pipeline, right? And that's not happening right now. Are there people familiar with Concourse who can tell me why that's not happening yet? Just pushing to the repo is not triggering the pipeline, right? That's correct. So I'm going to set this up again, because we created some files to make life easier for us. This is the task. Yes. Cool. So now, if I go back to my code, I'm going to jump between some files here just to avoid too much typing. There you go. So we stopped here, and I just added a new task. This new task is just going to compile the Spring Pet Clinic code; that's the only thing it's doing. So if we go back here, we could trigger this build.
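The build job at this stage, a get step plus a compile task, would look something like the sketch below. The container image and task layout are assumptions; Spring Pet Clinic ships the Maven wrapper, so the task just runs ./mvnw package:

```yaml
jobs:
- name: build
  plan:
  - get: source-code                  # the Spring Pet Clinic Git resource
  - task: package
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: maven}   # any image with a JDK would do
      inputs:
      - name: source-code
      run:
        path: sh
        dir: source-code              # run from inside the cloned repo
        args: [-ec, ./mvnw package]   # compiles and runs the unit tests
```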
I'm not going to do that right now, because we are going to implement our first best practice first, which is just to make sure that the build is triggered automatically, right? So there you go: we're triggering the build now. I'm going to set this pipeline again with my new file, and this is the only change we have right now. All right. So let's apply this change and go back here. And I am expecting this build to be triggered automatically, because it's the first time I'm setting this pipeline. And there you go. Since this build was never triggered before, and this resource triggers this build, the build is running now. It's just compiling some Java code and generating a package. Right. So now that we are compiling and building this code, we are going to deploy it somewhere, right? And we jump to our second best practice, which is: we need to deploy this code into another environment, which is not production yet, because maybe we're not ready for production right now. But we want this environment to be as close as possible to production, right? And that's the second best practice. And why is that? Well, to avoid problems like: I'm deploying to my staging environment using a MySQL database, but in production I have an Oracle database, and then when I deploy to production, things go wrong. Or I'm compiling things on a different OS, so I test this binary, and then when I deploy that binary to another environment, the production environment, some libraries are missing or other things go wrong, right? Another example could be the OS itself: let's say that in staging I'm using, I don't know, Linux, and then I deploy into production on Windows. It's not always possible to make them identical, but we should try as much as possible to keep all the environments as close as possible to production. Right.
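The first best practice is a one-line change in the pipeline: mark the get step with trigger: true, so every new commit detected on the resource kicks the job off automatically. A sketch, with the external task file name assumed:

```yaml
jobs:
- name: build
  plan:
  - get: source-code
    trigger: true          # new commits now start this job automatically
  - task: package
    file: ci/package.yml   # hypothetical external task file
```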
Just for those of you not familiar with Maven: we are not skipping unit tests; the package command that we're using to build also runs the unit tests. Right. We're conscious of time, so I'm not going to type the second job. Here we have a second job ready; I'm just going to explain it to you, and then we'll set the pipeline. What the second job is doing: it reads the code, and it has a task inside that compiles the code, the same thing the first job is doing. But once we have the binary, we deploy this code to staging, right? And in this case, we are using PWS; we are using Cloud Foundry, and we are just going to push the app to Cloud Foundry. The part where we want our staging environment to be as close as possible to production is somewhat guaranteed by Cloud Foundry in this case, right? Because we are deploying to the same Cloud Foundry deployment, and so things tend to be the same. Right, so let's try to set this pipeline. There you go. We have some changes here: we added a new job called deploy, and this job has two tasks, the package one and the deploy-to-staging one. There you go, it's there. Let's go back to our pipeline. I can go back in here. There you go: we have two jobs now, right? Right. So there is something still wrong with this pipeline, or at least not ideal, right? And we are going to introduce our third best practice. To do that, we are going to edit another file. What we want to call out here is that we want this deploy job not to run if the build job does not go well, right? Let's imagine that in the build job we are building something where the tests have failed; then we would be deploying a build with broken tests, right? So we want a linear sequence of steps here, in order to guarantee that what we are deploying has actually been tested. So let's go back in here. I will try to set this pipeline here; these should be the same.
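A sketch of that second job, again with assumed names and variables. It packages the code the same way build does (that duplication is deliberate at this point in the demo, and removed by a later best practice), then pushes to Cloud Foundry with the cf CLI:

```yaml
- name: deploy
  plan:
  - get: source-code
    trigger: true
  - task: package
    file: ci/package.yml             # same compile task as the build job
  - task: deploy-to-staging
    params:
      CF_API: ((cf-api))             # credentials come from the secrets file
      CF_USERNAME: ((cf-username))
      CF_PASSWORD: ((cf-password))
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: governmentpaas/cf-cli}  # any image with the cf CLI
      inputs:
      - name: source-code
      run:
        path: sh
        args:
        - -ec
        - |
          cf login -a "$CF_API" -u "$CF_USERNAME" -p "$CF_PASSWORD" -s staging
          cf push petclinic-staging -p source-code/target/*.jar
```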
Cool, cool stuff. So how do we do that? Well, if we go to our deploy job, we are consuming this source-code resource here, right? And when we come back to the pipeline, both jobs are consuming it. I would like the deploy job to consume the source code, the same commit SHA, that was used to build the code, right? So the only thing we need to do here is to use this passed property from Concourse and say: this resource here must be coming from build, right? So I'm going to set the pipeline again, and the only diff should be the line I just added. There you go. If we go back in here and refresh, there you go, we've introduced our third best practice, right? That means that if the first job fails, we're not going to trigger the second job. Yeah, cool. So, moving along to introduce our fourth best practice: looking at this pipeline, we can see that in the build phase we are compiling the project, creating a package, creating a binary, and in the deploy job we are doing the same thing again, right? So what we need to do here is avoid building the binary more than once. And that's because we want to make sure that the binary that was built by the build job is the same binary that's going to be deployed by the other jobs in the pipeline. So let's make that change. The goal is to remove this code here that's building the binary again. We only need this code in the task that we're using in the build step, right? And now, what we are doing here is copying the jar that we're building into this directory. This directory is going to be created by this outputs property in the task, right? Also, when we have inputs and outputs, the working directory does not tend to be what you expect, as far as I can tell from my experience, so we need to cd into this source-code directory. This is where we are cloning our GitHub repo. We compile it there, and we copy the jar into this release-jar output.
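Two pipeline fragments capture what was just described: a passed constraint so deploy only consumes commits that went through build, and an outputs section so the build task copies the jar into a directory Concourse creates for it. Names are illustrative:

```yaml
# In the deploy job: only take versions of source-code that passed build.
- get: source-code
  trigger: true
  passed: [build]

# In the build job's task config: declare an output and copy the jar into it.
  config:
    inputs:
    - name: source-code
    outputs:
    - name: release-jar      # Concourse creates this directory for the task
    run:
      path: sh
      args:
      - -ec
      - |
        cd source-code       # the working directory is not the input directory
        ./mvnw package
        cp target/*.jar ../release-jar/
```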
So now this job is going to output this jar. The problem is that Concourse doesn't have a concept of sharing task outputs between jobs. So in order to achieve that, we need to move the result of the build, the jar, from the local container to an external place, and we're going to do that by putting it into an S3 bucket. There you go. So, as Derek mentioned, we have a resource that we need to add here. Then, in this job, we are going to put to this compiled-jar resource, and we need to pass some params. I'm going to remove this line, and the param here is going to be the file that's going to be in the release-jar directory. There you go. I'm removing some whitespace here. Right. So what this job is doing is compiling, creating a binary, and making this binary available through this resource. And now we need to use this resource in the second job, because we're going to deploy it. So let's grab that compiled jar, and let's use it, all the way through, in here. Well, we don't need this one anymore, right? And there you go. Am I missing anything? I don't think so. I need to change the name. I need to change the name. There you go. The script as well. You can also extract this and put it into an external file; you don't need to type all of your task configurations inside the pipeline. Typing it inline is just simpler and requires less file switching; that's why we're doing it. There you go. So if we look at the diff, we are removing that package task there, and we are creating some logic in our first job, build, in order to export a compiled binary. And we are also using that compiled binary in our second job. So there you go. If we come back in here, we have two compiled jars. Let's go back to our config and try to spot what went wrong. We need to say that this one is actually being passed by the build job. With the trigger. And we need to add the trigger as well. Good call. The left one.
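The S3 resource and the put step might look like the following sketch; the bucket, regexp, and credential variable names are assumptions:

```yaml
resources:
- name: compiled-jar
  type: s3
  source:
    bucket: ((s3-bucket))
    regexp: release-(.*).jar            # versions are extracted from the filename
    access_key_id: ((aws-access-key-id))
    secret_access_key: ((aws-secret-access-key))

jobs:
- name: build
  plan:
  - get: source-code
    trigger: true
  - task: package
    file: ci/package.yml                # produces the release-jar output
  - put: compiled-jar
    params:
      file: release-jar/release-*.jar   # upload the jar the task just built
```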
So, as the build job will actually build the jar, we don't need to trigger the deploy job on source-code changes anymore, but rather on new versions of the binary being available in the bucket. There you go. So now this jar is being compiled here, and the build job passes the compiled jar along, so we make sure that we're not building the binary twice. So we've implemented this best practice as well. The next one is going to be adding a smoke test to this deployment; this is our fifth best practice. Every time we deploy to a new environment, we want to make sure that the deployment was successful, that everything went well. In order to do that, we are going to add what we call a smoke test, which can be a very lightweight test. In our case, we are going to perform a curl call. So I'm going to set the pipeline to the next version. I have some changes here, but these are small changes, just dealing with the paths and the directories. There you go. Coming back here, we pick the next one. So this is what we've got so far. In our deploy job, we have this deploy-to-staging task, and as I mentioned, we want to add a task here that we're going to call smoke-test. I also like using verbs in task and job names, because they specify an action, so I'm going to do the same thing here. I need to set the config. Actually, I need to specify a container image to use here. It doesn't matter which one; I just need curl, so I'm going to use the previous one. And I'm just going to run curl here, with this -f option, because I want this command to fail if I cannot reach the URL. Thank you for catching the indentation. That's bad programming, but it's OK. Right. And now I need to specify the URL for my app. So that's going to be... I don't remember the variable, so I need to check my secrets file, because I have everything in here. I need to use the CF domain and the app domain. Sorry? I'm going to use the CF domain. Yeah, I have an example here.
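The smoke-test task, roughly as typed in the demo. curl -f makes curl exit non-zero on an HTTP error status, which fails the task; the image and the variable names for the URL are assumptions:

```yaml
  - task: smoke-test
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: curlimages/curl}   # any image with curl works
      run:
        path: sh
        args:
        - -ec
        - curl -f "https://petclinic-staging.((cf-domain))"
```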
I'm just going to copy this down. There you go. This is the right URL. Just getting rid of some spaces here. There you go. Changing from prod to staging. All right, these are very, very basic smoke tests; I hope you have a better suite of smoke tests. Some indentation problems here. It's the joy of YAML, right? There you go. Going back to our pipeline, we can trigger it once again, and hopefully our smoke test will run. We can trigger it from the beginning, actually. All right. So the last principle we are going to talk about is deployment to production. And when we deploy to production, or to other environments, we want to do it using the same scripts, right? The same way. Sometimes we have different scripts and different tools and different procedures to deploy to environments that are not production, and then we use other stuff to deploy to production, right? This is also a source of problems when it comes to pipelines. So we are going to use the same script. In this case, just to keep things simple, the script we are using is actually these few lines of code here, right? In a more elaborate project, we would maybe have several scripts. The goal here is to use the same script to deploy to the various environments, and we just need to pass in the variables, right? We just need to pass the configuration. The configuration is going to be different, but the script, and the way we deploy, is going to be the same, right? So I am going to set the next version, number six. Some change in the naming; that's fine. There you go. So what I'm going to do is copy everything here, because I'm lazy, and I'm going to call this one release, right? So we are creating a new job, and this job is called release, right? Yeah. Good spot. So this job is coming from deploy: the source code is coming from deploy, and the compiled jar is also coming from deploy. So I'm sure that I'm passing all my artifacts through the pipeline.
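The release job is essentially deploy with different configuration: the same task file (or inline script), different params. A sketch, with hypothetical file and param names:

```yaml
- name: release
  plan:
  - get: compiled-jar
    trigger: true
    passed: [deploy]             # only jars that survived staging
  - task: deploy-to-production
    file: ci/deploy.yml          # the exact same task used for staging
    params:
      CF_SPACE: production       # only the configuration changes...
      APP_NAME: petclinic        # ...never the deployment script itself
  - task: smoke-test
    file: ci/smoke-test.yml      # same smoke suite, now against production
```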
The question was: why do we need the source code as an input? We don't. So, yeah, we decided to skip one important job in this presentation because of time: the test job. After actually pushing to staging, you need to run some acceptance tests in your staging environment, to test the load and things like that. And in this particular case, the test suite lives in the source code; that's why we were passing it through. But yeah, for this example, we don't need it. There you go. So I just added the release job. I'm going to set my pipeline and get back here, and now we have the release job. And if you come back to Vim, you can see we are using the same script. Right now it's hard-coded in the pipeline; this is just an example. In real life, this would be a script outside, in some GitHub repo. And the things that are different here are the CF space, for example, and the app name. So we just change the configuration, not how we deploy. Yeah, now we're deploying to production. Hopefully that will go through. What else are we missing? Is that it? All right, so we got through it. I don't know how much time we have, whether we can show the example with the test job. Right, let me just set the final pipeline so you can see the example. You show the one that's already set. Do we have one that is already set? Not in this space. That's fine, I'm going to set one. Right. Oh, we don't have one here. No, we don't have the one with the test. Actually, in the full pipeline, we have another box between deploy and release, and that box is just running acceptance tests, performance tests, the other tests that we have. So all the tests that run inside build are sort of unit tests related to the code, not black-box tests, and the tests that we would have between deploy and release would be black-box tests. So those tests would be using an environment, maybe the staging environment we are using in deploy.
We are using the deploy job for that. So maybe we should come back to the presentation. There you go. All right. So, just to recap the things we saw today, really quickly. Changes should propagate through the pipeline immediately; that means you don't run your pipelines every week, or every night, or every couple of hours, but on every commit. And you do that in Concourse by using trigger: true. Deploy into a production-like environment. With CF that's a little bit easier, but I know that most of us have more than one foundation, so make sure that you have at least a similar foundation. You also want to stop on any failure; otherwise it's not even a pipeline, it's just a bunch of scripts that run on a schedule. You want to build your binaries only once, because you want to ensure that the whole pipeline is testing the exact same code, and not that someone is compiling the binaries somewhere with a different flag or a different config, and that's why it was passing. You want to run smoke tests on all of your deployments. One thing we may not have called out is that we are also running smoke tests in our production environment; it's exactly the same suite, which is a curl. But you want to test all of your deployments. And, if you can, deploy the same way to every environment. That ensures that you're not testing your deployment script only when you actually want to release your software, but testing it every day, every time someone checks in. And with that, we open for questions. Any questions? There's one here, one there. I guess this is just outside the scope of this talk, but: normally when you push, maybe you don't want to trigger every time, because you may have a lot of pushes. And you may want to have some other branches. So how is it usually organized, with development branches and then the actual pipeline? So, generally, we use trunk-based development, and we try to avoid branches as much as possible.
And all the commits go to the same branch, the master branch. And then, yes, this can happen: we can have a lot of commits in a day triggering the pipeline. But if you look at your pipeline, some jobs are slower than others. So if you have fast tests and your build job is fast, even though the build job is triggered first, the build job then triggers the next one, and let's say the next one is a bit slower. At some point, your commits will batch up at the bottleneck of your pipeline. The pipeline is not going to run every step for every single commit; maybe the second job, which takes a bit longer, will run against a batch of five commits that went through your first job. Does it work well when there are a lot of developers working in parallel on different features of the same application? Yeah, sometimes it does, sometimes it doesn't. One thing you could do, if you want to use branches, is to set up multiple pipelines. But as soon as you start to diverge, you cannot guarantee that you're running the same things; that's why we try not to do that. What we usually do is use more lightweight tests, and if people are submitting pull requests, we run those tests, and that gives us a first indication: OK, we can try to merge this. And maybe we have some heavier tests, and we merge and then run them. When you switch back to your pipeline, can you do that? What we normally have is a user acceptance test in between what you call deploy and the production deploy. So you run a first bunch of tests on the platform in deploy, then you say: OK, this is fine for someone to go over manually and do user testing, manual testing. And then at a certain point you say: OK, this jar that we built before is fine to release.
So how do you then trigger the release stage with this pre-built jar that you take from some artifact store, without going through the whole pipeline again? So yeah, it is possible to not trigger the release here. If you want to have manual tests at some point, say at this point here, you can have a box that is not triggered automatically, and you can run that box with a specific version. The way Concourse works, the resources flow through your pipeline, and every job will run with the latest available version of that particular artifact. If you want to control which one you get, you can use the resource page and pin it there. But we can talk about that afterwards; I think we are a little bit out of time. Yeah, so in this example, very quickly: we are using S3 buckets to store the binary and pass it along. But there are other Concourse resources that talk to more elaborate artifact repositories, and these resources keep track of versions. If you upload a binary, the resource records a version, and when that version flows to another job, that job will ask the resource to download the artifact from the repository at that version. So that could be another example. The pipeline is available on this GitHub page, and here are the resources we used for this talk, so give it a go. We went for the branching approach, with pipelines for all branches. And we usually have pull requests with the same quality gates as master, so that you have the same test suite on a pull request, and you can ensure that master won't break. Because otherwise you can have a red master for a while, because a commit brings a breakage and you only realize it in staging, which takes one or two hours, and then other devs can't push their code. Can you do that with Concourse as well? There is a GitHub pull request resource. I've used it with mixed success in the past.
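For the manual-gate question: leaving trigger: true off a get step means the job only runs when a human triggers it, and at that point Concourse picks the latest version satisfying the passed constraints, or a version pinned on the resource page. A sketch, where the uat job name is hypothetical:

```yaml
- name: release
  plan:
  - get: compiled-jar
    passed: [uat]         # no trigger: true, so this job is started by hand
  - task: deploy-to-production
    file: ci/deploy.yml
```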
Concourse is not really ready to deal with branches and pull requests, but there is an effort from the community to make that a little bit easier. There is a resource; it's useful, give it a go. My question is more regarding the branching. I understood that you recommend having one branch, but how do you deal with hot fixes, if you have just a small correction to be released to the customers, but the other pieces of the software are not planned for delivery? Yeah, the way we deal with it at Pivotal, with some of our products, is that when we release a version, we branch out, and we leave a branch and a pipeline running from that branch. And then, if you need a hot fix, you push to the branch, that runs the pipeline from that branch, and then you can merge it back. That means you are working more with a feature-toggling process, right? The toggled feature has to be delivered, and the rest should be toggled off. Is that safe? Is that safe? Well, it depends on what you want to do, but I think the takeaway for me is: try to avoid branches as much as possible. We also always try to do some sort of trunk-based development. If that's not possible, we try to leverage pull requests, plug some steps from our pipeline into pull requests, and try to avoid situations where things can break down the road in the pipeline. Because in our pipelines, at least in one of our teams' pipelines, the further the artifact goes, the longer the builds take, because they're doing more elaborate things. So we try to avoid having a bottleneck down the road. There are many ways to do that; it depends on the limitations we have at hand. I think we're a bit out of time, so Derek and I are available to have a chat with you, if you want to know more, if you have more questions, if you want to go deeper. Thank you very much.