Hello, everyone. I actually didn't expect so many of you for this kind of session. Together with Alexander, we are going to show you some theory about deployment procedures, and the other half of the presentation is about the practice. So we start with the theory, then some practice. If at some point you don't understand what we are saying, just raise your hand and say, "I don't understand," and we'll try to help you.

Let me introduce myself. My name is Alex Eikashenko, and I work at IDEAX. I've been there quite a long time: managing projects, doing technical consultancy, lots of stuff. I was a developer before; now I've moved more into project management. The second speaker here is Alexander. He's our technical team lead, doing a lot of DevOps for us, leading Drupal teams, with quite a lot of experience on complicated, complex projects. And just before we start: what we are showing you is the actual thing we use for our biggest clients. This is not just theory; it works.

You know what software delivery is, right? Everyone delivers some software at some point. It could be a website; it could be anything. Most of the time there are four steps. You have the build phase, where your coders work on it, do some magic, and the website works. But then there is another step: you have to deploy it, then you have to test it, and then release. You cannot avoid these steps; this is basic stuff. Whenever you build something, you have to deploy it somewhere, someone has to test it, and at some point it gets released.

Here is the thing: some people think that spending a lot of time on this DevOps stuff, on standardized builds, is a waste of time. And in the beginning it probably is, if you are doing a small project; there is no need to spend lots of infrastructure and money on it. But in the long term this will really help you, and it will help you standardize the process in your company.

The anti-patterns of deployment and software release start with manual deploys. That's awful. With a manual deploy there can always be human error; it can be anything; it will not work; you don't know what to do with it; there are no logs; you don't know how to repeat it properly. That's a big problem. A long release cycle is a big problem too, because it creates a lot of issues: you will have regressions and all sorts of things that cause trouble for you and for the business, because in the end you are delivering for the business. And we hate manual configuration of environments. This is something everyone does all the time: you start a project, you start configuring by hand, and you waste time on it. And this is what happens when the one guy who knows how to deploy stuff is sick, or gone: you don't know what to do; you're stuck. That's a big problem. You can be the best developer in the world and still have no idea how to deploy what you actually built. Friday deployments: everyone knows what that is. All the clients say, "OK, I want to deploy on Friday night because there will be fewer people on the website." You should never do that. And this is what the team thinks when there is only one guy responsible for the deployment: for them, that's hell on earth. We need to test, and test it again.
And we cannot repeat it normally, because each time the deployment is different.

What do we think the deployment and release process should be? It must be frequent: you should be able to release your product faster, every day, every hour; it depends on your skills. It must be automated, so anyone can do it. You don't have to wait for someone; you don't have to fetch the one specific guy in the company who knows how the projects are deployed. I have seen lots of examples like that: a small company, lots of good developers, and only one guy, the architecture guy, who knows how to deploy, while everyone else is helpless. That's why we think it should be automated. The deployment and release process should be easily repeatable and very predictable: at every step of the release you should know what's happening. Of course, the risk must be low. At some point you will probably even end up deploying on Fridays, because you know the process works; it's probably the business that will block that, but you will be able to do it. It must be cheap, of course: you should not spend a lot of money on deployment. It should be transparent: your developers should not really have to understand what's happening underneath, but they know that whatever they do can be easily deployed. And it must be fast, because otherwise there is no point. And, as I said, it must be predictable.

So again, these are the main principles of delivery: it must be repeatable and reliable, and it must be automated. You cannot automate everything, of course; at some point you will have one person who says, "OK, I want to deploy something." But that should be his whole process: I have a button, I click Deploy, and everything happens. There is no need for anyone else to do anything. It can be a business guy who just arrived and says, "I got a notification from my developer that the product is ready, so I click Deploy," and off it goes.

When we say "done," for us the definition of done is: the project is released. It can be anything, but the main point is that it's released, everyone has access to it, the business is happy, and eventually everyone is happy. And the main thing is that everyone is responsible. Each person on the team (you will see an example at the end) should be able to see what's happening while a deployment is running. Each person on the team, the QA guy who checks the website, the developer, they should all see that everything is transparent: when I deliver something, this is what happens, and it's automated, so I can see the errors and all the problems that occur.

Now some theory. I'm quite sure most of you know the terminology, but let's start from it. Continuous integration: what's that? It's simply the practice of continuously integrating code from different developers into the mainline. We have seven developers working on one project; they are continuously changing things and merging into the main branch, say, several times a day. Continuous delivery: lots of people get confused by the different terms. Continuous delivery is the strategy that enables your organization to deliver new features to users as fast as possible. It means you are delivering all the time.
Every time you deliver a small feature, a small task in Redmine or whatever you're using, you know it is deployed; you can go and test it. And your development environment is always stable. That's the main point; lots of issues exist exactly because of this: developer environments are always broken, something is wrong, you cannot test, you have to create another environment just to test your code. That's a waste of time. So for us continuous delivery means that every successful build on your master branch should be releasable. When you put something into your master branch, it is stable code, which means you can deploy it in the end. And we test the code: we only merge feature branches when they are tested.

The third term we are going to use is continuous deployment. This is the next step after continuous delivery: continuous delivery straight to production, and you can actually automate it. You can set up your process so that when something is ready for production and the automated tests pass, it is delivered automatically. Take a Drupal update, for example: there is a new Drupal release, the system detects that a new version with security patches exists, it gets merged to your main branch and deployed, all automatically. This is totally possible; there is a sketch of the idea below.

Now to the subject: what's a pipeline? It's a way to break your builds up into various stages. You know this, right? When you develop and deploy a website, you are changing some code, and at the end you need to run some Drush commands: update the database, revert features, or run a configuration import on Drupal 8. We do this all the time, so it must be automated.

So what's the goal? A pipeline gives you a nice overview of what's happening, and each developer knows exactly what's happening; you will see the example. There are nice integrations between GitLab and Jenkins: when a developer commits code, he gets the full process right in his merge request. He knows the code is deployed, he knows his Drush commands were run, he knows the output of those commands; he knows everything and sees the impact directly. It gives you feedback quickly: at some point you have a long list of Drush commands to run when your site is deployed, and normally you don't know what's happening, but with pipelines you see the output directly. You can even stop it and restart it, because that's one of the goals of pipelines: each stage of the deployment is repeatable and resilient to restarts, and you can just restart if you want. Pipelines enable the team to deploy and release any version of the software to any environment. It's up to the developer to decide; he knows best what he's doing. It's not the deployment guy who knows better than the developer; the developer must decide, and if he wants, he can even create an environment just for his feature, for his branch.

So why do we need pipelines? Any change on production can create problems. Hopefully your code is covered by tests; you probably have some Behat tests or similar somewhere, and they run all the time as you update the code. And the main point is that pipelines enable collaboration between the different groups working on the project.
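To make the automated-update idea from a moment ago concrete, here is a rough sketch of a nightly Jenkins job. The individual commands are real, but the flow and names are illustrative, not our exact setup:

    // runs every night; checks for Drupal package updates and ships them if tests pass
    properties([pipelineTriggers([cron('H 4 * * *')])])

    node {
        checkout scm
        // "composer outdated --strict" exits non-zero when updates are available
        def hasUpdates = sh(returnStatus: true,
                            script: 'composer outdated "drupal/*" --direct --strict') != 0
        if (hasUpdates) {
            stage('Update') { sh 'composer update "drupal/*" --with-dependencies' }
            stage('Test')   { sh 'vendor/bin/behat' }   // only ship when this passes
            stage('Ship')   { sh 'git commit -am "Automated Drupal update" && git push origin master' }
        }
    }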
About those groups: you probably have different teams, the QA team, the development team, the sysadmins team. And the sysadmins end up doing nothing here in the end, because everything is automated; it's up to the developer, to the team, to decide what happens. And you need visibility. You don't want to just press a button, wait for something to happen, and be unable to see when something breaks. Pipelines give you that visibility.

So what are pipelines in Jenkins? It's quite a new approach; it was released recently. You can break one Jenkins job up into different stages and visually see what's happening. It's a plugin called Pipeline, and you should not confuse it with the Acquia Pipelines product; this is the actual Jenkins plugin. It's based on a DSL, a specific language to describe your pipeline, which I'll show you on the next slide. It's very simple, so you don't have to really understand what's behind it; you can safely state what you need to do, in what order, and what the steps are to deploy your website. The DSL is based on Groovy; that's why we say you should probably know more than just Drupal to understand what's happening, because you need to understand some Groovy. The documentation is fine, so you will see what's happening.

The pipeline can survive restarts, which is very nice: something breaks, you restart the job, and it restarts from that stage, not from the beginning. It can wait, too: every time you deliver something, the client can say, "Stop, there is a problem, there is a business issue," and the pipeline will stop and wait. That just works fine. And it can restart from saved checkpoints, as I said, though the problem is that the checkpoint feature is only available in the commercial Jenkins offering. It means your deployment process stopped at some point, and you can start it again from that checkpoint only. That's very important, because sometimes your deployment process is long: you are importing something, like a huge file, during the deployment, and if you have a big Drupal 7 website with lots of features, the features revert takes ages. So you can restart it. And it's visualized; that's one of the main points, in fact: everyone can see what's happening and when, and what the output is if there are any errors.

Now the anatomy of a pipeline. You have a step, which is a single task, part of the sequence; you will see the example. Then there is a node, which contains multiple steps inside. And there is a stage, and the stage is where you define your steps. Here is the example; this is pipeline code in the DSL. You have a stage "Hello," you have a stage "Build," and you can run some commands: echo "Hello, DrupalCon" is just a single step. And the pipeline log is exactly where you see whether something is going well or going wrong.

The pipeline in Jenkins can live in a Jenkinsfile, which means you can version it, and there is no need to point and click in Jenkins all the time to create multiple jobs; it just happens, because everything is already described in the Jenkinsfile. And you can create a multibranch pipeline, which means that if you have different environments and want to deploy some features to another environment, you can just go multibranch.
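Reconstructed, that slide example looks roughly like this as a Jenkinsfile, in scripted syntax:

    node {                              // allocates an executor and a workspace
        stage('Hello') {
            echo 'Hello, DrupalCon'     // a single step
        }
        stage('Build') {
            sh 'composer install'       // another step; your build commands go here
        }
    }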
So now to Drupal, because we are at DrupalCon. You know that the best practice for Drupal 8 now is the Composer workflow: it's easy to apply patches, you don't need to commit third-party code, the vendor code, into source control, and it's very easy to manage dependencies. This is the way now in Drupal 8.

So what are the requirements? You shouldn't commit the vendor directory: there is no sense in it, you can get it elsewhere, so why commit it to your repository? You must focus on your own code; there is no need to store packages that come from somewhere else. You never run composer update on production, because it will create an issue: you will have different code in different environments. composer update will, at some point, update a library, and in the end things will not work. And likewise, you should not run composer install on prod. Some people ask why. It fails sometimes: you get timeouts, different issues. And you need to test the code, which means you cannot just deploy the result of composer install, because you have to test it first. And it can take a long time; everyone knows composer install takes a while on large projects. There are some hacks now to parallelize fetching the libraries, but it's still slow.

So how do you deliver the code to production without committing this vendor code? Everyone has a different approach, and clients have different hostings. If you're working with Acquia, for example, or a similar Git-based host, your target is a Git repository that you push your code to. So in the end you have to push your vendor directory into it, because that's the only way to deliver the actual code to production via Git; you cannot run composer install on Acquia.

So what are the solutions? First option: you take your code, build it on your own system, create an archive of the code, and deploy it using Capistrano. Some intermediate procedure gets the code, builds the website, and then pushes it to any target. But that will not work for Acquia, for example, because you have to deliver through a Git repo. The second option, not forced but recommended by Moshe Weitzman: if you work with Acquia or any other hosting company that uses Git to deliver the code, you create a continuous integration system with your own repositories. You push your code to those repositories; a process on Jenkins sees that there is an update, runs composer install, packages the code, and pushes it to the target environment. This is the option we will show you today.
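A skeleton of that second option, heavily simplified (Docman, which comes up next, does much more on top of this: patches, cleanup, tagging; the "acquia" remote name is illustrative):

    node {
        stage('Build') {
            checkout scm                 // the source repo, with no vendor/ committed
            sh 'composer install --no-dev --optimize-autoloader'
        }
        stage('Publish') {
            // push the fully built tree, vendor/ included, to the hosting Git remote
            sh '''
                git checkout -b build-${BUILD_NUMBER}
                git add -f vendor
                git commit -m "Build ${BUILD_NUMBER}"
                git push acquia HEAD:master
            '''
        }
    }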
So let's stop with the theory. Here are the tools we are using right now. Composer, of course. Docman, which is one of the IDEAX development tools; it's open source, you can find it on the IDEAX GitHub, and it was actually developed by Alexander, so if you have any questions you can ask him directly. DruFlow, again custom in-house development, also open sourced; you can take it and use it if you want. It's about deployment procedures, Drupal procedures. Of course Drush; you cannot live without it. And we are big fans of Behat; it works quite well for us, and we try to use it all the time. So you can imagine that one of the stages in the pipeline runs the Behat tests, and you decide what to do with the results. On our end, we do not block the deployment procedure, because we do not have full coverage of all the features. But you can set up your system so that if any test fails, no deployment happens, and the pipeline waits, or even stops.

So, the Pipeline plugin for Jenkins, and we have pipeline libraries to keep it DRY. DRY means "don't repeat yourself." If you try to set up plain pipelines in Jenkins, you will see that at some point you repeat yourself. You say: on my development environment I run these Drush commands, and they are always the same for the development, staging, production, and any other environment, and with plain pipelines you have to repeat this code; if you want to change it, you need to change it everywhere. So we have libraries that help us keep it DRY (they are also available on GitHub), so we never repeat ourselves: in the pipeline you only say, "for the staging environment, I want to run the Drush commands," and the library is the only place where we change them. Now I think we can switch to Alexander; he will explain it better, with the details, and show a small demo of what's actually happening. Thank you.

Hello, everybody. My name is Alexander Tolstikov. I have been working as a team lead at IDEAX for quite a long time, and I have seen so many different approaches to deploys over these years. On one project I even saw the developer responsible for deployment do a database diff against production: he exported the structure of each database, diffed them, and then applied a procedure to push the structural differences to the production database. It was a real mess. Then I was put in his place, and I was very unhappy, because each time it was hell. That was Drupal 6. I started wondering how we could do better, and around that time Drupal 7 appeared, with the very good Features module. With Features you could put every piece of configuration your site has into code, and if you respect this process, you can always be sure you don't need any manual pointing and clicking in the Drupal back office, I mean the admin interface, because you have everything in code. If you respect this process, you have no issues and you can do your deployments without problems. But as soon as we started using this approach, I saw that many developers were unhappy with it. They were used to doing a deploy and then going to production and pointing and clicking to make sure their changes were applied: creating some variables, maybe changing some views, et cetera. That was not OK for me, and I tried to make sure that everything goes through Features. And to use Features well, you should make sure the features are always in their default state, if you know what I'm talking about.
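In practice, keeping that discipline on Drupal 7 comes down to a few Drush commands after every deploy; roughly like this (the @mysite.stage alias name is made up):

    stage('Post-deploy') {
        // bring the site in line with the code: DB updates, revert features, clear caches
        sh '''
            drush @mysite.stage updb -y
            drush @mysite.stage features-revert-all -y
            drush @mysite.stage cc all
        '''
    }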
OK, let's return to our pipelines and Jenkins. I want to show you a real-life example of the setup we use on several projects. We have quite big projects with lots of sub-brands and lots of Drupal sites, probably hundreds of sites, and all the processes for them should be automated.

The essential tools we use: Composer, and I already mentioned the issues Composer can bring to deployment. Then Docman. Docman is a tool that, as the next slide shows, you give one or more source repos and a config; it does its job, and as output you get the result in the target repo. That's what Alexey was talking about: you don't commit your vendor code into the repository. Docman builds your repository; it sees that it needs to run composer install, assembles all the code, and pushes it into the target repo, and you can deploy that target repo on Acquia, for example, or any other hosting provider.

The next tool we use is named DruFlow; it's about Drupal workflow. This tool knows how to deal with Drupal, it knows how to deal with Drush, and it knows how to deal with the Acquia Cloud integrations. So you can, for example, copy sites from one environment to another, or even between different docroots on Acquia, and more. It's all about Drupal. Of course we use Drush, but to use Drush efficiently we need Drush aliases, so we can easily access any site or multi-site on any of our environments. We created a tool for this as well. I didn't want to put it in here, because it's already a lot of tools, but if anybody is interested I can show it after the session; it's very interesting and I'm really proud of it. Just remember: one of its goals is to create Drush aliases automatically, so you don't need to write them for every site and every project you have. Of course we use Behat, and we use Pipeline and pipeline libraries; Alexey already described all that.

So, Docman. You can see the address of the project; it's open source. We've used it for about two years now, and it works well. For you it can be just a black box: you don't need to know what's inside; you just give it a proper config and forget about it.

DruFlow I already described: it knows how to deal with Drupal and the Acquia Cloud API. It knows how to work with multi-site, because on Acquia, for example, you can use multi-site, but you don't get every tool you need to handle multi-site properly. For example, you can't copy files from one multi-site to another multi-site or another environment; you can only take the files for all the multi-sites in one big bunch and move them all together. DruFlow knows how to deal with this situation. It can also create databases through the Acquia interface and create aliases and domain names automatically, so if you have a project with hundreds of subsites, you don't need to do manual configuration in the Acquia interface; it happens automatically, based on your multi-site directory structure.

It can also do automatic Drupal updates. The client just receives a notification when a Drupal update has arrived and it is possible to update. He hits a button, the update is deployed to dev, and he receives a notification that the update was deployed. It gets tested; the client can test it himself, make sure everything works, and then just hit the next button: deploy to stage, then deploy to production. That's all. He doesn't need to go into Git and make diffs of what was changed and what was not; it's all done automatically. It can also do Capistrano deploys, and if you want to extend it, it has a pluggable structure: if you need some specific type of deploy, FTP deploys, SSH deploys, you name it, you can create a plugin that will do the job. This software is Groovy-based, like the Jenkins Pipeline plugin.
So you don't need to learn a new language for each task; you can just invest some time for your operations team to learn some Groovy. That is enough to use it; you don't need to be a Groovy expert. And when we developed this software, we tried to respect some known principles from SOLID, at least the first two. The single responsibility principle, which I would describe simply as: do one thing and do it well. Don't try to do many things in one class, in one part of the code. Then, if you need to change some functionality, you change it in only one place, not in several parts of your code. And the open-closed principle: your class, your part of the code, should be closed for modification but open for extension. So if you need changes, you can easily extend that part and make just the changes you want.

Let's talk about the sample workflow we want to show you in this session. It's the quite usual process you go through during delivery. You commit code; then, for example with GitLab, a webhook triggers the Jenkins pipeline execution. And I want several stages in that execution. Stages, as Alexey already explained, are just a way to break up the steps of your pipeline. First I want a config stage. Then I want a build: my repository, or several repositories, get built; composer install runs (install, not update, sorry), and the tool builds your code together with the vendor code and creates a tag in the Git repository. After that you prepare your staging environment. What do I mean by prepare? You want to test your code before it gets to production, and to test it you need to make sure this environment looks like production. So in the prepare stage I want to copy the database from production to staging, to make sure everything is the same. The next step is the stage deploy, where the code is actually deployed on staging, and then I want to test it. If everything is OK, I do some preparation for prod: on prod I want to back up my database, so that in case of failure I can roll back, and the whole process takes as little time as possible if there are any issues. After that I can do the prod deploy, do the prod testing, and then finalize. By finalize I mean producing some reports, test coverage and so on, and then giving feedback to GitLab.

So let's look at how this sample pipeline can look. Oh, something is off with the resolution, but OK. I hope everyone can see this font; it's not too small for you? OK. We have several steps. A node step: the node step just gives us a workspace and an executor from Jenkins, from the master Jenkins instance or some slave instance. The config stage I will not show, because it's too much to show, and in the build stage I skip some code too. And then off we go: you can see the different tools used here, Docman, DruFlow, DruFlow again, Behat, Behat again, DruFlow, DruFlow, et cetera. And this sample pipeline definitely has issues: lots of almost-the-same code. That's not OK; it's not DRY. Everybody knows DRY, don't repeat yourself: if you are copy-pasting code, you are doing it wrong, because you should never copy-paste your code. So that's an issue.
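Heavily abridged, the shape of that sample pipeline is something like this; the docman and druflo invocations are made-up shorthand, and the repetition is left in on purpose:

    node {
        stage('Config')        { echo 'read the project config' }
        stage('Build')         { sh 'docman build docroot.json' }
        stage('Stage prepare') { sh 'druflo copy-db prod stage' }
        stage('Stage deploy')  { sh 'druflo deploy stage' }
        stage('Stage tests')   { sh 'vendor/bin/behat --profile=stage --tags=@smoke' }
        stage('Prod prepare')  { sh 'druflo backup-db prod' }
        stage('Prod deploy')   { sh 'druflo deploy prod' }
        stage('Prod tests')    { sh 'vendor/bin/behat --profile=prod --tags=@smoke' }
        stage('Finalize')      { echo 'reports and GitLab feedback' }
    }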
If I need to change, for example, how I call my Behat testing, I need to change it at least four times in this one small pipeline. If the pipeline is big, or if I have a hundred pipelines, I need to change it a hundred times. It's not OK. The logic is complicated: even an experienced Drupal developer will look at this code and say, "I can't change anything, because I don't understand how it works." It's just not possible for someone to come in, see this code, and say, "Oh, I can change it easily." It will not work. And you can end up with many jobs doing the same job differently: if you have hundreds of pipelines, you make one change in one pipeline and a different change in another, and then you just lose control. It becomes very hard to support, because you don't know what you have already changed and what you haven't. And you don't have any common strategy for handling standard things, for example email notifications, how you discard old builds, log rotation, et cetera. There are more issues, but these are the main ones.

So how do we fix this? As Alexey already mentioned, we can use a library: a common place where we put shared code, move the complexity out of the deploy pipelines into the library, and let the pipelines just be configuration, with some glue code that uses the stages we have, without any complexity. The development guys don't need to know the complexity; they just take some bricks and build a small wall with them, and they don't need to make the bricks themselves. Again, single responsibility, as I already said: everything we put into the library respects the single responsibility principle, so if a part of the code is supposed to know how to deal with Behat, it does only that. And we wrote an additional DSL, a domain-specific language, to make it specific to Drupal, so we don't need to write complex things in the pipelines; the complexity has already moved into the library.

There are several ways to bring libraries into Jenkins, and in the last couple of weeks the Pipeline plugin was updated, so using libraries has become very easy, much better than it was before. Previously there was only one way to push your library into Jenkins: through the internal Jenkins Git server. You had to make sure you had access to that Git server, set it up on your machine, and push your code into Jenkins directly. Not a very good approach, but now it has changed: you can use external libraries and just pick the library you need. You can have many libraries, so you can build things according to your needs, for your project or for your shop. That's good news, because it used to be an issue. And you can check the code, or use it as is.

OK, after we fix those issues, we have the final pipeline for our session. As you can see, it's much clearer. You don't have any code anymore; you only have configuration here. This is the stage name, and this is the action. An action is divided into two parts: the class name and the method name. It's very easy. If I want the Docman config action, I just add that action to my pipeline. And when I want to use some command from DruFlow, I add the action to copy a site from the test environment to dev and give it the database name as a parameter.
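In spirit, the configured pipeline reads like this; "deploy-lib" and the action names are illustrative, not our exact DSL:

    @Library('deploy-lib') _   // a shared pipeline library holding all the complexity

    node {
        // runAction would be a library step that maps "Class.method" onto library code
        stage('Config')  { runAction 'Docman.config' }
        stage('Build')   { runAction 'Docman.build' }
        stage('Prepare') { runAction 'Druflo.copySite', from: 'test', to: 'dev', database: 'mysite' }
        stage('Deploy')  { runAction 'Druflo.deploy', environment: 'dev' }
        stage('Tests')   { runAction 'Behat.run', tags: '@smoke' }
    }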
For every stage we have actions like that, and I don't need to repeat the code; I only repeat some configuration and change the parameters where needed. If we compare what we had before with what we have now, you can clearly see that the pipeline is much clearer now, without any complexity in it. I think I'll skip this slide, because I already talked about it.

What else do we have? We have the Job DSL plugin. Why do we need it? The main goal of the Job DSL plugin is to create all the pipelines automatically. It allows us to create any Jenkins jobs, including pipelines, from code and configuration, so I don't need to click around in Jenkins. Again, it's a domain-specific language, and this plugin is not developed by us; it's a Jenkins plugin, made by the Jenkins community. It's Groovy-based, of course. Configuration is code: every pipeline is created automatically, from code. This slide shows the idea: we create every pipeline we have, for many projects and many sites, from one "mothership" config. The mothership config is just a list of projects with links to their config wrappers, and the mothership pipeline takes this configuration and creates many Jenkins pipelines based on it.

OK, probably everybody knows how the classic Jenkins interface looks, because it hasn't changed in ten years. On the left side are the changes that came with the workflow plugin, or as we know it now, the Pipeline plugin. And there is an initiative called Blue Ocean that aims to create a new, modern user interface for Jenkins, where you can visualize your pipelines easily; that's on the right side. As you can see, it looks pretty good, but it's still in beta and not very stable. You can use it in read-only mode, though; in read-only mode it works just fine. Here are some examples of these jobs in that interface. We also have GitLab integration now, so developers don't need access to Jenkins to see the status of the builds. As you can see on the left side, there is a status for each commit: running, passed, failed, et cetera. And if we go into a commit, we can see the stages, the same stages we had in our pipeline; you see them right in GitLab, with a status for each stage. And this is an example of the Acquia Cloud integration: when you want to copy a site, for example, you have it in your pipeline, and it calls the Acquia Cloud API to do the job.
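To give a feel for that mothership idea in Job DSL terms: a seed job loops over the project list and generates one pipeline per project. A minimal sketch, with made-up names:

    // seed job script for the Job DSL plugin
    def projects = ['alpha', 'beta']   // in reality this list comes from the mothership config
    projects.each { name ->
        pipelineJob("deploy-${name}") {
            definition {
                cpsScm {
                    scm {
                        git {
                            remote { url("git@gitlab.example.com:sites/${name}.git") }
                            branch('master')
                        }
                    }
                    scriptPath('Jenkinsfile')   // each project carries its own Jenkinsfile
                }
            }
        }
    }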
And now, finally, I want to show you some action. First of all, I want to show you what we have on Acquia. I'll show it on my free account, so I don't have a production environment here, because it's free; for this session, let's say stage is our prod and dev is our stage, OK? And as we have no development environment, I will show you only the final stage of the delivery, the release part of the deployment. This is when we have our feature branch tested and pushed into our master branch, and we now consider it stable, so I want to make a stable bump. For this I use Docman, but you can use whatever you usually use. I just launch the "bump stable" command, which creates a new stable tag in the Git repository, and then I'll check what happens on the Jenkins side. I'll show you this with the Blue Ocean plugin, since it's nicer than the classic Jenkins interface.

As you can see, the build has already started, and the build stage is now in progress. I think I can make it a bit bigger. You can see the current processes here. And while it's in progress, I can show how it looks in the classic Jenkins interface: the same thing, just another interface, the classic one. You can check the other jobs if you need to; that's all the jobs we have for this example session. Let's go back. I can probably answer some questions now while we are waiting for the pipeline to finish; I think it's a good time for you to ask.

What else do we run? Sorry, I didn't understand you at first. Let me try to answer: we are always running those steps. We usually clear caches, run tests, do backups of the production environment when needed, and launch some imports, depending on the website. I think that pretty much sums up what happens. It depends on your project, of course. If, like us, you have a central repository of configuration for different websites, say six websites based on the same code in a multi-site environment, and you want to run the same command on all six websites, that's what you do in the build process: when the project is built, you automatically run the commands for all the websites in the end. That's how it works.

So, we have moved on to the next stages: the stage deploy is already done, and we copied the database; we can check it in Acquia. I hope I haven't been logged out. Yes, of course, I have. Thanks, no problem. As you can see, it's now deploying code on the stage. We came back at a good time, because it's already finished; now it's the prod deploy stage. As I already said, Blue Ocean is still in beta, so I think this stage has already finished but just isn't updated here. I'll switch back to the classic Jenkins interface and check it there. Yes, it has already started the prod smoke testing. OK, let's wait for it to finish while I explain the stages. After each deploy, I want to be sure the deploy was OK. So I first run the smoke tests, the essential tests that show whether something went badly wrong, like the site not opening at all, et cetera. And I have some simple tests for this session, just so we can produce a failed build later. Yep.

It really depends on the project. We have some basic smoke tests, like whether the home page works or not, which fit every project, and then for each project we definitely need to add some specific tests. Yeah, smoke tests are basically testing the basic things: you want to be sure there is no smoke, that when you deploy something the website still works. It depends on your business requirements. You probably have some critical flow, and you must check that exactly this is working, because it's the business-critical stuff. Sometimes even the home page showing up is not that important, because you must be sure the internals work. So this is a very basic smoke test, and then we have the acceptance tests; those are normally huge.

For example, for the stage smoke testing, let's open this stage. You can see I just check that the "DrupalCon" text is on the web page: I go to the web page and check it. So we have a test that makes sure this text is on the website. Now I'm going to change something and make sure this test fails.
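Reduced to its essentials, such a smoke stage, together with the rollback idea that comes up in a moment, looks roughly like this; the "last-stable" tag handling is a sketch, and a real rollback would also restore the database backup:

    stage('Prod smoke tests') {
        // check the business-critical basics right after the deploy
        def ok = sh(returnStatus: true,
                    script: 'vendor/bin/behat --profile=prod --tags=@smoke') == 0
        if (!ok) {
            sh 'git push acquia last-stable:master --force'   // roll back to the last good tag
            error 'Smoke tests failed on prod; rolled back'
        }
    }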
I've committed and done another bump. While it's in progress, let me return to the pipeline stages and explain a bit more. After the smoke testing we do the acceptance stage; acceptance testing is just another set of tests. Then we do the prod prepare, and prod prepare runs the command that asks Acquia to create a database backup for us before we deploy. On the deploy stage you can see how the code was deployed to production, in our case the "production" environment. And after the deploy, we do some smoke testing and acceptance testing again. Let's check the next one. Yep.

Yeah, we can do that automatically: we can track the last good version of the code deployed on production, for example, and if we have problems or failed tests, we can do the rollback automatically. We can do that for sure. Some other questions, yeah?

[Question:] Have you evaluated GitLab CI as an alternative to Jenkins, now that it has pipelines and you don't need the Groovy part, because it's all in YAML? [Answer:] I checked GitLab CI, but it doesn't have enough features for us to use it on complex projects. And the Jenkins GitLab plugin works in a GitLab CI emulation mode; that's why we can have our feedback in GitLab. So it's not an issue, because it does the same job: it gives the same feedback in GitLab.

[Question:] I was just asking because in the talk about GitLab CI, the demo was exactly what I see here: basically a YAML file that calls command-line scripts, captures the output, and then, depending on what happens, proceeds with the different stages down the pipeline. It can basically do tons, and it can call command-line scripts, which is what you're doing here, and which is the way you should be doing stuff, I guess. Is there something specific that you feel is missing from GitLab CI that would be a showstopper? [Answer:] Yeah, it... it's Groovy. That's actually... yeah, it's a joke. I don't remember exactly which functionality we missed in GitLab CI, but you're absolutely right: if it's enough for you, you can definitely use it. It's actually quite hard to follow the development of GitLab and all these tools, because there is a release each month with tons of features, and each time you have to go and check what changed. When we started building this system, CI in GitLab just didn't exist, and now we see it becoming more and more mature. So at some point we will probably, as you say, move to GitLab CI, because there is no need for another entity to manage all this if it does the same job.

Yeah, now we see that our pipeline failed, because we changed the text that should be shown on the home page; it's not there anymore. We had a broken build, so we did no deployment to production. And if it had only been caught at the next stage, where the prod smoke testing executes, we would have ended up with a broken production if we had no automatic rollback. Any other questions? OK, so we can wrap up then. Thank you, guys. If any questions come to mind later, just come see us; we are still around.