Welcome to my session about continuous integration to automate your Drupal workflow. I'm honored to speak here today in Dublin. I hope you've seen several good sessions today. This is the last session, and you're probably dying to get a beer after this one, so I'll dive straight in. My name is Bas van der Heide; you pronounce that like a bus, a school bus. I work at IMR, which is a Dutch Drupal shop in Rotterdam, as partner and full stack developer, which means I know stuff about the whole process of web development, ranging from front-end to back-end to DevOps. That's why it was my responsibility at IMR to implement the continuous integration workflow.

First I want to start with why you would even want continuous integration. The first reason is quality, and there are two sub-reasons for that. The first quality reason is that with continuous deployment set up, there is no human interference: because we are all human, stuff can go south if you deploy your Drupal projects manually, and automating that gives a boost to quality. But your code structure will also be better if you implement continuous integration, through the notion of testing: you can make sure your code is tested prior to deploying, and that boosts the quality of your code as well. The second reason is consistency. Continuous integration is an automated process, which means it is the same every time you deploy, so you have a consistent process for your deployments. And the third reason is speed, because it's a real time saver: manually deploying and manually testing every time you make a code change eats away precious time. So those are the three main reasons why you would want continuous integration in your projects.

To start off with a definition: continuous integration is a software development practice in which you build, test, and deploy software every time a developer pushes code to the application. It was my responsibility at IMR to implement continuous integration, and I want to take you through these three concepts (building, testing, and deploying) and tell you more about how we did it, why we did it, and which processes it entails.

Okay, to start with the build aspect of continuous integration, I will first tell you about the Git workflow we use and how it enables continuous deployments. Then I will tell you about the development machines we use for our projects to give every developer involved the same experience. I will also tell you about task runners and how they make projects understandable and easier to manage. Then I will tell you about package managers, because they keep your projects lightweight and small. And lastly, I will tell you about the two repositories we use for each project.

So, to start off with the Git workflow: when we started implementing continuous integration, we were searching for a suitable Git branching model, or Git workflow. The first workflow we looked at was the immensely popular Git flow, which is illustrated by this picture. And I'm curious: which of you fully uses Git flow? And by fully, I mean not only the aspects you like, but the whole Git flow in its entirety. Okay, that's not many people, which I already suspected.
So first, about Git flow: it's a very complete flow, in the sense that it entails everything you need for continuous integration. It offers non-linear development. It offers hotfixes to address issues you may have on production sites. It offers release branches. It offers topic branches. So in that sense, it's a really complete workflow. But it can also be quite complicated. Especially for new developers, or people who are new to Git, it's very difficult to understand all the intricacies of Git flow. So we found it good, but a bit too complicated.

The next flow we looked at is called GitHub flow, which is really GitHub's answer to the Git flow we saw before. In GitHub flow there's only one long-lived branch, which is the master branch. When you want to do some work, you branch off that master branch, you do your work, and ultimately you integrate that branch back into the master branch. As you can see, this is much, much simpler. But maybe it's a bit too simple, because it doesn't cover everything that is necessary for non-linear development, or for hotfixes, for instance. So we found that one a bit too simple.

That's why we invented our own workflow, which we call the merge request workflow, and I will tell you step by step how it works and what it entails. As a start, it's basically the same as the GitHub flow you saw before: you have a master branch and you have separate topic branches in which you isolate your development and your features. In the merge request workflow, you start by creating a topic branch: you have a new feature request, and at that point you create a separate topic branch to implement that feature. Secondly, you do your work, of course, so you create some commits that implement the feature. Then you create a merge request, and now it's the responsibility of another developer on your team to look closely at your code before accepting it. At this point a discussion may arise about the code, which may be at the level of code style, but may also be about certain solutions you implemented: why did you do it this way, aren't there better ways? This process means that there are always more than two eyes on the same piece of code before it gets integrated. The code can also go back and forth between ready to merge and additional work needed: sometimes the code can be integrated right away, but other times heavy discussions arise and you have to alter your code before it can be merged. Finally, you merge your code back into the master branch.

The second part of the merge request workflow is the other long-lived branches, as we call them. It's important here to understand that we have one branch for each environment in the project. In this example, we have three environments. We have our staging environment, which is mapped to the master branch. We have our pre-production environment, which is mapped to the pre-production branch; that's our acceptance environment, where the client can see the work that has been done. And ultimately, you also have the production branch. Every time you push to one of these long-lived branches, that triggers an automated deployment to that environment, and I will get back to that later.
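As a sketch of what this looks like from the command line (the branch names follow the example above, the feature name is hypothetical; the merge request itself is created in the GitLab UI):

```sh
# Start a topic branch off master for a new feature
git checkout master && git pull origin master
git checkout -b feature/search-block        # hypothetical feature name
# ... do your work, create commits ...
git push -u origin feature/search-block
# Open a merge request for review; after approval it is merged into
# master, which triggers an automated deployment to staging.
# Promotion then flows via new merge requests between the
# long-lived branches: master -> pre-production -> production
```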
But for now, it's important to understand that the flow of merges is always acyclic, meaning it always flows from left to right. First, your code gets integrated into the master branch. Secondly, via a new merge request, it gets merged into the pre-production branch and deployed there, and then the same happens on the production branch.

Finally, you also want a strategy to address issues on your production environment. You can do that via hotfix branches, and this is the only exception to the rule, because here you branch off your production branch. You check out your production branch first, then you create a separate topic branch, so to speak, but in this case it's a hotfix branch, because you want to address an issue on production. You do some work, and then via a merge request it gets merged back into the production branch. At that point an automated deployment happens again from the production branch, so the issue on your production server is solved. And because of this acyclic nature, where code flows from left to right, it's of course very important that you also merge this hotfix branch into your master branch, so it will get deployed on staging and, at a later point, maybe on your pre-production environment.

It's best to visualize this. The blue circle is a new feature. You start off by creating a topic branch and you create some commits, and there may be more commits if a discussion arises and you have to alter your code as necessary. Then you merge the topic branch into your master branch, and at this point you can see your code on your staging environment. If everything goes well, then at a later point the master branch gets merged into the pre-production branch, which triggers a deployment on the acceptance environment, and ultimately the same happens on the production environment. This flow has the benefit that it covers all scenarios, so in that sense it's the same as Git flow, but it's much, much simpler to use and easier to understand.

The second thing I want to tell you about in the build aspect of continuous integration is the development machine. You've all probably had scenarios in which you did some work on your local machine, deployed it to production, and then got a fatal error because, for instance, a PHP module was not enabled on production that was enabled on your local machine, or a different version of Apache Solr was installed on production. To summarize: there is a mismatch between what your local environment looks like and your production server with all the packages installed on it. To mitigate this issue, all of our projects ship with a separate development virtual machine, which can differ between projects. We use two techniques to accomplish this. The first one is Vagrant, which is a scriptable virtual machine. It's very suitable if your production environment is self-contained, and by self-contained I mean that all the packages you need for your website to work are installed on the same server: your MySQL database, your web server (Apache or Nginx, for example), your Memcache server, and maybe Solr, all contained on one production server. Then you can use Vagrant and script your virtual machine in just the same way as your production machine is set up, and you have the resemblance there.
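A minimal Vagrantfile sketch for such a self-contained setup might look like the following; the box name and the provisioning script are assumptions for illustration, not our actual configuration:

```ruby
# Sketch only: boot one VM that mirrors the single production server.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                  # assumed base box
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder ".", "/var/www/project"
  # Install the same packages as production: web server, MySQL,
  # Memcache, Solr, and so on.
  config.vm.provision "shell", path: "provision.sh"  # hypothetical script
end
```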
But in other cases, where you have a more complex setup on your production side, it quickly becomes complicated: you may have a Varnish proxy in front with different web servers behind it, for example, or you may have several MySQL servers in a master-slave combination. Then it becomes tiresome to script this with Vagrant, and this is where Docker comes in. I won't dive deep into the intricacies of Docker itself, but for now it will suffice to say that Docker gives you lightweight containers, so you can separate concerns. And by lightweight, I mean that it won't blow up your local machine. You can have one container that is your Varnish server, and you can have multiple MySQL servers inside containers, so you can easily replicate the situation you have on production on your local machine as well.

The next section I want to tell you about is task runners. At IMR we use Grunt as our task runner, and we mainly do that to abstract away all the complicated commands that one has to know to, for instance, boot up the virtual machine. Generally speaking, a front-end developer working on a project doesn't really care whether you use Vagrant or Docker or maybe Kubernetes underneath for your development machine; all they care about is a running site. That's why we have one command, grunt vm, that is responsible for booting up this shared virtual machine. Whether you use Docker or Vagrant underneath doesn't really matter anymore. On the other hand, the back-end developer doesn't really care what you use to compile your CSS files: maybe you use Less, maybe Sass, maybe Stylus. All the back-end developer cares about is seeing a pretty site and developing their stuff on top of that. To accomplish this, we have one command, grunt theme, that is responsible for compiling all our assets, maybe minifying, maybe running Webpack for your JavaScript modules as well. grunt theme is just one command that does all that stuff. We use Grunt for this purpose, but there are many, many solutions that achieve exactly the same result. You could use Gulp, which all the new kids use, I believe, which is just as fine. You can also use npm scripts, or just plain old Bash scripts if you don't like having a task runner installed. It doesn't really matter, as long as you have one front controller to rule them all, so the developer only has to memorize grunt vm to boot up their machine. I'll show a small sketch of what that can look like in a Gruntfile after the next bit.

The next section I want to tell you about is package managers. Package managers allow you to keep your repository relatively small and lightweight: instead of having all your third-party assets installed directly within the repository, you reference them from the package files. We have several package managers set up. We use npm for our Grunt plugins and our front-end assets, such as Bootstrap or Font Awesome. We could, of course, also use Bower for that, but that's yet another package manager, so we opted to use npm instead. We use Bundler for our Sass compilation and also for our SCSS linting, which I will get back to later. And we use Composer for our PHP packages, and in the case of Drupal 8 also for our contributed themes, modules, and everything else that Drupal needs. So package managers allow you to keep your repository lightweight: you don't store the packages directly, you just have the package managers set up.
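Coming back to the front-controller idea for a moment, here is a minimal Gruntfile sketch. The task names (vm, theme) follow the talk, but the plugin choices and their configuration are assumptions for illustration:

```js
// Sketch of front-controller tasks; the plugins used are assumptions.
module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-shell');
  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  grunt.initConfig({
    shell: {
      vm: { command: 'vagrant up' } // or 'docker-compose up -d' under Docker
    },
    sass:   { theme: { files: { 'css/style.css': 'scss/style.scss' } } },
    uglify: { theme: { files: { 'js/app.min.js': ['js/app.js'] } } }
  });

  // "grunt vm" boots the development machine, whatever runs underneath.
  grunt.registerTask('vm', ['shell:vm']);
  // "grunt theme" compiles and minifies all front-end assets.
  grunt.registerTask('theme', ['sass', 'uglify']);
};
```

The point of the design is that the underlying tools can be swapped without the rest of the team having to relearn anything: the front-controller commands stay the same.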
But, of course, this situation can become quite complicated, as is illustrated very well by this tweet from Stefan Baumgartner. He says: what's Bower? It's a package manager, you install it with npm. What's npm? It's a package manager, you install it with brew. What's brew? And so on, and so on. So the current situation, with all the different package managers available, gets complicated quite easily. To mitigate this issue somewhat, we use npm as our global package manager, so to speak. npm has this notion of post-install scripts, or hooks, and in our case we have the composer install and the bundle install inside our post-install hook. So the only thing a developer has to memorize after an initial git clone is to run npm install, and that takes care of running composer install and bundle install as well. With this setup you can also add Bower quite easily: you just add another package, add it to the post-install hook, and it takes care of the rest. (A minimal sketch of such a hook follows at the end of this section.)

The last package manager I want to tell you about is Drush make. For your older projects, mainly Drupal 7 and earlier, you can use Drush make as a package manager to download all your contributed modules, themes, and libraries. Just as you can use Composer in the current Drupal 8 situation to download all the modules and, if necessary, Drupal core itself, you can use Drush make for older versions of Drupal. This example is just a snippet from the Drush make file of one of my Drupal 7 projects: I have Field Group, Field Permissions, and Google Analytics identified inside the YAML file, and Drush make is responsible for downloading the right versions of the right modules. This also gives you a very easy update path: if you want to update Google Analytics to version 2.4, for example, you just change that one line, re-run the grunt make command, and it will download the new package.

The last aspect of building I want to tell you about is the two repositories we use. Everything I've told you about so far lives in the development repository. The development repository contains everything that makes the life of a developer easier, in the sense that it contains the package managers, the development virtual machine, and all the Grunt plugins. Everything a developer needs to do their work properly is in the development repository. But we also have a second repository, which we call our build repository, and in that repository we store the build artifacts of our development repository. The build repository contains only the compiled CSS files, the minified JavaScript files, and nothing else: just the code needed to run the site on your production server. Via this mechanism you can separate your concerns, because we host our projects at various locations, and if you had your development repository checked out on all those locations, you would have to install many packages on all of them as well: you would need Grunt to compile your JavaScript files, you would need Sass to compile your CSS files, and that just adds to the complexity of your project. With a separate build repository, you keep all those intricacies in the development repository, and only the build artifacts get deployed on your production server.
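Coming back to that npm post-install hook: it is just a script entry in package.json. A minimal sketch of the idea, containing only what was described above:

```json
{
  "scripts": {
    "postinstall": "composer install && bundle install"
  }
}
```

So one npm install after a fresh clone pulls in the Grunt plugins and front-end assets, and then chains into Composer and Bundler.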
I have seen other agencies that also have some form of build artifacts, and mostly that takes the shape of a tarball which is then rsynced over to their production server. But we opted to store everything within the build repository, in Git as well, because that gives us all the advantages of version control that we like so much. We can tag releases and refer to those releases by their tag names. We can have automated release notes, which are just the sum of all the commit messages. You have date information, so you can properly see when you did releases. You can easily do a git diff between two releases, for example, to see what has changed between release X and Y. And last but not least, you have an easy rollback mechanism, because you can just go to the production server, check out an old tag, and the rest is taken care of.

So, to summarize the building aspect: first, we maintain a Git workflow that everyone understands and that lets us do our automated deployments properly. Then we make our projects reproducible with a development virtual machine, so every developer has the same experience while working on a project. Next, we keep our repository clean with package managers, and we have a clear and understandable build process with task runners, so there are only a few commands to understand in order to accomplish certain tasks. And lastly, we separate our concerns with a build repository next to our development repository, and the build repository is what gets checked out on our production server.

The next section in continuous integration is about testing, and I want to touch briefly on PHP, JavaScript, and CSS. When I talk about testing, I actually mean static analysis testing, which I also call the low-hanging fruit of testing. There have been many talks before about unit testing and integration testing, which really run your code, and that is good practice, but now I want to focus on static analysis testing. That is testing from a static perspective, which means your code is not being run; these tools just look at your code. So what can static analysis accomplish? It can check coding standards, for instance when to use camel casing and when not, and whether you have successfully abided by the Drupal coding standards. It can look at code complexity, in the sense of method length, nesting too deep, stuff like that. And it can look at unused variables: variables you've declared and then never used again.

We enforce our static analysis testing firstly via Git hooks. That means that every time you issue a git commit, that triggers a grunt test command. grunt test is another example of a front-controller command: it is responsible for kicking off all the static analysis tools we have for the project, and which tools those are may differ from project to project, which is exactly why grunt test works as a front controller. Via this mechanism you make sure that every time you do a git commit, grunt test is run, and you know your code is okay on a basic level: coding standards are applied, stuff like that. Of course, the grunt test command also runs prior to each deployment, so if you push your code and something goes south there, your code is not deployed.
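As an illustration, the Git hook that does this can be as small as the following sketch:

```sh
#!/bin/sh
# .git/hooks/pre-commit (sketch): run the static analysis front
# controller; a non-zero exit status aborts the commit.
exec grunt test
```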
Now I want to tell you about some of the tools we use as our static analysis tools. The first one is the immensely popular Coder module, the Drupal module. Coder uses PHP_CodeSniffer underneath, and via configuration you can set it up to check your code for certain style aspects. In this case I had used the t() function inside my hook_menu() in a Drupal 7 project, which I shouldn't have, and the Coder module finds issues like these and tells you about them.

The second tool is called PHP Mess Detector, and it looks at your PHP code from a more general perspective: it doesn't look at your coding standards, but it looks at code size rules, for instance. If you have a method within a class that is too long, in this case 176 lines, then it's probably a good candidate to be split up into multiple functions. You can have it enforce a certain threshold, say 100 lines of code for a method, and enforce that structure. It also looks at naming rules: in this case I had a variable called s, which is not very descriptive, and you can set a threshold here too, for example that your variable names should be at least three characters. Another example is unused code rules: here I declared a variable that I never used, and it starts complaining about that as well.

The next tool is the security advisory checker, which is a really handy tool: it looks at your composer.lock file, maps it against a database of known vulnerabilities, and tells you about those vulnerabilities if you have a package installed which has a leak in it. A while back, for one of my projects, I noticed that my deployments were suddenly failing, so I went into the project to see what was going south, and it was the Guzzle library. There was a leak in Guzzle not too long ago, that leak was reported in the database, and my security checker was failing. If I hadn't had this process in place, I wouldn't have known that Guzzle had this vulnerability, but because I had the security checker, my deployments were failing and I had to update Guzzle in order for my deployments to run again. So this is a real time saver.
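For a feeling of what these three tools look like when run by hand, here is a hedged sketch; the module path is hypothetical, the rulesets are examples, and this assumes the Drupal standard is registered with PHP_CodeSniffer and the SensioLabs checker is the one in use:

```sh
# Coder: PHP_CodeSniffer with the Drupal coding standard underneath
phpcs --standard=Drupal sites/all/modules/custom

# PHP Mess Detector with code size, naming, and unused-code rules
phpmd sites/all/modules/custom text codesize,naming,unusedcode

# Security advisory checker: match composer.lock against known advisories
php security-checker.phar security:check composer.lock
```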
We also use PHP Copy/Paste Detector, for which you can also configure a threshold; it then scans your code for code you've duplicated more than once. This keeps your code DRY: if you have the same ten lines of code in three different places, for example, that's probably a good candidate to abstract away into a function or a class, and the copy/paste detector finds issues like these and enforces that.

The last PHP tool is called PHP Reaper. What it does is find potential SQL injections inside your query statements. Of course, since Drupal 7 we use the database abstraction layer, so it's not really needed anymore, but if you still have older code, or if you use db_query() directly and have an SQL injection in there, it will find it.

Of course, your code doesn't consist of PHP alone; there's also JavaScript code that needs testing. For this purpose we use JSHint. JSHint is, sort of, like PHP Mess Detector, but for your JavaScript code. Examples of what it does: it checks that you use triple equals signs instead of double equals signs everywhere; it prohibits you from nesting your code too deep, so if you have an if within an if, ten levels deep, it will start complaining; and it also detects unused variables. And that's just the tip of the iceberg. There are actually some alternatives to JSHint. The first alternative is called JSLint, which was written by Douglas Crockford. He is a really good JavaScript developer, but the problem with JSLint is that he has a really strong opinion about how he wants people to write JavaScript, so JSLint is not configurable at all: you just have to write the same code as Douglas Crockford does. If you like his style, you can definitely use JSLint. There is also ESLint, which is very powerful because it also offers the possibility to check your ES2015 JavaScript code, so if you're into that, ESLint is probably your tool. Here are some examples of what JSHint can find: in the first example I missed a semicolon, and the second example shows that I used two equals signs instead of three. As you can see, it also points out the location where you made the mistake, which makes for easy fixing.

The last tool I want to tell you about is SCSS-Lint. SCSS-Lint is also a linter, just like JSHint and PHP Mess Detector, but for your Sass code, and it enforces structure in your SCSS code as well. You can have it enforce property ordering, for example that all the properties within your selectors are alphabetically ordered. You can have it prevent inline hexadecimal color codes, so that you use the variables you've set up for all your colors instead of the direct hexadecimal codes. You can have it enforce single quotes, or double quotes if you prefer that. It prevents illegal statements such as border: none, which is, strictly speaking, illegal (it should be border: 0), so it finds stuff like that too. And last but not least, it enables you to disallow units: if you and your team agreed on using rem for your new project, and a new developer joins and starts using pixels everywhere, SCSS-Lint will start complaining about that.
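A sketch of an .scss-lint.yml enforcing rules like the ones just mentioned; the exact values here are example assumptions, not our project configuration:

```yaml
linters:
  PropertySortOrder:
    enabled: true            # alphabetical property ordering
  ColorVariable:
    enabled: true            # no inline hex codes, use your variables
  StringQuotes:
    enabled: true
    style: single_quotes
  BorderZero:
    enabled: true            # "border: 0" instead of "border: none"
  SingleLinePerSelector:
    enabled: true            # each selector on its own line
  PropertyUnits:
    enabled: true
    global: ['rem', '%']     # e.g. allow rem, disallow px
```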
Here's another example of what SCSS-Lint can check. I've set it up so that, if I have a statement with multiple selectors, each selector should be on its own line, and I forgot some. The second example is property ordering (border, color, font-family, font-weight): in this example I've set it up to enforce alphabetical ordering, and my code didn't follow it. The drawback of SCSS-Lint is that it doesn't show you the offending code the way JSHint does; JSHint really shows you the code where you made the mistake, and with SCSS-Lint you have to look up the line numbers yourself.

So, to summarize: what does static analysis testing do for you? First and foremost, it greatly reduces your code review time, because at IMR there are always more than two eyes looking at the same code, and by having your code comply with the basic standards (your Drupal standards are okay, you don't have methods with 200 lines, all that stuff is taken care of) the reviewer can really focus on the code itself and the contents of your code. Secondly, static analysis enforces structure in your code: you and your team can sit around the table, discuss how you want code to be written, and then have static analysis testing enforce that. And last but not least, static analysis tools are highly configurable, so you can configure each of the tools I told you about, with JSLint being the exception, to your own needs.

The last aspect I want to tell you about is the automated deployment mechanism, and I will do that by telling you about build pipelines, stages, and tasks, and finally something about environments. At IMR we really love GitLab and the way it's going: we don't only use it for our issue tracking system, but we also use it to automate our deployments and to set up continuous integration. A build in GitLab runs through a pipeline, and that pipeline is the same for the same project over and over again. You specify what the pipeline looks like on a per-project basis, and then that pipeline is the same throughout all your builds; of course you can have multiple builds for the same project, and those builds all run through that pipeline. A pipeline consists of one or more stages, and it's important to remember that these stages run in serial, which means that the first stage runs, and only when it is completely finished does the second stage run, and finally the third. So you have a dependency mechanism inside your build process. A stage, in turn, consists of one or more tasks, and these tasks run in parallel: these two tasks run at the same time, and only when both are completed does the next stage kick in.

You can also see this process via the UI. Here I've declared four stages: my prepare stage, my build stage, my test stage, and my deploy stage, and you can see the green check marks, which mean that all the stages completed. You can see the prepare stage consists of three tasks, which are the installation of all my packages: composer install, bundle install, and npm install in this case. They can all run at the same time, and when all three are finished, the build stage takes place.

What this pipeline looks like is defined in your .gitlab-ci.yml file. Typically you start the file by listing your stages, and underneath you define tasks and assign each to a stage; so task one, in this example, is part of stage one. Then, via the script key, you give the task a script, which means that for the task one example, npm install runs as the task, and when it's finished, the task is finished as well. Lastly, a task can have artifacts: npm install fills the node_modules folder, and you want everything within that folder to be taken into account in your next stages. You can also make these artifacts downloadable.
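Put together, a skeleton .gitlab-ci.yml along these lines might look like this sketch; the stage names follow the talk, the task definition is an example:

```yaml
stages:
  - prepare
  - build
  - test
  - deploy

npm:
  stage: prepare
  script: npm install
  artifacts:
    paths:
      - node_modules/
```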
In the deployment task you can also see the environment and only keys, which I will get back to shortly. Now I want to tell you about the several tasks we have set up. This is an example of the Composer task. It's part of our prepare stage, which means that prior to deploying, you want your Composer libraries to be installed. Its script is composer install, and you can see the output on the right: it installs all the libraries, and then it uploads everything inside the vendor directory to the CI server for later use.

A second example is our test code task, which is part of the test stage, and it runs grunt test, as I explained earlier. You can see on the right what it does. It runs PHP Mess Detector first, then PHP lint, which I didn't even tell you about, but that just checks whether you have any syntax errors in your PHP files, because, then again, static analysis testing only looks at your code from a static perspective, so by default it will not find syntax errors, since it does not actually run your code. Then JSHint checks your JavaScript code, SCSS-Lint checks your Sass code, and finally it checks whether the translations for all your modules are in place. A task can also have dependencies on other tasks: in this case I want my npm, Bundler, and Composer tasks as dependencies, so it takes all the artifacts from those tasks into account before running the test code script.

Here's an example of our deployment task. It's part of our deploy stage, and the script it runs is grunt deploy, followed by our environment, in this case staging. grunt deploy is another example of a front-controller command, responsible for the things you see on the right: SSHing to the staging server, checking out the deploy tag, clearing the cache, then importing your configuration via configuration management, and running any database updates if necessary. What the grunt deploy task does changes per project, of course, because for a Drupal 7 project you probably want a features revert, for example, but by having this front-controller command, which is always the same, you have one mechanism in place.

You also see the environment: staging and only: master keys here. That means this task is mapped to a certain environment, in this case the staging environment, and that this task will only run on a certain branch, in this case the master branch. Via this mechanism you can have a branch mapped to an environment, which you also see here: I've got two tasks, the deploy-to-staging task and the deploy-to-production task. The deploy-to-production task only runs on the production branch, and the deploy-to-staging task only runs on the master branch. This gives you the branch mappings: a branch always maps to a certain environment (sketched below). You can also see this in the user interface. You can see your environments (in this case I have two), you can see when you did the last deployment, what commit it points to, and how long ago it was. If you click on a certain environment, in this case the staging environment, you can easily see all the previous deployments you've done as well, and see more information: here you see that I deployed an hour ago, and then seven days ago. And you can even do an easy rollback, so that story where I told you that you have to manually SSH to your server and check out a tag: that's not even necessary, you can do it via the UI as well.
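As a sketch of that branch-to-environment mapping in the .gitlab-ci.yml (the exact argument syntax of grunt deploy is an assumption):

```yaml
deploy_staging:
  stage: deploy
  script: grunt deploy:staging      # front-controller command from the talk
  environment: staging
  only:
    - master

deploy_production:
  stage: deploy
  script: grunt deploy:production
  environment: production
  only:
    - production
```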
So, to summarize deploying with GitLab: it always runs through a configurable pipeline that you define on a per-project basis. A build consists of one or multiple stages, which run in serial, and a stage consists of multiple tasks, which run in parallel, at the same time. And finally, you can map a task to an environment for your deployment purposes. I thank you very much. Are there any questions? I will ask you to use the microphone.

Hello. Does GitLab provide a user interface to display test reports, like the Mess Detector reports and all this stuff? The reason we don't use GitLab so far and went with Jenkins is that Jenkins has a user interface to display all these reports.

Yes, you can, actually. Let me get back to that slide. Here you can see all the stages, and you can see which tasks a stage ran. You cannot see it here, of course, but if you click on one of those tasks, it shows you the exact output of the task it runs. Via that mechanism you can easily check what went south: if something went wrong, you will see a red cross, for instance at the test code task, and you can click on that and see what went wrong there. Does that answer your question?

Yes, partly. So you don't have a real user interface to go deeper into the test reports and see which line of which file was affected, or see graphs of, say, how the mess in your code progresses over time?

Well, yes, you can, because all the test commands have some form of output. For example, this is the output of our SCSS linter: if you click on the test code task and it failed, you can easily see the reasons. And via the build artifacts you can also generate reports and make them downloadable as artifacts, so after the build has completed, regardless of whether it passed or not, you can download the reports.

Hi. We use Jenkins, and I'm sure there will be other people in the room who do. Is there a specific reason you chose GitLab over Jenkins, or any other competing software package?

Well, the main reason we use GitLab is just that it resonates well with our brains. They recently released their master plan, and I just think that, generally speaking, GitLab is going in the right direction, and it offers so much more than continuous integration alone. I haven't shown you, but there are the merge requests: via the UI you can create your merge requests, assign them to users, and also refer to issues, because we also use it as our issue tracking system. So in that sense we use GitLab as a total package for everything.

How do you keep the software on the live, on the production machine, up to date? Both the basic software, like PHP and Apache and the system software, and the website, the web application?

Well, for all the underlying techniques, so PHP, Apache, all those packages, we still do that manually. But for our Drupal modules and things like that, we also use the CI tooling. We have an update script, which is also a task, and what it does is check our composer.json file for all the contributed modules we have installed, and in the case of Drupal 7 projects it checks our Drush make file against the update XML on Drupal.org. If there are security updates, it automatically updates the Drush make file, for example, and then also automatically creates a merge request and assigns it to one of our developers, so it gets tested before being deployed to production directly.
Can you elaborate a little bit on how you compile everything from the development repository to the build repository? Do you have a Grunt task that copies things over, selects what gets copied, and compiles files into it, or something like that?

Yes, that is what we call our make command. What it does is first install all the necessary things: for instance, it looks at the Drush make file, downloads Drupal core, downloads all our included modules, then compiles our Sass files, so we have a local setup that you can view as a website as a whole. Then that whole directory gets rsynced to a separate directory, which is our build directory, stored in Git, and we automatically do a git commit on that one, and a git tag, and push it up to the build repository. So that's quite simple.

Hello. Are you planning, or are you doing, any kind of browser testing? Because we're talking about committing changes and deploying to the live server, but do you use any kind of browser testing to check that your site is not broken, or really badly broken, by some of your commits? I mean, just checking whether your live site still works after a commit, or something like that.

We use browser testing for one of our projects, but we still do that locally, unfortunately, because within our build mechanism there is no running site; there's no running database there either. That's why we also use static analysis testing as much as possible. The next step we're currently looking at is creating an environment every time a build occurs, and that will enable us to do more thorough testing in terms of unit testing and browser testing.

Hi. How long did it take you to implement this entire workflow, from start to finish?

Yeah, that took us a while, to be honest. And of course it's not binary: it's not that you either don't have continuous integration or you do; there's always a path in which you start implementing more and more. What I would advise, if this sounds interesting, is to first start by just using GitLab, for example, but that can also be Jenkins, and then gradually work your way up. Maybe a few weeks later you implement static analysis testing, just one tool, and then maybe two tools, and you just add on and work your way up that way. For us it took about half a year or so, from zero to where we are right now. All right, thank you very much for your time.