Is everybody awake? First of all, I'll ask you a few questions. Who is deploying to production at least once a month? I hope everybody? No? At least once a week? At least once a day? Okay, it's the right audience.

To start, a few words about myself. I'm Frederic Devin, CTO and co-founder of the ContinuousPHP platform. I'm also a technical account manager for the European Commission. I have fifteen years of background as a sysadmin and PHP consultant, I'm a certified PHP engineer and evangelist, and a continuous delivery and deployment addict. Please accept the apologies of Peter, who was originally supposed to present this talk with me. He couldn't make it, but he helped me a lot in writing these slides.

First, a few words about the project we are talking about: the European Commission. Everybody in the room knows about the European Commission? No? It's a governmental organization, a body of all the European governments that writes rules for each country in Europe. About 50,000 people work at the European Commission, and there are hundreds of websites — many in static HTML, but not all of them. There are hundreds of dynamic websites using a lot of different technologies: PHP, yes, but not only PHP; JavaScript too. And there are millions of pages — the total is unknown because it's so huge. In fact, every single decision made at the European Commission has to be published on the web; that's why it's so huge.

This is the main portal of the European Commission, but there are other portals, like Horizon 2020, a programme for research and innovation with a budget of 80 billion euros of funding available over seven years. You can find all the information on this website: you can propose your project, see which projects are part of it, and so on. You also have the commissioners' website.
On this website you will see all the commissioners, all the members of the European Commission — the news about them, their agendas, and so on. There is also the European Youth Portal, for young people looking for jobs or studying in Europe; all that information is available there. So there are a lot of websites like this — hundreds — and for an audience of approximately 500 million people in 28 countries, with 24 languages, it's huge.

To build all this, the European Commission chose Drupal. Why Drupal? It's a solid platform: stable, secure, scalable. I don't have to explain to you why to use Drupal, not here. But the choice was made — at the time, for example, the White House was using Drupal — and it was the best option for the European Commission. So in 2012 they decided to create a multi-site distribution, for approximately 100 subsites to start. It's a Drupal 7 distribution, the base framework with the common features available to all the subsites. It uses Drush Make for the packaging, the installation, and the deployment, and it follows all the best practices you can find on Drupal.org. Development started in 2012, as I said — a long road started right there.

By 2014 there were a lot of challenges to face. The team was growing too fast: 100 websites in 2014, 100 more planned, more than 200 developers. It's huge, and the developers are split across different countries; they are not in the same location. You also have more than 1,000 content editors, which is another big challenge, and 250 modules, so performance is a real concern too.

I mentioned the distributed teams. The organization has many contractors in many countries, so everybody is working with different operating systems; some are internal, others are external.
The contractors follow their own company's culture instead of the European Commission's. Some work in VMs, others on their own laptops. Also, at the European Commission everything is behind a DMZ, so accessing the version control system from outside is very difficult. And as everybody has a different kind of computer, it's really hard to install the application locally, because the scripts are completely different depending on whether you are using Windows, macOS, or Linux.

They were also using outdated practices. They were on SVN — yes, a lot of people are still using SVN, but it prevented them from being more agile. They used batch scripts to deploy the application and to install it locally, which were very hard to maintain. Internally they used a shared development server: they connected to it over SSH to work, while contractors used their own laptops. And the worst of all these practices: they were releasing only once every three months, so the size of each change was really huge.

That brings us to the bottlenecks they encountered. Code review was completely manual: every single change made by a contractor or developer was manually reviewed by a quality assurance team member. Deployment was completely manual too, so it took a long time — and the risk is very high when you deploy manually. And with updates only every three months, it was hard to wait for new features and hotfixes, and to test them: every three months, the QA team spent about one month testing the new release. It's huge. They also hit lots of regressions: with such a high rate of code change, the regression risk was really big, and there were no systematic automated tests — no coding rules checks, no behavior tests, nothing. So it was really hard to keep that under control.
So they decided to optimize their quality assurance processes. The goals were to reduce the bottlenecks through automation, to speed up the release cycle, and to adopt up-to-date best practices — not only for Drupal, but for every single development project — and to use a common build system rather than a specific build system per project. Also, to ease external access so that every developer works the same way, and of course to automate the testing and QA processes.

To do this, they decided to move towards industrialization. It's quite logical, but when you start from nothing — or almost nothing — you first have to define the plan and the toolset you will use. That's what was done: in a few weeks we analyzed the needs of every single team and every single project, found the common needs, and defined the best tools for the needs of the entire team of more than 200 developers.

First of all, we decided to move to Git — and not only Git, but GitHub. It's not only the ease of branching and merging you get with Git compared to a tool like SVN; you also get pull requests, and it's easy to integrate other platforms, systems, and tools with the repositories. So we started to use GitFlow, converted all the SVN history to Git, and also cleaned up the history, because the old deployment relied on internal, proprietary scripts that were in the history, and we didn't want to put those scripts on GitHub. All the platform code is public on GitHub — you can look at it, you can fork it if you want; it's an open source project. And since everybody had been using SVN, we had to train the entire team, team by team.

The next step was to define the tools to manage the dependencies in the code.
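In this setup, the QA tooling itself becomes a set of Composer dev dependencies alongside the code. A hypothetical `composer.json` fragment in that spirit — the project name and version constraints are illustrative, not the Commission's actual file:

```json
{
    "name": "ec/platform-subsite",
    "description": "Illustrative sketch: QA tooling declared as Composer dev dependencies",
    "require-dev": {
        "phing/phing": "~2.0",
        "behat/behat": "~3.0",
        "behat/mink-extension": "~2.0",
        "squizlabs/php_codesniffer": "~2.0"
    }
}
```

With this in place, `composer install` gives every developer — internal or contractor — the same task runner, test framework, and code sniffer, regardless of their operating system.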
On Drupal 7, yes, Composer is not often used, but the choice was to use Composer — and not only for the code itself. All the tools you use in a project are dependencies too: the testing tools are dependencies, so it's quite logical to declare them in your Composer file. That's what we did: in the project you can find the composer.json with all the tools needed to execute the tests and automate the delivery processes.

After that, we decided to use a task manager. We chose Phing. Why? Because it's a PHP project: it's easy for a PHP developer to use, and easy for a PHP developer to extend, because everything is written in PHP. Yes, Drupal uses Drush, but Drush is easy to integrate into Phing — you can find the phing-drush-task project, which adds a Drush task to Phing. So you can have tasks like this: you use your Drush make file, and you can call all the commands you have in Drush from Phing. On top of that, you can add all the automation you need in Phing.

It was also important to add coding style rules — maybe you attended the earlier session about coding styles. We created a custom ruleset based on the Drupal ruleset. The configuration is generated with Phing, and every single push is checked with PHP_CodeSniffer before it actually goes out. If there is something wrong in the code, it's impossible to push. Yes, you could remove the hook if you wanted, but it's not a good practice — and right after, it would be blocked by the CI platform anyway. So that was one of the first steps, adding the coding rules.

After that, we decided to add functional tests. For this, we used Behat. Why Behat and not SimpleTest or PHPUnit? It's simple: the tests are written in almost plain English — the Gherkin syntax — to define the specs. So it was easy to involve the product owner in the spec definition and the test definition.
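A Gherkin spec in this style might read like the following sketch — a hypothetical login scenario using the standard Mink step definitions, not the platform's actual test suite:

```gherkin
Feature: User login
  In order to manage content
  As a content editor
  I need to be able to log in to the site

  Scenario: Logging in with valid credentials
    Given I am on "/user/login"
    When I fill in "Username" with "editor"
    And I fill in "Password" with "secret"
    And I press "Log in"
    Then I should see "Log out"
```

Even someone who has never written code can read this and say whether it matches the intended behavior.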
As Behat drives a real web browser, it's very easy to test even the JavaScript integrated with your application. It was also the best way to ease the maintenance of the behavior tests. A test like this one, just for the login page, is really easy to understand for a product owner, for a non-developer. And when a test fails, you know exactly which feature is blocked — because in the end, what really matters to you is knowing which feature is impacted and which feature is ready for the product owner.

Right after that, we decided to have testing environments — but ephemeral testing environments, not a single shared one. Every single branch can have an environment. To do this, we needed an immutable infrastructure. With tools like Packer it's really easy, because for any version of your code you can have a version of your machine image with all the system dependencies you need. And there is no vendor lock-in: you can use AWS, Azure, VMware, or VirtualBox to create your image. It was really important for the European Commission not to have vendor lock-in. So they are using tools like Packer and Salt to pack the application. As you can see, it's very easy to use if you are already using provisioning tools like Chef, Puppet, and so on: you combine them with your packing system, and with this you won't need to provision your servers when they start — you already have an image. You can create new images on the fly and version them along with your code.

The next step was not only to have the image, but to use it to build the ephemeral environments. For this we decided to use AWS at first; right now we're starting to use Azure as well. Not all the environments are on AWS, so we can easily switch from one to the other depending on the needs, the cost, and so on. So it's a coded infrastructure.
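A Packer template in this spirit might look like the following hypothetical sketch (JSON-era Packer syntax; the region, AMI ID, and state tree path are placeholders): one builder bakes an AWS image, and the Salt provisioner does the configuration. Adding a second builder block would target another vendor with the same provisioning, which is exactly the no-lock-in property mentioned above.

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.small",
      "ssh_username": "ec2-user",
      "ami_name": "platform-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "salt-masterless",
      "local_state_tree": "salt/states"
    }
  ]
}
```

Because the template lives in the repository, every version of the code can point at a matching version of the image.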
And most importantly, we defined the infrastructure as a dependency, so you can manage the version of your infrastructure as easily as you manage the version of your modules, using Composer. And as you can see with a new deployment, we can create a new infrastructure in only four minutes. It's very short, and you pay only per use, so you can have a lot of new environments in a few minutes.

After that, we needed a deployment platform — a continuous deployment platform — to orchestrate all these tools. They decided to use ContinuousPHP. Why? Because there is no vendor lock-in, it's not a worker-limited model, it's possible to run tests in parallel, and it's focused on PHP. It supports other languages too, but it's focused on PHP. And it simplifies the delivery workflow, because it proposes a workflow right from the start. This is the way we can manage our workflows.

After that, as there are many websites, many subsites, and the number of subsites keeps increasing, we decided to create a skeleton. Thanks to GitHub, creating a new subsite is just a fork of this skeleton, and every time the skeleton is updated, the subsites can benefit from the change thanks to pull requests. It's a common starting point for all websites, all the tools I presented before are included in it, and everything is documented, so a new developer can start a completely new project — with all the tools needed to manage quality assurance — in less than two hours.

After that — well, having a toolset doesn't mean you know how to use the tools together. So we had to select good practices to manage these workflows, these processes. There are not so many practices. First, there is continuous integration. Maybe everybody knows about it; if not, almost everybody is using it, maybe without knowing it.
With continuous integration, every single branch is merged into a develop branch, and this develop branch is tested. That was the start — we began with that, and every test was put into this step.

After that, we moved to continuous delivery. With continuous delivery you have everything continuous integration gives you, plus you build a package. This package is very helpful for deploying: you can deploy it on any kind of environment, and it has already been tested. If you are using Scrum for product management, it's very useful, because at the end of the sprint you have a package ready to deliver to your customer, to the product owner.

The next step after that is continuous deployment. Continuous deployment is almost the same, except there is no manual deployment to production anymore — everything is automated, and any new feature is deployed as soon as it's ready. It's the best approach if you want to deliver business value to your users quickly, and in that case you are using a Kanban approach more than a Scrum approach.

So, a little comparison. With plain development — what was done before — you only code and build, and at the end of the sprint you deploy manually, without any tests. With continuous integration it's the same, except every single push is tested: not released, but tested. With continuous delivery, same thing, but a package is built, and once the package is built it can be automatically deployed on a new environment to test it; you can still deploy manually and check that everything is good in production too. And with continuous deployment it's the same, except there is no manual interaction anymore: you push, and if all the tests are okay, it deploys.

That's what we did — but not for everything, because we cannot start deploying every single new feature straight to production. So we started with only the hotfixes.
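Under GitFlow, that hotfix-only continuous deployment path reduces to a handful of git operations. Here is a self-contained sketch in plain git commands (a throwaway repository with illustrative branch names, version numbers, and file contents — not the Commission's actual scripts):

```shell
#!/bin/sh
set -e

# Throwaway repository standing in for the real platform repo.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "ci@example.org"
git config user.name "CI"
echo "v1.0.0" > VERSION
git add VERSION
git commit -qm "Release 1.0.0"
git checkout -q -B master          # normalize the branch name to master
git branch develop                 # develop = integration testing branch

# A hotfix branches off master (the pre-production branch)...
git checkout -qb hotfix/1.0.1 master
echo "v1.0.1" > VERSION
git add VERSION
git commit -qm "Fix critical bug"

# ...is merged back into master and tagged; the tag is what the
# continuous deployment pipeline pushes to production.
git checkout -q master
git merge -q --no-ff -m "Merge hotfix 1.0.1" hotfix/1.0.1
git tag -a 1.0.1 -m "Hotfix 1.0.1"

# ...and is also merged into develop so the fix is not lost.
git checkout -q develop
git merge -q --no-ff -m "Merge hotfix 1.0.1 into develop" hotfix/1.0.1
git branch -qd hotfix/1.0.1

git tag                            # prints: 1.0.1
```

Features and releases follow the same shape, except that their merges end in develop and in a release branch rather than in an immediate production tag.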
Here is the branching model — the well-known "successful Git branching model", GitFlow. Every tag goes to production, the master branch is pre-production, and the develop branch is integration testing. Every new feature has its own branch; once a feature is finished, it's merged into the develop branch. At the end of the sprint we can have a release branch, which is merged if it's accepted by the product owner. But a hotfix can occur at any time, and the hotfix is deployed to production using continuous deployment.

The next step was the ephemeral environments. Using the deployment pipelines, we defined some use cases. Production, as I said, is deployed from the tags; pre-production is deployed from master; but any hotfix, any feature, any release can have an ephemeral environment using all the tools we saw. These ephemeral environments can be destroyed automatically every night.

And in fact, what is more important: we don't deploy the branch, we deploy only the pull request. In this case we don't automatically test only the feature branch — we test the merge of the feature branch and the develop branch, and we deploy that merge on the specific ephemeral environment. So we have all the new features included alongside the specific feature we are testing, and if there is a regression after the merge, we can catch it in advance.

So, in the end, what are the achievements, in only one year? Because it started one year ago. We drastically decreased the time to market: now we can deploy a new version of the application with an average of four deploys a day, compared to four deploys a year. We have multiple releases per day, and we decreased the bottlenecks. As I said, the QA team was testing every single change; now, an average of 80% of the bad changes are blocked by the CI platform first.
So the bottleneck has decreased a lot. Now the QA team are doing their real job: they can test more and more features — but ready features, not in-progress features. If coding rules are not respected, they don't have to check the coding style and so on; it's blocked before it gets to them. Also, using GitHub, they can do code review online: they don't have to clone the project and check out the branch, they can do it in the browser. The test coverage has increased a lot as well, so the assurance of quality is much higher than before. One thing that is very important for them: any contractor can start and run the website locally, with the same configuration as in production, very easily, without installing a lot of tools on their computer. If there is any problem, an alert goes directly to the Slack channel, and so on. So they save a lot of time and they can release a lot more new features than before.

Is there any question? And thank you for your time.

Yeah — these tools are maybe not common in the Drupal community, but they are common in the PHP community, like Composer. Yes, but Packer is written by the same team as Vagrant — it's from HashiCorp — and no, it's not focused on PHP, it's focused on the infrastructure. Maybe it will become more and more used by PHP developers in the coming years.

Other questions? No? It's on the cloud — every ephemeral environment is on the cloud. Once a developer creates a new pull request, the CI platform automatically triggers the stack creation and deploys the new version on this specific environment, and at night it destroys the stack to save money.

We are not using the production database in the tests. We are using a subset of the database, not the entire production database. Using migrations — Drupal Migrate — we migrate the schema.
We inject the testing data and we execute the tests. Because if you use the production database, when you create a new feature, the data for that specific feature is not yet in production, so you have to create fixtures to test it anyway. Also, using the production database is very heavy because of its size, and the only thing you need is the specific use case, not the whole database. But on the ephemeral environments, what is possible is to use a snapshot of the pre-production database — a sanitized one. In that case you can run additional stress tests and so on.

Yes? The pre-production? Pre-production contains only the master branch. Before deploying new tags, the master branch is always maintained with a sanitized version of the database, and any product owner can test things in this specific environment.

Other questions? Yeah? Sorry, I can't hear you — maybe you can go to the microphone.

Question from the audience: So, let's say you have four different repositories, and you commit to each of them, and your demo is going to rely on changes to all of those. How do you test that? Do you combine them all together and then make the demo?

Combine all the... no, no. We don't deploy all the repositories at once. We have the platform — the distribution — and we can deploy only the platform, or we can deploy each single subsite on its own, and test each subsite on its own. Because when we deploy on the ephemeral environment, we include the latest version of the platform and deploy both of them: the platform and the subsite.

Any other questions? No? Okay — thanks a lot for your time, and have a great DrupalCon.