Is this mic working? You can hear me fine, okay? Thank you. All right, good afternoon, I hope you're all well. I'm Hussain Abbas. I currently work as the Director of Drupal practice at Axelerant. Basically, my job is to define best practices and oversee everything we do with Drupal at Axelerant. I'm also a contributor to the wider Drupal ecosystem, and have been for some time now — about five years of contributing to core, on and off. Contributing to Drupal core, contrib, and the wider PHP ecosystem has helped me a lot, so I'd encourage you to try it out if you haven't. Today we're going to talk about designing a CI/CD system for decoupled Drupal. I'll keep it as generic as possible, but I'll take specific examples from a system we built some time back at Axelerant — I'll describe more about that. It was a decoupled setup, Drupal 8 on the back end and Angular on the front end, and it was multi-site, multi-cloud, and multi-environment; we'll get to all of that. So, let's start from the beginning — the beginning of the DevOps era, and the typical practices we saw at that point in time. Before there was cloud, software was shipped on floppies and CDs and the like. Then came the cloud, websites, SaaS products, and software could be shipped in real time. But because there was this culture of shipping things on physical media on a fixed schedule, the habit lingered. Even now, many organizations treat deployments as a huge thing — you stop everything else when you're about to deploy something. It becomes a huge, dramatic event, and it doesn't have to be that way. This isn't a new idea; people are very familiar with systems where deployments happen ten times a day, or even more. So let's talk about that, and let's start with something that is critical to every deployment: human beings. This may be shocking, but it isn't surprising: we are not consistent, and that makes us unreliable. Why? Because people are complex entities. We're not simple machines where you put in one input and get one output. We get bored easily, we forget to follow instructions, and even if we have instructions and remember to follow them, we can easily misinterpret what they're telling us. But people are also creative. They can understand intent — they don't just do exactly what they're told; they do what is necessary, not merely what is written down. They can solve complex problems creatively. People are awesome. So people are complex, and we need them to solve things creatively. How do we make sure people do their best work while recognizing that they get bored easily?
How do we prevent the problems that happen when people act out of boredom — I'm sure you've heard stories of someone typing rm -rf / and wiping out an entire server, the entire operating system; it has happened — while still allowing people to solve complex problems? That's where we get to automation. How do we bridge these two aspects: automate the parts that are boring and repetitive, and let people do the creative work? That's the promise of the DevOps era. And when I say DevOps, I mean the whole movement — in fact, we heard a lot about this this morning: is DevOps a toolchain or is it a culture? We heard this morning how DevOps is a culture, and I happen to agree; by the way, the practices we heard about in the keynote are ones we actually live at Axelerant. I like to think of DevOps as these three C's — I may have read this somewhere, but it's something I identify with. The first is culture. DevOps is primarily about people coming together and agreeing to follow a shared way of working — the understanding that everybody knows all the steps of the software life cycle: development, deployment, and operations, the "Dev" and "Ops" of DevOps. The second is collaboration. People need to talk to each other, understand each other, and be on the same page, so to speak, when we're discussing how a software artifact gets rolled out to the servers or wherever it needs to go. And the third is continuous improvement — and we'll keep coming back to "continuous"; that word appears almost everywhere when we talk about DevOps. So let's talk about continuous integration, which is of course the topic of this session: building a continuous integration pipeline. These are the three main things I identify with continuous integration — you can split them up further, but basically you prepare the environment, you build your software, and you test that it works as expected. Those are the most basic nuts and bolts of continuous integration. Then we get to continuous deployment, which adds one more step: deploy. Now, to make deployment work — remember, our objective is to make deployments boring, something you don't have to think about — you need to make sure your tests have reasonable coverage. If they don't, or if you're testing the wrong thing, you don't know whether the thing you're building and automatically deploying is actually going to work. So if you want deployment as part of your CI pipeline, make sure your tests have reasonable coverage. And I don't mean test everything — we'll talk about what the tests look like in a later slide — but test the right things: the things that are important for your business. Are they working?
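To make those basic stages concrete, here is a minimal sketch of what such a pipeline could look like as a declarative Jenkinsfile. It is purely illustrative — the commands and the deploy script are placeholders, not the pipeline from our project:

    // A minimal, illustrative Jenkinsfile: prepare/build, test, then deploy.
    // The commands are placeholders, not actual project scripts.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'composer install --no-interaction'   // prepare the code base
                }
            }
            stage('Test') {
                steps {
                    sh 'vendor/bin/phpunit'                   // a failing test stops the pipeline here
                }
            }
            stage('Deploy') {
                steps {
                    sh './scripts/deploy.sh'                  // hypothetical deploy script
                }
            }
        }
    }

The point is simply that deployment becomes one more stage that runs after the tests pass, rather than a separate, dramatic event.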
Realistically, we have a more elaborate pipeline than those three stages. Nowadays we don't really have an explicit "prepare" stage — that comes more from the older Jenkins world, where you very explicitly had to check out the repository; most CI tools now start you off with the repository already checked out. Then you do your thing: static code checks — by the way, is there anyone here who isn't familiar with static code checks, just so I know? okay, great — then you build the thing, test, deploy, and run acceptance tests. Before we get into tools, I'd like to take this opportunity to see if anyone has questions. We've covered the culture, the theory so to speak; next we'll talk more about tools and the specific example I mentioned earlier, the decoupled website. But if there are any questions, I'll take them now — and of course at the end as well. All right. I think any DevOps presentation worth its salt has a mention of Docker. It doesn't have to be Docker — any containerization technology will do — but Docker is kind of ubiquitous; when people say "containers", they usually mean Docker. Docker images are a good place to start: this slide says there are 2.35 million images on Docker Hub, though I think the number today is around 3.2 million — it's a slightly old slide. And that's just Docker Hub; there are many other Docker registries out there, with many more images elsewhere. And of course you can create your own images — that's pretty basic. One of the Docker images I use specifically with Drupal is this one. That's me — I wrote it, so of course I use it. This image contains some of the code sniffs from the Coder module: you might have used PHP_CodeSniffer with the Drupal or DrupalPractice standards, and it includes those. It also includes the DrupalSecure sniffs, but those don't really work with the latest versions of PHP_CodeSniffer, so they're there but not enabled by default. And there is pareview.sh. Does anyone want a walkthrough of pareview.sh — has anyone not come across it before? Okay. pareview.sh is basically a shell script that runs PHP_CodeSniffer with those standards. The idea is that whenever you write a module, a custom module or a contrib module, you can run the pareview script on it and it gives you a list of all the issues it found, mostly to do with the Drupal coding standard. But sniffs like DrupalPractice and DrupalSecure also highlight things that could be potential bugs or potential security vulnerabilities. It's a kind of static code analysis — not at the level of PHPStan, but on a more basic level. I'm happy to talk about this later; you can catch up with me afterwards, or just search for pareview or PHP_CodeSniffer.
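If you want to see what those checks look like from a pipeline, here is a rough sketch using a QA image of this kind. The image tag and module path are examples, not a recipe — check the image's page on Docker Hub for current tags:

    // Illustrative only: run PHP_CodeSniffer with the Drupal coding standards
    // inside a QA container (tag and module path are examples).
    node {
        checkout scm
        docker.image('hussainweb/drupalqa:php7.3').inside {
            // Check custom modules against the Drupal and DrupalPractice standards,
            // including Drupal's extra file extensions.
            sh 'phpcs --standard=Drupal,DrupalPractice ' +
               '--extensions=php,module,inc,install,theme,profile ' +
               'web/modules/custom'
        }
    }

Which is essentially what pareview.sh automates for you.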
Then there is drupal-check. drupal-check is a wrapper around PHPStan. It was written by Matt Glaman — I was blanking on the name, but I remember the handle, mglaman; I think that's how we recognize people in the online community, by their Twitter handles and drupal.org usernames. So yes, it's a wrapper around PHPStan, and PHPStan is a much more comprehensive static analysis tool. I recommend you check it out even without drupal-check — you can use it on your regular PHP projects as well. My image is actually based on another image, jakzal/phpqa (I'm not really sure how to pronounce that), so it bundles all the tools that one has and combines them. The image supports PHP 7.1 to 7.4 — well, 7.1 isn't actually supported anymore; it's just there for archival reasons. So that's the image: it contains a lot of the tools you might want to run during your testing or static-analysis phase, and like I said, I maintain it, so if you have any questions, please feel free to reach out to me later. Then there are the CI tools themselves. These are four that I have used personally — am I forgetting something from this list? Maybe, but these are the four I've used most. Today we're going to talk mainly about Jenkins, even though I've probably used each of the other three more than Jenkins; but this particular project's CI system was implemented with Jenkins, so that's what we'll talk about. The one I've used most is actually GitLab CI. Does anyone here have a preference? Either way, the techniques I'll mention are equivalent — they work with any CI tool, there's nothing special about Jenkins here. So let's talk about what a CI flow looks like — what a CI pipeline run looks like. The first thing you should aim for is developer ease, for the simple reason that if it is not easy to run, people won't run it and will stop looking at it. I've seen that happen in my teams: the CI runs in some corner, nobody looks at it, nobody bothers with it, things move on, and this very valuable tool just sits there unused. It should be fast. If developers are expected to wait ten or twenty minutes for each build, they get bored and stop paying attention; by the time they see there was an error, they've already moved on to other things, there's a context switch, and it's hard to come back to the task. So it has to be fast. It should test the right thing — I mentioned this earlier. Each CI run does not have to test absolutely everything, partly because the speed of the run is also one of our concerns. You might decide that for every push on a feature branch you only run very basic checks — static code checks and maybe just unit tests — but once you push to a staging branch, or just before a merge, you also run all the functional tests, which typically take more time. And you might even decide that some suites, like what is commonly called smoke testing, take so long that they should run on a nightly schedule instead.
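As a rough illustration of what "test the right thing per branch" can look like in a declarative Jenkinsfile — the branch names and test suite names here are hypothetical:

    // Fragment of a declarative Jenkinsfile (illustrative). Cheap checks run on
    // every branch; the slow functional suite only runs where it matters.
    stage('Static checks') {
        steps {
            sh 'phpcs --standard=Drupal,DrupalPractice web/modules/custom'
        }
    }
    stage('Unit tests') {
        steps {
            sh 'vendor/bin/phpunit --testsuite unit'       // assumes a "unit" suite in phpunit.xml
        }
    }
    stage('Functional tests') {
        when {
            anyOf {
                branch 'develop'
                branch 'master'
            }
        }
        steps {
            sh 'vendor/bin/phpunit --testsuite functional' // slower; skipped on feature branches
        }
    }

A separately scheduled job can then pick up the even slower suites, so the day-to-day pipeline stays fast.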
Running those heavy suites nightly doesn't stop anyone from merging, but you still get daily feedback about what's happening. Again, there's no standard practice here; it's whatever works for your project — the size of the project, the needs of the project, the size of the team, and so on. And notifications. This might seem like an odd item on the list, but it's actually very relevant: with everything else demanding attention, it's very easy to overlook that there's a CI job running somewhere — tests, deployments, all of it happening off in a corner. We use Slack internally, so the obvious thing was to put in a Slack plugin: whatever happens in the build gets sent to the Slack channel where the team is actually working, and they see it immediately. Now let me talk a bit about our experience with Jenkins. The developer experience is not that great, and I'm sure Jenkins is changing that with the revamp they're working on — yes, Blue Ocean, that's right. We didn't use that; we used the mainstream version of Jenkins, and the developer experience is not that great for someone who wants to configure the pipeline. As I said, I have experience with the other CI tools, and I like the entire CI configuration to live within the repository itself — like a .gitlab-ci.yml or a CircleCI config file — so we did something similar. Jenkins does support pipeline-as-code through plugins, and the cool thing is that the pipeline can be written in Groovy, which actually makes it more powerful than the other CI tools I've used: their pipelines are purely declarative, whereas a Jenkinsfile can be declarative as well, but you can also write it as a script. Third-party support: Jenkins has been around for a long time, so third-party support is pretty good. Integration with our workflow: because there were plugins for everything we needed, the integration also worked out. It is customizable, and it is secure. Security comes in at the level of locking down your builds, of course, but also at the level of how you pass things like secrets to your application and your build process. In our case we needed to pass in AWS keys and Aliyun keys — Aliyun is Alibaba Cloud, roughly their AWS equivalent — along with many other secrets, so secret management mattered. Jenkins has its own secret management layer; it's extensible, and the one that ships with Jenkins is pretty good too.
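To give a feel for how that secret handling can look in a Jenkinsfile, here is a small sketch using the credentials-binding step — the credential IDs and the bucket name are made up for illustration:

    // Illustrative: read cloud keys from the Jenkins credentials store instead
    // of hard-coding them. The credential IDs and bucket name are placeholders.
    withCredentials([
        string(credentialsId: 'aws-access-key-id', variable: 'AWS_ACCESS_KEY_ID'),
        string(credentialsId: 'aws-secret-access-key', variable: 'AWS_SECRET_ACCESS_KEY')
    ]) {
        // The variables only exist inside this block and are masked in the build log.
        sh 'aws s3 ls s3://example-bucket'
    }

The same pattern applies to the Aliyun keys, or anything else you'd rather not commit to the repository.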
Now, that's Jenkins. Our case was decoupled Drupal, so when we went about planning our pipeline, we had to consider a few things. As I said, this was a multi-site architecture: we had five Angular websites — later a sixth — and only one back end. So there were multiple websites on a single back end, a typical multi-site setup. And because we had all of these repositories, we wanted to reuse code, so we had some more repositories for storing infrastructure-related code. (After this session, at 4 pm, I'm going to talk about Terraform as well — same project, but how we handled the infrastructure side of things.)

So we had repositories which defined the entire infrastructure as a Terraform module, and we needed the ability to share all of that configuration. With the other CI tools we commonly use, it's not very simple to mix and match different repositories, but since Jenkins is programmatic in nature, it was straightforward to pull in common pieces of code — libraries, so to speak. I'll come to examples of that. This slide — it may be a little blurred, but I think it's visible — shows our CI workflow. Every time a developer pushes some code (we used GitLab), GitLab fires off a webhook to Jenkins, which starts the CI run. We configured it to run on all branches, but deployment only happened on certain branches. A few notes. The back end lived on Acquia servers — are you familiar with Acquia Site Factory, by any chance? It's an Acquia product that lets you spin up additional multi-sites through the UI. So for the back end we had to deploy to Acquia Site Factory. For the front end, we deployed the application — not through Terraform, precisely: we deployed it using AWS's and Aliyun's own tools, but we created the infrastructure using Terraform. In the initial stages that happened on every CI push, and the reason was that, like I said, there were five different websites at the beginning, some of which could go to multiple clouds — both AWS and Aliyun — and each website had four different environments: production, staging, development, and testing. So even using Terraform, we had to maintain, at that time, around 24 different instances of our entire front-end infrastructure. That changed: initially we created a lot, then we identified what worked and what didn't, and we adjusted. At first we simply put Terraform in the pipeline, so that whenever we deployed to a particular environment, Terraform would run and update that environment — we'll talk more about Terraform in the 4 pm session. Eventually the implementation matured and we no longer had to run Terraform there, so we removed that part and kept only the deployment: the actual pushing of files from the build artifact to AWS or Aliyun. The deployment was basically a sync to AWS S3 or Aliyun OSS, which is their S3 equivalent. And all of these steps ran in Docker. The Docker server actually lived on another machine, which isn't a big deal — Docker is a client–server architecture anyway — so all the tasks themselves executed in Docker containers. We also had additional code quality checks through a SonarQube server, and any notifications that needed to go out were sent to Slack. That was essentially our whole workflow.

This is what our pipeline looked like for the Drupal back end, and it's very simple: checkout, which is similar to prepare; very basic linting; code quality checks, which are just those PHP_CodeSniffer checks I mentioned earlier; build, which is pretty much just composer install — that's all you need for Drupal; and then static analysis, which was PHPStan. Static analysis happens after build because PHPStan needs all the dependencies installed — it can't just work on the code you want to check; it builds an entire tree of all your code, including dependencies, so it has to run after the build step.
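Put together, the back-end stages looked roughly like the sketch below — scripted pipeline syntax, with the image tag, paths, and exact commands simplified for illustration, not our actual Jenkinsfile:

    // Rough sketch of the back-end pipeline stages (scripted syntax).
    // Image tag, paths and options are illustrative.
    node {
        stage('Checkout') {
            checkout scm
        }
        docker.image('hussainweb/drupalqa:php7.1').inside {
            stage('Lint') {
                // Very basic: just PHP syntax checks on the custom code.
                sh 'find web/modules/custom -name "*.php" -print0 | xargs -0 -n1 php -l'
            }
            stage('Code quality') {
                sh 'phpcs --standard=Drupal,DrupalPractice web/modules/custom'
            }
            stage('Build') {
                sh 'composer install --no-interaction'
            }
            stage('Static analysis') {
                // drupal-check wraps PHPStan with Drupal-aware defaults;
                // it runs after Build because it needs vendor/ to be present.
                sh 'drupal-check web/modules/custom'
            }
        }
    }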
Then, finally, deploy — and that stage is yellow, because that's a different story. We needed a lot of time to figure this whole pipeline out, and the team was already moving; by the time the pipeline was implemented, the team had already made some less-than-optimal decisions, and of course we didn't want to break builds over those. Nothing serious — mostly warnings, like a cyclomatic complexity of 11 where our threshold was 10. So we let those go, logged them as technical debt issues for a later sprint, and just marked those builds as yellow. Jenkins is pretty solid here: it gives you logs for each of these stages. Linting just checks for syntax errors in the PHP files — very, very basic. Code quality checks go a little deeper; those are the actual PHP_CodeSniffer checks. And then you have build, which is essentially composer install, like I said. That's the back end. For the front end, the pipeline is a little more elaborate, and one reason is that, as I said, we actually deployed to multiple clouds — and multiple environments as well. So you can see that we first build the dev environment for AWS and deploy it, and then build the dev environment for Aliyun. We need to do the build twice because the application output is completely static: it gets its data from the Drupal back end, and that back-end URL needed to be hard-coded inside the generated files, because the files are served straight from S3 — there's no opportunity to modify them at runtime; these are HTML files that are cached directly, even at the CDN level. So we had to generate the files with the appropriate API endpoint hard-coded: first a build for AWS, and then another build for Aliyun. This is what an Angular build looks like — ng build and its output — and whatever output we got, we would sync to the S3 bucket or its equivalent. You can see the command, aws s3 sync: very simple, we just copy the files across. And that's what the front-end CI pipeline looked like.
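As a sketch of that idea — one build per target, each with its API endpoint baked in, then a plain sync to object storage — it might look something like this; the build configuration names, bucket names, and the OSS tooling are placeholders, not our exact setup:

    // Illustrative declarative stages: build the same Angular app once per cloud,
    // because the API endpoint is baked into the static output at build time.
    stage('Front end: AWS (dev)') {
        steps {
            sh 'ng build --configuration=dev-aws'                      // hypothetical build configuration
            sh 'aws s3 sync dist/ s3://example-frontend-dev --delete'  // push the static files to S3
        }
    }
    stage('Front end: Aliyun (dev)') {
        steps {
            sh 'ng build --configuration=dev-aliyun'                   // same app, Aliyun-facing API URL
            sh 'ossutil cp -r -f dist/ oss://example-frontend-dev'     // OSS stand-in for the S3 sync
        }
    }

Repeat that per environment and per site, and you can see why we eventually wanted Terraform out of the hot path and only this sync step left in the pipeline.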
Are there any questions at this point? I'm going to walk through certain sections of the Jenkinsfile code next, but if you have any questions before then — yes? Okay. So, I should say that in this particular project, considering the timelines and everything, we didn't really aim for a complete end-to-end test pipeline. We attempted it, but the timelines didn't allow it. The way we were going to approach it was to first agree on a contract between the front end and the back end, and then stub those contracts as specific JSON files, which we would use as fixtures on both layers. On the back-end side we would test the API and check whether its output matches what we expect from the fixture; the front-end side would use the fixture as-is and render the component. We wanted to achieve reasonable coverage using this method; unfortunately, we didn't manage to. Does that answer the question? Okay, thank you. On the front end, do you test with multiple browsers? I think that was not automated — I could be completely wrong about that, it's been a while, but as far as I can remember, the browser tests were not automated. Any other questions? Okay.

So, this is Groovy code, and I'll just walk over some of the relevant sections. The first thing we do is check out. Like I said, in Jenkins you specifically need to check out the repository you want to use, and this line — checkout scm — checks out the target repository. But as I mentioned, we were sharing some common infrastructure code, and the second block you see — those five or six lines — lets us check out another repository. We had additional Groovy files: we split up our Groovy files and put all the common code in that repository, so we don't have to update each and every repository. Then, as you can see in the last lines, we load slack.groovy — we load that in every script because Slack notifications are always there — and do a little more configuration, like setting up the Slack notifier. We also said that in many cases we didn't want to automatically deploy some of the websites, so we simply mark them with a shouldDeploy flag set to false. This is how we use Docker. It works off a plugin, and in your Groovy file you write syntax like this and specify the image name. Here we're actually building an image from a Dockerfile and tagging it with the build ID — drupal-build plus the build ID — so that we don't pick up a stale cached image. And when we directly want to use an existing image, we can just use the docker.image syntax. As I said earlier, we're using the drupalqa image with PHP 7.1, because that's what it was at the time — Site Factory now supports PHP 7.2, I believe, so we could upgrade. Similar syntax applies to the other commands, like PHP_CodeSniffer. I mentioned that we didn't escalate certain things as errors: PHPMD was one we chose to keep tracking without stopping the builds for it, because PHPMD reported things like — as in the earlier example — a cyclomatic complexity of 11 or 12, and we could live with that for a while. So we just marked the build as unstable rather than stopping it. Normally this would stop the build: if the PHPMD command returns a non-zero exit code, the pipeline would stop. But we have this try/catch structure, very similar to what we have in PHP, so the run continues and the build is just marked yellow. That's how we handled failures of that kind.
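For reference, the pattern just described — running a check inside a container and downgrading a failure to a yellow build instead of a hard stop — looks roughly like this in scripted pipeline; the image tag and ruleset file are illustrative:

    // Illustrative scripted-pipeline fragment: run PHPMD in a container and
    // mark the build unstable (yellow) instead of failing it outright.
    node {
        checkout scm
        docker.image('hussainweb/drupalqa:php7.1').inside {
            try {
                // phpmd exits non-zero when it finds violations, which would
                // normally fail the pipeline at this point...
                sh 'phpmd web/modules/custom text phpmd.xml'   // phpmd.xml is a hypothetical ruleset
            } catch (err) {
                // ...so we catch that and only downgrade the build status.
                currentBuild.result = 'UNSTABLE'
            }
        }
    }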
And conditional deployment: we didn't always deploy. We only deployed if the site was configured to deploy, and only on certain branches, not all of them. Again, this was driven by our infrastructure decision: Site Factory — Acquia Cloud in general, actually — only supports a limited number of environments, three by default, I think. We got it extended to four for our needs. But only those four, so we had a branch mapping: this branch goes to this particular environment, and only those branches get deployed. We passed that along to BLT, and BLT does the deployment for the Drupal back end. So I'd like to leave you with this: DevOps is culture first and tools next. I don't need to spend a lot of time on that, because Kevin covered it this morning. If you have any questions, I'm happy to answer them, and you can always reach out to me on Twitter — that's where I'm most active; just search for hussainweb. If you have any questions later, please feel free to reach out to me there. Thank you, everyone.