Thank you for joining us today; I'm really happy to see all of you here. Today we are going to talk about testing, and how we can continuously test our Drupal distributions. So that's our topic, welcome. My name is Alex Shadrov, and I came here from Ukraine. I work at FFW as a team lead and architect, and we are building the OpenY distribution, so you may have heard something about it. Here is my contact information in case you would like to catch up on things; on my blog I write articles about technologies, events, and things like that. This talk is based on the experience we've gained from the OpenY distribution. That's a Drupal 8 distribution with all the fancy things: Composer, decoupled components, integrations. So this talk is completely based on that experience, and in case you haven't taken a look at OpenY yet, there is a repository on GitHub. So let's get started. Today I'm going to talk about our continuous integration server, our continuous testing for this distribution, and its evolution. As we all know, every project starts from the first commit, when we are so excited: we create a new folder, we run git init, git commit, and just push, and here we go, we have an empty repository. We promise that we will cover everything with tests, 100% test coverage. We are so excited, but then we realize that, beyond just writing code, we need a few more tools to keep the distribution stable and to test specific features. Writing code is fine, it's day-to-day work, but testing is something exciting. So that's what happened when we created the OpenY distribution and pushed our first commit.
We realized that we needed a little bit more, and the first problem was how to react to proposed changes. We have our team at FFW, and there are also teams at other agencies working on the distribution, so we had to react somehow to pull requests created on GitHub. We came up with the idea of using an open-source product called CIBox. CIBox is just Jenkins plus some customization on top of it, so when I say "CI server", just think of it as a Jenkins instance for the distribution. If you look at the software we are using, the first piece is Jenkins itself; it's just a kind of worker, it watches for something and does something. We are also using the Pull Request Builder plugin, a Jenkins plugin. I know there are pipelines, and you may think, "Come on, buddy, there are Jenkins 2 pipelines," but we are all cool guys: we are using Jenkins with Pull Request Builder because we started our distribution more than a year ago. We also use a dedicated GitHub user to give Jenkins access to GitHub, so Jenkins is able to look at pull requests, post comments, and do various things on GitHub. How is CI eventually triggered? First step: a pull request is created. Let's say you found a bug, a button shouldn't be red, it should be blue. You commit your code and create a pull request on GitHub. Then the CI server pulls that code from your pull request to some server; the destination doesn't matter, it just pulls the code. Then comes the most important phase: it performs some actions on the code, analyzes it, and generates a report, and eventually the developer gets this report on GitHub. So the developer creates a pull request, Jenkins chimes in and does everything, and the developer gets the report back on GitHub. And if we take a look at our CI server, the OpenY CI server, it's pretty small.
It's just a DigitalOcean droplet for $20 per month; it's small, it's nothing. We have Jenkins, two CPUs, two gigabytes of memory, a 40-gigabyte SSD, and Ubuntu as the operating system. For all this automation we are using Ansible. If you are not familiar with Ansible, I really encourage you to try it, because it will change your life forever; there is no way back from Ansible to Bash. So now we have a tool to do something, you remember that step where we can perform some actions, and we need something to actually run on the code that has been pulled from GitHub. Our next step was to add code sniffers. The problem we were trying to solve was keeping the codebase up to standards. Since we're building an open-source product, we should really be on the same page with everyone: with contributors, with the community. So we have to have code sniffers. In our case we had PHP CodeSniffer, the Drupal code sniffer from the Coder module, JSHint to check JavaScript code, and SCSS-Lint to look at how our theme is built and to try to write it the right way. What do we verify with the sniffers? First of all, we check our installation profile, because as we all know, a distro is just an installation profile. We check custom modules and custom themes. We do not care about contrib, core, or anything that is a third-party library or jQuery plugin; we check only what we actually create. And here is the response the developer gets whenever he creates a pull request. That's the response from our bot, from our CI server, and we see that there is a report and links to artifacts. In this specific case we have two errors from PHP CodeSniffer, a few errors from JSHint, and a lot more JSHint errors in the theme's JS files. And this is the artifact file.
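The "sniff only what we create" rule can be sketched like this. The paths and tool invocations below are assumptions about the repository layout, not the real OpenY scripts:

```shell
# Decide whether a path belongs to code we wrote (sniff it) or to
# contrib/core/third-party code (skip it). Paths are illustrative.
should_sniff() {
  case "$1" in
    */modules/custom/*|*/themes/custom/*|*/profiles/*/*.profile) return 0 ;;
    *) return 1 ;;
  esac
}

# The sniffers then run only over the custom paths, e.g.:
#   phpcs --standard=Drupal --extensions=php,module,inc,install,theme docroot/modules/custom
#   jshint docroot/themes/custom/openy_theme/js
#   scss-lint docroot/themes/custom/openy_theme/scss
```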
Whenever a developer opens those links, he just sees the output from the sniffer tool. In this case, when we see that there are plenty of errors, we just say: hey buddy, go back to work, fix all of them, and then ask for a review. In general we have a rule: whenever a developer creates a pull request, he should make sure there are no errors and only then ask for a review. That's the way it works. You may think: all right, there's Jenkins, there are some plugins, a DigitalOcean droplet for 20 bucks; is there something I can just enable in a few clicks? Yes, there is, but those services are not as customizable as a custom Jenkins. The first one is called Code Climate. It's a service that provides code sniffers for you on a subscription basis, and for open source they provide free subscriptions. This tool will look at your repository and generate fancy badges that you can put in your README.md file. But it doesn't support Drupal yet; it works with Python, PHP, and JavaScript, but there is no Drupal. I know that somewhere in their support queue there is a ticket to add the Drupal Coder standards and so on, but there is no support yet. Another service is Codacy: the same idea as Code Climate, badges, pull requests, and again subscriptions. So in case you're looking for something less customizable and easier to configure, take a look at those two. Eventually we got sniffers in place to make our open-source distribution better, because everyone will be looking at it, and if we have something not up to standards, everyone will blame us for it and say: here is bad code, what are you doing? But that was only the second step, and the third step is builds. We had applied the sniffers.
We were happy for a few days and then realized that we need a place to test our changes. The main idea was that we should test the distribution before merging to the master branch. Take the example of Drupal.org: whenever someone creates an issue and submits a patch, we have to apply it locally, test it, and report feedback back. Of course there is unit testing, but unit testing cannot check the color of a button, or the general appearance, or things like that. So we have to download the patch, and there is no environment; we have to spin up a Docker image or something like that, and you have to spend a lot of time to test it. With builds it's much easier, because you get a build per pull request. So we started from the installation profile, before releases, before everything else. And you may ask: isn't that just a simple `drush site-install` command? Yes, but no, because it's not just a simple drush install.
We have to perform a lot of actions, like preparing folders, adding some lines to settings.php, and enabling some modules. There are plenty of things that have to be executed, and here is the process of how we actually build the installation profile on pull requests. First step: we pull the code, create the necessary folders for files, and set appropriate permissions. Then we run the drush site-install command to install the distribution. Then we add some post-settings to settings.php: we have to specify cache settings, Memcache settings, Varnish settings, and so on. Then we go to modules and drush; let's say we have to change the admin password, enable the Devel module, disable something, or enable dblog, and we do that at this phase. Only then do we generate a report and send it to GitHub. At some point we had a requirement to also add some steps to the installation process: the default flow where you go through the installer, set up the database, and so on. We had to create an environment for that, because our QA, and we ourselves, didn't want to spin up environments locally. So we had a requirement to add an additional step to this form: after all the default Drupal steps, we had to add one more step. And we are lazy; we didn't want to pull the code locally, and just imagine explaining that to QA and to anyone else who would test it: clone the code, set up the environment. We really wanted to have this in the pull request. The process is almost the same: first step, code, folders, permissions; then a magic trick to create an empty database; then add the credentials manually to settings.php; and then send the report. Whenever you have an empty database and the credentials are in settings.php, the set-up-database step is simply skipped.
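The per-pull-request build steps above can be sketched as a shell script. The build path, profile name, and settings lines here are assumptions; the drush calls are shown as comments because they need a real site to run against:

```shell
# Sketch of the per-PR build. BUILD defaults to a throwaway path.
set -e
BUILD=${BUILD:-/tmp/openy-build}

# 1. Folders and permissions for the files directory.
mkdir -p "$BUILD/sites/default/files"
chmod -R 775 "$BUILD/sites/default/files"

# 2. Install the distribution from the installation profile:
#    drush --root="$BUILD" site-install openy --yes

# 3. Post-settings appended to settings.php (cache, Memcache, Varnish, ...).
append_post_settings() {
  cat >> "$1" <<'PHP'
$settings['memcache']['servers'] = ['127.0.0.1:11211' => 'default'];
PHP
}
touch "$BUILD/sites/default/settings.php"
append_post_settings "$BUILD/sites/default/settings.php"

# 4. Module tweaks and admin password, then report back to GitHub:
#    drush --root="$BUILD" en devel dblog -y
#    drush --root="$BUILD" upwd admin --password=...
```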
So you choose the language, the requirements check runs, the set-up-database step is skipped, and you just install the website. That's pretty simple, but it gives you the ability to test the installation process separately. So we were happy, we were building a distribution, and at some point we did the first release. We drank some beer after that, we were hanging out, we were happy; and then we realized that we need one more environment, one more thing for the distribution. We were talking about this, and the question was: should we really test all the hook updates that we are writing? And yes, of course we should, because this is the most complicated thing in Drupal and also the most important thing in Drupal: support for already existing websites that are using your distro. For that we set up an additional environment, and the process is pretty similar: pre-settings, then a magic step importing an SQL dump into the database. We have a rule: once we do a release, we create an SQL dump based on that release, and on top of it, like layers, we apply the hook updates from pull requests.
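The upgrade-path build described above can be sketched as an Ansible fragment. The task names, variables, and paths are illustrative, not the real playbook:

```yaml
# Sketch of the upgrade-path tasks.
- name: Upgrade-path build
  hosts: ci
  tasks:
    - name: Import the SQL dump taken at the previous release
      shell: "drush sql-cli < {{ dumps_dir }}/openy-previous-release.sql"
      args:
        chdir: "{{ build_root }}"

    - name: Apply the hook updates from the pull request
      command: drush updb -y
      args:
        chdir: "{{ build_root }}"
```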
So we have a kind of previous version; it emulates a real website that a client might have. We import the SQL dump, run drush updb, add some post-settings, enable modules, and send a report to the developer. Here is how the report looks: we send three links to the developer, the first one for the fresh OpenY installation, the second one for the upgrade path, and the third for the installation process. You open the first one and you have the fresh installation of the distro; you open the second one and you have the installation of the distro plus all the hook updates created since the previous release; and you have the installation steps. As I mentioned, we are using Ansible, and this is just an example of how we write our Ansible scripts. There is a link in case you'd like to check out the whole folder with scripts; please go ahead and do that. This is our main script for builds: we have a general playbook and we just include other playbooks, splitting the code out into different files. For a new installation we use steps where we specify credentials, prepare the environment, install the distro using the profile, then modules, then drush. That's it. For the upgrade path we just skip the profile and enable the SQL-workflow Ansible playbook. And for the installation-steps build, the simplest one, we just create settings.php, create an empty database, add the credentials to settings.php, and we are ready to go. With this diversity of builds our QA is happy and our developers are happy, because they don't care about local environments anymore when they are reviewing something. They just open the pull request, make sure there are no sniffer problems, make sure there are no problems in the builds, and merge the pull request; then it lands in the master branch that will eventually be released. But again, that wasn't enough.
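The "general playbook includes other playbooks" layout might look like this. The file names and comments are illustrative; the real split lives in the OpenY repository:

```yaml
# main.yml sketch: the master playbook just includes the others.
- import_playbook: credentials.yml   # database credentials into settings.php
- import_playbook: environment.yml   # folders for files, permissions
- import_playbook: profile.yml       # drush site-install (skipped for the upgrade path)
- import_playbook: sql-workflow.yml  # SQL dump import (upgrade path only)
- import_playbook: modules.yml       # enable/disable modules
- import_playbook: drush.yml         # admin password, cache rebuild, report
```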
At some point we had new functionality that needed to be super stable, so we added Behat tests in order to cover the primary functions and features of the distro. So we added user behavior testing with Behat. Most of us are familiar with it, and most of us have tried to add Behat tests on real projects; however, it was on the distro that it showed really good results for us. Our approach was to run the Behat tests once the vanilla build is ready, the build from the plain installation profile: Behat goes through the necessary pages and makes sure that everything is fine, or that everything completely failed. We use Composer to add Behat to the project or the distro: you add the Drupal driver, the Drupal extension, Behat, and Mink, plus two more packages that are optional. The first of those generates really nice reports, which I will show a little later, and the second contains a few cool things and features that are not available in the Drupal extension by default; if you're using Behat, take a look at it. Then we have to create behat.yml, and in behat.yml we specify the contexts, that is, the libraries that we are going to use in our tests. It's pretty simple, and again, there is always a link to the distro, so if you would like to take a look at the code, open it up. And this is the Ansible playbook, the beginning of the playbook we're using for this automation. It's just a list of variables, but the important part is that we are using Docker for Selenium: in order to run JavaScript tests, tests that actually go and do something on a page, we use a Selenium Docker container.
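A behat.yml along these lines might look like the sketch below. The base URL, ports, and paths are assumptions; check the OpenY repository for the real file:

```yaml
# behat.yml sketch: contexts plus a headless session and a JavaScript
# session pointed at the Selenium Docker container.
default:
  suites:
    default:
      contexts:
        - Drupal\DrupalExtension\Context\DrupalContext
        - Drupal\DrupalExtension\Context\MinkContext
        # custom contexts would be listed here as well
  extensions:
    Behat\MinkExtension:
      base_url: 'http://127.0.0.1:8080'
      sessions:
        default:
          goutte: ~                                   # headless tests
        javascript:
          selenium2:
            wd_host: 'http://127.0.0.1:4444/wd/hub'   # the Selenium container
    Drupal\DrupalExtension:
      api_driver: drupal
      drupal:
        drupal_root: '/var/www/docroot'
```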
That's the beginning of the actual tasks the playbook goes through. We wait for the Selenium container to be stopped. You may wonder what the heck that means: we run only one single container, we do not run multiple containers for different builds. Let's say we have two builds: we run the container for the first one, it runs the Behat tests, we stop it, and the second one runs after that. We don't want to run multiple containers because they eat memory and disk space, and we are just too lazy to implement that right now, because it's not that important. In the tasks we run the Selenium container for the current build, then wait for it to actually be up. Then we run the full Behat suite, in case we want headless tests plus JavaScript tests, or we can run just the headless tests separately. Those two commands also generate the report that will be available at a link, and in the post-tasks we just stop and remove the container. Here is how the report looks for the developer: just a simple link to something like this. This is the report from Behat; if everything is green, that's good. If something is red, that's very bad, and you have to get back to work and fix it; until then your pull request will never be merged into the master branch. Sounds good, right? Behat, accessibility checks, sniffers, builds; but it was very slow. The Behat tests we had developed, for only a few features, take 30 minutes just to finish. That's not the way we wanted to go with Behat, because we would have to wait 30 minutes and only then could we analyze whether the pull request should be merged or not. Here is an example of a simple feature that we have in the distro: the membership calculator page.
We have three steps. On the first step we select the membership type; on the second step we select a location; and the third step is just a confirmation step where we show the map, the location info, and the membership type, and "complete registration" leads you to a third-party service. Just a simple form, three steps. But in order to create a Behat test for this, we have to generate media images, then create location nodes for the list we select from, then create membership content, then create paragraphs, then create one more paragraph that wraps all of that, then create a landing page and embed the specific component that renders this page; and only then can we run the tests. Just imagine how much time Behat would spend going through all those forms and filling them out, or how much code you would have to write to generate this content using just the built-in steps of the Drupal extension. So there were problems: fragile references between all these pieces with no easy way to handle them, duplicated code, slow tests, and complicated maintenance, because we would just copy and paste our setup in every test in order to generate the basic content. So we decided to write a custom Behat context for that. This context should generate the content for us, and we wanted to split content generation from the tests themselves. Generating content is not part of a test; it's something separate that is used by the tests, but it isn't the test itself. Let's say we create a page and we want to make sure that on this page we see a label and an image.
Seeing the label and the image, that's the test; creating the content and creating the page is not the test, it's just preparation for the test. So we created a custom context with just a few methods that accept a lot of different parameters but eventually generate the content for us, and it's easy to maintain. Here is an example of how we use that context in a Behat background. A background means it's not a test; it's the setup for the tests. In the background we generate all this content, and there are really complicated fields, like the location field, which accepts country code, address line, locality, area, postal code, and so on. It's a really complicated field; however, when we look at this visually, we understand that it's content, not the test itself. So we create branches, paragraphs, media items, and a landing page, then we become an anonymous user, and we are ready to run the tests. Our tests look very, very simple: we know we already have the content, so we just go through the steps and verify what we should see on the page. This way it's really easy to maintain, easy to work with, and easy to write new tests: let's say we created a new feature and we know we should now see an image and some copy; we just write these fancy tables and that's it. But there may be some problems with debugging when you are running JavaScript tests. Debugging is important, and when we run JavaScript tests we really want to see what Selenium is doing. For that I would like to suggest using the standalone Chrome Docker container with the debug suffix; it means the image can be used for debugging. So you use this container, and there is a great tool called RealVNC. That's an application that allows you to connect to remote screens, and the container is a kind of remote screen.
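As a sketch of the background-versus-scenario split described above (the step phrasings and field names below are invented for illustration, not the real OpenY context):

```gherkin
Feature: Membership calculator

  Background:
    Given I create "branch" content:
      | title       | field_location:country | field_location:locality |
      | West Branch | US                     | Portland                |
    And I create "membership" content:
      | title | field_branch |
      | Youth | West Branch  |
    And I am an anonymous user

  Scenario: Selecting a membership type shows the location step
    When I go to "/membership"
    And I press "Youth"
    Then I should see "Select location"
```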
You have a tool to connect to it: just specify the IP address and port, and you will see everything Behat is doing. It clicks on things, the slideshow changes its slides; you see everything on the screen. It's a really great tool for debugging. So with Behat tests we covered the behavior part, the most important part of the distro. Take the membership page: for the YMCAs, for OpenY, it's almost the most important page, because this is where new customers join their branches. So we covered the primary functionality. But again, guess what: that wasn't enough. We were going deeper and deeper with tests and with building the distro. The next thing we applied after the Behat tests was the accessibility checks, and we need them in order to check the theme's accessibility, because we are working with non-profit organizations. The YMCA is the largest non-profit organization in the United States; they have many different associations, and they really need accessible websites, because they really care about people with accessibility problems. So the challenge for us was to create an accessible distro, and in order to build a good foundation for an accessible theme, we used HTML_CodeSniffer. That's a tool that checks HTML markup against standards; the Web Content Accessibility Guidelines are built into it. You just install it using Composer, point it at some pages, and it verifies them, and that's what we did. Here is the playbook we are using; yes, we are using a lot of playbooks again, and there is a link. In the variables we specify a list of the most important pages we would like to check for accessibility, and then in a loop we go through these pages and check them.
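The page loop might look like the fragment below. The page list, the variable names, and the `htmlcs` invocation are assumptions about how HTML_CodeSniffer is wired up here:

```yaml
# Accessibility playbook sketch.
- name: Accessibility checks
  hosts: ci
  vars:
    a11y_pages:
      - /
      - /membership
      - /locations
  tasks:
    - name: Check each important page against WCAG 2.0 AA
      command: "htmlcs {{ base_url }}{{ item }} WCAG2AA"
      loop: "{{ a11y_pages }}"
      register: a11y_results
      ignore_errors: yes   # warnings are tolerated; errors fail the status later
```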
So And this is the report that Developer eventually will get so he will get a lot of comments on github. Yes, but they are important They are always important so oops whenever developer opens these artifacts these links He will see something like this. So we will have a list of we will have an output from that's HTML sneaker we will have a lot of problem there We will have some errors and warnings and notices in our case Like in all cases, it's impossible to just clean up that list because it will be huge There will be tons of these errors and warnings like one hundred one thousand two thousand 20,000 warnings and that's okay because we really have to care about errors because errors the most important part of this of this Report because they will really block the people from visiting the websites They will really block blind people from being able to Navigate through the website. They will really block someone from being able to use the content of your website So we have a rule that pull requests shouldn't be merged whenever we have these errors Warnings that's okay So with that tool we achieved the accessibility We actually created accessible theme. 
It's not fully accessible, because it's basically impossible to create an accessible theme that solves everyone's problems; however, it's a good foundation for creating more accessible websites. Based on this, we performed some testing with the blind community, and with people who use the website and have accessibility issues; from a technical perspective, it's a great foundation for building accessible themes, front-end, and designs. But after this we realized that our builds took one hour: to generate the builds, run the Behat tests, the accessibility checks, and the code sniffers. And that was insane, because you just create a pull request and you have to wait one hour to make sure there are no errors. You have two options. The first: just sit and wait, and whenever someone asks what you are doing, you can say you're waiting for a build; that's a good excuse. Otherwise it blocks you during your working day, because you create a pull request, switch to something else, switch back, fix one tiny error or typo in your code, and then wait one more hour. That wasn't sane, so we came up with an idea of how to speed up our builds, how to accelerate and improve them. We wanted to convert them from something like this to something like this: they should be really, really, really fast. The main idea is pretty simple. We split everything I've just outlined into a primary build and secondary, less important builds that are only for geeks, technical guys, and so on. The primary build is the Drupal installation; this is what we are really building and what our distribution should do. The other builds, accessibility and Behat, are secondary. They are important, but they are secondary, and we wanted to run them in parallel. Here's an example.
Here is how it works. We generate the primary build; once it's ready, we send a comment to GitHub and set the status on GitHub; then we trigger all the secondary builds. They run in parallel, and whenever one of them is ready, it just sends a new comment. So it looks like this: you create a pull request, and in 10 or 15 minutes you get a comment with three links, vanilla, upgrade path, and installation steps. You open these builds, you click around, you verify that it's working, and then new comments pop up in your pull request. And by the time you finish testing the real build, you just make sure everything is fine there. Since we are old-school guys, we're not using the pipelines from Jenkins, and at the point when we developed this there was no out-of-the-box way to change the status from Pull Request Builder, so we had to do it manually. That might look scary. No worries; sometimes even I don't understand what it means, and I spend thirty seconds or a few minutes on it, but it's really simple: we generate JSON and use curl to send this JSON to the GitHub API, using the bot name and token. It's simple, and there is a cool thing that will blow your mind: the application called jo. It generates JSON right from your CLI. All those variables are placed into pending.json, and you can just tell curl: here is the JSON. That's the cool thing: instead of assembling one really big JSON string in your Bash script, that application does it for you. So that's how we actually set a status on GitHub. Here is an example of how we run the tests: with Ansible, we go to the folder, execute ansible-playbook test.yml, generate JSON, set the status, generate JSON, post comments. It's really easy, but that code is hard to understand.
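The status-setting step can be sketched in plain shell. The repository name, context string, and URLs below are placeholders, and the curl call is shown as a comment since it needs the bot's real token:

```shell
# Build the commit-status JSON. `jo` does the same thing from the CLI:
#   jo state=pending context=ci/openy target_url="$BUILD_URL" > pending.json
build_status_json() {  # $1 = state (pending|success|failure), $2 = target URL
  printf '{"state":"%s","context":"ci/openy","target_url":"%s"}' "$1" "$2"
}

build_status_json pending "https://ci.example.com/job/openy/42" > pending.json

# Send it to the GitHub statuses API with the bot credentials:
#   curl -s -u "$BOT_USER:$BOT_TOKEN" -d @pending.json \
#     "https://api.github.com/repos/example/openy/statuses/$COMMIT_SHA"
```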
Yeah, I know, and I am so sorry for that, but it works, and it brings the value that we really need. But there is always a "but"; that's life. Whenever something fails in your job, Jenkins will just stop the job; Jenkins will not proceed with anything below the failed step. In order to set the failed status, we did a trick with a plugin called Post Build Task. This plugin allows you to run something even though your job has failed. Let's say we have a lot of different playbooks that we run separately, and the first playbook fails: in this case we say, it's okay, it may fail, we will check it later. At the end we just go through the console output and check whether it contains the phrase "marked build as failure", or the words "was aborted" or "failed"; if so, we send the failed status to GitHub. You've seen this code already, just with the pending status: if we find something in the console output, we set the failed status instead. And here is our build timeline after all those improvements. We split our job, our Jenkins setup, into different builds. Our main build takes approximately 15 minutes, because we have to install Drupal, import the SQL database, run drush updb, and so on. After this we run the other builds. The accessibility check takes up to 30 minutes, depending on the list of pages. The sniffers take 30 seconds; simply 30 seconds. The Behat tests take up to 15 minutes, depending on the number of concurrent builds. So that was the new timeline. Looks good.
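The console-output scan from the Post Build Task step can be sketched as a small shell function. The marker phrases come from the talk; the file name is illustrative:

```shell
# Scan a Jenkins console log for failure markers and decide which
# commit status to send to GitHub.
scan_log() {
  if grep -qiE 'marked build as failure|was aborted|failed' "$1"; then
    echo failure
  else
    echo success
  fi
}

# STATE=$(scan_log console.log)   # then POST "$STATE" to the statuses API
```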
At least for me, because in comparison to one hour of delaying your pull request, it's a really good result. And here is the list of statuses we get on GitHub after this improvement, after all the builders. By default we have the vanilla installation, upgrade plus installation, and installation steps, plus the sniffers; this is what we kept in the default job. And we have the Behat tests and the accessibility tests. Here is how a failed status looks: if something is red, we just see that it's not acceptable, go and work on it again. But the main thing we learned from this, when we split everything into different jobs, is that now we have an entry point for new tests, for new jobs. We have the primary job, and we can add secondary jobs, as many as we want, and that's the great thing. And now we are working on some exciting things that I wanted to share with you as well. So now I'm going to talk about what we are working on right now; in a few weeks, maybe after DrupalCon, it will also be available, so you can just visit OpenY, take a look at the code, and reuse it: decoupled tests. "Decoupled" is a word that came into our lives with Drupal 8, and since we are crazy technical guys, we realized that we need something really fun, really interesting. We wanted to challenge ourselves to create a good product, so we came up with the idea of decoupled tests. Decoupled tests mean that we really wanted to test the distribution and its components separately from the distro itself. We are using component-based architecture in the distribution, which means that every component we render on the page is something separate.
It's not a monolithic system; every component is separate. Here is just a rough diagram; don't pay attention to this part, just look at how big the system is, and those are not even all the components. We have hundreds of components: small, big, and middle-sized. The main point is that we have a lot of components, and we wanted to test every single component separately from the distro. So we came up with an idea of how to do that. First step: we select a module from the list of modules we actually want to test. Then we install Drupal using the testing profile. Then we get the list of modules before installation, install the module, get the list of modules after installation, and compute the difference between before and after; then we go back and select the next module. You may be wondering: what the heck is this, where will you use this data and these reports? It's pretty easy. What we get from this report: first, we see how many dependencies each component has. Let's say you specified some dependencies in your info file, but eventually, when you enable the module, it pulls in many more modules that have to be enabled. With this report we can identify unnecessary dependencies. We can also see whether components are ready to be used in other projects, because we have a lot of cool things that we might reuse on client projects, like media features, landing pages with our templates, and so on. And since we are using OOP, object-oriented programming, we can check coupling and cohesion. Those two criteria actually show whether your objects are ready to be used in another system, or whether they are so embedded into your system
So it's really hard to decouple them from your system. The Ansible playbook we are using is a master playbook that just goes through the list of modules and includes another playbook. It's kind of a worker that iterates over the list of modules and passes some additional variables to the other playbook. There is no link; it's not available yet, however, I hope we will have time to finish that. The other playbook is really simple: drush pml, which shows you the list of modules, then drush en to enable the module. Then we check whether the module was enabled at all, because there might be cases when a module won't be enabled without another one that hasn't been specified in its dependencies. Then we run drush pml again, and then we create a diff.

Our goal and our idea is to have a report like this: we have the module, we have the status demonstrating whether the module was enabled at all, and we have the list of dependencies. You may see that the Open Y media image module was not enabled separately on the testing profile. It means we did something wrong with it, and we should check it and make sure it's decoupled. And there is the list of dependencies. So you may wonder how we use this. For example, a team lead or an architect may just quickly go through this list of dependencies; as an example, we have Open Y node landing, which depends on Metatag and Scheduler. Scheduler sounds like a separate feature, right? It's not part of the content type itself. So we may notice that Metatag should be part of an SEO feature, search engine optimization, and Scheduler should be part of a separate feature too. Instead of these two modules, we should either depend on those features, SEO and scheduling, or we should get rid of these dependencies altogether. So there is no obvious solution for developers on how to react to this report.
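As a rough sketch, the master "worker" playbook described above might look like this. The file names, host group, and variable names are hypothetical, not the actual Open Y playbooks:

```yaml
# master-playbook.yml -- hypothetical sketch of the "worker" playbook:
# iterate over the module list and delegate to the per-module playbook.
- hosts: ci
  vars:
    modules_to_test:
      - openy_media_image
      - openy_node_landing
  tasks:
    - name: Test each module in isolation
      include_tasks: test-module.yml
      loop: "{{ modules_to_test }}"
      loop_control:
        loop_var: module_name

# test-module.yml would then run the steps from the talk:
# drush pml (before), drush en {{ module_name }}, drush pml (after), diff.
```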
However, it gives the architect and the team lead some data to think about how the modules can be better decoupled from the distro. Yeah, we are crazy, we work on this kind of stuff from time to time in order to bring some fun into our lives, and that's true. But with that we can create a really sustainable distribution, and modules and components can be reused; you can just reuse them on your own projects that you're building for clients. Why should we create that media feature over and over again when we already have it in the distro, well tested, with a good design? This way we make sure that if we pick it from the distro and enable it somewhere, it will work.

But that wasn't enough, so we came up with regression testing. Just imagine: from the first release up until 1.4 and 1.5 the distro grew. We had plenty of pages that should be tested, and our QA just burned out, because every time he had to go and compare pages for regressions. Just imagine how miserable his life was during that time: he had to open a lot of builds and compare them manually. That's insane. So we started working on regression testing. It's really simple as an idea; however, it might be quite difficult in implementation. And here is the workflow. First, we set up a list of pages that we would like to check for regressions. Second, we generate a set of screenshots from the primary environment; you may just have some dev server that you treat as the primary environment. And whenever we generate a new build, we compare against those screenshots using Behat extensions. There are two extensions that may help you with that. The first one is the Behat screenshot compare extension; from the name you can figure out that it helps you compare screenshots. The second one is the Behat screenshot extension; obviously, it just takes screenshots.
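To illustrate the kind of change such a dependency report suggests, here is a hypothetical info-file edit; the feature names openy_seo and openy_scheduling are invented for the example:

```yaml
# openy_node_landing.info.yml -- before: direct module dependencies
# that the report flagged as belonging to separate features.
dependencies:
  - metatag
  - scheduler

# after: depend on the features that own that functionality instead
# (or drop the dependencies from this component entirely):
# dependencies:
#   - openy_seo
#   - openy_scheduling
```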
So we had to add those two libraries to composer.json, or just composer require them, and then in behat.yml we had to specify those extensions and a few configuration options, like which folder we should save our screenshots to when we compare them, and so on. Then we created a custom method to simplify saving screenshots into a specific directory. And here is what the Behat feature looks like. The first two lines are triggered on the primary environment: the first one removes all old screenshots, and the next one takes new screenshots from the primary environment. The third part of this scenario compares the screenshots. It's really simple, but it will show you things you might not notice just by eyeballing two pages. Here is what the detailed report looks like: in addition to the link to that fancy Behat report with green circles, we just send the link to the directory where we keep the screenshots. And here is an example of a Behat report where a screenshot comparison failed: we see that the home page doesn't match the new home page, and eventually the screenshot of the regression looks like this. It's okay that we updated the menu items, that's totally fine, right?
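In behat.yml the wiring looks roughly like this. The extension keys and option names below are placeholders to show the shape of the configuration, not the real package settings, so check the actual libraries for the exact keys:

```yaml
# behat.yml -- illustrative only: extension and option names are invented.
default:
  extensions:
    ScreenshotExtension:           # hypothetical: takes screenshots
      screenshot_directory: /tmp/screenshots/current
    ScreenshotCompareExtension:    # hypothetical: compares against baseline
      baseline_directory: /tmp/screenshots/primary
```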
But we see that a margin was completely changed. If that's fine, we can say it's completely fine, I'm going to merge this pull request; if something really crazy is going on with the page, we don't merge it. And it's cool, because we now have the primary environment and the current environment and just compare them, so it shows regressions on the builds right away. But in case you would like a less custom option, or you have no time to play with Behat, or you need a really quick solution, there is a service called Backtrac.io, created by my ex-colleagues and friends. This service allows you to generate and compare different environments, even against production. So take a quick look at it if you would like to catch regression bugs. With our custom implementation we will actually be able to catch these bugs too; it's in progress, we are working on it, and we will have it implemented very soon.

And the last piece, which we are working on right now, is a real breakthrough, at least for us, for our technical team. That's TDD testing, TDD based on unit testing. We need it in order to test third-party integrations, and in the distro we have a lot of different third-party integrations, because YMCAs use different services for different things, like schedules, personal trainings and so on. So we found a way to perform unit testing on a real database.
Instead of installing Drupal from the testing profile and testing everything there, we can install the distro using our profile and test everything there. Or, if we're talking about client projects, we can take the database from production and run the tests against that. A live database: that's the main difference from usual unit tests. And I would like to say kudos to the two guys who actually came up with this idea. One of them is here, so if you have any questions, just ping him instead of me; he is the creator of this masterpiece that we are working on right now. His name is Mitrov and Levski, and the other one is Andriy Penangka. Thank you, guys, for this idea.

Let's take a look at an example. The example is really complicated. We are building an integration with third-party services. First, we get data from a REST service. Second, we process that data. Since third-party services are not always as friendly as we expect, based on this data we then have to scrape the website's HTML markup to get additional data. It's insane that whenever we create an integration with a third-party service, it doesn't have everything we need, so we have to find a workaround. Then we get data from the REST service once again, perform calculations, go to the next step of the form, and so on. Just imagine how complicated this is from the technical perspective, from the development perspective. But from the user's perspective, it looks really simple.
Here is the form: select a type, select a location, select a school, then select a program, then select a rate, and here you go, you are redirected to another link, to some third-party service. It looks simple, so why do we have such a difficult and complicated backend? The reason is third-party services, and I'm pretty sure that if you have ever built an integration with a third-party service, you understand the pain I'm talking about right now. So those two guys came up with the idea of TDD unit testing. This is the module; it contains only the most important parts: the drush file, the services file, a src folder with the three services, and a file with some basic configuration. I hope that at some point this module will be published on drupal.org. If you need something like this, ping that guy and push him to publish the module on drupal.org, so everyone can reuse it. By default it provides a few drush commands. The first one allows you to run the unit tests. With the second one you can run a test that will actually perform requests to the third-party services, collect the data and save it into a specific file, creating a mock for your test. So when you run the test again, instead of pinging the third-party service, it will use the data from the mock file. You can enable and disable some tests, and you can set the mock data manually. So those are the drush commands. And the next slide completely blew my mind when I saw it; it's a masterpiece of programming, believe me. This is it: the unit tests for a live database.
We bootstrap Drupal, we load some additional classes, we load PHPUnit, we reflect those libraries, we add our test suite, and we run the tests on a live database. Let's say you have a database with real data pulled from the third-party service: you can perform these tests on your live environment with this class and method. And here is what the test itself looks like. It looks pretty similar to unit tests in Drupal. We have a list of methods. In the first one, we just want to make sure that whenever we get categories from the third-party service, our code processes them in the right way. In the second one, based on the category we got previously, whenever we send a request to the third party, our code processes that data in the right way. And the same with the registration link. Here we test only our code; we do not care about the third-party service, because we are not the developers of the third-party service. We want to test only our code, and this is how we're doing it.

This is the way we can run it: we can run it using drush, or we can run it as PHP code through Devel PHP. With these tests we can mock data from the third-party API. How can they be helpful for the team? Even your QA can run them using Devel PHP. It's time-saving, because you need only a few seconds to run them, versus an hour with Behat. And it's easy to debug. Let's say you are building this form for the third-party integration: just to get to the last step, you have to go and click through a lot of different options. With PHPUnit, you set a breakpoint, you run the drush command, and in one second you are already at that breakpoint. That's a breakthrough, at least for me and for our team, because it speeds up the team's work by hours, and you feel like you're doing something really cool instead of just functions and alters and stuff like that.
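The shape of such a test is roughly the following PHPUnit-style sketch. The class, service and method names here are invented for illustration, and in the real module the suite runs after bootstrapping Drupal, so this is a sketch rather than standalone runnable code:

```php
<?php
// Hypothetical sketch: unit-testing only our own processing code,
// with the third-party response replaced by recorded mock data.
class ActivityFinderTest extends \PHPUnit\Framework\TestCase {

  public function testCategoriesAreProcessed() {
    // In CI this JSON comes from the mock file recorded by the
    // "collect data" drush command instead of a live HTTP request.
    $raw = json_decode(file_get_contents(__DIR__ . '/mocks/categories.json'), TRUE);
    $categories = (new CategoryProcessor())->process($raw);
    $this->assertNotEmpty($categories);
  }

  public function testRegistrationLinkIsBuilt() {
    // We test only our code: given known input, the link builder
    // must produce a valid third-party registration URL.
    $link = (new RegistrationLinkBuilder())->build(['program' => 42, 'rate' => 7]);
    $this->assertStringStartsWith('https://', $link);
  }

}
```

Because the mock file stands in for the live service, the same test can run against a live database in seconds, which is what makes it usable for debugging with a breakpoint.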
So this is our CI setup for the Open Y distro, and you may reuse it; just ping me if you need any additional information. I just wanted to summarize everything. We have CI with builds, code sniffers, Behat tests and accessibility checks, and right now we are working on some really exciting stuff like decoupled tests, visual regression and unit tests. And here is our goal: the desired list of statuses for pull requests. We would like to split everything into different jobs, and we would like to add the run time at the end of each status description. So that's it. I really wish you this: do not fix bugs later, fix them right now, and try to identify them before they appear in some production environment. Happy testing, everyone, and join tomorrow's contribution sprints. Thank you very much. Now it's time for questions and answers; please evaluate the session. Thank you very much.

Any questions? There is a microphone in case you'd like to ask something.

Hello, good session. My question is about the default content that you create: do you only create it in the Behat tests, or do you make some kind of default content available for the QA team, something like that?

Yeah, that's a very good question. At the very beginning we had to decide how we would handle default content, because whenever we install the website, it's just empty; there is nothing to test. So we came up with an idea, and actually this guy was working on it: we are using the Migrate module to generate content. The main idea is that we have a set of YAML files with the content, split out by entity. So we have a YAML file with images.
We have a YAML file with nodes for locations, YAML files for blogs, YAML files for landing pages and paragraphs, and the Migrate module just builds and assembles everything and creates the content when we install the distro. So in case you would like to take a look at the examples, just ping me after the session.

That's amazing, because there is a contrib module called Migrate Default Content that does exactly this.

Yeah, actually, we noticed that module once we had already implemented this. Cool, thank you for the question.

Hi, thank you for a great session. I have a question about projects based on the Open Y distribution. First, on what repository and hosting are they hosted, and which of the tests you described do you run on such enterprise projects?

Yeah, so as you heard, we're using CIBox; it's kind of our internal tool, but it's just simple Jenkins plus some Ansible playbooks. On the enterprise websites that we're building, we usually use only the builds, and in some cases Behat or unit tests. If a client requires accessibility, we can easily set that up as well. Basically, all those things are based on client projects; based on that experience we added them to the distro, assembling good ideas from different projects. In most cases the projects are hosted on Acquia, so we perform tests on local dev instances on DigitalOcean; we do not run tests on prod or stage.

Any other questions? If you have no questions, thank you very much for coming. There is my contact information in case you'd like to drop me a line; I will be happy to answer your questions or do a demo for you. So thank you very much for coming.