All right, it's one o'clock, so take your seats and we'll start. Welcome to Writing Your First End-to-End Test for a Web Application. Quickly about me: I came all the way from Australia. I'm an automated test specialist and a back-end developer by day; my name is Vladimir. I also run a small consultancy. This is my third time presenting at DrupalCon, but my first time presenting at a DrupalCon in North America, so I'm pretty excited, and I hope you are too. I'd like to make this very interactive if you want it to be, so if you have any questions during the session, just use the mic for the sake of the recording, so everyone can hear it later on. Cool. So the session is called Writing Your First End-to-End Automated Test for a Web Application, which sounds very technical, but we're in the user experience track. Kendall, who is the track chair, asked: will you be tying your session to UX, or is it actually going to be technical? What's the difference — how are the two connected? And I said, yes, that's exactly what I'm trying to achieve, and that will be the main goal of the session. About the session: it doesn't contain any deep technical details, and it doesn't reference a specific methodology. I'll borrow a lot from Agile — things like the definition of done and user stories — but none of that is tied to the Agile methodology, and it can be used on any sort of project. The session also doesn't contain a design conversation as such, because it's tightly related to testing. So what is the session about? It's about improving your user experience — your client's user experience, your end user's user experience — by leveraging end-to-end tests. We're going to look at what those are, why we need them, and how we can create a common language that can be used throughout the whole team for the project.
And we're going to see what sort of benefits we can achieve by doing that. This session is basically for everyone who is on a Drupal or web application project — and I mean everyone, including the client: project managers, testers, developers, and designers. A quick agenda: we're going to talk briefly about user experience and testing, and then, before writing our first test, we'll look at why we need to do that and what it actually gives us. And then I'll share a few tips that I've picked up along my long journey of building web applications. As a developer, which I am, testing was always on the table, and everyone would say, "that sounds good, but we don't really care." Usually when we talk about testing, we use technical language, so a lot of people would just say, well, that doesn't really mean anything to me, I don't really care — especially clients, when you come and say, hey, it's tested. And let's be honest, Drupal is a developer's baby, right? It was built by developers, and mostly for developers. Dries said yesterday that we have great tools for technical collaboration, but we have nothing like that on the non-technical side, and that's exactly what I'm referring to. When you talk about Drupal, the majority of the people you're referring to are developers. But as an end user of any product you build or deliver, I'm not really concerned about testing. I'm not concerned about the version, the flavour, or sometimes even the CMS you're using. I'm interested in results. I'm interested in speed. And I'm interested that it actually looks nice and gives me a nice user experience. A lot of people come to me and say, oh, you're a back-end developer — I don't speak technical language. I've heard that phrase a number of times; sometimes it came from the client, sometimes from someone else, but I've heard it quite a few times.
So let's jump in and see why we need user experience in the first place. You probably want to provide a great service, right? And in this day and age, Facebook is the standard for media content and the like. You're obviously here to make money or save money. And it's always about reputation — it's always nice to hear that the product looks nice. Is anyone here using YouTube on a smart TV, Apple TV, or something like that? They did a release in February. You used to be able to see your channel subscriptions, about 30 of them at the same time. Then one day they had a release and, because they were trying to move to a standard look and feel, they basically took all the channels and put them into one row. I had subscriptions to about 180 channels on YouTube — so imagine finding something near the end of the alphabet, if it's even in alphabetical order. There were a lot of complaints about the new YouTube layout, and within two weeks they were forced to change it and produce another app for TVs, trying to calm the angry mob on the internet. Here's another app, Strava, which I use for my running — I used it over the last couple of days. Here's another thing that relates tightly to user experience: here's me saving an activity. I usually give my activity a custom name, and now I have to tap to dismiss the keyboard before I can reach Save Activity — which is fine. But before, they had the save button at the top on the right-hand side, and now that place is empty: they moved it to the bottom and removed it from the top. Although if you look at the other screen, the edit button is still there at the top. Things like that make us very, very frustrated as users. So now let's look at testing. Why do we need testing? It's not that different from UX.
You also want to provide a great service. Last month, Canada actually scrapped its payroll system. It was a big deal and it cost them a lot of money — and the payroll system was provided by IBM. I use that as an example of, perhaps, bad testing; I'm sure other factors contributed. But when I saw that story, I thought: that sounds very familiar. Because the state in Australia where I live, Queensland, tried to implement a payroll system for public hospitals statewide about five years ago. IBM was involved, and it became a payroll system debacle. It cost the state a lot, and they even lost a couple of court cases to IBM. So again, something very similar: a big payroll system deployment failed. And then I saw this Canadian article quoting one of the heads of the union saying, "I can't believe a Google search wasn't done on IBM and payroll." What I'm trying to segue into is that sometimes IBM is simply a good enough name for people to throw a lot of money at the software, whereas we in Drupal are still trying to get there and say, hey, we've got good software, maybe you should use that instead. The usual reasons for project failure are poorly defined goals, over-optimistic expectations, and projects being too complex. At the same time, the project structure is usually requirements, implementation, and tests, in some order. So where does end-to-end testing fit in, and what is end-to-end testing? It's basically testing through a browser, which puts me as close to the user as possible. And when we write an end-to-end test, there's really only one rule: it should be automatable — which maybe doesn't sound clear yet, but we'll get to that pretty soon. And what if we combine two practices — good testing practices and good user experience practices? That might be a good idea. But first, we need to give something to the client.
Because when you come and say, "hey, I'm agency X, and I also do testing," they usually don't care. The client needs something solid, because they want to pay money for something tangible. So when you bring testing, you don't sell the testing — you sell the outcome. On Twitter, a person from the design side was complaining about developers saying the client is wrong because they're using the software wrong. He went on this rant about developers, and I said: you know, the client doesn't buy testing — they buy outcomes. That's exactly the direction we need to take a lot of Drupal projects. So the testing outcomes you can actually show to the client are artifacts, and we're going to look at those artifacts — what testing can actually give you, because testing costs money. The artifacts are test reports, release notes, and notifications. I'll just stop here for a second, because I want to do a live demo — let me see if the internet is working. Basically, I'm pushing a new feature, and I want to show you that what I'm talking about isn't just theory: it actually works, live, right now, and it's what I use on a daily basis. If you look at GitLab, the feature I just pushed is kicking off and starting to do some CI stuff. We'll get back to that later, if the gods of live demos are good to us. Okay, back to the presentation. So: three artifacts of testing you can show to the client — test reports, release notes, and notifications. Why does the client care about them? Well, the artifacts tell you about the code that was just pushed. My system is going to come back to me and say: hey, your Drupal installation doesn't work. Or: your Drupal installation works, but some of your features are broken. Or maybe something else — I've got quite a few tests there, so we'll see. A test report makes sure your code is deliverable.
Why do we care whether the code is deliverable? Because it saves us time and money: if we need to release the code as quickly as possible and we can't, that's going to cost us more. A test report is usually produced every time the code changes — every time, all the time. That's the idea of a test report. It looks something like this: for the stuff I just pushed, I've got four different stages of testing, and the test report here says it's all good. Or sometimes it comes back like this, saying something is broken, and we can go inside and see what's wrong. In this particular example, I have a lint stage which checks the code quality, plus other things like Drupal configuration quality. The second stage is the Drupal installation and the end-to-end tests. Then it checks that the hosting is consistent — maybe there are security patches — and then it deploys. So that's what this workflow does. Sometimes you just need a test report confirming that your Drupal site can be installed. Say you push some configuration and forget, I don't know, one field. You push it to your repo, another developer pulls it, but they can't install Drupal because you forgot something. You want to know about that before it happens; otherwise you lose time and money. And sometimes the report you want is for the tests you wrote from your stories. Here are three sample features from one of my clients: they passed, and that's all I want to know. Now, release notes. If the client is not impressed by test reports, that's understandable — again, there's a lot of developer colloquialism in them. But when we come to release notes, that's something the client understands.
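The four-stage workflow described here can be sketched as a GitLab CI configuration. This is a minimal illustration, not the actual pipeline from the demo: the stage names, images, and commands (the lint tool, the install command, the helper scripts) are all assumptions for the sake of the example.

```yaml
# .gitlab-ci.yml — sketch of the four-stage workflow described above.
# Tools, images, and script names are illustrative assumptions.
stages:
  - lint
  - test
  - hosting
  - deploy

lint:
  stage: lint
  script:
    - composer install
    - vendor/bin/phpcs --standard=Drupal web/modules/custom  # code quality
    # ...plus Drupal configuration-quality checks

install-and-e2e:
  stage: test
  script:
    - vendor/bin/drush site:install --existing-config -y     # can Drupal install?
    - npm run e2e                                            # end-to-end story tests

hosting-checks:
  stage: hosting
  script:
    - ./scripts/check-security-updates.sh                    # hypothetical helper

deploy:
  stage: deploy
  script:
    - ./scripts/release.sh                                   # release notes + notifications
  only:
    - dev
```

The point is that each stage maps to one of the failure messages above: lint fails, Drupal won't install, a story test breaks, or hosting checks flag a problem — before anything is deployed.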
But something the client understands even better gives them leverage: "hey, this feature is deployed, so now I can test it" — without you having to tell them. So: what was released, where, and when. If you have development, stage, and production environments, that's the "where." The "when": yesterday, three days ago, five days ago. You basically have a receipt for your software — something you may not have had before. It can look something like this: here's an example of five features released, with release notes. As you can see, they're phrased like stories and are specific to a particular client; we'll look at those once we start writing our first test. Here's another example: six stories went out in this particular release. Pretty simple. Look at the first one, for example: "As an administrator, I want to edit the same metadata as in the Drupal 7 version." Pretty vague — but in the context of this particular release, the client probably knows exactly what it means. And the last thing you can show the client is notifications: email, Slack, SMS, push notifications, whatever you use. Why spend time contacting the client to say, hey, we just pushed stuff to your stage, you can test it, when they — and your developers — can get one of these automatically? So you bring the release notes into an email notification. Here's a sample notification that was produced on Acquia, with a version number, a date, and the number of stories that were released. Here's a simple Slack notification for developers: they don't need to be told what was released — they can check that themselves — but they need a link, and they need to know that Dev was updated, so they can see whether their changes went out or someone overwrote them. Or, in some cases, I embed the release notes into Slack, and it looks like this. Simple. And not a single person spends time on it once it's implemented in the continuous integration system.
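As a rough sketch of what that notification step produces, here is how a release message might be composed before being sent to email or Slack. The function name and inputs are my own illustrative assumptions; in a real pipeline, the version and the story list would come from CI variables and the release notes.

```python
def release_message(version, environment, stories):
    """Build a human-readable release notification from the release notes.

    `version`, `environment`, and `stories` are stand-ins for values a CI
    pipeline would supply (tag name, target environment, changelog entries).
    """
    lines = [f"Release {version} deployed to {environment} ({len(stories)} stories):"]
    lines += [f"  - {story}" for story in stories]
    return "\n".join(lines)


msg = release_message(
    "1.4.0",
    "stage",
    ["As an administrator, I want to edit the same metadata as in the Drupal 7 version."],
)
print(msg)
# In CI this string would then be POSTed to a Slack webhook or sent by email.
```

The same text serves the client ("it's on stage, go test it") and the developers (the link and the environment), which is why one generated message can replace the manual "we just deployed" email.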
Test coverage varies. If you go to the client and say, "I can give you full test coverage," they say, "yep, that's what I want." Then you show them the budget, and they say, "actually, let's just finish the features." So always start small: I've found you should put the automated system in place and write one or two tests. That's it — that's all you need to begin with, and it's going to save you a lot of time. If, after a Drupal installation or a Drupal update, the front page comes up — that's the bare minimum. If you decide to go down the path of producing tests for every feature, time-box it. Make sure it doesn't go over roughly 30% of development time — and it easily will, because you can lose yourself in the world of Selenium and test writing. Developers are like that: you show them something shiny, and they'll just run with it. I know — I'm a developer. I can see them polishing these tests to perfection, but that's not what the client paid for. So if the client agrees to tests, allow approximately 30% of your time for them, plus or minus, depending on the tests — that's a rough idea. How much does it cost? That's a very tough question to answer. Testing is hard to sell — it's one of the hardest things I try to sell. And there's also the price of maintenance: you have a Drupal website, you have security updates that cost time and money — the same goes for the testing. So be honest with yourself and be honest with your client: it's never easy or cheap. The stats I've gathered from various articles, and from my own experience, usually say that whatever you plan, you'll end up spending about three times more. So, let's look at tests as a common language. As we saw before, a project, simplistically, is: requirements, implementation, tests.
Here's code-driven development: you have requirements, you go through all the funky stuff of whatever methodology you're using — Agile, waterfall — break them into stories and then development tasks, you do the implementation, then you write a test and make sure the tests verify the requirements. We can change the workflow and do test-driven development: write the requirements, write a test, and based on the tests, write the code — your implementation. With these circular or bidirectional connections, you have the possibility of Chinese whispers: what's in the tests won't necessarily match what's in the requirements. Why? Because sometimes developers or PMs aren't attentive enough to client needs. Sometimes clients don't know what they want. It can happen either way. So how do we make sure the tests we write fit the requirements? Requirements explain the system flow — and tests also explain the system flow. So why can't they be the same thing? Why can't we write requirements that end up as tests? The answer is: we can. We're going to define a common pattern — and by all means use it as a template; that's how I work, I've done it on multiple projects, and it works fine. We're going to define roles — the actors that some methodologies' user stories talk about. We define the role, the action that role performs, the reaction that happens in response to that action, and a possible exception. So finally, after 23 minutes of me talking, let's write our first test. I'll step back again and say: when you're writing your first test, follow the user. We're providing software for a user, not for a developer. So follow the user, not the developer: start where the user starts, and do what the user does.
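The role / action / reaction (+ exception) pattern can be captured in a tiny data structure, so the same story renders consistently wherever it appears — backlog, release notes, test names. This is a sketch under my own naming assumptions, not part of any framework:

```python
from dataclasses import dataclass, field


@dataclass
class Story:
    """One requirement in the role / action / reaction (+ exceptions) pattern."""
    role: str
    action: str
    reaction: str
    exceptions: list = field(default_factory=list)

    def render(self):
        """Phrase the story the way it would appear in release notes."""
        text = f"As {self.role}, I want to {self.action} and {self.reaction}."
        for exc in self.exceptions:
            text += f" Unless {exc}."
        return text


story = Story(
    role="an anonymous user",
    action="go to Drupal.org",
    reaction="see the main menu and three sections for developers, marketers and agencies",
)
print(story.render())
```

Because every story carries its role explicitly, the pool of roles grows naturally as the project does — exactly what happens below when the registered user appears.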
So we define the actual roles, we define the reaction, and we do what the users do. What do they do? "Go to the home page" is the action: the user goes to Drupal.org and sees the home page. But does the user really want to "see a home page"? Probably not — they want to see something specific. This is where I draw the line at "see the home page": when writing your first test, you can say "see the home page," but what are you actually going to test? What is the home page? Are you going to test that there is a menu that's visible? That the Drupal logo is there? That there are three sections? We can test any of those — but test something. In my experience, "seeing the home page" only becomes testable once you say: I want to see a title, I want to see a logo, I want to see these sections. That's far more testable than "seeing a home page." (And that's what I meant by the Drupal "blob" — although those look like blobs, I call them sections.) So here's our first test, and our first story as well: as an anonymous user — there's our role — I want to go to Drupal.org and see the main menu and three sections, for developers, marketers, and agencies. I'm actually testing two things: that the menu is visible, and that there are three sections. I don't really care about the Drupal logo, but you can see how you can build your story from here. Is this story technical? Not at all — there are no technical aspects. Is it a requirement? Yes, it is. Can a developer build this feature based on this requirement? Yes, they can. Maybe the menu won't look quite the way you want, and maybe the sections will look a bit different, but the essence — their presence on the page — is there, and that's what the test verifies: the requirements.
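"See the home page" becomes testable once the checks are named. Here's a minimal sketch of that idea: the `page` dict stands in for whatever your end-to-end tool reports after visiting the URL — its keys are assumptions for illustration, not a real API.

```python
def verify_home_page(page):
    """Return a list of failed checks; an empty list means the story passes."""
    failures = []
    if not page.get("main_menu_visible"):
        failures.append("main menu is not visible")
    # The three sections named in the story.
    missing = {"developers", "marketers", "agencies"} - set(page.get("sections", []))
    if missing:
        failures.append("missing sections: " + ", ".join(sorted(missing)))
    return failures


# The two things the first story actually tests: menu visible, three sections present.
ok = {"main_menu_visible": True, "sections": ["developers", "marketers", "agencies"]}
broken = {"main_menu_visible": True, "sections": ["developers"]}
print(verify_home_page(ok))      # []
print(verify_home_page(broken))  # ['missing sections: agencies, marketers']
```

Notice there's no check for the logo: the story didn't ask for one, so the test doesn't assert one. The test verifies the requirement, nothing more.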
So feel free to go further and say "I want a blue logo with a shade of grey" — it might be harder to test, but hey, it's your product and your time to spend. For the very first implementation, though, I think this is more than enough. Now let's look at using Drupal.org as some other user. First, we need to get to the login form, right? If we write a story, what are we going to say? "As an anonymous user, I want to go to a particular URL" — because that's what you all do to get to the login form, right? No. Raise your hand: how many people actually type the URL of the user login — Drupal.org/user/login — and then see the login form? There are a few. But looking at the hands that went up, there were about six, and there are a lot more than six people in this room. So you see what the majority would do. Let's scrap "go to URL." How many people go to Drupal.org, hover over the menu, and click Log in? More than six, I can tell you that. You see what I mean by developer versus user. So let's split our action into what we just did: we hover over the user icon, we locate the login menu item, we click on it, and we see the login form. Here's our story: as an anonymous user, I want to go to the homepage, hover over the user icon, click on it, and see the login form. Or we can rephrase it a bit better: as an anonymous user, I want to locate the login menu drop-down and see the login form, basically. (Maybe the people who don't hover over the menu just left — I'm joking.) You can rephrase it based on your preferences: if you don't really care which menu item the user hovers over, you don't necessarily need to put that in straight away. This is only our second story — we're barely into the project — but we've already done that. And once we're here, we enter our credentials, we log in, and we're no longer an anonymous user.
So we add another role to our pool of roles. You can see there's now an icon with my user picture. "As a registered user, I want to log in and be redirected to the homepage" — a valid story, our third. Keep in mind that we already tested that we can access the login form, so here it's perfectly fine to just say "I want to log in," because locating the login form is already covered by its own test. And our consequence: be redirected to the homepage — which means here. We can extend the story later — I pointed out the user picture, so we can add it to the reaction, to what we see as the consequence: "As a registered user, I want to log in, be redirected to the homepage, and see my user picture in the main menu." And back to our release notes — remember, one of the artifacts I showed you earlier. Those stories should make more sense to you now, even if you're not across the project and you're probably not a developer. A story like "As an anonymous user, I want to see a what's-on section after the carousel on top of the homepage" makes sense — to a developer, to a tester, to a client, to a designer. In the last bit, I want to go through some tips. At first I wanted to embed them in the test-writing part, but I decided to put them separately, because they're things I've picked up along the way, and they basically reiterate how to write the story and how to approach your bosses and clients about investing in an automated testing system. Number one: start simple. As we did with the user story, we went to the very beginning of our project — Drupal.org — and looked at what the user does first: they locate the homepage. Copy user actions, not developer actions. You're not going to test every page by just typing the URL — no one does that.
This first tip is very important, because in the first workshop I do with a client, where I explain how to write the requirements on paper, the first reaction once we go through this process is: but that's a given, right? The login form is always going to be there — it's Drupal, it comes out of the box, and there are unit tests in core covering it. Embrace simplicity. You'll see that after about ten tests you start getting quite deep into the project, but it's still the basics that drive it. If they're broken, nothing else is going to work: if the client cannot locate the user login form, they won't be able to progress to the other tests. So by discipline, I mean start simple: start from the very beginning, explain what the homepage is, and you'll get past it very quickly. Number two: create a pattern. The pattern I gave you is, again, very similar to an Agile user story: define a role, define an action, define a possible reaction, maybe some exceptions. Iterate. Versioning is good reiteration — do as many iterations as you possibly can. WD-40 is a good example I use: the 40 stands for the fortieth iteration of the formula they produced. Make the iteration continuous — that takes us to continuous integration and continuous delivery, which is not part of this topic, but it comes very, very close. To give you a solid example: when I was updating Drupal 8.3 to 8.4, the tests I wrote discovered three bugs. One was with views — specifically, a view embedded into a page. One was with the Bootstrap theme: the drop-down menu didn't work because of the jQuery update — I think it was jQuery 2 to jQuery 3 in Drupal 8.4 — against Bootstrap. And there was another one related to core, I think.
So basically, before doing anything — before any developer even started working on 8.4 — we just did a composer update of core, pushed it to our CI, and straight away got three failed tests. Actually more, because a lot relied on the Bootstrap drop-down menu. And the view didn't appear, so the test said: the view didn't appear, let's investigate. Within half an hour we knew there was a problem — so we weren't updating to 8.4 that week — then we investigated and checked whether there were any patches. Luckily for us, there were patches for all three issues, so we applied them, confirmed the patches actually worked, and kept working. Again, on the topic of continuous delivery and continuous deployment: the only relevant source I could find is from 2016, from the New Relic website, showing how many companies can do releases on the same day, daily, weekly, monthly, and so on. And I'll reiterate: make sure your codebase is releasable at any time. These tests will push you towards that. They won't get you completely there, but they'll definitely let you know when something is wrong. And those are trends from almost four years ago — it's only become more important to be able to release your codebase, your changes, as soon as possible. A story must be testable. I changed that from "a test must be testable," as I said before, because this way it sounds right. Keep it simple, as we saw before. Evolve your story, like I added the user picture at the end. Do not combine multiple features. Make everyone work — and I mean everyone, including your client. You'll see the change once they produce their first story: it's going to be their baby; they're going to know what they're talking about. Involve designers; make them produce a story.
You know, everyone on the project who is accountable and produces measurable results is a valued member of the project; otherwise, from my perspective, they're not really contributing. I'm going to stop for a second, because we're almost at the end, and go back to the code I pushed, just to check it. The internet is a bit flaky, but you saw all the stages were green. So unless GitLab is down, I just wanted to make a merge request to actually do a release and show you the result. Here we go. Here we are. This passed, and if we click — what I was describing before — yeah, it looks like it's struggling. You can see the lint stage passed, the Drupal installation passed, and the tests passed. I'll just quickly show you that there's actually a bunch of tests in there. And now, if I merge it to Dev, it will automatically do a release and send me an email and a Slack notification, which we'll try to fit into this session — but it's struggling. You can see here one of the stories, the last story tested: there were 234 assertions, which doesn't mean 234 tests — an assertion is one of those green ticks, basically. Before that, it tested that Drupal can actually be installed, and so on and so forth. So I'll quickly go to my branches — okay, the branch, which is 225-drupalcon — and ask for a merge request. All right. While that's running, let's jump to the next slide. So, to finish: make everyone work, and, as I said, involve the client as soon as possible. Make them write a story. Give them a spreadsheet and say: here's your pool of roles; define the action. And educate them — educate everyone — because, exactly as I showed you today, the first story someone writes, you'll read it and go: how do I test that? Maybe we should rephrase it. Work with people. And make sure everyone in the project grows and understands what the stories mean.
And when they get one of the artifacts, they understand what they're looking at. Define test coverage, even if there's no budget for it yet: "we don't have any budget for automated tests, but hey, let's at least test the homepage." I've been doing end-to-end testing from the very beginning of projects, and it has always paid off — it's never a waste. All the time I spent researching the stack — Selenium and everything I put on top of it — paid for itself with that Drupal 8.4 update alone. Imagine what I'd have done two years earlier, before I had any of this: how long would it take to find and fix three separate problems in a single release without knowing what's actually failing? And again, you probably don't want the client coming back saying, hey, a few things are just broken in the release. As I said, start with two initial tests, or maybe one test per role. Let's talk about this: if we go back — let me close this page — you'll see there's nothing inside. There wasn't enough budget, so here's the test: here's the story, but it's empty; someone needs to write the test later on. Yep — create placeholders. Make sure the client sees those lists, and make sure they know which stories aren't tested yet. That might push them to actually get those tests working, and maybe invest a bit more into testing. Sandbox — this one you know. Why do we need the copy-pasted "version three" of a document? That's an extreme example, but sandboxing is good. I was working in an organization where about 16 people were involved in taking a policy from draft to live. And they all knew the shortcuts, right? They'd say: oh, this particular policy, the cleaning policy — I don't really need to go to a director, I don't really need to go to a manager; I go directly to this guy, he signs it off, and I can publish it to the whole company. But they can't do that: they have to go through the whole workflow.
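Placeholders can live directly in the test suite, so the untested stories stay visible in every test report. Here's a minimal sketch using Python's standard-library `unittest` — the story names come from this talk, and the "no budget yet" skip reason is the point, not an error:

```python
import unittest


class AnonymousUserStories(unittest.TestCase):
    """Each test method is one user story; unfunded stories are skipped stubs."""

    def test_homepage_shows_main_menu_and_three_sections(self):
        # Implemented story — in a real suite this would drive a browser.
        # Here a stand-in result keeps the sketch self-contained.
        sections = ["developers", "marketers", "agencies"]
        self.assertEqual(len(sections), 3)

    @unittest.skip("Placeholder: no budget yet — story stays visible in reports")
    def test_registered_user_sees_picture_in_main_menu(self):
        pass


# Run the suite programmatically so the skipped stub shows up in the report.
suite = unittest.TestLoader().loadTestsFromTestCase(AnonymousUserStories)
result = unittest.TestResult()
suite.run(result)
print(f"ran={result.testsRun} skipped={len(result.skipped)} ok={result.wasSuccessful()}")
```

A skipped test with the story as its name is exactly the kind of stub a QA person can pick up later: the report keeps reminding everyone which stories are still untested.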
I said, all right, let's put it on paper and analyze the workflow. But once I asked them to write a user story for one of the actions — taking this paper to one of those people — and then to capture the exceptions, the shortcuts, it turned out it was all in our heads. We know how the process works, but converting it to digital is another matter. So sandbox the client, sandbox features, and once they're on paper, implement them; and if you have time, add the exceptions, iterate, and expand the functionality as time or budget allows. Definition of done: usually it's the reaction — the reaction is a good definition of done. Be as vague or as precise as you want: as we saw before, just say "the menu," or define the particular menu item — the same goes for everything. Have a beginner's mind: imagine tomorrow you start a new job and you need to read this documentation to know how everything works. If you don't write it that way, the person after you will probably scrap it and start something else, and you don't want that. And this is the summary: nine points to make sure your tests are in good shape. Use testing as a common language. Make sure everyone writes tests and user stories — we wrote three ourselves. If there's no budget for testing, put in the stubs — put in the test plan; the stubs are a very good test plan. If you have one QA person on site and you give them all these user stories, they'll say: oh, I can test that; I can go and check whether it worked. All those empty tests are not there for nothing — they're going to help your project grow. Use testing tools. Again, this is not a technical session; I'll just put a few testing tools up here, and there are other sessions, which I'll link to, that you can go and have a look at — they're more technical, if that's what you're after — or contact me. The libraries — the Mink library, or Nightwatch, which I use — basically handle this connection to Selenium.
Selenium is a tool that drives the browser. There is Cypress.io, an alternative; its session is tomorrow, I think. There are other things, but Selenium is the leader at the moment. If you don't have a person on site, there are services that can give you a Selenium setup. They can even record a video of the testing, take screenshots, and compare them. The most popular are Sauce Labs and BrowserStack.

If you want more of a PM perspective, check out my session from DrupalCon Dublin on how to write stories from a management perspective. If you want something more technical, check out my session from DrupalSouth Auckland last year; it's nominally about continuous integration, but it goes into how to put the tests together. It's all online, or contact me.

I'm not sure if you can see this, but there are at least three more testing sessions (sorry if I missed someone): two today, at 5:45 on principles of unit testing and at 5:00 on test suites, talking about test tools in Drupal; and at 2:15, "Speedy Testing with Cypress", which is the alternative to Selenium.

My contact details are here: VladimirAus on Twitter, GitHub, and GitLab. Do we have any questions? If you think of one later, again, here are my contact details; the slides are going up and you can find them on Twitter. And it's a great experience: if you can, come and work on the features you like, find the people; there are plenty of people already trying to organize groups. Please leave feedback; it's very important to me and to DrupalCon, to make sure the session was relevant to you, and to hear what went wrong and what you'd like to hear next time if it wasn't what you expected. Thank you very much.

[Audience question, partly inaudible: about using GitLab and how well it integrates with the testing workflow.]
The reason I use GitLab is actually that they have CI. They have a bunch of features, including container storage. I'll show you a few things; we run the Community Edition at our site, and most of my projects use it.

A few things I try to put clients onto, if they're not using Jira or anything: definitely Issues, plus milestones and boards. The boards are very important because I don't have to go to Trello or set anything up; they're already part of the project, so that's one of the things I use. Milestones give you a gentle introduction to Agile: burndown charts and tracking work come as part of it, and some managers want those. And a lot of non-technical people are still very comfortable with a board. One of my clients, for example, is still using Basecamp, but eventually people start commenting on things in GitLab, and I've got a few clients sold on that. So that's the first thing.

CI/CD is my major reason for using GitLab, as opposed to Bitbucket or GitHub. For pipelines, I set up GitLab CI, like the stuff I showed in the presentation; it's basically all done through the CI. The only container I maintain myself is for Drupal, because there's no standard Drupal container, so I put some stuff in there. The rest are stock containers from Docker Hub: the Composer container, the Node.js container. Before that I used the Yarn container and standard Ubuntu. For a release, for example, I just grab the latest Ubuntu box, put the stuff on it, and release: connecting to Acquia and doing an Acquia release.

[Audience:] Got you. Yeah, this is some of what we've got: boards, of course, and following the process along the chain. So we're using some of that stuff, but I was curious if there are any other tools; for our JS integration we're using Karma.
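A minimal sketch of the kind of GitLab CI pipeline described here (the image names and script commands are illustrative placeholders, not the actual setup):

```yaml
# .gitlab-ci.yml -- hypothetical example
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: composer:latest      # stock Composer container from Docker Hub
  script:
    - composer install --no-dev

test:e2e:
  stage: test
  image: node:latest          # stock Node.js container
  script:
    - npm ci
    - npm run e2e             # e.g. Nightwatch tests against Selenium

deploy:
  stage: deploy
  image: ubuntu:latest        # grab the latest Ubuntu box for the release
  script:
    - ./scripts/release.sh    # e.g. push the release to the host (Acquia)
  only:
    - master
```

Each job picks its own container image, which is why only the one custom Drupal container needs maintaining; everything else comes straight from Docker Hub.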
The next thing I'm going to use is the container registry. If you're using any of your own containers, I've found it's much faster if you put them there. So that's next. Then Kubernetes, because I'm now moving to Google Cloud: the Kubernetes feature, released in version 10.0 I think, actually gives you Kubernetes access. I haven't researched it yet, but it looks very promising; you can set up the whole infrastructure in a single box.

By version 11 they're promising security checks. You know how GitHub has security checks for npm dependencies? [Audience: I do not.] So for example, if you have a Node.js project, it checks whether any of your dependencies are outdated versions with security issues, and it sends you an email, especially when there's a big bug. Plus, I assume they're going to support PHP as well. And it applies to your containers too. I'm not using containers for release, because hosts like Acquia and Platform don't allow your own containers. But the good thing is, once you do use your container for release, if your container setup is behind a security release, they'll let you know and might even update your container for you. That's coming in the next couple of versions, and I'm really excited about it. But again, I still need to find the time to look into Kubernetes, so I guess that would be another thing to have a look at.

So GitLab is heavily DevOps-oriented, which is, I guess, good. I talked to GitHub, and they weren't very impressed. But yeah, I'm actually solo, and I've been getting value out of the CI for a year already.