Okay, so, welcome to the Drupal CI initiative talk, or I guess, to keep in alignment with the context of this conference, the future of Drupal.org continuous integration. I just wanna do a quick overview so everyone knows what they're getting themselves into. So first I'll do a bit of a blanket "what is CI, why do we need continuous integration?", then the origin story of Drupal and continuous integration, and the state of Drupal and CI. Then I'll start to get into the project itself and its components. I've got three recorded demos, and from there we're just down to questions and demands, because at the end of the day the stakeholders of this project are the community, so feedback from anyone is welcome, and it'll go on the roadmap if it's good. Cool. So I don't really like having a "me" slide, but at the end of the day I just wanted to say why I'm a part of Drupal CI and what value I add. My name is Nick Schuch, I work at PreviousNext. I'm a developer and a sysadmin, which means I get to make the tools and use them at the same time, which is awesome. I also dabble in a bit of everything, so PHP, Golang and Docker are my latest obsessions, and I do a lot of Puppet. I'm the maintainer of the Tour module in Drupal 8 and I also maintain three, or maybe more, but definitely three, components in Drupal CI. And stop me if I go too fast, because there are a few components and a lot of technology in this stack. So if I'm rambling or you want a bit more clarification, just put your hand in the air and I'll answer it. I'm not gonna flame you. So I wanted to cover continuous integration and just blanket statement it, so guess what I did?
I typed "continuous integration" into Google and this is what it gave me back, but at the end of the day, this is continuous integration to me: continuous integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. And I swear I will not read a slide like that ever again in this talk. But as you can see, we already do CI in Drupal. We generate a patch, generally for me that patch is a one-line fix, then you submit it, it goes through a build system, it gets committed, and then I pull down master and keep going. I'm frequently pulling down HEAD, merging features into it, and getting feedback through the automation. So this is something we already do, but we want to unlock a little bit more, right? So let's talk about the current state. As I stated, we have a patch workflow, and I'm not gonna get into that at all, but that's the workflow we have and that's fine, completely fine by me. You submit it, it goes through a build system, and then we get a response. That build system itself is called PIFR, that "Piffer" bit there. So when you do a build and it fails, you go to that site, see your build, see where it failed and why, and then you get back to replicating those failures locally. But that site is a Drupal site. That's a Drupal site that manages nodes and compute to run the patches. And this was built quite a while ago, and at that stage there weren't really CI best practices, there weren't best-of-breed CI systems. So I completely agree with this approach. I think this is completely fine: you build what you know, and building it in Drupal makes total sense in this case.
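To make that patch loop concrete, here's a rough sketch of the round trip, using a throwaway local repository in place of drupal.org's; the branch, file and patch names are made up for illustration:

```shell
# Throwaway repo standing in for a drupal.org project checkout.
git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
echo 'version one' > example.module && git add . && git commit -qm 'base'

# Make the one-line fix and capture it as a patch for the issue queue.
echo 'version two' > example.module
git diff > ../my-one-line-fix.patch

# Reset to a clean HEAD, then apply the patch the way a testbot would
# before running the test suite against it.
git checkout -- example.module
git apply ../my-one-line-fix.patch
```

The testbot's job is essentially the last two steps plus a test run, repeated for every uploaded patch.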
But, you know, it was testing one version of PHP, and back in that day that's legit, that's fine. It probably wasn't as much of a big deal at the time to just test one version of PHP. The kickoff is to have tests first, right? If you don't have tests, then have tests. There's also no testing for the infrastructure itself. So, talking with the DA staff, they're still pushing changes, and they'll push a change to production and there's no test suite for the current infrastructure. Then they'll go, oh, we'd better roll back, and then they'll fix the problem and push it up again. So we need tests for our testing infrastructure to make sure that we don't break anything. The next one is that it's locked into Simpletest as well. We have that run-tests.sh script in core, and the infrastructure is tightly bound to that script itself. Sorry, and I missed one too: the nodes themselves. I recently went to DrupalCon Amsterdam and was walking around the sprint rooms, and I noticed in the Drupal Association room that they had about two or three staff members whose sole job for the entire sprint was to spin up test bots. So they'd spin up compute and provision it with Puppet, which is completely fine, but the time it took to get those bots going feels like wasted time. We can make it easier on them. It should just be: spin me up an instance in two minutes, and then go, right? But these guys were spending entire days bringing up instances and diagnosing, when they could be doing something else. Cool, so that's the current state. Now onto some goals of Drupal CI. Based on that, we formulated some goals for the new system. First one: unlocking better testing. That means things like coding standards, PHPCS. We desperately want that in Drupal 8. To be honest, in my mind, if you fail PHPCS, the rest of the build shouldn't really go through.
Like, you shouldn't test code that you're just gonna quickly change, push back up, and re-run your build on, right? So, fail fast. But right now we run Simpletest and that's it; we want PHPCS, Behat, any other crazy thing that comes out for testing in the future, but we wanna be agnostic to that. We wanna leverage industry standards, things like Jenkins. Like I mentioned before, since we built our CI infrastructure, a lot of standard software has come out. Jenkins is the big one, right? You talk to anyone about CI, continuous integration or continuous delivery, and they pretty much say Jenkins all the way through. So, hmm, it's there for you. Yeah, yeah, come to his session tomorrow. ThoughtWorks have a product, well, an open source project now, called Go, right? Go, yep. And then a CI one called, anyway, I've gone off topic, someone put their hand up and stop me. So, you know, Jenkins. And less custom code, right? Because we've built our CI infrastructure all bespoke for the most part. We are leveraging Drupal, but the actual workflow of communicating with nodes, people have solved that since then, and that's a burden we shouldn't really have to bear. More versions. So, PHP 5.3, 5.4, 5.5, 5.6, 7. Yeah, PHP 7 when it comes out, so HEAD maybe. But you kind of get the drift. And outside of PHP, what about Postgres? We don't test Postgres right now in our bot infrastructure. MongoDB: chx is working on MongoDB support right now and it looks really good, but it's really hard for him to prove to someone that, hey, I've got really awesome MongoDB support rocking right now, because it's all on his laptop, basically. So it'll be really good for everyone to see how that's progressing. Automate everything. Like I mentioned, the DA staff at the sprints doing all the manual Puppet provisioning, all of that should be automated.
You should have a pre-built image from the get-go, so you can just go, spin me up another one of those, and away we go. Tools for better testing, I actually covered that, so that was unlock better testing. Test our own code. So actually write tests for our components, right? If we push a Jenkins change, or a change to the workflow on Jenkins, we should have tests around that, to actually make sure that things are working correctly. If we push a change to the actual build workflow, we should have tests around that as well. Every component should be testable, so then we know we're not gonna break our CI system and, with it, everything that runs under Drupal.org infrastructure for testing. Let's avoid that. And the final goal: I kind of mentioned packaging images away, but the goal is to package these for multiple targets, right? So Vagrant and AWS and Docker. There could be more, but those are the three we're targeting right now, meaning that I myself could just pull down a Docker image of a results site or something, and I'll get more into that, but I could pull down a component and run it internally at my work. And there are multiple ways to do it, right? Locally with Vagrant, on AWS by spinning up some compute, or by leveraging it in a Docker infrastructure. But like I said, build for the community. That's the main goal. All these components should be focused on the community, because at the end of the day, someone writing a patch for Drupal.org infrastructure is fine, that's awesome. But if you can get your business to use one of these components in this infrastructure and then say, hey, we want this feature, send a pull request or a patch and let's start talking about it and get it integrated into the main component itself, then everyone wins. Okay, so now I'm gonna start talking about some technologies.
I'm gonna get us bootstrapped on a few of these technologies before I get into Drupal CI itself. So the first one's Vagrant. Who's heard of Vagrant, and who uses Vagrant? Cool, cool. My slides are actually running on a Vagrant VM because I'm too lazy to install a web server on my Mac, so that's Vagrant, all of that's Vagrant. Basically what it gives you is reproducible environments for local development. And why is that awesome? Well, because I can spin up a VM locally, start hacking on the project and start developing a new feature, and that's the approach we've taken with Drupal CI from the beginning. Every component starts local. It's local first, because at the end of the day, if you can't run it on your local, no one's gonna contribute back. That's just plain and simple. If no one can replicate the environment, it's really hard for someone to actually write features. So Vagrant's very important. The next one's Packer. This is written by the same guys as Vagrant; they have a whole suite of tools, go check them out. But Packer's the one we use for packaging our apps. It can do a lot of things: it'll package the Vagrant VMs, it'll do AWS, I think it does DigitalOcean, and then Docker containers. It basically takes your manifest or your scripts or whatever you have and creates an image, a reproducible image for repetitive builds. So you can kinda see why this is really important in a continuous integration environment, because you wanna spin up the exact same environment every time, and you wanna be able to roll back to a previous image if something happens. Docker, same thing. Who's heard of Docker? Yeah, everyone's heard of Docker, right? It's, yeah, build, ship, run. This is kinda the core of the CI infrastructure, given that running a Docker container is as easy as me going to one of my apps on my Mac and just launching, like, Firefox. That's how it's built.
So Docker's really important because that's how we get our PHP 5.3, 5.4, 5.5, our Postgres, our Mongo. We get all these components and we mash them together into a test scenario. So very, very important. Jenkins, so I'm sure everyone's heard of Jenkins. Very industry standard, no? I'm sure Meg hasn't heard of Jenkins. Go learn Jenkins. Yeah, so Jenkins is very important. This is the thing we're gonna use to basically queue up all our jobs. I'll get more into that later on, but it's basically the heart of your CI, right? This is the thing that takes the builds and then does stuff with them. More on that shortly. So, components. Let's get talking about the components. I keep talking about components, so now you will all know what the components are. This is a little diagram of it. Don't be too scared, it's not too tricky. Essentially, let's talk about this from a patch workflow, right? So I've submitted my patch on Drupal.org. Next step: Drupal.org goes, hey, API endpoint, I have a new patch for you to test. And the API goes, oh, sweet, sounds good to me. Let's tell the results site, so we actually have a new page to send results to. And then let's queue it up on Jenkins, or just the build server, to start the build. Next, spin up some compute, and then submit the results back to the results site at the very end. And Drupal.org is kind of sitting there going, hey, API, how's my build going? And once it's ready, it can present a link to the results site, right? I'll get more into these components. Oh, and there's also monitoring and logging. I won't get too much into that, because that's kind of a solved problem for Drupal.org. That's actually run on Drupal.org infrastructure already, so it's pretty much a matter of talking to those guys, collaborating, and implementing it here as well, into the same system.
But monitoring and logging will be really, really cool, because you could start to get some really easy metrics. If you're collecting those logs, you can graph how many patches we've run, how many failed; you can start to get some really cool information out of that. So monitoring and logging is really important, but at the end of the day it's a solved problem, so I don't need to talk about it here. Yeah, so that's kind of the overview. I thought I'd start there, so that when I start talking about individual components, you'll know how they all interact with each other. Dare I say it, it's a bit of a microservices architecture, maybe, a little bit. Sorry, Meg, were you putting your hand up? Yep, yep, spinning up compute. Yeah, or dispatcher slaves is the more technical term, though the better blanket term for that interaction is really just dispatcher and node, because at the end of the day it doesn't have to be Jenkins, it could be anything. But I'm getting ahead of myself. So let's start at that API, right? It's written in Silex, which is a Symfony micro-framework; you can probably read that on the slide anyway. What that means is there's a good transference of knowledge with people working on Drupal 8, so they can jump in and start helping with the API as well. And it kind of means we align our best practices at the same time. There might be things we learn in here that we can bring into Drupal 8, or vice versa. At the end of the day, because we have a lot of developers working on Drupal 8, or Drupal in general, why not try and bring those guys in here and let them work on just the API component, right?
And all in all it's an abstraction layer over the infrastructure itself, because at the end of the day we can swap out any one of these components and just update the API. So we could swap out Jenkins if we wanted to and then just update our API. Not the API endpoint itself, just the backend, right? The interactions with the API stay the same; only what's going on in the background changes. Yeah, it's a Drupal.org and CLI integration endpoint. So Drupal.org right now handles the queue, I'm pretty sure, I'll have to check up on that. But at the end of the day it means Drupal.org won't be doing as much, right? It'll just be dumb and say, hey, here's my patch, now you take care of it. There's a lot less responsibility for Drupal.org, which could help them when migrating to Drupal 8 in the future or whatever: it's less code to port and fewer responsibilities. It could also be a command-line integration point. So you could write a little bit of Drush plugin action, or maybe write a bot or something like that that integrates with the API itself. We kind of have that on Drupal.org right now, but if we can point it at the API for those kinds of checks, I think that'll work out a lot better for Drupal.org itself, because it means we can just scale out the API instead of scaling out Drupal.org, and that's pretty taxing. What's easier to scale out, a little Silex app or a big Drupal.org site with lots of modules? And those are the URLs. So those are the projects, and we're trying to keep them on Drupal.org as well as on GitHub. That way, if people want the pull request workflow, that's fine, submit a pull request and we'll sync it up; we'll keep the two in sync. And if you like the patch workflow, do it there, that's fine: two entry points. Mm, demo time. See if I got this right. So this is a recording, sorry guys, I don't trust the internet.
So you can probably see the code already, but it's nothing too crazy. This is an early prototype of the API, but essentially, what's a minimum viable product for the API? It's essentially: curl the endpoint from your local, say this is my patch, this is the repo, and then it goes, hey, I got that data, now let's curl Jenkins and pass it on. So it's a proxy, basically. As you can see, not much code, I won't really go into it too much, but not many lines. I just paste in this string here, I wish I could increase the font size, but I'm basically just saying repository, patch, branch. And then I curl that. Sorry, this is actually from a blog post. So you can see this here, "we have sent you a build". It's just a little string; we'll change that to a JSON return, or whatever Drupal.org likes, we'll do a best-practice API basically, but this is from a little while ago, this is just MVP. And then you can see a build kicked off, and it's going through all the stages. Don't worry about that stuff, that's actually just a little build project, but you can kind of see: you curl the API, it goes to Jenkins, which could be anything, and it takes the data and runs a build. And the demos will be better from here, by the way. Okay, so next is the dispatcher, right?
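To sketch that hand-off concretely: the MVP is essentially a small JSON document handed to the API and proxied on to the dispatcher. The field names and URLs below are my assumptions for illustration, not the real endpoint:

```shell
# Hypothetical build request; the real API's fields may differ.
cat > build-request.json <<'EOF'
{
  "repository": "https://git.drupal.org/project/drupal.git",
  "branch": "8.0.x",
  "patch": "https://www.drupal.org/files/issues/example-fix.patch"
}
EOF

# Drupal.org (or your laptop) would POST this to the API, which passes it
# on to the dispatcher and registers a pending entry on the results site:
#   curl -X POST -H 'Content-Type: application/json' \
#        -d @build-request.json https://ci.example.org/api/build
```

The value of keeping the payload this dumb is that the caller never needs to know whether Jenkins, or anything else, sits behind the API.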
So that's that Jenkins box that we had, but we're calling it the dispatcher because we can swap it out at any time. I don't mind Jenkins, but if something else came along we could always swap it out. Essentially it just queues up builds and starts the job runner component, which is where all our logic is. So the Jenkins CI is dumb: it doesn't really have any plugins installed, it's just core Jenkins, and it just knows, hey, I got this, queue it up, spin up some compute, run the project that has the logic, and that's it, right? Packaging status: we've got Vagrant images for local, like I said, and there's an AWS AMI that we'll be publishing soon. We have the AWS AMI and we're currently testing it on an AWS account and getting all the components together. But the goal is that someone will just be able to point and click in Jenkins, fill out some credentials for AWS, and then you'll have bots as well, if you really want something internal. And those are the URLs there. And I maintain those, by the way, the API and the dispatcher, those are my bits. The next one is the job runner, which is where a lot of the build logic is, right? So Jenkins has taken the details and said, let's run this script that has all the logic in it, and that's what the job runner does.
So, in essence, it is the Drupal CI workflow, the testing workflow. It encapsulates Drupal.org testing into a single project, a single Symfony Console app. And the reason we chose Symfony Console is exactly the same reason we chose Silex: to try and get Drupal developers in and working on it as well. So we're not bringing in something crazy, we're not writing it in, I don't know, Python or something like that, where someone from the Drupal community is gonna come in and go, oh, I ain't touching that. They'll be able to jump into this and pretty much know what they're doing. It's responsible, whoop, sorry. Just a question: so is that the reason you chose it, basically? Yeah, so then we can iterate on both components. Silex does API endpoints really well, so that's why we chose that approach there, instead of a full Drupal site in that case. And it means we can swap the API out with whatever technology we want, and same for this: the job runner doesn't have to be Symfony Console moving forward, you could swap it out with whatever you want. But we're trying to stay in the PHP ecosystem and say, hey, Silex does APIs, Symfony Console does command-line apps, and Drupal uses Symfony, so that was the fit there. There's no reason these can't be swapped out for anything else, though. If there's a ton of infrastructure folk that are like, hey, we should write it in this instead, that's fine. It just depends on the type of people we attract moving forward, as to what technology we use. So yeah, thank you for sticking with PHP and not doing Ruby or Python. I was very... It's because of the familiarity for the people you work with, that's a nice trick. Yeah, yeah, sticking to the ecosystem. Oh, well, yeah, could have written it in Go, but you know, you never know.
No, no, I believe in writing it in this. And the benefit of something like this is we can also write a PHP library that two components can use, which I'll touch on very shortly. That's kind of the thing: we're not writing multiple libraries to interact together, so write for one, write for all. Yeah, Postgres and Mongo: just pointing out that we've actually got people using this job runner right now, and they're doing Postgres testing, because like Angie said, it's in a situation where it's like, get it going or we'll drop it. So these people were like, oh, we need a runner, we need something to test this with, and that's why they've jumped onto a very early build of the runner. We've moved on from there, but they're still running that right now to keep things moving along. As I mentioned before, chx is doing MongoDB, and he's slowly using a few more of the components as well, so I'll touch on one of the other ones and show you a bit about that. The maintainer for this component is Jeremy Thorson, and he's the lead of the Drupal CI project, so it seems fitting that he'd be looking after the biggest component of the infrastructure, the actual build process. I think it's cool that he's getting his hands dirty in it. Yeah, thanks, Brian. And I asked him to do a little bit of a demo, and by "little", he interpreted that as 15 minutes' worth of demo, so I've subsetted it to a minute or 30 seconds of him hitting build and it running through, just so you get a quick idea. But I won't steal his thunder, because he's gonna release a whole blog series, that's kind of where this is coming from. So if you're keen to learn more about this runner, definitely check that out, and I'll tweet it out with hashtag Drupal CI.
Anything else? There's probably more with the runner, but I guess that's kind of it. The other thing is the fact that this process can be unit tested now, where we can't really do that with what we've got. The other big change, sorry, is that it's also moving to not be tied or hard-coded to run-tests.sh, so as we change stuff in core, we can basically let the repo dictate. Some of the stuff that's coming through is a drupalci.yml file. If you wanna do that, you'll basically have two options that are very Drupal: it'll just run Simpletest for you, or it'll run this custom file with whatever you want. So then we can start to transition from run-tests.sh to something else, like PHPUnit, if we wanted to. And there is an issue on Drupal.org that myself and Cameron and a whole bunch of other people are working on, so I'll tweet that out later on. That's worth checking out as well, that's the other side of the coin. And Lee, and you reviewed it, thank you, Jibran. Okay, so I mentioned this demo, right? There we go. So I'll kind of set the scene. The Drupal CI runner sort of manages all the containers, right? And this is the suite of containers that we have now, so as you can see, there's a lot of versions. And we're actively working on these as well. So you'll see, oh, where are they? Oh, these are all just the services. So we're actually working on making these PHP 5.3, 5.4, 5.5, 5.6, and then Postgres separately. So you won't actually see this combined "PHP 5.5 Postgres" one; it'll be: spin up a 5.3, and then spin me up a Postgres as well, and then let's test against that. But that's the current state that we have right now. So we have a suite of containers. Jeremy's primed his environment, so he's run a bit of configuration and set up: this is what I want to run.
So he wants to run PHP 5.5 with MariaDB, and then he's going to run it. Hope he runs it soon. Oh, and everyone knows that he runs Debian, number two. So he's run Simpletest, but, almost nailed it, a 5.4 container with MySQL. And let me just stop that there. You'll notice the prefix and then the container name. That "drupalci" actually means hub.docker.com/drupalci, that container. So these containers are very visible in the Docker ecosystem itself, and I think that's pretty cool. We're branching out into other ecosystems besides just Drupal, and we're contributing these back for anyone else that wants to use them. We'll provide more links to that, but yeah, we have a Drupal CI account on the Docker Hub and a suite of containers under it as well. Yeah, I love that "sleeping 10 seconds" business, I might have to work on that. And you can see it's checking out the code base, and I'll pause that. And then, yes, there's a whole bunch of environment variables that control what runs, which you probably won't even need to know; you'll hit send and then it'll build for you. But there are a lot of integration points in the runner itself, a lot of ways to interact with it: a mixture of environment variables, flags, or a config file. So there are a few ways to do it, and that's all documented, so don't be too scared. It's just lots of ways to interact with it if you want to. And that's kind of the demo. It's 15 minutes long, so I'll make sure I send out a link once he's ready to publish it, but it takes you through the entire workflow. As you can see, we've got containers spinning up and running through to working. The other side of this is private Travis. You've probably noticed there are quite a few people hacking on Drupal 8 modules who are moving to GitHub and running Travis, with a .travis.yml file to run their builds.
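For anyone who hasn't seen one, such a file looks roughly like this; this is an illustrative sketch, the module paths and script are made up:

```yaml
# Illustrative .travis.yml for a contrib module; details are made up.
language: php
php:
  - 5.4
  - 5.5
mysql:
  database: drupal
  username: root
script: ./scripts/run-module-tests.sh
```

It's this file, the declared PHP versions, services and build script, that the compatibility layer reads to decide what containers to spin up.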
Well, I started hacking on something a while ago as a bit of a test: loading up that file and then spinning up a container suite to run those tests based on what was in it. And we're starting to work on integrating that library into the runner as well. So it's a compatibility layer for Travis, which basically means anyone that's using GitHub and Travis can move back onto the Drupal.org infrastructure and not change their test suite. The goal of this is all about compatibility and bringing everyone that may have moved off to use Travis back under the Drupal.org banner. It's also a Symfony Console app. That was a bit of a play around, but at least it lines up well. It's a Symfony Console app now, but we're moving to make it more like an API that anyone in PHP can use. Results. So this is the bit that I guess I want to see iterated on the most, because at the end of the day, if you submit a patch and it fails while it's passing locally, you're pretty ticked off, right? You're like, why did my patch fail? And the last thing you want is to click on the link, go to a site, and be like, oh man. You want to see something that tells you why your build failed and then gives you a lot of options to work it out. So to me, this is kind of the face of Drupal CI, and it's the first thing you see besides that "1 pass, 1 million failed" message. And it's built on Drupal 8, which I think is pretty freaking awesome. And it means anyone can have a hack on it too. It's an install profile, it's a Drush make. Anyone can just pull it down, spin it up, and give it a test. A little bit of feedback from me on building this in Drupal 8: it was pretty damn smooth all in all. Yeah, there was one issue in Twig, and I jumped into Drupal contribute and said, oh, this thing's not working, and they said there's a patch for it and it's already RTBC.
So people were already on it. So I added the patch into the make file, rebuilt, and it worked. I'm pretty happy about that. Yeah, it was pretty smooth. The patch, I believe it's in, yeah. I'll check it out. I'll bump up the D8 version and make sure it still runs. And that's the flow that we're taking with this too: each release, we'll just bump up the Drush make file, run the test suite, and verify it passes. If it doesn't, then we'll see where it failed, and I think that will provide some good information back to Drupal 8. And if we deploy this in a scalable way, I think that will also provide a lot of good information back to Drupal 8 development, because we'll have something running in a production environment that we can get a lot of performance metrics on, and see where things worked and where they didn't. So making this Drupal 8, to me, makes a lot of sense. It also exposes a REST endpoint that we interact with, which I'll show you a bit more about later on. With those requirements in mind, a Drupal 8 site and REST, we just went, yeah, might as well do it in D8. We're not gonna do it in D7 and then redo the work moving forward, so let's go with D8. This is the site, themed it myself, Twitter Bootstrap. So if anyone wants to help me... and by no means is this the finished product, it was seriously just MVP. I built it and said, oh, it should probably get themed, and I was chatting to everyone and no one could really come to a consensus on what it should look like. So I just drew up some mockups, looked at Travis, looked at this, and went: list page, node page, and that was kind of it. And then I let the Twitter Bootstrap components do the talking from there. As you can probably see, that's a Twitter Bootstrap component, the menu, it's all there. So that's the result, and this is the build itself.
That actually progresses as it goes through, and that's a taxonomy, by the way. So you can add your own states: it can be new, building, pass, fail. You can add as many as you want and interact with that through the command line. That's just the standard message, but it could be anything. And then we've also got a list of artifacts. So now we can upload Apache logs and MySQL logs, and you can get a bit more information back on why things failed. And just a point to mention: right now we're using S3 for those files, so we upload them to remote file storage. And yep, yeah, definitely, absolutely, without a doubt. So yeah, Nginx and FPM. Cool. But we're pushing up to S3 because, at the end of the day, it could be any provider, but we're pushing to remote file storage, so we can let them handle pretty much all the file storage. In the end, all this is a MySQL backend somewhere, a whole bunch of app servers in front, and files off somewhere else. You don't really even have to deal with files anymore with this, so it's nice and scalable and simple. Yeah, might be demo time. Cool, so, I hope everyone can see that, and I'll pause it on the way through. So the result site has a command-line component to it, and that's what drives this whole thing, right? The command-line component will live on the bots, but it could also be on your local development environment if you want to interact through it, wherever it is. Wherever the point is that does all the work, that's where your command-line client can be, interacting with the result site. There's just a little basic configuration file: the admin password, the URL of the site, the S3 bucket. Oh, nope, sorry, no S3 key or secret for you today. And let's get it started, shall we? So, command-line client, create, made a title, just a random title, nothing too serious, right?
But that could be anything; it's not locked in in any way. It could just be "my awesome patch" or whatever it is. We can make it conform to Drupal.org standards, or make it whatever we want, really. Completely open. So you'll see that it built, sorry, created a node on the site. I'd actually already run this demo once, so let me pause it there for a sec. It created a node, and then I got a list of states: I can run the result CLI's states command and I just get the taxonomy back, the name and the percentage, and that can be whatever you want. You can have whatever crazy states you want: building, running PHPCS, running Simpletest, and so on, passed, failed. It's per site in this case, since it's a taxonomy on that site. Someone asked whether all of Drupal.org would go through the same site: yes, but we could split things out, say a D8 site and a D7 site, into separate apps. That's completely open to people's demands; if we have a need for more taxonomies, absolutely. We could even make the CLI drive the states instead of the Drupal site holding them, so we could create them on the fly if they don't exist. So that's listing the states, so you know what's going on. I'd been reusing my examples, so I kept getting build 13. This is me changing the state, and now I'm going to prove that the state changed; that was my build from before, so don't worry about that. My project's now building. I'm proving that I have a set of files: these are derived from run-tests.sh, but they're in JUnit format, which is awesome. And let's hold the phone right there. Artifacts, results, and we have our result CLI generating that string message we all love that says one passed and a million failed.
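Because the artifacts are in the common JUnit XML format, generating that "one passed, a million failed" message is just a matter of counting testcases. A rough sketch of that step, not the actual client's code:

```python
import xml.etree.ElementTree as ET

def summarize_junit(xml_text):
    """Count passed/failed testcases in a JUnit XML document and build
    the familiar one-line summary message."""
    root = ET.fromstring(xml_text)
    passed = failed = 0
    for case in root.iter("testcase"):
        # A testcase with a <failure> or <error> child counts as failed.
        if case.find("failure") is not None or case.find("error") is not None:
            failed += 1
        else:
            passed += 1
    return "%d passed, %d failed" % (passed, failed)

sample = """<testsuite name="example">
  <testcase name="testOne"/>
  <testcase name="testTwo"><failure message="boom"/></testcase>
</testsuite>"""
print(summarize_junit(sample))  # 1 passed, 1 failed
```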
This way we're not making the result site itself do the heavy lifting, because if you wanted to do something like this on the Drupal site, it would get pretty complicated. You'd either upload the files and do the calculation right then and there, which would slow down your push, or you'd push the files and then have a cron batch process running in the background generating the summaries. Doing it this way means our nodes, the things running the build, also compute the result. It also means that if you already had those artifacts locally from a local build, you could use this little client to generate a message without having to go through everything. And that could be expanded: right now it just says assertion values and errors, but it could grow into a summary message plus everything that passed and failed. That's just a minimal viable product right there that we could absolutely expand on. So here I am. I've built that message, and now I'm actually going to push my artifacts. Oh, sorry, first I pushed the message: I checked it out locally, had a bit of a check, said, hey, is it working? Then I provided a build number and sent the message up, and now I'm sending my artifacts. They go up to the remote store, and then all the URLs are sent to the result site. There we go, got all our files, and you can pull them down, download them, and do whatever you want with them. And I might as well mark the build as passed, even though it didn't technically pass, but that's okay. So I'm marking it state four, which is passed. I had a network issue at that stage, so let's try that again, shall we? And hey, it passed. So that's the result component. Actually, I'll go back to that: sprints on Saturday. Here's my one goal for the result site.
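The artifact push step described above boils down to: upload each file to the remote store, then send the resulting URLs to the result site. A toy sketch of that mapping; the bucket name and URL scheme are invented for illustration, and the real upload (to S3 or any other provider) and payload format may look quite different:

```python
# Hypothetical artifact-push helper. Bucket name and URL scheme are made up.
def artifact_urls(build_id, filenames, bucket="example-artifacts"):
    """Return the remote-store URLs the result site would record."""
    return [
        "https://%s.s3.amazonaws.com/builds/%d/%s" % (bucket, build_id, name)
        for name in filenames
    ]

def artifact_payload(build_id, filenames):
    """Payload sent to the result site's REST endpoint after the upload."""
    return {"build": build_id, "artifacts": artifact_urls(build_id, filenames)}

payload = artifact_payload(13, ["apache.log", "mysql.log", "results.xml"])
print(payload["artifacts"][0])
```

Because only URLs ever reach the Drupal site, the app servers never touch the files themselves, which is what keeps the whole thing scalable.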
As you know, that's built on Twitter Bootstrap, but we want it to look like Drupal.org at the end of the day; it shouldn't look like this for everyone in the Drupal.org ecosystem. We'll most definitely keep a theme like this for people who want to use the result site outside the Drupal.org ecosystem, but inside it, we definitely want it to look like Drupal.org. So I'm calling for help to port the Bluecheese theme to Drupal 8; that's the base theme used for all the Drupal.org sites. If we can port that over, everyone else can use the Bluecheese theme, and we can also benefit from it here. I had a bit of a chat with John Albin, who's running the theming portion of the sprint (sorry, I shouldn't say the CSS portion, that's unfair). So that's my one goal for the result site: let's make it look like Drupal.org. And I had a few little project ideas for what you could use this for. You could create a gist.github.com kind of clone: spin up an instance somewhere as a dumping ground for results, so when my build fails and I'm wondering why, I send all my artifacts up there and hand the URL off to someone else. It's just a crazy idea, and someone could do it if they want to, but it proves we can use it for more than what we're using it for. The other idea is a Drush command to check what your builds are doing: you run a Drush command and it gives you the state of what's running, what's not, what's passed. Just a little command line client for the result site. It could even be baked into the actual CLI itself, but maybe Drush could leverage it. So yeah, call to arms.
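That build-status command is just an idea at this point, but the output might be as simple as a small listing built from what the result site's REST endpoint returns. A toy sketch, where the field names (`id`, `state`, `title`) are assumptions rather than the real API:

```python
# Hypothetical status listing for the proposed Drush/CLI command.
def format_status(builds):
    """Render one line per build: id, state, title."""
    return "\n".join(
        "#%-4d %-10s %s" % (build["id"], build["state"], build["title"])
        for build in builds
    )

builds = [
    {"id": 12, "state": "passed", "title": "My awesome patch"},
    {"id": 13, "state": "building", "title": "Another fix"},
]
print(format_status(builds))
```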
So I mentioned the Bluecheese theme, but the other thing that I'll personally be sprinting on, and would love a lot of help with, is the API. After Saturday, I want to have a mock interface generated for the API, which is basically our contract with Drupal.org: when you send a request, this is what we will return. That means Neil Drumm from the DA can go off and build the Drupal.org module against it. So if anyone wants to work on a Silex API with me, please come say hi. The other sprint going on is the Drupal Accelerate sprint: the DA Accelerate fund gave a grant to the Drupal CI folks to get us sprinting on this thing for a few days to a week, and I'd love it if more people came online and helped out as well if they have time. There'll be blog posts around this as well; I'll basically be recording each day via Twitter and my blog. I also want to thank Angie for supporting Drupal CI through the DA Accelerate fund. She contacted me on IRC one day out of the blue and said, hey, we need to talk about this, and we did, and I think it's great. So thank you for the support.
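The "mock interface as a contract" idea amounts to publishing an example response shape up front so the Drupal.org side can build against it before the real API exists. A sketch of what that might look like; every field name below is a placeholder, not the agreed contract:

```python
# Hypothetical contract: what a "create build" response might contain.
# Field names and types are placeholders for illustration only.
BUILD_CONTRACT = {
    "id": int,
    "title": str,
    "state": str,
    "artifacts": list,
}

def matches_contract(response, contract=BUILD_CONTRACT):
    """Check a response dict has every contracted field with the right type."""
    return all(
        key in response and isinstance(response[key], expected)
        for key, expected in contract.items()
    )

mock_response = {
    "id": 13,
    "title": "My awesome patch",
    "state": "building",
    "artifacts": [],
}
print(matches_contract(mock_response))  # True
```

Both sides can then test against the mock independently, and the contract only changes by agreement.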