I wasn't planning on doing this video at all, but as I was thinking earlier today, while working on something that I'm going to talk about in this video, I realized I really should, because this is a side of information technology that I see a lot of developers not get or appreciate anywhere near as much as they should, and it includes some parts that don't really get mentioned much. So I'd like to do that.

As pretty much anybody watching this knows, I manage a rather large set of libraries and some other things, mostly text processing, but there's other stuff in there, like a console wrapper that's not going to be a wrapper for much longer; there's a large overhaul coming for it. I've deleted or archived many of them, and parts of them aren't in constant development, but Stringier is the big one, and there are a lot of repos in it now. I think it's up to seventeen or something, not all of them actual libraries; some serve other purposes. There's a lot going on there. So obviously I want to be focusing on the development itself; the other parts of that, I don't really want to be doing. This should make sense when it comes to trivial things that are easy to forget about, and if you forget about them, they don't get done, and your stuff lags behind and suffers because of it.

So what's one of these examples? Making sure all your dependencies are up to date. That's actually a big one, and something I see a lot of products falter on. Back when I started doing this, it was still in preview as a third-party bot; it's been merged completely into GitHub at this point. That's how much of an endorsement this is: it's not just me endorsing the product, GitHub literally endorses it to the point of fully integrating it. I'm talking about Dependabot. Dependabot is a bot that understands certain languages, most of the common ones, so typically speaking you'd have no problem using it. It's able to go through, understand what your project depends on, and issue pull requests to update those dependencies when updates come out. It also gives you some useful information: Dependabot keeps track of how many of its pull requests pass and fail, and gives you a sort of confidence value for how likely an update is to be fine to merge outright, rather than something you need to check out first. I merge regardless, and I'll talk a little bit about that; I would not recommend doing that in the majority of cases, but I do it.

Now, Dependabot is pretty versatile. It can be set up to do all sorts of specific things: you can tell it to use specific labels, which is actually super convenient, and you can have it rebase the pull requests if that's something that needs to be done. It's fantastic. I use it together with a pull request automator, in my case Kodiak (KodiakHQ), though there are other options you can use. The two together wind up automatically issuing and merging the pull request, then deleting the branch that was created. All of that gets completely automated, so I never need to touch updates.
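Just to give a rough idea of the setup, here's a minimal sketch of a `.github/dependabot.yml` for a NuGet project. The label name is purely an example, not necessarily what I use:

```yaml
# .github/dependabot.yml -- minimal sketch for a NuGet project
version: 2
updates:
  - package-ecosystem: "nuget"   # scan NuGet package references
    directory: "/"               # where the project/solution lives
    schedule:
      interval: "daily"          # check for new versions once a day
    labels:
      - "dependencies"           # example label; an automerge bot can key off this
    rebase-strategy: "auto"      # keep the pull request rebased on master
```

Kodiak reads its own configuration from a `.kodiak.toml` in the repository; pointing its automerge label at the same label Dependabot applies is one way to tie the two together, though I won't go through that file here.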
Occasionally I'll try to sync my local repo, and it gets told it can't, because there's other stuff ahead of it. So I go and pull those changes, then push my changes back, so that my updates are now at the head. And that's it; that's my process.

If for some reason Dependabot or Kodiak mess up, we get into the next part of this automation: continuous integration. Well, all of this is technically various parts of continuous integration, but strictly speaking, this would be the build service. I use Azure DevOps, but there are tons of different ones: CircleCI, Travis, AppVeyor, all sorts of options. You don't have to use Azure DevOps specifically; I have my own reasons for using it. Use whatever's appropriate, whatever you prefer, whatever's suitable for you. Jenkins was a popular one; I don't know if it still is, but that's a popular self-hosted option.

Every time there's a commit to master, and of course Kodiak merging puts things into master, and any of my own stuff goes into master too (unless I'm working in a branch, in which case it lands in master once I merge the branch), the service goes through, builds the thing, and makes sure there are no errors. It also does this on every pull request. Now, if Dependabot happened to get something wrong, if there's some sort of conflict, it won't build, and in that instance Kodiak actually won't merge it, because it's a failing pull request. You see why I'm not afraid to merge automatically now. But I have it set up to do those checks, and there's stuff like branch protection and other things you've got to do in order to do this safely. I'm not going to talk about how to set those up; that might be a thing for another video, and maybe some specialized folks can cover it, because I'm not a big DevOps person. I just have a fairly basic understanding, and definitely an appreciation, of DevOps.

The other thing it does: since I typically commit directly into master, because I'm pretty much the only developer for the majority of these things (I've had some occasional contributions, but I'm the developer), I typically test my stuff locally before I push, though I don't always remember to. If by some chance I forget, or there's some funky thing going on that only happens on my machine, Azure DevOps will build and run the tests in its own environment, which is nice and isolated and doesn't have all the other things going on that my computer has. That gives a nice check that everything's working appropriately, not just on my machine.

This is also a fantastic way to test code on multiple operating systems very seamlessly. The way you used to have to do it, and sometimes it's still justified, is to have multiple local machines that you test on. That's obviously tedious as hell, but sometimes you still need it. Instead, you can set up your build pipeline to build and test on multiple containers or hosted images. By default I run mine on the Windows image, but that's just because it's the easiest for simple .NET Core tests. Typically my libraries don't have any platform-specific code, so there's no reason to test on a specific platform. That being said, it does happen: Streams is an example of one I have to test in a platform-specific way, and Consolator would be another, though that one's not part of Stringier.
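To make that concrete, here's a minimal sketch of an `azure-pipelines.yml` along these lines: build and test on every commit to master and every pull request, across three hosted OS images. The image names and the exact steps are illustrative assumptions, not my actual pipeline:

```yaml
# azure-pipelines.yml -- minimal sketch, not an actual production pipeline
trigger:
  branches:
    include:
      - master            # build every commit to master

pr:
  branches:
    include:
      - master            # build every pull request targeting master

strategy:
  matrix:                 # run the same steps on three operating systems
    Linux:
      imageName: 'ubuntu-latest'
    macOS:
      imageName: 'macOS-latest'
    Windows:
      imageName: 'windows-latest'

pool:
  vmImage: $(imageName)

steps:
  - script: dotnet build --configuration Release
    displayName: 'Build'
  - script: dotnet test --configuration Release --no-build
    displayName: 'Test'
```

If a Dependabot pull request fails this build, the branch protection check fails, and Kodiak simply won't merge it.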
And that kind of build system is actually fantastic for testing and making sure that your stuff always works on all of those platforms.

There's another part of this that I don't see mentioned all that often, and it isn't appreciated anywhere near as much as it probably should be: making sure your documentation is in sync. See, this is the thing I was working on today, because I wasn't keeping my documentation in sync. In fact, I hadn't rebuilt my documentation in several months, which is less than ideal. So I wanted to develop a pipeline for building these docs.

Now, I've been building the docs through something called DocFX, which Microsoft created largely as a replacement for Sandcastle. It's a whole system, because it has to work with multiple languages, but the idea is that you write documentation in the language itself, through XML documentation comments. That gets built into a language-agnostic metadata file for the documentation. The documentation generator then goes through, analyzes that metadata, and builds the documentation from it, so you can have source code in any language and still assemble the entire thing together.

DocFX can be run as part of the MSBuild system. You do this by downloading a specific NuGet package, which hooks into the build. I've been doing it as part of a library project with no actual sources in it; I do this because otherwise you wouldn't have anything to run, and an empty library project is suitable for that. You download this NuGet package, build the library once, and it sets up all the scaffolding DocFX expects. Then you configure it.

Now, I'd been running into a bit of a problem, and it's why I was having to do manual builds before: I've split everything into multiple repositories, and some of those repositories are private. But I want many parts of the documentation to be publicly accessible, not just the generated documentation itself, but also the article Markdown used when building it, because I want people to be able to contribute to those. Okay, so what are my options?

I could put the entire thing in a public GitHub repo, which is what I was doing before, and have Git submodules for each repository. This works, but there's an issue: when you have a submodule pointing at a repo you're trying to keep private, things get a little funky. It's not really private anymore, unless you go through and remove the submodule information every single time before you push. And are you going to do that? Probably not. So, fuck.

GitHub allows private repos for organizations now, which is good, because Stringier is set up as an organization on GitHub. I'm not going to talk about why I did that in particular; it's technically not an organization, but it made way more sense for me to set it up as if it were. So I have that option, but Azure DevOps can also host repositories, and either one would work. I happened to put the build repository in Azure DevOps, because it's not something I'm ever going to need to make public; it just sits there. That was a completely arbitrary decision; either one would work, that's just what I did. So you create your DocFX library project in there, set it up, and create all your submodules. Since this is a private repository, it doesn't matter that you're submoduling private repos, because nobody's ever going to see them. But how do we still keep the articles public?
Well, you create a public repo with nothing but the articles in it. Then you go into the DocFX scaffolding, delete the articles folder, and create a submodule at the exact same location, pointing at the public repo. Boom, you've got your articles in there. Now you just need to tweak how your build pipeline works. Obviously this isn't a standard project by any means; you don't need to run tests, there's nothing to test. But what you do need to do, every single time you run the build, is update your submodules.

One of these submodules, of course, has to be the output for the documentation, and I need to explain that part. I've been hosting my documentation on GitHub Pages. There are other options, of course, but I figure, since DocFX handles the build anyway, let's just do this straight into... whoa, get out of here! Freakin' hornet, all up in my face. I don't need none of that. Since DocFX does the building and everything, I don't need something to manage that entire process. There's Read the Docs, which is a great option for many people; it's just excessive for me, because I don't need all of that. I literally just need a place to host the documentation. The documentation is built as a static website, and GitHub Pages serves static websites, so let's just set up a GitHub Pages site for Stringier. This was one of the factors behind it being an organization instead of my own personal repos; just one of them, there were other reasons too. Okay, so submodule that into the documentation project as well, and set DocFX up to use that submodule as the output directory where the site gets generated. Normally it generates inside the project itself, in a specific directory called _site, but you can change that. So now, as part of your build process, not only do you update your submodules, but you also commit and sync that output submodule, and you've just pushed your documentation automatically. I'll sketch roughly what that looks like below.

But how do you trigger a build for this every single time? That's where we get into some of the really nice things about CI pipelines: you can set triggers based on all sorts of different things. The typical thing you'll see done is to trigger on every commit to master and every pull request, which is reasonable. But we're never going to be writing this HTML ourselves, there's a generator for it, so those triggers are never going to fire. There are other triggers, though. It can be on a time schedule, say building it every night; that would be a valid option. Or you can trigger it based on the completion of other CI pipelines. Now, remember how I said I had already set everything up for each of these libraries as part of its automation with Dependabot and Kodiak? Because you don't want Kodiak merging things that'll break your master branch, there's branch protection, and as part of that, a safety check that the Azure pipeline builds and tests successfully. So that pipeline is already there for every single library, which means you can easily set this up: every time one of those libraries builds successfully, it triggers a rebuild of the documentation, which pushes new documentation. The documentation gets written right in the sources, together with the methods, so it stays up to date that way, but it also stays up to date on the website, because the entirety of the pipeline has been automated.
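Here's a rough sketch of what that docs pipeline can look like. The submodule path, the commit message, and the assumption that the docfx CLI is on the agent's PATH are all illustrative; authentication for pushing to the submodule's remote also has to be set up separately (for example with a token), which I'm glossing over here:

```yaml
# Docs pipeline -- illustrative sketch; paths and names are assumptions
pool:
  vmImage: 'windows-latest'

steps:
  - checkout: self
    submodules: recursive        # pull in the article and output submodules

  - script: git submodule update --remote --merge
    displayName: 'Pull the latest articles and sources'

  - script: docfx docfx.json     # assumes the docfx CLI is installed on the agent
    displayName: 'Generate metadata and build the static site'

  - script: |
      cd site                    # hypothetical path of the GitHub Pages submodule
      git config user.email "build@example.com"
      git config user.name "Build Pipeline"
      git add --all
      git commit -m "Rebuild documentation" || exit 0
      git push origin HEAD:master
    displayName: 'Commit and push the regenerated documentation'
```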
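And the trigger side looks something like this: Azure DevOps pipeline resources can fire a run whenever another pipeline completes successfully. The pipeline names here are hypothetical placeholders:

```yaml
# Fire the docs pipeline whenever a library's pipeline succeeds
resources:
  pipelines:
    - pipeline: coreLibrary          # local alias for this resource
      source: 'Stringier.Core CI'    # hypothetical name of a library pipeline
      trigger:
        branches:
          include:
            - master                 # only successful master builds count
```

One of these resource entries per library, and every successful merge anywhere in the project ends with fresh documentation on the website.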
This obviously saves me tons of time and resources. So when people ask me how I manage so much, especially considering this isn't my job: clever use of automation is a big part of it.