Because it provides end-to-end visibility, a DevOps platform offers us the opportunity to enlarge the DevOps tent and start to include roles that have traditionally existed in their own silos and been left out of the DevOps, or even development, lifecycle. One of the most logical roles to include is that of the tech writer. Despite the fact that technical writers sit so close to the coding process and work so closely with developers, documentation has usually existed in a completely separate system. In our next talk and demo, Alec Clews is going to lay out a detailed plan to involve tech writers in the process using documentation as code, ultimately outputting a GitLab Pages site. Let's check in and see how that works.

Thank you, Cormac, and welcome everyone. I live and work in Australia, and I would like to start with a customary acknowledgement of country. I'm speaking to you from the lands of the Wurundjeri people of the Kulin Nation, and I wish to acknowledge them as traditional owners. I would also like to pay my respect to their elders, past and present.

Why am I talking to you about docs as code? What are the problems we are solving here today? When projects grow, documentation is often managed by dedicated teams of technical writers to improve scalability and quality. They bring valuable skills, experience, and their own sets of tools and processes. However, frequently the result is that content can only be managed with hard-to-use and sometimes expensive tools. Content is stored in proprietary formats, and only technical writers can make even simple changes. Not getting the whole team involved means we lose contributions and the perspective of other team members. Developers also learn less about the customer's experience of their product. At some point, publication can become slow and cumbersome, and it can be a mad scramble to get the documentation ready for each release. Docs become the bottleneck in many projects.
Let me tell you about my journey and how I avoided this problem. I've been in IT for a number of years, working on everything from mainframes to IoT devices, and I'm the developer relations geek at PaperCut Software in Melbourne. I write a lot of documentation for our developer community. When I took on responsibility for developer relations, I knew that writing tools like MadCap Flare and Microsoft Word were not going to be productive enough for me. So I took inspiration from my Unix roots and started using plain-text content and developer-style workflows to get content out the door. I was inspired by 1970s technology and hadn't yet heard about docs as code. But once I started, I quickly discovered that the technical writing community had embraced the same approach, with updated tools and processes. So now I can achieve much higher productivity using things like GitLab, static site generators, lightweight markup formats, and many other things. I can even use some of my old tools like sed and grep. Even better, the other developers in my team contribute and publish content for the project as well.

I've only got 25 minutes, so I've created an example set of notes on a GitLab Pages website, and that website is itself maintained using the docs as code approach. So hopefully it will serve as an example for you to see how this is done in a practical fashion. Here's a page from those notes, and I've deliberately opened this one, about the role of the tech writer, for two reasons. First, in a little bit I'm going to demonstrate how we can make changes to this page. Secondly, it's important to remember that tech writers are still valuable members of the team, but now we can get them to focus on higher-value activities such as creating information architectures or maintaining our style guides.
Now, this page is actually stored inside the GitLab repository as reStructuredText, which is a lightweight markup language. It's quite simple to use: it's just ordinary text, with simple adornments to give it structure and provide other entities in the document. I'll talk a bit more about this in a minute. If we go back up to the top level, as well as the actual content, it's worthwhile drawing your attention to the GitLab CI YAML file, which shows some of the automation techniques that I've put in. I'll be talking about this in more detail in a minute, but you should definitely come back and have a look at this file for yourself later on. Finally, and this is quite specific to this project, I have all the Docker-related information. I'm not going to go into this in more detail, but you can come and review it later, and I'll explain why it's like this and how it's used.

So let's talk a bit more about docs as code, first of all at the philosophical level. The idea is that we've adopted software development practices and DevOps practices wholesale into the documentation process. The net result is that we can try to ensure that the whole team is contributing to our documentation, and that the documentation is useful and adds value for our customers. In addition, we're using agile techniques to make sure it's always current and up to date, and that we can provide release updates in a timely manner. On an ongoing basis, of course, we're also using techniques like retrospectives to proactively improve our processes and make sure that we can stay productive across the whole writing and publishing process. That means we're paying attention not just to the authoring and reviewing of content on the workstation, but also to publication and distribution to our customers.
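To make the "simple adornments" idea concrete, here is a rough illustration, not the actual page from the project, of what a reStructuredText source file looks like:

```rst
The Role of the Tech Writer
===========================

Tech writers remain valuable members of the team, now focused on
higher-value activities:

* creating information architectures
* maintaining style guides

.. note::

   The underline and the directive above are the "adornments":
   ordinary text plus a little structure.
```

The section underline and the `.. note::` directive are all it takes; the rest is plain prose that anyone on the team can edit in a text editor.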
"Keep the docs close to the code" is a mantra that you'll often hear, and that's certainly valuable, because we should think about docs as an integral part of our product. However, despite what you may hear in some places, it can often be beneficial to keep documentation in a separate repository, because you're following different processes; but definitely they all start as part of the same project.

So at a sort of physical level, what does it look like? As I mentioned previously, you're using lightweight text-based markup formats; you're not keeping your content in proprietary binary formats. I already showed you a quick example of reStructuredText, which is my favourite. It's quite powerful, but still reasonably easy to use. I suspect the most popular format is Markdown, made popular by websites like GitLab. It's much easier to understand, although it is less powerful; it's widely adopted, and that may make it a good choice, certainly for your first project. Once you've got your content into these text-based formats, you can start using text-processing tools, not just text editors: you can use stream editors, file inclusion, and that type of thing to process your content. And you can do all of that using developer-style workflows, so all the things that we're used to, like issues and merge requests, we can adopt to manage our projects with. Finally, when we deliver the content, invariably we'll use something like a static site generator; a wiki is also an option. These are far easier tools to deploy and use than a traditional content management system. Throughout the whole process we're using automation to get as much done as we can without involving people, and we're using build tools for two reasons. One is to wrap all the complexity into something that's much easier to use.
We can just issue single commands to do quite complex activities. And secondly, build tools will invariably only build what needs to be built, which is a lot quicker, specifically during the review process. There's lots of information on the internet, and a couple of references are shown here. The search terms you need are "docs as code" and "docs like code".

Let's talk a bit more about automation, which comes under two headings. The first is that you can generate content at build time, which saves you a certain amount of writing. A really productive bit of content generation is having your diagrams created for you automatically from text descriptions. My favourite tool for this is PlantUML, but there are others available, and it's certainly easier to maintain diagrams under version control as text than having to build them using an image manipulation tool. You can extract text from other sources, like files, and insert it into your documentation. For example, you can extract examples of configuration settings from the product repository and insert them directly into your text; if the format actually changes at some point in the future, your documentation is automatically updated. You can also generate textual information and insert that into your documentation. For instance, every time your documentation is regenerated you can extract the current version number of your product, by running the product and pulling out just that piece of information; again, it's not something you've got to remember to manually update. You can take it one stage further: you can use automated translation tools, and you can capture user interface and HTML screenshots and insert those into your documentation as well.
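As a minimal sketch of that version-number idea, here extracting the version from a properties file rather than by running the product, and with every file name and the `@VERSION@` placeholder invented for illustration:

```shell
# Sketch: splice the current product version into the docs at build time.
# version.properties, intro.rst.in and @VERSION@ are all made up here.
printf 'version=21.2.3\n' > version.properties

cat > intro.rst.in <<'EOF'
This guide covers release @VERSION@ of the product.
EOF

# Pull the version out of the product file...
VERSION=$(sed -n 's/^version=//p' version.properties)

# ...and substitute it into the page source during the build.
sed "s/@VERSION@/${VERSION}/" intro.rst.in > intro.rst

cat intro.rst
# -> This guide covers release 21.2.3 of the product.
```

Run as part of the build, nobody ever has to remember to update the number by hand.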
Here's an example from PlantUML. The diagram on the right is dynamically created every time the documentation is republished, and the text description in the middle column is far easier to maintain and track. You can diff it between different versions, for instance, so you know what's changed, and it's all you actually have to edit.

The second thing you can do with automation tools is run verification processes. A nice simple example of that is spell checking, and a top tip: always maintain a project word list of words that are specific to your project, and add that word list into your version control repository so that it travels with your project; then the updates are automatically deployed as well. You can check links, to make sure that links aren't broken in your documentation. To take it to the next level, you can use style checking programs or grammar checkers to try and catch as many errors as possible. The example project that I showed you earlier on has a tool called alex in it, which checks for profane or inappropriate language and will stop deployment if it detects a problem. And you can calculate document metrics if you want to: things like word counts or reading levels, that type of thing.

So where does GitLab fit into all of this? The first thing that GitLab provides is Git, of course, which allows us to track and record changes. We can use push and pull to transmit content between platforms in a controlled manner, and when we start committing or pushing things into repositories, we can trigger automation events, for instance in pre-commit hooks, on the GitLab CI/CD platform, and when we're doing merges. GitLab gives us issues and merge requests, and we can adopt those in the same way we do for other projects, to coordinate across this particular project, or as many projects as we want, and track all the work that's going on. So there's a line of sight from request to publication.
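The project word list tip can be sketched like this. In a real project the list would feed a proper spell checker such as aspell or hunspell; the tiny stand-in dictionary here just makes the mechanics visible end to end:

```shell
# Sketch: a project word list that travels with the repository.
# The toy "dictionary" stands in for a real spell checker's dictionary;
# project-words.txt is the part that gets checked into Git.
printf '%s\n' the guide covers server install > dictionary.txt
printf '%s\n' PaperCut PlantUML > project-words.txt

printf 'the guide covers PlantUML instal\n' > page.txt

# Flag any word found in neither the dictionary nor the project list.
# Only the genuine typo is reported:
tr -cs 'A-Za-z' '\n' < page.txt | sort -u |
  grep -vixFf dictionary.txt |
  grep -vixFf project-words.txt
# -> instal
```

Because the word list lives in the repository, a product name like "PlantUML" is never flagged on anyone's machine, and additions to the list deploy with everything else.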
The GitLab CI/CD automation platform allows us to run validation over our content, some of which I was talking about earlier on, and it finally allows us to deploy our content for publication. The other thing that I haven't talked about is that the CI/CD platform allows us to build container images, basically giving our developers, or our writers I should say, easy access to a toolkit that they can use to edit and review their content. Once we build those container images, we can store them in the project's container registry, ready for use by any member of the team. And we can use the same image for deployment as we use for review and editing, which means that when content is reviewed locally on workstations, we've got a high degree of confidence that we're going to see exactly the same thing as our customers will when it's deployed into production. That's a huge time saving: containers give us standard documentation toolkits.

Finally, once we have generated all this content, we can use GitLab Pages to deploy it. The great thing I love about GitLab Pages is that you can have it as either a private or a public feature, which means that if you have a beta project, and this is something I'm doing at the moment, you can make your pages private and expose them to your testers and your beta customers; then once it's ready for final release you can flip the switch and make it generally available.

So let's actually see some of this in action. The first thing that we need is to be able to provide people with a standard Docker image that they can run on their workstation and use for editing and reviewing. That's just a simple Docker-in-Docker build, and you can take this and run it pretty much as is.
The only thing that I've done here that might be a little unusual for some people is that I've put all of the Docker build files that I need, so the Dockerfile, the package dependency listings and so on, in the Docker build context directory, and that just keeps them out of the way of people who are not really interested in them. Once we've created one of these images, we can use it on our workstation to do local work, and that's where it gets a bit interesting. So let's actually do that. All we have to do is install Docker Desktop and Git, make sure that the editor or reviewer has access to a text editor, and then we can give them access to this process. Now, this is pretty hard for people to type in on their own, so typically we package these commands up into scripts, or maybe we'd even provide people with a button on a GUI so that they could do this. It all happens magically behind the scenes, but this is the mechanics of how it runs under the hood.

So let's actually do this. What I've done is create a change request; it's actually change request number 29. I've got to take these two bullet points and combine them into a single point, because there's not much point having them like this, and this is the file that I need to edit in order to do that. So let's flip over to my text editor and start my demo. The first thing we do is create a new branch for our fix. It's ticket number 29, so I'm going to use 29 throughout this. The next thing I'm doing is opening up a preview of the current content. This is the content running on my local system; because it's a static site, it doesn't actually need a web server to serve it up. If you're using other products, like Hugo for instance, then you might need to run Hugo in development server mode. The mechanics vary between different systems, but the principles are identical between them all.
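That long docker command, packaged up into a script as described, might look something like this sketch. The image path, port and make target are all assumptions, and a DRY_RUN switch prints the command instead of executing it, so you can read it without Docker installed:

```shell
# Sketch of the wrapper script: one easy entry point hiding the long
# docker invocation. Image name, port and make target are invented.
preview() {
  run=docker
  [ "${DRY_RUN:-0}" = 1 ] && run=echo      # print instead of executing
  $run run --rm \
    --volume "$(pwd):/docs" \
    --publish 8000:8000 \
    registry.gitlab.com/example/docs-toolkit:latest \
    make livehtml                          # rebuild on every file save
}

DRY_RUN=1
preview
```

Writers just run `preview`; the container mounts the working copy, serves it on a local port, and rebuilds whenever a file changes.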
So this is my local preview environment, and I've got to go into this page here. If I look at the actual ticket, I have to combine these two bullet points, which are these two bullet points here, in fact. So let's flip back into my editor, where I happen to have that open. In order to regenerate this content when it changes, I need to run my toolkit. This is just quite a long docker command, but the interesting thing, right at the end, is this reload feature. I've now got this Docker container sitting there waiting for changes to happen. As soon as I come in here, make this change, and save it away, it will automatically rebuild that for me without me having to do anything, and if I hit refresh on this screen, the change is visible and I can view it. This is exactly how it's going to look on the final website. Let me just close that down.

Being a quality professional, I'm going to run a spell check before I check it in, and that's just running make spelling. For demonstration purposes I actually did want a spelling mistake in there, so I'll save that spelling mistake away, run the spell check, and it has found the mistake. Now, if I was doing it properly I'd fix this and test it again, but let's not fix it; let's just try and check it in immediately. So I'm going to do git commit and try to close the ticket, but my pre-commit hook is running all the verification steps, there are actually three verification steps, and it has failed on the spelling; it is complaining that my spelling is not up to snuff. So let me fix that spelling mistake, save it away, and try to do the commit again.
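Under the hood, a pre-commit hook running those three verification steps can be very small. This is a sketch, not the project's actual hook: the make targets are assumptions based on the checks described here, and the script would be installed as .git/hooks/pre-commit:

```shell
# Sketch of a pre-commit hook chaining the three verification steps.
# The make targets (spelling, linkcheck, language) are assumptions;
# in a real repo this file lives at .git/hooks/pre-commit.
cat > pre-commit <<'EOF'
#!/bin/sh
set -e            # the first failing check aborts the commit
make spelling     # spell check, aware of the project word list
make linkcheck    # no broken links in the docs
make language     # e.g. alex, for inappropriate language
EOF
chmod +x pre-commit
```

Because of `set -e`, the commit stops at the first failing check, which is exactly the behaviour in the demo: the spelling check failed and nothing else ran.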
This commit will take longer, because before it failed on the spelling mistake, which was the first check; now it's going to do the spelling check, then check the links, and then check for inappropriate language. So it's going through now, checking that all the links in my project work, which takes a couple of minutes. Right, so the change is committed, so now I can push it. I'm using push options, the -o flag, to create the merge request automatically for me, so I'm going to push that branch up and say create the merge request, and it does that for me automatically. And here is the merge request. If I open it, it shows me that it's running some pipelines. If I look at these pipelines, in fact it's over here, it's actually running all the verification checks again, because of course it's perfectly possible for somebody on the local workstation to bypass the pre-commit hooks with the -n option, or even delete all the pre-commit hooks, or whatever, so we need to run these checks when we get a push to the cloud as well. That's going to take a few minutes to go through. Now, I've actually implemented this quite inefficiently; if this was a production system, this would all be a single job, but I wanted to show people that you can have multiple jobs running in parallel, so I split them out like that. Eventually that should pass, and I can merge this when the pipeline succeeds and delete the source branch, which I'm going to do, because once it merges onto main it will also go through the same, or a similar, CI/CD process, but this time it will publish. So if we go back to this pipeline and see how far it's got: that's still running, obviously the runners are a bit slow today, but in a minute we'll see this go through. While waiting for that, let's just quickly go back to here. Of course, the bit I just did was the merge request review and
approval. But normally, of course, before approving you might actually check the content out onto a local system and run exactly the same process as I did before, to review the change and see what it will look like on the website.

Just something to say quickly here: all my team follow this process. Occasionally I'm working with, for instance, members of our legal department, and training them up on this system for the few changes that they might make every year doesn't really work. So I'm happy for them to do their changes in Google Docs, which is what they're used to, and then I manually integrate their changes. If they were using the system every couple of weeks, I might invest more time in training them, but it works fine as is.

Eventually, once all these checks have passed and the merge has occurred and the checks have been reapplied, it will post to Pages. This is how you do it; it's not particularly sophisticated, and was basically copied pretty much as is from the examples that GitLab already supplies. So let's flip back here and see how it's done. This job succeeded; let's go back to the pipeline. All the jobs completed, so there should be a new pipeline now for the publish, for the actual merge. Remember that when this pipeline passed, I said I wanted the merge to occur automatically; that's now happened, so now it's running a second job, or set of jobs, that are going to go through the checks again, because it's of course possible that in doing the merge maybe something broke. The thing that's different this time is that we're doing the deployment to Pages, so it's regenerating all the content now and rebuilding it for the Pages environment, and we should have time to actually see this published. While we're waiting for that, I might just finish up quickly with our summary.

GitLab is a single source of truth for all my documentation
projects; I've got one place to get everything, and it also handles all the processes around my documentation projects. Everyone on my team can contribute, and does contribute, using the tools that I provide to them, and it's very easy for them: because they're developers, they just read the README file; there was no training involved and it just worked. Having local tools and review loops hugely shortens the timescale it takes to make a change; if I see something fairly simple wrong, I can get a new change out, having done a review, within a matter of a few minutes, which is really great. Automation increases my productivity, and everybody's productivity in fact, and it reduces the number of errors that are made; the spell checking was an example of that. And with the GitLab platform I've saved huge amounts of money on training, infrastructure and tooling, so it has been a huge success for me.

This job has now succeeded, so I'm just going to flip back here and see how the publication has gone. If we go here and refresh this, and this is the live website by the way, we should see these two bullet points being combined into a single point. It can take a couple of seconds... there we go, the content's been deployed and updated.

So that concludes pretty much everything I had to say. I'll be around on chat if you need me, or you can reach out to me on the links that I gave at the beginning of my talk for any discussion or questions; I'd love to hear from you. Thanks so much for your time and have a good day. Thank you. Bye bye.