Hello, my name is Nicolas Massard. First, I would like to thank you all for being here today. I'm a developer at PegaSys, where I work on the documentation team. PegaSys, as you may know, is a startup made possible by ConsenSys, one of the most important Ethereum blockchain companies. PegaSys, which I'm proud to represent today, focuses on protocol work, and our lead product is Pantheon. It's an enterprise-grade Java Ethereum client that we developed from scratch, in fact. Documentation needs developers because, apart from the developer-experience work I'm mainly involved in, like the command line interface of Pantheon, I also provide the documentation tools for technical writers and I act as an interpreter between them and the Pantheon code. Our documentation team is nicknamed Pliny, a reference to the Roman writer Pliny the Elder, who wrote about the first Roman Pantheon in his book, and also, I admit, a bit to show how much more fun it sounds than other team names. So, as Pliny wrote in Natural History, and I'm just quoting, the Pantheon is a masterpiece of excellence, but it has not had an opportunity of being so well appreciated. Well, we are fixing this. Good documentation is one of the ways to achieve this goal of excellence. So, let me tell you how we started this journey to excellence. In fact, I started it in July 2018, alone, with an already impressive code base to explore. You may have heard of the Gaia spacecraft from the European Space Agency, launched in 2013. This spacecraft was designed to measure our galaxy, the Milky Way, as precisely as possible. With distances of stars, spectral and photometric measurements, and velocities, it measured about 1 billion stars. All this huge quantity of data was used to build a 3D image of our galaxy, and for the first time ever, we really, precisely knew its shape. Everything before that was artists' views. Now, it's real, precise data.
I felt a bit like Gaia in July. Alone, in the void of space, with a billion stars to explore. But our journey is not going to be out in space. Even if ConsenSys recently started some very interesting space activities, in July 2018 we were still on Earth. And given the real weather and the crypto weather, I decided to set our story in the cold lands of the North. We were on Earth, reading the source code of Pantheon and a lot of Ethereum-related documentation. The fact is that even when you think you know a lot about Ethereum, and you do if you are able to prepare this expedition, and even if you watched the crypto winter coming with some confidence, you still know nothing before working on a project like that. I admit that I had to ask a lot of questions to the developers who were working hard on Pantheon, and they answered nicely, thanks to them. At this stage of the process, I was diligently assimilating knowledge, preparing the trip. A few weeks later, two technical writers joined the expedition. I think of us as explorers and cartographers of a new and old land. We drew maps for users to be able to settle the new world of the Ethereum blockchain. We drew maps of the terra incognita for users to land and set up businesses quickly and as safely as possible. We drew maps to locate all weaknesses and know the places where we can expand and improve. We drew maps for other explorers to initiate their own expeditions and help us draw more maps. Then we continued to prepare by sharing our findings and writing them down into basic documentation. But at this stage, our first issue before starting to explore was that we needed better tools. The project already contained some Markdown files. I hope everyone knows Markdown. It was mainly a README file in the root folder of the project. We needed to move to something more user-friendly that would enable us to collaborate faster, at least in the beginning.
Our project being hosted on GitHub, we naturally decided to move to the GitHub wiki. It provided some advantages, like its simple editing views, the possibility of a table of contents on the side of the page, restricting the editing rights to the team, and more. But it also came with its batch of issues. After a few months, during which our focus was clearly on the content itself, we decided to rework our documentation and make a migration. We saw two ways of better tooling our documentation work. The first one was to move to a hosted service. We knew about readthedocs.org, but also that GitHub was providing GitHub Pages, even if we were not able to tell exactly what each service was doing. We also had the possibility to self-host our documentation and generate it using a tool like Metalsmith that we had heard of. But we had no idea what the best choice was. To make an informed decision, we listed requirements and followed a simplified approval process. The tool we wanted had to satisfy multiple requirements. First, it had to support Markdown syntax. After a poll we ran among developers to find out if RST was something they would accept to use, Markdown was clearly the only way to get their approval. It may be different in other companies, but here it was the case. The system also had to be able to manage documentation versions based on the release tags used in the code repository, the best solution being an automated build of the doc when publishing a new release of Pantheon. One of my biggest concerns with the GitHub wiki was that the code and the doc were not in sync. I mean that when you use the GitHub wiki, you don't have a way to go back to a version of the code in the Git log and tell exactly what version of the doc applies. They are two separate repositories. We also needed the new tools to enable reviews, like code reviews on pull requests with comments, but for the doc.
With the wiki, we missed this ability to create a pull request, review it and fix it before merging into the master branch. Everything we pushed was directly available to the public. And so we used the ugliest way we could imagine to review docs. For a few weeks, using Google Docs, we created drafts of the wiki pages on which we were able to comment, as developers do for reviews, and then copy, paste into the wiki, and reformat to Markdown. Well, I've tried to forget this period, but it's hard to remove it from my mind and I can still taste it. But thankfully, I knew we were going to change this soon. Another thing we needed was to make it easy to contribute to the documentation. We needed that because one way to grow a community around a product is to enable people to contribute to more than just the code. And I would like to use this talk also to invite any of you to have a look at the documentation and try to contribute if you like. You would help us in many ways, because you would grow the doc, but also you would ask questions outside of our comfort zone, and you would certainly have a different point of view. So, to be honest, we welcome any criticism. Just please be polite. So, back to requirements. Amidst all the content-related requirements, we also wanted to be able to adapt the visual theme to the PegaSys graphical style. The goal was to make the doc more attractive, so that people would be more inclined to contribute; at least, that's the wish. And to check if these choices could really help our doc be more attractive, we needed a way to extract usage statistics. Our favorite tool, the one we always use and that we use for our website, is Google Analytics. Enabling feedback was also a requirement, close to statistics. It usually relies on some common tools, but the goal is more to improve the things users really requested. Checking links was a must-have too. The tool itself had to provide that, but some help from Google Search Console was possible.
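As an aside, part of the link-checking requirement can be automated with very little code. Here is a minimal sketch, not our actual tooling, that extracts the inline link targets from a Markdown page so a checker script could then request each one. It only handles the basic `[text](url)` form:

```python
import re

# Matches inline Markdown links of the form [text](url)
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)\s]+)\)")

def extract_links(markdown_text):
    """Return every inline link target found in a Markdown string."""
    return LINK_RE.findall(markdown_text)

# Example usage: gather the URLs a link checker would then verify
page = "See the [docs](https://docs.example.org/) and the [README](README.md)."
print(extract_links(page))  # ['https://docs.example.org/', 'README.md']
```

A real checker would then issue HTTP requests for external targets and verify that relative targets exist in the repository.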
An automatic table of contents and navigation was also a requirement. We had to update the table of contents manually on the wiki, and in 2019 that's not really something we want to do. Also, Markdown is a bit limited when it comes to complex content like code examples, math formulas, keyboard shortcuts and so on, and I had heard about Python Markdown extensions. I really wanted to be able to use them, or at least some similar system. A dedicated search engine was also something we wanted to provide to our users. GitHub's search engine is nice, but it returns too many things and displays findings in Markdown files as source code. And we can't complain, because GitHub is made for source code, so it's doing exactly what we expect it to do. And finally, we liked the idea of having features like an offline PDF or HTML version of our doc, the ability to use our own domain name, and some other things that the GitHub wiki was not able to provide. So, choosing a doc tool and process is not easy. We had too many options. One of our requirements that I did not indicate previously was time. We didn't have much time; no one has. But more than the time to put this in place, the time we lacked, or that we did not want to allocate, was time to maintain this tool chain. This requirement quickly excluded the self-hosted solution. GitHub Pages is just hosting and doesn't provide any tools to generate the content. This option will perhaps come back later, but today it's too big an option; I don't feel like driving that. Also, we are still making experiments, so if we have to trash all our work because we chose the wrong way, I'd rather trash light work than a huge one. Then, within the cloud-hosted tools, we had a few options too. The most obvious one was readthedocs.org, which filled almost all the requirements we had.
The only exotic aspect of this tool chain is that we decided to use MkDocs instead of Sphinx, because of its ease of use and also because I found a Material Design theme for MkDocs that I really wanted to use. Yes, I admit it's a very light argument, but as I said, we wanted to make the migration as fast as possible, so having a theme where you could just change a few CSS rules was ideal. So, what's the setup we put in place? Let's see what we shipped to the base camp. Readthedocs.org is the cloud service we're using. It's free, but you can donate. It's a free project, so I really invite you to have a look at it if you haven't already done that. It provides the hosting. It provides the link with GitHub using a webhook. It enables versioning if you create a tagged release in your repository, and it also supports custom domain names. MkDocs is a Python tool to generate static documentation sites from Markdown content. We use it with the Material Design theme, which is far more interesting than the default Read the Docs theme shipped with MkDocs. Since we're not using the Read the Docs theme, it requires some adjustments, specifically to display the Read the Docs version box. It's the box you can find in the bottom right corner of documentation sites, where you can switch from one version to the other, and it requires some hacking in the theme to do that. We installed a bunch of Python Markdown extensions for highlighting, handling abbreviations, math formulas, tabbed code blocks, footnotes and some more. Writing style guides is also in progress, to explain how to contribute technically but also on the content part. Using the new tools required a few adjustments, of course. We had some technical writers who were not so technical, and so we had to teach them how to use Git. We had to update all the Markdown content we previously generated. We had to update links and syntax to use the new extensions. We had to configure and test the system using forked repositories, so we were not working on the real repositories.
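To give an idea, a minimal mkdocs.yml for this kind of setup could look like the sketch below. The extension list is a plausible example of the kinds of extensions mentioned above, not our exact configuration:

```yaml
site_name: Pantheon Documentation
theme:
  name: material            # the Material Design theme for MkDocs
markdown_extensions:
  - admonition              # note/warning boxes
  - abbr                    # abbreviations
  - footnotes
  - codehilite              # syntax highlighting
  - pymdownx.arithmatex     # math formulas
  - pymdownx.keys           # keyboard shortcuts
  - pymdownx.superfences    # nested and tabbed code blocks
```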
We had to write a style guide, which is still in progress, and we also learned how to review documentation and create PRs, because that was usually developer work; for me it was okay, but for technical writers it was a somewhat new process. We use documentation labels in GitHub and Jira in order to isolate the pure doc PRs, to see the code PRs that need docs, and to separate them from the pure code pull requests. So, what's next? We have this system in place, and now we have to finish the contribution guidelines and have contributors read them. We have to gather enough analytics to tell if pages have to be reworked and if the experiments we try are working or not. We also have to receive and take into account all the feedback that users and contributors are going to send us. For that, we will probably put a feedback tool like Hotjar in place on the site. We already know that we are going to expand the map where the structural and implementation design is not easy for users to figure out. There's a huge demand for this. And we also have to use the new system, make it live, see its limitations, and then define the requirements for the next system. And now, here's the map. We have real data, not much, but some of it, and it's growing. We already see the places we have to fix. We can help explorers point their telescopes in the right directions. But the most important thing is that we know for real, because we have metrics. Then, with the camp set, we go back home. We rest a bit and show our findings to colleagues. After that, we'll plan the next expedition, with a bigger truck and more tools. I don't know what it's going to be, to be honest. Oh, and did you know that Antarctica is the only continent where ConsenSys is not present? As far as I know, of course. So if you have any questions, remarks or feedback, please. We still have six minutes left. We actually did have time for questions. Yes?

Do you handle translations of your documentation?

Not yet.
We have only an English version for now. But the tool we use, Read the Docs, is able to handle translations. I admit we are first creating the content in English. We first have to provide a good basis for the content, and then we're going to translate it. I'm French, so I'm going to help translate into French, but I don't know if we have other people able to translate into other languages.

Do you have the documentation in the same repository as the code, as a subdirectory?

In fact, it's exactly in the same repository. It's in a /docs folder at the root of the project. So each time a developer makes a change to the code, adding a new feature, changing a behavior, changing a command line option or something like that, we require him or her to change the documentation, or at least seed a new documentation page. And then our writers rework this content.

How about the other way around? When you find or fix a bug in the documentation that is not in the code, do you tag the...

What do you mean by a bug in the documentation that's not in the code?

Something in the documentation that is just wrong for some reason, and the code is okay. Do you tag the repository and release a new version?

No, for the moment, I admit it has not happened. What we would probably do is rebuild a tag of the repository, but we are not going to publish a new version of the binaries of the program, for instance.

For teaching new technical writers how to use Git, how much original content do you create to help them learn? How much do you repurpose?

It really depends on the person. Some just learn with five-minute examples, and some never learn. So it depends.

Do you have a preferred resource in general?

No. What we do is that we practice pair programming, so we practice pair writing too. We work on the real content and we teach them how to use the tool over video conference.
We use Zoom for video conferences, and then we do the work together live and I show them how to use the tool, how to commit, how to push, and things like that.

Yeah, everything is now under the same URL. We have the source code in a GitHub repository with a /docs folder at the root of the repository, and Read the Docs is triggered to build this /docs folder each time we push new content to the master branch, and it creates a new build each time we create a tagged release of the software in this repository. So everything is in the same repository. There are no subprojects in Read the Docs or things like that. I don't know if I answered your question exactly. We don't have this case, to be honest, but what you could do, and the way I would do it, is to have a third repository, or some GitHub Pages or things like that, where you would put a concatenated version of your documentation. Or, if you don't want to do that, I would use subprojects in Read the Docs, a feature that lets you make subprojects on the same URL. You can have different documentation sets, so you can see that they are part of the same thing, but not exactly in the same way. If you use the same theme and the same templates and everything, it could be enough, I guess.

What about interlinking and references to other parts of the documentation? In Sphinx you get some extensions to handle links like that.

I don't know about MkDocs. I did not explore Sphinx a lot because of this Markdown requirement. I know Sphinx does Markdown, but I clearly don't really know this stuff in Sphinx. The best tool we use is includes in MkDocs, and perhaps if you work with Git submodules or things like that you can achieve the same thing, but it may be more complicated. To be clear, this setup works because what we are working on is in the same repository. If we had another project setup, we probably would have used other tools, or another way to configure them.
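For reference, the link between Read the Docs and an MkDocs project in the repository can be declared in a configuration file at the repository root. Something like this sketch could do it, though the exact keys depend on the Read the Docs configuration file version, and the paths here are hypothetical:

```yaml
# .readthedocs.yml
version: 2
mkdocs:
  configuration: docs/mkdocs.yml          # points Read the Docs at the MkDocs config
python:
  install:
    - requirements: docs/requirements.txt # theme and Markdown extensions to install
```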
For the not-so-technical writers, do they have the ability, in your setup, to just preview the documentation without any conversion?

We have a preview system. It requires installing Python on your computer, but it's not a big deal, and then you can just run mkdocs serve in the root folder and view the site in your browser at a local address; it's very handy. The only thing you don't have is the Read the Docs version box. We have to wrap up now. We are going to move from Pliny and PegaSys to Kubernetes; more myths. Thank you very much.