Hi, my name is Mark Padgham. Welcome to this talk on the autotest package. Let's start with a bit of motivation for why this package even exists. I work for rOpenSci, which, among other things, is an organization that provides peer review for R packages. rOpenSci has been peer reviewing R packages for over seven years, over which time it has reviewed hundreds of packages and improved the software quality of most of those that have been through the peer review process. One of the things learned along the way is that reviewers are quite often among the first people to cast a truly external pair of eyes on a piece of software. In doing so, they quite often do things with the software that the developers themselves might not have anticipated. Reviews quite often, not always, but often, begin with reviewers coming back to the authors of packages and saying: this looks like a great package, however, I tried to do this with it, and something strange happened; can we first of all work out why it's behaving this way? Reviews thus get a little sidetracked in the early phases into uncovering bugs that, as I said, the developers simply may not have anticipated, and so was born the idea of the autotest package. What the package does is analyze all of the inputs of all of the functions of a package, and mutate those inputs to attempt to uncover unexpected behaviour, or behaviour that may cause problems during review processes or for other users of a package. So it systematically mutates all of the inputs of functions and examines the effects on the outputs of the functions, or on the behaviour of the functions themselves.
The other step the autotest package performs is to examine the documentation of functions: it looks at the kinds of things that are given as inputs and checks whether they match the documentation, and, for the outputs, it ensures that the classes or types of the results returned by functions match the values given in their descriptions. It does all of this by extracting the example code in the documentation of every single function; using that example code, it tries to identify all of the types of inputs thrown at every parameter of a function. So that's enough for an introduction, and without further ado, we will now go on to looking at how the autotest package actually works. First of all, the autotest package at the moment lives on GitHub in an organization called ropensci-review-tools, and if you simply search for "ropensci" and "autotest" you'll find it; the package will hopefully be on CRAN by the time this talk is actually given. Here is the autotest package in three simple steps. The first thing it needs is a package to test, because it can only test packages. So the first two lines here create a simple R package, and the third line displays the directory tree, with an R directory, which is simply empty, and the two files necessary for it to be considered an R package. The second step is then to add a function. In this case we just add a really simple function that takes some input x, adds a value of one to it, and returns it, and we add some documentation to that function, which, when you then run roxygen2's roxygenize command, is turned into a corresponding help entry in the man directory of the package. The third step is simply to autotest the package. The only function I will focus on in this talk is one called autotest_package, which tests the entirety, or selected parts, of an entire package.
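The three steps just described can be sketched roughly as follows. This is a hedged reconstruction, not the exact demo code: the usethis and roxygen2 helpers for package creation and documentation are my assumptions here, as is the file name my-function.R; autotest itself only needs the path to a package.

```r
library (autotest)

# Step 1: create a minimal package skeleton in a temporary directory
path <- file.path (tempdir (), "demo")
usethis::create_package (path, open = FALSE)

# Step 2: add a documented function under R/, then build the help entries
writeLines (c ("#' My function",
               "#' @param x An input",
               "#' @export",
               "my_function <- function (x) {",
               "  x + 1",
               "}"),
            file.path (path, "R", "my-function.R"))
roxygen2::roxygenise (path)

# Step 3: autotest the whole package
x <- autotest_package (path)
```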
So autotest_package, called with the path to the dummy little R package we just constructed, returns an object of class "autotest_package", as you see there, but which derives from tibble, which itself derives from data.frame. It's effectively a tibble, and in this case it has one row and nine columns. This doesn't look very nice printed on a screen like this; the nine columns are a bit messy, but the one we're interested in here is the third from last, in the bottom line there: content. The content column tells you the content of the message that autotest issues in response to a test, and here it says the function has no documented example. I said at the start that in order for autotest to work, each function needs to have examples: these are what is analyzed and extracted in order to identify all the parameters that are input to functions. So you need to add an example. We do that by modifying our function, which looked like this with documentation at the top, adding two more lines with an example in which the value of x is equal to one, and then running roxygen2 to update the documentation. Once again we run autotest_package, and in this case you'll see in the first section at the top, below "autotest demo", there's a little tick saying that one out of one functions, my_function, was successfully autotested. This time the content of the results has two lines. The first says that the parameter x is not specified as an integer, yet is only used as such, and gives the advice to use 1L to explicitly specify an integer. The second content entry says that the parameter x is only used as a single numeric value, but that the function responds to vectors of length greater than one.
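The version of the function with the example added might look like this. This is a sketch: the recording does not show the actual documentation text, so the wording of the roxygen2 comments is assumed.

```r
#' My function
#'
#' @param x An input value
#' @return The input incremented by one
#' @examples
#' y <- my_function (x = 1)
#' @export
my_function <- function (x) {
  x + 1
}
```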
There are a lot of things we could do to our function to address those issues, but one way, for example, would be to restrict x to being an integer, by modifying the documentation line to have 1L, and by using the checkmate package, which provides a very easy and very fast way to implement assertions on inputs in a single line. assert_int says: I expect the argument x to be a single-valued integer parameter. We then update the documentation and run autotest again, and now the content has one line, so we're down from two to one: the parameter x permits unrestricted integer inputs, but doesn't document this, so please add the word "unrestricted" to the description. At the start, all parameters are matched to corresponding description entries, and attempts are made to ensure that these make sense, without being too prescriptive about how to do that; the single word "unrestricted" is sufficient here. So now we take the documentation once again and modify it to say "an unrestricted integer input", run roxygen2 to update the documentation again, and this time autotest successfully checks the function and returns nothing, a NULL value. This reflects the fact that when autotest works, it should do nothing. It's a package that should help you write robust and much more bug-free code: by applying it throughout the development process and ensuring that the autotest_package function returns nothing at all, you know that you're coding in a very robust way. Of course, we'd like to know which tests are actually run. The previous calls used the default value of the parameter, test = TRUE; if we instead set test = FALSE for the function we've written here, it returns nine tests. Don't worry about the details there, but these list all of the actual tests that are conducted, and you can see on the left that the first column is called type.
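Before looking at the test types, here is what the integer-restricted version of the function described above might look like. checkmate::assert_int() is the one call taken directly from the talk; the surrounding documentation text is a sketch.

```r
#' My function
#'
#' @param x An unrestricted integer input
#' @return The input incremented by one
#' @examples
#' y <- my_function (x = 1L)
#' @export
my_function <- function (x) {
  checkmate::assert_int (x) # error unless x is a single integer value
  x + 1
}
```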
Type tells you the type of test; when tests are not run, the type is simply "dummy", to indicate that they were not run. When they are run, the results can be errors, warnings, or diagnostic messages, which developers may consider in order to modify their packages. The other column of interest here is test_name. In this case, remember, the parameter x was an integer parameter, and so some of the tests constructed examine the acceptable range of that input, convert it into a numeric, or convert the single value into a vector of length two, and all of these tests pass successfully. The remaining tests, from four onwards, are all about matching the documentation of input parameters and of return values. What we can also do, for example, is change the function from an integer to a numeric value: instead of 1L and assert_int, we can have 1 with a dot, to say it's a numeric value, and assert_number, this time expecting a single-valued numeric input. If we do that and look at the types of tests, you'll see that those integer tests don't appear any more, but tests relevant to numeric parameters appear, such as the second one, "trivial noise", where trivial noise is added to the input with the expectation that it should not affect the result. So that's all well and good for a simple, trivial function, but how about a real-world example? The autotest package can be applied to any package, as well as, as mentioned at the outset, to a selected subset of functions from a package. Here, for example, the code in the top box applies the function, with test = FALSE, to initially just list the tests that would be run on the var function of the stats package. When we do that, the first output indicates that four functions are actually tested.
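Returning for a moment to the toy function: the numeric variant just described might look like this, under the same assumptions about the surrounding documentation.

```r
#' My function
#'
#' @param x A single numeric input
#' @return The input incremented by one
#' @examples
#' y <- my_function (x = 1.)
#' @export
my_function <- function (x) {
  checkmate::assert_number (x) # error unless x is a single numeric value
  x + 1
}
```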
The package works, as said, by parsing example code. Documentation is held in .Rd files, the files in the man directory, and one .Rd file can document several functions at once. The var function is documented in the same file as the covariance function, cov, and the correlation function, cor, so all of these are tested at once when you pass any one of them as the functions argument, and the result has 150 rows, indicating that 150 tests will be applied to all of these functions if we set the parameter test = TRUE. When we do that, it takes around 20 to 30 seconds on most computers to run, and generates 15 rows of results. Looking at those 15 rows here, you can see from the type column at the top that the first two are warnings, and the remainder are diagnostic messages. Throughout the documentation of autotest, it's recommended to use the DT package from RStudio to visualize these results in an interactive HTML table, because these tables can be quite overwhelming to try to read on screen; DT offers a much more convenient interface for understanding the results. When you do that, in this case, you'll see that the two warnings are about parameters whose usage has not been demonstrated: the use parameter of the var function, and the y parameter of the cov function. The usage of these is not actually demonstrated in the documentation. The remainder are diagnostic messages, in this case all about parameters being case dependent. That may be considered perfectly acceptable by developers, or they might like to simply eliminate those diagnostic messages and think: well, I could actually match those arguments regardless of case, and then those messages wouldn't even appear. But in this case, at least the two warnings ought to be taken seriously.
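The calls described in this example might look roughly like the following; the argument names package and functions follow the autotest documentation, so treat this as a sketch rather than the exact code shown on the slides.

```r
library (autotest)

# list the tests which would be run, without running them:
x0 <- autotest_package (package = "stats", functions = "var", test = FALSE)

# actually run them (roughly 20-30 seconds):
x <- autotest_package (package = "stats", functions = "var", test = TRUE)

# view the results in an interactive HTML table:
DT::datatable (x)
```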
From the 150 tests applied to the stats package, if we look at all of them in a DT table, you'll see again that the type is "dummy" on the left there, and this tells you, when you look at it yourself, which you can easily do, all of the tests that are actually thrown at these functions. They involve mutating single logical values and mutating vector inputs; scrolling down through the table, you can see that the lower rows check that the descriptions of return objects match the observed values, and, going down a bit further, that a lot of tests are about matching the parameters to the documentation. Finally, this illustrates one more important aspect of the autotest package: for inputs that have particular class attributes, such as data.frame inputs, it mutates those class attributes in certain ways, depending on the type of input, with expectations of what a function should then do. In this case, rectangular or tabular inputs such as data.frames are converted to tibbles and to data.tables, with the expectation that this shouldn't change function results. So the stats package is algorithmically robust: autotest does not reveal any algorithmic problems whatsoever. But it does reveal a few gaps in documentation, which could, and maybe should, be addressed in order to make the stats package more robust and ultimately more user friendly. Now, just to conclude, I will show a package which generates a much richer array of warnings and errors, and it's one of my own packages. I'm about to do something nasty and reveal a way in which a package is not coded very well at all, so of course it's only fair that I choose my own package, one to which, prior to this talk, I had not applied autotest. geodist has algorithms coded in C that are, I would hope, algorithmically very robust, but I perhaps didn't give as much thought to the user interface as I might have.
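That run can be reproduced with something like the following; a sketch, where counting over the type column gives the kind of overview reported below.

```r
library (autotest)

# autotest the installed geodist package:
x <- autotest_package (package = "geodist")

# counts of errors, warnings, and diagnostic messages:
table (x$type)
```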
You can see here that 157 tests will be thrown at this package, which effectively has only one main exposed function, and two or three auxiliary functions that come out of it. The listing of the 157 tests shows you the sorts of things that will be done: mutating single logical values, substituting character values for them, submitting vectors of length greater than one for inputs that are demonstrated as single-valued, and things like that, as well as custom class definitions for vector inputs, so changing their classes too. What this shows at the end is that autotesting the geodist package reveals one error, two warnings, and 26 diagnostic messages. Some of these are that the input classes are not actually documented, and that the parameter types are not checked appropriately: back in those results, which you can easily run yourself, it shows that you can put character parameters where logical parameters are expected. Parameter lengths are not checked, so submitting vectors of length greater than one to single-valued parameters doesn't generate any warnings or errors; numeric parameters are only demonstrated as integers; and, finally, return objects are not described. So the autotest package reveals an awful lot of ways in which I, as the developer of the geodist package, could actually improve it, making it more robust and much more user friendly. And so to conclude: the autotest package is sufficiently developed to be usable by most people, hopefully, and has been extensively tested. The recommendation is that autotest should be used from the first moments of package development, in which case it should be relatively robust. As in the first example here, if you start with a trivial function, and then just concentrate on ensuring that autotest generally returns NULL every time you run it, then your package should end up sufficiently robust. But application to existing packages may not always work straight away.
So please let us know of any problems and we'll endeavour to fix them. Finally, this package is available, along with other packages to aid the general review process, in a GitHub organization called ropensci-review-tools. Thank you very much. It was quite challenging, though it was not the tests that were hardest. The sets of tests that are run are at the moment relatively restricted, but the package is fairly modular, so once it's sufficiently stable, implementing more tests should be relatively straightforward. The most challenging thing was trying to work out, from the example code, what people are actually putting into functions. That requires parsing the example code to identify every input, and, especially in an era where people use pipes all the time, an input might actually be constructed lines and lines beforehand. You have to work out what it is and what all the transformation steps are, and then separate that input out from the actual function call. Doing all of that was really quite challenging. Good, and congratulations for doing that. Another question: can autotest also generate the tests for the tests directory? No, it can't at the moment, because I am waiting for it to be used enough to know that it's sufficiently robust before I roll that out. I would also love to hear feedback from anybody about whether they think that's a good idea. I've done it for a few packages, and got autotest to automatically construct tests, and you can, in basically one line of code, get a test coverage of 50 to 70% for your package, but as a black box: you don't know what you've done there, and I'm not sure whether that's a good thing or not. So what I'm really looking forward to is hearing input from people about whether they even think that's a good thing to do: that you can just automatically achieve, depending on the type of package, 50, 60, 70% test coverage with one line of code at some stage.
And people, you can give feedback in the Slack channel as well as in the repository of the package, I suppose. Where do you want to get feedback, Mark? So, where do you expect to receive feedback from users? I'm not sure; I guess on GitHub, on the package itself, or through rOpenSci channels. Good, and you can also keep the discussion going in the Slack channel of the session in the Slack space. One last question: does autotest have similar functionality to R CMD check? That also runs all the examples and checks for problems, so how does it compare to R CMD check? R CMD check only runs examples exactly as they're written, and as long as nothing goes wrong, it won't mention anything. autotest separates out all the inputs for every function and mutates those inputs, so it's completely different; it's doing a lot more than an R CMD check. If you have x = 1, it takes x = 1 and tries to submit x equal to the biggest integer possible, x equal to the most negative integer possible, x equal to a vector of length greater than one; it mutates all of the inputs. So it's really quite different from R CMD check: it's doing a lot more in terms of trying to break your package. Thank you, and I hope you will get a lot of feedback on the package. Yeah, I'm really looking forward to it. Please, any feedback at all, and any usage: please use it and give me feedback after usage. Thank you. And so it is now time for our next speaker, who is another Mark, Mark van der Loo, who will speak about the tinytest package: a fresh look at unit testing. Hello, and thank you for joining my talk. My name is Mark, and in this video I would like to talk about a package that I've been working on for the last two, two and a half years, called tinytest. During this video I will show you some slides, and I'll also do some live demos.
If you would like to repeat the demos that I show you, you can download all the materials you need from the link at the bottom of this page. The word tinytest consists of two parts. One is "tiny": this refers to the fact that tinytest is a very small, dependency-free package; if you install it, you need nothing else except the packages that come with base R. The second part, "test", refers to the fact that tinytest is a unit testing package. So let me first tell you something about unit testing. Unit testing is a way to measure the quality of your source code, and you do this by comparing the actual output of function calls with the output that you expect. Here's an example: a function called add_one that actually adds two, and a function from the tinytest package called expect_equal; I compare the output of add_one(1) with the expected output, 2. The output doesn't match, so I get a message that says this test failed, some data is inconsistent; this is the test call, and here's what I expected and what I got. Before I continue, let me comment on the current status of the package. The first release of this package was in April 2019 on CRAN, and we've had 10 releases since then. At the moment of recording this video, 160 packages on CRAN or Bioconductor are using tinytest as their unit testing suite. It's also supported by pkgKitten, a package which allows you to set up a package infrastructure really quickly, including everything you need to use tinytest in your package. If you'd like to know more about the programming methodology behind tinytest, then I invite you to take a look at the R Journal paper that will appear soon, but is now available on arXiv.
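The failing expectation shown on the slide can be reproduced like this; the deliberately buggy add_one is taken from the talk.

```r
library (tinytest)

add_one <- function (x) x + 2  # deliberately buggy: actually adds two

# compare actual output with expected output; this test fails,
# and tinytest reports what was expected versus what was obtained:
expect_equal (add_one (1), 2)
```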
Before I continue, I would really like to stop for a moment and thank all the people who have influenced the development of the package. I've had tremendous help from people who suggested features, inspected my code, came with pull requests, or provided documentation on how to mock databases: a big thank you from here. Of course, I would also like to thank the people at CRAN who maintain an awesome infrastructure, which is incredibly important for the success of R. In the rest of this video, I will first show you the basic setup, and then a few features that I think set tinytest apart from other unit testing packages: tests travel along with an installed package, you can test in parallel, and tinytest can track side effects. So let's look at the basic setup of the package. Here's the directory infrastructure for a package called my.package. It has all the usual elements, like a DESCRIPTION file, a NAMESPACE file, a folder with R code, and a man folder with the manual entries. To use tinytest, you need to add two things. One is a single file under the tests directory, with one line of code that says test_package("my.package"); this makes sure that when you run R CMD check, all the tests in the package are actually run. The unit tests themselves sit in a folder under inst. Here I am in a bash shell in the package directory I just showed you. I'm going to start a new R session and load tinytest. Now, if I want to build, install, and test the package, there's a function in tinytest for that called build_install_test. What it does is call R CMD build, install the package in a temporary directory under your temp folder, go to that directory, load the package in a separate, new, empty R session, and then run all the tests.
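The setup described above amounts to one small file; the requireNamespace() guard is the convention from the tinytest documentation, and my.package is the placeholder name used in the talk.

```r
# tests/tinytest.R: one line, so that R CMD check runs all tinytest tests
if (requireNamespace ("tinytest", quietly = TRUE)) {
    tinytest::test_package ("my.package")
}
# the test files themselves are plain R scripts under inst/tinytest/
```

During development, build_install_test() then runs those same tests against a fresh build in a separate, empty R session.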
This ensures that you run all the tests in a clean R environment that isn't hampered by option settings, functions, or other variables and code that you might have loaded in your interactive session. You see some output while the tests are running: it reports how many tests it has run, how many failures it has detected, and how long things took, and if it detects failures, a report is printed at the bottom. In this case we see that one test has failed: there's something wrong with the data, which means there's something wrong with the actual output values that came out, and not with the attributes of the objects that were produced. It reports in which file something went wrong, and on which lines of that file; it gives you the expectation call, so the actual test call, and a small report on what was expected versus what was obtained. So now we've found out that an error was detected in the test_quad.R file, and we may want to reproduce that test interactively. What I'm going to do is source my R code to make sure it's in the interactive session, and then run this single test file interactively, and I get the same output. I see that there's something wrong in add_one, so I go to my code, which is here, I've opened my code.R, find the bug, repair it, source it again, run run_test_file, and see that everything is fine: all okay results for this single file. It's also possible to run all the files in a directory at once, interactively, and I want to show one other feature: I can actually store the results. So I run run_test_dir on inst/tinytest, and again I get the reporting while tests are running, which is really quick in this case. And the output, and this is something that I think is a nice feature of tinytest: the test results are data.
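The interactive cycle just walked through can be sketched in code; the file and path names are the ones shown in the demo, as far as they can be made out from the recording, so treat them as placeholders.

```r
library (tinytest)

# build, install into a temporary library, and test in a fresh R session:
build_install_test ("path/to/my.package")

# reproduce a single failing file interactively:
source ("R/code.R")                          # load the package code
run_test_file ("inst/tinytest/test_quad.R")  # rerun just that file

# after fixing the bug, run the whole directory and keep the results:
out <- run_test_dir ("inst/tinytest")
```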
For example, I can summarize my output and get a nice table that shows how many results were obtained in each file, how many failed, how many passed, and some other things that I will talk about a little later. You can also turn this into a data frame, which you can then write, for example, to a database, in circumstances where you run tests automatically and want to store all the test results and export them to a system for later inspection. The next feature I would like to demonstrate quickly is that tests in tinytest, in principle, travel with the package. As you've seen, you put your unit tests under inst/tinytest in your package infrastructure, and the advantage of that is that if somebody installs your package, and they also install tinytest, then they can run any test that you added to your package. Let me just demonstrate how that works. I have tinytest loaded, and I'm going to test one of the packages that uses tinytest, with test_package. The validate package is one of the packages tested using tinytest. The package is loaded, all tests (you see there are a few more tests here) are run, and it says all okay: almost 400 results. The advantage of this, especially for package authors, is that you can ask any user to rerun locally all of the tests that you wrote. They may sit on an infrastructure that's a little different from your own, or maybe even a little different from any of the many infrastructures tested on CRAN, so when somebody reports a problem, you can at least ask them to run all the tests again and see whether something behaves differently for them. tinytest makes it really easy to run multiple tests in parallel.
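The results-as-data and travelling-tests features can be exercised like this; a hedged sketch using the validate package named in the talk.

```r
library (tinytest)

out <- run_test_dir ("inst/tinytest") # run all test files in a directory
summary (out)                         # per-file counts of passes and fails
df <- as.data.frame (out)             # one row per test, e.g. for export
                                      # to a database for later inspection

# and, since tests are installed along with a package:
test_package ("validate")             # rerun the validate package's tests
```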
If you have, say, four test files, you could use run_test_dir with the argument ncpu = 4, and it would then start up four separate R sessions that each run one of the files. Let me just demonstrate that using test_package on the validate package: I run all the tests in the validate package, four workers are set up, and all the test files are run one by one. Because of this parallelization feature, it's important that you set up test files in such a way that they run independently of each other; it is good if one file does not expect that another file was run just before it. Other than that, parallelization is really easy with tinytest. The last feature I would like to talk about is reporting side effects. A side effect occurs when a function or a script changes something outside of its own scope: for example, a function might change an environment variable, or change an option setting in your R environment, or add a variable there. In general, you would like to be able to detect such side effects whenever you think they are relevant. There are two ways to do this in tinytest: either, when you call a test runner like run_test_file, run_test_dir, or test_package, you give it the option side_effects = TRUE, and side effects will be recorded and reported; or you add the statement report_side_effects() to your test file. I will just demonstrate the last option quickly. Let me run the test directory like I did before in my.package: everything seems fine, I'm running a file called test_csort.R and a file called test_quad.R, there are five results, and everything is fine.
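The two features just described, parallel runs and side-effect tracking, can be invoked like this. The argument and function names ncpu, side_effects, and report_side_effects() follow the tinytest documentation; the per-file example content is a hypothetical sketch.

```r
library (tinytest)

# run test files over four parallel R sessions;
# files must not depend on each other's execution order:
run_test_dir ("inst/tinytest", ncpu = 4)

# option 1: record and report side effects for a whole run:
run_test_dir ("inst/tinytest", side_effects = TRUE)

# option 2: per file, e.g. at the top of inst/tinytest/test_csort.R:
#   report_side_effects ()
#   Sys.setlocale ("LC_COLLATE", "C")  # would be reported as a side effect
```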
Now, if I go to the test_csort.R testing file, I can add report_side_effects(), and if I run the tests again, you see that tinytest is recording side effects as well. In this case you see there's a side effect that affects the locale: there's a short, one-line summary, as always, saying that in test_csort.R, on line four, the LC_COLLATE locale setting, which affects how strings are sorted depending on language settings, was changed from en_US.UTF-8 to the C setting. In conclusion: tinytest is a small package, built out of less than, I think, 1200 to 1300 lines of code. It's dependency free, so it doesn't import any other package except those that already come when you install R. It's very easy to set up: test files are just R scripts, and you can set up everything using pkgKitten, which will give you the complete package infrastructure, including a tinytest example file. It supports interactive testing and the whole build-install-test cycle; parallel testing is easy; side effects can be tracked, which as far as I know is unique when compared with other testing packages; and another unique feature is that tests travel with a package, so you, as a package author, can ask users to run all the tests in their local environment, or, looked at from the other way, you, as a user, can test at home every package that's being tested with tinytest, to check whether everything does what you want it to do. Thank you for watching; for any comments or questions, you can post them under the video or contact me via email or via GitHub. Thanks, Mark, for the interesting talk. Several people are wondering how tinytest compares to testthat; could you answer this? Oh, that's a good question. Well, I think the syntax for expressing tests is very similar; it was actually inspired by testthat. I mean, I think the expect_ functions are an excellent way to express your expectations. I think the main difference is that tinytest is a lot
smaller: it doesn't do anything except testing. For example, there are no things like praising people when all tests have run, or saying something like "too bad"; at some point that becomes a bit of a disturbance on the command line. Other than that, I think the features that I mentioned, like parallelization, were really built in: it was really set up in such a way that parallelization was easy to build, and certainly tracking side effects, I think, is unique. All right, speaking of parallelization, we have someone asking whether it works on Windows as well. Yes, this is independent of the operating system you use. Okay. I was wondering how you get colours on the command line: in your demonstration you have colours for the messages, so what gives the colours? Well, you can use the crayon package for that if you want, but there are also special ANSI escape codes; they begin with an escape character and a square bracket and then a number, and that tells you: from here, the colour should be, for example, red. Cool, thanks. What was the most challenging part of creating tinytest? I'd have to think. I think the most challenging part was something I did before. I mentioned in the beginning that I wrote a paper on the method that sits behind tinytest, and I developed that method for another package called lumberjack, where you can run an R file, and while it runs, you can tap off some information without disturbing anything. So you have to really separate two pieces of code, and keep where those pieces of code are running well apart; and when I solved that for the lumberjack package, I realized it could easily be used for a test package as well. So it has a very clean separation between what the test package is doing and what the user code is actually doing. I think that was hard, or at least it took me some hard thinking to solve, but once you have that idea, you
Okay, thank you. One last question: do you know whether people are using the data frame of their test results for doing cool visualizations or calculations on their tests? I'm not aware of it. I have the idea of building in a visualization myself: when doing interactive testing, you could plot your output, but I haven't seen any feedback on that. My first thought was just to be able to export the data frame, but I think you could make some interesting plots as well. Oh, and in case you missed it in the Q&A, someone is saying thank you because they use tinytest extensively in their introductory programming course. Oh, nice to hear, thanks. And people, you can keep asking questions in the side channel, so you can keep the discussion going on there. It's now time for our next speaker, Sébastien Rochette from ThinkR, who is going to talk about his fusen package.

Thank you. So, you should be able to see my presentation, which is called "Use fusen to write or upgrade a package". Today I will present fusen mostly to people who already develop packages, just to change the way I present it, and to see how it can address your challenges inside your existing packages. This presentation is already available on my GitHub; if you are not able to follow it now, you can have it as a PDF. Why did I develop fusen? To save some time, because I want people to be able to write the code and the documentation at the same time; usually you leave the documentation for the end. First, who am I? I am Sébastien Rochette. I work at ThinkR, where we do R training and also consultancy; if you want to know more about what we do, we have our website, GitHub and Twitter, and I also have my personal website and personal Twitter if you want to follow me or to follow us. But let's go back to this presentation. When you are a developer you need to think about many
things when you are building your own package, if you want future users to be able to use it. The questions of the users could be: what does the package do, or how do I install the package and its different dependencies? For that, you know you have to fill in the information inside the DESCRIPTION file. If you want to answer questions like: what are the functions of the package, how do I fill in the different parameters of a function, or can I have an example of how to use this specific function, you know you will have to go inside the R directory, open the script file and add the information there, in the roxygen skeleton for instance. Another question is: how can I get a whole example of how to use the package? Maybe the different functions only work in a specific order, and for that you will write some vignettes, inside an R Markdown file, explaining how to use the different functions, with some text. Then you can face questions like: will your package work with the latest version of R, or with the latest versions of its different dependencies? To ensure that your package will keep working in the future, you add some tests, maybe with tinytest or testthat, and you set up continuous integration to help your future self. So you see, when you are building a package, as a developer you have to think about at least these four different places to store code, documentation and examples. But what I would like to ask you is: why not prepare all of this, the code, the example, the test and the documentation, inside a unique file, in the same place, so that you don't have to switch between different places? That's what fusen is about. fusen allows you to write everything in the same place: you first write your R Markdown with everything inside, and you follow the folding lines to be sure that it is written in the correct
way; then you inflate the R Markdown, and fusen will build it as a package. It's like origami, you know: when you have this little piece of paper that you want to fold, you have to follow the folding lines, because the way you do it is very specific, and at some point, when you have finished, you inflate it and you get this beautiful package, thanks to the good folding lines. fusen is exactly that. To follow fusen's folding lines, you have to make fusen aware of those different places I already spoke about, like the DESCRIPTION, the R directory, the tests directory and the vignettes. For fusen to be able to distinguish between these different places, the folding lines are a template R Markdown file. So if you want to try it, just create a new project, a new directory with nothing inside, and then run fusen::add_dev_history(). This will add in your project a new file called dev_history.Rmd, with a specific template already filled in to help you follow the lines. You can see inside this template that there are these four different places, the description, the function, the example and the test, in the same place, and they are divided by named chunks. The names are there to help fusen know where to put the different parts of the code. And then you inflate: just go down to the bottom of the file and run fusen::inflate() on the R Markdown file you just wrote, and fusen will distribute the different pieces of code to the correct places to make it a correct package. The description goes into the DESCRIPTION file, the functions go into the function files, and the examples go into different places, because fusen copies them into the examples of the function documentation but also keeps them in the vignette. And you have the tests, which go into testthat, because for now only testthat is supported.
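A sketch of how the named chunks in the template map onto package files. The chunk names below follow fusen's early template conventions, and the exact names and `inflate()` arguments have changed across fusen versions, so treat this as illustrative only:

```r
# dev_history.Rmd (sketch) -- named chunks tell fusen where each piece goes:
#
#   chunk `function-add_one`  -> R/add_one.R
#       add_one <- function(x) x + 1
#
#   chunk `examples-add_one`  -> the @examples section of the roxygen doc,
#       add_one(1)               and it also stays in the vignette
#
#   chunk `tests-add_one`     -> tests/testthat/test-add_one.R
#       test_that("add_one works", expect_equal(add_one(1), 2))
#
# Then, at the bottom of the file, distribute everything into the package:
fusen::inflate()
```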
So that's the first part. And then, what about the vignette? Indeed, the dev_history R Markdown is already the vignette. You write inside it everything that you have in mind: say, I would like to write this kind of function, so you write down what you have in mind, "I will make a function that does this, and here is how to use it". Then you write the chunk of the function, you write the example, you write the test, and fusen keeps the text that you already wrote and adds the example inside the vignette directory, so that you don't have to copy and paste that single example into multiple places; fusen does it for you. Why did I do that? Because I used to switch between these different places, and reducing the number of copy-pastes, file opens and file switches really helped me focus on one task. Also, usually when you build a package you say: okay, I will write the tests at the end, when I've finished everything. Maybe you don't write a unit test each time you write a new function, and usually you forget about it because you don't have time, or you say unit testing is too difficult. As you have everything inside the same file, you cannot leave the tests until the end: you write them at the same time as you write your examples, and you also write the documentation as you code. I mean, nobody is asking you to have a perfect vignette with everything correctly explained the first time, but at least you wrote down everything you had in mind when you designed the function, and you don't wait until the end of your package, in six months maybe, keeping in your head everything you want to write in the vignette. You don't have a head big enough to keep all this documentation; just write it while you code. Then, how can you add new functionality? When you work with fusen, you can add new functions, and new families of functions, to an existing package, whether the package was built with fusen or not. I mean, if you have an
already existing package and you want to add a new family of functions, you can use add_dev_history(), fill in your new function, your documentation, the examples and the unit tests, inflate, and it will all go to the correct place. fusen will not delete the functions that already exist inside your package; there is a vignette about this if you want to look at it, because of course you read the documentation before writing the functions. And you can re-inflate the dev_history as many times as you need; nobody expects you to write everything correctly the first time. So when you run the check you will probably see some mistakes, and then you go back to the dev_history, you modify the function, you modify the examples, and you can re-inflate the file; this time it will overwrite the files that it already generated before, I mean the ones with the same functions inside. If you want to add new functions or a new family, you can add new sections inside the Rmd: you add a new title, and you add new function, example and test chunks, so that you can create new files inside your package. If you want to see a full working example, you can use add_dev_history() with the name "full", which shows a working example of a package with multiple functions, and sub-functions too; it will also show you how to add internal functions that are used inside your exported functions. You can add multiple dev_history files; you don't have to have one dev_history with hundreds or thousands of lines. I mean, you can create a new dev_history to add new functionality; it will be easier to maintain. And note that each dev_history will become a new vignette inside your package, so if you add a new family of functions, don't hesitate to add a new dev_history to structure your documentation. There is a specific template for that, named "additional", which only adds empty function, example and test chunks, so that you can directly code inside.
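A sketch of the template workflow just described. The `name` values follow the talk; in later fusen versions this API was reworked into `add_flat_template()`, so check your installed version:

```r
# Full working example template, with multiple functions and sub-functions:
fusen::add_dev_history(name = "full")

# Minimal template with only empty function / example / test chunks,
# for adding a new family of functions to an existing package:
fusen::add_dev_history(name = "additional")

# After filling in the chunks, (re-)inflate; re-inflating overwrites
# the files fusen generated previously for the same functions:
fusen::inflate()
```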
As a bonus, you don't have to think too much about dependencies with fusen, because it looks inside your package for everything you need, every dependency, and adds them directly to the DESCRIPTION file; it uses the attachment package behind the scenes. And if you want to see an example of a package that was built with fusen: fusen itself is built with fusen. That means there is a dev_history file inside fusen that I use to add new functions, and that I inflate to add them to fusen. Since yesterday, when I started trying to send it to CRAN, I continued modifying things inside the dev_history; at some point I just switched, because there is a point where you have to switch between the dev_history and the normal way of building a package; that is up to you. I have already had some common questions about fusen. You can reorganize the vignette if you want: just go back to the dev_history, modify the different sections, and it will modify the vignette. But at some point, maybe you want to lock down modifications to the dev_history: just write a comment inside it saying "do not inflate anymore", work as usual, and write the documentation in the normal way. If you want to debug: I mean, you are in R Markdown, so if you write a function you can debug it in the global environment, or you can use debugonce() or browser(), any function you already know for debugging. And if you already have a package that you want to put back inside an Rmd file, it is currently not possible; there is no deflate function. But if you want to participate in writing this kind of function, to put the code you have in R files, test files and vignettes back, you are welcome to; if you want to do it now, you will have to copy and paste things yourself. You can teach fusen to new developers, and there is a
simplified template for teaching if you want, and you can follow the package I built on my own GitHub, which specifies the different steps to give to new developers if you want to teach it. I will do it tomorrow in the tutorials, so some of you will be able to see it. So give fusen a try for your next awesome functionality or your next package, and tell me whether you like it or not, and how I can improve it if needed. What I want to tell you to conclude is: use R Markdown first for every project; you can now use fusen to do it. If you are a new developer and you already write R Markdown files, you already have part of the package, so you will almost only have to inflate it, following the guidelines. Document and test as you write your code: as you write your function, write the example as you try it, and write a unit test as you add new possibilities, everything in the same place, so that you don't have to think about what you will do in six months when it is finished as a package. Thank you for your attention.

Thank you, Sébastien, for the cool presentation. We have someone asking for clarification on how you write your unit tests in the dev_history file; what does it look like? Currently I use testthat to write the unit tests, so inside the chunk you have the test_that() function, with the expect_ functions inside. If you look at the main template, if you use fusen::add_dev_history(), you will see an example of how to write these tests; inside that chunk is exactly the code that you would otherwise write inside the test file in tests/. Thank you. And how do you test fusen itself; what does a unit test of fusen look like? It's quite tricky, because I have to build external projects that build a package, and there are some tools inside the usethis package which allow you to work in a new directory; with that usethis function I can test the correct building of the package inside a new environment.
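The first answer above, writing testthat tests inside a dev_history chunk, can be sketched like this. The function under test is hypothetical; after inflating, this code lands in tests/testthat/ unchanged:

```r
# Contents of a `tests-add_one` chunk in dev_history.Rmd
# (after inflate, this becomes tests/testthat/test-add_one.R):
test_that("add_one adds one to its input", {
  expect_equal(add_one(1), 2)   # normal case
  expect_error(add_one("a"))    # non-numeric input should error
})
```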
I just don't remember the name; it's usethis::with_project(). Thank you. A reminder to the attendees that you can use the Q&A button to ask questions. There is a question about whether there is anything in fusen for things like Rcpp packages. No; I mean, I never used Rcpp myself, so I didn't think about it, and I don't know what it needs to be able to work. But you need to remember that what you write inside the chunk called "function" is what you would write inside an R script in the R directory; and I know that with Rcpp you have to change some things inside the DESCRIPTION too, but I don't know enough about it to give you a correct answer. How would you recognize a package built with fusen, like on CRAN; how would you recognize packages that have been created with it? You would recognize it if there is a dev directory inside. I mean, the dev directory is the way we use this "Rmd first" approach, as with golem; so if you already use golem for Shiny applications, you will see this dev directory, which is the documentation for the developers, and it plays the same role across all of your packages. So if you have a dev directory inside your package, it's either a golem or a fusen, or you followed the same guidelines; so you could write analytics on the usage of the two packages. I think it's soon time to wrap up the session. Thank you, and thanks again to all the speakers for the interesting talks; the discussions can keep happening in the side channel of the session too. Could you share the final slide of the session? Thanks everyone for attending this session. Coming up next, at 10:15 UTC, is a yoga session to recharge, with calming meditation, as well as a community meeting. After that, the next sessions are about markets and models, and another one about databases and spatial applications, so we hope you'll find something
interesting for you. Thanks again to all the speakers; it was a very interesting session.