Thanks for coming, and thanks to Heather for being the technical helper today. So if you came today, you probably have an idea of why you want to test your package or packages. I decided to justify this with a funny quote by Sharla Gelfand, who wrote: who knew there was actually a robust way to test functions besides trying random examples and seeing if they work, and then not remembering them once you kill your console? That's a great justification for a formal framework for unit testing, because when you're developing something, you will be testing stuff anyway. This is just a better way to do it, one that will make your life as a developer easier. And then there is the justification on the other side, for your users. And again, not all my sources are tweets, but some of them are. Jenny Bryan, in this tweet, wrote: if you use software that lacks automated tests, you are the tests. So that's for the users. The users of your packages probably don't want to be the tests for your software. And in the rOpenSci guidance for package development, which we have as guidelines for our peer-review system for packages, we underline the importance of testing. We write that all packages that are peer reviewed by rOpenSci should have a test suite that covers the major functionality of the package. The tests should also cover the behavior of the package in case of errors. So that's one thing we say, and we also say that it is good practice to write unit tests for all functions, and all package code in general, ensuring key functionality is covered. Test coverage below 75% will likely require additional tests or explanations before being sent for review. So we even have this arbitrary threshold for test coverage, which is the percentage of lines of code that are run by tests. And today we will see how to get the test coverage and how to report it as well. So how do you start testing your package?
So you can set up testing by using the usethis function use_testthat(), which adds a dependency on testthat in your DESCRIPTION file and creates a tests folder — we will do that. And then every time you want to create a test, you would run usethis::use_test() with the name of the test. What you write inside the test file is a formalization of the random examples that Sharla mentioned in her tweet. A word about the premise that things aren't always simple: in a talk about testing, Jenny Bryan wrote that testing is often demonstrated with cute little tests and functions where all the inputs and expected results can be enumerated, but in real packages, things aren't always so simple. And today, our goal is to explore less cute tests — well, maybe they will still be cute, but less cute — and actual testing. So, when you have a package that interacts with web resources — a package that sends data to an API, a package that gets data out of a website — it can be hard to test. The tests can be slow: if you're not allowed to make, say, more than 10 requests per minute, then your tests can be very slow. You don't want your tests to overburden the web resource; it would be bad practice to hit the API every time you run a test. You don't want to use credits for testing — by this I mean, if you have a paid account for the web resource, you don't want to use it up for testing. And it might be impossible to trigger errors: you want to know how your package behaves when the API is down, but you cannot take the API down to run your tests. So these are obviously interesting challenges, and dealing with them is a good use case for learning more about testing techniques. Our goal today will be to gain specific knowledge about HTTP testing with the vcr package, but also more general knowledge about testing our packages with testthat.
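The setup flow described above can be sketched as follows. This is a sketch, not the exact demo code: the function name `get_packages()` and the `ggseg` universe are the tutorial's running examples, and the exact assertions in the demo notes may differ.

```r
# One-time setup: adds testthat to Suggests in DESCRIPTION
# and creates the tests/testthat/ folder.
usethis::use_testthat()

# Per-test setup: creates tests/testthat/test-get_packages.R
usethis::use_test("get_packages")

# Inside the generated file, you formalize the "random example"
# you would otherwise type in the console and forget:
test_that("get_packages() returns package names", {
  packages <- get_packages("ggseg")
  expect_type(packages, "character")
})
```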
And we'll make a short tour of the materials for today. What I forgot to say, I'm very sorry, is that to access the speaker notes, you can type S. These are reveal.js slides, and if you type S on your keyboard, you should be able to see the speaker notes — I tried to write some. About the website: yesterday someone emailed me telling me they couldn't access the website. If you cannot access the website, please tell me now so I can help with that. No? Okay. On this website, on the left, there is a sidebar with the three parts of the tutorial for today. In each of these parts, there is a link to the slides. There are also the preparation instructions that we will mention later. And then there is a page called Demo — this is the hands-on part, this is what we're going to do. In each of these demonstrations, there are functions that we're going to run, and sometimes even little snippets to copy-paste, and there is a copy button to copy these little snippets to your clipboard. Another thing: you can search the website. I haven't tested that too much for this particular tutorial, but it might be helpful if we mention something somewhere and you don't remember where. There is a Resources page with many of my favorite resources. And there is a link to the GitHub repository, if you prefer to access the source of the slides. Also, tell me if the font is too small on any of the web pages I'm opening — for instance, this is only a 110 percent zoom for GitHub, so if I should make it bigger, please tell me. There is a content folder, and this is where you'd find the R Markdown source of any of the pages that are on the website. And this is all I wanted to say about the website; I'm going to open it again to have the demo at hand. And our example today, the resource we will interact with, is the R-universe API.
So R-universe is a new platform by rOpenSci for publishing and describing R packages, R Markdown articles and other R-based content. I'm not going to present it in too much detail — Jeroen Ooms is giving a talk about this on Friday, so you can go to the keynote talk and learn more about R-universe. What's great about R-universe is that it has an API that we will be using as an example today of a web resource. This is the documentation of the API: it has several endpoints, and we're going to use two of them. And it's an API that doesn't require authentication. So we're going to do a first hands-on exercise that should help us all start at the same point: we're going to load, use and check our little package, we're going to add a few little tests to this package, we're going to explore its test coverage, and we're going to set up continuous integration. I think that's the point where I open the demo file — where did I put it? So, Demonstration. Does anyone have any question regarding the setup instructions? We're going to look at them just in case you didn't have time to look at them. The idea of the setup instructions was to create a GitHub account if you didn't have one, to install some packages, and to make sure you were able to develop packages on your machine. And to all start from the same setup, we're going to use a template repository. This is the template repository, it's called apple pie, and I'm going to copy-paste its name. Oh, I first almost created a fork — sorry, that's a bad example. So I'm going to use this template: there is a green button, and this will generate the same repository under my own account. I'm going to call it apple pie as well — it's available, which is good. So I'm going to create a repository from the template. Good. So I have my online repository, and I'm going to get it locally with the usethis::create_from_github() function.
And the repository is apple pie, and the destination directory should be just one step above — I think this should work, we'll see. Oh, no. So, of course, I'm going to create it elsewhere — and I'm saying "of course" because the apple pie project I already have locally is the one that I used for creating the template in the first place. Okay, so here we have this package. It doesn't have many things in it. It has a DESCRIPTION file — if you are not me, you can change the name if you want to. There is a README, and in this README there's also, I think, my name, so you could change that to yours. You don't have to; things are going to work even if you don't change the username associated with this package. And so I'm going to do a few steps, and then I will give you all time to repeat them and see if it works. I'm now going to load the functions of this package. The package has only one function for now; it's called get_packages(), and it lets me access the packages in one universe. I'm using the ggseg universe as an example. So hopefully this works — yes, the get_packages() function returns the names of the packages in the ggseg universe. So that's one thing; then I'm just going to check the package, to see whether it passes R CMD check as it is, because it would be quite bad if we were starting with a package that is already very broken — the tests wouldn't make that better. This is extremely slow, so I'm just going to check what's there. Yeah. Okay. I had the same problems yesterday with a different package. Which is interesting, right? It's an HTTP thing that doesn't work, but I don't have control over this one. So I'm going to stop it — I know what's wrong with my package: there is one thing missing. Sorry, we need to add an MIT license to the package, and I am the copyright holder. You can be the copyright holder.
I give you all the copyright on the example package, no problem. So we now have a license, and I'm just going to make a commit — I really need to remember to do that every time I change something. So, why doesn't this work? It works, okay: add the license, commit. Okay, I'm just going to stop sharing — I need to enter my key. So, oh no, not like that, sorry. So now I'm screen sharing again; I was entering my secret key and I could have burned it. I heard somewhere that you shouldn't share your screen when you're typing an actual secret on screen — that's why I stopped sharing. So we have this package, it's working, and I'm going to add a cute little test to it. To start having a test infrastructure, I'm using the usethis::use_testthat() function. Sorry. So again, it changed the DESCRIPTION file: in the DESCRIPTION file, it added a dependency on the testthat package, third edition. And there is a tests folder now. As written in the instructions, in the message from usethis, I have to use use_test() to create a basic test. So I'm going to create a test of the get_packages() function, and I already have a test that I wrote in the demonstration notes. In the "actual little test" section, I'm going to copy that to my clipboard. So I'm going to use usethis::use_test() with the name of the get_packages function, and in that new test file, I will delete the example test and paste the one I have prepared. Now, to run the test, there are several different options, depending on whether you use the IDE or not, and how you use the IDE. There is a button called Run Tests. And I am going to, for myself, use the testthat::test_file() function — and I don't remember whether I need to write the absolute file name or not; I need to be a bit more precise about where it is. Okay. So I used the test_file() function to run the test, and it tells me that the test passed, which is great. And another way to run all tests would be to use the devtools::test() function. I have only one test file,
so that will be just as fast. And now I'm going to look at the coverage report for my package with the covr::report() function. And it has 100% code coverage: there is one function, and I wrote one test for this function, so it's not that surprising that the code coverage is so good, but it allows us to look at the test coverage for the first time. We can click on the file name in this table, and it shows us which lines of code are run when the tests are run — and here it doesn't miss any line at the moment. And I think I'm going to let you do these steps yourself now: creating the local project if you don't have it, loading it, and adding a test, if that's okay with everyone. Can you type in the chat, or speak, if you have any issue, and say when you're done, you have the package, and you've run the test for it? Does that sound okay for everyone? — Hi, good morning. Can you please show the steps again, because it was really too fast? — Sorry, yeah. Do you also have the demonstration notes? What I'm showing now — can you open this too, just so you have the commands to copy-paste later? Edya, do you have the demo notes open? — Yeah. — Okay. And do you have the local repository, or where are you? — I did the same as you, "use template", and then I got lost. — Sorry, yeah. So then you can open a fresh RStudio session anywhere on your computer. — Yes, done already. — Yeah. And then you would use usethis::create_from_github() — so this line, create_from_github() with your username, slash, apple pie. — Thank you. — Hi again. I'm just wondering, could you explain to me, or to us, what this devtools::dev_sitrep() is doing? — Which one? Oh yeah, the one from the instructions. What do you get when you run it? So sitrep means situation report, and it will tell you if you have any issue, something you missed. — Okay, yeah. Well, I mean, we got all the messages, but I was just wondering. — Oh yeah.
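The different ways of running tests and inspecting coverage mentioned here can be summarized as below; the test file name is the one the demo would have generated, so treat it as an assumption.

```r
# Run a single test file (test_path() resolves the location for you):
testthat::test_file(testthat::test_path("test-get_packages.R"))

# Run all the test files of the package:
devtools::test()

# Interactive, line-by-line coverage report in the viewer/browser:
covr::report()

# Non-interactive alternative: prints coverage percentages per file.
covr::package_coverage()
```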
So I'm going to run it again on my computer, just to see. Okay. So it prints something about the local R version and RStudio version, and it tells me that two of my packages are out of date. When there is a red dot, it means that something needs to be done about it. I'm not going to update any package right now, because I think things work as they are and I don't want to break my local setup, but this could be something that needs to be done. And it tells me the same thing about the package I'm developing, that it also has a dependency that's out of date. And then there is usethis::git_sitrep(). It also checks a lot of things, about the local git setup, and it's the same: if there were something to fix, it would show a red dot. Does that answer your question? — Okay, yeah. Well, I guess I saw there's a red dot in one of mine. — Oh yeah, so maybe you can do like me and ignore it. It depends on what the red dot is. — Yeah. One question: I'm getting an error. When I do create_from_github(), it says "unable to discover a GitHub personal access token". — Oh, so for that you need to look into the usethis vignette about tokens. For instance, if you run the usethis::git_sitrep() function, what does it give you? — Okay. But I got a similar message a few days ago, and I think that git_sitrep() was the function that gave me the steps to follow. "Call gh_token_help()." — So I think, yeah, that page will tell you the name of the function to call, which is something like usethis::create_github_token() or something similar. — Okay, great. — create_github_token(), I wrote it in the chat. — Can I ask you a question, Maëlle? — Yeah, of course. — I remember that after you ran use_testthat() you opened the DESCRIPTION, but I'm not sure if you changed anything. — Oh no, sorry. I just opened it to show it, but I didn't change anything by hand. — Okay. Thank you.
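The token troubleshooting in this exchange boils down to three usethis/gitcreds calls; this is a sketch of the typical flow rather than the exact commands run in the session.

```r
# Opens the browser on GitHub to generate a personal access token (PAT):
usethis::create_github_token()

# Prompts you to paste the token so it is stored in the git credential store:
gitcreds::gitcreds_set()

# Situation report: checks, among other things, that the token is discovered.
usethis::git_sitrep()
```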
And does anyone know if there are settings in Zoom to make the person who's talking come forward when they speak? If someone talks, tell me your name, just so I know who's talking — or you don't have to, it's fine, you can just ask. — Oh yeah, I can find your name, but I had to click through. — Okay. So, I created the token; then it's automatic, right? — No, I think afterwards you have another function to call, from something like the gitcreds package. Does it exist here? Yeah: gitcreds, gitcreds::gitcreds_set(). — Okay, thank you. — And also, just in case anyone gets lost at some point and we don't have time to wait — I hope we won't come to this situation, but if it happens — I'm pushing my own copy of the template to my GitHub account, and I'm going to put the link in the chat to that version. So this is not the template repository; this is the one with the tweaks we're making. I hadn't seen the chat questions, I'm sorry, I'm going to read them. Yeah. Oh, so Martin — have you tried, like, can you install the package? If you build it, does it work? — Yes, building works, and testing works too, with devtools. Those work. But I'm now just googling something on the side; it can very well be just my system. — And did you — I know that this is not a very satisfying suggestion — try restarting the IDE you're using? — Well, actually, I'm just doing that now. So fingers crossed; I'll give a heads-up. ... I'm successful, thank you. — Hi. I just got a bit lost. I have the same problem as Martin. I'm not sure about the coverage report — what we should have, I don't know. — So I'm going to run that again. covr::report() — the function I ran is for having the interactive report, but I think we can also run the one called package_coverage(). I'm just going to copy that into the chat before I run it — the non-interactive version.
So if we run that one, it will just return a percentage — yeah, it returns code coverage by file. And Valentina, if you run this one, does it work? — Actually not. Same, same error. — Okay. Well, let me... I wonder what the issue is. I'm going to look at the version of covr I have. I have covr from CRAN. Do you know what version of covr you are using? — Oh, so please everyone do like Philip: reopen your IDE and the project — that's very good news, so I hope it works for other people. — Okay, I will do that. — And I'm going to commit this. Oh, and I have the same version as you. Okay. And did things work — the repository creation? — No, I have the same problem, but it's okay, I will try to follow along. — So which error? — I created the GitHub token, but it says... yeah, it's the same error: "gh_token_help()", "unable to discover a GitHub personal access token". — And after, did you use gitcreds? — Yeah. — And when you run the gitcreds function again — have you tried restarting, just to be sure? — Yeah, I tried to restart, but it's the same error. — So in that case, what you can do is to try — how do you usually set up a local project corresponding to a GitHub repository? — If I do that, usually I use the token and it works fine, so I don't understand. — Because otherwise, you can try to follow the instructions that are in your GitHub repository to set it up locally, or maybe download the source of it. So if you go to apple pie — I think there's something like... where is that? I'm wondering how it would tell me to... oh, here. There is a button called Code, and it tells you: you could use, on the command line, git clone, with SSH or with HTTPS. Do you sometimes do that, or not at all? — Yeah, I do. — Okay, so do it this way, and then open the R project. — Yeah, that would be correct. — Yeah, sorry.
So I'm going to wait three more minutes, and then we can continue. Okay. — Hey, Maëlle, Martijn here, just an update on the covr::report() function not working: I don't get it to work in my RStudio, but I switched to the Windows Subsystem for Linux, and there it works. So I think that's enough to follow along for me. — Okay, cool. Similar problems. And what operating system are you using to start with? — I have a Windows 10 corporate computer. — Yeah, so sorry about that. — No worries. — And the other person who asked was Valentina. So were you able to make it work after restarting? — Right now, yes. — Okay, cool. — You should always be able to set your GitHub access token via an environment variable, with Sys.setenv(), and it should be picked up as well — in R at least; I don't know about RStudio. And to answer the other question: I guess you mean the Codecov service on private repositories, and I cannot remember. For what we're running now, covr, we're running it locally, so that's fine, but when we set up continuous integration, we'll see if it works with a private repository. — Thank you. — All right. And — sorry, you had the same issue, I think? Oh, no, no problem. Okay, so you had an install error. Okay. Yeah, it's cool. And were you able to open the local project? — Oh, sorry, I was trying the other option, with the Sys.getenv() environment variable, and it didn't work. So yeah, I think your solution is better than wasting time, but it's okay, go ahead, I will try to follow. — Okay, but then in the break we can also look at it, yeah. — So the next step is setting up continuous integration? — No, I wanted to add another function first to the package, just to see how the coverage report was going to change. — Anyone else seeing a black...? Oh, I think that's the chat — I've closed it. Oh, but that's good to know; if I open the chat — that's clever on the part of Zoom. Good.
So basically, we can add another function beside the get_packages() function, and I wanted to show the function we can add and how the coverage report will change. To create a new function, we'll use usethis::use_r() with the name maintainers, and I just wanted to comment a bit on the code of this function. The maintainers endpoint of the R-universe API doesn't return plain JSON directly: it returns another type of JSON, NDJSON (newline-delimited JSON), so we cannot write this function the same way the get_packages() function is written. So, like in the other one — and I'm sorry, because I didn't even comment on the function we had to start with — we have the URL of the remote endpoint, in which we put the name of the universe that we're trying to get information about. Then we make the query, the request, with the httr GET() function. After that, we have a line where we stop for status: if the request was not successful, the code will error at this point and return the error status — for instance, if the endpoint was not found or the remote API is down. After that, we get the content of the response from the API, and we save it to a local temporary file. And that's done with the withr package, which is a very useful package to know. When I call the withr::local_tempfile() function, I know it's going to create a temporary file for me that I can use, and it will destroy it after the function is done running. That's why it's called "local": it's a temporary file that will be thrown away, and I don't need to think about that — withr will deal with it for me. When this temporary file is created, I write the content of the response in there. After that, I read the content of the response with the jsonlite JSON streaming function. And once that's done, I can return the data from that file — and the file is deleted by withr. So this is what the get_maintainers() function does. I'm going to copy it to my clipboard and add the function.
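A sketch consistent with the description above — not the actual demo code: the endpoint URL and the exact httr/jsonlite calls are assumptions, so check the demo notes for the real version.

```r
# Hypothetical reconstruction of get_maintainers(); URL path is assumed.
get_maintainers <- function(universe) {
  url <- sprintf("https://%s.r-universe.dev/stats/maintainers", universe)
  response <- httr::GET(url)
  httr::stop_for_status(response)  # error out here if the request failed

  # The endpoint returns NDJSON (one JSON document per line), so we write
  # the body to a throwaway file and stream it back in.
  tmp <- withr::local_tempfile()   # deleted automatically when the function exits
  writeLines(httr::content(response, as = "text", encoding = "UTF-8"), tmp)
  jsonlite::stream_in(file(tmp), verbose = FALSE)
}

# The new dependencies then need to be declared and documented:
usethis::use_package("withr")
usethis::use_package("jsonlite")
devtools::document()
```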
So, usethis::use_r() with maintainers opens a maintainers.R file, and I can paste the code in there and save it. And if I reload the package, I can run the get_maintainers() function. I'm doing that with the same universe as previously, the ggseg universe. And there is only one maintainer listed for the ggseg universe. And you can try that with other universes from R-universe as well. Now, one thing that's still missing is dependencies, because this function uses more dependencies than the other function we had: it uses the withr package and the jsonlite package. So we need to add them as dependencies in our DESCRIPTION file, and I'm going to do that with the usethis::use_package() function — so, withr, and then the same with jsonlite. So at this stage, I've added one script, and I have also edited the DESCRIPTION file: if we compare the DESCRIPTION file to previously, we see that there are more dependencies. One last thing I need to do is to document the package, so that there is now a manual page for this function. And now, if I run covr::report() again — because I've created a new function but I haven't added any test for it, I have a much lower coverage, only 38%, and covr::report() can show where the lines that are not covered by tests are. In this case, it's quite straightforward, because there is one R script that's tested and one R script that's not tested, but in some cases it really helps to look at that to see what has been missed. So this is our new function, and if you have time, you can add a test for this function. Okay, so I'm now going to push and commit from R with the gert package, so that I don't need to enter my key all the time, I think. Okay, git push. The gert package is a helper package for using git. And now we're going to add continuous integration.
So the idea of continuous integration is that we want our tests to be run somewhere else than on our local computer, and on other operating systems. And we've seen, with someone not being able to use the covr package on one operating system, that using different operating systems can help us find problems that we didn't know about. We're going to use GitHub Actions, because it's well integrated with GitHub, which we're already using in this tutorial, and also because it is very well supported in the usethis package. And I'm going to add an action — the "check standard" action — from the usethis package, by using the usethis::use_github_action_check_standard() function. That's from the demo notes, and I'm going to switch to R and paste it there. So it created a .github folder, which was added to .Rbuildignore, and it created what's called a workflow. We can look at the workflow we got: it's called R-CMD-check.yaml. It indicates when this check will be run: every time we push to either the main or master branch, whichever is the default branch, and every time there is a pull request. And it will be run on four different setups: Windows, macOS, and Ubuntu with both the release version and the development version of R. It has all the steps it performs: installing R, installing the dependencies, and then running the check. So that's what it does. But for this to happen, we need to push this file to our remote GitHub repository. And I also need to re-knit my README, because usethis added a badge that will indicate the status of the continuous integration. So first, I think I need to install the package, with devtools. The reason I need to install the package locally is that otherwise the README won't knit, I think — I have an example in there. No — yes, I need to install it. So that's devtools::install(). Then, to push all this: okay, we add and commit, and I'm going to push it to the remote repository.
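For orientation, the top of the generated workflow looks roughly like this — a from-memory sketch, since the exact template evolves with usethis versions:

```yaml
# .github/workflows/R-CMD-check.yaml (abridged, approximate)
on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

name: R-CMD-check

jobs:
  R-CMD-check:
    runs-on: ${{ matrix.config.os }}
    strategy:
      fail-fast: false
      matrix:
        config:
          - {os: windows-latest, r: 'release'}
          - {os: macOS-latest,   r: 'release'}
          - {os: ubuntu-latest,  r: 'release'}
          - {os: ubuntu-latest,  r: 'devel'}
```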
And now, if I go to my repository — my apple pie repository — and I go to the Actions tab, I see that there is a workflow running there, running the check. I hope it will have a successful result, but we will see. And I also wanted to add another workflow: one to run the code coverage online, which is something Fodil started mentioning. So we have our code coverage locally, and it's great if we think to look at it regularly, but it's even better if it's run every time we make a change. And for that, we'll first run the usethis::use_coverage() function, and then a usethis command to add another workflow. I'm going to do that now, and then everyone can start adding continuous integration. So, usethis::use_coverage(). And then the other command was usethis::use_github_action() with the test-coverage workflow — so I'm going to add this one too. I also need to knit the README again, because there is also a badge for coverage now. Okay. And I'm just going to have a quick look at the remote GitHub repository: now there is another action running. So, can you all try to add continuous integration to your repository, and then tell me whether it works? — Oh, yes: we created two workflows, because one of them runs R CMD check and the other one runs the coverage function. I can't quite remember why you'd do that, because sometimes you have workflows that do both at the same time — I can make a note, and during the break I hope I can look into why there are two. I guess an advantage is that the code coverage workflow is quicker, so you'd have a result on that earlier than from R CMD check, I think. They do different things, but not completely different — you could imagine running one after the other. And if we look at the README, we now have the two badges, but right now there is no status, and the code coverage is unknown, because these things are still running. So the badges are quite useful, because they show something.
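The two continuous-integration steps described here can be sketched as two pairs of usethis calls; the "test-coverage" workflow name is my assumption about which workflow was added.

```r
# Workflow 1: run R CMD check on push/PR, on several OS/R combinations.
# Also adds a status badge between the badge markers in README.Rmd.
usethis::use_github_action_check_standard()

# Workflow 2: compute coverage on CI and upload it to Codecov.
usethis::use_coverage()                      # Codecov setup + coverage badge
usethis::use_github_action("test-coverage")  # the workflow that runs covr
```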
And if you click on them, you should arrive at the right place — for instance, the code coverage report on Codecov. And now it works: my code coverage has been sent to the Codecov online service. Sometimes it fails getting sent. I don't think we ever need to add some sort of secret credential from Codecov, but sometimes, what I've observed is that it doesn't communicate with Codecov right away, and I need to re-run the workflow for it to work. Yeah. So — I think that answers it, because you need to look at the workflow, and I think it might still be running. And how did I add the badges? I didn't add them: usethis did that for me. When I ran the first usethis call, use_github_action_check_standard(), it added the badge for me. And the way usethis does that is: when you create a README with the usethis::use_readme_rmd() function, which I did for the template, there are these comments, "badges: start" and "badges: end", and this is how usethis knows where to edit your README file for you. Oh yeah, good point, my bad — the README needed to be re-knit. And I suppose that's a side effect of using the template repository, because when you run use_readme_rmd() yourself, it adds a git hook, and you are not allowed to commit if you haven't re-knit the README; but I suppose that because we used a template repository, the git hook was lost somehow. Is everyone running, or waiting for the workflow to run? Okay, so we're not going to wait for it to run; we're going to go back to the slides for the moment. So what we have at this stage: we have a package, it has two functions, it has some tests, it has continuous integration, and we're all keeping our fingers crossed that things work well.
And now, to get to the core topic: what if our internet connection gets fragile? Because our tests right now — our one test — use the internet connection; it actually calls the API every time we run the test. And I want to introduce the notion of test fixtures. A test fixture, according to Wikipedia, which is quoted in a testthat vignette, is an environment used to consistently test some item, device, or piece of software. So this can be many things; it's a very vague and general notion. In general, in R, there are two things that are useful to know. First, the withr package, for when you want to do something and you want it to only change your current environment, or a test, or a function. In the get_maintainers() function that we added, there is withr::local_tempfile(), which creates a local temporary file — a temporary file that is deleted at the end of the function. And you can also use, for instance, withr::local_options(): if at one point in a test we want to change the options, we can use this function, and we know that this won't affect other tests later down the road. Then, if you have example data for your tests — which can be useful — you can put it in a folder under the tests/testthat folder. And in tests, the problem is: how do you know where that folder is? Because depending on how you run the tests, they are not run from the same directory. So, to be able to locate the directory where you put the test data, there is a testthat function called test_path(), and it will return the path to your test data. So what you would do in your test for getting the data is use the test_path() function. — Sorry, can you repeat this point? — Yeah, test_path(). So this returns a path — it returns where the data is.
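The two fixture helpers mentioned above could be combined in a test like this; the data file name is made up for illustration.

```r
# Sketch of a test using withr and testthat::test_path().
test_that("parsing works on stored example data", {
  # Option change is automatically undone when this test finishes,
  # so it cannot leak into other tests:
  withr::local_options(list(stringsAsFactors = FALSE))

  # test_path() builds the right path to tests/testthat/data/example.csv
  # regardless of the directory the tests are run from:
  example_data <- read.csv(testthat::test_path("data", "example.csv"))
  expect_s3_class(example_data, "data.frame")
})
```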
Maybe I can show it. Say, in my demo package folder, I created a folder called data, and I run testthat::test_path("data"): it tells me where the data is relative to where I am now, for instance the root of the package. But when you're running the tests, you're not at the root of the package; you're somewhere else. So in your test you would use read.csv() (or readr::read_csv(), depending on what you use) with test_path("data", "blah-blah.csv"). What I mean is you would never hard-code the path itself; you would call the test_path() function so that it builds the path for you, depending on where you are relative to the root of the package. Thank you. Thank you for helping me clarify.

Then, when you're interacting with a web resource, you can use fixtures too. The idea is: instead of calling the API every time we run a test, what if we create a test fixture and store in it a response from the API? Then we don't need to call the API; we read this file instead. This is the idea behind two packages for HTTP testing: vcr, which we are going to use today, and httptest, which is another one. There is a third package for HTTP testing called webfakes, and it's a bit different because it actually spins up a local web service; we won't be looking into webfakes today. We won't be looking into httptest either, but once you know how to use vcr, httptest is not that different.

So we're going to try vcr now. Let's go to the demo notes. To use vcr, the first function I'm going to run is the use_vcr() function. It's not in usethis, it's in vcr. So vcr::use_vcr(). It tells me where I will find the fixtures for vcr, and this will become less theoretical once we create some fixtures. It added vcr to DESCRIPTION, and it added an example test file as well as a configuration file. We're not going to tweak the configuration right now.
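The test_path() idea above can be sketched in one line; the file name data/example.csv is made up for illustration:

```r
library(testthat)

# test_path() builds the path relative to tests/testthat/,
# so this works whether tests run via devtools::test(),
# R CMD check, or interactively from the package root.
dat <- read.csv(test_path("data", "example.csv"))
```

Hard-coding "tests/testthat/data/example.csv" instead would break under R CMD check, where the working directory is different.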
We will tweak it later today. And it tells you where to learn more about vcr: this is the book Scott and I wrote about HTTP testing. So let's look at the vcr example test that we get. It says "run and delete me". We're not even going to run it; we're going to delete it now and write our own example, which I think will be clearer. I'm going to take a snippet from the demo notes with a new version of our test, and edit the test-packages test file. At the moment it was quite simple: we were getting data and checking that the data was a character vector. Now I'm going to replace that with a new version. The difference is we're still calling the API, via the get_packages() function, but we're doing that inside a function called vcr::use_cassette().

So let me run the tests a first time, for instance with devtools::test(). And it created a fixtures folder: if we go one step up, so not tests/testthat but tests/fixtures, we now have a YAML file called packages.yml. It's called packages because that's the name I gave to the cassette. What it contains is a file representation of our interaction with the API. It has the request and the response: the request, with what method we used (a GET) and what URI we called; and what we got, a success, with all the headers and the data that R-universe returned.

Sorry, could you make your console a little bit bigger? I want to see the last command. Oh yes, sorry, I can. And if I run the tests again: it's much faster. It's not that important in our lives today to have a test that's half a second faster, but the reason why it's faster is quite cool: the test did not call the API.
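The rewritten test described above can be sketched as follows; get_packages() and its argument are the workshop's demo function, and the cassette name "packages" matches the fixture file created:

```r
library(testthat)

test_that("get_packages() works", {
  # use_cassette(): on the first run, the real API call is made
  # and recorded to tests/fixtures/packages.yml; on later runs,
  # the recorded response is replayed instead.
  vcr::use_cassette("packages", {
    packages <- get_packages("maelle")
  })
  expect_type(packages, "character")
})
```

Note the pattern: the cassette name, a comma, then the code to record inside curly braces, just like the body of test_that().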
Instead of calling the API, the test used the file that holds the record of what an interaction with the API looks like. If you were not following an online tutorial today, what you could do is turn off Wi-Fi and run the tests again, and they would still work, because they no longer need an internet connection. You can now try these steps yourselves, and I'm going to look at the chat.

I have a question. What you're showing now in the YAML, the URI with your name in it, is that something we need to change? Oh, sorry, yes: the test-packages example was very self-centered, and I used my own R-universe as the example. That's why my name is there. That's a good point; I should have used a different universe as the example here. For instance, we can go to the rOpenSci universe: it's a collection of packages, and you could use "ropensci" as the value for the universe argument. And I can also find your own name there, Pauline, because you maintain a package for rOpenSci; for instance, we can see your package in there. So "ropensci" is the name of that universe, and you could create your own universe with all your packages, and what would be here would be your GitHub username. To know more about R-universe, there is a keynote by Jeroen on Friday, and there is an rOpenSci community call. But for now, you can keep my name in the test, or change it to "ropensci" if you prefer. And I'm going to commit all of this.

One question: I couldn't understand why the second time it doesn't call the API.
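An abridged sketch of what such a cassette file contains; the exact field names follow the vcr YAML layout, but the URI, body, and timestamp here are placeholders, and a real cassette also records full headers:

```yaml
http_interactions:
- request:
    method: get
    uri: https://maelle.r-universe.dev/api/packages
  response:
    status:
      status_code: 200
    body:
      encoding: UTF-8
      string: '["some.package","another.package"]'
  recorded_at: 2021-01-01 00:00:00 GMT
```

When the cassette is in use, vcr matches an outgoing request against the recorded ones (by method and URI, by default) and replays the stored response.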
Because of what vcr does: I think of it as a recorder. It will try to find an existing record of a previous API interaction, and if there is one, it uses it. If there were none, it would either error or call the API and create one. In this case, when we call use_cassette(), vcr finds the cassette, sees that there is already a record of a GET to this URI, and converts the YAML content into something that can be used as a response. Okay, thank you. So it's like caching the response. Okay, thank you.

And by the way, were you now able to create the local repository? No, I'm trying to follow and understand now, because I think I have a problem with GitHub; I changed so many things there, so I have to fix it. Thank you.

What really helped me understand, what convinced me that it doesn't need the internet connection, is the idea of turning the internet off and then running the tests. I'd do that now, but of course that would be quite bad: you would all be leaving the meeting.

What is use_cassette() actually caching? Because we're using get_packages() and assigning the result. Is it caching any request, or what is it caching? Any request that happens in that call. In this case there is only one API call, but you could have longer code that calls several endpoints, and it would record everything in one cassette. And it's not tied to one HTTP package: vcr works with both httr and crul, and there is work by Scott to make it work with httr2 too, the new httr-like package under development started by Hadley Wickham. But it won't work with curl: if you use curl directly and want to do HTTP testing, you would need to look into the webfakes package.
And with the webfakes package, there is no helper function for recording things, so you would need to create some sort of recorded response by hand, copy-pasting JSON into a file, that kind of thing. Do you use curl in your package? Yes. It has lower dependencies compared to httr; that's the reason why. But I think if you use curl, you're already used to having to implement more things yourself, so webfakes won't be too clunky to use. Okay, thanks. And yes, vcr works with POST requests, with any request.

When I changed the universe to "ropensci", I got a failure. What's the error message? I think it's because the recorded cassette was still the old one. Yes, and that's an excellent thing to mention: if you change the code in a test that uses a cassette, and it changes the calls that are made to the API, you need to re-record the cassette. The easiest way to do that is to delete the existing cassette and rerun the tests. This is the reason why one should be quite careful when naming cassettes, so that it's easier to find them. Here we have only one, but sometimes you have many, and naming a cassette is like naming a test file: the name should make sense to you.

Does anyone have any problems? Is it fine if I go back to the slides and we break now? I'm scrolling back through the chat to see if I missed any questions. Okay. So now we have one test that works without the internet. It's done, and we'll build on this later. And now we're going to take a short five-minute break. I won't be available during the first two minutes of that break, but then I'll come back, read the questions, and answer them.

Maëlle, can I ask a question? Yes, you can. Sorry, I said yes but I was muted. I can't understand the cassette thing very well, actually. I went to the vcr package docs and tried to read more.
I would rather recommend the book, because the book has a Getting Started chapter. I mean, I co-wrote the book, so of course I'm recommending it. This chapter, the vcr usage chapter, is a step-by-step walkthrough of what we've been doing now. And I think it makes more sense once one runs it locally: you can see that the YAML is created, but the internet is no longer needed. Otherwise, yes, it's a weird concept. Is it possible to share the link in the chat? Yes. Thank you very much.

And now I'm looking at your question. Can I read it? Okay, I think that's because you haven't put the code in it. If you look at use_cassette(), after the cassette name there is a comma and then an opening curly brace, just like in the test_that() function. So you need to have code inside the curly braces.

The break is soon over. Kamarie, can you confirm that you tried what Beatrice said? The code is the one from the "start using vcr" part of the demo notes, and there is a snippet. No, no, no one should be sorry for anything! So there is a code snippet in the "start using vcr" part of the demo notes, and this is the snippet that you should copy to your clipboard and put into the test file. Does that make sense? Okay, I'll try that one. I just got a little bit lost, so okay, I'll try it. Thank you.

Then we can go to the slides for part two. We're going to see how to test things in two more cases. We have a test that works, which is useful when the API has no problem. And I'm very happy that this API has no problem today, by the way. It almost never has problems, but I would have been quite at a loss if the API had been down today.
But in real life, a web API is down sometimes, or it can have intermittent failures, down one moment and up again the next. In that case, in your code, you may want some sort of message to the user: if you receive, for instance, a 500-range failure code, you might have a message that says "try again later", so your user knows what's up. And in your tests, it may be useful to have a test for that case in particular. But we can't create the error on demand: you're not going to take the API down before running your tests. You can, however, mock or fake the error.

When using vcr, as we are today, you have two approaches. You could use the webmockr package. webmockr is a package that vcr already uses under the hood, and you can use the webmockr package directly; that works, and it's documented both in the vcr docs and in the HTTP testing book. But the approach I want to show today is editing cassettes. I find it easier, it is a perfectly fine approach, and it's also closer to what you would do if you were using the httptest package, so it's a more transferable thing to do.

So let's look at the demo notes; this is the part two demo. We are going to add a new test for when there is an error. I'm at the beginning of the demo notes, and I'm going to copy this snippet and look at it in the test file. This is a new test in the test-packages file, and it says that it will be skipped if we turn off vcr. When you are using vcr, you can use an environment variable to tell vcr not to use cassettes: to record nothing and replay nothing, so that you get a real interaction with the API. And we want to skip the test in that case, because in this test we're going to use a fake response.
So if we were using a real interaction with the API, our test would fail, because the API is not going to return the error we are expecting here. And I'm telling vcr to use a cassette called packages-error, and I'm not going to use vcr to record it; I'm going to create it myself by hand. How am I going to do that? I'm going to start from the existing packages cassette. That's good, because I wouldn't be able to write a cassette from scratch: I'd have no idea what headers should go in there. And I'm going to re-save it under a different name: packages, hyphen, error. So now I have two identical cassettes, and I'm going to modify the new one.

In my test, I expect an error with the message "takeoff". Just to note why "takeoff": in the code for get_packages(), where httr's stop_for_status() is called, it says something like "failed to takeoff", because "takeoff" is the message I wrote there. That's not an especially great user interface, but it was a simple one; you could test for any error message here. And in my cassette, I need to create the error. So what I'm going to do is delete everything, or nearly everything, from the recorded response, and as the status code I'm going to write 502, which is a status code corresponding to an error. And I'm going to save. So this is a cassette as if I had recorded an error from the API. Now, if I run the tests in test-packages...

Which part did you delete? Nearly everything. And just to note, for later, I've also put the result in the demo notes. But I can show it again: I deleted everything from the response except the status code, which I changed to 502. So compared to the original packages cassette in tests/fixtures, which has many things in the response (URI, headers, body), I'm only keeping the status code. And then I'm going to run the tests. Sorry, yes?
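The error test described above can be sketched as follows; the cassette name, error message, and get_packages() are from the demo, while the exact expect_error() wording is an approximation:

```r
library(testthat)

test_that("get_packages() errors when the API errors", {
  # Skip when vcr is turned off: with a real API call,
  # no error would occur and this test would fail.
  vcr::skip_if_vcr_off()

  # packages-error is a hand-edited cassette whose recorded
  # response is just a fake 502 status code.
  vcr::use_cassette("packages-error", {
    expect_error(get_packages("maelle"), "takeoff")
  })
})
```

The test never touches the network: vcr replays the fake 502, stop_for_status() turns it into the "takeoff" error, and expect_error() catches it.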
Could you please repeat the skip_if_vcr_off part? I didn't get that word, to be honest. Yes, sorry. Our test tests for the behavior of our package in case of an error, and this error does not exist in reality: we have a fake error in a cassette file. Now, in some contexts, we will run our tests with vcr turned off. There is an environment variable for that; the documentation calls it the vcr "light switch". So you can turn vcr off globally with an environment variable, VCR_TURN_OFF. If you set this environment variable and run your tests, then even if there are cassettes with recorded responses, they won't be used: if vcr is off, the API will be called for real. And the API is not going to return an error, so this test would fail. So we should skip it in that case, when vcr is off. In general, if you amend a cassette, if you edit it by hand, you want that test to not run when vcr is off. And there are other reasons for editing cassettes: maybe you have some sensitive data in the response, or maybe you want a smaller cassette, so you can delete part of the data and make it smaller. Okay, thank you.

And I'm going to wait for everyone to do that. Just note that the two snippets, for the test and for the new cassette, are in the demo notes.

It ran for me, Kamarie here, but now I'm nervous for the CI. Yes, the coverage CI will run even if your check CI fails. Does your check CI fail right now, or does it not fail? In the workflow as it is now, they both run at the same time. I get a failing test, the second one: "An HTTP request has been made that vcr does not know how to handle." Can you show your RStudio, please?
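The hand-edited cassette described above can be sketched like this; the URI and timestamp are placeholders, and the point is that everything in the recorded response except a fake 502 status has been deleted:

```yaml
http_interactions:
- request:
    method: get
    uri: https://maelle.r-universe.dev/api/packages
  response:
    status:
      status_code: 502
  recorded_at: 2021-01-01 00:00:00 GMT
```

Because the request part is untouched, vcr still matches the outgoing request; only the replayed response has changed, so the package code sees a 502 as if the API were really down.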
And there was another question by someone else with the same vcr error message. Who has that? I think I heard two voices. So, what was your vcr error message? "An HTTP request has been made that vcr does not know how to handle", and the result is finally a failed test. So it means that in the cassette vcr is using, it cannot find an API interaction that matches, one with the same URI. What is in your cassette? Is it the same as the one in my snippet? I think this is the correct response. Can you zoom? Oh, sorry, yes.

And I'm sorry, I don't know how to make that disappear. It shows this error message, but we'll just ignore it. So it prints the error message, but the test passed at the end, actually. Yes, sorry, I should have mentioned that. And you can see that it's retrying: it gets an error, and it retries, because that is what is in the packages code. I'm using the httr RETRY() function instead of the GET() function. RETRY() is a function that will try again: if there is an error, it tries a few times before giving up for real. So if there is an intermittent failure, the code should be more robust: it can work if the API fails just once and then works again. Okay, it's fine now, working. Good.

It means I should have gone to your meetup with the workflow tips; then I would have known that already. Thank you.

Does anyone have any questions? This idea of amending cassettes to be able to get the error you want to test for is tricky; it's not straightforward. And we're going to make it worse, actually. I said that our code retries: so if the API fails and then works again, we expect the code to work. And we're going to add a cassette with exactly that. This is a bit more complicated.
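The retrying request code mentioned above can be sketched as follows; the URL construction and the "takeoff" task message are from the demo, while the exact number of retries is an assumption:

```r
# RETRY() behaves like GET(), but on failure it waits and tries
# again (here up to 3 times) before giving up for real, printing
# a "Request failed [502]. Retrying in ..." message each time.
response <- httr::RETRY(
  "GET",
  "https://maelle.r-universe.dev/api/packages",
  times = 3
)

# stop_for_status(task = "takeoff") turns an HTTP error into an
# R error whose message ends in "Failed to takeoff."
httr::stop_for_status(response, task = "takeoff")
```

This is why, in the demo, the test sees both a retry message and an eventual success when the cassette contains a 502 followed by a 200.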
So, the test for the case when we have an error and then a success. I'm not saying you should test every single one of these little things in your package, because we can trust httr, for instance, to do its job, but it's good to know how you would go about testing these weird API behaviors. So we're going to add this new test, for the case of an error then a success. I'm going to copy the test to my clipboard and add it to test-packages. This is a test that I will also skip if vcr is off, because with vcr off I cannot expect the API to show this behavior. And it's going to use a new cassette, packages-retry, which I'm going to create afterwards. In that case, it expects two things: it expects that there is a message when creating the object, and this message corresponds to the fact that httr is going to print on screen that it's trying again; and it expects that at the end, I get the result I was expecting, a character vector.

So let me edit the cassette again; the cassette I'm using is also in your demo notes. The way to go about it is to start again from an existing cassette, the packages one, and save it under a different name: packages-retry. There is one request recorded here, and I'm going to add a second one before it. In a cassette file, vcr will look for a request that matches the one we are making, and if there are two requests that match, it's going to loop through them: it uses one, then the other. So I'm going to copy the whole request object. Let me check for a better way to do that. Yes: I can select the whole request part and paste it above. And from the first one, I'm going to delete everything in the response but the status code, and make it 502. So now in this cassette I have two interactions: the same request, but a different response. In the first one, the response is a status 502.
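The error-then-success test described above can be sketched as follows; the cassette name and the behavior come from the demo, while the exact text matched by expect_message() is an approximation of httr's retry message:

```r
library(testthat)

test_that("get_packages() retries after an error", {
  # With vcr off, the real API would succeed immediately,
  # so no retry message would appear and the test would fail.
  vcr::skip_if_vcr_off()

  # packages-retry holds the same request twice:
  # first with a fake 502 response, then the real 200 response.
  vcr::use_cassette("packages-retry", {
    expect_message(
      packages <- get_packages("maelle"),
      "Retrying"
    )
  })
  expect_type(packages, "character")
})
```

Because vcr loops through matching interactions in order, RETRY() first receives the fake 502, prints its retry message, then receives the recorded success.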
And then there is the second one, which corresponds to the one we recorded, the actual response we get from the API. So first a failure, then not a failure. And I can run the tests, which should work. So if I do devtools::test(): it's running the different parts, and it works. But it's very theoretical. When you're developing a test, and the test is using a cassette, you might like to use the cassette in the console too: you can actually debug with the cassette loaded in your environment. Right now what happens is that vcr uses the cassette in the test, and I've claimed that in the cassette there is a failure then not a failure, but it gets easier to understand if we actually use the cassette interactively. For that, vcr has a function called insert_cassette().

Just before I do that: right now in my environment, if I load all functions and run get_packages(), I get the results; it calls the API for real and it works. Now I'm going to insert the cassette; the name is packages-retry. That's why vcr is called vcr: like a VCR, you can insert a cassette, play it, eject it. So I'm going to play it, and some text appears. What's important is that now vcr is not going to make real calls to the API; it's going to use the recorded interactions. So if I run get_packages() again... you see, there is an error. It should work, so that's a bad example. Why does it fail? Oh, I think I need to run the configuration first, to tell vcr where the cassettes live. So if I source this file... eject the cassette... okay. I'll do that once again, once I figure out what doesn't work. It seems to work... no, still not. So I'll let you all try using this cassette, and I'm going to try to debug this problem myself. I'm going to restart R.

Can you please move your RStudio a little bit up? The last line is cut off. Oh yes, can I do that? I hope so.
Yes, that's better, I think. Okay. Maybe I created the wrong cassette; let me try again. Copying the one that is in the demo notes should work. So I'm going to save this one and source... Sorry, what's the file we need to source? Nothing for now, because I'm still not sure myself why it doesn't work. Probably because the demo notes use a different universe in the examples, and you're using your own. Oh yes, that's why! Thank you.

Okay, so that was the reason, and I can explain it better now that the issue is found. What I'm trying to do is get the get_packages() function to behave as it does in the test, so that it gets clearer that we are actually getting an error then a success. So I load all functions, and then I insert the cassette, which I think should work now... packages-retry... and I run get_packages("maelle"). No, it called the API directly, because vcr doesn't know where to look for the cassettes. Okay. So I need to load all the functions and to source the setup file, so that vcr knows where to find the fixtures. The setup file uses the vcr::vcr_test_path() function, which is similar to testthat's test_path() function in that it will smartly know whether you're running the code from the root of the package or from somewhere else, and will find where the fixtures live. I source it, then vcr::eject_cassette(), then vcr::insert_cassette() again, and then I run the function. Okay, so now you see it.

So the steps are: creating this fake cassette; loading all functions; sourcing the setup file; and then inserting the cassette, playing packages-retry with vcr::insert_cassette("packages-retry"). If we do that and run the get_packages() function with "maelle" as the argument, it tells us: request failed, retrying in 1 seconds. And then it gets the correct results. And no, the API isn't failing today.
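The interactive debugging sequence above can be sketched as a console session; the setup file name is the one vcr::use_vcr() generates (an assumption, check your tests/testthat/ folder), and the cassette name is from the demo:

```r
# 1. Load the package's functions into the session
devtools::load_all()

# 2. Source the vcr setup file so vcr knows where cassettes live
#    (it calls vcr_configure() with vcr::vcr_test_path("fixtures"))
source(testthat::test_path("setup-vcr.R"))

# 3. Insert ("play") the cassette: from now on, HTTP calls are
#    answered from the recorded interactions, not the real API
vcr::insert_cassette("packages-retry")

# 4. Run the function: first replayed response is the fake 502,
#    so it retries, then gets the recorded success
get_packages("maelle")

# 5. Eject the cassette to go back to real HTTP calls
vcr::eject_cassette("packages-retry")
```

Steps 2 and 3 are exactly what use_cassette() does for you inside a test; doing them by hand just makes the replay visible in the console.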
So the reason why we see a failure, as if there were a real failure, is that we're using the vcr cassette. You do not necessarily need to run this code in the console outside of the tests, but sometimes it can help with seeing what's going on when using a cassette. The setup file is in tests/testthat, and I think it's also in the demo notes. The setup file is a special file that is run when the tests are run, but not when the package is loaded.

One question: you mentioned there's also this webmockr package. What is the advantage of this method, with the cassettes, with respect to that? So I don't want to claim an absolute advantage; let me open the link. There is a vignette in the vcr package called "Why and how to edit your vcr cassettes", and it has examples with webmockr. There are two examples: one is a cassette that has a failure, and the second one is like the one we're doing now, a cassette with a failure then not a failure. It then shows the same tests written with the webmockr package, so you can compare and decide what you prefer in each case. An advantage of webmockr is that it's shorter, because the whole thing is in your test file; you don't have to edit an external file. But then you need to learn how to use webmockr, so that would be the downside. Okay, thank you.

I have a question on editing cassettes. You might have written .yaml somewhere; we're calling this one .yml. And Beatrice: oh yes, but you need to create it. There is a packages-retry cassette that you need to create by hand, or you can add content to the one that was created. Oh, you mean it was created without the extension? I also completed the file name. Ah, that's because there is no function that creates an error cassette, because editing cassettes is not what the vcr maintainer prefers.
So the old philosophy of vcr is rather to never edit cassettes manually, and instead to record cassettes and use webmockr. It's still possible to edit cassettes, otherwise I wouldn't show it, but it's not the philosophy of the vcr maintainer. You will find more content around editing cassettes in httptest, where that's more the spirit. But I mean, if I want a cassette with an HTTP error to catch, I have to remove some lines and then change the status code of the response from 200 to, for example, 400 or something? Yes. And maybe it would be easier to use webmockr, so maybe that's something you would prefer. Another idea: you could also write a cassette from scratch. Editing a cassette is one way, but maybe you would prefer writing it yourself. That's what I understood, thanks.

So, to go back to the slides. We amended the vcr cassette, and what's important if we do that is to use vcr::skip_if_vcr_off(). In general with testing, a key thing is to think of the different contexts: when is the test going to fail, and why? In this case, if we run this test when vcr is off, we're calling the API for real, so we're not getting an error, and our test will fail. But this won't reveal anything interesting about the code; it will just mean that the API is not down. And I think amending a cassette is easier than starting from scratch, but that's my taste: people have different preferences, and it's perfectly fine to find your own workflow. And I won't say amending a cassette is an off-label use of vcr: it's documented in vcr, even if it's not the spirit of the vcr maintainer.

Now I want to get to something quite crucial when you are dealing with an API: what I call authentication gymnastics. Say the API needs a secret; you have a secret API token, like for the GitHub API. You want to not leak it: you don't want anyone else to see your secrets.
But, for instance, you want the tests to pass when someone makes a pull request to your package. So you want the secret to both be there and not be there, and I'm going to show how to deal with that.

First, I wanted to note a few things about security. It's important, when you use an API, to know what the secrets are: if it's an API key, if it's a token, it's important for this to be documented. If, for instance, you are interacting with the Twitter API, you might want to create a special Twitter account for testing, so that if there is a leak one day, you're not leaking the credentials of your professional Twitter account, but rather those of a playground account. So if that's possible, and some services even let you use shared accounts for testing, then do that.

Then it's important to know where your secrets live. Sometimes the HTTP client you use is abstracting some of these things away (not if you're using curl, in that case you know), so sometimes you don't really know how the secrets are passed to the API. And it's important to know that, so that you know how to protect them. Then you have to learn how to keep secrets safe. This means reading, for instance, the documentation of the HTTP testing package you are using, and double-checking at the beginning.

And then you need to learn how to handle mistakes. And you're really learning from one of the best, because I recently leaked my own GitHub personal access token when I was working on the HTTP testing in R book. That was really not smart. But leaking your GitHub personal access token on GitHub is probably the best place to leak it, because GitHub deactivates it immediately, notifies you, and tells you to go and check your security log to see if anything bad happened. And nothing bad happened, but it was not a good thing to do. Still, it's good to know how to handle mistakes.
If you leak your API key: how do you deactivate it, so that no one uses all your credits for a service? So, I said you're learning from the best because I leaked my personal access token; obviously that's not true, so take all of this with a pinch of salt. But I'm going to demonstrate how to keep your secrets safe when using vcr.

Let's look at the demo notes; we are in the part two demo. I'm going to add another new function to our package, one that uses authentication. The R-universe API does not need authentication, but I can add authentication to my package if I want. This get_packages2() function is a clone of the get_packages() function, just a bit different. I construct the URL to the endpoint with the universe argument, and I read a secret token from an environment variable called SECRET_PLANET_TOKEN. If this variable does not exist, I throw an error message that says you should look at the documentation, which is quite mean in this case, because the documentation does not mention the token. In real life, you should have some documentation of the secrets. Then I make the request as I did earlier, except that I use the httr add_headers() function to add an authorization header. A convenient thing with the R-universe API, and most APIs, is that it ignores headers it doesn't need. So I'm going to send an authorization header to the R-universe API; it doesn't need it, so it's going to be ignored. That's perfect for teaching, because I'm still showing you a "secret", but it's not a real secret, just something I made up. But you might use httr::add_headers() like this for an API that really needs authentication. And then I stop for a status depending on what the API returns, and I get the content, like in the get_packages() function. So I'm going to copy this new function to my clipboard.

Yes, I have a question about add_headers(). Why do we do this step? I didn't understand.
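A sketch of what get_packages2() might look like; the endpoint URL, the environment variable name SECRET_PLANET_TOKEN, and the error wording are taken from the demo as transcribed, so treat the details as illustrative rather than exact:

```r
get_packages2 <- function(universe) {
  # Build the endpoint URL from the universe argument
  url <- sprintf("https://%s.r-universe.dev/api/packages", universe)

  # Read the (made-up) secret from an environment variable
  token <- Sys.getenv("SECRET_PLANET_TOKEN")
  if (!nzchar(token)) {
    stop("No token found, please see the documentation.")
  }

  # Send an authorization header; the R-universe API ignores
  # headers it does not need, so this is harmless here
  response <- httr::RETRY(
    "GET", url,
    httr::add_headers(Authorization = token)
  )
  httr::stop_for_status(response, task = "takeoff")
  httr::content(response)
}
```

For an API that really requires authentication, this same add_headers() pattern is how you would pass a bearer token or API key.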
So, to act as if the API needed authentication, to have a test case for keeping a secret safe. Okay, thank you. So we're going to create a new file, packages2, and in that file I'm pasting the new function, and I save it. As you see, this needs a SECRET_PLANET_TOKEN. There are different ways to handle that, but I'm going to create a .Renviron file just for the project. So that's not the general, user-level .Renviron, just one for my project: I'm passing scope = "project" here. I created this new blank file, and I'm going to add my token in there; I'm getting the token from the snippets, and I add two lines, because .Renviron needs an empty line at the end. There are other places where you might want to store your secrets; you could use, for instance, the keyring package to read them from a safer place, but just for today, this is easier. As you see, I have my top secret token written in the .Renviron file. Now, first thing: if you add a secret somewhere, you definitely don't want to publish it to GitHub. So I am going to gitignore it as well, so that it's not going to be pushed to GitHub. And, for instance, if you use the httr package for an API with OAuth, when you create a token it gets gitignored by default. So often package developers try to protect their users against leaking secrets; but when that doesn't happen, we need to be careful about where we have our secrets and how they could be leaked into the world. So it's not going to be leaked to GitHub as an .Renviron file, because we gitignored it. Now I'm going to create a test. Sorry, concerning the ignoring: usethis only ignores it for git and GitHub. Is it automatically ignored when you build the package? No, you're right, you're right. I should also run use_build_ignore(".Renviron"). Yeah.
Otherwise, it would cause a check failure, that's true. My other question: isn't it bad practice to store a token in the .Renviron? Sorry, can you repeat the question? Is it good to store a token in the .Renviron? It's not a good habit, right? You said that. Yes. So often there are packages that expect to find your token as an environment variable, and I'm going to show an example from a package of how you could do better. The opencage package documents how to set up authentication. It expects users to save the token using the keyring package, and the keyring package saves the credential in your operating system's credential store, which is a secure place to store your secret. Then, at the beginning of your script, you can get the secret from keyring using the keyring::key_get() function, and the opencage package has a function, oc_config(), that will use this key for the calls to the opencage functions that come after. So you could advise your users to use the keyring package, and to retrieve the secret with a line that calls the keyring package in the script that uses your package. In that case, the secret is never in the .Renviron: it's in the operating system credential store, and you interact with that via the keyring package. Does that make sense? Yeah, thank you very much. So don't do it like this tutorial does. We have our token, and I'm going to add a test for this new function that uses authentication. For that, I'm going to copy it from the snippets. I'm going to run use_test("packages2"); oops, sorry, I need that test-packages2 file. I'm going to run the test, but just watch what I'm doing and don't reproduce it, because I'm going to make a mistake.
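The authenticated function walked through above can be sketched roughly like this; the endpoint shape and the exact error wording are my paraphrase of the demo, not its verbatim code:

```r
# Sketch of the demo's get_packages2(): like get_packages(), but it reads
# a token from an environment variable and sends it as a header.
get_packages2 <- function(universe = "ropensci") {
  token <- Sys.getenv("SECRET_PLANET_TOKEN")
  if (!nzchar(token)) {
    stop("Can't find token, please refer to the documentation.", call. = FALSE)
  }
  url <- sprintf("https://%s.r-universe.dev/packages", universe)
  # R-universe ignores the Authorization header, which is why a made-up
  # token is harmless here; a real API would validate it.
  response <- httr::GET(url, httr::add_headers(Authorization = token))
  httr::stop_for_status(response)  # error on 4xx/5xx responses
  httr::content(response)
}
```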
So I have my test with the get_packages2() function, I'm going to record everything in a cassette, and let me run the test. Wait, I'm going to restart R first, because I think it's still using a cassette, or what is it? Oh yeah, sorry, there was no problem; that was me not understanding the results. So we have a new fixture here, for packages2, and it has a problem. If you look at the request that has been saved, because vcr by default saves everything, it has saved our secret token: our secret token is in clear text in this file. If I commit this to GitHub, it means that my secret is out there and I have leaked it. We definitely don't want that to happen, so we have to tweak the vcr configuration to prevent our secret token from being saved. I'm going to delete the cassette, and from the demo notes I'm going to find the new configuration for the vcr package. I'm copying it to my clipboard, I'll paste it into the setup file, and then I will explain what's in there. So in the vcr configuration, we're still loading vcr, which is required because we need vcr to be loaded when we run the tests. Then we have the vcr_configure() function, and it now indicates two things. It indicates where we want the fixtures, the YAML files, to be saved. And it says that we want to filter the Authorization request header: we don't want it to be saved as-is, we want it to be replaced with "not my secret", a fake token, just to be sure that our cassette doesn't contain the real authorization header. Then there is vcr::check_cassette_names(); this is a vcr function that checks that you don't use the same cassette name twice, for instance, or that you don't have spaces in your cassette names. And then there is another thing: when there is no secret available for running the tests, we want to use a fake secret, which will be fine because vcr is going to use a recorded response; it doesn't need a real secret.
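Put together, the setup file described above might look roughly like this; the fixture path and fake values follow the demo, but treat it as a sketch:

```r
# tests/testthat/setup-vcr.R -- run before the tests
library("vcr")

# If no real token is around (e.g. on a contributor's machine), set a
# fake one so our own package's check passes; vcr replays recorded
# responses and never needs the real secret.
if (!nzchar(Sys.getenv("SECRET_PLANET_TOKEN"))) {
  Sys.setenv("SECRET_PLANET_TOKEN" = "foobar")
}

invisible(vcr::vcr_configure(
  dir = vcr::vcr_test_path("fixtures"),
  # Never write the real Authorization header to disk.
  filter_request_headers = list(Authorization = "not my secret")
))
vcr::check_cassette_names()
```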
And that's one thing that I find tricky, but that is useful when you use an HTTP testing package with an API that needs a secret: you can record responses using your actual secret, and then rerun the tests when the secret is not there, which means that someone who runs your tests, a collaborator or someone who does not have access to the API, can still run them. It's great that this is possible, because it makes development easier, but it means we need to understand when the secret is used or not. The secret is needed for recording a response, because the API is really called, but it is not needed for replaying it. So why are we setting a fake secret? vcr does not need the secret when the response is already recorded, but our package does: if you look at the get_packages2() function, if there is no token, it errors. So we need a fake token in all cases, because we need to trick our own package into thinking there is a token; vcr itself is not going to need it. So again: we have this function that uses a token, and we changed the setup of vcr so that it's not going to save the token. Let me run the test again, just so it gets clearer. I'm going to run the test again here, and if I look at the packages2 cassette, the Authorization header is still in there, but now its value is "not my secret". So I have hidden my secret from someone looking at my cassette. That's one thing. And if I run the test again, sorry, I should stay at the same place in the console, it passes. I also wanted to show that it works even if the secret is not present. Say I delete my token from the .Renviron: I delete it, save, restart R. From here, because I don't have a secret, if I run get_packages2(), no, get_packages2(), not get_packages(), the one with authentication, it fails because there is no token. But in the test, because in the setup we are setting a fake one.
So we're going to do that. If I run the tests again, I'm running all tests, so it's a bit slow: now all tests pass. We have no actual secret there, because we deleted it from the .Renviron, but we have the recorded response, and our setup here tricks our package into thinking there is an actual token, so the tests can still run. Do you have any questions? Maëlle, what does vcr::check_cassette_names() do again? So it checks the cassette names, the ones in the tests where you say vcr::use_cassette(): the first argument is the cassette name, and you cannot have spaces in them, for instance. Okay. Yeah. What I'm also writing here is that the httptest package, which works like vcr, has a different behavior: httptest deletes some headers by default from the recorded response, so the Authorization header is never saved by default. By default, when using httptest instead of vcr, there is less configuration to do, but you might still have some data to protect. And from the demo notes, I wanted to show something: I have linked the vcr configuration docs, because in what I showed, the secret was in the authorization header, but a secret could also be in another header, or in a response header. In the vcr configuration documentation, you can see how to keep secrets safe when they are elsewhere in the API interaction. Maëlle, can you please repeat the part about check_cassette_names? Yeah. The cassette names are set when you call the vcr::use_cassette() function: the first argument here is the cassette name, and it then becomes the file name of the cassette. And there are things you are not allowed to do. You are not allowed to have spaces; let me check, I think that's in the check_cassette_names() documentation; you can't have illegal characters such as a question mark.
And if you do that, then the vcr::check_cassette_names() function would, well, I'm going to do it. Let's name a cassette something bad: I add question marks, and I'm going to run the tests. No, I wanted to run them in the console, but that's fine. Let's see. So I'm getting an error that says none of the following characters are allowed in cassette names. If I didn't know that I couldn't do that, it's good to learn it now and change the name, rather than having more problems later. Okay, thank you. And the vcr setup was added by default by use_vcr()? Yes, this is what was added by default by use_vcr(). I have another question. In the setup file, what is the filter_request_headers argument? Do we need to add something to the list, or is that it? No, that's fine as-is for our example. And this is where it's important to know how secrets are passed to the API: in our case, we were putting the secret in the Authorization header, so that's why we say that we want the Authorization header content to be replaced with "not my secret". So instead of having my top secret token, what's saved in the cassette is this "not my secret" string. Depending on how the API you're interacting with works, you might need another header in there, or you might need not filter_request_headers but a response-header filter, or something else. Does anyone have a package that interacts with an API that uses secrets, a token or something? Can you write in the chat whether it's okay or not? Sorry, we have questions. Okay. Okay. Okay. All right, that's a check mark, and I'm seeing if I'm not missing something. Yeah. Paula, Andrea, what does your package use as a secret? I guess it's an API key? Yeah, I have a GitHub token, but I don't think I've saved it in my environment.
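If the secret shows up somewhere other than a request header, for example inside a response body, vcr can substitute it there too. A rough sketch; the placeholder string here is my own choice, not from the demo:

```r
# Replace the real token wherever it appears in recorded interactions
# (request or response) with a placeholder before writing cassettes.
vcr::vcr_configure(
  dir = vcr::vcr_test_path("fixtures"),
  filter_sensitive_data = list(
    "<<planet_token>>" = Sys.getenv("SECRET_PLANET_TOKEN")
  )
)
```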
So, when you use secrets for testing: we need to know our secrets well, where they are stored and how they are passed to the API, for instance, and we need to know the tools we are using well. If we are using vcr, we need to make sure that we configure it so that it does not save the headers where the secret is passed, because by default vcr saves everything on disk. You might choose other HTTP testing packages, such as httptest, where the Authorization header, for instance, because it's a header that's classically used for authorization, is not saved by default. So you have to know whether you need to configure this or not; in our case, we also needed to add some logic in the setup file. I've also written about secrets as GitHub repo secrets: we only need to store secrets in the GitHub repo secrets if we have some workflow making real requests, which we do not have yet. In the HTTP testing in R book, there is a chapter about security. And one last thing before the break that I wanted to mention is that we used a file called setup-vcr.R, and I've put a link to a table. When you use testthat, there are different places where you can store your configuration for some packages, or helper functions for your tests. You can store them, like we did, in a file whose name starts with setup: it's run before tests, but it's not loaded into your environment when you load everything with load_all(). So if you want to develop your tests interactively, you have to source the setup file by hand, like I needed to do when I inserted a cassette. You could also use a file whose name starts with helper; the difference with a setup file is that this one is loaded by load_all(), so you could have put the vcr configuration there. And some people store their helper functions for tests in the R folder with the other functions, because they want to test them, for instance. That's something you could do.
So that's just something to keep in mind when you choose where to put the helper code for your tests: setup files, helper files, or the R folder. And now it's time for a short break. So, the break is soon over. Are there questions before we get to the next part? There will also be time for questions later. So, last part, part three. In this part, I'm not going to talk about HTTP testing things, but about more general testthat things. There are cases when you can use the inline expectations in testthat, and I will show where they are. In the testthat documentation, you can see all the expectations that exist. For instance, you might choose expect_equal() when you expect something to be equal to something else. And in the example, I think I was using expect_vector(), which I discovered while preparing this tutorial; I didn't know expect_vector(), which checks that something is a vector. These are inline expectations: for instance, we had expect_vector() with a character prototype, which was inline. But sometimes you have a large, complicated output. Maybe your package outputs some sort of list or JSON, or you might be creating images, or you might have error messages; how do you test for that? In that case, it might be interesting to know and use what's called snapshot testing. It's available in the testthat 3rd edition. Some of you were lucky and started using testthat recently, so you never used another edition; you don't have to update old tests because you're already using the 3rd edition, which is good news. If you are not using the testthat 3rd edition already, in the testthat documentation and the release notes you can look at the blog post about version 3, which explains how to update your tests to the new edition if you want to use features that are only in it, and it shouldn't be too much work. And there is even a vignette about the testthat 3rd edition.
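As a reminder of what an inline expectation looks like, here is a made-up test, not the demo's exact one:

```r
library(testthat)

test_that("package list has the expected shape", {
  # Hypothetical output: check type and length directly, inline.
  packages <- c("vcr", "webmockr", "crul")
  expect_vector(packages, ptype = character())
  expect_equal(length(packages), 3)
})
```

This style works well while every input and expected value fits on one line; once the output grows large, snapshots become the better tool.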
But even if you didn't use testthat before, you can use snapshot testing. There is a vignette in testthat about what snapshot testing is, and I will show an example in this session of using snapshot testing for testing an error message; then it will become clearer what a snapshot is. Another testthat trick I wanted to show is using custom skippers. We had the vcr skipper, skip_if_vcr_off(), but there might be other cases where you want to skip some tests so they run only in certain environments. Often you will skip all tests that use the internet on CRAN; there is a built-in skipper in testthat for that, skip_on_cran(). You might know that a function doesn't work on some operating system, so you would also be skipping the test in that case. But you can write your own skippers, and we're going to do these two short things: use a snapshot test, and then use a custom skipper. First, I need to stop having so many files open and just check my workflows; things seem to be working here. Okay. So now we're going to add a snapshot test for get_packages2(): get_packages2() throws an error if we don't have a secret token for the API, and I want to test for it. So let me paste this test from my clipboard: I'm going to the test-packages2 file and adding this test. And here I'm using withr to locally have no token, because in my setup for the tests I'm always setting a token, but in this case I want to see what happens when there is no token. So I'm using a withr function for that; withr is really super useful. This one is withr::local_envvar(), and it means that within the test there is no token. And then I'm running the get_packages2() function in the absence of a token; if we look at the source of this function, there is an error. And I'm going to run this test. So, test... oh, there is Test Active File.
So that's something I should have been using before. I tend to use a button myself, but I didn't want to show it with the button in the IDE today: Test Active File. We have run the test, and we have one test with a warning, because the first time a snapshot test runs, testthat warns that it is adding a new snapshot; here, the recorded message is "token for packages not found". Now, in the tests folder, testthat has a new folder called _snaps, and this is where testthat stores the snapshots. The snapshot for packages2, if I open it, is simply in Markdown format: it has the name of the test as a header, and it has the error message. So this is a snapshot, and this is one for an error. Now imagine I'm editing the code and changing the error message. Maybe I'm reading a style guide, and if I follow the tidyverse style I see most often, the error message would be phrased something like "Can't find token for packages". So I've changed the error message, and I'm going to run the test again. It tells me the snapshot has changed: it used to be "token for packages not found", and now it's "Can't find token for packages". And this can mean two things. In this case, I decided to change the error message, so I'm going to accept this change: this is the new snapshot I want to have as the gold standard. But a failing snapshot can also work like any old failing test: it could mean that I have changed the output without meaning to, and then I need to fix my code. Here, though, I'm going to run snapshot_accept("packages2"), and everything is fine; I can run the test again and it passes. So this one used expect_snapshot_error(), and there are other types of snapshots. There are snapshots for files: if you are creating a plot, for instance, and you create a snapshot for an image, testthat runs a Shiny app to show you the two different images so you can see the differences.
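The snapshot test from the demo looks roughly like this; the function and variable names follow the demo, but treat the details as a sketch:

```r
test_that("get_packages2() errors informatively without a token", {
  # Locally unset the token; withr restores it when the test ends.
  withr::local_envvar("SECRET_PLANET_TOKEN" = NA)
  # On the first run, the error message is recorded under
  # tests/testthat/_snaps/; later runs compare against that snapshot.
  expect_snapshot_error(get_packages2("ropensci"))
})
```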
So that's quite handy. This was an error snapshot, and I'm going to pause here so that you can add this snapshot test to your tests, or ask questions. Yes? Sorry, Maëlle, can you repeat the snapshot utility, I couldn't understand it. So, the fact that we're using expect_snapshot_error()? There are several functions in testthat that do snapshot testing. On the reference page of the testthat website, there is a section called snapshot testing, and you can see the different functions: the expectations, like expect_snapshot_output() and expect_snapshot_file(), and the ones that help you review the differences when a snapshot changed. Yeah, but what's the value of using snapshots compared to the classical testing method? Okay, so in the case of an error message, it might be useful if you have, for instance, an error message with several lines: it might be easier to test than with expect_error(). And if you have a package that wraps ggplot2 and creates a plot, you couldn't test the output with an inline test; you would need a snapshot. Okay, thank you. And if you've been using vdiffr for visual testing of plots: vdiffr uses testthat's snapshot infrastructure under the hood now, so these are linked. More recently, I've used expect_snapshot() for functions that output HTML, big HTML text, and it would be awful to try and test that with an inline test, or not awful, but not as smooth. Are there any questions? You can write in the chat whether it worked. Someone wrote that it worked, and, as they point out, snapshot testing is probably more spectacular when you use it for something bigger; this small case here maybe doesn't do it justice. Now we're going to add a custom skipper.
Another thing you can add, and we're not going to do it today, is a custom expectation, a custom expect_blah(); there is a vignette in testthat about how to do that. There is also a vignette in testthat about how to add custom skippers, and it even explains how you could test your skipper, if it's a skipper that you use in many places or that you distribute with your package, like vcr does. But today we are going to add a skipper without testing the skipper itself. I'm going to copy the skipper from the custom skipper snippet to my clipboard and paste it into the setup file, setup-vcr.R; as mentioned previously, you could choose to put it in a file called helper-something instead of setup-something. And this one is called skip_if_not_beyonce(). It uses the whoami package, so I need to add it to the dependencies, but since it's only used in tests, I'm going to add it as a Suggests dependency. This way it's in the dependencies of my package, but only in Suggests. What the skipper does: if the user is Beyoncé, it does nothing; if it's not Beyoncé, which will be the case on my computer, the test is skipped. And I'm going to add it to a test, which I'm also getting from the snippet, and which I'm going to put in the test-packages file. This test is like the other test, but it is skipped, because it uses the custom skipper. Now I'm going to run the active test file. It's doing all the things that our tests were doing, but also, in the report, it tells me: no failure, no warning, but one test was skipped, and the reason why it was skipped is that the user was not Beyoncé. So that's what the custom skipper is doing, and you could imagine doing this with a more complicated setup, or for an actual use case; I have no reason here to skip the test, but that's the example. I'm going to pause here just so you can add it if you want.
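The custom skipper from the demo can be sketched like this; the username check is the demo's joke, and whoami::username() is the real function it relies on:

```r
# A custom skipper: tests calling it are skipped unless the current
# system user happens to be called "beyonce".
skip_if_not_beyonce <- function() {
  if (identical(whoami::username(fallback = ""), "beyonce")) {
    return(invisible(TRUE))
  }
  testthat::skip("Not Beyoncé")
}

# Using it in a test: the skipper runs first, so on most machines the
# body below never executes and the test is reported as skipped.
test_that("get_packages() works (only for Beyoncé)", {
  skip_if_not_beyonce()
  expect_type(get_packages("ropensci"), "list")
})
```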
And I forgot to push to the remote repository for quite a while. I'm trying to understand this function: is it just checking one username? If you have more usernames, we would have to have a different function, right? Yeah, exactly. And maybe you will never need to skip this way, but you might skip for other reasons, maybe depending on the version of R, for instance, if you have a test that only checks a behavior on a very old version of R. Are there questions about this in particular, or about any aspect of the tests? Not about this in particular, but you mentioned that there are the setup files, which are run first, before the tests, which is very nice. And I've also read that there is no file or code which is run afterwards. So is there any way of running code after all my tests are done, other than adding a dummy test at the end called 999 or something like that? So I think the idea is that, I think this has been deprecated: there used to be teardown files, or whatever they were called. I think that's what the test fixtures vignette explains. Does it mention teardown... So it mentions a teardown environment. What they expect you to do now is to have the code that you want to run at the end call withr::defer() with the teardown environment, so they would rather encourage you to use withr. There used to be teardown files, but now the idea is that your code should clean up after itself. What kind of code would you like to run after all the tests are run? For example, I want to avoid the withr package; I'm reluctant to use packages as much as possible, especially from the tidyverse; this bit me quite badly once. You can use the on.exit() function in base R. Yes, but that one I have to define in each test, for example.
I've been using on.exit() extensively in my functions, disconnecting from the database, for example. But I'm especially thinking about deleting temporary directories and temporary files which have been created during the tests. So okay, I could put that into the individual tests. I think what would be recommended, from the testthat documentation in particular, is to have a helper that you define yourself: when you use a temporary file, you call that helper, and the helper deletes it, doing exactly what withr would do, but written by you. I know there is no longer a teardown file, but that's the idea for an "after all" step. Yeah, okay, thanks. Let me check for other questions. No questions? Can everyone just write whether you have a question or no question, just so I have more signals to move on. I have another question: is all this testing creating any files that we should delete at some point? No, and if it does... Sometimes one can make a mistake and create such files, but the tests shouldn't create files. That's why, for instance, if you create a file, it's better to do that in a temporary directory and to make sure that it's deleted afterwards; the goal is that the tests do not add files. In the case of vcr, maybe, I don't know what happens if you have a vcr cassette and you no longer use it: you would probably have to delete it yourself. So yes, you might have to clean your test fixtures by hand, like your test data: you could have test data stored in your tests folder that you no longer use, and it would bloat your package, so you might need to delete it yourself. So, what we just saw was snapshot testing and custom skippers, and in general I recommend looking at the testthat vignettes; I put the links in the resources.
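The pattern discussed above, running cleanup once the whole suite has finished, can be sketched in a setup file like this; the directory name is made up for the example:

```r
# tests/testthat/setup-cleanup.R
# Create a scratch directory shared by the tests...
tmp_dir <- file.path(tempdir(), "my-test-scratch")
dir.create(tmp_dir, showWarnings = FALSE)

# ...and register its deletion to run after all tests, by attaching
# the cleanup to testthat's teardown environment via withr::defer().
withr::defer(
  unlink(tmp_dir, recursive = TRUE),
  envir = testthat::teardown_env()
)
```

This is the withr-based replacement for the old teardown files; if you prefer to avoid withr, the same idea can be written as a small helper of your own.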
There was a webinar about the testthat 3rd edition, and it's quite enlightening as regards snapshot testing, for instance, and what the 3rd edition is; I think it's only half an hour. The last point I wanted to mention is real requests. If you are using an HTTP testing package, like vcr or httptest, you are using mocked responses: we have this file with the recorded response. Imagine we're using that for months, and the API changes. There would be no way we would know; I mean, we would know from using the package in real life, but our tests would not catch it, because they are using a saved response. So what can be useful is having the same workflow as R CMD check, but where we turn off vcr; and if we're using another HTTP testing package, like httptest, finding a way to turn it off for a run. The idea is that, say, every week, you would run the tests without using the recorded responses, so every week the API would really be called. Once a week might be a good frequency, but it really depends on the API and your use case. Also note that there might be other ways for you to follow the API changes: maybe the API has a changelog, and you could be notified by the API developers themselves when something changes. But when there is no such thing, or when you want to be very sure, real requests are quite useful to have. So I'm going to show how to do that in the demonstration. Let me first check: yeah, my workflows are still passing on continuous integration, so that was good to know before I add another one. I am going to add the secret that my package uses to my GitHub repository, because now it will be needed to run the tests. I think it's called SECRET_PLANET_TOKEN, if I remember correctly. So I'm going to go to Settings, then Secrets, and there is a button telling me to add a new repository secret.
So the name of my secret is SECRET_PLANET_TOKEN, and its value is something like "top secret". Note that I write the string as it is: I don't put quotes around it, it's really directly the value for this secret. And it's encrypted, so it's only accessible to me and to collaborators I have given some sort of access to my GitHub repository; it should be quite safe in there, and it shouldn't be shown in the logs either. Then I'm going to add a new workflow, a GitHub Actions workflow, which is in the snippets of the demonstration. I will copy it to my clipboard and create a new file, so, New File: it's not a text file, it's a YAML file. I will save it under the .github folder, under workflows, and I call it scheduled-real-requests.yaml. The way I created this workflow: it's the same as the standard check workflow, but I tweaked it a bit. I tweaked when it runs: it also runs when I push to main, and I can delete that later, but when you are developing a workflow that is scheduled, it's good to have it run when you push to a given branch, because otherwise you have to wait. I don't exactly remember what I've put here as the cron schedule; this is cron syntax, it's once a week for sure, I think at 12:00 every Monday. There are tools online to help you write cron syntax. In any case it's once a week, and I don't want to have to wait a week to see whether it works; once it's developed, maybe I can delete the push trigger. So this indicates when the workflow should run. The other difference from the workflow that had been added by usethis is that I set an environment variable, VCR_TURN_OFF, equal to true. This means that this workflow will never use the recorded responses I have in my package: it will always call the API itself. That's the difference. Then I'm going to add it to my GitHub repository, and then it's just a matter of waiting for it to run.
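The relevant tweaks to the check workflow might look roughly like this; it's a sketch, with the R setup and check steps abbreviated and the cron time an example:

```yaml
# .github/workflows/scheduled-real-requests.yaml
on:
  push:
    branches: [main]       # handy while developing; remove once it works
  schedule:
    - cron: '0 12 * * 1'   # every Monday at 12:00 UTC

name: real-requests

jobs:
  check:
    runs-on: ubuntu-latest
    env:
      # Make vcr bypass cassettes so the API is really called.
      VCR_TURN_OFF: true
      SECRET_PLANET_TOKEN: ${{ secrets.SECRET_PLANET_TOKEN }}
    steps:
      - uses: actions/checkout@v2
      # ...then the same R setup and R CMD check steps as the
      #    standard check workflow added by usethis.
```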
So I'm going to let you all add it to your repository. Did anyone have any problem with GitHub Actions? Or did everyone... oh, thank you, Beno. And actually, if you edit these files from the GitHub interface directly, so, if I edit scheduled-real-requests.yaml from the GitHub interface, go to the schedule, and hover over the cron schedule, it tells me when it runs. And it's not a weekly schedule, it's a daily schedule! So that's very useful, and there is a link to learn more, which I suppose would lead me somewhere I could learn how to write the cron syntax correctly. I'm going to cancel my changes. And I wanted to know whether anyone had a workflow that was failing for some reason on GitHub Actions. Another question: does anyone use something other than GitHub Actions? Do you use Circle CI or another continuous integration service? Yeah, Travis. Sorry, but it's now recommended to switch to something else, because we no longer recommend Travis; the good news is that the principles are the same. And for the person using Jenkins, I suppose it means you have more setup to do yourself. To come back to the scheduled run: does anyone have a question about that? Or do you already use scheduled runs for your continuous integration workflows? Oh, and one thing I should have changed, actually: if you look at the GitHub Actions interface, I named my two workflows the same, and that's bad for browsing. It would be best for me to change the name field in the YAML file and call this one real-requests, so it's different from the other one. And I have one that failed. Okay, but only on one platform? Okay, so I have one workflow that fails on Mac, but it fails at the setup of R, so I think this is not my responsibility. This is a case where I would just wait a few hours or a few days to see if it goes away, or if it's really my fault. Yeah, all of mine failed.
"But I think there is also a missing package for me." Oh, yeah, sorry, because that's what I have: I had done that in the console, but I had not written it in the demo. So you need to run usethis::use_package() for whoami, with type equal to Suggests; I put that in the chat. Okay, so it's to add whoami to DESCRIPTION. Can you all write in the chat if everything is okay, or whether we have a question? So I go back to the slides. And I wanted to mention: how does one get better at writing CI workflows? One needs to learn the cron syntax when writing scheduled runs, not like I did. But the main idea is to get inspired by others, to see how other people use continuous integration. So I really like this life hack by Julia Silge. It was a life hack for Travis, but it still applies to any continuous integration service: she said that her go-to strategy for getting Travis builds to work was looking at other people's .travis.yml files, studying others' .travis.yml files to solve her problem. Because sometimes — for instance, if you use a package that has a given system dependency that's hard to install on one operating system — you will need to go look at other examples for things to work. And actually, this idea of reading other people's configuration files also applies to testing. So you can go read the tests of other packages to see how they deal with different things. For instance, how does this package use snapshot testing? How do they set up their helper functions? And I would recommend reading the testthat documentation, and I would recommend reading the HTTP testing in R book, of course. And on that one, the HTTP testing in R book, we are looking for feedback: if you have any feedback to give, you can open an issue in the repository of that book. And regarding HTTP testing itself: the vcr package that we used today is handy for usual workflows, and it's highly configurable.
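The setup call mentioned above, as run in the console during the demo, would look like this (whoami is the package named in the session; run it once from your package's project):

```r
# Add whoami to the Suggests field of DESCRIPTION
usethis::use_package("whoami", type = "Suggests")
```

Using Suggests rather than Imports is appropriate here because the package is only needed in some situations, not for the package's core functionality.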
And there are other HTTP testing packages, like httptest, which is quite close to vcr, so what we saw today will help you with learning httptest too. And there is webfakes. Webfakes is quite different, because it lets you define a whole fake web service. But webfakes is the one you have to use if you use curl, for instance, because vcr works with httr and crul, httptest works with httr, and webfakes works with any of those. If you have a package that interacts with databases, you might like the helper package dittodb, which is like httptest for databases. And what I also wanted to underline is what we've seen with this idea of fixtures, having files that are loaded from tests: this also applies to other testing packages. So I wanted to thank you for listening, and we have time for questions after I say thank you — I'm not going to go away. So thanks to Heather for being the technical helper today, who gave some accessibility comments, and to Shanina for giving me feedback on the tutorial proposal. And now we can take more questions, and we can take them right now, since we have a bit of time. Later during the event there is also the Slack, but that will be asynchronous. And then later, if you have questions on testing, you can ask them to the wider community: I would recommend the rOpenSci forum for them, as well as the package development category of the RStudio Community forum. So that's for questions. "Thank you very much. I have a question, like a general question: is it good to, for example, test your package or functions while you're writing the function, or after that?" So, write tests as soon as possible, so you don't forget — and also, you know, you have this idea of test-driven development, where you write a test that fails because your function is not doing the right thing yet, and you edit the function until the test passes. "Okay, so thank you very much." Are there other questions? I'm going to wait a bit more.
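To make the fixture idea concrete, here is a minimal sketch of a vcr-backed test; the function get_planet() and the cassette name are made up for illustration, while use_cassette() is vcr's real main function:

```r
library(testthat)
library(vcr)

test_that("get_planet() returns a data frame", {
  # The first run records the real HTTP interaction to a cassette file
  # under tests/fixtures; later runs replay the recorded response
  # instead of hitting the API again.
  vcr::use_cassette("get-planet", {
    result <- get_planet("kepler-22b")
  })
  expect_s3_class(result, "data.frame")
})
```

The cassette file is exactly the kind of fixture mentioned above: a file checked into the package that the tests load instead of calling the live service.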
"Can you recommend a nice introduction to the webfakes package?" There is a chapter of the HTTP testing in R book where I do something a bit more involved with webfakes, so that might be an introduction from a newbie perspective; and then there is webfakes itself. It's not much used yet, but there is an intro, and the documentation is actually very exhaustive from what I've seen so far. So that would be this book chapter and the documentation of webfakes. "Okay, thanks. Could you recommend a package that has tests that we can look at?" You mean like a good example package? "Yeah." Yes — it would depend on the package you are writing, right? And a good way to find them: for instance, you could look at the reverse dependencies of, say, vcr or httptest, and if you see one that you know, then you could look at that one, because it also helps to know the interface of the package a bit, so it's less foreign to look at. "Okay, thank you." And by the way, for the webfakes package, looking at its reverse dependencies would also be a way to learn how to use it. Is it on CRAN? No, so we have to find the reverse dependencies another way; the webfakes repository lists a few packages that use it. To answer the earlier question about test-driven development: I think that would be covered in the R Packages book. The R Packages book chapter about testing even has something like the idea that you could end your workday with a failing test: then, when you come back to work, you know exactly what you need to fix, what problem you had left the day before. So I would say the R Packages book chapter first. And there was an R-Ladies meetup — R-Ladies Berlin, I think — about testing, so it might be interesting to look for that slide deck as well, on the R-Ladies GitHub organization.
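The "end the day with a failing test" idea can be as simple as committing a test for work that doesn't exist yet; the function parse_response() here is hypothetical:

```r
library(testthat)

test_that("parse_response() handles empty bodies", {
  # Written at the end of the day, before the feature is implemented:
  # tomorrow, running the tests points straight at the unfinished work.
  expect_equal(parse_response(""), list())
})
```

The failing test acts as a bookmark: the next test run reminds you exactly where you left off.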
And I will wait about eight more minutes for more questions, but then we have to leave the room on time, because the next tutorial starts 15 minutes after we finish this one. And I'm also going to stop sharing my screen. "I have another question: I'm using the rvest package to retrieve some data. Are there any specific tests or functions that you recommend I look at?" No, but I see the developer of the polite package is still here, and the polite package is a nice interface on top of the rvest package — though that's not related to testing. Now, you could bet that, because rvest uses httr under the hood, you could use vcr or httptest: you would wrap your code in a vcr::use_cassette() call, and it should work. "Okay." And where do you get the data — is it for a package that needs to scrape the data? "Yeah, it's not yet a package. I'm scraping data from a website that doesn't make it available through other means." "Yeah, I would like to make it a package; it's just a lot of clicking otherwise." And I put the link in the chat: if you're using the rvest package, also check the polite package. It's like a wrapper around rvest that makes sure to respect the robots.txt file of the website, as well as not sending too many requests at once. And it shouldn't require many changes to your rvest code. "Yeah, so I use the sleep function, and it works well." And there is also the ratelimitr package for rate limiting, which is what polite uses, I think. But Sys.sleep() in general is fine. So you have a few more minutes to ask any questions. "Maybe we can stop recording?" No — yeah, definitely, I've gone through the main questions; I can stop the recording.
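As a rough sketch of the polite workflow mentioned above (the URL and CSS selector are placeholders; bow() and scrape() are polite's real main functions):

```r
library(polite)
library(rvest)

# bow() introduces the session to the host: it reads robots.txt
# and sets up a polite delay between requests
session <- bow("https://example.com")

# scrape() fetches the page while respecting that delay
page <- scrape(session)
titles <- html_text2(html_elements(page, "h2"))
```

Compared to plain rvest code, the main change is that requests go through the bowed session, so rate limiting and robots.txt compliance come for free.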