In the package, for instance, like I said, it is very easy for students to use slightly different names for the functions or objects that we ask for. So what we do is autoharp provides a Shiny app for students to run their code before they actually submit it. This is a kind of pre-check, and it looks something like this. We run it on a server, and say a student comes to the server and wants to check whether their solution is correct. They come here, look for their solution file, upload it, and some lints appear. The lints come from the lintr package, which is a really excellent package. If we have time later, I can tell you why we didn't just use lintr's lint objects directly; they carry more detail than I need here. So lints appear and students can correct them. When they go to the next tab, they can see the output, so they can verify that I will get what they get when they run it. Then the correctness check tells them some basic things: you were supposed to create these objects with these column names, but you're missing these. That student needs to go back and work out why. Maybe another student gets all TRUE and can happily submit to the learning management system. We could add details here explaining exactly what the server checks in this run, because a common question I get is: if it passed all the checks, why am I not getting full marks? We run more extensive checks when we actually grade. The other functionality comes from this being a class on visualization: the package generates thumbnails of all the visualizations that students create, and I can bring these back to the class. Whenever I click on one of these images, it zooms in, like you saw.
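The correctness check described above can be sketched in plain R. This is not autoharp's actual API, just a minimal illustration of the idea: run the student's code in its own environment, then compare the required objects against a solution environment. The names `check_objects`, `student_env` and `solution_env` are invented for this sketch.

```r
# Hypothetical sketch of a correctness pre-check (not autoharp's API).
check_objects <- function(student_env, solution_env, object_names) {
  vapply(object_names, function(nm) {
    exists(nm, envir = student_env, inherits = FALSE) &&
      isTRUE(all.equal(get(nm, envir = student_env),
                       get(nm, envir = solution_env)))
  }, logical(1))
}

# Simulate a student script and the solution template:
student_env  <- new.env()
solution_env <- new.env()
evalq(ans1 <- 1:10, student_env)
evalq({ans1 <- 1:10; ans2 <- mean(1:10)}, solution_env)

check_objects(student_env, solution_env, c("ans1", "ans2"))
# ans1 is TRUE; ans2 is FALSE because the student never created it
```

In a real workflow the student environment would be populated by sourcing the uploaded script, which is why name mismatches show up so often.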
When I click on this link, it brings me to the student's code and I can discuss it with the students and get their feedback: why do you think this is good? Why do you think this is bad? And I can offer suggestions. It's anonymized, so no one feels too bad about it. So why is it a semi-automatic grader? Because the output it creates is something like this: a data frame that I still need to go through. I spend about a couple of hours each week looking through it. There is one row per student, and I can see which functions they used. Did they use union or intersect, or did they write a loop of their own to do it? Usually when I get to the end, I write some comments, and almost always the comment is: use fewer for loops. There's some other functionality I'm hoping to develop, such as this: in a situation where the output of x becomes the input on the next line, and then the output of y becomes an input in turn, I want to pick that up quickly and point out, here's a place you could use piping. Or if they duplicate code, writing a snippet and then cutting and pasting it many times, I want to identify that and encourage them to write a function so their code is reusable. Now, is the package the right one for you? Maybe not. What I've developed is something very specific to the way I do things, so if that's not how you work, it may not be appropriate. But there are other options out there: there is markmyassignment, and from RStudio there is learnr together with the gradethis package. If you want to know more about how it works, the full manuals can be found at this URL. The package itself ships with examples: solution templates, student scripts, and question sheets.
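The static-analysis side of this, checking which functions a student called without running their script, can be sketched with base R's own parser. This is not autoharp's internal code, just an illustration of the technique using `parse()` and `utils::getParseData()`:

```r
# List the functions called in a script, without executing it.
called_functions <- function(path) {
  pd <- utils::getParseData(parse(path, keep.source = TRUE))
  unique(pd$text[pd$token == "SYMBOL_FUNCTION_CALL"])
}

# Example: a small student-style script.
tmp <- tempfile(fileext = ".R")
writeLines(c("x <- union(a, b)", "for (i in 1:3) print(i)"), tmp)
called_functions(tmp)
# returns "union" and "print"; the for loop is a keyword, not a call
```

From output like this it is straightforward to flag, say, scripts that use a hand-rolled loop where `union()` or `intersect()` was expected.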
One of these question sheets is the one I highlighted to you during this talk. If you need any help, or you have suggestions on how it can be improved, please feel free to email me at this address. Thank you for listening; I hope you all stay safe and healthy. If you're wondering about the name, it's the title of a song by a Belgian band called Kuva Fawney. Claudio is listening, I think; he's a good friend and he's the one who introduced me to the band. So thank you very much. Thank you, Vik, for such an interesting talk and an interesting package. Here's one question for you: does static code analysis work when students use tidyverse functions, like map instead of a for loop? Yes. It looks for calls to functions with a particular name; it doesn't grep for text. You give it a regular expression, and it looks for function calls matching that name, so it picks those up. It also looks in the package namespace and matches there. Sometimes different packages export functions with the same name, so you have to be careful and aware of that. I have a question too, because I also wrote my own automatic grading workflow for my R class last semester. How do you check the correctness of students' answers? Is it compared to your own saved object? Yes. At the end of the requirements, they are required to generate a vector or a scalar, and in my solution template I pick it up and check it against the correct version. You also specify the variable name as well, right? Yes, that's right. That part is a bit manual; I haven't found a simpler way to do it. I have to specify it, and that's why we came up with the Shiny app, to make sure that simple mistakes get caught early. I can't blame students for those. I'm thinking mine is very harsh, because even if they just get a variable name wrong, I penalize them.
It's just a couple of points off. Yeah, I should do that too. And my second question: because this is a data analysis class, when I design my questions I often find that if students do something wrong in question one, it affects the rest of the questions. I feel they shouldn't really be penalized for that, but I don't see a good way to avoid it. What's your approach? Do you design each question in isolation, or do you keep a natural data analysis flow? You're right, we should have that flow, but we can't fully penalize them if a small mistake causes the second part to be wrong. So I try to insert checks in between: I check the first part, and if there's something wrong there, I go back and look at it. That's why I still have to do a lot of manual work. I look at the data frame, see where they went wrong, and from the thumbnails I can click into the HTML file and look at the script quickly. I can't avoid that. Like you say, there's a place for the DataCamp style or the learnr style, where the environment is provided and we check very specific correctness. But when we ask them to submit a script, I'm hoping they get into the habit of writing it from start to finish, writing empathetic code that someone else can run. And if they trip up, we can't just penalize them; we've got to spend the time looking at it. Yeah, I totally agree with you. Cool, there's one more question from the audience: have you noticed any change in the marks from using this automated process? I'm assuming that fully manual checking can be open to more subjectivity and bias.
I didn't do that comparison, but I do find that I am more consistent in the grading, because once I have the data frame for everybody, I can see what everybody did. If later on I decide to change something slightly, I can quickly go back and make it consistent throughout. So it has helped me be more consistent, but that's a good idea; I should perhaps go and compare the grades. Thank you. Just wondering, how long have you run your automatic grading process? How many semesters so far? Three so far. So it's mature enough that everybody can use it? Yeah, I think it can be used. Cool, I should try it. Sorry, I wasn't aware of your package, so I simply wrote my own workflow. It only went up on CRAN recently, but if you need any help, just let me know. Thank you very much. Great, thank you. So I think, can I just move on to the next presentation? There's two more minutes before the start. Okay, Rhian is nodding, so I think I'm ready to introduce you. Now we have the second presentation. Rhian is a data scientist from Jumping Rivers and a co-organiser of R-Ladies Lancaster, and she's talking about automating online teaching with R. Yeah, it's obviously the fun topic at the moment; I wonder why. Thanks, Eero. So, a slightly different spin here. My name's Rhian, I'm a data scientist at Jumping Rivers, and I want to tell you about how we handle our infrastructure around delivering training. I work at a company called Jumping Rivers. We're based in the United Kingdom. We do statistical consultancy, we build a lot of infrastructure, and we're RStudio partners, so we do a lot of setting up of their tools. But a big part of what we do is training: we have around 30 different one-day courses at the moment.
That might be Intro to R, or Machine Learning with Python; we do Docker courses, Git courses, et cetera. So quite a wide variety of courses that we deliver online, and, like I said, about 100 courses a year. We've got quite a small training team, probably about five or six trainers delivering around 100 courses to around 2,000 attendees per year, so it's a large part of what we do as a company. Obviously most of our courses pre-March 2020 were onsite; I'd be travelling around the UK delivering training. Then March happened, and suddenly we had all of these bookings and the same scramble I'm sure you're all well aware of, trying to move things online. What we actually did first was take our standard course, lots of slides and talking and maybe half-hour practicals, and run it online. We said, if we're going to do this, let's get one of our trainers to try it, so Jamie delivered our Intro to R course to the entire Jumping Rivers staff. And it was really, really dull; it did not lend itself to online delivery. So we had this mad panic: okay, we need to make this more interactive, we need to keep people engaged. We changed from one day to two half-days and made things a lot more interactive. We basically threw out all of our slides and brought in live coding. So when I teach, say, spatial data analysis with R, it will mostly be me live coding and chatting with participants as we go. And instead of longer practicals, we have lots of short exercises, plus lots of quizzes just to check in, as formative assessment. And that's great; we've had some really nice feedback. When we do go back onsite, we're going to stick with this format because we think it makes better training. But it has made a lot more work for me as a trainer, and there's so much more infrastructure I now need to think about in terms of running a session.
We don't do any sort of take-away assessment like the things Vik was talking about, although that looks amazing; our assessment is more on-the-day chatting back and forth. So I want to tell you about the process we use, and our process always starts with this lovely lady. This is Deborah, our administrator at Jumping Rivers. When you book a course with us, she's the person who talks to the client and finds out which course you want, when you're running it, and information about the client. So everything starts with Deborah. What Deborah does is put all of this information into Asana, which is the task manager we use; all of our project management happens in Asana. The sorts of things Deborah records: obviously the client name. Quite often it might be the NHS or government, but we also go into private clients and run public courses, so there are lots of different clients we work with. We need to know who the client is, who's delivering the training (that's me here), and what the course is. Like I said, 30 courses across multiple languages: you want to make sure you teach the right one on the day, right? She would also set up the Zoom call information. And that's essentially everything. The idea is that from this one task on our project management system, we should be able to create all of the infrastructure required for training. Now, we're teaching virtually, so we need somewhere for people to actually do their programming. And we know, particularly with intro courses, we don't want people setting up and installing R, RStudio and packages in advance. We want to provide a virtual training environment where attendees can turn up, do the course, and have everything set up for them. So we need RStudio.
We need all of the right system dependencies and R package dependencies. We also want our notes there, and our scripts. We need a Google Doc for quizzes. We need links to Zoom. So there's a lot of stuff we need to spin up every time we run a course, and this is happening multiple times a week, so it's not something we could do manually. We've effectively developed a series of R tools, all internal at the moment, predominantly because, as Vik was saying, you tweak things to your exact personal workflow. But most of the tools we use have publicly available alternatives, which I'll highlight. And I'm very happy to share more; I can't go into technical detail right now because of the time, but if anyone wants to know about any of the little connections here, please get in touch. I'm very happy to share some code and help you get set up with similar things. Everything comes from a central R package we've developed called JR Droplet. The idea is that, as a trainer, I should be able to run one line of R code, give it the URL of the Asana task, and have it create all of the infrastructure I need: spin up a virtual machine with the right name, on a URL with the client's name built in. I'll talk you through all of the bits JR Droplet does, because from kicking it off to finishing, although it only takes about five to eight minutes to run, it touches a lot of different technology. DigitalOcean provides virtual machines, so that's what we use to create the machine used for training. The day before the training course, we spin up this virtual machine, and that's the environment the students use for the entirety of the training.
They're given a username and a password so they can log in and out of the space during the training, and for a little while after as well. So we pay to create virtual machines via DigitalOcean, and there is an R package that wraps the DigitalOcean API called analogsea, which we use a lot. It lets us create and delete virtual machines from R. We went into DigitalOcean and set up a sort of standard image: the minimum required for all of our courses. That would be something like Ubuntu 18.04, some minimal system dependencies, and R and Python installed, because those are our two biggest, most common course types. What JR Droplet can then do is take that base image and look in Asana to see what course we are teaching. If we're teaching Intro to R, we're going to want dplyr and ggplot2; if we're teaching visualization with Python, we need the seaborn library. So effectively we start from the base image, then it looks up the course-specific package dependencies and installs those as well. We use RStudio Workbench, which is the new name for the product previously called RStudio Server Pro; that's the system we use as the training environment. But we do need to actually install the package dependencies as well to make sure everything is set up. So now we've got a virtual machine with all the packages required for people to learn. We actually need the notes next. Like I said, we use quite a lot of live coding, and we also have PDF notes and slides; all of our course materials are stored in GitLab repositories. As you know, teaching programming means you have to keep on your toes.
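The DigitalOcean step above can be sketched with analogsea, the R wrapper around the DigitalOcean API that the talk mentions. Running this for real needs a DigitalOcean account and API token; the droplet name, region, size and image slug below are illustrative, and JR Droplet's course-specific setup would happen on top of something like this.

```r
# Hedged sketch: spin up and tear down a training machine with analogsea.
# Requires a DigitalOcean API token (e.g. in the DO_PAT env var).
library(analogsea)

d <- droplet_create(
  name   = "client-intro-to-r",   # illustrative name
  region = "lon1",
  size   = "s-2vcpu-4gb",
  image  = "ubuntu-18-04-x64"
)

# ...install R, RStudio Workbench and course packages on the droplet...

# Tear it down after the course:
droplet_delete(d)
```

The per-minute billing mentioned later in the Q&A is what makes spinning a machine up the day before and deleting it straight after the course economical.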
We update our courses roughly monthly, either because new packages have come out and we want to share the latest information, or because I've taught a course and spotted a typo, or seen an exercise I want to make a little harder or easier. So we constantly update our courses, and version control is obviously really important there. We also want to run a bunch of checks: any time the course material is updated, a fairly long GitLab continuous integration process checks things like: do the notes run? Are they linted correctly? Are there any spelling mistakes? Simple stuff like that. At the very end of that process we make what GitLab calls an artifact: anything created during the continuous integration run can be made available somewhere you can grab it down. So any time we change our notes, the notes are built and made available on GitLab. Then, again, JR Droplet can look in Asana and say: okay, we're teaching Intro to R, let's pull down the introduction notes and put them on the virtual machine, and let's pull down the scripts required for the learners and put them on too. So we've now got a virtual machine with all the right packages, all the right system dependencies, notes and scripts. Now, if your head's getting a bit full, so is mine, and so are the clients': there's already quite a lot of tech and quite a lot of links flying around, and that's a lot for the learner to think about. When I'm teaching someone, I don't want them thinking about how to log into RStudio Workbench or how to find the quizzes; we want them to just have one place for all of their stuff. So what we have is a welcome page, which effectively provides a central location for everything they need for the course.
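The CI process described above can be sketched as a GitLab CI job that builds the notes, runs checks, and exposes the result as an artifact. This is a hypothetical fragment, not Jumping Rivers' actual configuration; the job name, image, file names and the `spelling` package check are illustrative.

```yaml
# Hypothetical .gitlab-ci.yml fragment for course notes.
build_notes:
  image: rocker/verse          # R + LaTeX + tidyverse
  script:
    # Do the notes run? Render them end to end.
    - Rscript -e 'rmarkdown::render("notes.Rmd", output_format = "pdf_document")'
    # Any spelling mistakes?
    - Rscript -e 'print(spelling::spell_check_files("notes.Rmd"))'
  artifacts:
    paths:
      - notes.pdf              # grab this onto the training machine later
```

A tool like JR Droplet can then download the latest artifact for the relevant course and copy it onto the virtual machine.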
And again, even for just a one-day course, there's a lot of stuff we have to share with them, so we want to keep it as simple as possible. This welcome page is actually a Shiny app, hosted at a URL that's memorable for the client; we use a custom domain. The Shiny app aspect is just so they can self-serve a username and a password. That sort of thing takes quite a bit of load off the trainer: being able to go and grab your own username and password for RStudio Workbench is really helpful, and it's been a big benefit for us. It's a simple idea, really, just a landing page telling you: this is my video room down in the bottom left, so that's where Zoom is; this is our quiz document, which I'll talk about in a minute; and this is how we get to RStudio. We found that having one central location (you're starting your course tomorrow, this is home, this is where everything is) has been really helpful. It points to quite a few things: to Zoom, to our Google Docs, and to the RStudio Workbench where the actual learning happens. I'll show you the sort of thing we do in the live coding. We do provide PDF notes as well, built with R Markdown, but I don't want students reading those during the course; I'd much rather they be engaging with the teaching, and have the notes as a kind of takeaway pamphlet for afterwards. We make sure they're on the virtual machine in case people want to read ahead or look back over something, but the main way we deliver things is through a sort of fill-in-the-gaps, free-text approach. There are a couple of scripts for each chapter that we use. We have exercises, which they go away and have a go at, and the solutions, just so they can double-check; there's nothing wrong with having a nosy at the solutions, you can always learn something.
We also give them a demo script. The idea here is that as I'm live coding, I'm going to be typing lots of stuff, and the last thing I want is for them to worry about writing it all down. The demo script is effectively a good guess of what I'm going to code. I do go off script, and I like going off on tangents, but it gives them the confidence to sit back, relax, just watch me code, engage and ask questions. So this is what the actual environment looks like when people are working, and that's their own personal space; you can see we've logged in here as user 45. At the end of the course, they can click one button and everything is downloaded onto their computer, so if they've created another script just to play around with something, it's not going to be lost, which is really important. So yes, there's a lot of different technology going on here. And the very last thing: the Google Docs. We use Google Docs for our quizzes. This is something I borrowed from Greg Wilson, who is part of the RStudio education team. Using Google Docs for quizzes has been really useful for us. We create a single Google document for each client for their course, and it has these ballot-style boxes where people can just go in and mark their answer. The reason I like this over standard polls or other solutions is that it's very flexible for us as trainers. I can put any questions in there I like. I'm not restricted to text, so I can put a code snippet in and ask, what would z be if I run this code? I can put plots in, so when I teach spatial mapping, I can say: here are four maps and a piece of code; which of these plots would be generated by the code? So, lots of freedom. It's also really nice: if you've got 12 people in one Google document, you'd think that would be mad.
Actually seeing people typing, starting to type, not quite sure, gives me the feedback I've really missed in online training. And people quite often don't just fill in the poll; they'll also write things like: in R, is it plus, or which one do I run first? Or: I really don't have a clue, can you explain that again? So using the Google Docs has been great. Again, when we first started online training, this took a lot of trainer time: I would have to go into our Google Drive, create a Google document, and copy it across from a template. What we've moved to now is R Markdown. We have a standard template file, we render that R Markdown to a Word document, and then we use the googledrive package to upload it to the right place in our Google Drive. The big thing for me is that I would always forget to change the sharing settings. By default Google Drive is quite private; if you want anyone with the link to be able to edit, you have to go to File and allow anyone on the web to edit, and that's the sort of little thing I could so easily forget to do before delivering a course, causing a fluster and a break in the training. The googledrive R package has a function where you can specify the privacy of the doc: I want to upload this Google Doc to this place, and I want it to be editable by anyone. It's all these little steps of changing the process and putting in small checks that have really helped make sure our courses run smoothly. They're only little things by themselves, but I want to spend my time focusing on the learner and their understanding rather than worrying about the tech.
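The quiz-document workflow just described can be sketched with the googledrive package. Running this needs Google authentication, and the file names and Drive folder below are illustrative; the key point is that the sharing permission is set in code so it can never be forgotten.

```r
# Hedged sketch: render the quiz, upload it as a Google Doc,
# and set link sharing programmatically.
library(googledrive)   # handles OAuth interactively on first use

# Render the standard quiz template to a Word document.
rmarkdown::render("quiz.Rmd", output_format = "word_document")

# Upload and convert the .docx to a native Google Doc.
doc <- drive_upload(
  media = "quiz.docx",
  path  = "training/client-course/",   # illustrative Drive folder
  name  = "Course quiz",
  type  = "document"
)

# Anyone with the link can edit: the step that is easy to miss by hand.
drive_share(doc, role = "writer", type = "anyone")
```

Because the permission change is part of the script, the "anyone on the web can edit" step happens every time, not just when the trainer remembers.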
This whole process, from someone kicking off JR Droplet to having the machine up with all the notes, the Google Doc created and the welcome page live, takes between five and ten minutes, depending on how many system dependencies there are. When it's done, it pings our Slack: Rhian, you're teaching tomorrow, here's the welcome page, this is the client, here's all the information you need. So I don't have to worry about it. I get that message the day before, I know everything's set up, and I can just go in happy that the environment is there and the learners are going to have a great day. Now, I must confess, I work with a lot of techy, geeky people who like making and building things, and we do sometimes have a tendency to build tools for fun rather than out of real business need. And I was thinking: we've built this really quite complex stack, touching GitLab, Asana and Slack. By the way, those all have existing R packages wrapping their APIs, such as slackr; we've written our own because we're doing something quite bespoke. But yes, we've built this complex stack, and it doesn't just look after itself: APIs change, and we have to go in and maintain things. But we reckon as a team we've saved around six hours or more per course, and we're a very small team. And that's just the setup. We also have a similar command for cleaning up afterwards: it checks who logged into the virtual machine and uploads a register to Asana, so we can discuss attendance with the client. It sets up a nice R Markdown template ready for our questionnaires, and it also points to a Shiny app which generates certificates and emails them directly to the attendees afterwards. So there's quite a lot in the back end as well. So yes, it's definitely worth it, even though it is a lot to maintain.
The key things for me are having everything in one central location, and I guess that's two points: the Asana task listing everything you need to run the course, but also, for the client, knowing they can go to that welcome page and everything is there for them. It means I can focus on teaching, which is the bit I love, rather than stressing about the admin. But it has taken our full Jumping Rivers team; I've not built this stack alone. It's a collaborative effort, because when you're touching all these different technologies it takes a wide range of skills. I'm very happy to give people more detail if you're thinking, oh, I really want to know about Google Drive, or how do you do certificates in Shiny? We're sponsoring the conference, so you can come grab us in our Slack channel, and there's our Twitter and our website. Thanks very much. Thank you. We've got a couple of minutes left for one question only now. So here's a question: are the virtual machines provided by DigitalOcean similar to cloud infrastructure such as AWS Cloud9? How did you decide on this infrastructure in particular for your setup? We went with DigitalOcean because the pricing model works well for us: we're costed per minute, and that works great. Most of our infrastructure is DigitalOcean plus RStudio Workbench on top. But yes, it came down to looking at the flexibility of the different offerings and which models suited us. There's one more question. Impressive stack; quick question: have you used RStudio Cloud for delivering online training, and if so, why have you not carried on with it? That's a really good question. We are RStudio partners, so we work quite a lot with their services as well. Like I said, RStudio Workbench just gives us a bit more flexibility in the service we can provide; it's what used to be called RStudio Server Pro until a couple of months ago.
It just gives us a few extra bits of functionality and complete control over what we want to spin up and where, and we can host it ourselves, which means we can tweak everything we need. Thank you so much for the talk. So now we have the next speaker, Christophe, a software engineer at RStudio working in the R Markdown team, who will talk about extending the functionality of your R Markdown documents. Christophe, I think you need to unmute yourself. It's a pre-recorded talk. Matt, while sharing, please share the sound as well. Do you have some technical issues? It seems like it's not really playing anything. Just a second, there seems to be a problem with the audio. Does it work now? There's no audio yet. Maybe I can try sharing my screen. I have an audio problem over here. [The recording plays for a while with broken, garbled audio and is restarted below.]
Can we get started again with the same video for Christophe? I think there were a lot of issues; sorry for the trouble. Hi everyone, thank you for joining. In the next 15 minutes we will talk about how to extend your R Markdown documents. First, a few words about me. I recently joined RStudio as a software engineer and I'm working in the R Markdown team. You may find me on Twitter and GitHub; my handle is on the slide. Now let's begin. In this talk we will focus on some recipes to extend your R Markdown documents. It is inspired by and based on the R Markdown Cookbook, which is made with bookdown and which you can read online. The first thing to understand before extending your R Markdown document is what happens when a document renders.
Often the feedback we have is that it's like magic. You may already know this illustration by Allison Horst, with the little wizards that cook and mix some ingredients, text and code, in order to produce beautiful documents. But this is just stage magic. At the end there is some tooling involved under the hood, and it's important to understand how this works, because it will help you know what to look for and what part exactly to tweak when you want to extend your R Markdown document. Rendering the document is mainly a three-step process. First, you write the Rmd file, which will be converted to a more classical .md file, so a markdown file. This is done by knitr. The second step: this markdown file will be the input of Pandoc. Pandoc is a universal converter. Pandoc is in charge of converting your markdown file to any output format you ask for, so usually HTML, Word documents, PDF, or even PowerPoint presentations. The third and last step can happen when you render to PDF. When you render to this format, Pandoc will just convert to a .tex file, and after that it will be LaTeX which converts to PDF. R Markdown will also run LaTeX for you, the same way it runs Pandoc for you. These are the three main steps of the process. Each one has its own features and its own set of configuration, and what R Markdown does for you is just making it easy, with one function, rmarkdown::render(), to convert the Rmd file to any output format. In the following I will show you a few features that you can use to tweak the default behavior and extend the report. To do that I will use a simple report as a base to add features on. In the animated GIF below you can see the Rmd source file; I do not expect you to read it from this slide. You can go into the RStudio Cloud project or just download the Rmd file directly from the link below the GIF. This report contains a quick analysis of the data from the palmerpenguins package.
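The three-step pipeline described here (knitr, then Pandoc, then LaTeX for PDF) is driven by a single function call. A minimal sketch, assuming a local file named `report.Rmd` (the file name is illustrative; `render()` and its `output_format` argument are the rmarkdown API):

```r
library(rmarkdown)

# Step 1: knitr evaluates the R chunks in report.Rmd and writes a .md file
# Step 2: Pandoc converts that markdown file to the requested output format
render("report.Rmd", output_format = "html_document")

# For PDF, Pandoc first writes a .tex file, then rmarkdown runs LaTeX on it,
# the same way it runs Pandoc for you:
render("report.Rmd", output_format = "pdf_document")
```

Each step has its own configuration, but `render()` wires them together for you.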
So this analysis is based mainly on the content of the package's website, and it is just an example that we will build on to show the different features you can add to customize the report. This report is using the html_document format with its default look. So there is some text, some graphics, some tables, some illustrations, and you can see this in the GIF, or you can also see the result in the RStudio Cloud project or download the HTML file. So the aim is to extend and improve this Rmd document to make a report that you can publish. When creating a report to publish, the first thing we want to do, unless it's for a demo or example, is to hide the source code from the document. For that you will need to set the echo knitr chunk option to FALSE. You can set that globally using the opts_chunk object in knitr, and this will apply the echo = FALSE chunk option to any of the following chunks; you don't have to set it on a per-chunk basis. This configuration can be made in a setup chunk. On this chunk the echo = FALSE configuration won't apply, so it's a good idea to use another chunk option called include, which you can set to FALSE. include = FALSE will evaluate the code but not show the source or include the outputs, and so it's a great fit for the setup chunk. Another important thing to do for a published report is to think accessibility, and so you need to add alt text to any images in your report. In recent knitr versions you can do that on chunks by using the fig.alt chunk option. This fig.alt option can be used with any external image you want to include: for that you can use the knitr function include_graphics() to include an external image from within a chunk. This way you will be able to set the fig.alt attribute on this chunk. Obviously you can also do that for R graphics. What I show on this slide is that you can use any variable created during the knitting process.
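The options just described can be sketched as follows; a minimal example with hypothetical chunk labels and an assumed image file `logo.png` (`echo`, `include`, `fig.alt` and `include_graphics()` are the knitr options and function being discussed):

````markdown
```{r setup, include=FALSE}
# include = FALSE: evaluate this chunk but show neither its code nor its
# output, so the global setting itself stays invisible in the report
knitr::opts_chunk$set(echo = FALSE)
```

```{r logo, fig.alt="Hypothetical organization logo"}
# fig.alt sets the alt text on the image for accessibility (recent knitr);
# external images go through include_graphics() so the option can apply
knitr::include_graphics("logo.png")
```
````

The same `fig.alt` option works for figures produced by R plotting code.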
So here I'm setting a variable in another chunk, for which I set include = FALSE because I don't want to include this in my document, just for it to be evaluated, and then I can use the value of the variable in the fig.alt option of the chunk containing the output. It would also be nice if the published report looked a bit different from the default document, and maybe from all the other documents that other R Markdown users can produce, so I want to style my document. For example, I would like to apply a color on the headers. This color could come from some guidelines of my organization, and for that I can use CSS, so cascading style sheets. It's the way to style any HTML output, and you can use that with R Markdown too. One interesting thing is that you can do that from inside the R Markdown document: among the engines you can use in knitr there is a CSS engine, and in such a chunk you can put any valid CSS code. I will set echo = FALSE on this chunk because I don't want the source of the CSS to be included in the output of my document, and when the document renders, the CSS will apply to the document directly. So in this chunk I'm using the CSS engine and I'm setting the header levels 1, 2 and 3 to change the color to a blue color. This is useful, for example, for quick iteration on a style you want to make for HTML before moving it to an external file, but it's also interesting for demoing or for teaching purposes. But obviously, if you have an external file, you can also pass it directly to your R Markdown document. You can use the css argument of the html_document function, and this workflow is better suited when you have an external file that you want to share across documents or with your colleagues. By doing either of the two previous steps I now have blue headers inside my document. Next, it would be great if I could highlight some of the results of my analysis. With R Markdown you can do that with something that we call custom blocks.
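The CSS chunk described above can be sketched like this; the exact hex color is an assumption, and echo = FALSE keeps the CSS source out of the rendered report:

````markdown
```{css, echo=FALSE}
/* change the color of header levels 1-3; the rules are injected into the
   HTML output while the CSS source itself stays hidden */
h1, h2, h3 {
  color: #1f6fb2;
}
```
````

With an external file, you would instead point the css argument of html_document at it in the YAML header.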
Custom blocks are based on a Pandoc feature called fenced divs. This is a standard syntax that Pandoc supports. You create these blocks using three colons followed by either a class, beginning with a dot, or attributes written as a name, an equals sign, then a value. So in this custom block I'm creating a special environment of class highlightbox, for which I want to apply a style that I provide as an attribute. The content of the block will be usual markdown. So here I'm adding one of the previous results using knitr inline code, and I'm putting this value in bold to emphasize it. This will be rendered in HTML in the following way: you will have a div tag with the class that you set in your custom block, so here highlightbox, and the attributes will be set on this div tag, so here my style attribute will be set. The content of the div will be converted as usual markdown: my bold text is now between strong tags, and my previous variable has been evaluated by knitr and replaced by its value. How can we improve this highlighted result? What I would like, for example, is a box: a box with a border in the same color as my headers, and the text inside this box in the same color as well. Just by thinking about what I want to do, I see that I want to apply the color to three elements. So there will probably be some duplication in my style sheet, and so: can I use variables, or can I simplify the writing of the style sheet? Obviously, for those who know CSS, you can use variables in a CSS document, but what we'll see next is another way to improve the creation of a style sheet for your document. This new way is using a tool called Sass. Sass is an extension language for CSS that allows you to write rules and selectors in a more flexible way than you can with plain CSS. But don't worry, I'm not saying that you need to use another external tool.
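The custom block described above can be sketched like this; the class name `highlightbox`, the color, and the inline variable `mean_mass` (assumed to be computed in an earlier chunk) are all illustrative:

````markdown
::: {.highlightbox style="color: #1f6fb2;"}
The penguins in this sample have a mean body mass of **`r mean_mass`** grams.
:::
````

Pandoc turns this into a div carrying the class and the attributes, with the content converted as usual markdown, roughly:

```html
<div class="highlightbox" style="color: #1f6fb2;">
  <p>The penguins in this sample have a mean body mass of
     <strong>3701</strong> grams.</p>
</div>
```

The number shown is a made-up value standing in for whatever knitr substitutes for the inline code.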
You will be able to use that directly from R, because there is now an R package called sass, so it's the same name as the tool. This R package allows you to use this tool to produce any CSS, but with R Markdown you won't even need to call this R package yourself, because the support for Sass has been built into R Markdown thanks to this package. So let's see now how to use that in your R Markdown document. Here I'm showing the CSS that I need to apply to have my box with the blue border and the strong text. You can see that I need to set my color in three different places. It would be really interesting if I could simplify that, mainly because if I want to change the color, for example, I would like to change it everywhere it is used. This is where the Sass syntaxes come into play. There are two syntaxes that you can use: the first is the SCSS syntax and the second one is the Sass syntax. What does it look like? Here you can see that I'm using a special knitr engine called scss. It works the same way as the CSS knitr engine, but it's specifically meant to be used with the sass package: any code you put into this chunk will be processed by the sass package to produce the CSS included in the document. Let's look at this SCSS syntax. You can see that it's very close to the usual CSS syntax, with the brackets and the semicolons, but you can do more things. For example, here I'm setting the blue color inside a variable; a variable begins with a dollar sign. I can then use this variable in the different places I want to apply it. You can also see that I can use a nested organization for my elements. So for example, here I have a div that I want to style, the div of class highlightbox, and I give it a solid border with some padding and the color, but I also want the strong elements inside a div of this class to have the same color. Using this nested structure is easier to read, in a way, and easier to write than if I had to write the plain CSS.
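The duplication can be removed with a Sass variable and nesting; a sketch in an scss knitr chunk, reusing the illustrative class name and color from above:

````markdown
```{scss, echo=FALSE}
// define the color once; $-prefixed names are Sass variables
$primary: #1f6fb2;

div.highlightbox {
  border: 1px solid $primary;  // box border in the theme color
  padding: 0.5em;
  color: $primary;             // text inside the box
  strong {
    color: $primary;           // nested rule: bold text inside the box
  }
}
```
````

Changing `$primary` now updates all three places at once, and the sass package compiles this to plain CSS during rendering.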
This is exactly the same with the Sass syntax. The Sass syntax is the older one. It's less close to usual CSS because it works only with indentation: there are no brackets and there are no semicolons. So on this slide I'm just showing the way to write the previous code using this syntax. Obviously, you can write all this in an external file with a .scss or .sass extension, and you can provide this file to the same css argument, as you can do with a CSS file. By inserting the previous code into my Rmd document I now have a custom block that will be styled and rendered in HTML as a box with a solid blue border, with text inside, and any bold text will also be set in blue. If we add all that into our document, we have no more source code in the document, we have our headers colorized, and we have our results highlighted in a special blue box. You can't see this in the animated GIF, but you can also see the result in the RStudio Cloud project or download the modified Rmd or HTML with the link below the animated GIF. How to go further if you want to extend your document even more? You can find a lot more information and many more recipes in the R Markdown Cookbook. I hope you learned something in these 15 minutes. Thank you for listening to me and have a good day. Hello. Thank you, Christophe. I'm sure there are lots of questions in your own Slack channel, so if you're happy, just post your questions on Slack, and now we're going to move to the final speaker, Patricia Martinková, talking about psychometrics. During this talk I would like to share my insight on teaching computational aspects of psychometrics with R and Shiny. Here is an outline of the talk. I will first introduce the field of psychometrics and the courses I teach. I will then focus on the ShinyItemAnalysis package, its newest features and how it was integrated into the course.
I will then describe how understanding of important psychometric concepts was supported by a collection of real and simulated data sets which are part of the research. Last but not least, I want to discuss the book related to this talk. Can you re-post the link? I can also try to share it again from my own setup. Maybe the quality is indeed better. Psychometrics is a field concerned with measurement, its practice and its analysis. Researchers and practitioners involved in psychometric research are organized in the Psychometric Society; you might check the web page of its virtual conference taking place. I see it. I am going to try with my own setup. Topics include the estimation of reliability, in order to deal with the omnipresence of measurement error, detailed modeling with so-called IRT models, and many other topics. There are a number of existing R packages developed specifically for psychometric models. Let me share my experience teaching psychometrics as an interdisciplinary course to a very heterogeneous audience. As a Fulbright alumna I got the opportunity to teach psychometrics to graduate students at the University of Washington in Seattle. Since my return back home, I have been teaching a graduate course devoted to the foundations of psychometrics and a seminar on recent research. The courses are attended by students from many different fields, and we also get test developers and other practitioners. We have students and participants at various levels of R proficiency, from beginners to very experienced, and with various statistical backgrounds. With this wide and heterogeneous audience, the hardest part is not to allow anyone to lag behind while at the same time maintaining an interesting and challenging course. If you have any other issues, can you share again? Sure.
The main goal of this course is to explain the psychometric models and methods in a wider context of statistics and data science, providing terminology links and interpretations, because the same concepts are often named differently in statistics and in psychometrics. To support this understanding, my goal is to illustrate important applications, especially cases where the computational side of one method provides a deeper understanding of another. For this purpose I have collected real and simulated data sets for psychometrics. My goal is also to provide a toolbox of existing R functions for psychometric analysis and to explain what these functions do under the hood; this course focuses on explaining these differences. Finally, going back to the main and hardest task, my goal is to make the computational tools and R functions accessible and understandable, including to those who are only beginning with R. For this purpose we developed ShinyItemAnalysis, an R package for psychometric analysis. The package contains a number of functions and also an interactive Shiny application, and for those who are new to R the app is also available online on a public website. An early version of the package was presented at the useR! 2017 conference in Brussels and was later described in The R Journal. Since then there have been a number of improvements, which I will now describe. We are happy that the ShinyItemAnalysis package is being widely used: we now have a total of more than 40,000 downloads from the RStudio CRAN mirror alone and over 1,000 downloads from GitHub, the online app is used by many more, and we are glad to get emails from people all over the world who are using it. Let me now focus on the interactive Shiny app. What is displayed here is the intro page. The available methods are organized in separate sections, which correspond to the topics covered by the graduate course on psychometrics which I teach, and which also correspond to individual chapters in the book covering the same material. The newest developments include new methods made available within many of these sections, including item analysis, reliability, IRT models and scores, as well as a number of new toy data sets. With the wide range of methods comes a high number of package dependencies: on one hand we want to offer a wide range of approaches, on the other hand we do not want the app to be broken by any of them
becoming unavailable. The interactive training sections have also been extended. These allow the user to explore, for example, how the item characteristic curve changes with the item parameters, and they are further illustrated and discussed with the data sets. Let me now move to the data sets. The first data set is the AIBS grant peer review data set. It was used in a paper, published last year, showing that reliability estimates can be misleading under a restricted range, while range-restricted reliability estimates are possible under specific assumptions. An interactive illustration of this issue with the AIBS data is provided in ShinyItemAnalysis: we can see what happens when the range of proposal scores is restricted, and under the hood bootstrapped samples are utilized. A very important new functionality is the sample R code provided in the app, which makes the console version of each analysis available for the user to try. The second data set, or rather collection of data sets, illustrates differential item functioning, DIF. As we demonstrated with a simulated data set in a published paper, DIF may be present even in cases where the two groups have exactly the same distribution of the total scores.
Item analysis should be done routinely in the development of assessments, and items should be checked by content experts. In longitudinal designs, the differential item functioning in change which we proposed in a paper published in 2020 in the Learning and Instruction journal can provide proofs of instructional sensitivity even in cases where differences in change are not visible in the total scores. We can see that two students from two different academic tracks, study tracks, but otherwise with exactly the same characteristics and exactly the same baseline score in grade 6, have different probabilities of answering items 6A, 6B and 6D in grade 9. The DIF and DIF-in-change analysis is available interactively in the ShinyItemAnalysis app, with the data set discussed above implemented within the difNLR package for DIF detection, described in a paper published in The R Journal in 2020. Further computational aspects of parameter estimation in group-specific models, which are implemented in the difNLR package, including a simulation study comparing several existing and novel approaches, alongside some research on reliability, will be presented at the International Meeting of the Psychometric Society later this month. Last but not least, let me introduce a book still in preparation which is planned for publication in 2022. The title of the book is Computational Aspects of Psychometric Methods. In this book we hope to bring a deeper understanding of psychometric methods and models. The book is aimed at a wide audience, including graduate students of statistics, psychology, education, health and other fields, as well as researchers and practitioners. The content is based on the topics covered by the psychometrics graduate course I teach, and in each chapter we provide R code and a number of practical examples, including those presented here today. Each chapter also includes a section demonstrating the methods in the interactive ShinyItemAnalysis app. I hope this can help attract those readers who are new to R. To conclude, I have presented ideas on teaching computational aspects of psychometrics with R and with the ShinyItemAnalysis package. The interactive Shiny app provided in this package was developed to help attract those who are new to R. I want to highlight the fact that, although available online and interactively, the entire complex Shiny app is built with R, which demonstrates the power of R. I also want to highlight the importance of the sample R code provided within the Shiny app, making the processes behind the app more transparent and helping users. I have also illustrated, and want to highlight, the importance of the collection of relevant real and simulated data sets to help explain important computational aspects. And finally, stay tuned for the upcoming book on psychometric methods. This concludes my presentation; thank you very much for your attention. Thank you, Patricia, for a wonderful talk. Once again, since we are out of time... wait, we are not really out of time, we are the last session, and I really suggest that, unless everybody wants to hop off to the party immediately, we could take a few questions, and we could also take a few questions for Christophe, if he's fine with it and has time to answer them. Perfect, thank you Christophe, and sorry for the glitches. By the way, Christophe's video is on Slack now; if you want to rewatch it, like I want to, you can rewatch it and maybe catch up with him during the conference, as many people did. Also, there was quite a lively interaction with the elevator pitches that were right in the middle of the night for a lot of people, so this should work to catch up if you are interested in the R Markdown improvements topic. So I suggest we go on with the questions for
Patricia, and after that the questions for Christophe. I have a question for Patricia: how do you set up your interactive quiz? Are you using learnr, or did you make something special in the Shiny app? No, it's inside the Shiny app, and one thing we are actually considering is to also generate these data sets. So right now there is a fixed data set and the student answers questions on this data set, but what we are thinking about implementing is the possibility to automatically generate a data set, so that different students get different data sets; this is not implemented yet. I also got a lot of motivation here at the conference for how to automate the process of grading: right now the students send us a screenshot, but with more students this might need some further steps. Here are a couple of questions for Christophe about R Markdown. Is there a blog post showing a more step-by-step process on how to style the output using Sass? Currently there is no blog post specific to R Markdown, but as I said, it's powered by the sass package, which is an R package developed by RStudio, and there is a nice article on the package website which is an intro to Sass, the external tool, with a specific part on how to use it in Shiny apps and in R Markdown. That would be a great place to start. After that, there is also the bslib package. I didn't mention it in the talk, but there was a talk by Carson Sievert at the RStudio conference last year about that. So there is now support for Bootstrap, and it's also using Sass, and you'll find a nice article there on how to use Sass with R Markdown, because R Markdown's html_document is using Bootstrap. I hope I'm not missing any questions from the Q&A. There is one question by Jeremy: is there a way to make images center-aligned in R Markdown? Yes,
there is an option, a knitr option, for that, which is fig.align. I posted the link in the Slack for the other question too, so you can find it there, but this specific question is also answered in the R Markdown Cookbook; there is exactly the same question in it. So when you have a question like that about how to do something in R Markdown, the R Markdown Cookbook is the place where we put this kind of answer that is not tied to one specific, more global topic. So there is an option for that, and it should work for different types of outputs, because CSS is only for HTML output; if you want to style, for example, PDF output, and the presentation was about HTML, you need to use other tools, but integrated features like fig.align will work for several types of outputs. And the final question: can we change the font in the YAML for R Markdown? It depends on the output format. You can change the font; fonts are quite specific because you need to have the font installed on your system and things like that, but with html_document you can use web fonts, and so the bslib package, or some specific packages to handle fonts that you can find on CRAN, can help; I will post the link in the Slack. So you can change that in a chunk or inside the YAML, if it's supported. I think there is something about that in the Cookbook too, or in the R Markdown Definitive Guide, if you want to look into that. Correct. So, can I wrap up my session now, or do we have more time? If there are questions we would maybe have an additional couple of minutes, but if not, feel free to wrap up. Well, I'm just going to wrap up now. Thank you to all the speakers for impressive talks about Shiny apps, R Markdown and automatic grading in online teaching. The next session is the social event mixer, and that's about it. Thank you so much for joining me for the session. Thank you very much. Thank you.