Okay, it's about three minutes after, so I'll go ahead and get started. As I was saying, this is part two of a two-part mini-series: last week my colleague Eric Milman talked about the riskmetric package, and today we're going to talk about the riskassessment application. My name is Aaron Clark. If you don't know me, I work at Biogen as a data scientist, and I serve as the lead developer of the riskassessment application for the R Validation Hub. A quick disclaimer: any opinions I express here today don't necessarily reflect those of Biogen or the R Validation Hub.

Here's a quick agenda of what we plan to discuss. We'll start with a quick intro to both riskmetric and riskassessment. In the past I've given other talks that delve deeper into riskassessment, like how to install it for the first time; this talk assumes you already have a general sense of that, but we will talk about why we created this Shiny app in the first place. The bread and butter of the talk will be the latest enhancements in the release that just came out earlier this week. Then we have a demo, of course, to show you hands-on how these new enhancements look and feel in the app. I have a couple of things to tease in a coming-soon segment, and then we'll finish with some Q&A.

If you're not familiar already, the R Validation Hub has been around for a while, and it's basically a group of pharma organizations focused on building tools to help those who work in regulated environments. If that's new to you, go check out the pharmar.org website; there's lots of info there about all the work streams within the R Validation Hub, and plenty of opportunities to get plugged in if that interests you.

The two tools we're focusing on in this mini-series are, first and foremost, riskmetric, which we covered in part one. It's a framework to quantify the risk of using an R package in a regulatory environment, where that risk is quantitative, meaning it gets converted into a number. Then there's the riskassessment package, which is a full-fledged R package but is also a Shiny application that sits on top of riskmetric. We'll get into the details shortly, but its main goal is to be a central hub for your organization to use riskmetric in a meaningful way.

So, to recap: riskmetric is a means of assessing the quality of a piece of software, and it does that by leveraging a number of assessments. Today there are about 18 assessments, give or take, all geared toward measuring software development best practices, community engagement, sustainability, and things like that. Here's a sampling of a few of them: Does the package have a license? Does it have a place to report bugs? They can get into more complex things too, like the test coverage for a package, and community usage, like reverse dependencies (do other packages rely on this package?), or how often it's downloaded and how popular it is.
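[Editor's note: in code, that pipeline is only a few lines. A minimal sketch of the core riskmetric workflow the app wraps, assuming the riskmetric and dplyr packages are installed and an internet connection is available for the CRAN remote:]

```r
library(dplyr)
library(riskmetric)

pkg_ref("prodlim") %>%   # resolve a package reference (falls back to a CRAN remote)
  pkg_assess() %>%       # run the ~18 assessments: license, bug reports, downloads, ...
  pkg_score()            # convert each assessment into a numeric risk score
```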
So why create a Shiny app? The app was actually created a while ago, probably three-ish years ago. I think its main goal, or at least its highest and best use, is to help organizations, and specifically people within organizations, create the documentation necessary for GxP package inclusion. What do I mean by that? We're trying to provide a highway for members of an org to take responsibility for assessing a package's risk before they go make a request to their IT personnel, or whoever the gatekeeper is that makes those package-inclusion decisions. That way the onus isn't on the gatekeeper; it's on the requester, the person making the request. In short, the reason the app exists is to create a polished summary report that walks through all the requirements your organization needs met before it can include a package in a regulated development environment. That's the goal: users create these reports and hand them off to the person making those decisions. We're trying to avoid people going to the decision makers directly and saying, "hey, I want these 10 packages because I need to do an analysis that requires these 10 packages." The Shiny app makes the requester reflect on the risk of introducing those packages into a GxP environment.

Along the way, the app does a lot of other things in service of that goal. It provides one space where you can do tons of package exploration without writing any custom code, especially custom riskmetric code. Along those same lines, it ensures the riskmetric code you're running is run on the same machine and in the same environment, which helps with the reproducibility of the risk outputs riskmetric generates. In addition, it makes sure you're following your org-specific settings; you may have a lot of options specific to your organization, and the application gives you ways, which we'll talk about in a bit, to apply those consistently. It also does some automation for you, so when you first start using the app it won't be a terribly arduous process to categorize packages as, say, low, medium, or high risk; some automation is built in, and we plan to build in more in the future. It lets you manage who's involved in the process: there may be different people you want to perform different parts of the review, maybe a statistical procedure reviewed by one person and another aspect of the package assessed by someone else, so we've built in authentication and role-management features that let admin users set that up. In addition, it stores summaries and communications: as you review packages you can take notes, write down what you think is or isn't important, and comment to ask someone else to check out something you found interesting. All of that communication is stored in the database that sits inside the application, so it can be published in the final report if you want. And last but not least, there's the summary report itself, which gets shared with the decision makers.
Okay, so that's the reason this app exists. I think it's a great addition to riskmetric, and it helps organizations adopt riskmetric a lot better for the reasons mentioned.

The bread and butter of today's talk is sharing the latest features of the application, which came out in version 2.0.0; it just hit GitHub earlier this week, and I'm excited to share everything it includes. There's a facelift to the report builder and the database view. There's better support for analyzing dependencies in the application, which is pretty exciting news; I'll show you what that means in a bit. There's even more org-level customization, including the use of a configuration file, and we'll look at an example config file as well. We now allow admin users to edit roles and privileges, first and foremost in that configuration file, but also on the fly in the application. And, the climax of today's talk, we now let users explore the source contents of a package. This is a new feature that deviates from following riskmetric to a T: the application still serves up lots of riskmetric info, but it also allows a more manual, hands-on approach to exploring a package, and you'll see that in just a minute. Before I get to the demo and all these examples, I want to mention that the reason we were able to focus our efforts on these new features is that people told us they were important on our GitHub issues page. So if there's an issue you'd like to open so we can focus on something important to your organization, please open a GitHub issue and let us know; we would love to talk about it with you.

Okay, first and foremost, the report builder. Previously I don't think we even called this a report builder; I think we just called it a report preview, but it's definitely a more holistic approach now. We allow users to define what content shows up in the report, and we added a package summary. If I scroll down here, there's a little GIF showing what the report looks like today: all the metrics, all the comments and summaries, and even some metadata. Now let's say you want to remove the author from the report: you can remove that. If you want to get rid of the overall comments, you can remove that piece as well, so it allows a lot more customization. There's also the new package summary, where you can write down really important information, maybe the six or eight or fifteen points you need to hit before you deliver this summary to someone for package-inclusion purposes. Then you can download the report in whatever format you want; I think the options are HTML, Word, or PDF at this point. So that's the facelift the report builder got. We have plans to keep expanding it in the future, but this is our first step in that direction, and it's definitely going to become more usable from that aspect.

Okay, the database viewer also got a facelift. If you don't remember, the database viewer basically shows you all the packages you have uploaded to the application in the past. First and foremost, you get a summary of the uploaded packages.
You'll see the date each package was uploaded, which is becoming increasingly important because that date is tied to when the riskmetric code was run, and thus when the package was assessed, plus a bunch of decision-related columns, which I'll show you in just a minute. And everything is now easily downloadable, which is pretty low-hanging fruit from a table perspective. Up at the top you can see a preview: I have 300 packages in my database, and 178 of them have had a decision made about them. You even get a little summary saying, for example, that one package was considered low risk, 147 were medium risk, and 30 were high risk. Right here is our decision column, and you can see who made each decision. Sometimes it was made by a specific user, but if you use any of our decision automation, you'll see that a lot of these were auto-assigned when they were uploaded to the database, so I didn't have to do any extra work to assign those. The uploader is listed here as well, and so is the decision date. I can't remember if this was in our last release or not, but we also have buttons here: if you want to quickly zip over to view, say, the zoo package, you can click this button and view all the metrics related to that package. Quick and easy back-and-forth to view whatever packages you want. Okay, that's the database view facelift; I ran through it pretty quickly, but we'll explore it a little more later.

Package dependencies now have additional support in the application. Here I'm looking at the caret package, and you can see it has a lot of dependencies; it says 16 here. But if you look at each one, a lot of these, down at the bottom, are just base R packages, and I wouldn't say those should really count against a package; most people trust base R. Some of the dependencies have package scores listed here, ranging from 0.27 to 0.6, and there are a couple of packages that aren't in our database yet. We've created some handy little buttons where you can just click and the app will upload the package and produce a score for you. This is handy because, as we've learned through our case studies, a lot of people really care about those first-order dependencies; a package is only as good as the packages it's built on. So this may be worth exploring further for the caret package, for example those two or three dependencies with relatively high scores. In addition, you can see reverse dependencies really clearly: 293 packages depend on the caret package. That's a high number, and it speaks to how reliable this code base must be, because so many people rely on it when building their own packages.
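[Editor's note: if you want to sanity-check those numbers outside the app, base R can produce a similar first-order and reverse dependency summary. This is a hedged sketch, not how the app computes it, and it assumes a CRAN mirror is configured in options("repos"):]

```r
db <- available.packages()   # CRAN package database (needs a mirror configured)

# First-order dependencies of caret, ignoring base R packages
deps <- tools::package_dependencies("caret", db = db,
                                    which = c("Depends", "Imports"))[["caret"]]
base_pkgs <- rownames(installed.packages(priority = "base"))
setdiff(deps, base_pkgs)

# Direct reverse dependencies: how many CRAN packages depend on caret
rev_deps <- tools::package_dependencies("caret", db = db, reverse = TRUE)[["caret"]]
length(rev_deps)
```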
Okay, another great addition to the app is the ability to tweak org-level settings, either in-app or in the configuration file. You can customize your decision categories, and even the colors that go with them, in case you want something consistent with your organization's corporate color scheme. You can toggle your automation rules, you can adjust the roles and privileges, and you can initialize your metric weights, in the app or in the config file.

So here's an example configuration file. You can specify all your databases, their names and locations; that part has been there for a while. But the credentials section is new. Here you specify your roles: in this example we have an admin, a lead, a reviewer, and a viewer role, but you can do whatever you want here. Then you specify which privileges each role should have. The admin role in this scenario has all the privileges; the admin user can do anything. They can change and edit existing users, adjust weights, make final decisions, revert decisions, add and delete packages, add overall comments, et cetera. The reviewer, conversely, can only add packages or make general comments, so they have far fewer privileges. And the viewer has fewer privileges still; they can't change anything. All they can do is log in and look around, which may be exactly the role you want to assign to certain groups of people.

The decisions piece of the YAML file lets you specify a few things: the names of the categories, the rules associated with those categories, and the colors. Here we have the typical three, low, medium, and high risk, but you could have two if you want, something like "GxP compliant" and "not GxP compliant," or you could have five categories; whatever your organization wants to use, you can do. If you want to take advantage of automation rules, you specify those here. You can see I made one rule for medium risk and one for high risk; the high-risk rule says that whenever a package's riskmetric score is over 0.639 and at most one, it should automatically be categorized as high risk. You can do that for as many or as few of the categories as you want. Here I'm not doing it for low risk, which means there are no rules associated with low risk, and nothing will be automatically categorized that way. And if you want, you can add a color; here we're using RGB values to generate a color, just for medium risk.

Lastly, you can initialize your metric weights directly from the config file. You can also adjust them in the app, but it's nice to set them here at the onset so that with future deployments everything is already set up for you. Here only two are listed, so every other metric receives an equal weight of one unless you change it. You can give a metric a zero, like I did here for code coverage; basically I'm saying "get rid of code coverage, it's not important to me for some reason." I would argue it's actually a really important one, so it probably shouldn't be a zero. Or: "vignettes are really important to me, so I'm going to bump that up to a two."
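[Editor's note: to make that concrete, here is a hypothetical sketch of what such a config file could look like. The exact keys and privilege names the app expects may differ from what is shown here, so treat every name as illustrative and check the package documentation for the real schema:]

```yaml
# Illustrative only; key names are assumptions, not the app's actual schema.
credentials:
  roles: [admin, lead, reviewer, viewer]
  privileges:
    admin: [add_user, delete_user, weight_adjust, auto_decision_adjust,
            final_decision, revert_decision, add_package, delete_package,
            overall_comment, general_comment]
    reviewer: [add_package, general_comment]
    viewer: []                      # log in and look around only

decisions:
  categories: [Low risk, Medium risk, High risk]
  rules:
    Medium risk: [0.33, 0.639]      # auto-assign when the score falls in this range
    High risk: [0.639, 1]           # auto-assign when the score exceeds 0.639
    # no rule for Low risk: nothing is auto-categorized as low
  colors:
    Medium risk: !expr rgb(52, 235, 229, maxColorValue = 255)  # assumes the YAML reader evaluates !expr

metric_weights:
  covr_coverage: 0                  # drop code coverage from the score
  has_vignettes: 2                  # double-weight vignettes
```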
And just to show you, you can change these things on the fly in the app too. This screen is for setting your automated decisions, and, as we saw earlier, when you upload a package it gets automatically assigned, which you can see in the database view. So that's a quick highlight of all the org-level settings we've empowered our app deployers, or at least admin users, to use.

A little bit more on the roles and privileges. Admin users have always been able to add new users and edit user profiles. Here's an example where we have Adam the admin, Lenny the lead, Rachel the reviewer, and a viewer; those are the people authenticated to use our application. And here's a definition of the privileges each of those roles has, exactly the same as what we covered before. What's neat now is that you can adjust these things on the fly. For example, maybe I don't want my lead role to be able to adjust weights anymore, so I can uncheck this checkbox. And you can see I created a new role over here, also on the fly, saying I want a role that can make final decisions and make general comments, and that's all they can do. That's a really nice, handy feature to have in the application.

Okay, this is probably the most exciting part of all our latest enhancements: we've added a file browser. Like I said, up to this point we've basically stuck to riskmetric, but now we're adding the concept of a more manual package review, and we're starting with a file browser. We actually download the tarball, so you can explore the source contents of any package you want. Here I have a package, tidyCDISC. You can see all the authors, a great description, the license, the URL, the bug-reports URL; everything is listed here, and it's super easy to navigate, just as if you were a developer on the project. Scrolling through the files down here, I'm showcasing that if you want to read through some of the tests to see how robust they look, you can do that, make some decisions, and write down comments on how you feel their tests are written. It's a really handy feature, and we have lots of plans to expand it even more, which I'll share in a little bit as well.

Okay, so that's a recap of all the latest features. Lots of activity has been happening in the repo, so like I said, please open a GitHub issue if there's a change you'd personally like to see, or reach out to us on GitHub, on Slack, or in the chat if you want to get involved in this project; we can always use more hands, more developers, to keep the effort going.

Okay, I thought I'd take a break from the slides and switch over to the application to show you how it looks today with all these new features in there, and we'll review a package called prodlim. Oh, and if you want to follow along, we put a link in the chat; I just sent it to everyone. It's available right now, hosted on shinyapps.io, and it's a version of the application pre-populated with about 300 packages: the pharmaverse uploaded, the tidyverse uploaded, and I think about 250 other highly popular packages, the ones with the most downloads out of all of CRAN. The first screen you come to is our authentication screen, where you can see which version of the app you're working with; this is the latest one, 2.0.0.
Down here there are some instructions if you want to log in as a certain type of user: there's an admin user, a lead user, and a reviewer, each with a username, and the password is available right there. I'm going to log in as an admin user, and that redirects me to the application.

Depending on the last time you saw it, the application looks more or less the same. There's a big control panel on the left-hand side. You can select a package from the database that you've previously uploaded, and if you haven't uploaded anything yet, feel free to just type in the name of a package here. You can upload as many packages as you want, and this list is actually pulling all the packages from CRAN, so you have access to them all right there; you can type in anything you want. You also have the power to delete packages. Or, if you want to upload a large swath of packages, maybe 500 at once, you can browse for a CSV on your computer; it just has to be in a format where the package name and the version are given.

Oh yeah, before I start looking at prodlim: this is where you can adjust your automated decisions on the fly. For example, if you want medium risk to run from 0.3 to 0.64, and you actually do want low risk to be auto-categorized, you can add low risk in here. Or maybe you just want to do high risk; that's a possibility too. And like I said, you can change the colors to something else if you're not satisfied with them. But I'm going to leave those as-is for now.

I said I wanted to look up a package called prodlim. I've never used prodlim before; I just decided it would be a good example for us to look at. If I go over to the Build Report tab, it'll actually tell us what it does: it says it's a fast and user-friendly implementation of non-parametric estimators for censored event history analysis, Kaplan-Meier and things like that.

From Upload Packages you usually head over to Package Metrics, and this is where you see most of your riskmetric metrics. I intentionally chose this package because its risk score is rather high: you can see it's at 0.81, and in case you're not calibrated, 0.81 is a pretty high risk score. And you can see why. Looking at these metrics, it's missing vignettes, missing a NEWS file, doesn't have a URL to report bugs, and we can't find a website or source control, so it's missing quite a bit. About the only things it does have are a maintainer, dependencies, and a license. If I'm reviewing this package, putting myself in those shoes, I'd want to make a note of that, because I'll probably want it to show up in the final report. You can see there are some comments down here already, but I'll go ahead and add a fresh comment: "missing quite a bit of metric info; we do, however, have a license and dependencies on file," so we'll give them credit for that. And you can see the comment shows up directly beneath.

Heading over to Community Usage Metrics, we can see this package has existed for a long time, 15 years, and it looks like the latest release was pushed four months ago, so that's good news.
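[Editor's note: the downloads data behind this Community Usage view comes from the CRAN mirror logs. If you want to pull the same numbers yourself outside the app, here is a hedged sketch using the cranlogs package (assumed installed; the app's exact data source and aggregation may differ):]

```r
library(cranlogs)
library(dplyr)

# Daily CRAN downloads for prodlim, rolled up to downloads per month
cran_downloads("prodlim", from = "2008-01-01", to = Sys.Date()) %>%
  mutate(month = format(date, "%Y-%m")) %>%
  count(month, wt = count, name = "downloads")
```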
That tells me two things: this package is probably pretty mature, having existed for so long, and it's still in development, because a new release has been pushed in recent history. I can also see that this package is very, very popular: 1.3 million downloads in the last 12 months, which is pretty good news. And it has a good number of packages relying on it; I'd say 39 is pretty high. I wish I had 39 packages relying on the packages I build. But if you look here, even though it's existed for 15 years, it had essentially zero downloads on CRAN until about 2008. This is a fun little graphic, and we can zoom in on the time period we're interested in; it shows the number of downloads per month since the inception of the package. You can see it really got its first bump around 2017, when it hit 50,000 downloads in a month, and it's been trending upward ever since, to a peak in November of 2021 at 350,000 downloads. That's quite a bit. I'm not super worried about this recent downward trend, because it looks highly influenced by that 2021 spike; if I had to fit a line to the data without that spike, I'd say it's probably still trending slightly upward. Taking in all that information, I'll just make a few comments: "this package is really popular, at 1.3 million downloads in one year; it's still actively developed; almost 40 packages rely on it." I think that's about all I wanted to say from a community-usage perspective. The community usage looks really great, and I think it bodes well for the package.

Moving on, this is the new Package Dependencies page we added. It's nice because it gives us a firsthand look at the dependency footprint of a package. We did get the dependencies metric on the Package Metrics page, which said there are 10, but now we're zooming in to see which 10 we're talking about. It looks like three of those are actually just base R packages, and some of the others appear slightly riskier, so maybe we'd want to investigate the diagram package, for example. But in general it looks okay; it's not relying on too many packages. There are two packages we haven't uploaded yet, so if we wanted to, we could click these buttons to upload each one. And if you want to see the reverse dependencies, all the packages that rely on it, you can do that here. It looks like censored relies on it, and even parsnip relies on it, which I believe is a tidymodels package; that's probably where it got its boost in popularity, maybe when parsnip started using it, but I haven't confirmed that.

Okay, so that rounds out our metrics. I thought we could take a minute and explore the package via the Source Explorer. This is our file browser that literally downloads the tarball and untars it so you can browse the contents of the package. The first thing you'll notice is that this package's file tree is pretty bare bones. There's not much here; you have what you need to make a package, but there isn't a whole lot.
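[Editor's note: conceptually, what the Source Explorer does under the hood is similar to this sketch. The app's actual implementation may differ, but the idea is to fetch the CRAN source tarball and untar it for browsing:]

```r
# Download the source tarball for prodlim into a temp directory and untar it
dest <- tempdir()
dl <- download.packages("prodlim", destdir = dest, type = "source")
untar(dl[1, 2], exdir = file.path(dest, "src"))   # dl[1, 2] is the tarball path

# The untarred file tree the app lets you browse
list.files(file.path(dest, "src", "prodlim"), recursive = TRUE)
```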
The DESCRIPTION file is pretty bare too; it has just the bare minimum. It looks like there's just one author, and I don't see any other contributors, which may be a risk for your organization if you like to see more authors. Then it looks like there are a lot of R files; scrolling down a bit, there are probably 40 or 50 R files here. That tells me there are a lot of helper functions supporting the exported functions of this package. And likewise, I would expect to see a lot of tests if there were high test coverage, but it looks like there are only about three test files for this package. That doesn't bode well from a testing perspective, so maybe that's something I want to comment on: "test coverage appears to be lacking, with three test files against roughly 40 R functions in the R folder." I think that's worth noting, and that comment goes down there as well. So that's all we have to offer so far in terms of exploring.

Now we can head over to the Build Report tab. This is where we finalize what we actually want to include in the report before we send it on to the person making the package-inclusion decisions. You can see all of our maintenance metrics are included, along with our comments and our community usage metrics. We even have the downloads plot, which you can of course adjust to whatever time frame you want; if you just want to show the last two years, or the last one year, whatever is important to you, you can do that. And as I mentioned before, there's a metadata portion to this report: the version of the app, the version of riskmetric, the date and time the report was generated, and a quick recap of the metric weights. Here you can see we're only prioritizing vignettes slightly higher; for some reason vignettes are important to us, so we weighted them a little higher. Like before, I'll probably get rid of the report author and overall score. And if I want to make changes to this package summary, this is where you add the stuff that's really important to you, requirement number so-and-so and so on, all the pieces of information you need, and it'll show up down here in your report. Then I'll just download that as HTML; it takes a minute, and then it's ready to send. Just to give you a quick preview: that's what it looks like. This is what I'd send to the next person who needs to know about this package for a GxP package-inclusion request. I'll close that down, and basically my job is done for now for this package. If you want to review all the packages in the database, there's the database tab up here; we already went over this, but you can zip over to any package you want just by clicking these buttons.

Okay, I think that's all I really had to share. Oh, I guess real quickly, this is where your admin tools are located. You can add and edit users, and you can assign and adjust the roles and privileges for whoever is using the app.
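[Editor's note: circling back to that test-coverage observation from the source review: with the untarred source from the earlier sketch, you can count test files as a rough proxy, or compute real line coverage with the covr package. A hedged sketch, assuming covr is installed; note that package_coverage() actually runs the package's tests, so it needs the package's dependencies and can take a while:]

```r
src <- file.path(tempdir(), "src", "prodlim")  # path from the earlier untar sketch

# Rough proxy: how many R source files vs. test files?
length(list.files(file.path(src, "R"), pattern = "\\.[Rr]$"))
length(list.files(file.path(src, "tests"), pattern = "\\.[Rr]$", recursive = TRUE))

# Real measurement: line coverage as a percentage
covr::percent_coverage(covr::package_coverage(src))
```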
Back to the admin tools: you can also adjust your metric weights here, so if you really want to up the ante on vignettes, you can crank that weight way up, or adjust it down, whatever you want. And you can back up your database if you want to download a copy that's accessible to you as well.

Okay, with that said, that's the premise of the demo. Now I just want to talk about a few things that are coming soon, and then we'll have some time for Q&A. Eric mentioned this in part one of the mini-series, but riskscore is a fun new initiative that's kickstarting and complements the riskmetric package really well. It'll help lots of folks see how risk scores are trending over time. I think that will be helpful for the application, and I think it'll be helpful for the developers of riskmetric to see how scores are impacted as different versions of riskmetric come out. So I'm really excited to see it get off the ground. We're loosely planning on an October timeframe to make that repo more robust; right now it's in an experimental state, but it does have an initial data set with scores for all of CRAN, which is about 20,000 or so packages. So that's something exciting to look forward to. What you're seeing here is a distribution of risk scores for different groups of packages: the tidyverse has this pristine record of really low risk scores, and the pharmaverse is actually not too far behind, which looks good, though it does have a slight tail off to the right. Then there are a number of other groups based on popularity: the top 100 downloads, the top 100 to 500, and so on.

The other thing I'm really excited to announce is that we're going to continue building out our package explorer, thanks to our friends at GSK for sharing their code with us. They built a really cool Shiny app a couple of years ago that helps assess any exported function from a package. It displays three things in one easy-to-use interface: the test code (shown here, along with every place the function is called in a test file), the source code, and the documentation. That's the direction we're heading in the future, and I just wanted to tease it now because it's something exciting to look forward to.

Okay, so this is our dev team. The app wouldn't be where it is today without all these members, who each contributed in one way or another, so a huge thanks and shout-out to them and to the other contributors over the last several years. And that's all I have for you, so I'd be happy to answer any questions that popped up while I was going through the latest enhancements or the demo. I have some links here for you too; I can just paste these right here. I think I already shared the demo one, but if you want to reach out on GitHub, here's that link. And we put together a Google form for a survey, which basically helps us understand how you're using riskmetric or riskassessment, so we'd love to hear from you on that. And it looks like I need to include the https:// prefix.
And then of course pharmar.org is really helpful if you want to get involved or reach out and join an existing work stream; you can join riskmetric or riskassessment, or even other work streams that exist in the R Validation Hub. So with that, are there any questions? Feel free to take yourself off mute, or just put your question in the chat; either works great for me.

Hi, Aaron. I had one question on the demo you showed: how much of that is admin-specific? For the package assessment you were going through, who else, like reviewers or viewers, would be able to see all those features?

Yeah, that's a good question. As an admin, you obviously have all rights, all privileges, to look at everything you want. I'd direct your attention to this Roles and Privileges tab, under Admin Tools; it shows you exactly who has which rights. An admin can do everything. And these are just defaults we built into the app; you can take them as-is, but really you can customize these roles to do whatever you want. By default, a lead can adjust the metric weights and the automated decision making (so, for example, if you upload a package and the score is really high, the app automatically labels it a high-risk package), make final decisions, revert final decisions, add and delete packages, and make overall comments and general comments. A reviewer, similarly, can only add packages and make general comments. So that's the way we've set up the defaults for all the privileges that exist to date, but like I said, you can set it up however your organization wants.

Awesome. Thank you.

Yeah, you're welcome. And this is a good question in the chat: someone asked, "what is test coverage? Why not actually calculate code coverage?" I guess I was using those words interchangeably. Code coverage, or test coverage, is not going to render on this demo app, and that's because we're using what's called the CRAN remote source for each package. If you have the source for a given package, the source code itself, riskmetric will actually calculate coverage for you. But since this is just meant to be a lightweight application, we only grab information from the CRAN remote source, so there are a few metrics that can't be calculated because of that. I was actually talking with some of our developers this morning about this, and I think we'll start to tailor the app a little more: whenever a metric isn't available for a given source, we probably won't show it anymore; for example, this test coverage card here probably shouldn't show up. But the good news is that this "not found" isn't going to hurt your score. Only assessments that exist can impact your score, so test coverage won't affect it here.

Okay, cool. Thank you for following up on that.
I don't have a question per se; I have one thought, and maybe I can add it as an issue on the GitHub. In your Source Explorer, maybe add a loader, like the shinycssloaders spinner, because it took a second for it to load, and I would have thought something was wrong and tried to reload the page.

Okay, yeah, that's a good point. We have 300 packages uploaded in our database, and the app has to go find the tarball and start populating this area. We actually have some things in mind to improve the speed a little, but you're right: if there is a little wait, we should definitely show the user that everything's working.

Yeah, "just wait a minute and then we'll show you something cool." Avoid the rage refresh.

Yes, absolutely. Okay, thank you for that. If you want to open a GitHub issue, I'd love that, because I love it when other people are opening GitHub issues and not just me.

Sure, I'll put that in for you. Thank you.

Cool. Let's see: "What about R CMD check? Any plans to include results from it? In my opinion, R CMD check and code coverage are the most important metrics." Yes. I think R CMD check also has some problems with the CRAN remote; Eric, maybe you can confirm that for me.

Yeah, so the actual running of R CMD check happens for source packages, but not for remotes. That will be coming, I think, as we nest different source types. There is a metric for CRAN remotes where we scrape the results of CRAN's R CMD checks on their test servers. It is important; however, if you have errors or warnings when you submit to CRAN, you either don't make it onto CRAN or you get booted from CRAN if you don't resolve them quickly enough. So if you're using the CRAN remote object, the CRAN check results are not that informative, because they pass on the test systems 95% of the time. And second, R CMD check is a good one, but it starts to violate the initial spirit of riskmetric, which is assessing a package in an isolated environment, because R CMD check requires all the dependencies, possibly even Suggests. That pushes it out of that narrowly defined scope. But it is on our radar in terms of assessing cohorts or environments that have all your packages in one place. And we do run it, or can run it, for a source package, for sure.

Yeah, that's a good call. The only thing you'd have visibility into with a CRAN remote is just notes, right? Because you can have notes while you're on CRAN, but otherwise CRAN will have a problem with any warnings or errors.

Yeah, and they'll sometimes have problems even with notes, because CRAN is notoriously picky, we'll say, with their submission criteria.

Yes. Okay, that's a good question. And Eric alluded to this a little, but there may be a time in the near future when riskmetric will automatically be nesting or chaining the different source types.
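[Editor's note: for anyone curious, if you do have a source package locally (rather than a CRAN remote), the check can be run programmatically. A hedged sketch using the rcmdcheck package, assumed installed, with a placeholder path:]

```r
# Run R CMD check on a local source package; "path/to/prodlim" is a placeholder
res <- rcmdcheck::rcmdcheck("path/to/prodlim", args = "--no-manual")

res$errors     # errors found by the check
res$warnings   # warnings
res$notes      # notes (often the only thing visible in CRAN's own check results)
```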
And I forgot to mention this when I was reviewing test coverage: for now we may take that card out when it's not available, but in the future riskmetric may be able to pull it automatically for us, and that would give us a more complete view of what's happening in the maintenance-metrics landscape. That was also mentioned last week during part one of the series. Cool. All right, any more questions or comments or suggestions?

Is this tied to a specific version of R? Because sometimes packages will not be available for a given version. Or is it R-version agnostic?

Yeah, so you can run the application... there are instructions on our GitHub; let me just pull those up real quick and slide them over here. This is our GitHub page, and there are some installation instructions here; let me scroll down to the right spot. Okay, so we are using renv to control the environment we're working with for development purposes, and we would propose that's probably something you should also do within your organization: fix on a specific version of R. You can run on the latest version, but we'd highly recommend being intentional about which one you're using, and we've found renv is a good way to do that. The demo app is using a certain version, probably R 4.2.2 right now; if I had to guess, that's the one it's using. There's more context here in this installation section, so I'll just paste this in the chat too.

Thank you.

Okay, thank you. Any other questions, comments, suggestions? Okay, hearing none, I will give you six minutes back. Thank you so much for joining us today; the R Validation Hub certainly appreciates your interest. And like I said, if you have any more comments, reach out to us on GitHub, and we'd be glad to keep the conversation going. All right, thanks everyone. Have a good day. Thank you. Thank you.