Welcome everyone to this keynote session. We are very grateful to our sponsors, who make this conference possible. Before today's keynote, as is customary, we have a five-minute presentation by Roche, our sponsor of the day.

Hello, my name is Roy Federer and I work as a data scientist at Roche, and I'm happy to talk to you about how we leverage interactivity in our clinical trial work. These are actually exciting times to work as an R developer at Roche, because we are starting to use R more and more across our whole drug development process: exploratory data analysis for basic research, diagnostics, and clinical trials, including predictive modeling, as well as exploratory analyses to see which particular subgroups of patients could benefit more from a treatment. But more and more we are also using R to produce validated outputs that go to regulatory agencies, within validated R environments, and we are actively pursuing the goal of being able, in the future, to ship interactive solutions, apps, together with static outputs to the regulatory authorities, to streamline the process of evaluating and approving a drug or a treatment.

I'm here today to show you an example of one of these apps as we use them. This particular one we used throughout a couple of different COVID-19 studies that we had running or have just finished. It is a Shiny solution powered by an internal framework which we call NEST, which has some nice properties: it is fully modular, and it is quite data agnostic, so you can take an app scaffold and just swap in the data from different studies. It features some really nice live filtering, and this allows for flexible exploratory data analysis with reusable modules for plots and tables, including efficacy outputs.
Everything is parametrizable via templates; these templates take care of updating the inputs and setting the correct filters, which helps a lot with reproducibility. Once you have the outputs you desire, you can create a quick report, which is HTML-based, editable, and styleable with CSS.

Let me show you a quick video of an example app using some dummy data. This is the typical scaffold as you would see it, with different tabs hosting different modules: one for a table, one for a line plot, a box plot, but also efficacy outputs like your Kaplan-Meier curves and your time-to-event analyses. It features a filtering panel on the right-hand side, so you can filter the different datasets that are generated for clinical trials, with different types of data such as lab data and patient-level data. Via drop-down menus you can select which variable you want to filter on, and then interactively set the filters you want to use. On the left-hand side you have a template drop-down menu which is pre-populated with some templates that you might want to use, but you can also add your own template, or take one that has been generated by somebody else, copy-paste it, and apply it, and it will automatically update the filters and set the correct inputs.

Once you hit the run button, it generates the output you just specified, in this example a line plot. One particularity is that every time you hit the run button, it creates a new output in a new card, so you can generate multiple outputs and compare them side by side. In this example it also gives you an additional table with some descriptive statistics. Once you have the desired output, you can star it to favorite it because you want to keep it, and discard the ones that you don't. Then you can move to another tab, open a table module, create whatever table you want, and give it a star as well.
Next you switch over to the reporting module, where you can add your favorites to the report. We pull all of these favorited, starred outputs into this one module, where you can update titles, add some notes, and rearrange the order in which they will be generated. Then you click on Open Report and it renders a report in HTML which is still editable, so you can still change your notes or your titles, and this can then be saved as a PDF for sharing. All of these modules and reports also record the necessary information, for example which filters have been set and which inputs and methods have been used, so you really get some degree of reproducibility, or at least it is well documented what you have done.

That is it for our quick glimpse of how we use R at Roche. I thank you very much for your attention and wish you a good rest of the conference. Have a good day. Thank you.

So we thank Roche for sharing that presentation with us to show a little bit of how they're using R. And now it's my pleasure to introduce our keynote speaker, Jeroen Ooms. Jeroen is a researcher and software developer with rOpenSci at the University of California, Berkeley. He has written many CRAN packages and also maintains the compilers and build infrastructure for R on Windows. Today he's going to talk to us about the R-universe project. During his talk, please post messages on the keynote Ooms Slack channel, that's #key_ooms, or ask and vote on questions via the Zoom Q&A. Over to you, Jeroen.

Alright, thank you so much for having me. This talk will be about the R-universe project, which is a big, ambitious new project by rOpenSci that I've been working on for the past year or longer. In this talk I will discuss some of the different components and use cases of R-universe, but if you're going to take away one thing from this talk, it should be that R-universe is for everyone.
It is a place for publishing research and research software. It can be used by individuals and organizations. It's beneficial to novice R users, to students, researchers, and professional package developers. There's no gatekeeping in R-universe: if you have some R code or a markdown article that you think is worth sharing, regardless of what it is, you can sign up and start doing that today, no matter your background or expertise.

So a little bit about me. My name is Jeroen. I'm a staff research engineer and the infrastructure lead for rOpenSci. If you don't know rOpenSci, we are a research group based at UC Berkeley doing all sorts of things related to open science with R, and if you want to learn more about our group, you should really check out the talk by Stefanie Butland from earlier this week, which gives a great overview of our mission and the various activities of rOpenSci. As part of my work with rOpenSci, I've written quite a few CRAN packages. Most of these packages interface to interesting C and C++ libraries, exposing functionality to R that can be useful for researchers, for example interfaces to HTTP clients, cryptography, database drivers, image processing, and so on. And finally, as I already mentioned, I'm currently also the maintainer of the official installers, toolchains, and system libraries for R on Windows.

In this role of Windows maintainer, I've spent quite some time in the past few years trying to modernize the infrastructure for building R and the components and system libraries needed by R packages, collectively known as Rtools. This process is now entirely automated, transparent, and reproducible, such that everyone can see how it works and get involved. At the same time, I've really tried to redesign Rtools to reduce the friction for Windows users who want to develop R packages on their machine.
These days, if you install Rtools 4 and R, things should generally just work, which wasn't always the case. If you're interested in this work, you should check out the r-windows organization on GitHub.

So that was briefly a bit of background about me. Let's talk about R-universe. What is R-universe? R-universe is a sort of umbrella project under which we are experimenting with many ideas we have developed at rOpenSci over the past years to take open science with R to the next level. In essence, R-universe is an open platform for publishing and discovering research and research software written in R. The platform has many features and different components, but the core is that in R-universe every user and organization has a personal CRAN-like package repository, backed by a modern build system, which also allows you to publish articles and other R-based content. On top of that, R-universe has extensive dashboards, feeds, APIs, metrics, and so on to make all of this content more openly available, accessible, and discoverable.

So how does it work? In a nutshell, in R-universe every user and organization has a unique subdomain under r-universe.dev for publishing their content. This subdomain is mapped to your GitHub account, so it can be either an individual account or an organization account on GitHub. For example, I publish my personal content under jeroen.r-universe.dev, and things that I develop as part of rOpenSci go under ropensci.r-universe.dev. On this subdomain, for every user, you can find a CRAN-like repository with the R packages owned by this user or organization. This repository automatically includes binaries for Windows and macOS and the other things you expect from a CRAN-like repository, so installing from this repository works for users exactly the same as installing a package from CRAN.
You can also find R Markdown articles published by this user or organization, plus a lot of metadata, which I will show later on. If you open this domain in the browser, you get a dashboard for visually browsing and exploring the content in this universe, the packages and the articles, but all of the data is also available programmatically via HTTP APIs. And again, as I already said, everything is fully automated.

That might sound like a lot, so let's just look at an example. Here is the ggseg universe. ggseg is a suite of R packages developed by a research group at the University of Oslo for brain and cognition research. Most packages here are maintained by Athanasia Mowinckel, and currently we see that there are 14 packages in this universe; one of these packages is on CRAN and the others are not. Again, the owner of this universe is the ggseg GitHub account, which is an organization account, and hence the URL of the universe is ggseg.r-universe.dev. Through this GitHub account we can also show some information about this user: that's what you see on the right, the profile picture and information from the ggseg group that we take from GitHub.

The R-universe dashboard that you get when you go to this subdomain in the browser has a few tabs through which you can browse the different sorts of content in the universe, so I'll walk through that a little bit. By the way, as we keep developing this, if you're watching this in the future there might be additional tabs or they may have different names, but the idea will remain the same. First, let's look at the most core thing, which is the packages. The builds tab is the first tab, and it shows an overview of recently updated packages in this universe, with the version, the maintainer, and the date of the most recent commit of each package. It also shows a green badge when the package is available from CRAN.
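Because each universe is a regular CRAN-style repository, the package listing shown in this tab can also be retrieved with base R alone. A small sketch, using the ggseg universe from the example (any universe URL works the same way):

```r
# Query the ggseg universe like any CRAN-style repository (base R only).
pkgs <- available.packages(repos = "https://ggseg.r-universe.dev")

# Package names and versions currently served by this universe.
pkgs[, c("Package", "Version")]
```

This is the same mechanism CRAN mirrors use, which is why no extra client tooling is needed.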
On top of the page we show some example code for users that explains how to install a package from this universe in R, so you can directly copy-paste that code into your R session and start using packages from this universe. Just like CRAN, we automatically build binaries for Windows and macOS for the current and previous release of R. That is the column with the Windows and macOS icons, which shows whether a binary package is available for Windows and Mac, which in this case is true for each of the packages. The color of the icon shows whether the package also passes R CMD check: if the icon is green, it means the package was successfully built and deployed and it also passes R CMD check; if the icon is gray, as in this case, it means the package was successfully built and deployed to R-universe, but there was some error or warning in R CMD check. Users can still install the package, but if you are the maintainer, you might want to look at what's going on.

As I said, the top of the page shows example code for installing packages from this universe. This is a CRAN-like repository, so it simply uses install.packages; the easiest way is to set the repos argument in R to both your universe and CRAN. What happens with this code is that the package and its dependencies will be installed from the ggseg universe, but for dependencies of the package that are not available from ggseg, it will fall back on CRAN. Note again that all of this is done with the base R package manager, exactly as when the user installs from CRAN, and on Windows and Mac we have these binary packages, so the user does not need any complicated tools or libraries to build things from source.

The second tab of the dashboard is called packages, and it basically gives you an overview of the contents of the packages that are in the universe.
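The install snippet described above looks roughly like this; the package name is taken from the ggseg example and is illustrative, and any other package from the universe installs the same way:

```r
# Install a package from the ggseg universe, falling back on CRAN
# for any dependencies that the universe itself does not provide.
install.packages("ggseg", repos = c(
  "https://ggseg.r-universe.dev",  # the universe's CRAN-like repository
  "https://cloud.r-project.org"    # CRAN, for remaining dependencies
))
```

Because the repos argument is a vector, base R searches the universe first and CRAN second, which is exactly the fallback behavior described in the talk.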
This is mostly information taken from the DESCRIPTION files, plus some additional metadata that we collect when a package is deployed. For example, it includes the title and the description for all of the packages in the universe, who the maintainer is, and the most recent commit. So this is a place where you can quickly browse what is available from this universe and what it is, and if the package has a logo, it is also shown here in the dashboard. There are a few standard places where you can include a logo in your package, and we use the same rules as pkgdown to find it, so if the logo works for pkgdown, it will show up here as well. This tab shows some of the information from our database for these packages, but there's really much more available through the API. This is basically just the core information that's interesting to inspect visually; as I will show later, this dashboard actually calls exactly the same API that is public, so we don't cheat, and you can use these APIs to extract many more pieces of information about the packages that are deployed in the universe.
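As a hedged sketch of that programmatic access, assuming the jsonlite package is installed; the endpoint path here is illustrative, and the API tab of each dashboard documents the currently supported endpoints:

```r
library(jsonlite)

# Fetch package metadata from a universe via its HTTP API.
# The "/packages" path is an illustrative example endpoint.
info <- fromJSON("https://ggseg.r-universe.dev/packages")

# Aggregate endpoints use NDJSON (one JSON record per line), which
# jsonlite reads with stream_in() on a connection instead of fromJSON():
# stats <- stream_in(url("https://ggseg.r-universe.dev/<ndjson-endpoint>"))
```

The same endpoints can of course be read from any language or tool that speaks HTTP and JSON, such as curl.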
All right, that was everything about packages; let's talk a little bit about articles. Besides publishing packages, R-universe is specifically intended as a place for publishing articles, and this is articles in the broad sense: these can be vignettes with documentation, but they may also be research, a tutorial, or anything that you think is worth sharing. If you go to the third tab, you can see a list of all of the articles available for this user or organization. These are automatically rendered using the vignette system in R, so the idea is that you include these articles in an R package and they automatically get built, rendered, and shown here in the dashboard. However, articles don't have to be limited to software documentation; I will emphasize this a bit later on, but you can really publish any sort of content in your R Markdown article, be it a research paper, a tutorial, or a homework assignment, whatever you want to publish in your universe.

You can easily browse these articles within the dashboard by clicking on them in the list I just showed, and this shows the article within the context of the dashboard, so it's easy to browse through the different articles in a universe, or even across different universes. All of these articles are rendered in a consistent HTML theme that is the same across all universes, so hopefully that makes it more pleasant to browse than when every article uses its own theme, as you see on CRAN. Some of the articles also show direct links to the input markdown file and the output file, so you can link to these resources directly if you just want to share the HTML or markdown without people having to go through the dashboard.

API access. So, an important feature of R-universe is that we provide programmatic access to all of
the content and the metadata in the system. I think it's an important part of open science that content is not locked away in some platform but made accessible and available in as many generic ways as possible. All of the information I've shown so far in the dashboard can be retrieved through APIs, and you can retrieve much more: aggregate statistics and other metadata about the repositories and the system. The API tab shows a few example endpoints that you can explore if you find that interesting. This part is still a bit under development, although the endpoints I've documented here are pretty stable, so you can play with them in your browser or in R.

For example, the /packages endpoint on this slide provides all of the information and data that we have for a given version of a given package. This URL, /packages/ggseg/ followed by the version number, lists all of the artifacts and data that we have for this version of this package; in this case it lists five artifacts in a JSON list: one source package, two binary packages for Windows, and two binary packages for macOS. The API call shows all of the data we have about these package artifacts, and of course you can read all of this in R, or in any other language or platform that understands HTTP and JSON, so I'll leave it up to you to run this code. One thing to note is that many endpoints in R-universe use the NDJSON format, a streaming version of JSON, which is needed to optimize performance. Therefore, if you want to read these aggregate endpoints in R, mostly under /stats, you need to use the stream_in function in jsonlite, or the equivalent function for reading NDJSON in your favorite JSON library.

Right, so that was a very brief tour of what we have so far. So let's
take one step back: who is R-universe intended for, and why are we building this? I think there are many different use cases for having a personal R publishing space for your own work or for your organization or research group. The system is built around the concept of packages as the central container and deployment format, but it is certainly not only for professional R package developers. In your universe you can publish whatever you want; there's no policy, no policing of what is allowed or not, and certainly no archiving happening without your approval. So yes, as I'll show, you can use it to publish dev versions of CRAN packages, for sure, but you can also think about publishing more experimental projects, research material, or even homework assignments.

Let's have a look at some examples of the early adopters, users and organizations that are currently using the system. One example is that you can simply use it as a sort of personal package portfolio. Suppose you've written a bunch of R packages, some of which may be on CRAN while others live in various places on Git; you can set up a universe to showcase all of your work. It will show everyone the things you've developed, whether they're in a good state, and what you're currently working on, and if you publish articles, you can use it to collect the things you've written about your research.

Another use case is to use a universe as an outlet for your research group. Here's an example from a research group based at Imperial College London. They develop a suite of packages that is mostly maintained by Rich FitzJohn, and many of these packages may not be suitable for CRAN, or it's just too much of a pain to release them on CRAN, so they release all of their source code on GitHub. By creating a universe, you can increase the exposure of this work and make it more accessible and discoverable for users, better than when it's only available in
source form somewhere on a GitHub account that people may not be aware of.

Another use case for R-universe is software curation. As you may know, the rOpenSci organization maintains a large suite of peer-reviewed and staff-maintained R packages, which we try to keep up to a standard suitable for use in scientific research, and this is actually where the idea of R-universe originates. Through the rOpenSci universe, it becomes easy for users to see which packages are available, and it's easy for them to install these, regardless of whether or not they are on CRAN at that point. But it also helps us as an organization as a monitoring tool: from the dashboard and the APIs we can keep an eye on the development activity of the packages that live in our organization, and we can quickly spot packages that are failing tests or don't seem to be actively maintained anymore and might require our attention.

Another use case of R-universe is simply to publish the development version of CRAN packages. Sometimes users want to test a version of a package that is not yet on CRAN, to try some new feature or a bug fix. Installing packages from source from GitHub can be quite painful, because many of these packages contain C++ code or require system libraries and so on. By creating a universe, everything gets automatically built, and it's very easy for users to install the development version of a package, just as easy as installing the CRAN version, without requiring any special tools or knowledge.

And finally, another use case is for organizations that develop interdependent sets of R packages in a given domain, such as, for example, the r-spatial organization, who develop a set of R packages that are all related to geospatial analysis. In their universe you can quickly see what they're currently working on and try the latest versions of these packages, and
a benefit for the developers of these packages is that they automatically get built and tested against the other dev versions of the packages in the same universe. So if one of the maintainers makes a change that unknowingly breaks some of the other packages that depend on it, this quickly becomes apparent in the dashboard, because the other packages are built and checked against the versions of their dependencies that live in this same universe. You can quickly see whether the package being developed passes checks in the context of the current development versions of its dependencies, without having to do a manual revdep check before a release and then figuring out that you broke something a long time ago and having to go back.

These are just some of the examples; there are many more. We have about 300 universes right now, and you can browse our web page to see some of the cool stuff that's in there. But I want to highlight one more use case, which I'm personally most excited about and which we haven't seen much of yet. We are used to thinking of R packages mostly as a way of sharing reusable code; I mean, that is what CRAN is for, sharing software that is useful to other people. But many researchers have argued that an R package is actually a fantastic generic container format for research material: R packages provide a standard format for bundling code, data, articles, and metadata such as the author, license, dependencies, and so on. So I think that if, for example, you have a publication or a homework assignment consisting of an R Markdown article or a Sweave file, together with some supporting code and data, and you have that set up as a reproducible project, then from there it is actually a small step to put it into an R package and publish that to your personal universe, so that you get a live, automatically rendered version of that paper or that assignment
on your R-universe. This idea has been raised in the past by various people, but it never took off, I think because, well, sure, you could put your research into an R package, but then what? It can't go on CRAN, because it's not software, so what would be the point? But I hope that maybe now, when you have your personal CRAN-like repository and your vignettes are beautifully rendered, this idea can take off, and it becomes really rewarding to turn your research paper into an R package that is entirely, automatically reproducible, such that the vignette contains your paper or article, and through the supporting code in the package, the data in the package, and the dependencies declared in your package file, we get a place for publishing fully automated, reproducible research.

So that was an overview of the different use cases we envision. So far we've mostly looked at single universes, but we want visitors of the website to be able to quickly discover and browse content from various universes. One way is through the global feeds: on the R-universe homepage we show a global feed of all package commits across all universes, which is fun to look at to see what people are working on across the entire R ecosystem. Similarly, we have a feed for articles, so you can see which articles have recently been created or updated across the entire ecosystem. This actually works very well; it's very cool. I've discovered several cool new R packages and new features just by looking at what people are writing in their articles. And finally, there's a maintainer view on the website, which shows an aggregate of all the people maintaining the various packages across all of the different universes. For example, if you hover over the name of an author, you may see that this author has published or is maintaining packages in several universes, and then you can cross-link, you click on that, and
see the different projects they're working on in the different universes.

So maybe by now you're wondering: how do I set up my own universe? I promise it's very simple; I will briefly show how it works in the upcoming minutes, but the best reference for this is a tech note that we wrote recently on the rOpenSci blog, which you can check out on ropensci.org. Basically, all we need from you is a list of the packages that you want to include in your universe: you list the name of each package and the URL, the Git URL, from which we can git clone that package. The only requirement is that this is a public Git URL, because from here our build service will literally git clone that URL, and that's basically everything you need to provide. You publish that in a repository called universe in the GitHub user or organization account for which you want to create the universe; for example, here is the one by Maëlle.

The next step is to activate this by installing the GitHub app on the account for which you want to create the universe. The GitHub app needs very few permissions; it basically only asks to write commit status updates, which means the system is able to post a commit status, that green check mark or red cross behind your commit, on the R package repositories, as we will see in a second. Once you've done this, the system automatically creates a monorepo for you. Again, there is another blog post which describes this process extensively, so you don't have to remember it, but basically the monorepo is a Git repository in which each of your packages is a submodule, and this is the canonical source for your universe: it lets us see what is currently in your universe, but also keep a full history of what was in it at any given point in time. And if you go to the Actions tab of this monorepo on GitHub, which, by the way, lives under the r-universe organization with the name of your
account, you can see where all the building happens, so if something is not working you can check that out. Once you've set it up, after a while, usually no more than an hour, packages that have completed building will start appearing on your dashboard. And if you gave the GitHub app permission to write commit statuses, as I said earlier, then every time a package is successfully deployed to your R-universe, it becomes visible in the R package repository as a commit status, as you see here: a link with a green check mark, meaning that this commit of this package was successfully deployed to R-universe.

So this is basically the current state; this is what we have right now. We have just started this and we really want to build it out; we have many ideas for features we want to add. One thing that we are working on, and that I want to highlight, is that we want to integrate various metrics about packages that may be good indicators of the quality, the health, and the impact of research projects. If you want to understand the motivation behind this better, you can watch the recording of my talk at rstudio::global from earlier this year; the link is also on our home page. In that talk I try to explain why we believe it is important for organizations and potential users of software to get a sense of the health and the quality of the software they build on, and I talk a bit about the various types of indicators you could look at for an open source project, distinguishing technical, social, and scientific indicators that may say something about the health, the role, and the impact of a project. We want to integrate such indicators into R-universe, so that the dashboard and the API show information about these software and research projects that gives you a sense of whether the project is still active, how it is maintained, and whether it is used by other researchers. So,
for example, as some low-hanging fruit, we want to show the download statistics and the reverse dependencies of packages, which are obviously an indicator of how much something is used; in particular, if you look at these numbers over time, you may get a sense of the life cycle of a package, whether it is up-and-coming, on its way out, or an established package in the ecosystem. A much more challenging and important aspect of this project is to find uses and citations of research software in scientific code and publications. We think it is very important for research software to be able to say where the software is used in actual scientific research. For this, we are collaborating with a team of experts to build machine learning models that recognize software mentions in the literature. They are now running experiments on the citations from a large corpus of 20 or 30 million open access articles, trying to extract citations automatically, especially for software that we haven't seen before, and we hope to announce some interesting progress in this area later this year.

Right, so that was most of what I had to say; a small recap. What is R-universe and why is it useful? It's an open platform for publishing research, research software, and other R-based content. The system automatically tracks your Git repositories and then builds the binaries, the articles, and so on. Users can easily install these things just like they can from CRAN; there are no complex tools needed like Rtools or remotes, and installation for users works just as it would from CRAN. The dashboards show the R packages and the articles you have published, and they even work when the actual source content is spread across different GitHub organizations or even different Git servers. Also, R-universe can be used as a zero-configuration continuous integration system. As I showed,
if you give the app permission, it will show a check mark when the package was successfully deployed. So if you want continuous integration (or actually continuous deployment, in this case) for your R packages, you can just install that app; there is no other maintenance or configuration on your side, and it will run the standard checks on a different platform so you can quickly spot if there is an issue. r-universe is very easy to set up: we have tried to solve all of the challenges on our server side, so really all you need to do is provide a list of the git repositories that you want to include. We hope it becomes a space where you can publish your content, your packages and your articles, and maybe other things later. These packages can be on GitHub and they can also be on CRAN; it doesn't have to be exclusive. And again, I want to emphasize that r-universe can be used by anyone: whether you are just starting to use R or you are a veteran package developer, you can set up your own universe and start publishing your work.

Some references if you want to learn more: I recommend you check out the rOpenSci website, where we have a landing page for the project. Documentation for r-universe is a little bit sparse right now, but the best references are a few tech notes that we have published on the blog. For example, we have a post that is entirely dedicated to explaining how the build system works, and one in which I explain the idea of publishing articles based on the vignette system, and also the idea of using this system not just to publish software documentation but to start publishing research compendia and other research material that would not be suitable on CRAN but definitely fits into an R package. And then there is the most recent blog post, which details how to go about setting up your own r-universe and some of the tricks
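As an aside on the registry just mentioned: it is a plain list of git repositories. A minimal sketch of what such a registry file might look like (package names and URLs here are hypothetical; the setup blog post is the authoritative reference for the exact format):

```json
[
  {
    "package": "somepkg",
    "url": "https://github.com/jane/somepkg"
  },
  {
    "package": "otherpkg",
    "url": "https://gitlab.com/jane/otherpkg"
  }
]
```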
if you want finer control. Yeah, that was everything I had to say. Thank you so much for listening.

Thanks so much, Jeroen. We've got ten minutes for questions, and we have lots of questions in the Q&A and also on Slack, so we probably won't be able to get to them all, but we'll see what we can do. To start with: how do you guard against malware and so on in r-universe?

We don't. The idea is that this is your personal publishing space, so I expect that when users install something from a particular universe, it is just as if they would install software directly from GitHub using install_github or something like that: you install from a source that you trust. In a sense it is similar to when users install from GitHub: you need to trust the author or the organization from which you are installing.

Okay. Does r-universe build macOS binary packages the same way CRAN does? In other words, can we use r-universe like win-builder, to predict whether binaries for macOS will build successfully on CRAN?

Mostly yes. It is unclear exactly how CRAN builds things, because their build platform is not entirely open, but in most cases I would say that if the package builds on r-universe it will probably work on CRAN, and vice versa. You know, it is impossible to exactly replicate the configuration of the Mac build server, because we are building everything on a public CI service on GitHub, so obviously we don't know all of the custom settings and tweaks that CRAN may have configured on their build server.

Okay, makes sense. Perhaps while we are on building: are there plans to also build Linux binaries in the future, e.g.
for selected distributions, similar to RSPM?

Maybe, but not right now, because it is pretty resource-intensive to orchestrate. RStudio is doing that for CRAN, and they have a lot of resources to build those binaries, but it is very expensive: if you want to build Linux binaries, you need separate binaries for every version of R for every version of every operating system. You literally need a binary of R 4.0 for Ubuntu 20, and then another binary of R 4.0 for Ubuntu 18, and so forth. You need to build a lot of things, so for now that is out of scope; the benefit does not weigh up against what it would cost us to provide these things.

Which version on GitHub gets built: is it the latest commit on master?

You can specify that; this is detailed in the blog post about how to set up your own universe. By default it will track the HEAD branch, which is the default git branch, either master or main. But you can specify in your registry file which branch you want to track. For example, if you have a special stable branch where you only commit things that you want to share, things that are considered stable, you can track that in your registry file. There is even a special value that you can enter for your branch, called "*release", which is syntax that we have taken from remotes. On GitHub you can tag releases of your package, and if you specify in your registry file that you want to track the "*release" branch, then at any point we look up the latest tagged release on GitHub, and that is what will be deployed in your universe. So the control is really with the user.

Can we build something like this on private clouds, or does it require GitHub?

It does not require GitHub, but private is pretty complicated. I have also written about this in
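Coming back to the branch tracking described a moment ago: a sketch of how a registry entry might pin a branch or track the latest GitHub release via "*release" (package names and URLs are hypothetical; the setup blog post documents the exact schema):

```json
[
  {
    "package": "stablepkg",
    "url": "https://github.com/jane/stablepkg",
    "branch": "stable"
  },
  {
    "package": "releasedpkg",
    "url": "https://github.com/jane/releasedpkg",
    "branch": "*release"
  }
]
```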
the last tech note. The source code of the packages may be hosted on any git server: on GitHub, but also on GitLab, or on a private git server run by your university or your institution. The only thing that matters is that it is public and that we can git clone the source code of your package from that URL. Making this private is pretty complicated, and I don't think we are very interested in that right now, because we are about open science. If we made this private, it would add layers of authentication complexity at every level: we would have to authenticate when pulling the source code, and then I imagine you would expect users to authenticate when they want to install the package, which I think R's install.packages does not even support. It is really not something we are investing in at this point.

If I install something from r-universe, is renv able to know that it was installed from there when I create a snapshot?

Yes. When you install the package, the DESCRIPTION file of the package is installed along with it, and it contains several fields that our build server stamps into the DESCRIPTION file. These show which universe repository the package was installed from. They also show the upstream URL of the R package, the git URL it was built from, the branch it was installed from (by default master or main, but it shows if you were installing from something else), and the SHA hash of the exact version of the package that was built. So if you look at the DESCRIPTION file of the installed package, you will be able to see exactly which universe it came from and also exactly which commit of the R package it was built from.

Okay. Can a package be part of multiple
universes?

Yes, absolutely; we see that happening a lot. In the universe registry file you just list the git URLs of the packages you want to include, and of course different people may all include the same package. For example, a package of mine may be part of the rOpenSci universe and also part of my personal universe; that is fine.

Can r-universe be used to share forks of CRAN packages, say?

You can, but it is tricky; of course that is dangerous. You can share a fork of a package, and you can even share a completely different package with the same name, but that is not something we would recommend, because if a user installs it, it overwrites the other package, and maybe other packages on their system were depending on that package, so things break. In the dashboard we show indicators, badges, for whether the package also exists on CRAN and whether it has the same source as the package on CRAN. We actually show a red flag if the package has the same name as a package on CRAN but the URL does not match what is shown on CRAN. Thereby we try to warn users and developers: watch out, this package has the same name as a CRAN package, but it may not be from the same source.

Thanks so much for rattling through those questions. There are still a few more left, and hopefully you will be able to follow up with those afterwards on Slack. Up next we have a 15-minute break, and after that three parallel sessions: statistics and bioinformatics, R in production, and statistical modelling and data analysis. So join us there; thanks very much, everyone.