So hello everyone, and welcome to the 16th Argos community call. Today we have colleagues from RSpace with us, who will be presenting and discussing integrations with Argos. As always, we've prepared the notes document; during the presentations, feel free to add your questions there so we can address them, or raise your hands if you want to ask something. And without further delay, I'll pass the floor to Ellie, service manager, to start us off. Thank you.

Thank you so much, and thank you also, Rory and Rob, for being here. Everyone, this is our first community call of 2024, and we're starting strong with some integrations and collaborations, showing something we achieved last year with RSpace. You will hear more about RSpace from our colleagues Rory McLean and Rob Day, who work there: how we integrated, how they used Argos to find DMPs, bring them inside RSpace, and add them to their collections alongside other data, and then how you can publish all of this while maintaining the PID of the DMP and getting PIDs for the whole collection of data that you have. Thank you very much for joining. I'm looking forward to this presentation and demo. We'll start with your presentation, Rory, so I'll stop talking now, and we can have a nice discussion after.

Okay, thanks very much, Ellie, and thank you very much for inviting us. It was really interesting working with you and others on the Argos team last year on the integration, and we learned a lot. It's really exciting to have an opportunity to share what we've done with the Argos community, so thank you very much. I'm Rory, and Rob is here as well. Next slide, please.
So today, my presentation will start us off, and then Rob will give you a brief demo of RSpace, mainly focusing on the Argos integration. The presentation has two parts, because some of you may be familiar with RSpace while others won't be: the first part is an overview of RSpace, and the second part briefly describes the Argos integration, which Rob will then get into in more detail. Next slide, please.

So the problem we're addressing, and when I say "we" I mean Argos as well as RSpace, is that overall there's a lack of data-centric tools, tools which are focused on data. And although there are more and more data-centric tools, a perhaps even more fundamental problem is the lack of interoperability between tools, which leads to siloed data, impedes the FAIR principles, and makes life difficult for researchers. Next slide, please.

Over the past ten years or so at RSpace we've been doing our best to make a contribution to addressing this problem. RSpace is a digital research platform designed to interoperate with and connect to research infrastructure. If you look at this graphic (let's ignore the purple items at the top for now; we'll come back to those), RSpace itself consists of four components: first, an electronic lab notebook, which Rob will show you briefly; second, a sample management system, so you can manage your experiments in the electronic lab notebook and your physical samples in the inventory system, and the two are connected; third, the ecosystem of tool integrations you see depicted here, which I'll go into in a minute; and fourth, a set of APIs which allow for extensibility and further integrations. So, starting on the left here.
If you're documenting your research, you don't want all of your data collected in something like RSpace, because it's probably already stored somewhere else in a structured and perhaps more convenient fashion, and especially if it's big data you don't want to be moving it around. But it's important, as you document your research, whether you're a biologist, a chemist, or someone doing ecology or environmental studies, to be able to associate that externally stored data with the write-up of your experiments in RSpace, so we make it possible to link to file stores. We also have an integration with iRODS, which some of you may be familiar with; iRODS is a kind of virtual file management system which can crawl file stores. The point of the iRODS integration is that linking to external data is great, but if the file location moves, the link will often break; iRODS tracks the locations of the linked files, so it maintains the integrity of the links. We also integrate with file sharing apps, some commercial and some open source: things like Google Drive and Dropbox, as well as ownCloud and Nextcloud. Then we have integrations with specialist tools to support particular workflows. PyRAT is an animal colony management system, so if you use animals in your experiments you can link to that data. Clustermarket is an equipment scheduling tool, so you can link to data relating to the equipment you used in an experiment. An exciting new one coming up is FAIMS, an offline field data capture tool, so you can capture data, say, when you're doing research in a remote area where there's no connectivity (or, for that matter, in a not-so-remote area with no connectivity).
You can capture your data in a structured way, whether it's, say, oceanography or agricultural field research, and then seamlessly pass it using common templates from FAIMS into RSpace when you're back in the lab. We also have an integration with OMERO, a widely used open source microscopy tool, and other integrations as well, including with protocols.io. So those are the integrations with other data sources. Now, the purple items at the top are really the focus of our talk today: we integrate with data management planning tools and with repositories. Repositories are really a starting point, because making data FAIR means getting it into a repository where it can be discovered, queried, and reused, enhancing reproducibility. You can see from the graphic that we have integrations with four of the main repositories, including Zenodo, and that one actually came about as part of the project we did with Argos, which we'll be talking about more. We also integrate with data management planning tools, and today the focus is on Argos. Next slide, please.

So our mission, as is probably implicit from the previous discussion: we feel that our mission at Research Space, the company that produces RSpace, is to enable streamlined passage of data and associated metadata throughout the research lifecycle. I really haven't talked much about metadata, even though it's one of our major focuses at the moment, but I think Rob may touch on it in his demonstration. Next slide, please.

So what does this mean in practice? We deploy at institutions, at universities and research institutes; our users, if you will, are the chemists, the biologists, people doing environmental studies, ecology, medicine, those kinds of things.
But the customer, the engagement partner, is the research institution, typically either a university or a research institute, like the Leibniz or Helmholtz institutes in Germany, for example. And again, our mission is to facilitate the easy passage of data throughout the research lifecycle. So here's one depiction of the research lifecycle, a graphic everyone has seen. This is just an example of how RSpace actually interfaces with other tools and other research infrastructure used in a large research institution. The details may differ, but the categories of things are likely to be the same, and this shows how RSpace is optimized and designed to interface with the other parts of the research infrastructure deployed at a large research institution. Next slide, please.

Currently our deployments are at research institutions, but as everyone can see, there's a movement toward multi-institutional, national, and EOSC-level research infrastructure, and we're now actively exploring and beginning to get involved with integrating RSpace with research infrastructure at the national and EOSC levels. We have a proposal in, with about 23 others, for RSpace to become one of the core components of what's proposed for a Norwegian research commons. The process is similar to the one that happens at a university: plan, collect, share, and reuse/publish. I'm not proposing to go into a lot of detail, but you can see that RSpace can play a role in a broader infrastructure which is not just based at a single institution. Next slide, please.

Another example of an EOSC-level infrastructure would be the EUDAT Collaborative Data Infrastructure, which possibly some of you are familiar with.
Again, the research lifecycle here is quite similar to what it is at a university; it's the same research lifecycle, but the tools are provided by the infrastructure rather than the institution for people to use. If you look at the stages, you've got data management, data deposit, discovery, identifiers: quite similar. And again, the notion is that RSpace would act as a facilitator of flows of data throughout these various other resources. Next slide, please.

So now let me come to the second part, the main part of the discussion today: the RSpace-Argos integration, about which we're very excited. The idea, as Ellie so eloquently phrased it, was to work together to try to bridge the planning, active research, and post-research stages of the cycle, and to capture the evolution of research outputs and practices in published data management plans. Next slide, please.

This is a bit of a preview, if you will, of what Rob is going to show you in more detail. I'll show you some screenshots from RSpace which hopefully give you some idea of what RSpace is like, and then it will become a lot clearer when Rob does his presentation. The first point to note is that this is an institutional solution, so it's possible to deliver this to your researchers at an institutional level. I don't have a screenshot here, but the first stage in enabling RSpace and Argos to be used by your researchers is that an RSpace admin enables and configures the Argos integration on the particular institutional RSpace server. Next slide, please.

The workflow is fairly simple; these are actual screenshots from RSpace. Step one is to import a DMP from Argos into RSpace, and you can see here a simple list. Next slide, please.
When you're doing the import, you can also take advantage of Argos's powerful search parameters. You can search by a number of filters: here we're searching in Argos by label, grant, funder, and collaborators. Next slide, please. You then complete the import by selecting whichever DMP you'd like to import. Next slide, please.

Again, Rob is going to show this in more detail, but we have a special place, the RSpace Gallery, which is where your files are stored in RSpace. As you can see highlighted on the left, the Gallery is divided into a number of sections, and there is a special section for storing DMPs. So once you import your DMP, it will be stored in the RSpace Gallery. Next slide, please.

I haven't actually included screenshots for this, but you then have the ability, and this is important because it really helps to facilitate the mission described earlier, to associate your DMP with the actual documents that are relevant to it. So you have a DMP, you carry out the research in a project and document it in RSpace, and you can associate that DMP with the research you carried out. You do that by opening or creating a document; you simply have the option to insert from the RSpace Gallery, and then you choose your DMP. A link to the DMP is placed in that document, so the DMP link becomes part of that document. Next slide, please.

There's also a nice audit, accountability, and searchability feature: the information page for the DMP will show you all the documents to which that DMP has been linked. Perhaps you linked that DMP to several documents because they were all relevant to it; that information will be available to you, included with the DMP itself. Next slide, please.
And then finally, a critical part of the workflow: as I mentioned earlier, we did an integration with Zenodo in conjunction with the integration with Argos. At some point, in order to comply with the FAIR principles, the researcher will presumably want to export the data in a particular project, the datasets, to a repository like Zenodo, and you can associate that export with the data management plan that is part of that project. Next slide, please.

Finally, just before I pass on to Rob: what you're going to see today is the first stage of the integration, but we have some ideas for future enhancements which we'll be working on. These include, first, an indication of when a DMP has changed, to make it a more automatically updating, living record, along with a mechanism for updating the version of the DMP in RSpace. Second, automatic notification to Argos when a deposit within an Argos DMP is made from RSpace to a repository; again, a nice audit function. Then also some things within RSpace itself: the sharing of imported DMPs within a group in RSpace, so that a DMP becomes a shared group resource rather than something most relevant only to the individual who imported it. And then something we're really interested in having a broader discussion about with people involved in data curation, like data stewards: we're planning a data curator role in RSpace. There seem to be lots of opportunities to have data curation baked into RSpace, so as part of that overall plan, we're thinking about a dashboard for monitoring DMP usage during active research across a whole organization; again, an organizational feature. And that is my last slide.
So thank you for listening, and I will pass it over to Rob.

You're muted, Rob. Yeah, there we go.

Hello everyone. I'm based in Columbus, Ohio; I'm actually from the UK originally, but I came to the US to do my graduate work and ended up settling in the United States. I'm going to very briefly give you an introduction to RSpace and show you how this integration works. I'm going to turn off my video to maximize the bandwidth available for audio and screen share, but I wanted to say hi first so you know I'm a real human being and not an AI or anything like that. So I'll turn off my video and share my entire desktop. Okay, can everybody see my Chrome browser moving left to right now? Yes? Fantastic, thank you.

For this demonstration we're using the sandbox version of Argos, which already has some demonstration DMPs installed in it. And here is the sandbox version of Zenodo; I'm showing you this initially to demonstrate that my account here is empty, so there are no data deposits in this account currently. I just deleted them all earlier, so we're starting with a nice clean palette, and you can see there's nothing in here.

Okay, so this is RSpace; I'm already logged in. When you log into RSpace you're brought to an area called the Workspace. This is where you create RSpace documents, which would typically be organized into folders or notebooks, and these are designed to tell the story of your research. RSpace really is, in many ways, at least three products in one. In the Workspace area you can take detailed notes and create rich content describing what you've done in the lab.
The Gallery section we use like the hard drive of RSpace: this is where you can keep any and all files, in any format, from any source, and you can organize them hierarchically into folders if you want to. So the Gallery sort of replaces something like Dropbox, Google Drive, or OneDrive; it's simply a place to keep your files. Files are sorted by file type, and you can easily view things like image files. Or if I pick a file with a slightly higher resolution, say I go into a subfolder and pick one of my favorite images, I can zoom in and see it in all of its high-resolution glory right within the browser; I don't have to muck around downloading it and opening it in some external application. There are areas for messaging, for getting messages from the system and from your colleagues, and there's an Apps area where you can configure some of our many integrations, as alluded to earlier. In here you can configure where you might get data from or where you might link to data: you can link to external data sources like Dropbox or Box, import protocols from protocols.io, or connect to an OMERO server to link to datasets or individual images held there. We can also configure here the various repositories we might send things to, including Dataverse and, of course, Zenodo.

Then we also have a detailed Inventory area. We're actually very proud of our Inventory area, though unfortunately it's not really the focus of today's talk. But for no additional cost, users of RSpace get access to this sophisticated, modern, and innovative inventory system, which can be used for tracking any large set of real or conceptual entities you work with in your research, and for showing precisely how those items have been used in your experiments and workflows.
I can, for example, swap the inventory system into tree mode, which makes it extremely intuitive. Once I've done that, I can browse around in my various containers looking for items, just the same way that on my computer I might browse through folders looking for files. For example, here I can see that we have a room; in that room there are freezers; in those freezers there are shelves; and on those shelves there are freezer boxes. I can take a closer look at a freezer box and see a little map of the box to understand where all of my items are. I can select individual items here and move them around, edit them, customize them, or record metadata about them. I can build these things into sets, use those sets of items in experiments, and track exactly how I've used them in the audit trail over time. I can even see the entire history of a single sample and understand who's accessed it, which experiments they've used it in, and where it currently is. Anything I want to do with my inventory, I can do easily from here.

Information about those samples can then be embedded in different documents. As an example, if I take a look at this content example document, you can see that within the note-taking area of RSpace I can create rich formatted text; I can embed images; I can embed files of all types. I can view these using the technology built right into RSpace, or, in the case of MS Word or Excel files, I can view or start editing them using our integration with Microsoft Office Online. We also support open source alternatives like Collabora Online. I can create tables and calculations, and I can drop images into those tables to control the layout of my page.
I can copy and paste in chunks of tables, content, or images from external sources and easily paste them onto the page. I can also use innovative content creation tools: for example, as a shortcut, I can simply type the backslash key to get access to a range of different elements I can drop onto the page, including special characters, code samples, images from the Gallery, images from my computer, equations built using a built-in LaTeX editor, or really whatever I want, including links to external data held outside of RSpace. And by using the backslash key, I can do all of that without taking my fingers off the keyboard, which is very nice.

Over here I can also see an immediate list of the sets of samples I've associated with this document. For example, if I click this one, we see a list of pancreas sections associated with this experiment. If I want to, I can export a summary CSV file of these, or click this link to jump to that item in the inventory system and immediately learn more about it. In fact, even from here I can tell immediately which experiments that particular sample has been used in.

Okay, let me switch now back to the Gallery. The Gallery is good at storing a number of things; it can store any file in any format. You can add files manually, drag and drop them in, bring up a file chooser, or interact with your mobile device to add things like pictures or voice notes directly through the operating system of your mobile device. You'll also see that you can import things from various sources. In this case, this server is configured to import DMPs from Argos. If I choose that option, I'll see a list of DMPs. These are all actually test DMPs associated with the sandbox Argos server, not actual production DMPs.
I can sort these by label or by grant, or search for a particular funder; there are other ways I can look through this list to find the one I'm looking for. I can select an item at any time and click import, and that will add the item to a special DMP area of my Gallery. In this case, you can see in this Gallery area we have a mixture of DMPs: some of these have come from DMPTool, and others have come from Argos, like this one, for example, which came from the Argos system earlier.

If I'd like to associate this with an experiment, I can easily do that. I go back to my Workspace and choose a particular document. You can see that from here I can do things like duplicate that document or move it to a new location. I can delete it, but it will actually still stay on the server, because RSpace is fully 21 CFR Part 11 and Annex 11 compliant. I can export it in a number of different formats, as we'll see in a moment. But for now, if I just click on this document to open it, one of the things I can do is note how I'm using the DMP in the context of the study. So I'll write: here is the DMP used to define this study. Now I can very easily say insert from Gallery, go to the DMP area, choose the DMP I imported earlier, and insert it; it drops in here as a little icon representing the file. Now, I actually inserted that right into the middle of a word, so let's fix that: I'll cut it from this location and put it where I meant to put it in the first place, which is right here. Now, if I select this and look at the info for it, as Rory alluded to, I can choose to show linked documents, and it tells me that so far I've associated this DMP with one RSpace document.
In fact, that's the RSpace document I'm working on right now. Here's the unique RSpace ID, and here's the corresponding unique ID on that particular page. If I associated this DMP with other documents, they'd all appear right here in this list. Okay, I'm going to save this now.

One of the things I can do from RSpace is easily export my work at any time, in any one of a number of different formats. I can do that either from the document itself by clicking export, or I can go back to the Workspace and build a cohort of data I'd like to export by selecting items using the corresponding checkboxes. So I'll go back to the Workspace, where I could select any documents I wanted: more than one document, folders, notebooks, even everything I see on the page here; or I could select a cohort of data using our advanced search engine. I can build a set of different things here that I'd like to export as a single data deposit. In this case, for brevity, I'll just select one document and say I'd like to export it. I can now choose a format for the export, and then decide that I'd like to send this out to a repository.

Actually, I'm going to go back one step: I also have the option to include filestore links. What is this about? Well, if I go back to the document, one of the things you can do in RSpace is link to external files or datasets outside of RSpace, held in university-governed file stores like SMB or SFTP file stores, or even in iRODS-managed file systems. In this case, you can see we have a couple of links to external files held outside of RSpace, and if I click a link, it tells me exactly where that file is kept: in this case, in an SMB file store somewhere outside of RSpace.
So when I export this document, I can choose to tell RSpace to reach out to those external files, retrieve them from wherever they are outside of RSpace, and include them in the export along with everything else you see on this page. It's actually a very nice trick that lets you reunite data files from a number of different external locations and bring them all together in the repository. So I'll select the item again and say export, choose my format, say that I'd like to send it to an external repository, and also say that I'd like to include any external files held elsewhere outside of RSpace. Then I'll say next, and the system lets me choose what kind of repository I'd like to send it to; I'll choose Zenodo. I can also say that I'd like to associate this deposit with a particular DMP I've previously imported. I'm going to choose this one, and I'm not choosing it at random. Let me briefly mention a little feature here that's nice, but is a bit of a gotcha if you're not careful. These DMPs I've previously imported come from a sandbox Argos instance, not the main production server, and the association of a DMP with your export will only work if the DMP being exported has already been published and has a valid Zenodo DOI associated with it. I picked this particular DMP because I already know it does in fact have a Zenodo identifier associated with it, and I could verify that by going back to Argos, finding that particular DMP, and seeing the Zenodo ID associated with it.

Okay, so we've chosen the DMP that we'll be associating with this data bundle. I can give it a name; let's say Argos export demo.
That will just make it easy to find. For the description, for brevity, I'll just drop something in there, but normally I'd put in a properly detailed description. At this point I can also attach standard, industry-specific tags to this deposit to make it more findable in the future. This is a feature that's fully supported in our workflow with Dataverse, and it is in fact also featured in our workflow with Zenodo. So I can add a tag, drawn not from RSpace but from known industry-standard tagging sites, specifically BioPortal in this case. For example, I can say this is NASH with fibrosis, and you'll see that the system is actually searching the BioPortal controlled vocabulary site for a corresponding tag; I can choose that tag and apply it to my deposit. This is not just any old tag: it's a worldwide-recognized, industry-standard tag that I've selected and am now associating with this deposit.

Now I'll say next, and the system will check that I'm logged into any of the external file stores I've opted to include files from. So I'm going to log into that system; of course, I have no idea what my credentials are, but I can look them up quickly here. Okay, now I'm logged into the external file system where there are some external files that are not currently part of RSpace but that I'd like to include in this deposit. I'll ask the system to scan for them, and it says: yes, I found these files, would you like to include them in the export? I'll say yes. Okay, so now I can click export. Behind the scenes, RSpace will take whatever I've selected, pull in those external files from the external source, bundle the whole lot together as a single industry-standard zip file, and then send it all off to Zenodo.
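As an aside for readers who might want to script a comparable deposit themselves: Zenodo's public REST API lets you create a deposition with metadata like the title, description, and keywords shown in the demo. The sketch below is illustrative, not the code RSpace runs; the token is a placeholder, the field names follow Zenodo's deposition API documentation, and the network call is left commented out.

```python
# Illustrative sketch of creating a Zenodo (sandbox) deposition via the
# REST API. Values mirror the demo; the token is a placeholder.
import json
import urllib.request

ZENODO_SANDBOX = "https://sandbox.zenodo.org/api"

def build_metadata(title, description, keywords):
    """Assemble the deposition metadata Zenodo expects.

    Tags chosen from a controlled vocabulary (e.g. BioPortal) end up
    in the record's keyword/subject fields."""
    return {
        "metadata": {
            "title": title,
            "upload_type": "dataset",
            "description": description,
            "keywords": list(keywords),
        }
    }

def create_deposition(token, metadata):
    """POST a new deposition; returns the parsed JSON response."""
    req = urllib.request.Request(
        f"{ZENODO_SANDBOX}/deposit/depositions?access_token={token}",
        data=json.dumps(metadata).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    meta = build_metadata(
        "Argos export demo",
        "RSpace export bundle associated with a published DMP.",
        ["NASH with fibrosis"],
    )
    print(json.dumps(meta, indent=2))
    # create_deposition("<your-sandbox-token>", meta)  # needs a real token
```

After creation, files (such as the zip bundle RSpace produces) are uploaded to the new deposition and the record is published in separate API calls, which is also when the keywords become searchable.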
Once that process is complete, with a little bit of luck, I get a notification telling me, hey, that export worked. Not only that, the notification includes a link directly to where that particular export has been put. I can click that link and be taken to Zenodo; this again is my Zenodo account, and sure enough, you can see here the deposit that I've now added to my sandbox Zenodo instance. If I want to, I can go through here and adjust these metadata settings further; there are a number of different places where I can add additional metadata or do other things to better identify this data. And here is the link to the zip file I've just associated with this Zenodo record; if I click it, it will download that zip file to my local computer. Now, the tag I included has also been sent along, but I won't see it until I actually publish this record and make it available. When I do, that tag (in this case, you may recall, NASH with fibrosis) will be dropped into the subject field of the deposit's metadata, so that people searching Zenodo can find it by searching for the subject NASH with fibrosis. So that, in a nutshell, is the process as it is today. I hope that was useful and gives you some idea of exactly how this works. If you have any questions, I'd be happy to answer them.

Thank you very much, Rob and Rory, for the overview and for guiding us through the workflows of RSpace, so that we can use them to add our DMPs and connect our RSpace collections and activity with our DMPs. Let's see, do we have any questions? Please raise your hand and feel free to speak. I cannot see you... now I can see.
I don't know if any of you have used RSpace before, for example; that's a good question, to get an idea of whether you were already familiar with it before today. Well, it seems like that must have been crystal clear for everybody, since I see no hands raised or questions asked. Let's see, so I have a question. I partly know the answer, but I'm not sure, so I'll raise it: RSpace is not open source, right? Oh, that is a fabulous question. Yes, I'll take it. At the moment it's not, but we've been working hard on the transition to open source since about March of last year, and we will transition to open source in April. In fact, I think this week we have the first public announcement coming out. It was a huge amount of work and it's a huge transition for us, but we are really looking forward to it. We're actually on target to meet the April date, which, given the amount of work involved, is a miracle, but we are on course. So we will be open source quite soon. Very good to know. In some ways related to the open sourcing of RSpace, there's one very important feature that we haven't really discussed. Rory didn't mention it in any great detail, but I am eager to show it to you: even today, from the profile page, every RSpace user can get immediate access to our existing full-featured, quite sophisticated open APIs. You can use these to push or pull data into or out of RSpace to or from any other data source. This is worth mentioning here because it's really an important part of freeing your data: it means the data can be freely exchanged with any other system. There are APIs for the ELN, and there are actually even more advanced APIs for the inventory system.
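As a taste of what those open APIs make possible, here is a minimal sketch of listing documents over the RSpace API. This is a sketch from memory, assuming the community server's /api/v1 base URL, the apiKey request header, and a listing response keyed by name; check the live API documentation before relying on any of these names:

```python
# Hypothetical sketch of pulling a document listing from the RSpace API.
import json
import urllib.request

BASE = "https://community.researchspace.com/api/v1"  # assumed base URL

def list_documents_request(api_key: str) -> urllib.request.Request:
    """Build a GET /documents request authenticated via the apiKey header."""
    return urllib.request.Request(
        f"{BASE}/documents",
        headers={"apiKey": api_key},
    )

def titles(listing: dict) -> list:
    """Extract document names from a listing response body (assumed shape)."""
    return [doc["name"] for doc in listing.get("documents", [])]

req = list_documents_request("YOUR-API-KEY")
# An actual call (needing a valid key) would be:
#   listing = json.load(urllib.request.urlopen(req))
#   print(titles(listing))
```

A corresponding POST against the same base would push data in, which is the "freely exchangeable with any other system" point made above.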
The logic here being that we think it's quite likely that people will want to move samples into and out of legacy inventory systems, or into and out of things like purchasing systems, so that they can track exactly what materials they're using and how those materials have been used. So this inventory API is available today. If you wanted to work with RSpace to access and exchange data with something you already use, you could do that now with the API, and it will be even easier after we go open source. We're also hoping that once we're open source, these APIs will continue to develop very quickly as they're modified and improved by the open-source community, making RSpace an incredibly flexible and interoperable tool that will let you integrate with almost anything. So, Ellie, since you asked, and since I guess we have a bit of time, I could perhaps explain our thinking behind going open source; would that be useful? Yes, but I see... oh, there's a question now. Oh, great, perfect. Theresa, please. Well, thank you so much, and thank you for this wonderful presentation. I had not known of RSpace before, so I think it's fascinating, and it's a wonderful feature to be able to link DMPs to documents. There's one question I would like to ask: where are all the files located? Is it on site at the department, institution, or organization where RSpace is installed, or is it somewhere on the cloud? The answer is: it's entirely up to you, Theresa. Each RSpace customer and user gets their own unique, private, personal server, and that server can be installed wherever you want. It can be installed on premises, as many of our customers do, especially European customers. It can be installed on a cloud host of your choice that you would manage, or it can be installed on an Amazon Web Services instance that we manage for you entirely.
The last option is by far the most convenient for the institution, because if you have us manage your own Amazon instance, that comes with unlimited data storage and there's essentially nothing for your IT team to do; we manage the entire thing for you. All you need is a URL to log into your own personal server. That server is unique and private, your data is not mixed with other people's files, and all of the files you've added to RSpace and all of the data you've created there is physically kept on that server, but the location of that server is ultimately up to you. Thank you so much, that sounds great. As you mentioned, in Europe, and I'm located in Sweden, the location and information security of data is an important question, so this is an important point. That is on a different level from all the user features that researchers can access to work with their data, collaborate with colleagues, and those sorts of things. Of course. Thank you so much. I should add that we've done many large deployments to large enterprise organizations and institutions all over Europe. We are completely on top of data privacy, data security, GDPR, and the other related issues that are important to institutions. You can be sure that during the RFP phase and the selection of a solution at each university, the privacy, data management, and security teams of those institutions dove deeply into RSpace and the company that builds it, to make sure that we had thought of all these things and have all aspects of the deployment under control. Well, thank you so much. Yeah, thank you. Any other questions? Rory, maybe you want to go ahead with the open source story? Yeah, sure. So, a bit of background on why we're going open source: there are several reasons.
Probably the most pressing reason was that we were getting more and more requests from existing customer universities, and from universities interested in RSpace, to go open source, because their researcher communities prefer using open-source products. That was probably the most pressing thing. Secondly, as I mentioned, we're now actively exploring, including in Sweden as it happens, working with the NRENs, the National Research and Education Networks, on having RSpace deployed as part of a national or EOSC-level infrastructure. Being open source in that case is pretty much a prerequisite; if you're not open source, they won't even consider working with you. So those were business opportunities which, if you will, meant we kind of had to do it. There are two additional reasons which are also important. One is that, as you know, there is research which is managed or overseen by data stewards and others at research institutions, but there are also projects which are project-based rather than institution-based. I'm thinking of the NFDIs in Germany as the primary example, where all of the NFDIs are now developing their own research infrastructure, and they only use open-source tools. Right now, because RSpace isn't open source, although it might be an attractive product with, as you can see, lots of useful capabilities, they won't work with it. So we think there's an opportunity for RSpace to be used outside the institutional context in these projects. We won't get any income from that, but that's okay, because we want RSpace to be part of the innovation that's going on there. And that relates to another benefit: as you can see, we've done a huge number of integrations ourselves, as well as product development.
But the more you do, the more people want. RSpace has multiple capabilities, which is great, but for each of those capabilities people say, oh, this is really nice, and could you add this, could you add that? Pretty soon it becomes impossible for us to manage all the requests for additional features and capabilities. Once the product is open source and a community begins to grow around RSpace, that will also lead to accelerated product development. So those are some of the considerations we had when we made the decision about a year ago to go open source. Thank you, Rory. It's great to see where this community is heading: more open science and open ecosystems. I applaud you for taking this step towards open source, and I know that it's not easy to change everything. Well done. There's a question in the chat, though: "Hello, congrats for the amazing job and the very interesting presentation. Can single researchers decide to use RSpace, or is an institutional commitment necessary?" Well, yes, individual researchers can already decide to use RSpace. Even before open source, we have what's called the community version, which is completely free, and anybody can sign up for an account. So in a sense we've always had this community version. It's not open source, so it doesn't enable open development, but in terms of open usage it does; that's already possible. In fact, I think Rob... yes, there we go, there's a URL in the chat for you. Thank you also for showing that on your screen. I should say, though, that the RSpace free community version does not include the inventory system; it will give you a basic but pretty good, usable experience of RSpace.
We have thousands of individual users using it on this server. By the way, this will also give you a sense of the system's performance, because we have probably 20,000 registered users on this server. They're mainly things like the sort of orphan graduate students whose PI has not yet made the commitment to buy an ELN or access to the full version of RSpace. At any one time there might be a thousand or more people logged in and using it, and this is a relatively modestly provisioned server, so you'll be able to see that it performs well even under heavy load, with thousands of people using it simultaneously. Anyway, we have lots of people using this as their main production server. And if you do decide to buy your own instance on your own private server, you can easily migrate your data from this public version to the new server, so that you can pick up right where you left off and continue in the full environment, which would include the inventory system. Thank you. Any other questions? We are getting close to time. Rory and Rob, can you leave an email address, so we know how to contact you if we need more information in the future? Yeah, I can do that. Thank you. You already have a copy of the slides, so feel free to post them if you'd like. Thank you very much. Oh, we have a question. Yeah, thank you so much. It's a pretty general question: what potential do you see that RSpace has for data-driven science, for data-driven life science, as a tool for data-driven research? Well, do you want to go ahead, Rory? Yeah, sure. Well, I think in terms... I should say this is out of curiosity. As I said, I have no background knowledge about RSpace; today is the first time I've heard about it, so this is not meant to be a nasty question.
Yeah, no, I think it has huge potential, and the potential is already being realized. As I said, to us the core mission has always been to enable streamlined flows of data between tools, instead of data siloed in various tools which aren't open. By doing the things you've seen RSpace is able to do, aggregating data from different sources and then making it possible to deposit that data in data repositories, that's already a massive step forward in terms of data-driven science and reproducibility. The other thing, and I don't know if this is part of your question, is that we constantly get asked now: what about AI, what's your AI story? I tried once before to share my screen; I'm going to try again... I'll probably fail. No, I won't even try. So I think the AI story is not so much about using AI inside RSpace; it comes down to the data-aggregation and data-discovery benefits that you get using RSpace. One of the things we're doing now is a second stage of the iRODS integration, which enables you to export data from RSpace to iRODS. iRODS is a virtual file management system which can track up to billions of files; iRODS could see all the files, i.e. all the data that's produced in an entire university, for example, or in an entire biodiversity project. By enabling the export from RSpace into something like iRODS, with associated metadata that enables discoverability, you massively increase the kind of intelligent, if you will, data pools which AI engines can access for things like medical research and all different kinds of research.
So I think the traditional story we've had, of data passing throughout the life cycle in a streamlined way, is already a huge benefit to data-driven science, and the kinds of things we're working on with iRODS will take that further. I won't be able to get it now, but someone at University College London, which as you can see is one of our customers, produced a really nice graphic of how they see the data funnel: all the data in UCL coming through RSpace and then being accessible to AI engines. It kind of encapsulates that, and that, of course, would be a second stage of the ways in which RSpace can contribute. I'll pass it on to Ellie later, and maybe you can find a way to share it with people; I won't be able to get my hands on it quickly. Both of the graphics that you saw on the shared screen just now are in the slides that Rory presented earlier, and I'm sure he'll make those available for distribution after this call. The graphics give you some idea of what we ultimately think the story for data sharing will be. Speaking as a former bench scientist and graduate student at Ohio State University who was involved in many different aspects of research, both in the life sciences and in educational and pedagogic research: what was very clear to me is that although there are various ways to record, hold, and store data, no one has really, up until now, thought about the entire story, which is giving busy graduate students a single tool that lets them manage their data from the planning stage, when you're first making a data management plan, all the way through the early bench stages, where you're doing trial runs, experiments that fail, and experiments that yield useful data.
And then ultimately using that central core bench tool, RSpace, which is available on workstations and is also very mobile friendly (you can use it on tablets and other mobile devices while you're in the lab), to reach out and link to other types of data that you or your colleagues have stored elsewhere. And then ultimately bringing that whole story together: the plan, the bench research, the data held in external systems; bundling the whole lot up and sending it all out to a final resting place in a repository where other people can access it. That's really our vision, and it's the vision of many other institutions that are trying to find ways to make this easy and seamless for people to do. One of the key barriers in the past is that most research at universities is done by graduate students. If this process is not simple and easy, and if it's not doable from a single central tool that everybody uses every day, those graduate students simply are not going to do it. I speak from bitter experience of trying to locate the past data of graduate students at institutions I have been involved with, where they're really just very focused on getting their experiments done and graduating, and they don't care that much about research data management. RSpace makes that step easy for them, and it makes it much more likely that they will go to the trouble of properly documenting, managing, and eventually making available the data they've created in the lab. So that's really our vision, I would say. Thank you so much. Right, if there are more questions, you can take the last one now. Otherwise, any hands? Okay. So thank you very much.
Rory, Rob, it was lovely to have you joining our first community call, first of all, and as always it's fascinating to see what you can do with RSpace, with Argos inside RSpace and the other way around. We are also going to share some plans we have for the future, so we can continue this collaboration. Our next call is scheduled for the 28th of February, and we will focus on the Open Science Trails project, which has its kick-off next week, so we'll have more to tell you about it then. One of the core components the project will work around is DMPs, and specifically machine-actionable DMPs. So let's reconvene next month for that. Thank you very much, thank you again for accepting the invitation, Rory, and thank you all for joining. Thanks everyone, thanks for having us, and nice to meet everyone. Bye bye. Thanks a lot everyone, we will follow up via email. Goodbye.