So the next speaker is Christian Garbers, who is going to be talking about the G-Node Infrastructure services (GIN): safe, efficient, and seamless data management for neuroscience.

First, I want to thank the organizers for giving me the chance to give this talk here. As you said, I'm Christian Garbers. I'm currently working at the G-Node at the Ludwig-Maximilians-Universität in Munich, and I'm here to talk to you about our ideas on data management and data sharing. While there has been tremendous progress in data sharing and data management over the last years, a lot of data, if not most of it, is still managed in the way we can illustrate here: a data owner or data manager sits on a large pile of data that nobody else is allowed to attach to, touch, or do anything with. This is bad for many reasons. The main one is that neither the person sitting on top of it nor anybody else can actually make use of this data at all. Maybe not even the person sitting on top of it has a good idea of how to use it, because the dwarves who made all this stuff are long gone and vanished. So we need to change our culture, and maybe it's a good idea to look at other places where this is actually working quite well. One example is sites where you can share cooking recipes. That's where people go to upload protocols and report the results of those protocols, and other people might go there, replicate those results, report what they got, and might even improve the protocols. To a degree, that's what we teach undergraduates about how science is supposed to work; take that with a little grain of salt. Another place that is great for sharing, where software people and hackers go to share anything from small scripts up to complete operating systems, is GitHub, which has been popularized in the sciences quite a lot over the last years.
And it's a great place for sharing code and software. However, it's not really good for sharing data, because big data is a real problem for Git. That's where our inspiration comes from: our motto is "inspired by GitHub, flavored for science", and that's basically what GIN tries to be, some sort of GitHub for the neurosciences, or for the sciences in general. Most users of GIN currently just use the service as a website where they can upload some data through the web interface and share it with the community, with everybody, or with some chosen collaborators. However, GIN can do way more, because GIN is actually a versioned repository store, where versioning means in this case that old data is not lost: if you change something, the old version just isn't visible anymore. And it's a versioned repository store for data, for code, and for whatever else you can get in there. GIN is based on Git, on git-annex, and, for the website, on Gogs, which is a nice project. Being based on these brings quite some nice features: GIN is open, GIN is free, GIN is accessible. The interesting part, which makes it a bit different from some other approaches, is that GIN repositories do not need to be created on the website. They can come into existence locally, on your laptop, on your data acquisition machine, wherever you want. However, they can always go global directly from there, so the world and the sharing are always just a click, or just the call of a command, away. And since everything is versioned, you can always time travel: you can go back to the version of the data in your repository, in the state it was in when you made that nice paper. Also, GIN is fairly modular, meaning that we run our web server, but you can create a second instance of the same thing in your laboratory, in your department, in your university, wherever you want.
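The versioning idea just described, where old data is never lost but simply stops being the visible version, can be illustrated with a minimal content-addressed store. This is a toy sketch only; GIN itself delegates all of this to Git and git-annex, and every name in the snippet is made up for illustration.

```python
import hashlib

class VersionedStore:
    """Toy content-addressed store: every upload is kept under its
    content hash, and a per-file history records all versions.
    GIN delegates this job to Git and git-annex; this only shows the idea."""

    def __init__(self):
        self.objects = {}   # content hash -> content bytes
        self.history = {}   # filename -> list of hashes (newest last)

    def put(self, name: str, content: bytes) -> str:
        key = hashlib.sha256(content).hexdigest()
        self.objects[key] = content                    # old content stays stored
        self.history.setdefault(name, []).append(key)  # new version becomes current
        return key

    def get(self, name: str, version: int = -1) -> bytes:
        """version=-1 is the current state; older indices 'time travel'."""
        return self.objects[self.history[name][version]]

store = VersionedStore()
store.put("data.csv", b"trial,value\n1,0.5\n")
store.put("data.csv", b"trial,value\n1,0.5\n2,0.7\n")       # "overwrite"
assert store.get("data.csv", 0) == b"trial,value\n1,0.5\n"  # old data still there
```

The point of the sketch is only that an "overwrite" appends a new version instead of destroying the old one, which is what makes the time travel mentioned above possible.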
Also, without any of these tools, you can just use our command line client to do data management locally. You can also use the underlying tools, Git and git-annex, directly. You can access the data using HTTP, HTTPS, or WebDAV, you can create webhooks, and you can use our continuous integration and continuous deployment facilities. And you can do ETL tasks, whatever you want. This is why GIN is actually well suited for integration into existing data pipelines, starting maybe from the data acquisition, moving data over to some analysis machines, and sharing locally with some people in the lab, while keeping everything synchronized and nicely organized. Because of this, it's fairly easy to share data with GIN, either with remote collaborators or with the people next door. What also makes GIN interesting, thanks to the underlying tools, is that it takes care of data integrity, meaning you get a warning if your data suddenly changes. And as I said before, you can always go back to the state it was in before. GIN separates files into big files and small files, which is why it can deal with rather smallish text files in Git, but also with big files that can reach terabytes in size if you want. That is not only nice because you get proper version control this way; it's also nice because you can clone complete repositories without having to download all the binary blobs every time. You can fetch big files on demand, only when you need those specific files. This saves bandwidth, time, and space, all of which are valuable most of the time. GIN also remembers quite well, as long as it knows the file format of what you upload to the GIN web server, which is true for most text formats, among others.
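The two mechanisms just described, the big/small file split and the integrity warning, can be sketched in a few lines. This is a simplified illustration, not GIN's implementation: the size threshold is an assumption (the real cutoff is a configuration choice in git-annex), and the key format is a simplified version of git-annex's checksum-based keys.

```python
import hashlib
from pathlib import Path

# Assumed threshold for illustration; the real cutoff is configurable.
LARGE_FILE_THRESHOLD = 10 * 1024 * 1024  # 10 MiB

def classify(path: Path, threshold: int = LARGE_FILE_THRESHOLD) -> str:
    """'git'   -> small file, versioned directly in Git history;
       'annex' -> big file, stored by content and fetched on demand."""
    return "annex" if path.stat().st_size > threshold else "git"

def content_key(path: Path) -> str:
    """Simplified content-addressed key in the spirit of git-annex's
    SHA256 backend: file size plus checksum."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return f"SHA256-s{path.stat().st_size}--{h.hexdigest()}"

def integrity_warning(path: Path, recorded_key: str) -> bool:
    """True if the file no longer matches the key recorded for it,
    i.e. the data 'suddenly changed'."""
    return content_key(path) != recorded_key
```

Because big files are addressed by a key like this rather than stored in Git history, a clone only needs the keys, and the actual blobs can be fetched later, one file at a time.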
It actually indexes those files and gives you the ability to search through their contents, using either simple term-based search, fuzzy searching, or complex search grammars. Recently, we also added the ability to do semantic searching on some file types, on which we have a poster here this afternoon. It's number 110 and it will be presented by Michi, who is sitting back there in the audience. And with that, I'm nearly at the end already, because GIN remembers even more: you can get a DOI for each repository if you want. You just need to add some additional metadata in a standard form, and you thereby have a data repository which is uniquely identified and can be cited directly in publications. With that, I would like to invite all of you to come to our website, which is gin.g-node.org, where you can register and look into the details of how all this works, and also to join me at my poster, which is poster 74, also presented this afternoon. Thank you for your attention.

So thanks for finishing up a few minutes ahead of the delayed schedule. We have time for a few questions.

Nice presentation. Do you support use cases where different users may have access to different sets of metadata stored in git-annex? Like, not all the metadata is accessible to everyone. And if yes, how do you do that?

Currently the access model is limited to access at the repository level. If you would like to share different parts of the metadata with different people, you would need to split those metadata sets into different repositories. So the honest answer to the question is probably no.

I have a question. My fear with all of these things is that they may stop working sometime in the future. I mean, what's the sustainability model?

Okay, worst case, GIN is not working anymore. You can still use the stuff, because the underlying tools will still be there, right?
There is nothing that we add that cannot be used with other tools, like Git, git-annex, DataLad, you name it. So as long as Git and git-annex keep working, I think the data is fairly safe. And even if not: if you get a DOI, we actually take the data out of the Git world and additionally create a complete archive of the repository, put on a simple web server with basically nothing fancy around it. And that stuff will still be around.

What is the relationship to DataLad?

The short answer is that we are both using git-annex. And I'm not the DataLad expert; I think Michael is sitting there, and Yaroslav is sitting there. But I think from the technological perspective, using GIN things from DataLad should be fairly straightforward, to a degree. Maybe I'm too positive on this. But yeah.

If there are no more questions, I think we should thank the speaker again.