All right, I may look familiar in that I'm also at Impactstory, and Heather talked about one of the main things we're doing, which is Unpaywall. But the way we work is we tend to have a lot of projects going at the same time, and I thought it'd be really cool to get the chance to share with you some of the other projects we're doing. Partly because we're excited about them, but partly because this seems like a great group for us to basically expose more surface area for collaboration. We love collaborating, and we feel like each of these projects is an area where you can collaborate, either organizationally or technically, even just by using an API that we've built. That's what we hope each of these projects will represent for folks listening.

A lot of how we work is based on a feeling Heather and I had way back in the day, at a hackathon at some point: hey, if we want to change a culture to be more open, we have to make sure we change the reward system. Because you talk to a lot of scientists, and they pretty much all say the same thing: "Oh, this is pretty great, but how's this going to affect my tenure case?" Right?
So we said, hey, we should build tools that would try to help that happen. And what we found is that when we build these tools to assess open science, we end up having to build some kind of open database to do it, to find the products to do the assessment on, and then often that open database becomes useful for other things. A great example of that is Unpaywall, which Heather talked to you about: it was supposed to be an assessment project, but it became useful for a lot of other things. Maybe some of the other stuff we're doing will become useful for other things as well, and maybe some people in this room will have good ideas for how we could do that.

So I'm going to talk about the projects we're doing. We're going to go through this list relatively quickly, and I probably won't have time to get really deep into any of them, but again, maybe it's an opportunity for someone to say, "Hey, we're already doing this," or "We could use this in such and such a way." Unpaywall, Heather and I already talked about, so I'm not really going to spend any time on that.

The software impact project is something being funded by the Sloan Foundation, in collaboration with the University of Texas. We're really excited about this because we feel like there's a sort of triad of open science that we hear about a lot, right?
We've got open access, so people can read the papers. We've got open data, so people can replicate and also build on the data gathering. And finally we've got open software, so people can do the things the original author did to create the paper in the first place. Open software has lagged a little behind the other two, but it's really starting to see a lot of action right now, so we're really excited to be working on this.

What we're essentially building is a big database of every item of research software and how it's been cited. Garfield built citation databases like this back in the '60s; we want to build that same type of thing, but for research software. To do that, we're going to need to scan through all the literature and find every time a person uses software. They often don't cite it in any structured way; they just kind of mention it offhandedly. So we have this whole machine learning pipeline to try to understand those mentions and figure out which ones are actually uses of software.

And we have a little tool we've built already that you can use at citeas.org, where you can enter any project of any kind (software, data, code, papers, whatever) and we'll tell you how to cite it. The way we find out how to cite the project: when it's simple, it's simple, right? Just ask Crossref, no big issue. When it's code, when it's complicated, a lot of times (you can see the little thing that says "citation provenance" down here) we're going to the website of the software, we're going to GitHub, we're checking DOIs that have been registered, for example in Zenodo, for the software. It's really complicated, but the point is we find all these places to try to figure out how the authors want you to cite their software. So that's the software thing.

PaperBuzz: we already heard a little bit about PaperBuzz. It's built on the Crossref Event Data platform, which is awesome. They've got a terrific API that gathers altmetrics. Who here has heard of altmetrics?
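A quick aside before going on: here's roughly what talking to CiteAs could look like from code. This is a minimal sketch, not the project's actual client; the api.citeas.org/product endpoint exists, but the response shape assumed in best_citation below is hypothetical.

```python
from urllib.parse import quote

# CiteAs exposes citation lookups over HTTP; this builds the request URL.
CITEAS_API = "https://api.citeas.org/product/{}"

def citeas_url(product_id: str) -> str:
    """Build the CiteAs API URL for a DOI, GitHub URL, or project name."""
    return CITEAS_API.format(quote(product_id, safe=""))

def best_citation(api_response: dict):
    """Pull the first suggested citation out of a CiteAs-style response.

    Assumes a payload like {"citations": [{"citation": "..."}]};
    the real field names may differ.
    """
    citations = api_response.get("citations", [])
    return citations[0]["citation"] if citations else None
```

For example, citeas_url("10.5281/zenodo.123") gives a lookup URL for a software DOI; fetching it and passing the parsed JSON to best_citation would yield a citation string.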
They've got a terrific API that finds all metrics who here has heard all metrics You know that all my yeah, that's great I'm really happy about that because I point out we're in the tweet like seven years ago And now people know about it, which is awesome, which I'm really excited about so it's a thing that people like they are interested in it There's a couple good sources that are commercial source sources crossref has said hey Let's let's do this in an open way so that people can get this data and build on it. We said, that's great We work with PKP who's funding the project I want you you already heard it from a little bit on to work to build something that's built on a crossref event data But can provide kind of a UI for it, right? So it's just an API which is great, but sometimes people aren't so comfortable with the API They want to be able to see something so this is a pretty darn basic UI. It's still in it's still kind of in construction We're still working on it The idea would be this is a web page that someone can go to that they could find out the altmetrics for any particular research project And this also has its own API that adds a little bit on top of what the what you might get from crossref event data So impact story profiles is something we built a long time ago long long long time ago It's probably five years old or more I'll be built it like as I was mentioning at hackathon of them first projects that I that I did as a grad student One of the first things I worked with Heather on and the idea is that if you're really in open science You've done a lot of open science and don't all these things that we want you to do you shared your code shared your data You're interested in altmetrics. You're trying to make a global impact. How do you show that off, right? 
There are various profiles you can make, whatever, but they're more focused on traditional outputs. We wanted to create something focused on open science outputs that shows open science impact. For instance: are people talking about your work? Are people citing your data? Are people reusing your software? Those are the types of questions we want to be able to answer, and you can make a profile at impactstory.org. This is just a quick screenshot.

Open Science Assessment Project. One thing we found in the course of doing profiles (and Heather would say this too, just to make things clear) is that the profiles did not really go over that well. We created them, and people like them; there are tons of people who love them. But "tons" is like thousands, not hundreds of thousands or millions, right? So we found there was kind of a ceiling on how many people really want to use those profiles. We still have them up, and we're still trying to publicize them. But what we found is that people who aren't already doing open science don't really want to make one, because the news isn't real good for them, right? So they're not really excited about it. But evaluators who are really keen on open science, they really want to find out the open science behaviors of people in their department or their institution.

We ended up collaborating with someone at the National Institute of Mental Health who is really excited about trying to measure open science at NIMH. He said, well, could we just get a list of all of our researchers? They don't have to make their own profiles; I want you to make one for everybody and find out their open science behaviors. So we did, and we're going to try to extend this to a lot of other institutions as well, but for right now it's just with NIMH.
So it has the super catchy name "Open Science Assessment Project" for now; maybe we'll come up with a real name later on. You can see all the investigators at NIMH, and you can click on each of them and see, again, that sort of triad, right? How much of their work is open access papers, how much is open data, and how much is open code? We can get that by scanning the papers and kind of reading them to find out whether the person shared their code, using some of the same machine learning techniques as in the earlier projects. Then you can zoom in on someone and get this little list of, like: yay, I shared it; boo, I didn't; or maybe it's embargoed and will become shareable later on. And people can edit it and things like that.

So the idea is we can create this tool for people who want to assess open science, who want to say "yes, more open science." They can use it to help create that demand from above, right? To say: why aren't you doing open science? Yes, you should do it. Yes, it can go in your tenure case. Yes, we can make this the baseline.

And then finally, I want to cover two more projects that aren't so much about assessment. One is called Get The Research. We got a really awesome grant from the Arcadia Fund, and it's going to give us the space to really dig into this project. We're super excited about it. It's a way for regular people to find, read, and understand research. It's great that things are open, but if I can't find that open access paper, it doesn't really help me very much. And if I can find it but can't understand it, because it's written in a bunch of jargon or very field-specific language, what good does it do me as a regular person?
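One quick note on the assessment project before the last two items: per-investigator triad numbers like those boil down to simple fractions over a publication list. Here's a toy sketch, where the boolean flags is_oa, has_open_data, and has_open_code are hypothetical stand-ins for what Unpaywall lookups and the text-mining steps would actually produce.

```python
def openness_summary(papers):
    """Fraction of a researcher's papers that are OA / open data / open code."""
    if not papers:
        return {"open_access": 0.0, "open_data": 0.0, "open_code": 0.0}
    n = len(papers)
    return {
        "open_access": sum(p["is_oa"] for p in papers) / n,
        "open_data": sum(p["has_open_data"] for p in papers) / n,
        "open_code": sum(p["has_open_code"] for p in papers) / n,
    }
```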
So what we're going to do is create a search engine on the Unpaywall corpus; we've got 20 million free-to-read open access articles there. And we're going to build this thing called an explanation engine, which uses machine learning to read a paper and then try to explain it to a regular person. We're not going to make every single scientific paper easily understandable by every single person; that's not going to happen. But a person who is motivated, who doesn't necessarily have the educational resources to be deep in the field, could get, for instance, annotations. One thing we're going to do is annotate hard terms in the margin, and we can do that automatically. Or we can create lay summaries. I'm talking too long. So that's a screenshot; you can check it out at gettheresearch.org, and we can talk more about it later on.

And finally, we're interested in trying to create a graph of disambiguated authors. ORCID is going to solve this problem in due time: every single person is going to have an ORCID, and every single ORCID will be linked to every single product. That's awesome; that's exactly what needs to happen. In the meantime, though, that doesn't exist. So we're going to try to build one, again using machine learning approaches to cluster authors, so that we have a database of every person and what they've written that other people could use for cool projects. It's currently vaporware; it's not actually available yet, but it's going to happen eventually.

All right. Thanks to everyone who's funded us, and thanks for your patience. Sorry I went a little long.
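To make the author-clustering idea above concrete, here's a deliberately tiny sketch. Real disambiguation would compare coauthors, affiliations, and venues before merging anything; this just blocks name strings on a (surname, first initial) key and assumes "First [Middle] Last" formatting.

```python
from collections import defaultdict

def name_key(name):
    """Toy blocking key: lowercased surname plus first initial."""
    parts = name.lower().split()
    return (parts[-1], parts[0][0])

def cluster_authors(records):
    """Group (author_name, paper_id) pairs into candidate author clusters."""
    clusters = defaultdict(set)
    for name, paper_id in records:
        clusters[name_key(name)].add(paper_id)
    return dict(clusters)
```

Note that "Heather Piwowar" and "H. Piwowar" land in the same cluster under this key, which is both the point and the danger: two distinct people sharing a surname and initial would be wrongly merged, which is exactly why the real system needs those extra signals.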