Today, we are joined by Erin Braswell, who is one of the infrastructure developers here at the Center for Open Science, and Tim Head, who is the developer of the OSF CLI tool and also the head of Wild Tree Technologies. My name is Ian Sullivan. I'm one of the trainers here at the Center for Open Science, and I'll be handling the switching today. If you want to follow along during the course of this, all of the files for this webinar are available at the following URL. That's just one of our OSF projects; it has this particular presentation, a link to Erin's Jupyter notebook that she'll be going through in just a little bit, and we'll link the video and other materials there as well. So feel free to open that up and follow along if you want. It'll also be sent to you along with the link to the video when it becomes public.

I work at the Center for Open Science. We are a nonprofit based in Charlottesville, Virginia. Our mission is to increase the openness, reproducibility, and integrity of scientific research, and that's research broadly, whatever your field or discipline; we want to make things easier for you. We do that in a couple of different ways. Internally, we're organized into three teams. We have a metascience team that runs replication studies to try to identify key issues in reproducibility and best practices across multiple fields. We have a community team; I'm on the community team. We run events like this and reach out to individual researchers and teams of researchers to spread some of those best practices and showcase the efforts people are making to improve transparency and reproducibility in their research. All of that is made significantly easier, and much more possible, by our infrastructure team, which is the largest team here at the Center for Open Science.
Their mission is to build tools, most specifically the Open Science Framework, that make actually implementing these best practices possible for people in the community, and make it as easy as possible for that to happen. So the big effort there is the Open Science Framework, or OSF. The OSF is both freely available and an open source free software project, the code for which is available online. This is important because part of how we pursue our nonprofit mission is to make both the infrastructure and the actual operation of it freely available. So in addition to the core software as we run it, we've also built a free API, which is available for other people to interact with.

The OSF is designed to support all of the different stages of the research life cycle, from searching for new ideas in the literature, to collecting data, interpreting findings, and publishing a report. That's a lot of steps, and we know that there are a lot of other tools out there that people are using for individual portions of this research life cycle. Part of our strategic goal with the OSF is to be a way to tie these things together, so that whatever tools you're using at different stages of the research life cycle, the OSF can be the central dashboard for your project and a way to integrate all of those efforts into a coherent whole. That means we are not trying to be better than all of the existing tools out there for each stage, and we're not trying to be more GitHub than GitHub. But we do want to give you the option to have a tool like the OSF that's freely available and to connect as many of those other tools as possible to it, and we do that through the API. People integrate with the API in a couple of different ways: we have applications, add-ons, and user scripts. The OSF itself is actually an application that makes use of the OSF's API.
The web interface that you see when you go to osf.io: portions of it talk to the API directly, and the direction of development is to make the whole web interface something that talks to the API directly. There are some other applications out there, like JASP and PsychoPy, external applications for statistical analysis or experiment operation that talk to the OSF via the API in order to store your scripts or run analyses on your publicly available data. We've also implemented a number of add-ons, which are bridges between the OSF's API and the APIs of various publicly available web services like GitHub, Zotero, and Google Drive. And we have user scripts that you can use or write in a couple of different languages. There are libraries for Python and R with different levels of functionality; those are ways of wrapping our public API in a language you might be familiar with, to make interacting with it easier. We also have the OSF CLI, which Tim will be talking about a little later in this presentation, and which is sort of an amalgam of an application and a user script: it provides a Python library for interacting with the OSF, but it's also a standalone command line client that you can use to integrate directly into your experimental workflow without needing to write your own scripts in either of those languages. So with that, I'm going to pass this over to Erin Braswell, who's one of our infrastructure developers here, and she'll be presenting the API.

All right, thank you so much, Ian. I'm going to be sharing my screen here with everybody. As Ian said, I'm a developer here at the Center for Open Science. I've been here for just about three years, so I've gotten a chance to see the Center for Open Science grow in both size and mission. We've gotten to do a lot more cool stuff while I've been here, which has been great, continuing to expand into those parts of the research lifecycle, which has been fun.
So I'm going to share with you a presentation on the very basics of using the OSF API, and it's going to be in a format called Jupyter Notebook, formerly IPython Notebook. They're expanding beyond just Python, and now the name reflects that it supports both Julia and Python, among other languages; Julia is another great programming language as well. This is going to cover some very basic things, starting with just querying the API for publicly available information, and then moving on to a more fully fledged example: creating a project, parsing through the information that you get back from the API when you create that project, and then uploading a file to that project.

To start off, if you're ever interested in reading way more about the OSF API, we have a pretty new version of the OSF API documentation, which is available at developer.osf.io, and it has a broad overview of all of the endpoints on the OSF API. There's a lot of information there, so it's good if you're going in with one specific thing that you would like to know more about. In my presentation, I'm going to be focusing on the nodes endpoint, just because it's a nice and simple one. A node is kind of an overarching name for a project, and the API documentation will give you an example of all of the different attributes and relationships, and all the information available on each endpoint, as well as an example response that you might get back from the API, just so you know what to expect. So with that, let's go look at some of those examples. We're going to start by querying the API for some publicly available information, and the first thing we're going to do is some Python imports, and we're going to define a little helper function that will make it easy for us to print out the results we get back from the API.
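In code, that setup might look something like the following minimal sketch. It assumes the third-party requests library for the actual HTTP call; the helper name print_response is ours, not necessarily the one used in the notebook.

```python
import json

# Base URL for version 2 of the OSF API, as given in the talk.
OSF_API_URL = 'https://api.osf.io/v2/'


def print_response(response):
    """Pretty-print the JSON body of an API response.

    Works with any object exposing a .json() method, such as the
    Response objects returned by the `requests` library.
    """
    print(json.dumps(response.json(), indent=4))


if __name__ == '__main__':
    import requests  # pip install requests
    print_response(requests.get(OSF_API_URL + 'nodes/'))
```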
So you can see we're just setting our URL to api.osf.io/v2, which is the URL for the API, and also defining a little function to help us print out the results. As I mentioned before, we're going to start by accessing the public nodes list. We're going to request the API endpoint for a list of nodes, which is just the term we use for a project; it's kind of the container that holds everything in a project. So here we go: what we're going to do is go to that OSF API URL plus "nodes" and see what we get back. This response looks a lot like the response that was shown in the API documentation, which is good. It's exactly the same thing we would get if we actually went to that URL in a normal browser. We also have what's called a human-readable version of the API: if you go to the URL in your browser, you'll see a human-readable, printed-out format of all the information that's available there, and you can also switch to the raw JSON format if you just want to look at pure data. I am logged in right now, but the great thing about this is you don't have to log in; you don't have to provide any credentials. So it's a good way to just get started playing around with the information that you can get back from the API. This is a lot, so what we're going to do is print out the different sections of the first result we get back. We have relationships, which are links to other, more expanded bits of information. We have attributes, which are things like title, description, and a lot of little metadata-type things. There are also type, ID, and links, and we'll get to those later. So let's go ahead and look at some of those attributes: for the first results, we're going to print out the titles. So this is live.
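Outside the live demo, pulling titles out of the nodes list might look like this sketch. The JSON structure (data, then attributes, then title) is the one shown above; the helper name titles and the use of requests are our own choices.

```python
def titles(payload):
    """Extract project titles from a JSON API 'nodes' list payload."""
    return [node['attributes']['title'] for node in payload['data']]


if __name__ == '__main__':
    import requests  # pip install requests

    # The public nodes list needs no credentials.
    payload = requests.get('https://api.osf.io/v2/nodes/').json()
    for title in titles(payload):
        print(title)
```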
So I actually don't know what's going to happen, which is always fun. Hey, those look fine. Okay, so these are the titles of the ten most recent public projects that were put on the Open Science Framework. Some of them are pretty general, like surveys or materials. We have people using it from all over the world, so there are a couple of international results as well. It looks like Open Science Community research is on there too. I don't know if that's one of the Center for Open Science's own projects, but one thing I like is that we tend to use our own products as well, which is nice, because we can also function as a bit of quality assurance: if one of us has something we'd like to use, we can get that through the process quicker.

We can also filter these results. If we're interested in narrowing down our results a little bit, there's a whole bunch of different filters that you can use. We're going to use the tags filter right now, but if I go back to the documentation, there's a list of all of the different terms that you can filter by: title, category, description, whether or not a project is public, the tags, the date created, the date modified, and a couple of other things. So let's go ahead and filter by the tag "climate" and see what kind of results we get. This is printing out the title of each result, along with a couple of the tags associated with that result, so we can verify that that's what we're getting. And yeah, these look like they're related to climate, and you can see "climate" in the tags as we go along. Now, you might have noticed that we only get ten results at a time, but there are way more than that, especially if we just do a general node search. So there are two ways that you can expand the number of results you get back.
Number one is you can just paginate through the results. There are links provided at the bottom of every results page that will let you paginate, so let's go ahead and do that first. We're going to search now for any title that has the word "fish" in it, and the first thing we'll do is take a look at that links section. You can see that there's a link for the next page, which already includes the filter for you so you can keep your results going. It also gives you the number of results returned per page and the total number of results. Then this queries for the next page, so you can see what the links section from the next page looks like: it has the next URL, which points to page 3; there are 10 results per page and 40 in total; and it also now has a link for the previous page, which is the one we were just on.

The other option for getting more results is to add the page size parameter to the query. This time we're going to search for anything that has "science" in the title, and we're going to request 30 results at a time, and then we'll take a look at that links section. You can see it says the total is 724 and per page we have 30, so now we can go through bigger chunks at a time. The maximum number of results you can request at a time through our API is 100; if you ask for any page size over 100, it will still just give you back 100 results at a time.

I mentioned before that one of the sections included with each individual node, or project, is relationships. One thing you might notice when going through the API is that you get a lot of links to other places where you can expand on that information. All of this mostly conforms to what's called JSON API.
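The pagination walk just described might be sketched like this. The function name is ours, and the page-fetching callable is injected so the walking logic stands alone; the links['next'] field and the page[size] query parameter are as described above.

```python
def collect_titles(first_url, fetch, max_pages=5):
    """Walk a paginated JSON API list by following links['next'].

    `fetch` is any callable mapping a URL to a decoded JSON payload,
    e.g. lambda url: requests.get(url).json(). Stops after `max_pages`
    pages or when there is no next page.
    """
    titles, url = [], first_url
    for _ in range(max_pages):
        if url is None:
            break
        payload = fetch(url)
        titles += [node['attributes']['title'] for node in payload['data']]
        url = payload['links'].get('next')
    return titles


if __name__ == '__main__':
    import requests  # pip install requests

    # Ask for 30 results per page; the OSF caps page[size] at 100.
    url = 'https://api.osf.io/v2/nodes/?filter[title]=science&page[size]=30'
    print(len(collect_titles(url, lambda u: requests.get(u).json())))
```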
The JSON API format might look a little bit strange at first, because you see "contributors" and you would expect a list of contributors, but instead it leads you to another URL to follow to get more information. So the API mostly conforms to JSON API, which is a nice template to start from, and it's a common format, so folks who are familiar with it know what to expect. It's easier to share that convention than to invent our own; we wanted to go with a community standard for the structure of our API.

So, going back to relationships: what we're going to do is take the first result, look at its contributors relationship, follow that relationship, and print out what we see there. The first thing we did was print out the contributors relationship, and we can see that it is a JSON object with links, related, and an href; that href is the URL we can follow to get more information about the contributors on this particular node. Then we follow that relationship and see what kind of information we get. This is about one particular person: we can get a list of their nodes, we can get a list of the institutions they belong to, and we can get some attributes about them, like their first name. This one is for David Miller, who works in the metascience part of the Center for Open Science, and it's just the information he's provided here: GitHub username, links to various other online presences, and also the self link, which is just a link back to the detailed information about this particular person. So those are the relationships, and a little bit about why things seem to be really nested; it's because we're following this JSON API standard. Okay, great. So now we're going to walk through a slightly more complicated example.
That will be creating a project and then uploading a file to that project using the API. The first thing you'll need to do is create an API token, and you'll do that by visiting your settings page on the OSF. Then, let's see if I'm lucky... yes, okay, on the left-hand side there's a personal access tokens section. You can go ahead and create a new token and tell it what kinds of permissions you want; you would probably want full read and full write for this. I'll go ahead and discard this one because I already have a token. So now we'll move on. This is where I'm going to define my token and also the API URL I'm going to be using. I'm going to be using the staging OSF API; the links in the tutorial that's on GitHub point to the main API. These examples make a private project, but if you change the parameter to be public, just be aware that anything you make using the example you find on the GitHub page will be a public project, so keep it private if you're just testing things out, or if you don't mind, that's fine too.

So this sets your API token and also what URL you would like to make requests to. To make the code a little bit cleaner, I'm also going to define a few helper functions. One is to make a POST request for us: it's going to format the headers of the request in the way that we want, including the content type and the authorization header, which has the word "Bearer" and then our token in it. That can be kind of tricky sometimes when trying to make authorized requests to the API.
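Such helpers might look roughly like this sketch. The function names and the placeholder token are ours; the "Bearer " prefix in the Authorization header is the part the talk calls tricky. The talk used a staging API instance, but the production URL is shown here.

```python
import json

OSF_API_URL = 'https://api.osf.io/v2/'  # the talk used a staging instance instead
TOKEN = 'REPLACE-WITH-YOUR-PERSONAL-ACCESS-TOKEN'


def auth_headers(token):
    """Headers for authorized JSON requests; note the 'Bearer ' prefix."""
    return {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer {}'.format(token),
    }


def post_request(url, data, token=TOKEN):
    import requests  # pip install requests
    return requests.post(url, headers=auth_headers(token), data=json.dumps(data))


def put_request(url, data, token=TOKEN):
    import requests
    return requests.put(url, headers=auth_headers(token), data=json.dumps(data))


def get_request(url, token=TOKEN):
    # Authorization is optional for GETs, but lets you see private projects.
    import requests
    return requests.get(url, headers=auth_headers(token))
```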
This is just so we don't have to write all of that again, which is nice: it'll just POST our data for us using that helper function. Then we'll do something really similar for a GET request. Not that we actually need to authorize a GET request, but if we authorize our requests, we can get back our private projects as well, which is something we might be interested in. And then there's also a PUT request helper; PUT requests are usually used for updating a specific endpoint. So, very similar functions with slightly different methods, all formatting our headers with our OSF token.

Now we're going to define a Python dictionary with the data that we'd like to use for the node we're going to create. We're creating a node, so it has to be type "nodes", and then there are some attributes of the node. We'll use "Test project for webinar" as our title and "This is a test node created as an example" as the description; we can keep it at that. It's not public, so public is false, and it's a project, so the category is "project". We'll go ahead and save that particular dictionary as our node data, and then use our predefined helper function to post that request and print out what we get back. It looks a lot like the data that we saw before: when you POST to the nodes endpoint to create a new node, it gives you back exactly what it created, the full node detail view of the node you just created. We can actually use that node response to follow the files relationship and get the files link in that relationship.
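Putting those pieces together, creating the node and grabbing its files link might look like this sketch. The payload shape (type "nodes" plus the title, description, category, and public attributes) follows the description above; the helper name new_node_payload is ours, and the relationship path mirrors the links/related/href structure shown earlier for contributors.

```python
import json


def new_node_payload(title, description, category='project', public=False):
    """Build the JSON API payload for creating a node (a project)."""
    return {
        'data': {
            'type': 'nodes',
            'attributes': {
                'title': title,
                'description': description,
                'category': category,
                'public': public,
            },
        }
    }


if __name__ == '__main__':
    import requests  # pip install requests

    token = 'REPLACE-WITH-YOUR-TOKEN'
    headers = {'Content-Type': 'application/json',
               'Authorization': 'Bearer {}'.format(token)}
    node_data = new_node_payload('Test project for webinar',
                                 'This is a test node created as an example')
    response = requests.post('https://api.osf.io/v2/nodes/',
                             headers=headers, data=json.dumps(node_data))
    # The API echoes back the full detail view of the node it created;
    # its relationships include the files link we follow next.
    body = response.json()
    print(body['data']['relationships']['files']['links']['related']['href'])
```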
I can show you: if we go to the node list in the documentation and we look for files, there we go, it's the first one, the files relationship. This is the link that we're going to find on the node we just created, and this files link leads you to the node's providers list. As Ian mentioned before, we actually connect to a bunch of different storage add-ons, and OSF Storage is one that's included on every single node by default. But in this list of providers, if you have GitHub or Dropbox or Google Drive connected, those will appear right here along with OSF Storage in the list of file storage providers.

We've got one question here. Philip asks: what is the difference between the normal API and the staging API? Oh, so the staging API is just where we are constantly adding new features and testing things, and it can go down a lot and be unstable. It's basically an internal test environment for our QA to use before we release a feature out to the production, stable version of the OSF. That's all it is, just a place where we can test things before releasing them, and since I'm creating a bunch of example projects, I'm adding them to staging so that I'm not cluttering up my real OSF account.

Okay, so we're going to go ahead and... oh, that's what that other line was for; okay, that makes sense. All right, take two, try that again. So this is the response that we get when we follow that files link relationship, very similar to the list of providers over here. We're going to go through that response and find the upload link, which is what we found here, and that's what we're going to use. We're going to create a really simple text file that just contains some text, and we're going to use our PUT request helper function to send that file data, along with the upload link and a name for the file. We'll call it "newest cool file", just to show that it's live. Okay, and then we PUT to that upload link and get the information back. So we got back the information about the file that we just created. It's a cool file, and this is a whole bunch of information about that particular file, including its size. One other cool thing is that we also include hashes, SHA-256 and MD5, so you can verify that your file is in good condition.

All right, so the last thing we should do is go visit our file on the OSF. This is just visiting the project that we created and seeing the file. This is one fun part about staging: sometimes it can look really weird, and right now it looks really weird. But you can see my misspelled title here for our project, and if this were working (maybe I shouldn't have used staging for this particular thing), we would be able to see our files here on the OSF project that we just created. So yes, that is a super basic example of getting started with the OSF API. As Ian mentioned, this GitHub repository is connected to the OSF project, and it also includes installation instructions for the requirements that this notebook uses, if you're interested in running it yourself.

All right, thanks very much for running us through that, Erin. I'm just going to transition back over now. Next up, we are joined by Tim Head, who's the developer of the OSF CLI and head of Wild Tree Technologies, and he'll be walking you through what's possible with the OSF CLI tool.

Hi. I'll try to talk and share my screen simultaneously, which hopefully will work. So I will talk a little bit about a piece of software we built that is called osf-cli, or these days more often osf-client. It's two things: it's simultaneously a command line client and a Python library for interacting with the OSF.
I'll talk a little bit about that split personality. If you have any questions, please do ask them, and write them not into the chat but into the Q&A; I'll try to answer them either while I'm talking or at the very end.

Okay, how did I get started with this, or how did I get involved with the OSF? Back in a previous life I was a researcher at CERN, which is a particle physics research lab in Europe, and now I work with various companies and universities. Some of them are genomics people, and they have a lot of files that they would like to share, and uploading or downloading more than a few files by hand from osf.io is a bit tedious. So the question was: can we build something where you just type one command and it will fetch all the files, or send all the files? That was one of the first ideas behind building a command line client for the OSF. And especially in particle physics or in genomics, you very quickly have very big files; if you have a file that's several gigabytes large, uploading or downloading it through your browser is just a bit tedious. So that's where we've come from, and you will see, if you look at the osf-cli, that it's very focused on sending and fetching files. It doesn't do very much of all the other things that you can do with the OSF, and that's basically because of where we've come from.

We've actually got one question about that: using the API, what's the file upload size limit, if you or Erin want to take it? I think it's five gigabytes per file; that sounds about right, but I'm not actually sure. Yeah, that's the upload limit through the website as well, so I would be surprised if it were different. And osf-cli itself doesn't impose any limits, so whatever limit is built into the OSF is the limit.

One of the cool things about using the OSF, if you ask me, is that you can access the storage provided by the OSF, but you can also connect your GitHub project, and figshare, and Google Drive, and S3, and I think Rackspace, and lots of other kinds of storage backends. If that's what you're using in your research, usually when you work together with other people you find out that they're using something else, and then having to deal with the API of GitHub and the API of figshare and the API of Dropbox becomes very difficult if you want to automate things. The cool thing about the OSF is that you can connect them all to your project: you can connect your Dropbox and your collaborator's Google Drive and your student's GitHub, and access them all through the OSF, and the OSF takes care of all the fiddly things related to these storage backends having slightly different APIs. Right now osf-cli supports these four backends, and mostly it's a question of somebody wanting to use another one; adding it is usually one line of extra code and then trying it out. So if your favorite one is missing, the slides, when we upload them, will have a link to an issue we use to track which ones already work and which ones are still missing, and you can find instructions on how to make your favorite one work. And it works with public and private projects, so you can already use it before you want to make everything public, which is nice.

To install it, all you should have to do is type "pip install osfclient". It's a Python library, and pip is the package manager that most people are familiar with when they use Python. I'm not brave enough to do things live, so I recorded a little gif: if you type "pip install osfclient", it will download the few dependencies that it has, and after that you can type "osf", which is the name of the command line interface, followed by "-h", and it will print out a little help message, like we just saw there; let it go by once more. So if you try it out, one good way to find out whether at least the basics are working is to type "osf -h" after you've installed it, and it should print out that little help message. And it should work with Python 2 and Python 3; as far as I know, there's no reason why it shouldn't work with Python 2.

For all the examples in the following slides, I created a project on the OSF. The important thing, one thing you will have to look for when you create your own project or want to use this with your own project, is the combination of letters (it can also include digits) at the end of the URL. In this case it's "edzfp", and that is the project id. That's the little piece of information you're going to need to tell the osf-cli which project it should be talking to. What you see here is an overview of all the files in the project: in OSF Storage there's one image, and I also connected a GitHub repository to it, which contains one file, just a README. That way I can show you that it works both with the built-in OSF Storage and with one external storage provider, in this case GitHub.

To download a project, or what we call cloning an existing project, you use "osf clone" as the command. I'll wait for the gif to reset. Basically, you tell it the id of the project that you want to clone, and it will create a directory with the id of the project as the name. You saw just now that it downloaded the files and created a directory called "edzfp", and within that you see two subdirectories, for GitHub and OSF Storage. The beginning of the gif just shows that you can type "osf clone -h" if you don't remember how to use it, and it will give you a little help message. And now we see it again: you type "osf --project", give the id, then "clone", and it will download all the files from all the different storage backends. If you now do a listing of the directory, you see there's a new subdirectory, and inside it there are two subdirectories, one for OSF Storage and one for the GitHub storage, and if you look inside, you will see that they contain the image and the README that we had on GitHub.

Okay, so how about adding a new file? For that we have a command called "upload". Here at the beginning I just change into the directories that were created when I cloned the project and make a copy of a picture into the OSF Storage directory, so now there are two. Then I do "osf -p", give the project id again, then "upload", and then I tell it the name of the file locally and the name that I would like to give the file when it's on the OSF project. One thing you'll notice is that to upload a file, even to a public project, I need to give my username and my password. To tell osf what username to use, there's a "-u" command line option. So we've added a second picture, and then we do "osf -p", the id of the project, "upload", and the file, and it will say: no, you need to provide your username and your password. To do that you use "-u", which is what I'm doing now, and you give the username that you use to log into the OSF, and then it will ask you for your password, and it will upload the file if you type in the correct password.

We've got a quick question here: do these same commands work with components? I'm not sure what a component is. So, like a component within a project, if you have multiple subtrees within a project: if you're looking at an OSF page, on the right-hand side you can add components, and those are places where you can attach additional add-ons or just logically organize your project. Those are also nodes in the API sense. Okay, then it should probably work. From the fact that I had to ask you what a component is, you can see at what level I, and the people I work together with, use the OSF: we mostly use it to add different storage backends. But in principle, if it's a node, then it should work, and if it doesn't, create an issue on the GitHub repository that I'll have a link to at the very end, and somebody can add it, or you can add it yourself.

Okay, so in the gif we've now added the second file to the OSF Storage part. One thing that might get a bit tedious is having to type in the project id every time, as well as typing in your username every time. In this case, what the gif does is use "osf list" to try to list all the files in the project, and you see you have to specify the project. If you get bored of doing that, you can type "osf init", and it will ask you for the username and the project id and store them in a hidden file in that directory. From then on, you won't have to provide them on the command line anymore; you can just do "osf list" or "osf upload", and it will look in the file to fetch the username and project id associated with that directory.

We've got a couple of questions here on the same point about authentication: is there a way to use tokens or some other access mechanism besides having to type the password in for each operation? Right now you can't use a token. If we go to the next slide: if you get bored of typing in your password, because it will ask you every time, we don't let you store it in the file, because I think it's not a good idea to store your password in the clear. What you can do is set the OSF_PASSWORD environment variable to your password, and then, together with the configuration file, it will just use that to authenticate you. In principle we could also support tokens; it's just something that somebody needs to implement. There's no particular reason why it doesn't yet work, other than that nobody's gotten around to doing it. All right.
So yes, if you set the environment variable to your password, it will not ask you for it again. The reason we implemented it like this is that we very often use the client from inside Docker containers, and it's very easy to pass this environment variable in from the outside, which we've found very convenient.

Just in terms of handling file changes, we've got a question here: assume I upload a folder with the OSF client, and then some files change. Does the client tool upload all of the files again, or does it use hashes to identify changed files? How does that work?

If somebody uploads a new file, you can use osf fetch to fetch a remote file locally. You could type osf clone again, but that will overwrite whatever you have locally, so I think that's not a good idea; you should use osf fetch, and unfortunately you have to do that for each of the files you want to fetch. For uploading: by default, if you do osf upload with a local file and a remote file that already exists, it will refuse. The way to overwrite is to specify -f, or --force. So at the moment it leaves it to you to decide whether or not you want to overwrite. The idea is that we don't really want to re-implement all the conflict-handling and conflict-resolution logic you find in, for example, git, especially because these are often data files, and we've not found it very useful to look at diffs of data files to figure out who's right. It usually requires sending an email to somebody asking why the data has changed; that requires some humans talking to each other. So currently it's a fairly basic way of handling potential conflicts, but it will not let you silently overwrite a remote file you didn't realize existed; it won't overwrite without you explicitly asking for it.

Okay, so as I said, osfclient is both a command-line tool and a Python library, and that's quite useful. One person already asked whether they can use parts of it for building a web application that talks to the OSF. The code you see here is all the code behind the osf list command-line command. If you want to implement this yourself, in for example a web application, creating a list of all the files in all the storages of a project is a few lines of code, because you can keep using it as a library. You don't have to fit within the philosophy of the command-line client; you can integrate it into your own project.

We've got a couple of great points in the chat. Sherry has confirmed that working with components works just as well as with the project ID, so thank you for double-checking that, Sherry. And we have a question from Alex about whether there is a package to access the API in R.

There is a package. It's a work in progress, it's called osfr, and you can find it on GitHub. The person who maintains it seems very nice; we bumped into each other at some point and discussed building packages in different languages. The idea really is to build a library that you can use to do things to projects on the OSF, and almost accidentally we also built a command-line interface. That's not quite true, but it's how we try to think about where to put functionality, so that other people can build other things on top of it.

Okay, so osf-cli is an open-source project, and you can find it on GitHub. On this slightly blurry screenshot, the most important thing, number one, is where to find it, which is circled in red. But the other thing I think is important is the number of people
who have contributed: currently, nine people have contributed, and if you ask me, one of the goals is to increase that number and get more people contributing, because different people find different things useful. If people come and contribute, we can build something that does the things other people need, without it being a big drag on any individual person's time. We have instructions on how to contribute, which hopefully explain most of how people work together on this. If you're a GitHub veteran, maybe they're not so interesting for you, but if you're new to GitHub and git, then hopefully the instructions are at the right level of detail to get you going, or at least to get you to the point where you can ask a question about how to proceed. Feel free to ask if you ever need help with any of the technical mumbo jumbo; we're happy to help.

Okay, so that is the end of the presentation. At the bottom of the slide there's the full link to the GitHub project. That's it, so if there are more questions, feel free to ask.

All right, thanks so much, Tim. I'm just going to put up a quick thank-you slide with all the links: to the files for this project, to the API documentation, and again to the osf-cli, available on GitHub. Let's see, I think we hit most of the questions. Aaron, do you have some information on Phil's last question?

To answer: number one, we do not yet have a JavaScript wrapper, but that would be cool. We do have some rate limiting: if you're authenticated, it's 10,000 requests a day, and unauthenticated it's 100 an hour. For files we don't have specific rate limits, but if you're planning on uploading a large number of files, we can totally accommodate that; giving us a heads-up would be really great. Token or OAuth are both totally fine depending on what you're using them for: a token is useful for scripts or for one-time interactions, but OAuth is much better for web apps. Basic auth works just fine, but it's not as ideal.

All right, so we've got a couple of last questions. Will the API cover the feature to create a DOI for a project in the future?

That is a good question. I am not sure about that; it should be very easy to do.

All right, so we may check into that and let you know. And Phil has mentioned that he will be, very slowly, working on building a JavaScript wrapper, so everyone can keep their eyes out for that.

That's awesome; add it to the project as it comes in.

Yeah, totally. So I think that's about it, and we're at time. If you have any other questions, feel free to send them to us directly via the contact@osf.io email address, and thank you everyone for joining us.
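As a closing aside: the "few lines of code" Tim mentioned for listing every file through the Python library can be sketched like this. The attribute names (.storages, .files, .path) follow osfclient's public API as described in the talk; the project ID in the usage comment is a placeholder, and the helper function name is our own.

```python
def list_project_files(project):
    """Return the path of every file across all storage providers
    of an osfclient Project object (each storage exposes .files,
    and each file exposes .path)."""
    return [f.path for storage in project.storages for f in storage.files]

# Typical use (requires osfclient installed, network access, and
# credentials for private projects; 'abc12' is a placeholder id):
#   from osfclient import OSF
#   project = OSF().project('abc12')
#   for path in list_project_files(project):
#       print(path)
```

Because the traversal is a plain function over the project object, the same few lines drop straight into a web application or any other script without going through the command-line client.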