Welcome everyone to the webinar today. The topic of today's talk is the APIs from Google and ORCID and how to combine them. The speaker for this talk is Liz Krznarich from ORCID. Liz is part of the ORCID technical team. She is one of the people working on the interface design and development of ORCID. In addition, Liz is also involved in the automation of ORCID services and training technologies in the ORCID ecosystem. I met Liz initially at an open repositories conference where she was giving a talk about the ORCID API. At that time, we thought, OK, well, this could make a very good webinar. And that's why we are here. So with that introduction, Liz, I'm handing this over to you. All right. Great. Thanks so much, Amir. I am happy to be here today to do this webinar. And as Amir mentioned, today we're talking all about APIs. So particularly about Google APIs and ORCID APIs, but there are some topics that will apply generally. And as Amir mentioned, I am a developer at ORCID, which means that my work revolves around APIs. On the technical team at ORCID, most of the work that we do is involved in building APIs. But I also do a lot of work with internal applications at ORCID, so things that support our internal development and operations. So a lot of what I do is about getting all of our different systems to talk to each other so that we can automate some of our tasks and make everybody's life more efficient and more convenient. And that involves using a lot of APIs. So I do the work of both building APIs and using them. And through that experience, I have a few tips and tricks that I can hopefully share with you today. So we'll get started with a little background. I know everybody has different perspectives and maybe different technical levels, so we'll ground it in some basic discussion about APIs. So we've got all sorts of things up in the cloud these days.
So we've got things like photos, documents, data, and applications running in the cloud that we can access at any time from any computer. This is fabulous. So we can get that data and run those applications and do all sorts of great things. But sometimes there is not just one single data source or one application that does everything that we want. In order to do something very specific, you might need data or features from multiple different sources and applications. So that's where APIs come in. There are loads of APIs out there for lots of popular applications: Facebook, Dropbox, Amazon, Slack, Google Drive, all sorts of applications that we can combine in different ways to do lots of different things. So that's what we'll be focused on today: the idea of combining these applications to get exactly the functionality that we want. But first, we'll start out with: what is an API? We hear that term tossed around a lot. API stands for application programming interface. What it really means is a set of rules that allows computer applications to talk to each other. And those sets of rules aren't really universal. There's no governing body that defines exactly how an API has to work. Really, the software developers who develop applications get to decide how the API for their application works. There are some common architectures that lots of software developers use, but they all behave a little bit differently and have subtle nuances to them, little quirks that make using multiple APIs together kind of a challenge. So that said, why would you use an API? APIs allow you to do the same sorts of things that you can do in an application's user interface, but just a little bit more. So for example, with the Google Drive API, you can do things like create a new document. In the Google Drive interface, you can create a new document.
You can do the same thing in the API, but you have a little bit more control over the behavior. Also, you can do things faster and you can automate repetitive tasks. So if you need to create 900 Google Drive documents, you certainly don't want to sit there and do that by hand in the user interface. You can use the API to automate that. And finally, what we're focused on here today is the fact that when you start using APIs, you can combine data and functionality from all sorts of different applications into your own custom application, and maybe even combine it with some functionality and some data that you have in your own organization's applications, to come up with exactly the right thing that you need. So in this session, as I mentioned, we'll talk about a couple of different APIs that I've been working with quite a bit to automate some of the internal reporting that we do at ORCID. So we'll be talking about the Google Analytics API, the Google Drive API, Google Sheets, as well as the ORCID APIs, and the project that this is built around is creating a custom analytics report that's uploaded to Google Drive as a spreadsheet and then combined with some data that we pull in from the ORCID API. So before we dive into those, in case anybody's not too familiar with these tools: first of all, Google Analytics is a really popular and (to a certain extent) free tool that allows you to track user behavior on a website that you own. So things like how many users are visiting your site, what pages they are looking at, what country they are coming from, and you can customize that even more to get more granular information about what people are doing on your website. Next up, Google Drive. That's a pretty popular cloud file storage and sharing application from Google. And living within that is the Google Sheets application, which is basically Excel living in Google Drive. And finally, we have ORCID.
So if anybody is not familiar with ORCID: at ORCID we run a system of persistent identifiers for academic researchers, and we also allow them to create a digital record of their scholarly contributions. So things like publications that they've authored, affiliations, more recently peer review service that they've done, and all sorts of other aspects of their scholarly record. So working with that handful of different APIs, the steps that we'll be going over in this session are querying the Analytics API, setting up Google API credentials, getting analytics data from the API and uploading it to Google Drive as a spreadsheet, then setting up ORCID API credentials and getting some data from the ORCID API and adding it to Drive. So I know this is kind of a specific application that seems to focus on analytics and ORCID, but really, once we get into the Google APIs, and setting up credentials particularly, those aspects are applicable to almost all of the Google APIs. So there's kind of something in there for a lot of different use cases. And I should mention, I did post the link in the chat box, but I have the materials for this entire session, including the slides and some code samples and a whole lot of instruction on setting up and using the code samples, in GitHub. So if you wanna check that out, follow along, or just look at it later, it's all right there in GitHub for you. All right, so we're gonna start out looking at the Google Analytics API, and using the Analytics API assumes that you have a few prerequisites. Since the Analytics API is all about tracking user behavior on a website, the first prerequisite is that you have a website. So for the sake of this demonstration, I have a little sample website. Mine is a sample institutional repository that in theory allows users to download publications from the site, and for some reason my download buttons have recently disappeared.
But that's the site; there's a link to it in GitHub. So we have a site. The next prerequisite for working with the Google Analytics API is that you have set up a Google Analytics account and configured your website within that account. This part I'm not going to walk through step by step, because we're more concerned with getting into the APIs, but the link and some resources for getting that set up are in the slides and in the GitHub documents. So finally, you can get some basic information about who's visiting your site and what they're looking at with just a basic Google Analytics setup, without doing any further customization. But depending on your site, you might want to set up some custom tracking. So in my case, for my repository site, I want to know which items users are downloading. So I have in advance set up some tracking on the download links for my publications on my website. And that tracking I set up through a tool called Google Tag Manager, which is kind of a new Google tool that allows you to set up customized tracking in a user interface rather than straight in code. It makes it a little bit easier. So assuming we have those things in place, we can then dive into the Analytics API. And in the sort of perpetual parade of handy tools that Google offers, when you start querying the API, it's really handy to use this tool, the Analytics Query Explorer. What this does is allow you to build queries visually and see what data you get back from those queries without having to commit them to code and run them. So it just makes it simple to play around with the API. So once you open up the Query Explorer, you'll be prompted to sign into your Google account, and it will automatically pull in any sites that you have set up with Google Analytics into this section. So you don't even have to know anything about your account or the website property ID. You can just get right into the queries.
So let's look at the Analytics queries. What we're looking at here is basically the API version of what you might see in the Analytics dashboard. So the dashboard looks like this. It's kind of a pretty user interface to the Analytics data, but you're limited to what Google provides you with here in the dashboard. So you can download reports, but you can't really mix up the data with other data to create anything custom. So if we flip over to the Query Explorer, we can pull out some of the same data that we see in that user interface, but just in text, and we can really customize the queries to get exactly what we want. So some of the aspects that you can customize here, of course, are the start and end dates for your report. And then we get into a couple of other pieces: metrics and dimensions. So in Google Analytics, there are many, many dimensions and metrics that you can get data on. A dimension is how you want to break down the analytics data. So do you want something like users by the city that they're located in, or by the device that they're using? It's kind of the buckets of data that you want. Metrics, on the other hand, are the things that you're counting. So whether it's clicks or page views or something else entirely, it's what you're counting. So you're putting the metrics into your dimension buckets. Those are kind of the two key components of the Analytics API, and there is a huge reference to those, called the Dimensions & Metrics Explorer, that lets you look through all of the different dimensions and metrics and see what they mean. It's helpful to flip back and forth between the Dimensions & Metrics Explorer and the Query Explorer to get exactly what you want. Finally, the other really handy field that's available is filters.
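The dimensions-as-buckets, metrics-as-counts idea can be sketched in plain Python. The records below are toy data standing in for real Analytics hits, not actual API output:

```python
from collections import Counter

# Toy event data: each hit has a dimension value and a metric to count.
# These records are illustrative, not real Analytics API output.
hits = [
    {"country": "US", "pageviews": 3},
    {"country": "UK", "pageviews": 1},
    {"country": "US", "pageviews": 2},
]

# Dimension = the bucket (country); metric = what we count (pageviews).
by_country = Counter()
for hit in hits:
    by_country[hit["country"]] += hit["pageviews"]

print(by_country)  # Counter({'US': 5, 'UK': 1})
```

The Analytics API does this aggregation for you server-side; you just name the dimension and the metric in the query.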
So we don't necessarily want all of the information from the dimensions and metrics; we might wanna know something about users who only visited our homepage, or something like that. So we can use filters to get just the subset of data that we want. So for my site, I've got this query set up. My metric is total events, so total user interactions with my site. And then the dimension that I want is event label, and this is specific to some of the custom tracking that I've set up. So some of my events, events being clicks and downloads, have tracking attached to them that will tell me what the DOI of the item that was clicked or downloaded is. And filters: in this case, I only want information about things that users have downloaded. So there are plenty of other fields that you can set here, but I'm gonna focus on dimensions, metrics, and filters because those are kind of the big key pieces. I can run my query right in this window, and what I get back is not only the query results but a link to the report that I can go back to. And most importantly for developers, you get the full query URI. So when you go to code something up using this query, you have all the bits and pieces that you need right there that you can drop into your code. All right, so we've got our analytics query ready to go. Now we need to get access to the Google APIs to start writing something that can take that query and run it against the APIs. And this is where, for me, getting started, things started to get a little bit tricky, because there are just a lot of steps to go through that are not completely obvious at all stages. So to access Google APIs you need a set of credentials, so something like a username and password for the API.
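The full query URI that the Query Explorer hands you is really just those fields URL-encoded. A rough sketch of assembling one for a query like mine; the view ID is a placeholder, and the metric, dimension, and filter names follow the v3 Core Reporting API conventions:

```python
from urllib.parse import urlencode

# 'ga:XXXXXXXX' is a placeholder; a real query uses your Analytics view ID.
params = {
    "ids": "ga:XXXXXXXX",
    "start-date": "2016-01-01",
    "end-date": "2016-01-31",
    "metrics": "ga:totalEvents",            # what we're counting
    "dimensions": "ga:eventLabel",          # the buckets (DOI labels from custom tracking)
    "filters": "ga:eventAction==Download",  # only download events
}

# The v3 Core Reporting endpoint plus the encoded parameters.
query_uri = "https://www.googleapis.com/analytics/v3/data/ga?" + urlencode(params)
print(query_uri)
```

In practice you rarely build this URI by hand; the client library takes the same pieces as method arguments, as we'll see in the code samples.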
Since Google and most other API providers don't necessarily let just anyone in off the street interact with their API, they have to know something about who you are and what you're doing before you can get into the API. So for Google, to get access to any of the APIs, you need to create a Google Developer account and a Google Developer project. So we'll flip over to Google Developers. This is just another sort of Google account; you can log right in with your existing Google account. And what you'll need to do first is create a project. So when you first set up Google Developers, you'll be prompted to create a project right away. Otherwise, if you have an existing account, you can go up to the top menu and click on Create Project. The URL, by the way, up here is console.developers.google.com. You can also get there through just developers.google.com. And what a project is, is really just a bucket to put your API credentials into. It doesn't necessarily have to be related to a specific coding project. It is useful for keeping credentials separate and sharing them with different groups of users. For example, at ORCID, we pretty much use one Google Developers project to store all of our Google API credentials. So I'm gonna go ahead and create a new project. And this sometimes takes a few seconds, depending on how fast the Google APIs are running. So you can see that still spinning up in the upper right-hand corner. So once we get that set up, the next step will be to enable different Google APIs for that project. There are hundreds of Google APIs, and they don't all come pre-enabled when you set up a new project. You do have to add them one by one. So once your project is set up, you'll automatically be shipped over to the section where you can add APIs. The first one that we want to add is the Analytics API. Great, we'll click Enable.
So, as Google says, this API is enabled, but we don't actually have credentials for it, so we can't use it yet. So the next step is to click this button and add some credentials. At this point, when we're talking about credentials, this is again that username-and-password concept for accessing the API. And Google has a few different types of credentials that you can get for different APIs, depending on the type of project that you're working on. And this is where things can get a little bit tricky. If you are a developer, you might be familiar with OAuth authorization protocols, and that's what Google uses for its APIs. And there are a few different types of OAuth authorization flows. Some of them require user interaction and some of them don't. So it depends: if you have an application that you want to be able to run all by itself on a server or any machine, without somebody having to type in a username and password or do something that involves a browser and a user interface every time you want to run the application, then you need to set up access through what's called a service account. These other options, particularly the API key, require some sort of user interaction. So we're gonna focus on the service account authorization today, because it's kind of the trickiest, and also really useful, because we can set up an application or a script that we can set to run automatically every so often without anybody having to touch it or interact with it. So this little widget is designed to tell you what kind of credentials you need, but I know that we already need a service account. So I'm gonna click Service Account. And what a service account is, is kind of a shadow user in Google that can do things with the API. It's not really a person, and it doesn't have all of the same privileges that a person does, but it can access different Google API resources on behalf of an application. So I'm gonna create a new service account.
For the sake of this demo, I'm gonna make it a project owner so it can have all sorts of access to all sorts of different things; the permissions that you want may vary depending on your project. So here's our service account ID, which we'll need later. We also want a new private key. This is a file that will download, and we'll use it in lieu of a password. So this will basically be our password for the API. And I'm gonna make this P12 format. Depending on what you're doing, especially if you're working with Google Apps for business where you have your own domain, you might also want to enable Google Apps domain-wide delegation. This gives your service account access to everything on the Google Apps domain on the API side. It doesn't automatically give it sharing permissions for those things, but it allows the service account to access those things through the API if sharing permission is granted. So I'm going to enable that. And, related to some of the other authorization flows, you do need to enter a product name, even though it might not be useful for your particular use case. So we'll create that, and it's prompting us to download this P12 key file, which is the thing that will act as our password in our code later. All right, so that's all set to go. So we've got Analytics enabled. The other app that we're going to need access to is Google Drive, and we can go ahead and activate that at the same time. This process is pretty much the same for all of the different Google APIs. So I'm gonna go back to the library, pick out the Drive API, and enable that. And now, since we have that service account already configured, we don't need to go back and configure credentials again for it. It's just enabled and set to go. The other one that we need is Sheets. So that's set to go as well.
So when we're working with the Analytics API, the one last thing that you also need to do is add your service account user to your Analytics account. That's one step that can kind of trip you up. It's under User Management in your Analytics administration. So you just add that user like any other user, using the fake email address, the service account ID that was generated when you set up your service account. Finally, to get ready for using Google Drive, we'll need to create a folder and give our service account access to it. I've actually already done that in Drive. The reason being, we want to create sort of a custom report from Analytics and drop it into a Google Drive folder, and it's a little easier if we already have that Drive folder set up and also shared with that service account. All right, so all the parts and pieces are set to go. That's a pretty tedious set of steps, and I have them all listed in the GitHub documentation. But now it's time to get into some coding, or at least talk about what we can do with all of that setup. So before we go on, are there any questions about configuring access to Google APIs? Well, Liz, I have one question. We talked about creating a Google developer account for Google services. So I know some of the Google APIs are not free, like the Google Cloud services, for example: you need to pay if you use beyond certain units of computation. So for the APIs that we are using in this presentation, would it be correct to say all of them are free APIs? All of the ones we're using in this presentation are free APIs, yes. Certainly, when you get into higher-volume usage of the Analytics API, there is a paid version of the Analytics API, or an upgrade. But the ones that we're talking about today, at a basic level, are free. All right, so we're all set up, and we can finally get back to writing those queries, getting analytics data, and uploading it into Drive.
For the code examples, I'm going to use Python, but all of these things can certainly be done in any language that you prefer to use. The one thing to keep in mind is that Google has libraries for quite a few languages. So this is the Google Analytics documentation. All of the other APIs have documentation that's formatted similarly, and most of them have pretty robust client libraries that make interacting with the APIs really easy. So that's the first thing that you're going to want to get started with. And if you look at the code samples, you can see that we import the oauth2client and Google API client libraries right off the bat. So with those client libraries available and up and running, the next step is to authenticate to the APIs that you want to use. This is where that P12 client secret file comes in, when you're using a service account at least. And as for the type of OAuth 2 authorization, for those of you who may have heard of OAuth before or are familiar with authorization protocols, the type that we're using, and this is the important keyword to look up if you're searching Stack Overflow or just Googling for help, is SignedJwtAssertionCredentials. It's kind of a less common OAuth 2 authorization flow, and there's not a lot of information on the Google websites about it. There's kind of an overview document that describes the whole process, but that was definitely a tricky piece to figure out. So I definitely recommend either taking a look at this code sample or doing some searching for signed JWT assertion credential authorization if you have any trouble getting the authorization piece done. But on the Python side, right over here, this is where we're doing the authorization. So using the Google API client libraries, we are sending in some information about our service account and key, and the API that we're looking to authorize into. So that piece works the same for all of the APIs that we're working with.
So it's a piece of code that you can reuse across an application. Next up, getting back to those queries that we looked at in the Query Explorer over here, we can now take the bits of that query and put them into code. So if we look at the URL, we've got our metrics, dimensions, and filters that we need to send into the Google API right down here. And right here is where those pieces get sent in. Within the Google client library for the Analytics API, we've got some methods that we can send that data right into and run those queries. So finally, uploading data to Google Drive. We do the authentication the same way as for Analytics, but instead of sending in our query information, we send in some file metadata. And back over in the query here, when we got that query data back, I put it basically into a CSV file in memory, and then we're passing that over as the data for the Google Drive file. So in the part of my code that actually controls all of the scripts, at this point, if I comment out the ORCID API components and run the script, we should get back a file that has some of our analytics data in it. There we go. So it's got the data from the query that I just ran. So this is what we have. It's not a terribly fancy file, but we've got our data out of Analytics, and we can see some of the same bits and pieces that we saw in the Query Explorer. Also, I have some headers that I added in the script itself. So we've got all that into a CSV and uploaded into Google Drive. And the nice part about this is that once you've got it in Google Drive, you can share it with any users you want and also allow other users or other applications to make changes to it. And that's what we're going to do next. So I have some blank spaces in here. I can see how many users have downloaded some of my DOIs, but in this case, I wanna know how many of those digital object identifiers are connected to ORCID records. So I've got some space in here to add that information.
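That "CSV file in memory" step is roughly this, sketched with sample rows standing in for real query results (the header names and values are illustrative, not the actual API output):

```python
import csv
import io

# Sample rows shaped like the Analytics response: [eventLabel, totalEvents].
# These values are illustrative placeholders, not real data.
rows = [
    ["10.1234/abc", "42"],
    ["10.1234/def", "7"],
]

# Build the CSV in memory rather than writing a file to disk.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["DOI", "Downloads", "ORCID iDs"])  # headers added in the script
writer.writerows(rows)

# This string is what gets passed as the body of the Drive file upload.
csv_data = buf.getvalue()
print(csv_data)
```

The empty "ORCID iDs" column is the blank space that the ORCID step will fill in later.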
And next up, we're going to query the ORCID APIs and then use the Sheets API to edit this existing spreadsheet. So we're gonna switch directions a little bit and talk about the ORCID API. So again, if you're not too familiar with ORCID, I'll flip over to the ORCID website. This is a digital identifier for researchers that can be linked to publications and funding and peer review activities and a few other types of activities, to help attribute those scholarly contributions to the right person. Since lots of people have the same or similar names, it's really helpful to have a unique digital identifier linked to those individuals, instead of just trying to connect those items to names or to internal identifiers that don't cross-communicate well with other systems. So the ORCID identifier also includes the concept of an ORCID record. We have this visual representation of a researcher's activities here in the ORCID record. There's a public version as well. So this is what you would see in the website user interface, but you can also get at all of this publicly visible data through our APIs. So we offer two different APIs. One is public; it's freely available. That's the one we're going to be taking a look at today. It allows you to search and retrieve all of the information that's publicly visible in the ORCID system. So the things that you can see on the website, you can also get through the APIs. Another feature is that it allows you to get authenticated ORCID iDs from users, which is not a part we're gonna be talking about at the moment, but it is there. We also have a paid member API that allows member organizations to write data into the system and also includes a few other features. But we're focused on free APIs today, so we'll talk about getting access to the public API. So like the Google APIs, we also require users to use credentials when they access the API.
That way, we have a little bit of information and a little bit of control over how the API is being used. And to do that, you first need to create an ORCID account, so create an ORCID identifier, and it doesn't matter if you're not actually a researcher; it's perfectly fine to create an ORCID account. To do that, you can just go to the ORCID website and click any of the many Register for an ORCID iD links that we have. All that you need to give us is a name and an email address, and then you'll be set up with your ORCID account. Once you sign into your account, you'll see at the top of the screen that you have this Developer Tools tab, and that's the spot where you can go to create some API credentials. I'll flip back to the presentation, since I've already created that account. So if you haven't created credentials yet, you'll see a big blue button that says Register for the ORCID public API. Once you click that, you'll see a screen that prompts you to type in some information about your public API application, and that doesn't necessarily need to be too specific for just searching the API. And once you save that, you'll get a set of credentials that consists of a client ID and a client secret, so kind of like a username and password. Those are the two pieces that you'll need to get information from the APIs. So once you have those things, you can use them to query and to get authenticated iDs. And in the next steps, we are going to use these credentials to ask the ORCID API which ORCID iDs are connected to the DOIs that we're tracking in Analytics. To get that information, there are two steps in the API. And I'm gonna show those using just some basic HTTP requests, which can be run in basically any programming language. So for those of you who aren't quite sure what that means: an HTTP request is sort of the same as what you do when you visit a website in a browser.
Only when you're using a terminal application like curl, you can do quite a few more things with HTTP requests than you can with just a browser URL bar. All right, so first up, we need to use that client ID and client secret to get an access token. The access token is what really allows you to run your queries. So that token, which is a long string of letters and numbers, is what you'll send along with the query. And in this case, we need to send in some information about the scope, what we're looking to do with the APIs, as well as the environment that we're working with. So once we get that token, we can use it to run some queries on the ORCID API. And we do have quite a bit of documentation on searching. There are lots of different parts and pieces of users' ORCID records that you can search. So we have keyword searching and fielded searching as well. For this case, we're focused on searching for digital object identifiers. That query looks like this here on that line. When we translate it over into Python, in this ORCID API Python file, the query section looks like this. So we're just sending a request with that same query string. And in this case, we are taking that list of DOIs from Analytics and just repeating that query over and over again for those DOIs. And finally, as I mentioned, we're gonna edit the existing spreadsheet to add the ORCID data in. A note on the Sheets API: when I put together this presentation, the newest version of the Sheets API hadn't yet been released, and now it has. Previously, it was pretty common, especially in Python, to use another library on top of the Sheets API, because the Sheets API is a little finicky. It doesn't have a lot of the features that are needed to make it really useful. So I'm using the Python gspread library, which just makes it easier to do things like search for data in an existing spreadsheet and update cells in a spreadsheet in a batch.
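The two-step flow, token then query, can be sketched like this. The credentials are placeholders, and the exact search field names and endpoint versions change over time, so check the current ORCID API documentation before using them; this only shows the shape of the requests:

```python
from urllib.parse import urlencode

# Placeholder credentials from the ORCID Developer Tools page.
client_id = "APP-XXXXXXXXXXXXXXXX"
client_secret = "your-client-secret"

# Step 1: the body of the token request, POSTed to the public API's
# /oauth/token endpoint. /read-public is the public-API search scope.
token_payload = urlencode({
    "client_id": client_id,
    "client_secret": client_secret,
    "grant_type": "client_credentials",
    "scope": "/read-public",
})

# Step 2: a fielded DOI search. The 'doi-self' field name is an example;
# the available search fields depend on the API version in use.
doi = "10.1234/abc"
search_url = "https://pub.orcid.org/search/?" + urlencode({"q": f'doi-self:"{doi}"'})
print(search_url)
```

The script then repeats step 2 for each DOI pulled from Analytics, sending the access token from step 1 in an Authorization header with every search request.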
Since the newer version of the Google Sheets API has been released, there are quite a few more features. It still doesn't have searching built in, so gspread is still useful in certain cases. So what we're doing with the Sheets API portion of this code is getting the Drive file ID of the file that we created. Every Google Docs file has a file ID, which you'll see at the end of the URL string. That's the important part that you need in order to be able to manipulate the file. In fact, folders have IDs as well, and you use those in the API to find the right folder and list the files inside. And then in this code, all that we're doing is looking for the cells that contain the DOIs that we're looking for and dropping those pieces of information from the ORCID API in next to them. So if I uncomment the ORCID bits in my report script and run it again, we should see those things drop in. And this will take a few more minutes, because the Sheets API is kind of slow and we're also querying ORCID, so we're adding a few more steps to the process. So while that's running, we can talk about just a few more things that you can do once you have that data in something like a Google spreadsheet. So in addition to just the plain Sheets API, you can add in charts from the Google Charts API, and Google also has a JavaScript-based scripting language that lets you do some formatting. So you can add fonts and colors and make your sheets look a bit prettier. Of course, you could also pull the data from that sheet out using the Drive API and manipulate it with some other application. Now, a couple of tips about things that tripped me up a few times when using the Google APIs. First, when you're using those P12 files and authenticating to the Google APIs, you're actually generating a token in the process that's just passed around in your application.
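What the gspread part of the code does, find the cell for each DOI and then write all the updates in one batch, can be sketched in plain Python. The sheet contents and ORCID counts below are made-up stand-ins for the real spreadsheet and API results:

```python
# A stand-in for the spreadsheet contents, one list per row.
sheet_rows = [
    ["DOI", "Downloads", "ORCID iDs"],
    ["10.1234/abc", "42", ""],
    ["10.1234/def", "7", ""],
]

# A stand-in for ORCID API results: how many records link each DOI.
orcid_results = {"10.1234/abc": 3, "10.1234/def": 1}

# Collect (row, col, value) updates first, then apply them in one pass.
# With gspread you'd send these as a single batch update to the Sheets API
# instead of one request per cell, which is much faster.
updates = []
for i, row in enumerate(sheet_rows):
    if row[0] in orcid_results:
        updates.append((i, 2, str(orcid_results[row[0]])))

for r, c, value in updates:
    sheet_rows[r][c] = value

print(sheet_rows[1])  # ['10.1234/abc', '42', '3']
```

Batching matters here precisely because the Sheets API is slow: one request with all the cell changes beats a round trip per cell.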
Unlike the ORCID API tokens, which are valid long-term (they're valid for 20 years), Google API tokens expire in an hour. So if you're running code that loops through a lot of different files or folders, you just need to make sure you account for the fact that you need a valid token and that your tokens will expire in an hour. Secondly, working with free APIs: it's great that they're free, but it also comes with the expectation that you might not always have perfect access to the APIs all the time. Google APIs in particular, since they're popular, can have a heavy load, and that load can vary. They're a little bit twitchy; you'll get some sporadic, unexplained 500 "service isn't available" sorts of errors, and that's just part of working with free APIs, especially when you're running lots of queries over and over again. Particularly with the Analytics API, there are definitely query limits and rate limits that you can run up against if you're running lots of queries over and over again. Finally, as I mentioned, the Sheets API is pretty slow, so it's good to try to combine as many actions into one request as possible. I think my script should be finished now, and it is. All right, I can see that my ORCID API data was dropped into those couple of spots where I had placeholders. So again, the code for this is all available in GitHub, along with the presentation and a whole lot of URLs for the resources, and I think we can go ahead and paste that URL into the chat box again. But at this point, I think we have a few minutes left for questions. I see there are a couple in the chat box. One is whether you could choose to store your resulting spreadsheet locally and not in Google Drive. Certainly, that's definitely a possibility.
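As an illustration of combining as many actions as possible into one request, a helper like this (the names are illustrative) can collect a pile of cell edits into a single gspread `batch_update` payload, so many edits cost one Sheets API call instead of one call per cell:

```python
def col_letter(col):
    """1-based column number -> spreadsheet column letters (1 -> 'A', 27 -> 'AA')."""
    letters = ""
    while col:
        col, rem = divmod(col - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters

def batch_payload(updates):
    """Turn {(row, col): value} into one gspread batch_update payload,
    so many cell edits go out as a single (slow) Sheets API request."""
    return [
        {"range": "%s%d" % (col_letter(c), r), "values": [[v]]}
        for (r, c), v in sorted(updates.items())
    ]

# One request instead of one per cell (ws is a gspread Worksheet):
# ws.batch_update(batch_payload({(2, 2): "0000-0002-...", (3, 2): "0000-0001-..."}))
```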
Yep, definitely possible, but for the sake of this demo, one of the handy things about uploading it to Drive is that you can share it with other people, either at your organization or outside of it. Sorry, Liz, for the purpose of the recording I'll just repeat that question, which was: can we store the spreadsheet, or the result of the script, on a local drive rather than in Google Drive? And you've already given the answer. Now, the next question here, and I'm reading the actual question, is: what languages does the ORCID API support? So our API is a RESTful API, which means you interact with it using HTTP requests, and with that in mind, you can use any language that can send and receive HTTP requests. We have libraries out there for everything ranging from PHP to JavaScript to Java, also Python, and we have a couple of people using .NET as well, but anything you can think of that can make HTTP requests will work. Our support and documentation site is members.orcid.org, and you don't actually need to be a member to access it. There you'll find all of the documentation for the APIs, and we also have some example code and some libraries for different languages. We have Go and Ruby as well, definitely. Okay, well, that is a comprehensive list. We have one more question that's actually related to the API. The question, and I'm just reading the actual question and then adding something on top of it, is whether the ORCID API can be used to link publication, funding, and grants info to authors, perhaps using the Crossref API. So if I understood the question correctly, it's about a mashup of APIs using ORCID to find what publications or grants can be linked to a person. Is there any function like this in the ORCID system? So that would be to find which ORCID identifiers are linked to funding.
We do have a funding section on the ORCID record that either users themselves can add funding items to, or that other organizations outside of ORCID can populate with funding information on users' ORCID records. So you can get funding information out of the ORCID API, and again, just check the documentation to see where to get those things. There are also some funding agencies that are putting ORCID identifiers into their data: Crossref and FundRef items sometimes have ORCID identifiers in them. ÜberResearch is one of the organizations we work with that also includes ORCID iDs, or allows agencies to populate ORCID iDs, and that's got a lot of federal funding information. That's not to say that all of those grants have ORCID identifiers populated into them, but we are seeing more and more organizations that are adding ORCID identifiers to their funding items and then populating them back into ORCID. Okay, I think let's just answer one more question, because we are technically at our time limit. The last question is: what sorts of ways or methods do you use to cope with API errors? That's a really good question, particularly with the Google APIs, since the errors can be really unpredictable and not at all related to your code. I have some pretty long-running scripts that use the Google APIs and, to be honest, some pretty blunt workarounds. One thing for the token check: in some of my scripts I have a sort of time-based token check that records the current time when I get the token and then checks routinely throughout the script to see whether it's still below the token expiration time. If it's above a certain time, it just gets a new token automatically.
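A blunt version of that time-based check might look like this (the one-hour lifetime matches Google's token expiry mentioned earlier; `fetch` stands in for whatever function actually obtains a fresh token, and the five-minute safety margin is an arbitrary choice):

```python
import time

class TokenCache:
    """Cache an API token and refresh it before the one-hour expiry,
    using only the local clock so no extra validation requests eat quota."""

    def __init__(self, fetch, lifetime=3600, margin=300, clock=time.time):
        self.fetch = fetch        # function that obtains a fresh token
        self.lifetime = lifetime  # Google tokens last about an hour
        self.margin = margin      # refresh five minutes early, to be safe
        self.clock = clock        # injectable clock, handy for testing
        self.token = None
        self.obtained = None

    def get(self):
        """Return the cached token, fetching a new one if it's near expiry."""
        now = self.clock()
        if self.token is None or now - self.obtained > self.lifetime - self.margin:
            self.token = self.fetch()
            self.obtained = now
        return self.token
```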
Certainly you could also check to see whether the token is valid, but because of rate limits and API errors, I like to keep that check on the script side so I'm not using up my quota. In terms of handling the random errors, if I can't monitor the logging output as it's running, I'll sometimes just use a bash script to retry it until it succeeds. That can present some problems if it's retrying because there's something wrong with your code: you'll hit that API rate limit really quickly as your code retries over and over again. But those are some of the things that I've been doing. Okay. Thank you very much, Liz. That was a great presentation. Thanks everyone for coming to this talk, and I hope to see you in the next webinar.