So good afternoon everybody, and thank you for joining us today. This is the third workshop in our spring 2015 public scholarship series. Today's workshop is going to be on altmetrics. We are calling these new measures of scholarly impact, although I realize they are not new to many of you. Today's presenters are from our own library: Marta Blattic, who is the freshman and instruction services librarian here. She's published widely in both scholarly and professional journals on the importance of bibliometrics and impact measurement to contemporary scholarship. She holds a PhD in English from our own CUNY Graduate Center. Joining her today is Margaret Smith, who is the librarian for physical sciences at New York University. She was an early adopter of alternative and comprehensive measures for assessing scholarly impact, and her expertise extends across the spectrum of digital public scholarship. She writes and presents about social media, data management, open access, and other topics on the implementation of new academic technologies. And finally, we are joined by Mary Ellen Sins, who is the director of sales for Plum Analytics, a division of EBSCO Information Services. Plum's mission is to provide individual researchers, labs, departments, and institutions the tools to convey the comprehensive and timely impact of their scholarly output. Plum tracks more than 20 different types of artifacts, with impact metrics in five categories. So we will have presentations from all three of our presenters, and hopefully we'll have some time at the end for audience questions.

We'll begin with the role of metrics in academia and in the research context in general, but also a bit about the traditional metrics that are not exactly going away, but are being supplemented and occasionally challenged by the emerging set of altmetrics. When it comes to altmetrics, we will do our best to introduce you, if you need to be introduced, to what some of them are, how they are communicated, and why you, as a researcher, as a faculty member, as somebody who's possibly evaluating others, looking at grant proposals or submitting grants for funding, should care; what altmetrics can and cannot do for you; and also how to get started, how to get more comfortable with using these newly emerging markers to your own benefit, to your own advantage. So let's start with considering why we gather today, why we care about metrics. I thought that the easiest way to start here at John Jay is to bring up the part of Form C that all of us, myself included, who are up for evaluation each year, who are on the tenure clock or later going up for promotion, face on a pretty regular basis. The form was revised in 2011, and one of the additions, and I checked the older version and it was not there, was a little aside, a guide mentioning that if you know what kinds of measurements you can use to bolster your case, your self-presentation as a faculty member, as a researcher, why don't you include some of the measures conventional to your discipline. The suggested measures include, for journal articles, an indication of the quality of the scholarly outlet, for example the impact factor, acceptance rate, rejection rate, circulation numbers, and so on, and they also ask us to include the number of citations to whatever publications we have out there, right?
So it was a very small addition, but for us in the library it was not so tiny, because in the days before Form C was due that year we were getting quite a few questions, quite a few queries: how do I gather that information, where is it available to me, and how can I best present it on my new Form C? So the requirement for metrics is something that here at John Jay we already live with, right? That's affecting us as individual faculty members, individual researchers. Then of course we care about them because, as I mentioned, that's very much what happens when we are up for tenure or promotion. We also see it more and more in the hiring process, when candidates are including and speaking about their scholarly research impact in light of existing measurements. And last but not least, funding agencies themselves do appreciate it when somebody who's applying for funding lets them know exactly why they're a researcher worth getting the money. Also, when it comes to reporting your scholarship, your research, back to the funding agencies, again, a call for metrics is definitely going to be part of the exchange. Now, if you thought that's already getting a bit overwhelming, a lot of expectations, a lot of pressure, believe it or not, the emphasis on metrics extends beyond individuals. It also goes higher. Entire departments, schools, and research centers are being evaluated and have to present their contribution, have to prove why they deserve to be in existence, how they can be funded better, how much they contribute to whoever is their academic host or sponsor. Now, the other level is actually the national level of assessment. And it's happening in quite a few countries; I just listed three of them that seem to have undertaken it at the most extensive level. That would be the UK, where some kind of assessment has been going on, I think, since about the mid-80s; Australia and Germany are the other examples. These national assessments are huge. They're very time-consuming exercises, with dedicated departments, procedures, and protocols in place. And all of that is to show and prove that the publicly funded research bodies, researchers, agencies, and research projects are actually worthy of continued support.

To give you some context for these altmetrics that we're going to talk about, we're going to give you some background on the traditional metrics that have been used to show the impact of scholarly outputs. These can be measured on three different levels: the journal level, the author level, and the article level. We're going to go into two of these in more detail. Under the journal-level metrics, you have the impact factor. This is kind of an average, the average number of cites to content in the journal. SJR and Eigenfactor are similar in that they are journal-level; however, they're more like weighted averages. So they are looking at things like self-citation. They're also looking at who or what the citing entity is, and based on the quality of that entity, or how many citations are going to it, a citation may weigh more or less. The other two levels, article and author, are not as developed yet, so we mostly don't have measurements as complicated. You can have basic numbers: the number of times cited for an article; for an author, the number of publications and the number of times cited. These can then sometimes be broken down in terms of where you have published.
So, the number of publications in which journals, or the number of times cited in which years; you can break it down like that. And then the h-index actually looks at both of these numbers, the number of publications and the number of times cited, to give a holistic overview of an author's output. So now we're going to look at two of these metrics in more detail. The impact factor is the one that everybody knows. It was first thought up, dreamed up, by Eugene Garfield in 1955. And what it is, is really kind of an average: you're looking at the number of citations to articles in a journal in the years prior to now, and you're dividing them by the number of articles published by that journal in those years. So it's very simple, no complicated math. And one place you can get this number is from a database called Journal Citation Reports, JCR. If you click through on the little number that it gives you for a journal, this is what it tells you; it actually does the math for you. It's a very transparent calculation. It shows you: we have this many citations in 2012 to items published in these years. It adds them for you, and then it shows you the division. So you could calculate this on your own. The catch is getting these counts; that's the labor that they do. They are gathering all the counts for the publications that are in whichever tool you're looking in. You will note that this number makes no sense in a vacuum, right? Even setting aside that you don't know which journal this is, you still couldn't tell whether 6.275 is good or not. It needs to be in comparison with some other journals, so you can say, oh, this one was cited more for its output than that one. However, you are still going to have some problems comparing journals. For example, a review journal is going to be different from a journal that publishes original results of research; the citation patterns are going to be different. There are going to be more citations to a review journal. Also, this is going to vary across disciplines: you are going to have way more citations in a discipline where the article is the primary scholarly output. Also, this is only journals, right? Only journal articles. So it's limited in scope: the content you're looking at is journal articles, and, for the most part, you're only looking at citations from other journal articles. These are not exciting new critiques of the impact factor; people have talked about this a lot. Here's an excerpt from an article talking about the caveats of the journal impact factor. It really never was intended to evaluate researchers or individual articles. It is an average; it is not very granular. And then it relies on a skewed distribution. We know what they call the 80-20 rule, which says that 20% of the papers account for a much larger share, roughly 80%, of the citations. This is why you really can't say that the performance of an individual researcher or a paper is related to the impact of the journal. We talked about the review journals. And then impact factors are also going to vary across disciplines, so you really cannot compare two journals from two different disciplines. Those are the critiques. And yet the impact factor is nice: it's very easy to calculate, and it's very easy to imagine.
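To make the arithmetic concrete, here is a minimal sketch of that 2012 calculation in code. The counts are hypothetical, chosen only so that they reproduce the 6.275 figure from the slide; JCR's real labor is gathering the actual counts.

```python
# Impact factor, as described above: citations in year Y to items the journal
# published in the two prior years, divided by the number of citable items it
# published in those years. The counts below are hypothetical.
cites_2012_to_2010_items = 1500
cites_2012_to_2011_items = 1010
items_published_2010_2011 = 400

impact_factor_2012 = (
    (cites_2012_to_2010_items + cites_2012_to_2011_items)
    / items_published_2010_2011
)
print(round(impact_factor_2012, 3))  # -> 6.275
```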
It proved very popular. So in 2005, we have a fellow with the last name of Hirsch. He comes up with a metric to evaluate author-level performance, which he conveniently names the h-index; his last name is Hirsch. He's looking at the number of an author's publications compared to the number of citations those publications have, and looking to see where they match up. This was a number that he invented, so it's kind of hard to imagine; it's easier to see visually. Here we have a graph. On the x-axis, we have the number of documents written by this person, and on the y-axis, the number of citations to those documents. And you can see some of the papers, probably the older ones, right, because they've had more time to be cited, have much higher citation numbers. And then it drops off; maybe you have some newer ones, maybe you have some duds, and they're going to have fewer citations. And the h-index is the number at which those meet. For this researcher, the h-index is about 32; it's where the 45-degree line meets this little graph of their stuff. This was intended to capture a more holistic view of the author's outputs. It's kind of like the lifetime achievement award of metrics. The longer you are alive, the longer you are in your career, the higher your h-index will tend to be, because your stuff has been around longer, it can be cited, and you've had more time to write stuff. So you really can't compare an early-career professional to someone with tenure; the numbers aren't going to match up. This is also affected by your discipline. These counts are also going to tend to be journal articles; if in your discipline you do not write journal articles, you write books, your number is going to be very different from someone in a discipline where they do write journal articles. For example, if you wrote a wonderful book, if you wrote To Kill a Mockingbird, right, it had a lot of impact, but you don't write other books, your h-index is still going to be just one. No matter how many people cite your amazing book, it can never get any bigger. Also, if you have written a lot of stuff, your number really can only get bigger, even after you are dead. This is actually a criticism: this person is no longer writing anything, but their h-index is going to keep growing and growing, because that's just the nature of the number that was constructed for this metric. And then it also assumes two things. One is that the tools can easily disambiguate author names. These numbers tend to be calculated by some sort of automated count; it's going to look through all of the citations and figure out which ones go to which author. But if you have a name like I do, Margaret Smith, this is going to be a really tricky problem for an automated system to solve, nearly impossible. So if you get an h-index graph for me from the database Scopus, it will be very impressive; I am very prolific, it turns out. That is something you have to keep in mind with the h-index. Also, like the impact factor, it assumes that these citations are a good thing, right? It assumes my work is being cited because it's amazing, and not because it's the worst one and everybody's writing, "unlike Smith 2004, my research is correct." I could get a lot of those citations because maybe I'm really terrible, and my h-index would still grow and grow and grow.
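Since the definition is easier to see in code than in prose, here is a minimal sketch of the h-index; the citation counts are hypothetical, and the single-book case echoes the To Kill a Mockingbird example above.

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # this paper still sits above the 45-degree line
            h = rank
        else:
            break
    return h

print(h_index([48, 33, 30, 12, 5, 4, 1]))  # -> 5
print(h_index([10000]))  # -> 1: one hugely cited book still gives an h of 1
```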
So these are criticisms built into these numbers, things you have to think about when you are reporting your results, or when you're looking at other people's results: what do those numbers actually mean? A general overview of the criticisms of these traditional metrics. They are limited in scope, in that you are looking at journal articles, and then at other journal articles that cite those journal articles. That means they're also limited in scope as far as disciplines go. They're limited in scope in another way, too, which is that the tools that calculate these numbers, or gather up all these citation counts, themselves have limited scope. There's not one tool you could look at to get your definitive set of publications or your definitive number of citations, because each covers a subset. So looking at one gives you a limited idea of impact, but it can't give you your definitive impact. And then they lack granularity. By this I mean they're averages, right? The impact is distributed, maybe over the whole journal or maybe over the whole author, so you can't see the granular detail of the impact. They ignore the context of citation, whether they're citing me because I'm right or because I'm wrong. And then they are also social. They're social in the way they've been defined, right? But they're also social in the way they can be skewed, in the way people interact with them. Citation stacking is an example of this. This is when journals either inflate their self-citations, encouraging authors to cite the journal they're publishing in, or make secret arrangements behind the scenes where articles in one journal cite the friend journal, in an effort to bump up the impact factor of that journal. It sounds very mafia-like, it sounds very secret, and it is surprising how often it happens. When this happens, it's a big scandal. This was two years ago, I guess, when they banned some journals; they have to kick them out of the set of journals, because they're skewing the results and rendering the metric, you know, not as valuable as it used to be. So this is just another criticism, something to keep in mind when looking at these different kinds of metrics. So those are the traditional metrics; they are print-based. They are counting what you could count when we had print, even though we had these tools counting the citations. It's nothing like now. Now, I call it the digital era, we have new ways to document and to share research. We can create new things, and we can also share old things faster. We have e-journals, e-books, blogs, wikis, data sets, PowerPoint presentations, code. And then we also have new ways to measure how these things impact the discipline or impact other people. We can look at page views, downloads, social media mentions, your Wikipedia cites. These are entirely new things that didn't exist before, numbers we can use to say how much impact an entity had. And here are some examples of altmetrics.
The one on the left is the Altmetric widget for Scopus. Scopus is a database owned by Elsevier. Altmetric is a little company; I don't know if they're owned by Elsevier at this point. Oh right, they're owned by Digital Science. So it's a plug-in, and they're reporting numbers for Facebook, science blogs, Google Plus, tweeters, CiteULike, and Mendeley. They're actually giving numbers; they go out and gather them: how many times has this thing been mentioned in these different places? On the right are the metrics provided by any of the PLOS journals, the Public Library of Science journals. Because they are open access, you can just go in there and look at all of these metrics. They look at page views, so HTML views, how many times people have clicked through to read it, how many times people have clicked to download it, and even which file format they downloaded. That's available if you publish your content in a PLOS journal; it will be there on the page. And then it also tracks it over time: down there you have how many months have passed and how many cumulative views the item has received.

So, back to the criticisms of the traditional metrics. Altmetrics are broader in scope. Already we have more research outputs that we can look at. We also have more kinds of impact. It's no longer just journal articles citing journal articles; we can have Twitter talking about a journal article, or Facebook talking about my dataset. You've expanded your field, in a way. It's also more granular: you're not just looking at a journal article, you're looking at the data inside the journal article, or at a blog post that maybe is more informal, tinier things. And then it's more immediate. You can get those numbers now. We do not have to wait two years to get the impact factor calculated for that time period; you can just look now and see, well, what is Twitter saying about this content?

And our question was, we were not exactly ruthless on the traditional metrics, but we pointed out some of the longstanding and ongoing critiques of the two key ones. And we wanted to invite you to think through altmetrics, as little or as much as you know about them, and to join in trying to figure out what the issues are, the potential problems, or the things worth getting excited about, with this new category of measurements of our scholarly output. Would you have any thoughts or comments? Yes, Mark?

Part of the point of citations as a measurement is that other scholars are saying these works are indeed scholarly, as opposed to, say, Justin Bieber tweeting "this was awesome" and all of his million fans retweeting it; you're getting that in there as well. And also, just thinking of myself, there are articles that I've downloaded about two dozen times, just because I keep misplacing the files. So I'm skewing my advisor's numbers and things like that just because of my own incompetence; I should not be part of the metrics in this situation. So, what am I missing? Why is this so much better, beyond the broadness of it? I don't think the broadness by itself is necessarily the answer.

Okay, would anyone else want to add to what Mark said? Yes, Kathleen?
I'm just wondering whether it opens people's eyes to the fact that the traditional ones are flawed. These are flawed too, but more is better. And I think it's more education; disciplines need to talk to each other and understand that it is just a number, and it's not really the answer.

Any other thoughts on this question? Robin, do you have any? Yeah, go ahead.

Sure. So, I'm a scholar, someone who's on the tenure track here at John Jay, and I'm figuring out how the Form C works and how the places that I am published in, or would publish in, factor into my evaluation of myself as a scholar here. And looking at traditional metrics, it seems like those favor the science journals, but in other fields publication practices are just different, right? And so, for me, I like seeing the diversity of venues and how the public engages with them. Imagine your work reaching, you know, millions of followers. If I were a music scholar, I'd love for that to happen. Wouldn't that be awesome? That would be like one of the peaks of public scholarship, reaching people whether or not they're academics.

Yeah, so all three of you zeroed in on the major criticisms of altmetrics, but also on the major hopes associated with this new set of ways of measuring and looking at people's work. So Mark, to answer your question, proponents of altmetrics would say it's not that we're talking about random people commenting and tweeting and sharing research findings or research data, because in fact we have live, very active, and in a lot of disciplines very well-established online scholarly communities. These are almost equivalent to peer review; they're like open peer review, right? That's how some people would counter that kind of argument. Kathleen's point, that these new metrics are not exactly proving a new, better world, but they're open enough to allow us to grasp a wider range of scholarly products, not limited to print publications and journals, is yet another point well taken. And Robin's specific concern, that maybe there is something generational going on: for example, if I'm on my tenure track in year two or three, I haven't had time to establish my scholarly persona yet. Why do I have to wait seven years for it? What if I need something to show my validity, my participation in my specific discipline-appropriate community, right now? That's where altmetrics will come in useful. But as you rightly point out, there's a lot that people are thinking critically and seriously about, and some of it is mentioned in the literature, in the scholarly studies looking at the very workings, or not-workings, of altmetrics. When I was looking quickly through the information science literature, I divided the research being done today into three major categories. First of all, people are trying to figure out: do altmetrics and citations, the new and the more traditional indicators of scholarly impact, correlate? Which means: if my results shared on Twitter have received so much attention, is that going to translate later on into the number of citations my published research receives? Because if there's no correlation, what's the point of boosting my altmetrics scores?
So people are trying to figure that out. This is just one of the studies, and of course there are a few others that say the opposite. This one in particular emphasized that there's really no clear correlation, if anything a very, very weak one. But it was limited to research data sets, so it was not talking about article-type research outputs, okay? And even within the specific data set they were using, they noticed a change that began around 2007, so perhaps there's a trend, and those things will change as time passes. And one of the things that again comes up, that altmetrics are not avoiding, is the fact that things look different depending on which field, which discipline you are in, right? And another thing, which again I think might change soon enough, is the lack of consistency across different altmetric tools; there's going to be one more research study on that. Another kind of research study focuses on Twitter, which seems to be the one social networking activity that some groups of scientists really, really actively participate in. So people are looking at citations versus "tweetations," as they're being called, and trying to figure out, again, whether we can predict: if my research article, my research study, receives a lot of attention on Twitter, does that mean my citation counts are going to go through the roof? And again, they're not exactly sure it's always the case. This specific study says Twitter attention wasn't exactly predictive of the more traditional citation counts, but there are other studies out there that show the opposite, right? So again, it's an open research area, something that is not set in stone at this point in time. If we are looking for clarity, as I personally like to do, well, these ongoing research studies are not getting us there right now. Another research track when it comes to altmetrics is comparing side by side the different altmetric tools and services available to us. We will be talking about them in a second, and you will see not only how differently they look, but, as researchers are pointing out, how differently they collect data. They measure different things. One of the differences was that some of them, for example, count Facebook likes but not shares; other services do the shares but not the likes, right? So it's all becoming better and maybe clearer, but at this point there's no comprehensive, all-encompassing tool that is consistent, that uses the same data-collection protocol across the different social media platforms and is replicated by all of the other tools. As of yet, we are waiting for one comprehensive tool that would make it easier for us to look at altmetrics and trust that what we see is the final answer. And that, by the way, is also true of the traditional markers: the impact factor, and Web of Science through which you can get it, are definitely not capturing all publications and all journals. So it's not unique to altmetrics.
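To give a sense of what these correlation studies actually compute, here is a minimal sketch, assuming you had early tweet counts and later citation counts for the same set of articles; the numbers are invented purely for illustration.

```python
from scipy.stats import spearmanr

# Hypothetical per-article counts: early Twitter attention vs. citations
# accumulated a few years later.
tweets = [120, 4, 33, 0, 7, 51, 2, 19]
citations = [10, 3, 7, 1, 5, 4, 0, 6]

rho, p_value = spearmanr(tweets, citations)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A rho near 0 (or a non-significant p) matches the studies that find little
# or no correlation; a rho near 1 would suggest tweetations predict citations.
```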
Now, when it comes to altmetrics on your academic CV, given the caveats, given their emerging nature, the iffy-ness and some of the discomfort, why should you still consider altmetrics? Why should you still care and pay attention to the trend, especially if you are at the beginning of your academic journey, or moving through the series of steps of your career? First of all, as Kathleen was pointing out, they help flesh out information that you may not otherwise be able to gather. Robin was saying that, for example, impact factors for her publications would not be easily obtainable at all, right? If you are in that situation, rather than feeling stuck, you have the option of figuring out whether there's an alternative metric that could serve you well instead. Then there's the timeliness of altmetrics, right? I don't need to wait five years to see if my article has received any kind of attention. And one more thing to remember: using and relying on altmetrics allows you to present your research output beyond publications. That becomes especially important for researchers in fields that are of interest to the public. A lot of research that's being done here at John Jay is relevant to the current political situation, to the interests of the public, and so on. In light of altmetrics, yes, the fact that your research has been written about on the Huffington Post, or that you have been interviewed for NBC News, counts, right? If we just limit ourselves to publications, there's no place to even mention that kind of public engagement with your research. To make it easy for you to see what some researchers do when it comes to expanding their more traditional CVs, we found a nice example from somebody at the Natural History Museum in London. When you take a look at this specific researcher's publication list, there's nothing outrageous there. But if you take a look at the screen, and we have the same image on the handout we will be giving you, all of the traditional metrics, where available, are supplemented and expanded with some of the altmetric numbers as well, so that when you look at it, you get a better understanding of how his work is circulating, who is reading it, and what kind of audience it's attracting. And Margaret now will move on to the specifics, highlighting some of the tools you can experiment with to see how your altmetric scores might be lining up.

So, his question, in case you couldn't hear, was whether "altmetric" refers to an official entity or whether it is a description of a thing. The answer is complicated, because it's both. Altmetrics are things, a class of alternative metrics; then a company decided to buy altmetric.com, and they provide altmetrics through that site. The little picture on the left, the plug-in for Scopus, that data is provided by the company altmetric.com. So it's kind of both. That is the perfect segue. So these are three tools that you can use to keep track of altmetrics. The first is Impact Story. This is a website where you create a profile. It lets you feed in your publications, and then it keeps track of their altmetrics in that one place. It pulls in the altmetrics that are available from whatever sites are out there, and you can see it all in one place. You can organize your stuff.
There is a wide variety of publication types that you can put in there, so you can get credit for data sets; you can see in GitHub how many times your data has been downloaded, and it will keep all of that information in one place. This is very handy for disambiguation: if you have a name like Margaret Smith, you can make sure that your stuff is really your stuff, and you can get credit for things that may not show up in a database profile. And then there's ORCID, or "or-kid"; there's a little disagreement about how you even say the word. It stands for Open Researcher and Contributor ID. This is another website where you create an account, and it has a ton of different types of research outputs that you can enter. This one is important. It does not track the altmetrics for you, but increasingly its data is fed into other things that do keep track of altmetrics. So now, if you have an ORCID account, you can put that into Scopus; Scopus grabs the data from there, and then Scopus provides you with the altmetrics information. Alternatively, if you like creating author profiles, and we already had two, the databases tend to have places where you can have your own profile, keep up with your publications, and see your citation data in one place, the citation data that is in that database; you can go to the database, log in, and see it there. So these are just three ways that people can keep track of this sort of thing. And this is what Impact Story looks like; this is a researcher at NYU. You get to pick which works you want prominently displayed. On the right-hand side it's showing some key metrics: tweets, page views, views of videos. On the left-hand side you can see the diversity of the types of things that are in there. Impact Story is still a little science-y; it is kind of tricky to add a book chapter or a book to it. There's not an automated way to do that. You could email it to them, there's an automated email address, but you'd still have to write an email; it can't pull it in from a feed. But it does pull in from GitHub, from Scopus, from lots of different places, including ORCID. So if you already had that account, it could pull that stuff in; you really would only need to create that one account and it would all be in here. And this is what ORCID looks like; this is someone's profile. It's sort of like a fun big CV on the internet. It has a ton of different types of works; I'm only showing you two of the categories, and even within those categories there are tons of little subcategories. And this is helpful because, even though these altmetrics are not reported here, ultimately it is seeming as though this is going to be the primary place this sort of data is pulled from. So when you have sites like Impact Story that will show you the altmetrics, they'll be pulling the publications, or whatever outputs you put in here, to be able to give you that information back.
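As an illustration of how a tool like Impact Story can pull your outputs in once your ORCID record is populated, here is a minimal sketch against ORCID's public API; the v3.0 endpoint and the test iD (ORCID's fictitious "Josiah Carberry" record) are my assumptions, not anything shown in the talk.

```python
import requests

orcid_id = "0000-0002-1825-0097"  # ORCID's fictitious test researcher
resp = requests.get(
    f"https://pub.orcid.org/v3.0/{orcid_id}/works",
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

# Works come back grouped: the same output claimed from several sources
# (Scopus, CrossRef, manual entry...) forms one group. Print each group's
# first summary title.
for group in resp.json().get("group", []):
    summary = group["work-summary"][0]
    print(summary["title"]["title"]["value"])
```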
And for those of you who may not be overly excited about creating profiles elsewhere, I'm happy to say that the library databases to which we subscribe here at John Jay more and more include a little altmetric widget, sometimes hidden on the page of the article you're looking at, so you have to look a little closely for it. This one is from Scopus, which is a major database, and I chose an example of an article that was specifically dealing with the Ebola virus to illustrate a point about altmetrics. It was published in October of 2014, right? So it pretty much coincided with the outbreak. Not surprisingly, that article received an altmetric score of over 1,300: over 200 Facebook users, 23 science blogs, 26 Google Plus users, 53 news outlets, and so on. So that's a fairly impressive attention-grabber of an article, which, I have to say, came out at a specific point in time and dealt with a topic that was very much in the public consciousness. The other example is from a collection of academic journals, scholarly journals, provided to us here at John Jay through Wiley, one of the major journal publishers. This one I took from a journal closer to home, a criminology journal, and I chose an article that deals with an issue related to stop-and-frisk, something that here at John Jay we think a lot about. And again, it's an issue that's been in the press, in the media, quite a lot, but you will see that the altmetric imprint of this specific piece is much lower, right? It was blogged about by two people, tweeted by 16, and 12 readers on Mendeley engaged with it. So these two examples were selected not only to show you that yes, in library databases, or at least some of them at this point, you can find something that could help you figure out what your altmetric portrait is, but also because, right away, you see the huge range in altmetric scores. That's something to be aware of as well.
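If you are curious where widget numbers like these come from, Altmetric exposes a public per-DOI API; here is a minimal sketch. The DOI is a placeholder, and the field names reflect my reading of Altmetric's v1 API rather than anything demonstrated in the talk.

```python
import requests

doi = "10.1371/journal.pone.0000000"  # hypothetical DOI, for illustration
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")

if resp.status_code == 200:  # a 404 just means no attention recorded yet
    data = resp.json()
    print("Altmetric score:", data.get("score"))
    print("Tweeters:", data.get("cited_by_tweeters_count", 0))
    print("Facebook walls:", data.get("cited_by_fbwalls_count", 0))
    print("News outlets:", data.get("cited_by_msm_count", 0))
    print("Blogs:", data.get("cited_by_feeds_count", 0))
```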
And we wanted to conclude our part without deceiving you: yes, it's necessary to do the work and put in the time, not only to get familiar with altmetrics but also to keep up with them. If some of you were here two weeks ago, Robin and Megan gave a presentation about establishing your scholarly personhood online, so to speak. Robin was advocating on behalf of Twitter; Megan was talking about the upcoming CUNY repository. So these were two people urging us to be active, to have a social presence online that relates to our academic work. Without that initial effort, without having established those profiles, without being active online, you will not be able to boast a huge altmetric influence, right? It doesn't just come out of nowhere, unless you get help from your publisher, who, for example, is tweeting or emailing or sharing your articles, or unless a local research center, like the Office for the Advancement of Research here, is posting about our researchers' books, articles, grants, and so on. So you need to be aware that in order to get altmetric scores that represent you, you need to work for it. It becomes part of the cycle of research, of publication, of being part of the community. And yes, there is a learning curve. As librarians, as people who like to think critically through information, we strongly encourage you to be aware of the many caveats, the many problems, but also of the great potential of whatever metric you use, whatever numbers you refer to as you present yourself for promotion, evaluation, and so on. It takes time. You know, I'm not a numbers person, I'm an English person, so you have to be patient with yourself as you try to figure out and make sense of some of those caveats. For that purpose, there are a number of online guides. Most academic libraries now have them; we have our own here at John Jay, and Margaret has hers at NYU. We also very much appreciate the one from the Washington University School of Medicine in St. Louis. And then I also found that some of the altmetric services, like Plum Analytics and Impact Story, have very well-kept blogs. And believe it or not, they're not only writing on those blogs about themselves and their services; they're really doing a wonderful job of bringing to light some of the ongoing research, trends, and excitement happening in the field in general. And last but not least, I also definitely benefit, and pick up little things about scholarly communication, altmetrics, journal measurements, and so on, simply by reading the Chronicle of Higher Education, to which we have access here at John Jay, and by glancing at Inside Higher Ed online. So these are some things that should help guide you and make you more comfortable with this whole arena of research measurements, including altmetrics. We also have a little handout, before we pass on to Plum Analytics, so that you can refer to some of the resources and some of the more illustrative examples later on.

They've already talked about Plum Analytics. Marta and Margaret did an absolutely fabulous job; this is something our company has been researching for three years, how to put this all together, and they did a spectacular job covering it. So some of my slides I'm not going to spend too much time on, because they've pretty much told you the why. I'll go in a different direction and just show you how we did it as a company, how we came up with a tool for researchers to use. So I click forward. Good. First of all, and most importantly, I think: why did we do it? Why did we think we needed to go out and develop a product that could look at all the different resources and bring back the impact the researchers are having? Well, we want researchers to be able to tell the stories about their research, so you know where to publish again, and you know where to go to look for funding. So we have set it up so that you can dig really deeply into all the resources that we're bringing in for you, see where they originated, and see why they're there. And you'll understand more about that as I go further along. So the why, again: for new researchers, for, I think your name is Robin, for new researchers, they want to look at what kind of impact their research is having right away. They want immediate feedback. They don't want to have to wait for peer review and for the journal impact factor. We also want to help researchers compete better for funding. Four out of five grant applications in the United States are not funded. Four out of five. So the competition for grants and funding is very, very stiff. And also, because the people who developed Plum are librarians, we didn't want to leave out the non-STEM fields. I mean, books are still important; lots of researchers write books. So we made sure that PlumX also counts books.
And I will show you how we do that. I've got one slide here for what was their whole presentation: the journal impact factor is very old; it came out in 1955, I think. As Margaret said, it's journals looking at other journals and citing articles in other journals. So it's container-based: good journals can have bad articles, and good articles can be in bad journals. So it's not an exact science, and I think both Marta and Margaret addressed that. The other thing is that it takes at least three to five years for these to come out, and with the way we move now and the way we do research, we really can't wait that long. So look at the timeline from idea to impact. The idea comes up, there's a blog post, there's a grant and a conference perhaps, and it can take two to five years just to get from the idea to the peer-reviewed journal. But due to the pace of scholarly publishing, it takes another three to five years from the time the work is published to amass the citation counts. So before PlumX, you're looking at five to ten years, and a young scholar can't wait that long; the work needs to get out there faster than that. But with PlumX, because we use so many different sources for our metrics, you can start getting feedback right from the time of the idea or the blog post, all the way down to the citation counts. We don't just measure social media; we measure citation counts as well. So again, how can you help researchers get funding? Give them real-time metrics about their impact: what have I done today, what's good, what's going on now? Help them determine where they should publish, because we link back to the source whenever possible with PlumX; you can get right to the source. Was it a PDF download, and from where? Was it an HTML download? If I'm a programmer writing source code, who forked my code? Who came in and used my code? I can see what's happening with it and how many people have used it, which is really good to know when you go to publish again, to figure out where the best place is. And again, to showcase more than just the journal articles, because we want people to get credit for the books that they've written. So this is what the internet looks like in 60 seconds now. I mean, everything is out there and there's a lot going on, but a lot of it is scholarly now; it's not just people going on Facebook and Twitter. There's a lot of scholarly information out there. The ACRL Top Trends in Academic Libraries for 2014 did a whole big piece on how we have to get to the sources of these things. We have to drill down to the sources, and I think my co-presenters touched on that as well. We have to know the data sets behind the research; we have to be able to get to that. So these are samples of some of the PlumX sources; we joke about it being the A-to-Z list. These are all the different sources that we go out and measure impact from. For books, you're looking at Goodreads and WorldCat from OCLC. So you can really get in and get to all different sources of metrics. We also look way beyond just articles, because, as we were saying before, most impact factors are based around articles. So we track all these artifact types: blog posts and clinical trials and media and theses and dissertations and videos and web pages and presentations.
So those are all the different artifacts we're tracking, and we keep adding to that; as we bring on customers, we get asked to put in more sources. So our approach to metrics at PlumX is to use all metrics, not just the buzz on Twitter and Facebook: let's use citations, let's use usage, captures, mentions, and social media, because social media is so important. And because we have a library background, we categorize, because that's what libraries do, they like to categorize things. So we chose different categories to put all the data exhaust from all those sources and all those artifacts into. So, usage: clicks, downloads, views, library holdings. Who's reading my work? Is anyone actually reading it? Is anyone watching my videos? We found, with some of the analytics that we do now, that usage is the number one stat people want after citation counts. If I don't have citation counts yet, what do I want next? I want to know whether anyone is using my work. And I believe we're still the only product that's including usage; I think a couple of other companies are working on that. Then we do captures. A capture indicates that somebody wants to come back to the work, to use it more, to download the article; many times it's bookmarking or favoriting. I try to think of it as the journal article on the desk with the page dog-eared, because I want to come back and read it. And we found in our analytics, and I'm going to show you this in one of my slides, that captures are actually an early indicator of citations. Isn't that good to know as a new scholar? You want to do more work where your work is getting captured, so you want to see where the captures are coming from, and what tells you that? PlumX tells you where the captures are coming from. Then mentions: blog posts and the like, where people are really engaging with your work, talking about it. You can automatically uncover conversations about your research, which I'll show you, going into Twitter and things like that, and just discover feedback and opinions. What's going on with my work? Am I heading in the right direction? Maybe I can find co-authors among the people engaging with my work. Social media: as Marta and Margaret were saying, there are as many articles saying it's a good indicator of impact as saying it isn't. So we really look at social media as being the buzz. I mean, we don't group all this into one number and give one factor; we give you the categories, and you decide, based on your discipline and on what you do and what department you're in, what kind of impact that's having on the research and the institution itself. So it's really tracking buzz and attention to research. But it's a place to start, yeah. Definitely, if I'm getting a lot of feedback from Facebook, I'm going to put more Facebook posts out there, because it is going to lead you to the bigger conversations.
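As a rough sketch of how those five buckets organize the "data exhaust," here is the categorization as a simple data structure; the example metrics in each bucket are the ones mentioned in the talk, not an official or exhaustive PlumX list.

```python
# The five PlumX categories, with illustrative (not official) example metrics.
plum_categories = {
    "usage": ["clicks", "downloads", "views", "library holdings"],
    "captures": ["bookmarks", "favorites", "Mendeley readers"],
    "mentions": ["blog posts", "comments", "reviews"],
    "social_media": ["tweets", "Facebook likes", "shares"],
    "citations": ["CrossRef", "PubMed Central", "Scopus"],
}

for category, examples in plum_categories.items():
    print(f"{category}: {', '.join(examples)}")
```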
So this is what an article looks like in PlumX. I was going to show you the dashboards live, but with the time, it doesn't look like I'm going to. So here are the five categories, actually. Here is the article itself. And here, I guess I can point with this, here is usage, usage from all the different places. Anywhere it's blue, you can click through and see it. Here's Twitter; you can go into the tweets and see what people are saying about your work. Captures in Mendeley, Facebook comments, citations. For citations, we're using CrossRef, PubMed Central, PubMed Central Europe, and Scopus right now, and we're looking to build relationships with other companies; we're working continuously to get more sources. So researchers need to showcase their impact, and we give them the five categories of metrics to tell their stories, not just the single citation count, and to tell the story of the researcher earlier. Again, the citation counts can take three to five years to appear, sometimes up to ten, until they're really out there. And it helps them create better grant applications, because all the things that we put into PlumX, you can get out in charts and reports. So I'll talk really quickly about a couple of customers, because they've implemented it in somewhat different ways. First we'll talk about the University of Pittsburgh. The University of Pittsburgh is our first customer, so we've worked very, very closely with them. What you're seeing here is the university dashboard. They've set up PlumX starting at the university level, then drilling down to the researchers, and then narrowing by digital collections, by journals, by schools and programs, and by university centers and institutes. Now, the special thing about the University of Pittsburgh for us right now is that they have their own open access journals, so we're working with them to create some good new ways of measuring those metrics. Here you'll see the artifact summary; that's the top 10 artifacts that they track right now. You can scroll across that and get all the categories, the five categories. Here are the artifacts themselves in tabular format, and down here you're just starting to see the impact. You can filter and produce other views, and I hope I can go live for a second and show you that. I've already talked about drilling down to the researchers and how we narrow down to the top 10 artifacts out of all the different artifacts. But what I wanted to tell you about, which we're working on now in collaboration with the University of Pittsburgh: the Open Journal Systems is coming out with a SUSHI Lite standard for journals, so that we can better measure journal metrics. We're working with them, and on the SUSHI Lite committee, so that usage stats can be integrated directly into PlumX from the Open Journal Systems.

What does the SUSHI Lite acronym stand for? Oh, I don't know, I wish you hadn't asked me that. Fish. Do you ladies know that acronym? I can never remember it, I'm sorry. Somebody can Google it while I'm working up here.

So this is just one of the analytic reports that we produce, to show the point that my co-presenters made and the point that I'm making, whoops, which is that citations lag. So if you look, I keep going to the screen, I'm a mover. If we look at 2004 and across the years, you see that when we get down to 2013 and 2014, the citations really drop off. And when I present this, people say, oh my gosh, they're not getting cited anymore. No, that's not the case; citations really do lag. But if I add in our other artifacts, you can see the difference, and how the captures are leading to citations. You can see there's a lot more social media now, and these captures that are starting here are making the citations happen faster. So you get to tell the story; you get to figure out how to do things differently.
Also at the University of Pittsburgh, we are working in conjunction with their institutional repository. A lot of academic institutions are making it mandatory for their faculty to deposit into the IR, and this is sort of a carrot that we have for them, because we put the PlumX analytics in the institutional repository, so that when researchers go to look at whether their article is being used or not, whoops, wrong letter, we don't just get the PDF downloads like they did before with their IR, which I think is key. You get the whole story. The whole story is right there in the institutional repository, so you can see the kind of usage this article is getting. This little thing in the corner here, everybody's probably been wondering, what's that? That's the Plum Print. It shows each of the categories, and when you hover over it, it opens and tells you the categories, or you can have it open all the time. So if you're looking at a whole long list of articles, you can get a feel for the activity on each one at a visual glance, without having to go in and look at them all individually. As we went further down looking at this author, we went into some tweets. And here you see that he found out that the Reeve Foundation tweeted about his research. Well, he had not applied for a grant from the Reeve Foundation, so now he says, oh gosh, maybe I should be looking to go in for a grant there. So again: dig down, tell the stories. What interaction is my work having? This here is a video from Mexico that had to do with what he had developed. Somebody put the video up there to say, hey, this is what we're doing with that, and maybe we can co-author some time. So you can discover tweets from potential funders and people to co-author with. To emphasize my little talk about books and non-STEM data: Pacifica Graduate Institute does graduate degrees in psychology, and half the faculty is involved in research, but books are the main output of what the faculty members do, and they wanted to showcase their work. So here's a look at what their dashboard looks like; you see the top artifact here is a book. They've set it up a little differently: they're narrowing down by different disciplines, so clinical psychology, counseling psychology, faculty; they're actually narrowing down by their faculty. I'm always ahead of myself on these slides. So books matter, and we also do book chapters, and I think that was something else that came up earlier today. Citation counts have never done any justice to books, and they never will, and the humanities and social sciences need to be able to see their impact as well as the sciences. So it's very important that these books can be seen, and who holds them, who's talking about them; they're often the seminal work from the faculty members. In order for us to collect statistics and metrics on books, we're working with WorldCat: how many libraries hold the book? And a lot of times people want to click on the list and see which library furthest away has it. Wikipedia: how many articles reference this book? Amazon: what are the reviews on Amazon, what's the average rating? Goodreads: how many people added this book to their bookshelf, and what reviews have they written? And eBooks from EBSCO, because we're now an EBSCO company, so we have access to all the eBooks. Oregon Health & Science University did it a little bit differently,
and it's really cool what they're doing. Their Office of Research and the library are working collaboratively to define subjects to use in analyzing their outputs. So they categorize their articles in PlumX by the subjects that they've developed, and the dashboards then allow comparisons across the categories. So this is a sunburst that's been produced, which is really hard to see from there, but this is metabolic bone diseases, I can't even see it, and these then are the artifacts written about that; those are articles. So that's the biggest hit they had on that subject. These are books, and as you drill out further and further you get to the original source, so you can see exactly where they came from. So, talking about grants, one of the reasons to look at who's doing this is that metrics are coming into grant proposals. NIH had come out with a biosketch that they wanted researchers to include in their grants, and they put it off for the year because researchers were pushing back on including it, for specifically the reasons that we're talking about today: we just don't have the data to put in now. So OHSU is helping their researchers add impact metrics to the biosketch. So they're going to be ahead when NIH gets the grants; they're going to see all these other metrics about how the works have been used, and hopefully that will push them over to be the one out of five that does get the grant. They want those applications to stand out. And the last thing I'm going to talk about is funders using PlumX, and this is very exciting for us, because once the funders start putting time and effort into it, they're going to expect the researchers to be giving this information back. So Autism Speaks is actually tracking by the grants themselves, and then they break it down by geography, by institution, by portfolio, and by proposal type. So this is a screen we haven't seen yet; once you get into the researcher, you can start seeing all of their articles. You see this one is very recent, 2014: 10 captures, no citations yet, not surprising since it's from 2014; 37 social media interactions, nine mentions, and it's been used 385 times. That's good information to have on something that's only been out since 2014. You can go directly to the articles, and you can have Plum widgets, the Plum Print that I showed you, alongside here too. So again, I've talked about it before I get to the slide; I guess I know my presentation in my head. So the data table: you can look at one grantee's output, even for research presented in 2013 and 2014, and the metrics are available. They're also looking at it by institution. So they've decided to look at their metrics across the different institutions, to see which institutions are getting the most out of their funding; we track all the different institutions as well. And this is an analytic; we produce all these analytics, and you can compare like with like. In this case, what they're showing is the different institutions and how many captures they're getting from each, because, remember, captures are a leading indicator of citations.

So, is there time for me to show this really quickly live, or are we done? Only a few minutes left. Well then, let's see if anybody has burning questions out there, because if we do a demonstration, we're not really going to have time. Any questions?

I mean, financially, how does it work?
[Audience:] I mean, do you contract with a specific company? [Presenter:] Yes, with academic institutions and funders. We work with the library and the Office of Research. The profiles have to get in there somehow, as we talked about before. You can get all of this, but the profile has to be built. And when we designed it, we designed it around ORCID, because we thought ORCID would be the default ID; most researchers were supposed to be creating ORCID iDs and populating them. But we'll also bring in profile information from your institutional repository, if you have your research set up in there, and you can put in DOIs and URLs. So the profile must be built to do this; you have to put the data in to get it out.

[Audience:] Do you mind showing how to get to it? [Presenter:] Sure. This is the University of Pittsburgh site, live. You see they have a lot more researchers than I had on that slide. This is just some information about the university; these are all the artifacts they have, and these are the top five. At any time you can scroll through this information, and then we go through the five categories of usage and so on. We can get to a researcher just by clicking on the researcher... it's not bringing it up for me; I think it's because of the IP range here. But here's where you start seeing things for the researcher. You see their articles; you see this recent article was just published in 2015, and it's not going to let me go in, so that's a darn shame. But if I went in from there, you would be able to see what we saw on that slide, the Plum Print with the five different categories and how that breaks down.

The profiles, which I'd like to show you but can't right now, are just a sheet you fill out. You can put in a link to your website, links to Blackboard, links to LinkedIn, and some personal information. Then, to add things: if you have an ORCID iD, you put it in under ORCID, and if that's populated, we automatically go out and grab the metrics. If you add SlideShare or similar sites that we can search by URL, we can automatically go out and grab those too.

[Audience:] So the question is, if I want to start building a profile for myself, and since we just started with an institutional repository here, does it make sense for people who are interested to start building a profile there? Can you grab it from our institutional repository, or are they separate? [Megan:] I'm working with the libraries on the development and management of the institutional repository, CUNY Academic Works, shameless plug. You can submit your content now at academicworks.cuny.edu, and it does have real implications for your citations as well as your altmetrics. We will send you a monthly email with download reports. But we don't have all the sexy bells and whistles attached, the tweets and all of that content, right now.
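Since the presenter describes grabbing metrics automatically once a profile has an ORCID iD, here is a minimal sketch, in Python, of how any service might discover a researcher's outputs from an ORCID iD using ORCID's public API. The parsing follows the public v3.0 JSON shape; this is an illustration of the general approach, not Plum's actual ingestion code:

```python
# Sketch: list the DOIs attached to an ORCID record via the public
# ORCID API, the kind of seed data a metrics service could use.
import requests

def dois_for(orcid_id: str) -> list[str]:
    """Return DOIs found among the works on a public ORCID record."""
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/works"
    resp = requests.get(url, headers={"Accept": "application/json"})
    resp.raise_for_status()
    dois = []
    for group in resp.json().get("group", []):
        for eid in group.get("external-ids", {}).get("external-id", []):
            if eid.get("external-id-type") == "doi":
                dois.append(eid["external-id-value"])
    return dois

# Example (any valid, populated ORCID iD would work here):
# print(dois_for("0000-0002-1825-0097"))
```

Each DOI could then be looked up against citation and usage sources, which is why the presenters keep stressing that an unpopulated ORCID record yields nothing.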
And so, I think, with building out this whole idea of public scholarship, I heard someone say the other week: think about where you want to build your profile, think about where you can, and just take part in that conversation. That's a good thing. [Presenter:] So yes, if you build it in your IR, we can bring it in. And that's exactly what's happening: the profiles get set up, and then the researchers go in and add other things. They're beginning to add their YouTube numbers, or DOIs they have that aren't necessarily in the IR. So they're definitely building up their own profiles so they can see what's going on.

I would just say that there's a lot individual researchers can do in this regard before your institution signs on to a tool like this. You can think about setting up a profile somewhere, but definitely ORCID is a good place to start. ORCID is a really big place to start, and the institution will love it. But you have to populate it; just getting your iD isn't enough, you have to populate it. I'm conscious that we're hitting about three o'clock, so I just want to thank our presenters. Thank you.