From Cambridge Analytica and data manipulation, to the "fake news" mantra, to the control of information in the tragic situation in Ukraine, we're seeing in ways both covert and increasingly overt just how vulnerable and malleable the truth is. We're also aware of the challenges an increasingly pervasive digital existence can present for those, including libraries, who are concerned with openness, intellectual freedom, access to information, and the stewardship of knowledge. Our two presentations this afternoon address these issues from different perspectives. The first is "Bias, Ethics and Artificial Intelligence: The Role of Research Libraries" by Sandy Hervieux and Amanda Wheatley, liaison librarians at McGill University in Montreal, Canada. They're going to talk about their work engaging communities in discussions on the ethics and bias of AI, and the role libraries can play in democratizing AI. So I'll hand over to Amanda and Sandy at this point.

Thank you so much, Robin. We're really looking forward to sharing our work on AI bias and ethics. Hi everyone, I'm Amanda Wheatley, and I'm here with my partner Sandy. We're going to talk today about our AI workshop series, and specifically about one session on ethics and bias. We focus on the user, or public services, side of how we can democratize AI. I've been to a lot of great sessions here in the last day and this morning (for me it's still pretty early morning), and there's been a lot of great talk about the tools we use and the skills we need as librarians. This talk is going to flip that perspective a little bit and look at how we can create spaces to have these types of discussions with our users. As I introduce myself and Sandy, I'll mention that we have a library site where you can keep up to date with the research we're doing on user services and information-seeking behaviors.
Specifically, today we're going to talk about this AI ethics and bias session: why libraries are a pivotal place to do this, the content of the workshop itself, and our research project on AI ethics and bias. So why libraries? Why should we be having conversations about AI ethics and bias with our users? I think the answer is really quite simple, and it comes down to the social role of libraries and our perception as a third place. We are open to the public, we sit outside of commercialization in a way, and we can facilitate these types of groups: we can bring people in from various disciplines to engage in matters outside of their studies. That's why we're a perfect place to have these types of discussions. As Sandy and I were building what we wanted our workshop series to be, we looked a lot at conversation-based programming. We wanted discussions, an open space where people could come in with any range of knowledge about AI, participate, and feel that their opinion was heard. So we used a semi-structured conversational style with both lecture content and discussion-based questions, so that people could feel there was a bit of structure if they needed it, but the conversation was free to flow. That's why we thought we were in a prime position to have these conversations and make them truly multidisciplinary, open to any level of AI knowledge. As for the content of the workshop itself, for the ethics and bias session specifically, it's a one-and-a-half-hour workshop split between lecture and discussion, as I mentioned. We focus a lot on AI terminology and case studies, and we talk about AI policy and about different think tanks and their morality.
Like I said, we flip between the lecture and discussion models. If a group comes in really engaged right off the bat, we know we don't have to do as much of the lecture content, because there's going to be a really interesting, participatory conversation. Other times it takes people a while to get into the groove of speaking, so we have lecture content to keep the discussion going in the meantime. In our introduction to AI technology, we like to start by gauging people's perceptions of AI and their level of knowledge. Are they somebody who has worked with AI, who codes, who works in a specific technological area? Or are they someone who has just come in off the street and wants to know more? That was Sandy and me: we had no technical background when we started our research project, about four years ago now, and we've learned as we went. If we had waited until we had knowledge, we never would have started these conversations. So we like to start by asking people what they think, what words come to mind, and we brainstorm from there; that can really help kick off a conversation. We provide AI definitions for the key terms, so that people who are unfamiliar can get a sense of what we mean by artificial intelligence, machine learning, deep learning, and related topics. We also put those into context with popular products or services that use these tools, and we do a breakdown of AI technology terms in our "AI family tree," which we use in all of our sessions to put things into perspective: if we're talking about machine translation or classification, how those exist within natural language processing, and how that in turn sits under the larger umbrella of artificial intelligence.
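As a rough sketch of the kind of hierarchy the "AI family tree" describes (the labels below are illustrative, not the presenters' actual chart), the parent-child relationships can be modeled as a small nested structure:

```python
# Hypothetical sketch of an "AI family tree" hierarchy; the labels are
# illustrative examples, not the presenters' actual chart.
AI_FAMILY_TREE = {
    "Artificial Intelligence": {
        "Machine Learning": {
            "Deep Learning": {},
        },
        "Natural Language Processing": {
            "Machine Translation": {},
            "Classification": {},
        },
    },
}

def path_to(term, tree, trail=()):
    """Return the chain of parent concepts leading down to a term."""
    for name, children in tree.items():
        if name == term:
            return trail + (name,)
        found = path_to(term, children, trail + (name,))
        if found:
            return found
    return None

print(" > ".join(path_to("Machine Translation", AI_FAMILY_TREE)))
# Artificial Intelligence > Natural Language Processing > Machine Translation
```

Tracing a term back up through its parents is exactly the move the workshop makes: situating "machine translation" inside NLP, and NLP inside AI.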
This helps everyone get a sense of where we are in the discussion and what we might be talking about. And of course we do case studies, and this is where the conversation really gets heated; we have quite a lot of discourse around these areas. The first article we bring up is from the Guardian, about how even algorithms are biased against Black men. It's really about algorithmic decision making, and we pose a lot of questions about the implications of biased algorithms and the decision-making process in and of itself. That often sparks really fruitful conversation, because AI decision making is a huge topic right now. Another topic we like to propose is more on the privacy angle: a Vox article about Alexa recording more than you think. It's about the devices we use, the Internet of Things, and how we sacrifice privacy for convenience by using these types of devices. So those are two different AI-based ethical scenarios, and we like to see what kinds of conversations come out of them. Sometimes people have suggestions for how we can fix these things; other times people are devoid of all hope. We never quite know what's going to happen here, but it's always a really fruitful discussion one way or another. I'll turn it over to Sandy to pick up from here, as we lead into policy.

Great, thank you, Amanda. As Amanda has already mentioned, we try to strike a balance between lecture content and thought-provoking questions that get people thinking about the content we introduce and generate discussion. People have very different levels of knowledge, something we've noticed about these topics, specifically around policies and regulations: what are governmental bodies doing about all of these issues?
Some of the questions we ask people to think about before we lead into those topics are: How would you mitigate some of the ethical issues with AI, having read the case studies and articles we've presented? What are some of the ways you would identify and manage bias, specifically in algorithmic decision making? What do you think is the role of governments in monitoring and policing AI, and should they have a role at all? And what recommendations would you make to governing bodies about AI? We really try to get people to think, and then we present some of the policies that already exist or are being worked on. We tend to focus on three main ones, and because we're in Canada, we focus on the Canadian ones, since that's our context. At the federal level, the policies we have tend to focus more on personal information, so not AI specifically, not algorithmic decision making specifically, but rather the use of personal information. There's the Privacy Act, and there's PIPEDA, which deals more with the private sector but applies only in specific provinces. At the provincial level, because our context is Quebec, there's a new law passed in the fall called Bill 64, an act to modernize legislative provisions as regards the protection of personal information. This one has a lot more to do with AI and algorithmic decision making, but also with how information is collected and used. We don't yet know all of its tenets, particularly how it will apply to research data and research collection, but it is in some part influenced by the GDPR, the General Data Protection Regulation, which I'm sure you're all familiar with, and which has many more provisions related to algorithmic decision making and the right to be forgotten.
A lot of those provisions haven't quite reached Canada yet. They're working on it, but as I mentioned, much of our legislation is really focused on the personal information itself, not so much on what can be done with that personal information. People always find that quite interesting, and it leads to some good discussions: we get people to talk about how the recommendations they just came up with compare to current legislation, and what they would include in it. We also share a great resource called the OECD AI Policy Observatory, which tracks different AI policies and regulations throughout the world, so people can compare which countries are doing what. That's always very interesting and usually generates a lot of discussion too; it's one of the great parts of this. So, on our next slide: once we've talked about governments, we talk a little bit more about think tanks and private groups dealing with AI. The examples we use are Elon Musk's OpenAI compared with Element AI, a Montreal-based company co-founded by Yoshua Bengio. We talk about the work both of these organizations have been involved in, and especially their outlook on AI and what they're willing to use it for. OpenAI tends to engage a bit more with the malicious side of AI, whereas Element AI is really focused on a benevolent one; a core principle of their organization is that they want to work towards systems that are fair, accurate, and unbiased. People always find those two examples very interesting, particularly how they relate to the ethics side, yes, but mostly the morality side of AI, which I think doesn't necessarily come up a lot in other circles of conversation. And that leads nicely into our research project.
We noticed that all of this discussion was very engaging and very interesting, so we really wanted a better sense of people's knowledge of AI and their perception of it: were they on the "this is evil" side, the "this is amazing and will change our lives" side, or somewhere in the middle? We measured participants' knowledge and perceptions of AI with a pre- and post-test hosted on MS Forms. Before the workshop discussion session, participants would fill out a short pre-test, and at the end they would fill out a post-test, and we could compare and see whether their answers had changed and by how much. In our research ethics application we specified a minimum of 10 participants, so that we could at least draw some conclusions. Sadly, with the library closed to the general public and access restricted throughout various stages of the semester, it was really difficult to get participants in. We had smaller groups that were super engaged and contributed to super rich discussions, but we didn't quite hit the number of participants, and we think it may have been because of all the restrictions in place for public events. We're hoping that in the future we'll be able to go forward with more participants, but we think it's still worth sharing the questions we asked, which you can see on the next slide. Our pre-test questionnaire had some standard demographic questions: we wanted to know participants' status, level of study, and department, because we found we get people from all walks of life coming to these discussions. Then we delved a little deeper into their knowledge of AI: what they were hoping to learn from the workshop, and how familiar they were with AI.
We asked them to identify tools that use AI technologies, whether they use any virtual assistants in their personal lives, and if so, which ones. Then we delved into the perception side: how do you feel about AI, do you think it's positive or negative, do you think it can improve your daily life, your schoolwork, or your research, and are you concerned about any ethical issues related to AI? So we kept it pretty open. Then, on the next slide, there's the post-test, which mirrored some of the same questions: What new information did you learn? On the knowledge side, what did you learn during the workshop, are you more comfortable discussing AI, can you now use some of these technologies, and which ones? And then their perceptions, so we could track whether their perceptions of AI had changed as a result of the workshop; some of the similar questions we had already asked are mirrored there as well. Okay, on to the next slide. While we didn't get any official results, because we couldn't quite hit our participant threshold, we did notice some very interesting things happening in those sessions. As I mentioned, we got people from all disciplines: the humanities and social sciences, some medicine students, and some computer science students as well, which is interesting. That meant some participants were comfortable with, and had experience developing, AI technologies, so they didn't need the background on what AI was. Other participants came because they saw an increased use of AI in their fields (medicine is a good example, where it's used more and more) and they wanted to be more aware of it.
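As a minimal sketch of the pre/post comparison described above (the participant IDs, the question, the 1-to-5 scale, and all values here are invented for illustration; the real tests were hosted on MS Forms), pairing each participant's two responses makes the shift easy to tally:

```python
# Illustrative sketch only: participants, scale, and values are hypothetical.
pre  = {"p1": 2, "p2": 3, "p3": 4}   # pre-test: "How positive is your view of AI?" (1-5)
post = {"p1": 4, "p2": 3, "p3": 5}   # post-test: same question after the workshop

def perception_shifts(pre, post):
    """Per-participant change in self-reported perception (post minus pre)."""
    return {pid: post[pid] - pre[pid] for pid in pre if pid in post}

shifts = perception_shifts(pre, post)
more_positive = sum(1 for delta in shifts.values() if delta > 0)
print(shifts)         # {'p1': 2, 'p2': 0, 'p3': 1}
print(more_positive)  # 2
```

With a real sample above the ethics-approved threshold, the same pairing would feed a proper significance test rather than a raw count.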
That led to some interesting discussion, especially around bias: as we've talked about extensively over the last few years, there is bias in our systems and in our societies, so it is going to be reflected in the technologies we use. Something else that came up is that a lot of courses dedicated to AI, specifically computer science courses, do not have ethics in their syllabi at all. They cover developing and creating AI so that it's efficient, but they do not talk about its potential ethical implications, which we thought was quite interesting; hopefully our discussions will somewhat fill that hole. I'll pass it over to Amanda, who will share our next steps.

This is the wrap-up. I think that last point, that ethics is not a big consideration in the syllabi of students studying AI technology, was a huge eye-opener for us: we were providing something these students couldn't access anywhere else on campus. They were having these conversations on their own, but they didn't have a place to come together and do this kind of thing, and that really reinforced what we were doing. This all leads into some best practices for AI instruction and how you can start doing something similar on your campus, providing conversational spaces. The first thing we always recommend is to take an AI inventory. Especially at a conference like this, where there has been a lot of talk about digital scholarship, many of the tools we have at our libraries are AI-powered, and I think it's really on us to figure out which ones have certain technologies within them, how those technologies might impact our users, and how we can communicate those things better to them.
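As a rough sketch of what such an AI inventory might look like (the tool names and AI flags below are entirely hypothetical, not any library's actual list), even a simple table lets you filter for the tools whose AI features users actually encounter:

```python
# Hypothetical AI inventory of library tools; names and attributes are
# invented for illustration, not an actual library's holdings.
inventory = [
    {"tool": "Discovery layer",      "ai_feature": "relevance ranking",   "user_facing": True},
    {"tool": "Chat reference bot",   "ai_feature": "NLP intent matching", "user_facing": True},
    {"tool": "Citation manager",     "ai_feature": None,                  "user_facing": True},
    {"tool": "Collection analytics", "ai_feature": "usage prediction",    "user_facing": False},
]

def needs_communication(inventory):
    """Tools with AI features that users interact with directly, and so
    deserve an explanation of how the AI might affect them."""
    return [t["tool"] for t in inventory if t["ai_feature"] and t["user_facing"]]

print(needs_communication(inventory))
# ['Discovery layer', 'Chat reference bot']
```

The point of the scan is the filter at the end: it surfaces exactly the tools where the library owes its users an explanation.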
So that's something we recommend doing right off the bat: a scan of what you are putting forward to your users, why, and whether you know a little about what's behind it. I also recommend identifying your goals. Ours was really to have open discussions and a neutral space where people could come and talk about AI, no matter what their perceptions of it were. Establish a budget if you can. Sandy and I have no budget aside from our time and our work; this is on us as liaison librarians for our subjects. We don't work in a digital scholarship center, hub, or makerspace, and we're not part of our digital teams, so we have no money to buy technology and no money to facilitate any of this. We simply took it upon ourselves and said: this is the time we have, this is the expertise we can lend. I would offset that by trying to partner with local groups. One of the groups I've been talking to quite a lot is the Montreal AI Ethics Institute; I get involved in a lot of their conversations and the feedback they provide on policy decisions, and that helps move this forward. Some things we can do in the future are to find more guest speakers from our pool of professors at McGill. Our business faculty has recently opened a center for ethics, launching this fall, so that could be a really great collaborative opportunity. As for next steps, we really want to re-offer these workshops in the summer and reopen them to the general public. The very first time we ran this we were open to the public and had a full house; we didn't have research ethics approval at that point, so we weren't collecting any research data; it was really the first time we ran them. And we noticed something when we do them in person and open them up to the general public.
They're much better attended than when we try to do them on Zoom; students are really just Zoom-fatigued, so we think this is the key to success for our AI discussions. Of course, if you have any questions later on in the question period, please let us know. Thank you so much for your time.

Thank you both, that was absolutely fascinating, and I love the way you said, essentially, just get on and do it. It was great. I was going to pick up with you later on the wider societal role you talked about, which is wonderful. So thank you, and as you say, we'll come back for questions and discussion later. I'd just like to remind the audience that the chat box and the question-and-answer box are there, so please do engage, get some thoughts down, and we can pick up on them in a while. Our second presentation is by Rick Anderson, University Librarian at Brigham Young University in the US state of Utah, and Sharon Mattern Büttiker, Director of Content Management at Research Solutions Incorporated. Rick and Sharon are members of the Scholarly Networks Security Initiative, which brings together those involved in scholarly communications to promote the importance of vigilance against cybercrime and to support the integrity of the scholarly record. In this session on network security, cybercrime, and academic libraries, they'll share the findings of a 2021 cybercrime awareness survey. So over to you, Rick and Sharon.

Thank you. I'll just share my screen. Can you see the screen now? Yep. Welcome, and thank you for joining our session, "Network Security, Cybercrime and Academic Libraries: Mind the Gap," where, as Robin said, we'll present the findings from our 2021 library survey exploring cybersecurity awareness.
I'm Sharon Mattern Büttiker, the Director of Content Management at Research Solutions, a platform devoted to supplying the version of record to discerning researchers. I've been in the publishing industry for over 20 years as a publisher and information professional, and I'm based in Basel, Switzerland. I'm also a member of the Scholarly Networks Security Initiative, or SNSI. We bring publishers, institutions, and other industry professionals together to address cyber challenges threatening the integrity of the scientific record and scholarly systems. Since our founding in 2019, our diverse membership has grown to include libraries, large and small publishers, learned societies, university presses, and others devoted to upholding the integrity of the scholarly record. I'm here today with someone who needs no introduction: Rick Anderson, University Librarian at Brigham Young University in Provo, Utah. Also a member of SNSI, he has over 25 years of experience in research libraries and expertise in scholarly communication, collection development, acquisitions, and library administration. Together we'll present the findings of a commissioned independent survey conducted by Shift Learning.

Thanks for the intro, Sharon. Hi, everybody. Thank you so much for joining us today; it's great to be able to be with you. We want to let you know that you're going to see lots of text- and data-heavy slides as we go through our presentation, and we're going to move through them quickly. But you'll have access to these slides after the meeting, so you'll be able to go through them and absorb the information at your own speed later. Right now we just want to give you a high-level overview of our findings and takeaways.

Thanks, Rick. Cybercrime is not limited to publishing.
As we all know, it affects all areas of digital life, but our focus today is on how cybercrime negatively impacts our work as academics, publishers, and primary research providers. Recent headlines and events indicate that cybercrime is on the rise in the field of research. 2021 saw an increased number of ransomware attacks affecting UK schools, colleges, and universities, and the UK's National Cyber Security Centre lists the education sector as the third-largest target for cybercrime, ahead of retail. Over 400 universities and institutions across 41 countries have reported their networks and data compromised by illegal websites. Publishers and librarians have a fantastic record of collaboration to solve real pain points experienced by researchers and students alike, and we need to work together to achieve our shared mission: the safety and security of personal data. So collaboration is key. SNSI was formed to help address this. Several librarian representatives from leading organizations and other key stakeholders have kindly agreed to provide SNSI with independent advice and feedback on the program, which we will be able to turn into tangible actions taken by the group to serve the broader research community. So, as I said, we commissioned Shift Learning, a globally minded independent research agency based in the UK that specializes in evidence-based market research around education, higher education, and sustainability, to administer the survey. Let me first explain the background of the survey. It's a global survey of academic librarians, designed to understand their views on cybercrime and what they think about Sci-Hub and other related websites that use university logins to access data. The goal of our research was to investigate the following objectives, with the aim of gaining insight into how we can better support librarians in the future.
We examined the extent to which academic libraries understand cybercrime, data security, and other related issues, and what their main concerns are; what the library community thinks about illegal websites that offer access to scholarly resources that would normally be accessed only from publishers' platforms; and where they would turn for support in the event their networks became compromised. In order to gain a representative and robust sample, Shift Insight sent the survey to a combination of contacts from international data suppliers, SNSI member institutions, and other large universities, and we conducted the survey from June 30 through August 2 of last year. Our respondents belonged to the following demographics: 86% were in North America and Europe, working in a university; 94% had the title of librarian, with nearly half working in large institutions of 10,000 or more students; and over half were aged 35 to 54. Before we as a community can discuss concerns surrounding cybercrime and how to solve these challenges, we must first understand the concepts, which was the reason for our survey, and these are the questions we asked.

Yeah, so we asked respondents to rate their level of understanding of several cybercrime and data security related issues. We found that many of our respondents were reluctant to refer to themselves as experts in any particular area, but at the same time they were unlikely to say they had no understanding at all of the issues; their confidence in their understanding consistently peaked at the level of "some understanding." Those working in large higher education institutions were more likely to select "expert" to characterize themselves on each issue than those working at smaller and medium-sized institutions.
Our takeaway from this question was that it might be profitable for SNSI to target communication towards smaller institutions to help raise awareness and confidence around issues related to cybercrime. On the next slide, we asked: how well would you say you understand issues around the sharing of network credentials? We found that our respondents had pretty high confidence in their understanding of things like phishing, students making their university network logins available to others, and student and staff personal data being stolen. Here again, confidence in their own understanding tended to be higher among respondents from North America than among those from other parts of the world, and this was particularly the case for concerns around privacy and online teaching, such as Zoom-bombing. Our takeaway was that respondents felt confident in the knowledge that students making their login details available online is a security risk, and we felt this has implications for our supportive role: we can focus on providing details of how these pirate websites operate and how the risks can be mitigated by librarians. On the next question, we asked an open-response question: what concerns respondents most when they think about cybercrime, data security, and related issues? We found that protecting staff and student data was the top concern, reported by 37% of respondents overall, rising to 41% among librarians in North America and 57% among those in South America. We found that respondents from Europe were more likely to say that nothing really concerned them regarding cybercrime, while respondents from the US were much less likely to say this.
This disparity in confidence, we think, could be linked to the recent introduction of the GDPR and the training that librarians in Europe are likely to have received as a result of its implementation. We noticed that a number of respondents highlighted their fear of the potential consequences of a cyber attack: causing damage to the reputation or integrity of the institution, creating more work for library staff, or preventing their students from learning. The level of confidence in their understanding of cybersecurity correlated with how likely respondents were to say they were concerned about risks beyond the security of staff and student data: the more librarians felt they knew about cybersecurity, the more worried they were about a variety of different security risks. That's a significant finding, I think. Our takeaways were that there's a direct connection between communication about Sci-Hub, for example, and student and staff data security and institutional reputation; and that preventing cybercrime preserves and protects institutional integrity and reputation, reduces work for librarians, and helps students learn uninterrupted. On the next question, we asked respondents how much they felt a range of cybersecurity risks were a concern for their library. Concern over theft of staff and students' personal data was consistently high, followed by concern about students making their network login details available online. Personal data theft concerns were especially high in North America and among those working at larger institutions. Among those who considered themselves expert in understanding cybersecurity issues, there was less concern over things like Zoom-bombing or staff use of personal devices, and more worry over issues like ransomware and viruses. So, again, some takeaways.
It's worth noting that pirate sites do exploit staff data and personal data once the information has been obtained, and regardless of the size of the institution, protection of personal data really needs to be at the top of everyone's list of concerns. The survey (next slide, Sharon) then went on to ask what measures are taken by institutions in the case of a cybersecurity breach or an attack on the network. Here we asked respondents what they would do if they suspected their institution's network had been compromised. 96%, unsurprisingly, said they would contact their IT department. Others reported that they would contact their institution's security department or tell other librarians. These patterns were pretty consistent across continents, age groups, and institution types. We found that respondents were least likely to talk to students about how to protect against future network breaches, or to add those breaches to some kind of network breach log. Asked what else they would do, respondents mentioned things like reporting to a supervisor or head administrator, quickly backing up their data, proceeding with more caution (for example with opening suspicious emails), changing passwords, and alerting other educational institutions. We came away from this wanting to recommend that each institution have a specific protocol, something as simple as a do's-and-don'ts list, regarding how to prevent future network breaches, for librarians to pass on to their students, and to encourage communication between them. Awareness of which sites are illegal is an important step in cybersecurity education. When we asked respondents whether they could name any examples of illegal sites, over half reported that they were familiar to some degree with illegal websites that offer pirated access to scholarly resources, but 21% were unsure.
This suggests a relatively low understanding of what constitutes an illegal website offering access to resources. When asked to name an example, the most popular answer was obviously Sci-Hub. However, other answers included legal websites such as ResearchGate, Academia.edu, Google Scholar, and even Unpaywall, which suggests that there is some uncertainty in the community as to what makes a website illegal. One of the things that we hope is that SNSI can be a resource for libraries to explain what constitutes an illegal website that offers access to pirated resources, and also to delineate which websites are legal and illegal for librarians to consult. It's also worth pointing out that there are perfectly legal websites that can be used in illegal ways; ResearchGate and Academia.edu would both be examples of those. We're also thinking that we can provide librarians with information on how these websites operate, which would include the risks associated with them, who they impact, and what actions librarians can take to mitigate those risks. On the next question, we asked respondents to rate the degree to which they agreed with a range of statements, to explore why they might either support or oppose Sci-Hub. 64% disagreed that it's fine for librarians to recommend these sites, and 45% agreed that using these sites is wrong. Interestingly, though, 66% felt that free public access to research should be a legal right, and 47% agreed that these sites are useful for learners. The statements on the left of this slide suggest that for respondents Sci-Hub is kind of a paradox. On one hand, they agree that it shouldn't be used and they wouldn't actively recommend it, but they tended to agree with some of the principles that it promotes, like free access to research, and so they might be more likely to turn a blind eye to it. We found that the views on these issues were actually consistent geographically across our respondents.
There was a significant difference in the responses by age group, though: the 18 to 34 age group tended to be more sympathetic to Sci-Hub, with only 29% agreeing that using these sites is wrong and 76% agreeing that free public access to research should be a legal right. Now, it's important to point out that the sample size here was small, and so we should extrapolate with caution from this data set. We also asked to what extent respondents agreed with further statements regarding sites like Sci-Hub. The statements listed on the left of the slide here illustrate the conflict we were discussing earlier, with 47% agreeing that these sites break copyright law and yet 46% also agreeing that these sites are bad for publishers but good for learners. Fewer than half of our respondents felt that these sites should be prosecuted for copyright breach. We were concerned with Sci-Hub from a data security perspective. 43% agreed that they worry sites like this may have access to their institution's network, which is a surprisingly low number given that providing access to the institution's network is fundamental to the model of Sci-Hub. Nevertheless, 42% thought that students using these sites put their institution's network at risk. 30% of respondents were unsure whether students using these sites would put their network at risk, which again illustrates a gap in understanding; that figure rose to 31% for those who expressed little understanding of cyber-related issues earlier in the survey. A couple of takeaways: one is that librarians seem to be concerned mostly with what's institutionally relevant to them, helping their students or protecting their institution. And of course librarians do want to protect their institution's network, so providing more information on how Sci-Hub impacts network security is probably a good way to engage them. So, in conclusion, it is worth noting that overall we found that respondents have limited confidence around cybersecurity generally.
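The caution about the small sample can be made concrete with a rough margin of error for a reported proportion. A sketch using the normal approximation, where the subgroup size of 50 is a hypothetical figure; the talk does not give the actual subgroup sizes:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion
    (normal approximation to the binomial)."""
    return z * sqrt(p * (1 - p) / n)

# 29% agreement among a hypothetical subgroup of 50 respondents aged 18-34:
moe = margin_of_error(0.29, 50)
print(f"29% +/- {moe * 100:.1f} points")
```

With a subgroup that small, the uncertainty band is wide enough that headline percentages should indeed be treated as indicative rather than precise, which is the point the speaker is making.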
Their open-text responses revealed a lack of knowledge about how cybercrime works, about what to look out for, and about how to prevent it. Our respondents were most likely to have confidence in their understanding of issues like phishing emails, students making their university logins available to others, and personal data being stolen. But librarians seemed to be only vaguely aware that Sci-Hub is in fact a threat to cybersecurity, even though Sci-Hub was the most frequently named website when we asked for an example of a site that is illegal but offers access to scholarly resources. However, again, there were others who thought that legal sites like ResearchGate and Google Scholar were illegal, which again illustrates the uncertainty and ignorance around how these websites work. We found that our respondents were mostly concerned with data protection, with theft of student and staff data being the top concern. There were some who worried about the potential effects of a cyber attack, including reputational damage or interrupted student learning. And, to me this is one of the big takeaways, the more librarians felt that they knew about cybersecurity, the more worried they were about a variety of different security risks arising from these sites. And again, librarians showed more concern when the risk of using websites like Sci-Hub became more personal and institutionally relevant: by linking it to their students and colleagues, by increasing their own workload, or by undermining the reputation of their institution. The survey also indicated that librarians would contact their IT department if there were a security breach. They were also likely to report it to their security departments or to tell other librarians in their field. Our findings indicated that librarians saw cybersecurity as somewhat outside their realm of responsibility, and they reported that they would be unlikely to speak to students about network security.
And the survey also showed that Sci-Hub was considered a bit of a paradox. Librarians felt that recommending the use of websites like Sci-Hub was wrong, but they were also sympathetic to some of the values that these websites promote, such as free access to research, because it would benefit their students' learning. So although there was some familiarity with the name Sci-Hub, most lacked a comprehensive understanding of what it is, how it works, and the associated security risks, and some weren't sure whether it was illegal. SNSI was formed to protect the scholarly record and preserve the mission of the institutions that curate scholarly literature, and it's the common goal of the scholarly community to uphold these standards for generations of researchers. We invite you to leverage our experience and combined expertise to support this vital endeavor. There's a link on your screen that leads to an information security checklist for librarians, kind of a do's and don'ts protocol list to consult at your institution, and I'll place it in the chat after I stop sharing my screen so you can go look it up. And now we'll take any questions, according to Robin's schedule. Any questions you have on the topic? Thanks, Rick and Sharon. Really, really interesting. For me, I think that conflict between the protection of rights and the desire for openness and access is really challenging, isn't it? I don't know whether you got the impression, and you talked quite a bit about librarians' attitudes, but I wonder, you know, would they see the sharing of credentials with piracy sites as a compromise or a positive? Sharon, do you want to respond first? I'm sorry, I was looking at the Q&A just now. Speaking as a librarian who has had a lot of conversations on this issue with my colleagues, I can say that there are very few librarians who would say that sharing network credentials is a wise or good or ethical thing to do.
At the same time, I find that there are relatively few librarians who are willing to condemn it. There seems to be a sort of general philosophy of "the enemy of my enemy is my friend," and increasingly in the library profession publishers are seen as the enemy. And so, you know, many of us in recent years have become more tribal in our thinking, and it can be hard to acknowledge that someone you generally think of as part of the enemy tribe might have legitimate rights or concerns or complaints, and I think that makes it harder for us as librarians to talk in a dispassionate and analytical way about issues like massive copyright infringement. But again, these are two somewhat connected but conceptually very separate issues. There's the question of copyright breach, and then there's the question of cybersecurity. In the context of Sci-Hub they are functionally connected, but they're separate issues. So one could feel that copyright in the context of scholarship doesn't even make any sense, that all scholarship should be free to everybody without any restriction, and still at the same time be concerned about the threat to network security posed by students or faculty sharing their network credentials. You know, the same credentials that give you access to licensed resources might be the same credentials that give you access to your students' grades and your email and your tax forms. These are issues that organizations like SNSI are trying to raise people's consciousness about. It's not just about licensed copyrighted work; it's about protecting our students and our staff from network security breaches.
And I underscore that completely. Even if all material were open access, there would still be a need to maintain a level of security for the university network. The issues overlap, but they're also very separate. And judging by the continual reference to Sci-Hub, I guess it really is seen as a major security threat, possibly because of its philanthropic approach. Well, it's a security threat because its entire model is built on people sharing their network credentials. Yeah, exactly. Okay, thank you. So, we've got some questions coming in, and there's a question for Amanda and Sandy, one which struck me as well: when you were designing and delivering the workshops, did you encounter any barriers or rejection, in the sense that this is not for the library to be doing? I can start, and Sandy can jump in if there's anything else. I would say I don't think we encountered any specific barriers or rejection from our supervisors or from other staff at the library. We're really lucky to have an organizational culture where, if you come up with a workshop idea, you pretty much just put it on the calendar, and unless it's something really out of the box you might not raise too many eyebrows. So they were really supportive of us, and at the very first session that we ran we actually had a lot of library staff attend, and we were invited by our library groups to give that same session specifically to librarians.
So they were really interested in what we were doing as well, and they helped us spread the word a lot, which was really great. And I think we're lucky, because that might not be the case at every institution, to have the freedom to say: this is the type of workshop we want to run, and we're going to put it out there. I don't know, Sandy, if there's anything else. Yeah, so on the institutional side, our colleagues have always been super supportive, and we offer these workshops as part of the digital scholarship hub, so in a way it really promotes and supports our work. I will say, though, that on the side of the participants, those have been very lovely and very open-minded. You can encounter pushback no matter who or what you teach as a liaison librarian, in that sometimes people may question you or the angle that you're coming at it from, but it hasn't happened that much. It's something that does happen, but we've been very lucky that we've been very supported, and probably because these issues are not really discussed in a lot of classes, people have been very open-minded and have contributed to the discussions. So we haven't really hit any major barriers in that sense. And in your presentation you talked about engaging with a wider community, and a linked question I had was: what sort of impact are you looking for from your initiative, you know, what do you think will happen? So I guess that's something that the pre- and post-tests will hopefully tell us a little bit more about. I think right now our main goal is to provide a space for these types of conversations, because we noticed they weren't happening on campus. We've seen a lot of groups; one of the ones that we like to talk about a lot is this.
The University of Toronto has what's called the 99 AI Challenge, where they grouped together students, faculty, staff, and members of the public. They had 99 participants, and they did a multi-semester project: a self-paced learning course on AI followed by discussion sessions. We saw that happening in such a unique way, and we didn't see anything like it happening in our own space, so we wanted to provide something similar. As for how far we go beyond that, that's something I don't think we've determined just yet. I think our goal is really still pretty base-level: just getting people to discuss this, getting people involved in the conversation. One of the things that we're starting to do more at the end of our sessions is a call to action. So we're trying to invite students to participate in other community groups about AI, to get involved in the discourse, to talk to their colleagues, to join clubs, and basically, if this is something they're passionate about, to keep the conversation going. And I think, too, from my point of view, we'll see different impact with different groups. We haven't seen much during the pandemic, because it's been so restrictive to get access to the library, but when we first ran the sessions we had students, we had alumni, we had a group called Friends of the Library, which tends to be maybe a little bit older, too. That was really interesting, because those different populations interacted with the information differently. So I'm interested in seeing what they take away and, you know, the impact it's going to have on those communities in particular. And just sticking for a second with Amanda and Sandy: you mentioned, not quite plaintively, the type of support you would need, or would like to have, to establish a program and so on.
You know, what would you look for? What struck me was the fact that what you're doing is permeating a lot of academic disciplines and bringing huge value to all sorts of different areas. So we're just interested to hear what you think: what support, and what type of support, you'd need, and what you could be doing with it. So it would be fun to have, I think specifically, the ability to purchase tech. We've done a lot of research on voice assistants, and it would be interesting to get some of those tools and get people to interact with them, build specific profiles, see how they respond to certain questions. And maybe with the growth of our digital scholarship hub, at some point we'll be able to invest a little bit more in tech. Right now the main thing we invest is our time, so that's what we're doing. I don't know if Amanda has other ideas of what we could do with it. No, I think that's pretty much it. As Sandy said, a lot of our work comes back to virtual assistants, which is why you have seen those questions in our pre- and post-tests. We're really interested in that aspect of the user experience. And I think one of the things, when we started looking at AI in general four years ago, was eventually getting to an AI experience of some kind: a space with tech, some place where we can collaborate with it, more from a user perspective, because a lot of AI happens behind the scenes. And I mean, even just listening to the presentation before about Sci-Hub, there are all of these different websites that have AI-powered search; some of them might be open access search tools like Semantic Scholar, where, you know, they're designed to search for open access articles and they have AI-powered technology. With Sci-Hub, we have no idea how that algorithm is updating.
And as librarians, we don't, you know, say "hey, go to Sci-Hub, go look this up for your papers," but it happens anyway, so we should be aware of those things. I was actually just going through your checklist as well, and one of the points there was to have an open forum for students, and I think that's what we've been doing with AI. So I think doing that with the cybersecurity aspect is a really interesting way to tie a lot of these conversations together. And we seem to be circling back to the Sci-Hub questions. Those are questions around: is it too simplistic to say that keeping Sci-Hub away from our network and educating researchers not to give out their credentials is a cybersecurity issue for IT professionals, while educating them not to use it to access papers the library already holds is a copyright issue for librarians? And do you see any difference in awareness and effectiveness where there are converged library-IT directorates? Yeah, that's a really good question. Sharon, did you want to say anything about that? It's not necessarily the only threat, I'm having trouble articulating this, but it certainly is one of the larger threats. And I believe the person who wrote this actually addressed it specifically to you, Rick; I think they want the librarian's perspective on it. But it's not just about people not sharing their credentials; it's also about being aware of what information is associated with the university, and it's often difficult within the library community for the librarian to know, is this my responsibility or not? Oftentimes the CISOs within the university would definitely have something to say about the topic, but they may not be informed when there's a security breach, so there needs to be communication.
So there needs to be an action plan within the university, like some major corporations have. I mean, I know there was a lot of information stolen from pharmaceutical companies in the race to find immunizations and treatments for COVID. That was talked about less frequently because corporations didn't really want to reveal that it had happened. But if universities can speak openly with each other and within the university, they're better able to tackle a problem when it happens. Yeah, and the only thing I would add, and I agree with everything Sharon just said, is about the part of the question asking whether we have two different classes of issue here: one is the threat to the network that arises from sharing credentials, and the other is the pure copyright issue of students going to Sci-Hub and using it to download copyrighted material to which they don't have legal access. And it's true, those are two different issues, and they're both important. As librarians, you know, I've been a librarian long enough to remember when librarians used to say "we're the greatest champions of copyright," and we don't really talk like that very much anymore. But it's also true that when students download content from Sci-Hub, they are unwittingly sharing information about themselves that current evidence seems to suggest is actually being weaponized by Sci-Hub, and possibly by the actors behind Sci-Hub. So the two issues do overlap a little bit, but students downloading content is mainly a copyright issue, and students and staff sharing network credentials is mainly a network security issue. I'll just share a really fast anecdote. I've got a friend who is a professor at a university elsewhere in the US, and she said that when she learned about what Sci-Hub was doing, she contacted two people on her campus: her library liaison and an IT person.
And she said the IT person immediately responded, freaking out, saying "wait a minute, wait a minute, what is going on with this?" and the librarian never responded at all. I thought that was a little bit embarrassing, frankly, for me. Thank you. So, I'm aware we're on time. There's just one last question in the chat which I think is well worth having a look at: our institutions are seeing an increase in academic integrity issues following the shift to online learning, and given the age differential in perceptions about what's acceptable, is there an urgent need to fundamentally rethink training? Any of you might want to answer that. I think it speaks to Sandy and Amanda's observations in their study. Yeah, well, that's actually one of the things I've seen: AI-powered tools like Turnitin, I think, are a really popular example. I've been seeing others at, you know, the faculties that I liaise with, tools they've been using to review this kind of content. I've also seen schools use algorithmic decision-making to assign people grades during the pandemic. So there's been a lot of conversation about how academic integrity might shift, because the students think that, you know, an algorithm is just going to give them their grade. Do they have to put a certain effort into it? Can they, you know, maybe bend the rules a little bit when it comes to that kind of stuff? And I think that actually leads back into the copyright discussion as well.
I think in terms of seeing an increase in academic integrity issues, the answer would be yes, but I also think it's up to us to reframe academic integrity for this new generation of learners, and maybe to shift our academic integrity standards in a way. I don't necessarily mean changing what counts as copyright violation or stealing, but rather how we present that to students, how we interact with this technology and with them, and how we let them know what this technology is. I think there are lots of conversations to be had about how we can reframe those perspectives. I think so too, and I think students are definitely aware of those tools, because I manage a virtual reference system and we often get questions about Turnitin and similar systems. Students want to run their work through it; they want access to it to make sure that they haven't unintentionally plagiarized or stolen things. But it's also interesting, and I agree with Amanda, that we may need to reset the boundaries a little bit, because those concepts are somewhat very Western, right? They're not necessarily applied in all cultures, so there's definitely a conversation to be had on this. Not that we should 100% get rid of it, but maybe the bounds, and how we conceptualize those, should be discussed a little bit more.