get started. Welcome, everybody. Thanks for joining us today. I'm Cliff Lynch, the Director of the Coalition for Networked Information. I'll be introducing this session. This is one of the project briefing sessions for week three of the CNI fall 2020 virtual meeting. To remind you, week three is focused on technology standards and infrastructure, and certainly issues around privacy and the interactions of privacy and technology fit very squarely there. A couple of logistical things. We are recording the session and it will be subsequently available. There is closed captioning; please make use of that if it's helpful. There is a chat, and you're welcome to use that as we go. And there's a Q&A box; you can post questions as they occur to you. And after Ken finishes, he'll work through questions with the assistance of Diane Goldenberg-Hart from CNI. So let me just very briefly introduce this session. I am, as always, delighted to welcome back to CNI the one and only Ken Klingenstein from Internet2, who has as good a view of privacy, identity, security, access management, and related issues as anyone I can imagine. We're not allowed to have buffets anymore because of the pandemic, but if we were, I would characterize this as a bit of a buffet of recent developments that are on Ken's mind, that have affected many of us, and that promise to affect us in new ways in the future. I suspect you will have a lot of questions for Ken. I know I have a couple that I'm going to invite him to speculate on if there's time, particularly if you don't. But with that, I'll just thank Ken for coming to make us all smarter on these topics and turn it over to you. Welcome, Ken.

Thank you, Cliff. Sound check, things okay? Sounds good. Okay. And thanks to all of you for attending and for finding the room, as it were. I'm usually parked at the very end of the Omni. We were reminiscing about the Omni Hotel in DC. So I'm glad you found it.
For those who are watching this in a delayed fashion on video, I'd be very interested in responding to any comments and questions you might have from this presentation. I'm kjk@internet2.edu, and my contact information is at various places on the internet. You can find me, and I'd be thrilled to answer stuff. For those who are participating live, we do have Q&A. Diane's going to moderate things, but I'm glad to interrupt the flow along the way. And then finally, we're on the cusp, I believe, of being able to address some of the long-standing desires of the library community with regard to combining both privacy and access control. And now that it looks like we can get there, I'm curious about what the use cases are: what are the unique access control issues that people might have that might challenge the approach that we're taking along the way? So I hope we can sink into that. With that, the set of topics. I want to begin with the stew, and talk about what's happening out there. This will be largely presentation before we get to the interactive part, which is the stewardship element. The stew will consist of updates on GDPR; it's been very active in the last year in terms of developments in that space of privacy and management of user rights. We'll talk a little bit about Privacy Shield. I'm not sure that for the community gathered here Privacy Shield was in fact much of a shield, but for some of your universities and institutions and for a large number of corporations, Privacy Shield was a way to interact with the Europeans, and that has been struck down, and we'll spend a little bit of time talking about how to manage the aftermath of that. I want to look at the stuff happening up north in Canada, partially because some number of the attendees at CNI are traditionally Canadian, but largely because the Canadians are doing it really right, and I want to put a bit of a highlight on that. I'll just touch briefly on how COVID-19 tracing and privacy interact.
Many of you on this call may have a more profound understanding, and it would be great to exchange that. Finally, I want to talk about a piece of work that we've been developing now for several years on attribute release with consent. In the last few weeks, that activity has started to engage with FIM4L, Federated Identity Management for Libraries, and we're getting to a point where we can begin again to do proper access control on our collections. The stewardship piece is a lot more about process. How do you chart a course given all the things that are happening there? Where in that path do libraries fit? How do you separate the wheat from the chaff? Many campuses have created a chief privacy officer; it's almost always a second title for somebody who already had a title. The question of what their responsibilities are and how they juggle that is interesting. Finally, we'll talk about how you can build some privacy partnerships on campus. Our example for that will be how to manage these Seamless Access attribute bundles as they make their way into the IT world from a group that has been largely librarians shaping that. That's the map going forward. With that, I'd like to borrow a slide from Daniel Solove on GDPR. Daniel has a number of really good resources about this; he provides them freely with acknowledgement. I would point out that there are a lot of elements to GDPR. You might be familiar with some number of them and not the others. That players list is important: there are data subjects (us), data controllers (the people who hold our data), data processors (the people who ask for our data from the data controllers), and the supervisory authorities (the people who make sure that rules are being followed). At one point, there was concern that the GDPR only applied to Europeans, and then all of the edge cases percolated up that made its impact far broader. If you have campuses in Europe, you're covered.
If you have European students in the US on your campus, GDPR affects them. It is truly transnational in some of its impacts. That lawful processing list just below the territorial scope we'll get into a bit more; it's where I tend to live. You'll see consent right next to it. It says attributes can flow only if certain lawful conditions are met: a contract, a legal obligation, the legitimate interests of the controller, some national security issues, or consent. Those are high bars. And the EU has put a very high bar on consent: when can you use it, and how well must you present it? When can you use it? It must be in a truly symmetric relationship. Often, relationships with our data processors are not symmetric. There are instances where they are, and then we can start to use consent. It has to be informed; I'll spend a bit of time on what that means. It has to be unambiguous. It needs to be revocable, but in the future tense, not the past tense, so you can't withdraw what's been seen already. There is something called the right to erasure; of all of the aspects of GDPR, that's been the most difficult to implement. Think about backup tapes. The enforcement levels are quite high. There needs to be special treatment for sensitive data, et cetera, et cetera. Again, a very comprehensive set of requirements, a high bar in general. In particular, two things have emerged in the last several years around GDPR that are relevant to the access control issues that libraries face: what's the basis of release for attributes, and what's the purpose of use for the information that's been released? Again, basis for release: just a limited number of categories, very carefully identified, and, frankly, abused, and I'll come back to that in a second. Purpose of use is really pretty vexing. If you're releasing information, how is the relying party receiving this information going to use it? Various verticals have developed taxonomies for purpose of use.
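To make the lawful-basis discussion concrete, here is a minimal sketch in Python of how an identity provider might gate an attribute release on a GDPR lawful basis. The enum values and the `Consent` checks mirror the conditions just described (informed, unambiguous, freely given in a symmetric relationship, revocable only going forward); all of the names are hypothetical illustrations, not from any real implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class LawfulBasis(Enum):
    CONSENT = auto()
    CONTRACT = auto()
    LEGAL_OBLIGATION = auto()
    VITAL_INTERESTS = auto()
    PUBLIC_TASK = auto()
    LEGITIMATE_INTERESTS = auto()

@dataclass
class Consent:
    informed: bool        # the user saw what is released and why
    unambiguous: bool     # an affirmative act, not a pre-ticked box
    freely_given: bool    # the relationship is symmetric enough to refuse
    withdrawn: bool = False

def may_release(basis: LawfulBasis, consent: Optional[Consent] = None) -> bool:
    """Gate an attribute release on a GDPR lawful basis.

    Consent only counts if it meets the high bar described above;
    withdrawal stops future releases, not ones already made.
    """
    if basis is LawfulBasis.CONSENT:
        return (consent is not None
                and consent.informed
                and consent.unambiguous
                and consent.freely_given
                and not consent.withdrawn)
    # The other bases are decided by policy and counsel, not by the user.
    return True
```

The point of the sketch is the asymmetry: every basis except consent is an institutional determination, while consent is only valid when all of its conditions hold at once.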
In particular, the Interactive Advertising Bureau has been very conspicuous in this space. We don't have one, and that purpose of use field turns out to have percolated up in May, when the European Court of Justice, I believe, tightened up a lot of the abuses around GDPR. In particular, there was a massive abuse of legitimate interest, where people would say: I don't have to ask for your information, I can just take it, because you came to this site, so you must have meant to give up that information. That's not clearly the case. Poorly done consent, and cookie walls. So perhaps the most dramatic impact of these May rulings is now visible if you're visiting websites that are frequented by people who are covered by GDPR. You'll have noticed that the accept-cookies banner has been replaced with a finer-grained set of options. You can click to accept all of them, the typical cookie wall, or you can click on that box, typically at the bottom of the screen, and discover there are four sets of cookies, and you can choose which ones to accept: a set of functional ones, a set of style ones, a set of marketing and tracking ones, and so on. And in particular, I always go in now when I see that refined set of cookie options and deselect everything but the functional ones. I appreciate that control, and in fact, I assume I'm being tracked less as a result. And that, again, pushes on fine-grained controls. I did include a citation; EDPB stands for the European Data Protection Board. So there are lots of commentaries about that. Before I move on to Privacy Shield: any questions? Speed okay? Any refinements that folks would like? Okay. So Privacy Shield was a kludge to replace a hack. The original hack was Safe Harbor, and it covered transfer of European data to the U.S. It gave people a sense of comfort that data being sent to the U.S. was going to be protected. And it covered a whole lot of use cases.
Some of the places where it got gunky were when you have a multinational company trying to pass data from European employees to the U.S. headquarters, and you need protections around that. There were also instances with customers, and there were social use cases. And in fact, it was the social use cases that triggered the demise of Privacy Shield, which was the replacement for the original hack of Safe Harbor. In particular, the European Court of Justice, the ECJ, ruled in Schrems II. Schrems is the Austrian student who asked Facebook for all of the data that Facebook had on him. Facebook sent an early list that was slim, and then sent several hundred pages of data that they had collected. Schrems sued, the case reached the European Court of Justice, and he won. And so Privacy Shield is toast. The core reason that the European Court of Justice struck down Privacy Shield, by the way, was that they didn't trust the U.S. government. That might be a very reasonable position, but they felt that the U.S. government had numerous legal mechanisms to force U.S. companies to provide data that was supposed to be protected by Privacy Shield. It impacts a lot, though in our community not so much. But you may start to see some contract clauses in content being provided by European resources that attempt to address some of the protection that ostensibly Privacy Shield gave. You can also do encryption upon encryption as a protection mechanism, but that comes with lots of costs, computational and otherwise. Moving up north: DIACC stands for the Digital ID and Authentication Council of Canada, and they've created a pan-Canadian trust framework. It's one to watch. It was developed much as the U.S. tried to develop an effort around NSTIC. I am a survivor of that effort; some of the software I'll talk about in a few minutes is also a survivor of that effort, but that effort went down in flames, and later on we can talk about that one.
They have a trustmark called Voilà, which is appropriate for a bilingual country, and if you see that trustmark on a website, you should believe that the website is playing by the rules that the pan-Canadian trust framework sets out. There's an impressive list of identity providers, including banks, social media, and governments, and an impressive list of relying parties planning on using it. It's just starting to roll out now, but because it's embracing some existing infrastructure as part of its approach, you could say that it has a great deal of traction already. In particular, in British Columbia, you can use an identity issued by a bank, a government, or a number of other places, a validated social identity that's been verified, and use that to file taxes, get building permits, and do a lot of interactions with the British Columbia government. I'm very impressed with it because I live in the notice and consent space, and they have some extremely nice notice and consent recommendations. First of all, they pull out notice and consent as full-fledged members of the protocol stack. It isn't an aside, as in GDPR with all of those bases for release we talked about a few minutes ago. Here, when in doubt, use notice and consent, and I'll come back to that in a second. It has authentication, but authentication is no more important than notice and consent, which is wonderful. For those of us who got into federated identity to move attributes around, we lost our way and ended up doing single sign-on across the internet, because there was an immediate need, with very few attributes flowing, and that's been a source of frustration. Now we might be able to get back into having attributes flow; I'll come back and touch on that as well. They have the concepts of verified persons and verified organizations, and then they have some infrastructure. It's an elegant framework that privacy cuts across.
For the notice and consent requirements, again, for those of us who live in this landscape, they're very welcome. Consent will normally be sought; it isn't presumed from legitimate interest, or from some kind of contract signed years ago. When in doubt, do consent or notice: notice in the places where things have to be provided, consent where there's some discretion. It's always opt-in. It takes place at the time of transaction, but it can be given for a period of time, like for a subscription service. Withdrawal of consent, as mentioned, applies to future transactions where you've given it for an extended period of time, but you can't close the barn door when the horses have left. It's always explicit and in language that is easily understood, and that is quite a challenge; I'll show you some screens where we've attempted to do that. And then there should be a place where you have a console, a dashboard, where all of my consents are and where I can manage them, and I'll give you an example of one of those as well. That consent dashboard idea has caught fire in Europe as well, and I expect that to be the next set of efforts going forward. So I just want to touch briefly on COVID-19 and privacy, because it has had huge impacts on everything, including privacy: rapid adoption of new cloud services, which we all went through. We didn't really have the time to look at the data protection clauses or the intellectual property clauses, because we couldn't have stayed open without those services. So there were a number of privacy concerns that got sidelined by the urgency of providing online education. I think we need to revisit those. At a recent conference in Europe, Gartner talked about how, as a result of COVID-19, the future arrived 57 months early. I'm not quite sure how they got the number 57, but in fact, we're dealing in a future world now and scrambling to make it work.
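Those consent rules (opt-in at transaction time, valid for a stated period, withdrawal affecting only future transactions) can be sketched as a small record type. This is an illustration of the semantics only, not DIACC's actual specification, and the names are invented.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ConsentGrant:
    attribute: str
    granted_at: datetime                    # opt-in, at the time of transaction
    valid_for: timedelta                    # e.g. the life of a subscription
    withdrawn_at: Optional[datetime] = None

    def covers(self, when: datetime) -> bool:
        """True if a release at time `when` is authorized by this grant."""
        if when < self.granted_at:
            return False                    # consent is never retroactive
        if when > self.granted_at + self.valid_for:
            return False                    # the grant has expired
        if self.withdrawn_at is not None and when >= self.withdrawn_at:
            return False                    # withdrawal closes the door going forward
        return True                         # releases before withdrawal stay valid
```

So a grant made on January 1 for a year and withdrawn on February 1 still covered the January releases, but nothing after February 1: the barn door closes, but the horses that left stay out.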
I won't get into the subtleties of contact tracing, but I would assume that some of you have followed the nuances of the various approaches. We've gone, thankfully, from a very centralized approach, which would often use mobile devices, but would use your phone numbers on the mobile devices and gather from the cell phone towers that your phone interacted with where you were and who was around you. That had major privacy impacts along the way. We've migrated from that centralized approach, by and large, to having most contact tracing apps, and I have one on my phone, be decentralized. So my phone is keeping track of what other phones have come within Bluetooth range of my phone, but no one else knows that but my phone. There is now a thread to be followed if the owner of one of those phones comes down with COVID-19: a decentralized path that would let the owner of that phone notify owners of phones that had been proximate that there was a positive test. That came at some functionality expense, but I was pleased to see it. Ironically, that approach of privacy preservation in contact tracing was championed by Google and Apple, and I tend not to think of Google and Apple as privacy-preserving companies, but in this case, they were an inspiration. So before I dive into this one, let me just pause; this is a natural pause point. See if any questions have percolated up? Comments? I'm not seeing any questions right now, Ken. Thanks, Diane. Okay. So I want to talk about one tool in particular, and we're just starting to get serious engagement on this tool that has been in development for a number of years: consent-informed attribute release, or CAR. It's a joint effort of Internet2 and Duke University, and the deep bench of talent that sits at Duke University. It emerged from the NSTIC grant on scalable privacy; I think it might be the only floating guppy left over from that NSTIC effort.
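The decentralized design can be sketched in a few lines of Python. This is a toy model of the idea, not the actual Google/Apple Exposure Notification protocol: each phone broadcasts short-lived random tokens, remembers the tokens it hears over Bluetooth, and only ever compares its local memory against tokens voluntarily published after a positive test.

```python
import secrets

class Phone:
    """Toy model of decentralized proximity tracing."""

    def __init__(self) -> None:
        self.my_tokens: list = []   # tokens this phone has broadcast
        self.heard: set = set()     # tokens heard nearby; never leaves the phone

    def broadcast(self) -> str:
        # A fresh random token each time: rotating, unlinkable to identity.
        token = secrets.token_hex(16)
        self.my_tokens.append(token)
        return token

    def hear(self, token: str) -> None:
        self.heard.add(token)       # stored locally only

    def exposure_check(self, positive_tokens: set) -> bool:
        # Runs on-device against tokens published by users who chose
        # to report a positive test; no central party learns contacts.
        return bool(self.heard & positive_tokens)
```

If Bob's phone heard one of Alice's tokens and Alice later publishes her tokens after a positive test, Bob's on-device check fires; nobody else ever learns that the two phones met.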
It provides effective end-user management for release of attributes and information items. You can do it inline, and there's a self-service mechanism, as DIACC and the consent dashboards describe. There's very effective enterprise management of how consent and notice are presented and how policy around that is formulated. I think we all have some use cases, and I'll touch on one or two in a second, where some information really needs to be provided by the user, and other information needs to be consented to for release by the user. Some of the information that has to be released might be in that functional category that we talked about in terms of cookies, where it's necessary for the access control function, and some of it might be compensatory stuff, where a user has color blindness and wants the screen presented in a certain fashion, and those kinds of attributes could be consented. And then there are some negative attributes, where we don't want the user to be able to suppress them. We all have negative rights: somebody is banned from using the VPN, somebody is prohibited from using another resource. You don't want a user to be able to suppress information that's important for the relying party to have. So there are mechanisms both for shipping information and for giving users a choice about shipping information. It turns out there are unexpected compliance benefits. Your lawyers at your institution are probably keeping track of what attributes are being released to which relying parties; that's a requirement of GDPR. They may be keeping that stuff on a spreadsheet; I know of a couple of universities doing that. That tracking of what information is being released to which relying parties, per user, is a simple report out of the CAR software. Click a button and your manual spreadsheet gets replaced with a report that's issued at whatever frequency. CAR is open source software. By the way, it works for OIDC.
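That compliance report is conceptually just an aggregation over a release log. CAR has its own reporting; the sketch below only illustrates the idea, with an invented event format.

```python
from collections import defaultdict

def release_report(events):
    """Summarize which attributes each user released to each relying party.

    `events` is an iterable of (user, relying_party, attribute) tuples,
    the kind of log a consent service accumulates. The output is the
    report that replaces a hand-maintained compliance spreadsheet.
    """
    report = defaultdict(set)
    for user, rp, attribute in events:
        report[(user, rp)].add(attribute)
    return {key: sorted(attrs) for key, attrs in report.items()}
```

Run at whatever frequency the lawyers want, this answers the GDPR bookkeeping question (who released what, to whom) directly from the log rather than from manual record-keeping.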
It works for SAML and Shibboleth. It works for batch feeds. One of the painful aspects of our privacy obligations on our campuses is that we're policing, maybe, the release of information via the real-time interactive stuff, but we have these batch feeds going on: batch feeds provisioning our user base to Zoom, batch feeds provisioning to Google Docs, batch feeds provisioning to third parties who are doing alcohol education services for us. Those batch feeds are not covered today, and so if a student has selected FERPA suppression, it's not reflected in their privacy settings in these batch feeds. This gives us a mechanism for that. It's open source software. And for those of us who go back way too far, the original Shibboleth t-shirt said "will work for attributes." This is what that's about. So I just wanted to show you a typical screen. The identity provider here is Amber, from some arcane novels. And here I am going to a Research R Us site, and they want attributes. I wanted to show you a couple of things about this screen. First of all, over here on the right, a reminder of what site you're going to, and the privacy policy for that site. Over here on the left, we say review and edit what you provide to this site, because this is stuff that you have preset; this is what you did last time, as it were. The release and deny options are in the typical red and green permits and denies. The attribute being released is presented in English or in Spanish; CAR is multilingual, and it does Chinese quite well. The value being released is also presented. Now, you can't correct values that are being released in this environment; you'll need to go to the system of record to correct that, but at least you're notified, in the case that a value is improperly set, that you need to change it. And then underneath that, in italics, is the purpose of use. We're finally presenting, for the first time, why an attribute could be used or needed along the way. These things are easy to change.
Just buttons. This has been tested extensively on user populations across a variety of ages and backgrounds, and it seems to be fine. In fact, the target moved on us while we were working on this. Early on, in the days of federated identity, the general feeling was that users couldn't do consent. And then Google and other social environments began to do consent screens, badly at first, increasingly better over time, and users got used to the fact that they might need to do consent. Now we have a population that seems to be fairly comfortable with consent. Notice on the bottom it says "don't show this screen next time." Unless you're a consent and notice person, you don't want to see this more than once to get to a resource, so you can suppress it. Then, if you want to change your policy, you can go to the self-service console and change it. And if you as an institution want to have policies changed for users when something changes in the environment, you can trigger that: we want you to re-consent. What kind of changes could happen in the environment? Suppose Research R Us changes their privacy policy. With a single command, you have the ability to ask all users going to Research R Us to reconfirm their release policies. That could be a handy tool down the road. You can also re-consent if the value being released has changed. Here's another setting; I just wanted to give you a different view on this. This was for faculty trying to get to content, and it's important because it begins to do fine-grained access control. In this one I'm releasing, not identity, well, I'm releasing a display name, but that doesn't have to be identity. But I am releasing my affiliations, and that allows me to get to departmentally licensed content. And so we have access control: these departmental fund codes, and I'd welcome better wording on that from anybody in the community who has knowledge of that.
These are expressed as group memberships, very simple to manage from the identity provider viewpoint, and that restricts content to members of the school of law or sociology, etc. And finally, here's my self-service console. Here are all of the places I've set attribute release and consent policies for, and when I've updated them, and the manage button will allow me to go in and change my policies. So if I've decided that a site is becoming uncomfortable to give certain information to, I can manage that. If I've decided that I really want to change the way a site presents information, in response to an accessibility concern that I now have, I can manage that as well. This next slide is for any geeks in the audience: there are APIs all around the place. I just wanted to show you, in particular down at the bottom, that there are two figures. You as a user get to use that intercept interface, and you also get to use the self-service interface. As administrators, there are two other interfaces that I haven't shown in this one that would allow the sysadmin to set institutional policies and would allow a privacy person or a librarian to administer a subset of those capabilities. This would allow a librarian, for example, who has signed a contract for certain content, to set an access release policy appropriate to that content. This stuff can get very complicated, and I don't want to dive into it, given that I want to leave time for questions. But let me pull up one example of where an institution has created a mix of policies that need some triage. We'll talk about Harvard, which allows you as a librarian, when you buy content, to set a release policy for all of the students and faculty so they can get to that content. Except not all students.
Because there is also at Harvard an office of special students, special in the sense that they're the children of presidents or the children of shahs or other kinds of very important people, and the office of special students sets policies for the release of information for those particular students. So we have the need to meld a comprehensive policy set by a librarian with a specific policy set by an office within the campus that protects certain students. That's a very interesting triage that we think we can manage. Okay, that ends the stew part, and I want to talk a little bit about the stewardship next. How does a campus manage all of these developments? How do you stay aware of this? The answer, I'll cut to the chase, is: poorly. The stewardship piece is not what we'd like it to be. I'm going to lean a bit on a publication that just came out from ECAR, and I want to thank the folks at EDUCAUSE, about the evolving landscape of data privacy in higher education. From that I've drawn a couple of graphs that we'll be talking about. I want to talk about who the players are, what the processes are, and what the partnerships are. So who are the players? Well, legal and compliance is very big in this. In fact, over the years I've seen the emergence of a lot more compliance aspects in enforcing privacy; I think that's a result of some of the hefty fines that are being administered out there. And the compliance officer is a natural place for this to reside. Then there's the chief privacy officer. A number of campuses have created those offices, frequently in conjunction with the security office, and it's not clear that that's an appropriate weaving of responsibilities, but it is what it is. Central IT is clearly a player in all of this stuff. The registrar is also a player; they're typically the ones managing FERPA and other kinds of privacy regulations.
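The melding described above has a simple core: deny rules from the protecting office take precedence over the librarian's blanket release. Here is a hypothetical sketch of that precedence; the function and set names are invented for illustration.

```python
def effective_release(attribute: str, user: str,
                      librarian_allows: set,
                      protected_users: set,
                      suppressed_attributes: set) -> bool:
    """Decide whether `attribute` flows for `user`.

    The librarian's policy releases a bundle for everyone covered by the
    license; the office of special students suppresses certain attributes
    for its protected students. The narrower deny always wins.
    """
    if user in protected_users and attribute in suppressed_attributes:
        return False
    return attribute in librarian_allows
```

The design choice worth noting is deny-overrides: a protective policy from a specific office should never be loosened by a broader release policy set elsewhere on campus.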
And I put libraries in question, because up to two years ago it was not clear that the libraries were as actively involved in some of this access control and privacy management as has happened recently. I want to give lots and lots of credit to the group over at Seamless Access, RA21, and NISO. They have convened a community and run a very timely process that has resulted in some access control opportunities that we'll come back and talk about. So that chief privacy officer, again thanks to ECAR, has so many responsibilities that I fear for executing them well. First and foremost: there's been a privacy spill in aisle 4, would you please clean it up? That's a large part of the work, and in fact, those things are hard to clean up. Sometimes you get to create policies about privacy, a lot of data governance issues. Sometimes it's important to do stakeholder privacy training; one of the requirements of GDPR is to actually train the people in the organization who manage privacy. It's explicit. Sometimes it's providing thought leadership, protecting PII, etc. Then at the bottom, some folks unfortunately have the responsibility of trying to police the swamp of privacy issues that students create around social media and big tech. That's a lot of work. It means that often the CPO doesn't get to provide the right amount of focus on some of these issues, and I think in particular with the Seamless Access work, if that enters the portfolio of the privacy officer, it may not emerge. So I want to sound that caution. Again from ECAR, here are some of the things that that privacy officer needs to do. It's a full-time job, and it's typically given to somebody who already had a full-time job in the security space. So I want to talk a little bit about process going forward, and I'd love for this to be interactive, and I see something in, oh, I see a question. So let me answer Robin's question about sensitive attributes and institutional research analytics departments.
So the sensitive attributes as defined by GDPR are stuff that I'm not quite sure, Robin, research analytics will get to see a lot of. The sensitive attributes, as GDPR assigns them, are about health, about religion, about sexual preferences, about accessibility issues; they're very much personal sets of issues. Race and sex? Oh, wow. So I can tell you that both race and sex are covered by GDPR. And so you might start to, I think, Robin, you said you were at Virginia, good people at Virginia, you might start to try to find out who speaks GDPR at UVA and raise those issues with them. It turns out, at least in my narrow view of how CAR handles sensitive attributes, we don't display them at first blush. When you see a screen where you're asked whether you want to release information, and there are sensitive attributes there, we require a second click to display those attributes. Those are the steps that we've taken for protection. But I'm not quite sure if those are the kinds of concerns you were thinking about; I'd love to engage on that going forward. Anything else on that thread? Okay, Seamless Access. This process has been underway for about two years. Fine people in the community have been leading it and keeping a steady march, and out of it has emerged a set of attribute bundles. They're labeled, in fact, let me see if I have that on the next slide, they're labeled as entity categories, but they're really bundles of attributes. There are three bundles that have emerged to compensate for the only bundle that we've had so far in the federated identity space, which is called the research and scholarship bundle. It's highly identity oriented; you release a number of attributes in that R&S bundle that indicate identity. Part of the reason for that is that you're typically accessing major scientific resources, and those relying parties are very concerned about abuse of those resources.
So they want identity, if only for tracking purposes. Out of the work that has been happening in the Seamless Access activity, three bundles have been created: one that is just authentication, one that has anonymous authorization, and one that has pseudonymous authorization. On that screen that I showed you earlier in CAR, where I was releasing some departmental fund codes but not identity, actually in that one I was releasing an email address; I could have suppressed that in the screen I showed. That's a case where you're releasing some information that helps personalization or state or access control. The process that these bundles are going through is that Seamless Access created them. It then referred them over the transom to the identity crowd, which is typically an international group called REFEDS, Research and Education Federations. There's a group that does schema; they looked at these things and had a fair bit of feedback. We're in the feedback-processing stage, and I've got some hope about that that I'll talk about in a second. Once these bundles are anointed, we have to get adoption. It won't be easy. We're going to need adoption at the federation level, and then we're going to need adoption at the identity provider level. A parallel action is being done by a contract language working group that's talking about how you put this stuff in contracts with content providers. That conversation has shown a whole lot of abuse by content providers who want identity. And so if it wasn't released by the identity provider, they'll ask you for identity out of band once you hit the content site. They'll ask you to create an account. They'll do a lot of things to grab you. I'm not quite sure what the business purposes are.
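As a rough picture of how the three new bundles differ from R&S, here is an illustrative mapping from bundle to released attributes. The attribute names are approximations for illustration only; the authoritative definitions are the REFEDS entity category specifications.

```python
# Illustrative only: approximate attribute sets per bundle.
BUNDLES = {
    "authentication-only": set(),                 # proves sign-in, releases nothing
    "anonymous-authorization": {
        "eduPersonScopedAffiliation",             # e.g. member@university.edu
    },
    "pseudonymous-authorization": {
        "eduPersonScopedAffiliation",
        "pairwise-subject-id",                    # stable per-service, not identifying
    },
    "research-and-scholarship": {                 # the older, identity-oriented bundle
        "eduPersonPrincipalName",
        "displayName",
        "mail",
    },
}

def attributes_for(bundle: str) -> set:
    """Attributes an IdP might release under a given bundle (illustrative)."""
    return set(BUNDLES[bundle])
```

The progression matters for libraries: the first two bundles support access control without identity, the pseudonymous one adds enough state for personalization, and only R&S hands over who you actually are.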
But I think the contract language is going to be very important so that a lot of these content providers, including some very big scientific publishing houses, change their policies along the way so that they don't obviate privacy by asking the user for additional information after they're in. There was some concern about mixing attributes about usage and metrics with other kinds of access control. So I want to talk a little bit about how CAR could use these bundles. It turns out I've had conversations just today with the FIM4L crowd, a nice group of people, and we'll be doing some demos of how CAR can serve library needs. But one of the ways that CAR can use these new access bundles is to preconfigure IdP release. So a typical consent screen that CAR presents gives you a list of attributes and allows you some control. It can often display what the institution recommends that you release. It's not binding, but it's a recommendation: releasing these attributes will reduce the friction at the content provider. We could take these attribute bundles, these entity categories that are being done by Seamless Access, and use them to preconfigure the preference settings for the user and then ask for consent. Or if the campus is comfortable just releasing that information without the user's consent, it can do that. I think over time we're not going to be allowed to release attributes without consent, though, from the way that GDPR is heading. In any case, notice and transparency are really good hygiene. Even if you're not giving the user control, it's really decent of the institution to let the user have the option of knowing what's being released. So I think we're into the discussion phase of this. Let's see if there was another chat question. No, okay. So the question for the folks who have gathered here is: do you get called to the table about privacy? And if you don't get called to the table, where should you be called on?
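One way to picture the preconfiguration idea: a consent tool could map the entity-category tags a service provider carries in federation metadata to a default attribute list, then present that as the recommended release. A sketch under stated assumptions: the anonymous and pseudonymous category URIs and the attribute sets below are illustrative (only the research-and-scholarship URI is the long-established REFEDS value), and the function name is made up for this example:

```python
# Illustrative mapping from an SP's entity-category tags (as they would
# appear in federation metadata) to default attribute bundles that a
# consent tool could preload. URIs and attribute lists are assumptions
# for the sketch, not the finally adopted bundles.
DEFAULT_BUNDLES = {
    "https://refeds.org/category/anonymous": [
        "eduPersonScopedAffiliation",  # enough for authorization, no identity
    ],
    "https://refeds.org/category/pseudonymous": [
        "eduPersonScopedAffiliation",
        "pairwise-id",                 # stable per-SP pseudonym
    ],
    "http://refeds.org/category/research-and-scholarship": [
        "displayName", "eduPersonPrincipalName", "givenName", "mail", "sn",
    ],
}

def recommended_release(sp_categories):
    """Union of the default attributes for every category the SP carries;
    the user (or institutional policy) can still trim the list."""
    attrs = set()
    for category in sp_categories:
        attrs.update(DEFAULT_BUNDLES.get(category, []))
    return sorted(attrs)
```

For example, `recommended_release(["https://refeds.org/category/anonymous"])` would preload only the scoped affiliation, so the consent screen starts from the most privacy-preserving position the SP's tags allow.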
I'll tell you right off the bat that when these attribute bundles get released as maybe good things for federations and identity providers to adopt, my guess, having looked at the identity providers in the US, is that most of them will not track this. They won't even know about it. If they know about it, they'll say it's not for them. And so I think right off the bat there's a role for librarians in going over to the IT organization, the people running the federated identity, and saying, there's some stuff rolling down the pike; when it hits, we'd like to be involved. That's going to require learning some new language skills. And again, I think there might be some places beyond the access bundles where the unique perspectives of librarians matter. Librarians have been critical ever since I watched the wonderful conversations about the importance of open shelves and the freedom to browse. And I think that philosophy has got to interject into other activities. With that, I'll take a deep breath. I'll stop and turn this over to you, Diane, for the next phase. Thanks, Ken. That was a lot to think about and a really wonderful presentation, and always so fun to hear it from you. So thanks. Thanks for coming back to CNI once again. And in the spirit of conversation, I just shared with our attendees an invitation to participate in dialogue. So I see that Robin has raised her hand. Go ahead, Robin. I've allowed you to unmute yourself, and anyone else, just raise your hand if you'd like to jump in as well. Go ahead. Thank you. I didn't want to leave you with the impression that the library has just gained a lot of attributes and is not handling them appropriately. But we had some research questions that had to do with diversity. And so this group was seeking more information about individuals who had accessed content. And so I went to the data stewards and got sent to the institutional analytics department.
And, you know, they're going to de-identify the information when it comes back, but we're trying to deal with this gnarly thing of trying to keep that information separate from other information that we need to answer other kinds of research questions. And we want to de-identify everything and start rotating our logs so we don't actually have as much sensitive information. I realize this is not the bulk of your talk, and I'm really interested in, you know, what you're coming up with for providing more privacy to people and having their consent. Gotcha. And so, let's see. A couple of comments. First of all, increasingly I'm pairing the words consent and notice in language. And even in the cases where consent is not appropriate, I'm a big fan of transparency and notice. We in CAR make notice very quiet, but it's there. And a user who's curious about what's being released can easily find it, but it isn't intrusive. Back to your particular situation, Robin. Was the content being accessed through IP access controls or by federated identity? Federated identity. So people use single sign-on, and they check something out from the library, and then that goes into a log. And we want to rotate those logs and not have information around forever, but we have to determine all the research questions that people have, you know, and retain enough information to answer those questions before we de-identify the information. So we're using Shibboleth, basically. Right. So when you talked to the data stewards, did you also talk to the people over in IT, the Jim Jokls, etc., who run the Shibboleth environments, to see what they have in their log files? Yes. They wouldn't have any of their, well, they do have some EZproxy information for us. And I used to work for Jim and run identity management for him. So, you know, he and I are in discussion all the time about this sort of stuff. Gotcha.
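The log-rotation and de-identification goal described here can be sketched concretely: replace the user identifier in each access-log line with a keyed hash, and rotate and then discard the key on the retention schedule, after which even the pseudonymous tokens can no longer be linked back to people. The log format, field order, and function names below are assumptions for illustration, not any site's actual log layout:

```python
import hashlib
import hmac
import secrets

def deidentify_log_line(line: str, salt: bytes) -> str:
    """Replace the leading user identifier in a tab-separated
    'user<TAB>resource<TAB>timestamp' access-log line with a salted hash,
    so usage-pattern research questions can still be answered without
    retaining who accessed what."""
    user, rest = line.split("\t", 1)
    token = hmac.new(salt, user.encode("utf-8"), hashlib.sha256).hexdigest()[:16]
    return f"{token}\t{rest}"

# Rotate the salt with each retention period; once the old salt is
# destroyed, the tokens in archived logs are unlinkable to identities.
salt = secrets.token_bytes(32)
print(deidentify_log_line("jdoe\tejournal-42\t2020-12-07T14:00", salt))
```

Because the hash is keyed and deterministic within one salt period, analysts can still count distinct users or follow one pseudonym's activity inside that window, which is the balance Robin describes between answering research questions and not keeping sensitive information around forever.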
But it's more, you know, we know that this information is sensitive and we don't want it to get out and be coupled with a person's identity. And so I wanted to go through the right data stewards. And I was just surprised when I got sent to the institution's research analytics department. And I was wondering, you know, are we a one-off institution in having that department hold this responsibility? Or is it because we just don't have a chief privacy officer other than our head of security? Good question. So can any of the other participants raise their hand if they have an office of research analytics? You are alone. Someone just weighed in, in chat: or is it because Robin said, quote, research? It might be, yeah. Robin, that threw me, too. It was that you wanted to do research on the data. But the office of institutional research at my old university wouldn't have known where to start on this one. And actually, Mike Furlough, who made the comment, would like to join the conversation, as would Cliff. So please go ahead, Mike. Yeah, I'm just going to make it very quickly. I was just wondering, Robin, whether that was like the office of institutional research, or an analog to that. But I was really kind of being facetious and didn't mean to interrupt the conversation too much. More than once, I've had people send me to the wrong place because I said a magic word, when they were in a hurry to get past me. So I'm not very helpful. Sorry about that. Well, I was just somewhat surprised. This is a fairly new department, but I think it is a new name for the kind of people that would do institutional analytics before. It's just this new responsibility that they have now.
And they told me, well, you give us the data, give us the logs with the authentication ID, and we'll give you back these attributes, or information to answer your research question. But we'll de-identify everything. That was just something completely different from what I've done before, where I'd talk to the data stewards and gain access to what we want. So let me appreciate that. Let me just jump in, because this raises, you know, one or two questions that we won't get a chance to answer. But I'm really curious about content that has access control that would be governed by attributes. For example, citizenship, or being enrolled in a certain class. I showed you an example already of how we can control that. But on citizenship, I know of some medical research journals and databases where you cannot have two different files open at the same time, because they were afraid of correlation attacks. And so the access control there is, if you have this open, you can't have that open, and vice versa. We have all of these new capabilities coming down the road in terms of access control. And I'm curious if we have previously intractable technical situations that we can now handle a bit more gracefully. And I saw Don is on here. Don, you deal with some of that interesting stuff out there, and I wonder whether you ever bump into access control issues where you go, whoa. I'm here. Let me raise a figurative hand here and ask about something a little different. Before I do, I just want to mention, Ken, that within ARL leadership now, I believe there is quite a discussion going on about exactly what the role of research libraries is vis-à-vis privacy on campus. And there's at least one group that wants to position libraries as taking a very strong sort of educational and consultative role. Privacy as a service, if you will. So that might bear on the questions you were asking about roles and who comes to the table with chief privacy officers.
That might just be something you want to explore a little further. The thing I wanted to ask about, though, is that I am really nervous about issues of jurisdiction and patchworks here around privacy. I mean, we already saw a funny reaction in some ways to the GDPR, in that, yes, there were organizations that did business in Europe that had to deal with it. But there were also organizations that did very marginal business in Europe. For example, some local and regional newspapers in the States, where expatriates and people abroad would occasionally look at them, but they really didn't do enough business to bother. And you just see those things blocked when you go to Europe now. They're just doing geo-blocking, saying it's too much trouble to deal with GDPR, so we just won't talk to those people. I have seen some very ambiguous stuff about Europe attempting to essentially assert jurisdiction over Europeans anywhere they may be in the world, and various institutions worried about that. Most recently, we've seen legislation, for example, in California that raises the specter of a patchwork situation even within the US based on state law. I wonder if you have any predictions or a reading on how that's going to play out. You're muted. Okay, interesting question, Cliff. I thought about including CCPA, the California Consumer Privacy Act, in this thread, but it's going to be revised significantly this year if they get to it, because they have a lot of things that they need to fix in that legislation. It's also consumer oriented rather than aimed at our particular use cases. That said, New York has done a privacy law, and other states are following as well. Yes, it is a patchwork. I have almost no hope of it being anything but a patchwork in the US. And so I admire the Canadian effort. It's a tractable country. It's a tractable population, even. And I think they're going to pull it off. And that might wash down to the US.
With Biden coming in, he got to witness the NSTIC effort. I hope he learned all the lessons from that, but it seems to me, except for some of the big tech issues that the Republicans have embraced, that we're not going to get any kind of uniform legislation in the US around this. It's just really sad. It's a reflection, in a way, of the fact that surveys consistently show that in Europe, people trust the government more than they trust corporations in terms of privacy. And in the US, it's the inverse of that: they trust corporations more than they trust the government in terms of privacy. And what people hand over to corporations shocks me, and it shocks you, I'm sure, as well. Is there going to be a patchwork? Probably. Hearing that there are cases where sites are turning away visitors because of the sets of issues around GDPR is intriguing. One of the things we were hoping to do, and may still do, is consent notice as a service. And so we would be able to begin to deliver these capabilities that you saw in CAR without an institution standing up the full set of service offerings. We're a little confounded by a lack of data about where universities are using federated identity and who the relying parties are that they deal with. I know at some campuses there are far more local relying parties than there are in InCommon, but the converse is true elsewhere. Whether we'll get to provide a service that could replace the patchwork with something so attractive and easy that people attend to it for uniformity, sad to say, not on my watch. Thank you. That's helpful. Not necessarily the best news in the world, but very helpful. Ken, thank you so much. You've given us an incredible survey of developments, and I really appreciate it. Shibboleth and federated identity were sent here to serve this community. We got sidetracked, but we've come back home. Yes, you have. And this is one of the places where it started. Yep.
Well, thank you so much, Ken, and thanks to all of our attendees. This was a really great session, very fruitful. And with that, I will close the session and wish everyone a great rest of your day or evening, wherever you are, and hope to see you back at CNI in the coming days and weeks. Take care. Thanks, guys. Bye. Bye. Bye.