Hello, everybody. We're going to start in a couple of minutes, but I did want to make an announcement that our beloved colleagues from UIUC cannot be here today. They got trapped in Champaign-Urbana, so if you're here to hear their talk, you might want to think about another session; we won't be insulted. The only presentation today will be on the BIBFLOW project at UC Davis. Tim and his colleagues send their deep regrets. I think it was freezing fog or something, and they just couldn't get to Chicago, so they sent their apologies.

Okay, it's five past the hour, so I think we're safe to start now. I'm sure people will trickle in from lunch, but I want to keep us on time. We have the luxury of time since there's only one presentation, but that's good because it's a big project, so we'll have ample time to talk about it and leave some time for discussion at the end.

My name is MacKenzie Smith. I'm the University Librarian at the University of California, Davis, and the PI on this project. I'm going to give you a little bit of context for those of you who have not heard presentations on this before, and then I'm going to turn it over to the person who's really doing all the work; I'll introduce him in a minute.

BIBFLOW is a project funded by the IMLS, and it was the culmination of many years of observation on my part that, when it comes to technical services operations, we in the library world are living in the 1970s. We have MARC and other standards that were developed many decades ago. We have technologies, integrated library systems, that were architected and written many decades ago, and they have evolved, but in some really fundamental ways both the infrastructure we're using and the workflows have not kept up with the modern world, and in particular with the web and linked data.

At previous institutions I worked on projects to try to integrate all this great library data that we have across many different silos. You've got MARC, you've got EAD finding aids, you've got GIS data, you've got DDI data, many buckets of data, and very poor ways to integrate, navigate, and exploit that data to support the library's activities.

So we decided to take on the challenge of thinking about what it would look like if the whole library operation flipped to linked data from the ground up, and then how we would get from here to there. That's really what the BIBFLOW project is about: developing a roadmap to help the library community and our technical services operations get from where we are today, which is kind of the 1970s with a little lipstick on it, to 2020, which is where we're going to be very soon.
This has been a very long and arduous project, and there are lots of other people here in this room working on various other aspects of linked data, but this one was really focused on that back-end operation that, as a library director, I'm spending a lot of money on and that could be done much more efficiently. So that's the context of the project. We're just wrapping it up now, after two and a half years or so, and these are the final findings of the project that we'll be sharing with you today.

To do that, I'll introduce my colleague Carl Stahmer, who is the head of our Data and Digital Scholarship department at the UC Davis Library, formerly the director of digital scholarship, and has a long and storied background in linked data, ontologies, and the future of data standards. He's the perfect person to lead you through this work. Thanks, Carl.

All right. What I'm going to give is a high-altitude view of what is in the roadmap itself. We'll deliver that roadmap to IMLS by the 29th; it has to be there, so it will definitely be available then, but hopefully within a week from now we'll actually start publishing it online so that people can get to it, and it has a lot more detail behind the things I'm going to talk about today.

One of the premises here is that we actually have a dual axis of transformation. We have to transform our data, but we also have to transform all of the systems that surround that data, and that's both human and machine: the workflows that are in place, and also the software. We have a lot of different software systems, machine systems, in the library that also communicate with that data. So it's a large-scale effort to, as MacKenzie said, change everything from the ground up.

This isn't a technical talk, so I'm going to try really hard not to get down in the weeds of what linked data is and how it works, but there are a couple of key concepts I want to touch on quickly, because if you don't have these key concepts the rest of the talk is not going to make much sense.

The first is something you'll hear me say quite a bit, which is a linked data ecosystem. When I talk about a linked data ecosystem, what I mean is a fully functioning linked data library, where everything we're doing is now in a linked data space. You see in the visualization that it has four discrete areas that we've identified. One is our cataloging: we need our cataloging to actually happen in a way that takes advantage of linked data, not just have the same records produced and converted to linked data. We've got linked data exchange: instead of exchanging MARC records with other libraries, we would actually be exchanging data as linked data, as triples. Working my way around counterclockwise, we've got linked data discovery: now that we've spent all this time making our data linked data, how is that going to affect our discovery universe?
Hopefully a lot, actually, because there are a lot of new, cool things we can do. And then finally, the shaded-out block you see there is linked data storage: storing our data in a triple store natively. The plan I'm going to outline is designed to get to that, but I do feel obligated to say that it's not strictly required; one could operate fully in a linked data ecosystem without doing that, because the data layer is often not based on whatever your record standard is, the way you think of it.

Real quick here, and this is where I get a little technical, but I'm going to do it: if you think about the way software gets architected, the model-view-controller pattern is something most people who do software development work with, and most of the ILSs you are using work this way. Your model is your data store, and it lives separately; it has its own universe. Then you have two other components that interact with it. Your view is your user interface; that's how you interact with the data universe. And you have a controller, which does the work: two times two is four, convert this string to all uppercase. The functionality happens there, but all the data lives in the data layer, in the model. And already, in your current ILS, that model is not a MARC record and is not MARC-based. This is just a snippet of a small piece of the Kuali OLE system, which I'll call an ILS even though it's not technically one, because that's easier to say. There are some 450 tables and several thousand fields that drive that product. If you look around you can find MARC in it, but it is definitively not a MARC record behind the product, and that's the same for all your ILSs. Yet they deliver MARC: when people catalog, their view looks like MARC, and we exchange MARC. So it's not imperative that your data store become a native triple store, but I'm going to talk about reasons why I actually think you should do that.

The other key concept you have to have is the concept of a URI. Again, I'm going to try not to go into the weeds of linked data, but a URI is a unique identifier; it's a fancy name for a unique identifier. We already have IDs that are unique in a lot of different databases. The key difference is that in linked data land we would share those, in the same way that we now share a particularly formatted string, but a URI is designed to be machine readable: the computer knows how to do something with it. In the example I put up here, we see a human statement, which would be that Shakespeare, William, 1564 to 1616, that properly formatted string, authored the work Hamlet. In the bottom example, you see how that would be configured in linked data. We have the URI for Shakespeare, that unique identifier, and then the label of the name; but the label is just that, the name is now a label, not the thing. Then we have another URI, a MARC relator code from the Library of Congress, saying that he authored it, and then we have an OCLC work ID for Hamlet. So these URIs are really the key to what makes linked data work.
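To make that concrete, here is a minimal sketch, in Python with rdflib, of how that Shakespeare statement might be expressed as triples. The specific identifiers are illustrative: the id.loc.gov name authority and relator URIs follow the real patterns, but the OCLC work URI is a hypothetical placeholder, so treat this as an example of the shape of the data rather than the exact identifiers you would use.

```python
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import RDFS

# Illustrative URIs: an id.loc.gov name authority for Shakespeare,
# the MARC relator code for "author", and a hypothetical OCLC work URI for Hamlet.
shakespeare = URIRef("http://id.loc.gov/authorities/names/n78095332")
author = URIRef("http://id.loc.gov/vocabulary/relators/aut")
hamlet = URIRef("http://www.worldcat.org/entity/work/id/1767697")  # placeholder

g = Graph()
# The label is just a label; the URI is the thing itself.
g.add((shakespeare, RDFS.label, Literal("Shakespeare, William, 1564-1616")))
g.add((shakespeare, author, hamlet))
g.add((hamlet, RDFS.label, Literal("Hamlet")))

print(g.serialize(format="turtle"))
```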
So this graph you're looking at right here is something I just threw together really quickly. It's one of my favorite examples. I looked in Wikipedia, which exposes its content as linked data through Wikidata, so I was able to traverse various sources and quickly create this graph. But that graph only works because we have the URIs; it keys on those URIs. If everyone weren't using the same URI, if a bunch of people weren't using the same URI for Quentin Tarantino, none of these connections would work, and you end up with what I call linkless open data. If we all just minted different URIs and none of them talked to each other, there would be no advantage here. So the URI really is the heart and soul of linked data. RDF matters, triple stores matter, all of that, but what really matters is that we're sharing URIs. Without that, your linked data is not worth much, even though I could formulate it as triples.

So that's the end of my tech drill-down; now we're going to get back to the process. The process we've identified is a two-phase process designed to move a library from a complete MARC universe up to total native linked data operations. Importantly, the reason it's two phases, and I'll go through and describe each step, is that phase one is not that expensive. There aren't many barriers to doing it, and if you move through phase one you could actually be minimally functioning in a linked data universe. You won't be able to capitalize on everything, but you could put out RDF, exchange RDF with people, and work functionally in a linked data universe. Phase two takes you past that, to your full workflows and back end supporting it.

The key to phase one is having your MARC and eating it too, and this is why it's cheap and there's not a big barrier to entry: almost entirely, you don't have to change your workflow at all. Your catalogers can still work in their MARC land, but we're doing just a few crucial things to set ourselves up for the linked data transformation.

The first step is getting URIs inserted into our MARC data. The theory is that if we catalog in MARC, but every place there is a name or a subject or any of those things we already pivot on we are also getting a URI into that universe, then we're setting our data up for a good transformation to linked data, because we now have a map between the label we use and the URI.

Happily, it's pretty easy to do this. This is, again, a screenshot from Kuali OLE, their Describe module, where we were able to go in and, I mean this seriously, in about 45 minutes code in an auto-lookup to the Library of Congress using their linked data gateway. When the person starts typing, it types ahead. Everything feels exactly the same to our cataloger, but when they select that name, this is the name I'm talking about, it grabs the name and the URI and saves both of those things into the MARC. It's effectively invisible to the cataloger; they don't have to know anything about linked data, and they don't have to do anything different here.
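As a rough illustration of that lookup, here is a minimal Python sketch of the kind of type-ahead call such a workbench can make. It assumes the suggest-style endpoint on id.loc.gov for the name authority file and the OpenSearch-style response it returns; the exact endpoint and response shape may differ, so this is a sketch of the idea rather than the code we actually wrote into OLE.

```python
import requests

def suggest_names(query):
    """Type-ahead lookup against the LC linked data service (endpoint assumed)."""
    resp = requests.get(
        "https://id.loc.gov/authorities/names/suggest/",
        params={"q": query},
        timeout=10,
    )
    resp.raise_for_status()
    # OpenSearch-suggestions-style JSON: [query, [labels...], [descriptions...], [URIs...]]
    _, labels, _, uris = resp.json()
    return list(zip(labels, uris))

# The cataloger types "Shakespeare"; the workbench keeps both the label
# and the URI and writes both into the MARC field when one is selected.
for label, uri in suggest_names("Shakespeare, William"):
    print(label, uri)
```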
We modified our Kuali OLE, but I've also seen demonstrations elsewhere. We use Alma as well, so I'm familiar with Ex Libris; they have a linked data pilot going on, and I've already seen demonstrations of an Alma interface that will do this work for you.

So the barrier to entry for step one of phase one is that you have to have a modified cataloging interface, a workbench that does this URI grab. Instead of talking to your local authority file, it either talks to a new kind of local authority file that has the URIs in it or, better yet, just goes out and hits the gateways that are out there, because that's what linked data is all about. And there's every reason to expect this will happen in the very near future. Like I said, we know commercial ILS providers are working on it, so especially if you're in a cloud-based ILS there's virtually no tech overhead in implementing it. It just happens, your catalogers don't have to do much, and you can roll into this universe with very little pain and no retraining of your cataloging staff.

The next thing you have to do, and this one gets a little harder, is some batch URI insertion, because the cataloging interface takes care of everything I do from this day forward, but I've got a whole ton of legacy data that doesn't have URIs in it. About two years ago the PCC formed a task group that was testing URI insertion in MARC, and it did a couple of things. Number one, they just wanted to see whether it would break anything: where would you put things, and would it fry the system? They spent a significant amount of time toying with MARC, finding places that were appropriate and then seeing whether sticking URIs in there breaks the current ILS, because it's not expecting to find them. It turns out no; they were able to find places to put the URIs that seem appropriate from a MARC point of view and that don't break your current ILS. So we're copacetic; everything's good.

The next phase was figuring out how to actually do that, and I'm not going to go into the details. I would invite you to go online and look at the reports from this task group. Jackie Shieh at George Washington University and her colleagues did a tremendous amount of work; they basically did their entire catalog. They worked out systems that cover a lot of it automatically, where the computer can know with a very high degree of confidence that a match is good and no human even has to touch it; the rest falls into the other pile of stuff that needs a human touch. Realistically, for this part you have to have some IT staff, because you have to run the various scripts they have, and you have to devote a cataloger to the process to be the human touch. You'd have to look at the report, but I think they spent about six months converting their catalog, and that was fast for a test. My guess is it would take you longer than that to really go into production. So this is the most cost-heavy part of phase one.

When you finish that, you actually have all of your MARC set up for linked data. You've got your URIs for all the pivot points, and every new thing you do is capturing them.
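For the legacy batch work, here is a minimal sketch of the idea in Python with pymarc: look up each name heading, and when the label resolves unambiguously, write the URI into a $0 subfield. The $0 placement, the suggest endpoint, and the "unambiguous match" rule are simplifications of what the PCC task group actually worked out, so read their reports before doing this in production.

```python
from pymarc import MARCReader, MARCWriter
import requests

def lookup_uri(heading):
    """Return an id.loc.gov URI only when the heading matches exactly one suggestion."""
    resp = requests.get(
        "https://id.loc.gov/authorities/names/suggest/",
        params={"q": heading},
        timeout=10,
    )
    resp.raise_for_status()
    _, labels, _, uris = resp.json()
    matches = [u for label, u in zip(labels, uris) if label == heading]
    return matches[0] if len(matches) == 1 else None

with open("legacy.mrc", "rb") as infile, open("enriched.mrc", "wb") as outfile:
    writer = MARCWriter(outfile)
    for record in MARCReader(infile):
        for field in record.get_fields("100", "600", "700"):
            if field.get_subfields("0"):
                continue  # already carries an identifier
            uri = lookup_uri(field.format_field())
            if uri:
                field.add_subfield("0", uri)  # machine-confident: no human touch needed
            # otherwise the record falls into the pile that needs a cataloger's eyes
        writer.write(record)
    writer.close()
```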
So even though you're still working in MARC, you're totally set up for conversion. The last thing you need to do to actually function in that environment is to set up some export APIs that work with linked data. Remember that I can keep my MARC and my model can be different; my model doesn't have to be a triple store. A set of APIs that know how to receive linked data and know what to send out can do the translation. You can use APIs baked into your ILS, which, again, several ILSs are already experimenting with, or the other thing we've played with, and this really works even though it seems very inefficient, which is an API to an API. If your ILS doesn't have linked data support, it's still designed to receive a call and send you a certain amount of data; all you have to do is have another process that calls that API, reformulates the result as triples, and sends it out.

At the conclusion of this process you really will be functioning in a linked data universe, but at a very low level. You have minimally viable RDF that you can communicate to the rest of the world. Your catalogers are working in MARC, but they're capturing some linked data along the way. There's no net gain here in terms of the quality of your records or the efficiency of cataloging; you're doing everything the same way. The only gain is that you can talk to other libraries who are also moving in that direction, toward a BIBFRAME world. And that's not nothing; it's important. But for us it's only an iterative step. If you're a very small library without a lot of resources, this may actually be where you stop for some period of time, and you could still function within a universe that is really moving to full-blown linked data.
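That API-to-API wrapper can be quite thin. Here is a minimal Python sketch of the idea: a small web service that calls whatever the ILS already exposes and reformulates the response as triples. The ILS endpoint, its JSON shape, and the way the BIBFRAME properties are applied here are all assumptions for illustration, not the wrapper we actually deployed.

```python
from flask import Flask, Response
from rdflib import Graph, URIRef, Literal, Namespace
import requests

app = Flask(__name__)
BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

@app.route("/rdf/record/<record_id>")
def record_as_rdf(record_id):
    # Call whatever the ILS already exposes (endpoint and JSON shape assumed).
    ils = requests.get(f"https://ils.example.edu/api/bib/{record_id}", timeout=10).json()

    g = Graph()
    work = URIRef(f"https://library.example.edu/resource/{record_id}")
    g.add((work, BF.title, Literal(ils["title"])))         # property use simplified
    for uri in ils.get("contributor_uris", []):             # URIs captured in phase one
        g.add((work, BF.contribution, URIRef(uri)))

    return Response(g.serialize(format="turtle"), mimetype="text/turtle")

if __name__ == "__main__":
    app.run(port=5000)
```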
Phase two is designed to really move into that complete linked data ecosystem. Completion of phase one is crucial here, because we need the URIs already in the MARC to start the transition the way this roadmap works. The first step of phase two is to iteratively transition to completely linked-data-native cataloging. When I say iterative, the crucial part is this: we looked at it from various directions, but switching your entire cataloging effort over to linked data at one moment, say on Tuesday the 24th at 3 p.m., is very painful and prone to error. It's difficult to pull off at a training level, because you basically have to be training everybody at the same time, and then you throw a switch and hope your process is nailed. There's no real chance for learning; it either works or it doesn't, and then you're in a bad spot, because in that in-between time not a lot of work is getting done. A much better approach, however you have your metadata services organized, is that in this world we can move in chunks.

I'm going to talk about the technology in a second, but the idea is that I can take a group of four or five catalogers, or one cataloger who's been doing a particular kind of work, and say, okay, I'm going to transition them first, and the other people can stay in their MARC universe with the interface they're used to. It's far less disruptive, and it also allows you, when the first iteration goes badly and you learn what you learn, to apply that to the next iteration. It gets more efficient with each small incremental move.

As we're moving people over, what we're doing is moving them to a whole new interface, a whole new view for cataloging that is really designed for linked data, and that opens up a lot of different possibilities. The one I'm showing right here is the Library of Congress BIBFRAME editor. It has a lot of things in it that are modeled around this new BIBFRAME universe, the new data model. The very first thing you see is: is this an instance or a work? It has the work/instance model baked into it. But what this approach does, in order to ease the transition, and I would say to ease the anxiety, is use the RDA labels that people are familiar with. Catalogers see this and it's not so jarring or so different, it makes for an easy transition, and it will pump out perfectly good BIBFRAME. This is the online version; according to the website there is a next, BIBFRAME 2.0 version coming that will actually be downloadable software. Yes, okay, Sally's giving me the nod, so that is still happening. That will make it much more efficient: it would be hard, and you would probably cause a denial of service, if you tried to ping the online version with your whole catalog, but when you can bring it down and run it as your own software, it's a great interface.

Another one I'll show, which we've been experimenting with and which looks very similar, is BIBFRAME Scribe, developed by Zepheira, one of our partners. It is very similar but takes a slightly different user interface approach, in that it doesn't use any of the RDA labels. It just says, jump into this new world. A title is a title; it doesn't care what kind of title it is, because it's going to go out and pull variants of the title, ad infinitum, as they show up out of the linked data web. It tries to get you, as a cataloger, to throw away a lot of assumptions that may or may not be necessary in this new world. I like it, it's cool, but admittedly it's jarring to catalogers. So realistically we're probably in a migration space. I predict we ultimately get to this other kind of world, but starting with that Library of Congress interface that looks very familiar is a good idea.
We also experimented with some barcoding, really trying to improve efficiency. For copy cataloging we built a little phone app where you just scan an ISBN, and it goes out to OCLC and asks, what work is this? Through a series of pings between OCLC and the Library of Congress we're able to build the entire record on the fly, a lot of the time without any human intervention. If it gets to a point where it needs disambiguation, it just asks you which one of these it is; you hit a button and it keeps going. It's really easy, really efficient, and more of these kinds of new tools will really lead to efficiencies in cataloging.

Now, the thing I mentioned that makes this work, that lets us iteratively walk through, is that the new environment has to synchronize with your current ILS. That's the complicated part. Setting up one of these workbenches doesn't take a lot of time and works well; the synchronization is the part that's expensive in phase two. The model we have, and I'm not going to talk through the whole thing, and my laser pointer doesn't work on these screens so it's hard to walk you through it, is basically a series of connectors. When we're working in a linked data cataloging interface, we're pushing to a triple store, and then we have processes that watch that triple store and push a MARC version of each record into the regular ILS via an API. By the same token, we built things that watch the ILS and push triples back to the triple store, which we can do because we're putting URIs in our MARC. They're thin records, but it works, and it keeps the two sides in sync. That's how we can move one group at a time and everything stays cool; there's no disruption of service in that model.
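One direction of that bridge can be fairly small. Here is a minimal Python sketch of a process that polls the triple store for newly described resources and pushes a thin MARC record into the ILS. The SPARQL endpoint, the query shape, the ILS import API, and the exact MARC placement of the URI are all assumptions for illustration, not the connectors we actually run.

```python
import time
import requests
from SPARQLWrapper import SPARQLWrapper, JSON
from pymarc import Record, Field, Subfield

SPARQL_ENDPOINT = "https://triplestore.example.edu/sparql"   # assumed
ILS_IMPORT_API = "https://ils.example.edu/api/import-marc"   # assumed

def new_resources(since):
    """Ask the triple store for resources described since the last poll."""
    sparql = SPARQLWrapper(SPARQL_ENDPOINT)
    sparql.setQuery(f"""
        PREFIX bf:   <http://id.loc.gov/ontologies/bibframe/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX dct:  <http://purl.org/dc/terms/>
        PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>
        SELECT ?res ?title WHERE {{
            ?res bf:title/rdfs:label ?title ;
                 dct:created ?created .
            FILTER (?created > "{since}"^^xsd:dateTime)
        }}""")
    sparql.setReturnFormat(JSON)
    return sparql.query().convert()["results"]["bindings"]

def thin_marc(uri, title):
    """Build a thin MARC record that carries the resource URI so it can round-trip."""
    rec = Record()
    rec.add_field(Field(tag="245", indicators=["0", "0"],
                        subfields=[Subfield("a", title)]))
    rec.add_field(Field(tag="758", indicators=[" ", " "],
                        subfields=[Subfield("1", uri)]))  # URI placement simplified
    return rec

last_poll = "1970-01-01T00:00:00Z"
while True:
    now = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    for row in new_resources(last_poll):
        rec = thin_marc(row["res"]["value"], row["title"]["value"])
        requests.post(ILS_IMPORT_API, data=rec.as_marc(),
                      headers={"Content-Type": "application/marc"}, timeout=30)
    last_poll = now
    time.sleep(300)  # poll every five minutes
```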
The second thing we have to do here is batch convert, the same as in the last phase; we have to run processes to convert. What's my time looking like? I'm good. So I'm going to talk about a couple of different tools for the batch conversion. The key factor is that we already have what we need to do a basic batch convert, because we have our URIs, so we can start moving legacy stuff over into triples.

First, there is the Library of Congress transformation tool. Right now, like the editor, it's an online version you can test, and it is also coming out as a standalone version for BIBFRAME 2.0 that you can run locally. You can use this service to transform your existing records, and, not surprisingly, it will give you really good BIBFRAME. It's a very good option if that's your complete linked data universe. I haven't had a chance to play around with it enough to know what would happen if I wanted to throw in other namespaces, say CIDOC CRM, to describe a different kind of physicality about things. So that's one option.

The second option is MarcEdit, which most people are familiar with if you work in cataloging: its MARCNext capability. It has a very cool ability to do transformations. It reads in the MARC and then uses XSLT, for those who know what that is, so you have standalone files that you write; you don't have to touch the program in any way. The file basically says take this, put it there, and convert the record how you want it to be. It's very flexible because of that, because you can write multiple transformations: this XSLT turns it into BIBFRAME, that one turns it into something else. You could have a thousand of them, and Terry Reese has set up a communal Git repository for those transformations so they can be shared; there can be community work on them. I don't have to build a specific transformation on my own if somebody else already has; I can just use yours, and we can share them. So it's highly flexible and conducive to the community working together. The downside is that you really have to be in a Windows universe for it to work well. I use it all the time, and there are versions for other platforms; I use it on my Mac, but it is buggy there, and I don't blame the product for that. Terry is building this, and Windows is its world, so fair enough, but you should know that going in.

A similar product, but one that isn't bound to an operating system, is eXtensible Catalog. It works as a web service, so it runs under a servlet container; you can run it under Tomcat or Jetty. You set it up, and it has an admin interface that's a web page. You go in and point it at an existing catalog; it will make an OAI-PMH connection to your ILS, or it will read MARCXML. You set up that call and tell it to pull every day, or every month, whatever you want, and then it uses separate JavaScript files, just like the XSLTs in MarcEdit, where you define your transformations. You say, okay, check this every day, and any time there's something new or changed, run the transformation and kick out the triples on the other side. It could transform to anything, but in our case we use it to produce triples. What's nice is that it synchronizes in that way, because it's continuously polling on your schedule, so it handles updates and changes that might happen on the other end.

The downside to this product is disk space: it takes a lot of it. You basically end up with at least three separate versions of your catalog. You've got your original version in the ILS; when it pulls through OAI-PMH or MARCXML it saves a whole separate version in SQL, which is how it does its transactional processing of what's changed and what hasn't; and then you have the triples version you put out. That can turn into a lot of data for a full library catalog. You also have to have a much higher level of technical expertise in-house to run this; it takes a server administrator to put it in place and make it work. But you kind of set it up and then you can walk away, and the flexibility is cool.
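Both of those tools lean on the same idea: standalone transformation files you can write and share without touching the program itself. Here is a minimal Python sketch of that pattern using lxml; the file paths are placeholders, and whichever MARCXML-to-BIBFRAME stylesheet you pull from a shared repository (or write yourself) determines what the output RDF/XML actually looks like.

```python
from lxml import etree

# Paths are placeholders: any MARCXML-to-BIBFRAME XSLT from a shared
# repository (or your own) can be dropped in without touching this script.
transform = etree.XSLT(etree.parse("marc2bibframe.xsl"))
marcxml = etree.parse("legacy_records.xml")

rdf_xml = transform(marcxml)
with open("legacy_records.rdf", "wb") as out:
    out.write(etree.tostring(rdf_xml, pretty_print=True,
                             xml_declaration=True, encoding="UTF-8"))
```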
You also could write homegrown scripts. In my opinion there are very, very few libraries for whom this would be a good option. Barrier number one: you have to have people who can write them, and you're essentially starting from scratch for all your transformations. But if you have in-house expertise and you want to pump out data a lot of different ways, or do different kinds of cleaning while it's on its way out, that's when I would do this, if you really have some souped-up custom stuff you want to do. Otherwise, we have tools, and I wouldn't want you to spend a lot of time building your own and going your own road here.

Vendor services: I want to ask where all the vendors have gone. It's not really that they've left; it's that we're new to this space. But this seems to me like a ripe area for a vendor service. We already share how we get catalog records, through WorldCat and OCLC in general, so there's already a system set up where someone else has a record and we get a copy of it. It seems to me we could do the same thing here, so that we're not all converting the same bibliographic record 3,000 times. Or someone could literally just handle the batch conversion as a service. Zepheira has done that for several libraries, where they go in and handle the entire batch conversion, so there's a service model there too. On this one I don't want to say I predict; I hope. I hope there are vendor people in the audience, and I hope you see the value of the business model of this service, because it would help us immensely in bringing the whole community over, not just those of us with in-house tech departments that can handle the more complicated stuff on our own.

Step three of the conversion is another iterative loop, and this is the iteration where we deal with all of the systems that talk to our cataloging data. This could run simultaneously with iterating through your catalogers. The idea here: this is UC Davis, and we drew up some 40 different systems that in some form or another touch our catalog and that data; they have to know what's there. That's a lot of stuff to move over, and again, if you tried to do it all at one time it would be disastrous. It also wouldn't be cost-effective, because you'd have to bring in a whole bunch of short-term staff to try to get everything done and moved over, and that doesn't make any sense. So again, with our iterative model, this hybrid where we're talking to both universes, you can do this one system at a time. When one of them has a problem, when it doesn't work as you've built new connectors for it, you haven't lost a lot; you can roll back that one system much more easily than you could roll back another 39 or 40, and you can learn something from the first one that you apply to the next, so you get better and more efficient with each one. This iterative plan really is the way to go.

And because we share lots of these systems in common, other people are using them too, we have an opportunity to publish the transformation services we come up with, many of which will initially just be connectors. One system is expecting to get data a certain way, and now my data looks different, so I just have to create a go-between, a little API that delivers what the system wants, and vice versa. We can share those; there's no need for us all to build them, and I'm hoping we will do that.
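Here is a minimal Python sketch of what one of those go-between connectors can look like: a downstream system keeps asking for the flat JSON it has always consumed, and a small shim answers by querying the triple store. The SPARQL endpoint, the URI pattern, and the downstream field names are assumptions for illustration.

```python
from flask import Flask, jsonify
from SPARQLWrapper import SPARQLWrapper, JSON

app = Flask(__name__)
SPARQL_ENDPOINT = "https://triplestore.example.edu/sparql"  # assumed

@app.route("/legacy/item/<item_id>")
def legacy_item(item_id):
    """Answer in the flat shape the downstream system has always expected."""
    sparql = SPARQLWrapper(SPARQL_ENDPOINT)
    sparql.setQuery(f"""
        PREFIX bf:   <http://id.loc.gov/ontologies/bibframe/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?title ?creator WHERE {{
            <https://library.example.edu/resource/{item_id}>
                bf:title/rdfs:label ?title ;
                bf:contribution/bf:agent/rdfs:label ?creator .
        }}""")
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    if not rows:
        return jsonify({"error": "not found"}), 404
    return jsonify({
        "id": item_id,                       # same keys the old system consumed
        "title": rows[0]["title"]["value"],
        "author": rows[0]["creator"]["value"],
    })

if __name__ == "__main__":
    app.run(port=5001)
```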
Timing-wise, let me go to the next slide and talk about timing. Once you've finished all that, you've converted everything over, your cataloging is now fully linked data, and all your systems are talking to your triple store, you're good to go. All you have to do is turn the old side off. Shut it down and you're working, and you know you're working because you've already been working in it. That final step, which used to be, or could be, a terrifying one, is not so terrifying, because you've had several months, probably a year, of time working in the new environment, so you can feel safe and good about turning the old one off; it's not a major point of disruption.

Now, phase two obviously relies on a lot of things that have to get written, technologies that don't quite exist yet, and this is the part where I put in a pitch again. I really do have faith and confidence that enough people are working in this area. Even at Davis, we're two years from phase two, best-case scenario, and we're highly motivated; maybe one year, depending on how motivated I can get MacKenzie, but we have a limited budget, so we can only move so fast. But there is time, and as we're involved in phase one and functioning there, there's every reason to expect that a variety of systems, both open source and commercial, will come online to support phase two. If you build it, they will come: if we're all operating at that data level, I really do have a high level of faith that that will happen.

There are a couple of things I'm going to talk about, what's my time, okay, just because they are the pain points in this that everybody asks about. I'm not going to get into big detail about them other than to acknowledge that they're there and that there are potential solutions. One of the big ones is the discovery layer. We keep seeing these kinds of graphs, however they get developed, but that's not a full-fledged linked data discovery layer, and there was a panel earlier today on moving into new types of discovery. I think a lot of work still needs to be done there: how do we really formulate a discovery layer that capitalizes on this? But again, I think there's time; we've got multiple years before we really get to the point where we have to build and test things. Right now your options are Blacklight on Lucene and Solr, which will index the triples; that's a good option, and a lot of people who do linked data use it. There's also a close cousin of the Blacklight product called Collex. That was a Mellon-funded project for many years, and we actually just last night, at about three in the morning, rolled out the latest working iteration that Mellon has been funding, which is totally triple-store based. It brings a lot of different faceted kinds of browsing; it's different from Blacklight, not really better, just a different approach to the information universe. But it's open-source software, it's available, and you could use it, so I think we'll get there on discovery.
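For a sense of what "indexing the triples" means in practice, here is a minimal Python sketch that walks a triple store with a SPARQL query and pushes flat documents into a Solr core that a Blacklight-style front end could sit on. The endpoint URLs, the query, and the field names are assumptions for illustration.

```python
import pysolr
from SPARQLWrapper import SPARQLWrapper, JSON

SPARQL_ENDPOINT = "https://triplestore.example.edu/sparql"            # assumed
solr = pysolr.Solr("http://localhost:8983/solr/catalog", timeout=30)  # assumed core

sparql = SPARQLWrapper(SPARQL_ENDPOINT)
sparql.setQuery("""
    PREFIX bf:   <http://id.loc.gov/ontologies/bibframe/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?work ?title ?agent WHERE {
        ?work a bf:Work ;
              bf:title/rdfs:label ?title .
        OPTIONAL { ?work bf:contribution/bf:agent/rdfs:label ?agent }
    }""")
sparql.setReturnFormat(JSON)

# Flatten each work into the kind of document Solr (and Blacklight) expects.
docs = []
for row in sparql.query().convert()["results"]["bindings"]:
    docs.append({
        "id": row["work"]["value"],          # the URI doubles as the document key
        "title_t": row["title"]["value"],
        "author_t": row.get("agent", {}).get("value", ""),
    })

solr.add(docs)
solr.commit()
```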
The last thing I'm going to mention is the big authority control question, which comes up regularly, and it comes up for good reason, because it has to change: the way we do authority control now just doesn't work in an environment where people are minting triples, writing triples, and potentially minting their own URIs as part of that process. So what happens here? Just as an example, this is the linked data workbench I showed before. Let's say I'm typing Shakespeare in; I find it, it's in the Library of Congress, that's great, I select it, and I've got a URI. The problem is when you get no result: there's nothing there. I have two choices. One is to just stop cataloging and put the item on a shelf; we don't want to do that, it's very inefficient, and it doesn't let us capitalize on this linked data universe. So what most people agree on is that we will mint local URIs. Some people think they will mint a local URI no matter what, for every entity, and then just connect it to the Library of Congress or VIAF URI or wherever it comes from. So we need systems to manage this authority control process, and one of the models emerging is that at some point, whether it's commercial or otherwise, there is a reconciliation process, a third party in the middle. We're all minting our own URIs from different sources, but we're contributing to an aggregation, a reconciliation process, somewhere. This is what VIAF already does: it says there's this version of the name and that one, those people have said it's this, and these are all the same thing. We're doing the same thing, just doing it with URIs. Cornell University right now has an IMLS grant looking specifically at this problem; there are a lot of really smart people working on it, this is very much a solvable problem, and out of that work will come a solution. So it's not as scary as people think it is, and the good news is that, like a lot of linked data, that solution will happen mostly behind the scenes. It's not something your catalogers are really going to have to deal with, unless they actually are a reconciliation cataloger, which could become a whole job. So it is being handled; it's there.
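As a small illustration of that flow, here is a minimal Python sketch: try the shared authority first, and when nothing comes back, mint a local URI and log it so a downstream reconciliation process, whether that turns out to be a vendor service or something like the Cornell work, can map it to a shared identifier later. The suggest endpoint, the local URI pattern, and the log format are all assumptions.

```python
import csv
import uuid
import requests

LOCAL_BASE = "https://library.example.edu/authority/"   # assumed local URI pattern

def resolve_or_mint(heading):
    """Prefer a shared URI from id.loc.gov; otherwise mint a local one."""
    resp = requests.get("https://id.loc.gov/authorities/names/suggest/",
                        params={"q": heading}, timeout=10)
    resp.raise_for_status()
    _, labels, _, uris = resp.json()
    if uris:
        return uris[0], "shared"

    local_uri = LOCAL_BASE + str(uuid.uuid4())
    with open("reconciliation_queue.csv", "a", newline="") as queue:
        # A reconciliation process (human or machine) picks these up later
        # and asserts sameness links once a shared URI appears.
        csv.writer(queue).writerow([local_uri, heading])
    return local_uri, "local"

uri, source = resolve_or_mint("Some Brand-New Author, 1990-")
print(source, uri)
```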
That pretty much closes things out. I would encourage you to keep track of the BIBFLOW blog. We will put the entire roadmap up there; it will probably be around the new year, because I have to convert it from its printed form to a bloggy form. At least from my perspective, our IMLS funding for this is ending, but I definitely consider this a living roadmap, and as we march down this road into implementation we will continue to update it as needed so that other people can learn from what we're doing. So with that, I'm going to say thank you, and we'll take questions if there are any.

Thank you, Carl, that was great. Before I turn it over to questions, I just want to make a few closing remarks. One is that a lot of people say, well, so we can do this, but why would we bother? I have to remind you that a lot of libraries are investing a lot of money in the systems we have today, the workflows and the staff, and again, we're not achieving the real benefit of the value of the data that we're creating so expensively. This is a way out. It's a way to inform the vendor community about what we need going forward. It's a way to enable the next-generation discovery systems you've been hearing about. And it's a way to become much more efficient and effective, so that we can free up resources to do new things, like cataloging data sets and other work we're just unable to get to now because of these pretty antiquated workflows.

What is inspiring to me is that there is now a way we can see how, over a few years, you can get from A to B. So even though we're down in the weeds talking about technology that we all find very boring, because it's really old and crusty, like MARC, this is really important. It is the way we get out of the situation we're in now and work with the vendor community, and with each other, to move forward. We would love to hear your questions about this, and we encourage you to approach us about how we can work together as a community to make this happen. Thank you.