Thank you, everybody, we're going to get started. I just want to thank everyone for coming and joining us today. We're here to talk about migrations: their impact and importance within our institutions and within libraries. I have three wonderful colleagues presenting with me today. My name is Erin Griffith and I'm the Fedora program manager. Joining me today we have Seth Shaw, Julia Corrin, and Kate Dohe, and they're going to share a little bit about their experiences migrating their different repositories. They have three different stories to share about what they've done at their institutions, the process, and the lessons learned. What we realized when we came together to plan this talk was that no matter what the system is, what the repository is, what's in it, or what kind of migration you're doing, there are very similar lessons to be learned across these processes, and some key takeaways from all of these stories that are really critical to consider when thinking about long-term preservation goals. Migrations are essential for maintaining and keeping up with modern technology within our institutions. They are inevitable. So what we do to ensure we've taken all the appropriate considerations into account when the time comes to actually migrate the content is really important. Our hope is that these stories and lessons learned can provide some insight into the impacts of migrations at all levels, from the staff all the way down to the end user. You may wonder why I'm here, besides thinking this is a really important topic of conversation and roping these lovely folks into coming to talk today. I'm really here on behalf of community members and as an advocate for open source repository software. This software is here to protect, preserve, and provide access to digital collections all over the world.
Content in repositories like Fedora is at risk if we don't make appropriate and conscious decisions to include migrations as part of our planning processes. And it involves everyone, not just developers or the folks working directly with the repositories, so we want to make sure everybody understands the impact of this. Right now at Fedora we're trying to get our community and our institutions to understand the importance of moving off of old technology, getting their content into something that will meet the preservation and access needs of their users and institutions, and understanding the risk of what could be lost if we don't. It doesn't happen overnight; we know it's a long conversation, the work is hard, and it all needs to be carefully planned and considered, but it's really important. There is light at the end of the tunnel, and I think these individuals are going to help shed some of that light on how to tackle this battle. Speaking first is Seth Shaw, and Seth comes from Arizona State University. Seth is an archivist and a developer who currently works as a digital library software engineer. He's active in the Islandora community as a core committer and serves on the Technical Advisory Group. He also frequently presents and teaches workshops on managing electronic records and digital library software. Next we're going to hear from Julia Corrin. Julia is the university archivist at Carnegie Mellon University. In addition to working with the archives' physical collections, she also serves as the team lead for the library's digital collections program; in her spare time she enjoys curating exhibits. And then we're going to wrap things up.
We're going to hear from Kate Dohe. Kate is the director of digital programs and initiatives at the University of Maryland Libraries. She leads the library's digital platforms and digital content lifecycle programs; her portfolio spans digital collections, digital preservation, digital repository application management, and web and discovery services. So with that I'm going to hand it off to Seth. Thank you.

I'm actually going to tell less of a story and instead continue this groundwork of what we're talking about with migrations, and advocate a little bit more for maintenance. When we talk about migrations in a very general sense, we're talking about replacing system components, right? But it might not be all of them; we might be migrating subcomponents of these larger systems, and we need to pay attention to that. So we're replacing system components while retaining the critical pieces: not only the content, but also things like business rules (what are your workflows?) and constraints on the system, like permissions, that we have to deal with. But also the fundamental user experience. It's no longer a digital asset repository if you can't view the object, so you have to preserve some aspects of the system over time, not just the stuff. When we do this, most of the time we think about complete system migrations. They're like moving house: you're taking all your stuff and putting it in a new living space. But there are also instances where we do smaller migrations. Think of this as the difference between moving houses (or moving apartments) and remodeling your kitchen. You're taking a part of the space your materials are living in and fundamentally changing it. I don't know if you've ever remodeled a kitchen, but there are a few days where you are not cooking in that kitchen, because you're remodeling.
And there is transition and planning time that has to be incorporated into these migrations that sometimes gets overlooked. When we say, OK, we need to update this system component, how long is it going to take? Your developers tell you it's going to take a few months. You need to put planning processes around this just as much as around complete system migrations. So, again, the generalized steps from these large system migrations; you've probably heard these before. You've got to get the content out of the existing system. I have yet to see a migration from one system to another where you could just copy and paste data across. I have never seen one; if you've heard of one, let me know, because they don't exist. You have to get the content out. More often than not, there's metadata remapping to a completely different data structure. And ideally, since we're already moving it, we might as well clean it up as we go, right? So you have this metadata remediation process, which means bringing in additional librarians to figure out which fields map to which ones and how we're going to restructure things. There's also going to be software localization. I don't care if you're getting the system from a vendor or from an open source project: you're probably at least going to update the branding. You're at least going to paint the walls of the new house, hang your own family picture on the wall. The same thing is true here, but more often than not, as I mentioned, those business rules, the workflows you had, the constraints you had on the system, often involve local customizations or retraining staff on what the new workflows need to be in the new system. Again, a lot of work and coordination, either building it out or restructuring workflows for your staff. But you also have to have the hardware provisioning in there as well.
And I have a slide that's going to touch on this at the end, so we'll come back to the implications of this. But also loading it in: again, this is not a copy and paste operation. You have to plan accordingly. I think this is one of the reasons why these complete system migrations often take longer than originally anticipated: someone forgot one of these steps and how long it would take. And my screen is dimming on this. So, examples. I've been through a number of these, and the first two are the ones I have intimate knowledge of. The first was at the University of Nevada, Las Vegas, where I was most recently employed before ASU. We were doing a CONTENTdm to Islandora migration, and we went through all these steps, and that migration took a few years. It took longer than we expected because, again, we didn't adequately account for some of the staff time requirements that were necessary; granted, COVID threw a bit of a wrench into that whole process as well. But mostly we didn't adequately account for the time it would take for all the metadata remediation. That was one of the pieces that drew out longer than we anticipated, in part because of staffing changes along the way. Arizona State University started doing this before I showed up; I came in at the tail end, but they went through the exact same process, moving from a locally built custom system to an Islandora one. Again, these are large-scale migrations of complete systems. These are common; we've heard about these projects for years and years and years, and you'll hear about more of them this morning. We're fairly familiar with these steps and their implications. What I don't think we pay enough attention to, at least in conference presentations, are the smaller-scale component migrations, what I'd call major upgrades of system components.
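The generalized steps Seth walks through (export, remap, remediate, load) can be sketched as a tiny pipeline. This is a toy illustration with invented field names, not any particular system's real API; real migrations wrap each stage in validation, logging, and human review.

```python
# Toy sketch of the export -> remap -> remediate -> load stages.
# Field names and records here are hypothetical examples.

def remap(record: dict, field_map: dict) -> dict:
    """Rename source fields to the target schema; unmapped fields are dropped
    (in practice, dropped fields should be reported, not silently lost)."""
    return {field_map[k]: v for k, v in record.items() if k in field_map}

def remediate(record: dict) -> dict:
    """Trivial cleanup pass (strip whitespace); real remediation is far larger."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

def migrate(exported_records: list, field_map: dict) -> list:
    """Run every exported record through remap and remediate before loading."""
    return [remediate(remap(rec, field_map)) for rec in exported_records]

# Example: Dublin Core-ish source fields mapped to a flat target schema.
source = [{"dc.title": "  Annual Report ", "dc.creator": "Smith, J.", "local.note": "x"}]
fmap = {"dc.title": "title", "dc.creator": "creator"}
print(migrate(source, fmap))  # the unmapped local.note field is dropped
```

The point of the sketch is that every stage is a place where time gets underestimated: the `remediate` function here is one line, but at a real institution it is months of librarian labor.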
So different components will have different degrees of significance in what it takes to update them. I'm not talking about the small patch, you know, there's a bug, we patch it, and the system continues on its way. But sometimes we have components where we have to go through this exact same process, even though it's not the same rebranding we had before. More often than not this is due to the hardware infrastructure it's sitting on or an underlying data structure. So, for example, the Fedora 4/5 to 6 shift. If you had a Fedora 4 or 5 repository and you wanted to upgrade to Fedora 6, they completely changed the underlying data layer, from something referred to as ModeShape, plus some other bits, to the new Oxford Common File Layout (OCFL). What this requires is a complete export of your repository data and then a complete re-import. Granted, the surface of the repository doesn't have to change: all the upper-layer features, all the searching and browsing and whatever you have on top of it, all the business rules, all of those stay the same. And so it doesn't always get the attention I think is necessary, because what most people see stays consistent, but what's underneath changes significantly and needs the appropriate amount of attention. At UNLV, when I was doing this shift for them, the interface on top didn't change, but it still took us a month or two to make sure we had all the components swapped out properly. Users didn't notice a thing, but it was a significant effort for us. We're doing a completely different type of transition at ASU right now, and it still takes the same sort of planning. We're doing a hardware split: still using Amazon Web Services, but shifting how things are architected, going from one particular type of technology to another. I won't bore you with the details here.
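For context on that data-layer change: an OCFL object is just content files on disk plus a JSON inventory that maps digests to paths, version by version. The sketch below builds a simplified inventory following the shape of the OCFL 1.0 spec; the object id, path, and file content are invented for the example, and a real implementation must satisfy the full spec, not just this subset.

```python
# Simplified sketch of an OCFL object's inventory.json (OCFL 1.0 shape).
# Identifiers and content are hypothetical.
import hashlib
import json

content = b"page-001 image bytes"          # stand-in for a TIFF's bytes
digest = hashlib.sha512(content).hexdigest()

inventory = {
    "id": "info:example/obj-1",            # hypothetical object identifier
    "type": "https://ocfl.io/1.0/spec/#inventory",
    "digestAlgorithm": "sha512",
    "head": "v1",
    # manifest: digest -> content paths within the object root
    "manifest": {digest: ["v1/content/page-001.tif"]},
    "versions": {
        "v1": {
            "created": "2023-01-01T00:00:00Z",
            "message": "initial ingest",
            # state: digest -> logical filenames in this version
            "state": {digest: ["page-001.tif"]},
        }
    },
}
print(json.dumps(inventory, indent=2)[:120])
```

Because the layout is plain files plus JSON, a front end can in principle be swapped out above it without rewriting the stored objects, which is exactly the appeal Seth describes.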
But if you want to talk details, we can chat; these need the same care and attention. Now, I thought I might be over time, but actually I may not be, so I'm doing really well. I wanted to talk about this idea of continuity of service when you're doing a migration. When migrating between these significant pieces, you can either take the system offline to rebuild it in place, and downtime is really uncomfortable, because these are often flagship pieces where you can't tell a researcher, no, you can't have access to this digital content for a few months while we replace the system. That's a no-go. But what's the alternative? You have to create a redundant copy of your data, and that has financial impacts: you're creating an exact copy of the data to transform as you go. It might be temporary, so you need to find temporary space to put it, but it's still a cost center that needs to be addressed. And this question grows in significance the bigger your data corpus, for two reasons: one, the bigger the corpus, the more disk space and the more expensive the overall bill; but also, the longer it's going to take, because it takes time to rewrite bits from one structure into a new one. So you have to keep in mind that the larger the corpus grows, the longer the migration takes and the more expensive it gets, and you have to be careful figuring out those trade-offs. At this point I actually want to plug the lightning talk that happened yesterday on the Oxford Common File Layout, because I think it has the potential to address this issue in a significant way. It's relatively new, and from what I've seen there aren't a lot of implementations yet that have been running long enough to need a surface-level migration. But this is one of those cases where, if we can get our front-end repositories to actually work well with OCFL,
it's going to be a lot easier to do these migrations, because you can keep that baseline data structure and just replace what sits on top of it. So I am hopeful that will help address some of these things moving forward, and I think it's a good investment. I've gone awfully fast, so hopefully there will be time for questions later, and I will let my co-presenters continue with their own projects.

Thanks, Seth. Seth actually teed me up very nicely for a couple of things I'm going to dig into: particularly, what happens when you forget one of those things when you're planning your migration, and how that sends things sideways. And also, when you're moving, you clean the windows, but sometimes you just kind of smear the smudges around; what happens when you've done that four or five times? So, hi, I'm Julia, I'm at Carnegie Mellon, and I'm going to take more of a case study approach to some of the issues we encountered during a recent migration from a proprietary content services platform to Islandora 2.0. I particularly want to focus on the concept of object-based technical debt and the cascading impacts it can have on migrations and application development. I first want to start by taking a quick look at our institutional history with digitization. Carnegie Mellon was a really early entrant into the digital collections landscape, I think particularly in regard to complex archival collections. Our oldest collection, the H. John Heinz III papers, was digitized in 1994. So we're looking at 25 years of digital collections, and that first collection came in at a whopping 292,000 documents, which I think was quite a lot that early on. Since then we've done three total system migrations. Our first two systems were built entirely in-house, our third system was vended, and now we are using a highly customized version of Islandora 2.0.
So if your collections are younger than ours, I may be kind of waving at you from ten years in your future. Why specifically talk about the object component of migrations? In part because the objects frequently have very little to do with the business reasons for a migration. Even when the issues spurring a migration are object-centered, I think we still often blame the system, and not the underlying data, for the problems. For example, the reasons for our recent migration were almost entirely interface-focused. We disliked the user experience of our system, and we wanted a number of enhanced features that weren't possible in a vended ecosystem. But we didn't really acknowledge the role that poor metadata and data structures were playing in the overall poor user experience. We weren't thinking about the object part of the migration; we were just saying, let's have a better, more modern front end. So while our reasons for migrations are often system-focused, the challenges during migration are actually often object-focused. At CMU we certainly encountered plenty of development-related challenges during our migration, but we spent significant time addressing issues that had been accumulating for over 25 years, or I guess just under 25 years, with our objects. And again, we did not account for the amount of time we were going to have to spend dealing with that technical debt. Object-related challenges frequently fall into two categories: metadata and file management. The metadata challenges during migration are very well-worn territory, so I'm not going to dig into those further. But file management and file care challenges, and even just the movement of files that Seth was talking about, are things we engage with less frequently, and those can present some of the biggest barriers to running a smooth migration.
And I think it's also really important to acknowledge that object-based issues can be the most pernicious, because they are often compounding. The concept of technical debt is probably familiar to most people in this room; it's frequently present in software development literature. But technical debt in a GLAM context bears some really fundamental differences, thanks to the concept of perpetuity. We can leave systems behind. We can declare bankruptcy and erase some of that debt. But the objects remain; we carry those objects forward into the future. It's part of our remit as institutions that we are preserving these things in perpetuity, and we've invested a lot of money in them. Because we carry those objects forward with us from system to system, unaddressed technical debt is carried forward as well, just accruing and accruing. During a migration we not only carry it forward, we can often grow that technical debt if we're not being careful. That's the case where we tried to clean the window but really just smudged it, and now, moving forward, it's harder to see what was on the other side to begin with. This is a very abstracted view, but somewhat accurate for how the technical debt on our objects specifically snowballed over time. Over the course of 25 years, not only did the number of objects carrying debt increase, so the number of items we were going to have to remediate, but also the level of debt they carried. Until the situation became critical, it was like a Xerox of a Xerox of a Xerox, until you can't see the original image and you have to go back and almost start over. This is another point that is probably obvious, but I think it's still worth mentioning: the number of objects or documents in a collection doesn't represent the number of items you actually need to migrate, preserve, and maintain.
We need to track the metadata, each page, multiple derivatives, and preservation information. For example, take Shakespeare's Third Folio, which is an item in our collection: migrating that object alone means moving over a thousand files. So you might have 75,000 objects in your repository, but that could mean close to a million files that you are maintaining over time, and we have to acknowledge that. This panel is about sharing migration experiences to build community knowledge, so I want to engage a little with how object-based technical debt affected our specific migration. Fundamentally, we realized mid-migration that we did not have a canonical list of what was actually supposed to be in our repository. We did not know what we had, and we did not really know everything we were trying to migrate. We had originally planned to use the exports from our previous system to guide our migration, but discovered that those exports weren't accurate. Documents that were visible in the system were not always included in the metadata export, and some objects we knew had been there had gone missing entirely: the metadata was gone, and we couldn't find the files. In some cases we also discovered that objects had been missed during previous migrations, a decade or more prior. So that was really fun. This issue was relatively straightforward to address for serials, because you could look at the numbering, see where an issue was missing, and go find it. But it was almost impossible for complex archival collections, particularly when, over the course of 25 years, items had been intentionally removed because of issues related to copyright or PII. Going back to the original files from 25 years ago, we might be reintroducing things we had intentionally removed from the collection.
And because we were in a vended system, there wasn't really a way for us to dig in and determine just what the heck was happening. Our vended system also relied on PDFs as service copies, which we needed to move away from so that we could implement some of our desired features: Mirador, page turning, better-quality images. That meant we had to locate our master files, many of which were on tape backup. Now we had too many master files and no easy way to tell which one was the correct one, because we didn't have checksums or voting systems in place. Even our preservation copies were no help, because we had made the choice to ingest PDFs as our preservation copies to save money. I don't recommend doing that. And finally, we were unable to confirm the completeness of some of our documents, as we discovered multiple cases where the number of TIFFs or JPEGs did not match the number of pages in the PDF. Many of these issues required, and a year in are still requiring, manual intervention to address, which is not fun when you're talking about 400,000 objects and several million pages. We didn't know why the number of pages didn't match. Did a page get lost? Did we skip a file during initial scanning and add it at a later stage? Was a page reoriented and added twice to facilitate OCR? You have to actually go look at the document to figure that out. So where are we today? It's almost exactly a year since we launched our repository. We were able to use the migration as a chance to address issues with our metadata, which was amazing. We were also able to centralize our master files, which will hopefully reduce some of the pain points the next time we migrate. Because there is always, always, always a next migration coming: as soon as you finish one, and migrations are truly never done, it's time to start planning the next one.
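The two gaps just described, duplicate master candidates with no checksums and page counts that don't match, are the kind of thing a small audit script can at least surface before a migration starts. A minimal sketch, assuming a hypothetical directory layout and hand-supplied counts; real audits would also check derivatives and preservation metadata.

```python
# Minimal pre-migration audit sketch: group candidate masters by digest,
# and flag objects whose TIFF count disagrees with the PDF page count.
import hashlib
from collections import defaultdict
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks (tape-sized files included)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicate_masters(paths):
    """Group candidate master files by digest; files with the same digest are
    byte-identical, so the 'which copy is right?' question answers itself."""
    by_digest = defaultdict(list)
    for p in paths:
        by_digest[sha256(p)].append(p)
    return by_digest

def page_count_mismatches(objects):
    """objects: {object_id: (tiff_count, pdf_page_count)}; return ids to review.
    A mismatch still needs a human to decide *why* the counts differ."""
    return [oid for oid, (tiffs, pages) in objects.items() if tiffs != pages]

# Hypothetical counts for two objects; obj-2 needs eyes on it.
print(page_count_mismatches({"obj-1": (10, 10), "obj-2": (9, 10)}))
```

The script can only flag problems; as the talk notes, deciding whether a missing page was lost, skipped, or intentionally removed still takes manual inspection of the object.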
However, a year out, we're still working to complete the migration of our oldest collection, which inevitably had the most challenges. And we're also starting to migrate a second repository, our Luna instance, into Islandora. So why does this matter? Why should you care that migrations require migrating objects? Fundamentally, we spent most of our migration bandwidth addressing core issues with technical debt, not the actual business goals of our migration. We lost time that could have been spent on feature development to triaging object-based technical debt. It not only extended our migration timeline, it affected our ability to deliver on the charge we were given. And that is it from CMU, and on to Kate.

First, I want to thank both Seth and Julia, as well as Erin for organizing the panel; that takes a lot of work and labor. This is our decision tree, or at least the one I keep in my office, about our Fedora 2 repository. Today I'm going to tell you a little bit about our Avalon migration process, what worked and what didn't. But before I get into that, I'll say that, first and foremost, I consider migrations to be as much a social problem as a technical problem. If what you were expecting from me today is a deep dive into a lot of the technical solutions we came up with, that's really not what I can offer; I have people on my team who certainly can, and probably will, but I have to bring something different to the table. So, the University of Maryland. I'm going to tell a little bit of a story, which is that our digital collections program began in 2005.
We began to expand at large scale over the course of the late 2000s and into the teens with a wide variety of projects, including mass digitization efforts from our Prange collection of books published in post-war Japan, as well as a wide variety of audio-visual collections, including the Jim Henson works, the IA Films at UM program that we have, and a wide variety of public television and public broadcasting materials. Our digital AV solution in that period wound up storing our video assets in a commercial system called ShareStream, and then linking to that from our aging Fedora 2 repository via metadata. However, it didn't work well for us, and the program was really starting to show issues by the time I joined the University of Maryland in 2016, which, the more I think about it, was a pivotal year for our digital programs in a wide variety of ways. Number one, we had by that point determined that ShareStream plus Fedora 2 made for a problematic user experience in a wide variety of ways. First and foremost, our curators were unable to edit metadata effectively after an item had been published, so we were unable to manage our own content in really significant ways. It was presenting real problems on the discovery side, it was increasingly a problem on the access side, and fundamentally, we didn't have very good control over our own audio-visual assets. The system we had chosen had locked them up in a proprietary access format, and we were unable to work with those access copies directly or very effectively at all. In 2016, we brought up our Fedora 4 repository and, more importantly for us, really began to re-engage with the Fedora community over the course of 2015-2016 and onwards, after a number of years in which the University of Maryland had detached somewhat from the community, largely for strategic and funding reasons.
We badly needed to sunset that Fedora 2 repository; see that flowchart I've got. So we started planning in 2018, and implementing in 2019, a one-year-long Avalon media repository pilot. As I said, I consider this as much a social problem as a technological one, and all of my colleagues have talked about many of the technical problems that we also encountered at Maryland. I want to talk about how we came to our choices from a values-based perspective, and how we articulated the ways our values are reflected in our software choices. One is the openness of the platform. It is imperative to us at the University of Maryland, at least in our technology group at that time, that we pursue academy-owned infrastructure as much as possible. Furthermore, we care about the sustainability of a platform. I talk about this a lot; I say that our business is permanence. The thing that we as librarians and libraries bring to the table is being here for the long haul, a long-term commitment to technology, and from our perspective Avalon met our needs in that regard. We also care about usability, and we take a user-centric approach as much as humanly possible in selecting our technology. Like I said, these are not merely technical problems. We're not doing this to create the most perfect object model; we're doing this to get our content in front of the eyeballs of the people who need it. Maybe not directly into the eyeballs, but you can follow me on that one. And finally, the last thing we care very much about at Maryland, and one of the values that guides our decision-making process, is inclusivity: not merely inclusivity in the design and accessibility of the application and its content, but also in the way we approach our stakeholders.
I am a big believer in building relationships across our entire organization, and the way we do that is with a co-creation approach that treats people whose expertise is maybe not always asked for as important stakeholders in our entire software selection process. As part of that, in the Avalon pilot phase, one of the things we did was conduct interviews and site visits with the digitization lab and the student employees who have to interact with our systems. We did it with graduate assistants going out and talking to our users in the lobby, pretty much. We reached out to as many people as we could to engage them as partners whose feedback matters to us. As a result, during the pilot we articulated 50 user stories total across our staff, our librarians, our end users, our digitization personnel and metadata specialists, and the repository managers, and like I said, we did that in a very collaborative, co-creative way. About 25 of those requirements were met by out-of-the-box functionality in Avalon, either natively or in a future version, which to us seemed pretty good. And then there were about eight essential issues that were going to require custom development, which we could do because of the largesse of our institution and our development resources: we could build out features and functionality on community-driven, open-source infrastructure. This is one small portion of our overall project plan, and it definitely went exactly as laid out there, with no changes whatsoever, and precisely on time, because that's the way projects work. It didn't. We had a pandemic, and a wide variety of other initiatives for the organization also took priority in that period. But you do get a sense of all of our work streams across the organization.
Again, we were not only working with the content and technology, but doing product development at the same time, and policy setting at the same time, from our curatorial personnel as well as our technical teams. All I have from here on out, guys, is memes. Our strategy for the Avalon migration, after we exited the pilot phase and made recommendations to leadership that were accepted, was first and foremost to bring Avalon up greenfield, effectively, so that we could launch our first major digital collection in it: an NEH-funded digitization program for the Liz Lerman Dance Exchange, which generated about 1,100 videos. So, you know, we like to start small at Maryland; that's kind of our deal. We brought up our minimum viable instance on May 4th, 2021, hence the memes you see there, and prepared our initial instance. And, oh, I was wrong, it's actually close to 1,200 videos, not 1,100, for the Liz Lerman Dance Exchange project. Excellent. We used that to learn what would and wouldn't work, and this is one of those approaches I would articulate as capital-A Agile, right? Get into the system, find out what is going to work for us, and then iterate on that over time. So we learned a lot of things. Number one, we were not going to be able to use Avalon as a preservation repository at our scale. As soon as I started to talk about getting something like 150 terabytes sitting on that server, that was going to be a non-starter. So we made the choice to implement Avalon as an access-only repository for our digital collections, and to continue to have a split workflow for preservation copies. In addition, we learned fairly quickly that the asset transcoding required at our scale would demand a pretty substantial amount of computational resources to be diverted just to that application.
At the University of Maryland, we are running Kubernetes, and we run Avalon in our Kubernetes cluster, which has worked by and large fairly well for us, although I do think we brought down the cluster once, if I recall. One of the other things we learned from the product perspective was that the institutional systems we use for group authorization and authentication management were not going to work in Avalon as we had originally designed. In particular, we use a service called Grouper to manage most of our digital library application permission groups. We had hoped to implement that in Avalon and discovered that it wasn't going to be as effective as we had initially thought, as happens. We also learned very quickly that we would need to build a pretty substantial amount of file retrieval and request fulfillment processes into the Avalon application, which we undertook as part of our final migration project. And we learned that target collection mapping was going to take a considerable amount of our time. At the University of Maryland, in our aging Fedora 2 repository for digital collections, we have a digital collection called Digital Collections, which is not at all confusing to anybody. And the Digital Collections digital collection became, effectively, the overflowing junk drawer after a while, right? Everything is kind of crammed into that digital collection. It is, of course, a real challenge from the discovery side of things, for the end user experience, and also for being able to direct analytics and answer a lot of questions about the usage of our digital content. So that was something we really wanted to clean up and fix as part of our migration. And finally, just like all of my colleagues, we discovered we had no single source of truth for a variety of issues, from the location of our assets to their permission controls.
And we had to, in some cases, make our best guess about what to do with those materials. And I'm going to talk a little bit about the social aspects of that. So, after bringing Avalon up into production and testing out a few different collections, we had 10,600 A/V files that needed to be migrated in approximately six months. And spoiler alert, we did make it. This is a screenshot from our first collection that we migrated, which is actually fitting because it's one of the first digital A/V collections in our original repository: the Jim Henson works. As part of the migration project, knowing what the work was going to involve, I had articulated to leadership that we needed to take a clear-the-decks approach. We needed our technical team, as much as our curatorial and metadata team and my own department, to focus on this migration as their top priority for that six-month period. And we got it, and we made it. On the product side, we had to build out integration with our external IP manager, which we use for access control and authorization at the University of Maryland, and also build a token-based URL request fulfillment feature into Avalon. This is something we customized, and which I'm having a moment with right now, because I'm trying to track down a very irritating production-level bug in that functionality. But please believe it worked in testing. We built out this functionality to facilitate curator-mediated requests as well as access requests for content that is restricted to the campus, which our Henson collection is. In addition to that, we had a huge amount of work to do with the content, and I can't even articulate all of it, but we had to regenerate access copies and manually pull files from actual hard drives that were still floating around in our office, especially over the pandemic. But I believe we avoided a binder of CDs that was sitting in one of the offices.
It came pretty close down to it, though. And then finally we had a huge amount of work to do with the metadata, where we had to crosswalk all of our descriptive metadata over into Avalon's ingest format and remap our source collections. And yes, this was a huge amount of technological labor. We have all of our custom development, we have all of this metadata work, and I can tell you the same things my co-presenters have about that. Instead, I'm going to talk about the emotional labor. These things all have emotional impacts on your participants. You have users who are emotionally affected by the state of software at any given point in time. It stresses people out. It causes morale problems when the systems don't work, and it buys you a lot of goodwill when you launch something that vastly improves their work day. The change leadership aspect can be challenging. We have a lot of personnel who are very attached to the way things have been done, and getting people transitioned to a new system requires a lot of training and a lot of care. In our case, I had a graduate assistant on the ground going to different offices in our special collections library offering training. That took a lot of energy. We also had to be empathetic in our communication plans while keeping our participants well informed. I felt like towards the end of it I was sending out these big multi-paragraph spiels every week about the status of the Avalon migration. As a team lead, I needed to hype my team, provide coaching, troubleshoot things, and occasionally debate perspectives and approaches. All of those things took a lot of time and energy from personnel. And then finally, at Maryland we had issues with turnover as part of this. The migration happened during the pandemic, and we had a variety of people depart between our pilot and our production migration.
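The descriptive metadata crosswalk described above can be sketched in miniature: map each legacy field name to a target manifest column and emit rows in the ingest format. The field names and columns below are hypothetical, not the actual Maryland crosswalk or Avalon's real batch-ingest columns; a production version would follow the target system's documented manifest specification.

```python
import csv
import io

# Hypothetical mapping from legacy descriptive metadata keys
# to target ingest-manifest column headers.
FIELD_MAP = {
    "dc:title": "Title",
    "dc:date": "Date Issued",
    "dc:creator": "Creator",
    "dc:description": "Abstract",
}

def crosswalk(records: list[dict]) -> str:
    """Flatten legacy metadata records into a CSV manifest string.

    Missing source fields become empty cells rather than errors,
    since legacy records are rarely complete.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(FIELD_MAP.values()))
    writer.writeheader()
    for rec in records:
        writer.writerow({dst: rec.get(src, "") for src, dst in FIELD_MAP.items()})
    return buf.getvalue()
```

The hard part in practice is not this mechanical step but deciding the mapping itself, field by field, with the metadata specialists, which is where much of that six-month clock went.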
We turned over about 42% of the team that was working on that project, and that does not include our graduate assistants. It does include our heads of digitization and special collections operations, who were critical partners for us in this. You know, that takes its toll all the way around. And then finally, we migrated the rest of our digital collections out of Fedora 2 without any problems at all. I will not be taking questions on that topic. Yeah, so, thank you. So I just want to give a really big shout-out to Seth, Julia, and Kate here. I think a lot of people in this room can probably relate to some or all of what they've shared today. But really, for us, the message is: migrations are hard. They take a lot of work and a lot of resources, and not just financial resources. It's people's time. It's the time it takes to do the work. It's the infrastructure allocation. It's all of those components all the way along the line. And all of these things tie together, and that's important in order for us to keep the content safe, keep it secure, keep it accessible for our users, and continue to preserve it as part of our digital preservation programs. So what we're hoping you can take away from today is maybe a sense of relief that you're not alone. I'm sure that for every hard decision you have to make, every uphill battle you're fighting within your institution, there's a really solid chance that somebody has made that decision, somebody has fought that battle. And sharing these stories can maybe give some hope that we can do it as a community. Also, don't wait to start thinking about planning migrations. It's going to happen. It's inevitable. Thanks to some emerging concepts like OCFL, it might not be so challenging, but ultimately you still need to plan for it.
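For readers unfamiliar with it, OCFL (the Oxford Common File Layout) helps with future migrations because it stores each repository object as plain, versioned files with a self-describing JSON inventory, so content can be read directly off disk without going through the old application. A minimal OCFL object on disk looks roughly like this (illustrative file names under `content/`):

```
ocfl-object-root/
├── 0=ocfl_object_1.1        # Namaste file declaring the OCFL object version
├── inventory.json           # manifest of all files and versions, with digests
├── inventory.json.sha512    # fixity sidecar for the inventory itself
└── v1/
    ├── inventory.json       # the inventory as of version 1
    ├── inventory.json.sha512
    └── content/
        ├── metadata.xml
        └── media/item123.mp4
```

Each new version adds a `v2/`, `v3/`, and so on, storing only changed files, which is what makes an OCFL store both auditable and migratable with ordinary filesystem tools.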
So make it part of the conversation so that it doesn't become mission critical and then put that added stress, added time, and added strain on people and on system requirements, right? And involve everybody. It can't be the responsibility of one person or one small group of people within an institution. Why should the migration conversation be limited to that one person? The content is everybody's, so it should be everybody's conversation. So that's all we have. I want to thank you for your time. There are all of our email addresses if you have any questions, or you can come find any of us.