Let's go ahead and start with introductions. We're going to switch over to our PowerPoint in just a second, but we thought you might like to see our faces first. I'm Heather Heckman, associate dean for technology at University of South Carolina Libraries. Megan?

I'm Megan Oliver, the digital collections librarian. Along with a team of part-time paraprofessionals, I'm responsible for digitizing and describing two-dimensional items from five of our seven special collections units. Kate?

Sure, I'm Kate Boyd, the director of digital research services. I work with Megan on digital collections, and I also work with Amy Freeman, our scholarly communications librarian, and Stacy Winchester, our research data librarian. Amy?

Yes, hi, I'm Amy Freeman. I am the scholarly communications librarian at University of South Carolina. So, we will get started.

All right, thanks everyone. I'm going to switch over to the PowerPoint, and we are going to turn off our cameras to preserve bandwidth. First, very quickly, I'll go over what we're going to do in the next 30 minutes or so. I'll do a brief crash course on the method, and then the four of us will have more of a dialogue where we go through advantages, disadvantages, lessons we learned, and things we would do differently. And I want to say upfront, and I'll say it again at the end: if you want to have a candid conversation about the options we considered, or if you want to learn more about our process, don't hesitate to email us. We'd be happy to set up a virtual meeting with your team to go through things that maybe we wouldn't be comfortable saying in a video that will be up on YouTube for the longer term.

Okay, so what is multi-criteria analysis? MCA, or sometimes MCDA, multi-criteria decision analysis, is a framework for evaluating and weighing conflicting criteria. It supports consideration of qualities that aren't easily monetized, like values and features, and in this way it's often distinguished from cost-benefit analysis. This has given it some appeal in the policy sphere; in fact, we used a UK government manual to guide our process. There is software designed to support MCA, but we didn't use it. Instead we built our own tools in Excel, and we're happy to share those; just reach out and let us know if you're interested in seeing any of that material.

So, this is the question we asked: can we replace our current digital and institutional repository software with alternatives that meet or exceed our needs for approximately equal or lesser cost? It's a question that probably all of us are asking, at least in the backs of our institutional minds, but it was made more urgent this year by the budgetary crisis associated with COVID-19. We're under pressure to make cuts where we can, and previous efforts to consider the question had fizzled. It seemed like this was a good time to adhere to a rigorous framework and come to a decision, if only a provisional one. We currently use CONTENTdm and Digital Commons, and we decided to consider them together, since, at least at the outset, that seemed like the main way we could effectuate savings, and that has generally proven true. It also, I hoped, had the potential to save some time, though of course it introduced other complications, and we'll probably talk about that a bit more later.

My preferred way to make a decision about software is to try it.
But repository software and enterprise systems aren't like an app on your phone, as nobody listening will be surprised to hear. I often think of the metaphor of a home. Whether you buy or rent, you rarely get the chance to really live in a space before committing to a major change. Moving is expensive and labor intensive. And while it's relatively easy to see what's wrong with where you're currently living, and at least try to make provisions to correct those issues the next time, it's much harder to see what is right with where you are. MCA can be really helpful when it comes to producing a full catalog of considerations.

Okay, so what's the process? Here are the basic steps. The first is to identify the options to evaluate; in our case, which repository software would we consider? Then you brainstorm criteria; the manual recommends about six to twenty criteria for success. It's important to bring diverse stakeholders in throughout the process, but especially at this stage, I would say.

The next step takes the most time: you evaluate each of your options on the basis of your criteria. The tool for this step is called a matrix; in our case, it's just an Excel sheet with options as columns and criteria as rows. Scores are then normalized so that everything is on the same scale. When you actually do it, you grasp the importance of this pretty quickly, but maybe an easy way to explain why things get normalized is to point out that one criterion might be rated zero to five stars while another might be measured in dollars. You can't just average a hundred thousand dollars and five stars together and come up with a meaningful number out the other side.

After scoring comes weighting, which may be the least intuitive step. MCA uses swing weighting, so it doesn't ask how important a criterion is per se; it asks how important the criterion is given the difference between the options considered. Returning to the home metaphor: for many of us, cost is extremely important when we're thinking about where we're going to live. But imagine you're considering three apartments where the monthly rent is identical, the refundable deposits vary by a small amount, and every single one of those deposits is lower than your budget for a deposit. In that case, even though you absolutely care about cost when deciding where to live, it's not something that will help you distinguish between the three options, so its swing weight would be very low, maybe even zero.

In practice, it's very difficult to let go of the a priori value that you place on a given criterion. And whereas swing weighting in this particular example is clear cut by design, you probably won't be surprised to hear that that is often not the case in practice. Because it's unintuitive, I had to do some training, including one-on-one meetings, to get through the weighting, and I'm not sure it was even perfect. In fact, I'm quite sure it was not perfect, despite the training. So even with that investment, I ultimately wasn't totally satisfied with this step of the process.
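To make the scoring and weighting arithmetic concrete, here is a minimal sketch in Python. It assumes min-max normalization, which is one common way to put everything on the same zero-to-one scale; our actual tools were Excel sheets, and every option name and number below is illustrative, not our real data.

```python
# Minimal sketch of normalization plus swing weighting (illustrative data only).

# Raw scores: criteria as rows, options as columns.
raw_scores = {
    "annual_cost_usd": {"option_a": 40_000, "option_b": 55_000, "option_c": 48_000},
    "usability_stars": {"option_a": 3, "option_b": 5, "option_c": 4},
}

# Criteria where a lower raw value is better (e.g., cost).
lower_is_better = {"annual_cost_usd"}

# Swing weights reflect how much the *spread between these options* matters,
# not how important the criterion is in the abstract.
swing_weights = {"annual_cost_usd": 0.3, "usability_stars": 0.7}

def normalize(scores, invert=False):
    """Rescale one criterion's scores to 0..1 so stars and dollars are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1  # avoid division by zero when all options tie
    return {opt: ((hi - v) if invert else (v - lo)) / span for opt, v in scores.items()}

normalized = {
    crit: normalize(scores, invert=(crit in lower_is_better))
    for crit, scores in raw_scores.items()
}

# Weighted total per option: sum of (normalized score x swing weight).
totals = {
    opt: sum(swing_weights[c] * normalized[c][opt] for c in normalized)
    for opt in raw_scores["usability_stars"]
}
for opt, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{opt}: {total:.2f}")
```

Note that if the three options tied on cost, the cost criterion's swing weight could drop to zero without changing the ranking, which is the apartment-deposit point above in numeric form.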
Finally, you discuss scores and issue recommendations. Crucially, the point of the exercise is not to select the highest-scoring option; it's to invest time in thinking slowly and carefully about the options, about the features we value, and about the costs we can and can't bear. Regarding recommendations, we were looking to narrow the list to one or two products to trial, and to build a list of criteria that could inform a request for quotes or proposals, and from that perspective I think we were broadly successful.

Okay, so all of these steps are iterative and flexible. There's an order to the process, but it's not strictly linear, and it may be recursive. If you realize that you overlooked an option or a criterion, or that something is less important than you originally thought, it's okay; you can always go back and incorporate those changes. It's not necessarily painless to incorporate changes, but it's totally doable. Some examples from our work: we narrowed our option list after evaluating some initial criteria, and then in some cases we brought options back into consideration after eliminating them.

MCA is super labor intensive. It's a formal process, and I would not recommend it for most decisions. It's valuable for important decisions that affect many units and user groups, especially if the ways they will be affected are in conflict. But the point, again, is not necessarily to pick the highest-scoring option, even though it forces you to score things; it's instead to think slowly and carefully, to consider things, and to listen to each other. Nevertheless, you do have to assign numeric ratings, and that can be uncomfortable. It is still worth doing, if only to force rankings.

Measured in person hours, we've probably dedicated more than a month to just deciding which products to trial. We've used CONTENTdm and Digital Commons for over a decade; we have invested literally tens of thousands of person hours in them. I think it was reasonable to spend hundreds of hours asking whether we wanted to replace them and considering potential replacements. But it was not a lightweight process for us; let me be very clear, it was not. MCA can be less intense; the manual uses the tongue-in-cheek example of buying a toaster, and yes, I have used MCA in my private life. But what makes it powerful is its formalism, and for good or ill, that typically takes time.

And then I'll just quickly mention a few things about our specific process at UofSC before we start more of a dialogue format. I set up several groups to represent stakeholders throughout the libraries. The primary group included everyone here plus our digital repository development librarian and our research data librarian. Those folks had to attend every meeting and score items.
So they made the largest investment. Members of other groups, including special collections, technical services, and IT, had to attend at least the brainstorming, and then they could opt in to rate any criteria they were invested in. This let them strike their own balance between influence and time invested.

Users are, of course, super important to consider as stakeholders. I will admit that, in part because of the COVID crisis, and for the sake of a little more simplicity, I decided not to take user feedback on all of the options we were considering in the fall. We are taking user feedback on the narrower set of options that we're looking at in spring.

Our primary group included our decision makers; designating decision makers is part of the MCA process. Decision makers have to listen to everyone else's ratings, but they don't have to take the average as the final score. For example, if you're the decision maker for usability, you have to listen to the input from attending colleagues and consider their scores, but if the median, say, is a two and you really believe the product merits a one, you can still give it a one, as long as you listen to the input and consider it before making that call. This helps protect scores from being swayed by too many representatives from a single shared perspective, while ensuring, nevertheless, that those representatives are heard. It also gives the decision maker some important power, and done right, that can help ensure that the people who use the system get more decisive input.

Our matrix started with 20 options and 100 criteria. Remember that the manual recommended six to twenty criteria; if I recall correctly, there's no recommendation for the number of options, but 20 is a lot. The manual's right: 100 criteria is too many to discuss, so we ended up grouping them into categories and discussing at that level. Similarly, although we started with 20 options, we used some initial discussions to narrow the list. In particular, we asked ourselves about the size and scope of each user community, and we verified that all of our content could be ingested and represented. We do have a significant collection of moving images, and that did limit some of our options. When I was making the category groupings, I tried to think about which people would want to attend to discuss the criteria, rather than worrying overly about strict definitions of the categories. In theory, you want categories that don't overlap for MCA; in practice, that's quite hard, and it's difficult to entirely eliminate overlap. We did our best and worked our way through.

In the end, it took about six months to complete the review of the six options considered, and we're now trialing two alternatives, one of which actually was not among the six options considered. We also, of course, learned a ton about our current systems. And that, I think, is altogether more than enough of just me talking, so I'm going to hand it over to Megan to discuss what worked well.
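Here is a minimal sketch of that decision-maker rule in code, assuming the group median as the reference point; the attendee names and scores are hypothetical.

```python
# Minimal sketch of the decision-maker rule: the group median is a reference,
# not the final word. The designated decision maker must hear all input but
# may depart from it after considering it. Names and scores are hypothetical.
from statistics import median

def final_score(stakeholder_scores, decision_maker_score=None):
    """Return the decision maker's call if given, else the group median."""
    reference = median(stakeholder_scores.values())
    return decision_maker_score if decision_maker_score is not None else reference

# Usability ratings for one option from several attendees (hypothetical).
usability_input = {"attendee_1": 2, "attendee_2": 2, "attendee_3": 3, "attendee_4": 2}

print(final_score(usability_input))                          # 2.0 (group median)
print(final_score(usability_input, decision_maker_score=1))  # 1 (override after hearing input)
```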
Sure. So we had several things that worked well in this process. The entire multi-criteria analysis process forced us to slow down, as you would imagine over the course of six months, to be rigorous, and to follow through. We had nowhere to hide from each other; we were always working together, so we spent a lot of time communicating, and we really learned more about what each of us wants in a repository. On a personal note, I found it incredibly helpful to have this open and continuous communication with each other. Broadly speaking, even though it was a lot of criteria, as Heather was outlining, we agreed on the criteria set, and this allowed us some more speed in decision making, believe it or not. It also put us in a really good position for crafting an RFQ if and when we are ready to migrate. So, Heather, do you want to kick off the dialogue on what you saw worked well? I know we're throwing it back to you.

Yeah, the only thing I'll say here is that I joked from the beginning that everyone would probably hate me by the end. Kate told me she thought it was bringing everyone closer together, and I really hoped that was true; sometimes I definitely clung to it.

This is Kate. Yes, I do think it did. We all talked together, at least about twice a week, sometimes solely talking about repositories. The same group of people, that primary group, met a ton, and we heard a lot from the other groups as well. Like Megan said, it really made us focus. And I also want to say that just creating that master list of criteria was a wonderful exercise; it made us think outside the box and really dream big. So those were some good things that worked well.

Yes, and this is Amy, and I certainly agree with all of those things. I do think it brought everyone closer together and made us all much more knowledgeable about the different systems we were using and those we hoped to learn more about. It helped us be incredibly thorough and comprehensive in the evaluation process. I know I felt a lot more comfortable with each of those systems after we went through this process, since we really had to dig in: we explored, we navigated the front end and the back end, plus we got way more involved with documentation than we ever would have otherwise. And because we did this, we knew that, regardless of what we ultimately decided in the end, we had explored all of these different options, and that would give us some ownership of our decision and make us feel comfortable with it. At the same time, I think having input from stakeholders all across the library really helped us think about the different perspectives going into this decision; it wasn't just us. They made us think about things we wouldn't necessarily have considered. The technical perspectives coming from the library information technology team were really useful for me, and getting user perspectives from, for example, the research and instruction librarians really helped me think through the end-user process as well.

I agree. Working with digital collections for years, I still learned a ton from Moving Image Research Collections, our film archive. They have a very different operation down there, and I was unaware of some of the processes they had in place for inputting data into CONTENTdm. So I learned a lot about how other people in the library see and use our repositories.

Yeah, and another thing that I thought worked really well was weighting the criteria. Sometimes we felt a little uncomfortable with how a score landed, but weighting helped smooth out the inconsistencies and irregularities in our scoring process.
So if something popped up to the top of a category and was scored very highly, but we actually didn't care that much about it, we could weight it a lot lower so that it ultimately didn't have a huge impact on the final score. That helped me feel better about the process too. And in the end we had a ton of information that really did boil down to the gist of what we were all thinking.

That was good. Yeah, I'm kind of curious to know whether what you valued changed over the course of the six months. Did your favorite options remain stable over time?

No, mine definitely did not. I came into this really excited about Samvera, and in the end I was pretty lukewarm and turning towards other repositories such as Islandora and TIND. I was also surprised how well Digital Commons held up under scrutiny; that was interesting.

Yeah, this is Megan again. I had a lot of difficulty having favorites. I had high scores, but I wouldn't call them favorites. So many of the repositories we surveyed in this process function in ways that require a great deal of conceptual crosswalking for me. It's not about the metadata or the items being uploaded themselves; I was thinking of what we have in our collections now and how that's going to look when we migrate. So the look and function of each, I think, was throwing me a little, and it was hard to pick favorites. I'll say instead that I developed a short list of what I like to think of as steady options I could definitely go with, which sometimes matched what Amy needs in an institutional repository.

Right, and this is Amy again; I also had trouble picking favorites. They all differed so much from each other, and they offered different functionalities, many of which were desirable. But I was surprised that some offerings I had considered standard from the start, because of the current systems we were using, actually hadn't been implemented yet, or were far down the roadmap, in other solutions that I thought would come up to the top of the short list for me. All right, Kate, can you talk a little bit about what didn't work?

Sure. So we had a lot that did work well, but of course we need to look back and think about what we could have done differently, or what didn't quite work well. Honestly, overall, as academic libraries go, this was a very fast and furious process. It was pretty intense at times, and we were all learning on the go how to use the Excel spreadsheet. It was also difficult for us to combine the user and operator expectations of an IR and a digital library; we actually learned a lot about the differences between those two kinds of repositories through this process. And ratings are hard, and definitely feel arbitrary; there were times when we were puzzled by the scores we ended up assigning. Amy, what did you think?

Right. Well, I'll definitely agree with that. On the first point, sometimes the turnaround period before the next criteria discussion session was a little bit challenging.
We were in the middle of the COVID crisis, and we were all still getting used to doing everything virtually rather than in person, so sometimes it felt a little tough to find the time to evaluate all those different criteria on a pretty tight deadline.

The other thing I noticed that made it a little difficult to work through this process was that sometimes documentation was very hard to find, particularly for the newer products or some of the different components of open source software. Sometimes that lack of documentation, or confusing documentation, resulted in poor scores from me, and I think from others as well, despite the possibility that those features might exist, might be available, and might be perfectly functional. So that was a little challenging. As a whole, I would say that vendors who put documentation behind a login tended to be scored a little more poorly than others. We did try to make up for this by talking to other institutions who use the systems, and that was very valuable, but some systems had both forms of evidence and others had just one. So, like I said, it tended to result in lower scores for the newer, sort of emerging products, even though they had a lot of potential. And also, like Kate said, institutional repositories and digital collection repositories require very different features, so that made it hard to score certain aspects. It was clear that certain repositories are much more intended for one purpose than another; for example, Cortex is clearly not designed to be an institutional repository.

Yeah, it's hard. I think to some extent, because every option suffered from some of these things in at least some ways, it evened out. But Cortex is a great example: its documentation is behind a paywall, and it really is focused on digital collections, so it scored low relative to other options, even though there were lots of things we did like about the system. And that, I guess, is just another case of saying it's not so much the score that's valuable in the end as learning about the product. Megan, do you have anything?

Yeah, I think the one thing that didn't really work well for me, probably because my brain is structured in certain ways, is that I had a lot of issues comparing hosted repositories and open source ones. As a collections manager, I'm not a coder, so open source is very nebulous; it varies from vendor to vendor what you can actually have and what you can actually do with an open source repository. If you're not contracting out your web development, you have to have a full in-house complement of web developers. So for me, all of our open source options hinged on that, and it was hard to score them together, hosted alongside open source.

Yeah, and it also... oh, I'm sorry. It did also seem like sometimes criteria we had come up with tended to either favor or disfavor open source or commercial products, but I think that sort of wound up evening out.

Yeah, fair enough. Sorry, I jumped ahead of you there, Amy. Kate, do you have anything to add to what didn't work well?

No, I think that was good.

All right: what we would do differently. One thing that I would definitely do differently
is that I think I would work harder to incorporate values into the matrix. This is absolutely something that falls within the framework of MCA, but in our brainstorming sessions we tended to focus a lot on features. It's very important to get that kind of material documented, but values also matter in the decision we will ultimately come to, and anything we pick is going to reflect our values to the people who use our collections. It did end up being part of the conversation, but I do wish I had driven the initial brainstorming sessions more in that direction.

And then we've got a note here to conduct more preliminary meetings at the outset of the analysis, to clearly outline project expectations for team members. I set up those preliminary meetings, and at the time I felt like I was asking too much of y'all, so it was interesting for me to see that there was actually demand for further introductory sessions. One thing I might do there is add one-on-one meetings, which people could opt out of if they feel comfortable and don't want to spend more time meeting with me, but which would offer more of an opportunity to talk about the individual's role, what it's going to look like, and what their other commitments are. Kate, what would you have done differently?

Yeah, on a more practical level, the Excel spreadsheet we all used was intense, and Heather changed it halfway through; you basically fixed a few of the formulas. That was good, and in the end it worked well, but there were some bumps in the road, a bit of a learning curve for all of us in figuring out how to use the spreadsheet. If we did this again, you'd have that down, and it would be much easier the second time. I think we'd figure it out.

Yeah, I can definitely cop to figuring that out as we went along, and I would like to think we'd be in a better position next time. As I said at the outset, we're happy to help anyone attending refine those materials at the outset. There is also software that can support this, but it's not something we wanted to invest in just for the sake of this project. Megan?

Yeah, so what I would do differently isn't so much about this project per se but what I learned from this initiative. I think we should perform multi-criteria analysis more often as a group, maybe even annually, to assess a variety of digital tools and decision-making activities, not just repositories. These are the kinds of meetings that can't be emails. While there is a lot of work involved, and I would like to pare down the criteria and find smarter ways to streamline, I did find the actual meetings themselves to be incredibly helpful and connective with my colleagues. So just in terms of doing things differently at University Libraries, University of South Carolina: I think we should do multi-criteria analysis for a variety of digital tools.

And I'd like to chime in on that point.
I agree. And I wish we had separated digital collections and the IR now; I think that would have meant fewer criteria for both, and we could have really focused on the values and functions for each of those repositories, which would be different. Amy, anything to add?

You know, I actually thought it was a great process. There's not a lot I would change about it, other than working through some of those little logistical wrinkles we had early at the start. I'm always open to improving the process in any way possible, but gosh, I thought it went pretty well.

All right, Amy, why don't you take us home.

Yeah, definitely. So, by the end of this process we had learned a lot. Some of our takeaways, as you can see, are a lot bigger than others, but all of them were pretty helpful to us in considering how we could move forward with these decisions, and also in framing how we might use MCA in the future. I hope some of these takeaways will help you if you ever decide to undergo a similar process.

Some things we learned: the open source versus proprietary conversation is tough; it's complex and nuanced, and sometimes values and pragmatic concerns are at odds. Like we talked about, the process was time-consuming, but we did think it was worthwhile, because we had invested so many hours in our repositories. We also saw that our current system use influenced our evaluation process pretty heavily; we'll talk more about that. We also saw that the visualizations Heather came up with for us during the process were really important, and they helped us understand what we were doing while assigning those ratings. And of course, we saw that active communication was important, and that it was important to rely on our colleagues to fill in knowledge gaps about all kinds of stuff: workflows, procedures, repository issues. What did you think, Heather?

Yeah. So, the visualizations, just in case that's worrying anyone in the audience: first of all, again, there is software that supports this, but in Excel all I really did was make some bar charts that let people fiddle with their ratings and weights so they could see how the numbers they were assigning affected the final ranking for each criterion. It's just a little easier to take in, I think, if you're looking at a chart than if you're looking at the numbers.

Bigger picture, this really drove home for me that the open source models out there are difficult for a research-one university with a small team of developers, which describes our case. A very stripped-down open instance isn't going to meet our needs; we have diverse collections and lots of different demands on our services. But we also don't have the staff to support a customized implementation designed specifically for us. As for the cost to outsource: we may very well be looking at outsourcing an open source implementation, but those kinds of costs can be unpredictable, and that can be a difficulty for a large institution like ours too.

Looking at cost more broadly: when you included labor, relatively expensive annual fees rapidly became cost competitive, and they even outscored lower-annual-fee solutions once all of that person time was included. And that's just looking at cost; it's not thinking about other things we might value related to the time that we spend as librarians.
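A minimal sketch of that cost comparison, with purely hypothetical fees, hours, and rates; the point is only that a low license fee can be outweighed by the staff time needed to run the system.

```python
# Minimal sketch of "cost including labor". All figures are hypothetical,
# not our actual pricing or staffing data.

def annual_total_cost(license_fee, support_hours_per_year, loaded_hourly_rate):
    """Annual fee plus the staff time needed to keep the system running."""
    return license_fee + support_hours_per_year * loaded_hourly_rate

# Hypothetical: a higher-fee hosted product vs. a low-fee open source stack.
hosted = annual_total_cost(license_fee=45_000, support_hours_per_year=200,
                           loaded_hourly_rate=50)
open_source = annual_total_cost(license_fee=5_000, support_hours_per_year=1_200,
                                loaded_hourly_rate=50)

print(f"hosted: ${hosted:,.0f}")        # $55,000
print(f"open source: ${open_source:,.0f}")  # $65,000
```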
Now, I will say, all things equal, I would rather invest in people. But in practice, even when I get approval to hire, it's very hard to attract and retain skilled developers at our state salaries. My talented staff inevitably leave, and I wish them all the best, I really do, but it takes months to replace them, and we really feel that interruption. Experience has taught me that while vendor support is not a magic bullet, we have been less likely to suffer this particular pain point when we depend on an established external vendor. We haven't made our final decision yet, and it may not be a commercial vendor, but thinking through that slowly really helped illuminate how difficult supporting open source was, and could be, in our case.

Yeah, and building off of that, I think one thing that really started standing out for us was that sometimes the value of good, easy service for our users that might be offered by a commercial vendor can conflict with some of the values we have internally as an institution, with things like open science and open access. As a whole, I think we're a group that's largely invested in the benefits brought about by open science, particularly when we can invest in that as an R1 institution, but we struggled a little with the fact that sometimes the services and products offered by the proprietary vendors we were considering would meet the needs of our users better than the open source products did. So, the matrix is really practical, I think, and that's for a good reason, and values were considered throughout. But, like Heather mentioned, embedding those values throughout the whole process would have been very useful for us, and I do think we'll do more of that in the future.

All right. Well, I will just close by saying that MCA at this scale is not something I would do frequently, although we could do something annually if we did something a little more stripped down. I mentioned at the top that I actually use MCA in my home life, and usually those meetings are about a single hour per decision. What we were able to do as a result of going through this process was really zero in on the things we cared about for each option, and we managed to come up with a very short list of things to track for each of those products, just two or three criteria to check in on annually. I think we're in a really strong position to do some quicker reviews on this particular subject in years to come, and maybe turn our attention to more ambitious reviews of other topics.

So, just a final reminder: we are happy to set up virtual meetings to discuss any of this in greater detail, including talking about the specific products we considered. We are so grateful to anybody listening who took a meeting with us during our process, and I guess we're grateful to everyone else too, for spending time with us here today.