 My name is Stu Snydman. I'm the Associate Director for Digital Strategy at the Stanford University Libraries, and this afternoon I will talk about two projects, Spotlight and Mirador. So we'll start with the rather uncontroversial assertion that images are fundamental carriers of cultural heritage. They convey meaning and history. Faculty and researchers and students use images for all manner of teaching and research work: dissertations, course projects, scholarly publications. And for that reason, many of our institutions have built sophisticated digitization programs and complex infrastructures to manage and provide access to these important resources. So Stanford is leading two open source projects intended to extend and enhance our use of digital images, and for that matter other media types, but we'll get to that in a bit. I was talking with a colleague a couple of weeks ago who made a compelling argument to me that the work we're currently doing with digitization and digital images is actually not enabling transformative uses; it's just accelerating traditional tasks. If we digitize something, it saves us an airplane flight to go see the original, these kinds of things. I think there are some interesting challenges to that argument, but the projects I'm going to report on today are really looking toward enabling transformative uses. So I'm going to start with Spotlight, which at its heart is a digital exhibits framework. I've recently come to see it as much broader than that: a tool for enabling digital storytelling with the collections we have in our repositories, and a platform for creating digital collections, exhibits, or digital stories that scales for curators, faculty, graduate students, or others who might use them.
So I'm going to briefly describe the reasons we decided to pursue a different approach to building digital collections and exhibits at Stanford, a rationale that seems to have resonated with many of our peers. This is a diagram, a little hard to see on the screen, that we've used to explain to our curators, other library staff, and our peers the range of possible methods of providing access to our digital collections, along two axes. The vertical axis is the amount of time or effort it takes library staff and IT staff to implement these things for a collection, and the horizontal axis is the depth of exposure, the level of context we're able to provide with these different methods. So the bottom left-hand corner is simple and cheap, but not very sophisticated; the upper right-hand corner is real expensive and pretty sophisticated. Just quickly: we do things like blog posts and news posts in the bottom left-hand corner of that diagram to advertise exhibits, or maybe talk about the acquisition of a new digital collection. We've kitted up our content management system, in this case Drupal, to enable curators to create online digital collections or exhibits. It's kind of self-serve: our staff can create pages and provide context about digital objects. It's not super sophisticated in terms of features and doesn't integrate tightly with our digital repository or our search environment, but we can provide a little more context about our digital collections. Moving up diagonally, we also have, of course, integrated digital objects into our main discovery environment. In our case, our digital catalog is called SearchWorks, so once we take in digitized objects, we have a pipeline that makes them discoverable in SearchWorks.
So there's rich discovery of our digitized images, along with all the other items in our collections, but not a lot of opportunity for experts to provide that extra amount of context, intellectual scaffolding, and storytelling. It doesn't really involve the expert, the curator, or the librarian much, aside from the creation of basic library metadata. But again, it's a little more sophisticated technically and a little more expensive. So this is the cluster toward the bottom left-hand corner of the diagram. Then we get to the upper right-hand corner, and these are the Rolls-Royces. These are the high-profile collections that we take in, where we'll often seek grant money to build very sophisticated websites with lots of features and lots of involvement of the content expert. It may take small teams of software developers, metadata analysts, and experts to produce these beautiful websites. There was an 18-month period at Stanford where we did three of these. Each of them took somewhere between three and nine months, with at least two developers each, a UX designer, and lots of project management and engagement with the curators and faculty experts. We're really proud of the websites that we build, but it's really not scalable and really doesn't satisfy all the needs or desires we have for managing and providing access to our digital collections. So we endeavored to find a middle ground, and this is where Spotlight came in. We showed this diagram and talked it through with many of our peers, especially but not exclusively those in the Blacklight and Hydra community: we wanted to find a sweet spot that allowed us to provide a self-service environment but with some level of sophistication. So the key features of Spotlight: full featured, meaning we wanted to enable faculty and curators and others to build really robust sites to highlight their collections, with lots of interesting features and widgets, but we needed it to be self-service.
We didn't want any software developer or library IT involvement. We wanted it to be integrated with our existing discovery and digital repository workflows and environments, so we didn't have to make copies of files or recreate metadata in a separate environment, and we wanted it to be very flexible, to allow lots and lots of customization. Importantly, Spotlight is an extension of Blacklight. For those of you not familiar with Blacklight, it's an open source discovery platform used by many institutions represented in the room for their online catalogs or other kinds of digital collection discovery environments. It provides faceting, it's got a Solr back end, and Spotlight is part of an emerging ecosystem of Blacklight applications like GeoBlacklight and, hopefully, the forthcoming ArcLight. That's helpful because if you adopt Spotlight, you're adopting Blacklight, and there's a ready community of open source developers and institutions actively supporting Blacklight. If you already have Blacklight at your institution, then your technology experts and your developers already understand the stack, and as new features are added to Blacklight, they're added to your exhibits and digital collections platform. So what I'd like to do now is give you a brief run-through, and I'll show a quick video of each of the features of Spotlight. There are important features from the end user perspective, which is what your end users will see, and then, importantly, from the curator or exhibit creator perspective. From the end user's perspective, we have a full-featured website. It looks like a lot of the collection websites we've created as one-offs; it's visually attractive; with Blacklight it is more or less responsive and accessible; and we can vary up the homepage a little bit.
We have the ability to create customized browse categories, which are saved searches. We get the facets, and we get the search results in a variety of different formats. So let's go through a quick video and I'll go into a little more depth. This is the landing page for our exhibit platform, and this now comes with any Spotlight installation, under the assumption that you're going to have multiple exhibits in your Spotlight platform; it's intended to scale. So I choose one of my tiles, the Bob Fitch Photography Archive, a recently acquired archive of photography of many important figures in the civil rights movement. We've got a nice homepage with galleries for both the King Collection, the Martin Luther King Jr. Collection, and the Cesar Chavez Collection. It's a fully customizable menu, so those menu choices are ones that the curator or the exhibit creator can set. We've got the basic search features of Blacklight, so I can do my basic search and get a very familiar-looking search result that I can display in a variety of ways. I have my facets on the left that are automatically configured from my index, and then I can click on a record view and see my image. OpenSeadragon is built into the software, so I can do deep zooming and panning. I have my metadata on the right. The metadata can also be customized, so exhibit-specific metadata can be added. The browse categories are actually saved searches that I can then create customized images for, and I can navigate the user more deeply into subsets of the collection. You'll notice the top banner changed: the curator was able to choose a different image specifically for the Chavez Collection and add some narrative context to that search result. If I add more Chavez images to the repository, they automatically get added to this browse category; it's directly linked to my repository. And again, I have my result view.
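Since the repository stays the system of record, a browse category like this is essentially a stored Solr query that gets re-run on each visit. Here's a minimal sketch of that idea; the field name `collection_ssim` and the core name are hypothetical, not Spotlight's actual schema:

```python
from urllib.parse import urlencode

def saved_search_url(solr_base, query, facet_filters):
    """Build the kind of Solr select URL a saved search / browse
    category resolves to. Field names here are illustrative only."""
    params = [("q", query), ("wt", "json")]
    for field, value in facet_filters.items():
        # each facet selection becomes a Solr filter query (fq)
        params.append(("fq", f'{field}:"{value}"'))
    return f"{solr_base}/select?{urlencode(params)}"

url = saved_search_url(
    "http://localhost:8983/solr/spotlight",
    "chavez",
    {"collection_ssim": "Bob Fitch Photography Archive"},
)
```

Because the query, not a static list of items, is what gets saved, newly indexed items that match it show up in the browse category automatically.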
I can also create custom pages, either thematic pages about my collection with text and example images, or about pages to acknowledge contributors or talk about the author, and these are all completely customizable. So Ben Stone, who's our curator for British and American history, built this more or less on his own, with a little bit of help from our lead designer because this was one of the earlier exhibits, but without really any engineering support. From the exhibit creator's perspective, it's intended to be a really easy-to-use, form-based, widget-based, drag-and-drop kind of building environment. The exhibit creator or collection creator can set the identity: the title, subtitle, descriptions. There's some user management for different types of users, so a basic contributor to the site or a site administrator. There's the ability to control, and this is quite interesting, the metadata fields: which metadata fields actually display, in which kinds of search results, and what their labels are. So now we're empowering curators to actually change the labels for our metadata. There's the ability to create these custom browse categories, and ways to customize the icons on the browse category pages or the mastheads by taking existing images in the collection and cropping or trimming them to include them in your site UI. And then there's a widget-based page configuration framework that allows you to build these custom pages to tell the narrative story. So here's a quick video of the creator page. For all your custom pages, you see you have an edit button for in-place editing of any page. And this shows you the widget framework, so I can create individual sections of a page; it's block-based. And say I want to add another section to the end of this page with a new type of widget.
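Behind that drag-and-drop UI, a block-based page is conceptually just an ordered list of typed content blocks that gets rendered top to bottom. This is a rough sketch of that data model only; the block type names and keys here are invented for illustration and are not Spotlight's actual identifiers:

```python
# Hypothetical block types; the editor stores something analogous
# as an ordered JSON list of typed blocks.
page = {
    "title": "The Chavez Collection",
    "blocks": [
        {"type": "heading", "text": "Organizing the Delano Strike"},
        {"type": "text", "body": "Fitch documented the strike on the ground."},
        {"type": "image_with_caption",
         "item_id": "fitch-0042", "caption": "Chavez, 1966"},
    ],
}

def render_outline(page):
    """Flatten the block list into a simple outline of block types,
    the order a renderer would walk when building the page."""
    return [block["type"] for block in page["blocks"]]
```

Adding a new widget type then means adding one more block shape plus a renderer for it, which is what keeps the editor extensible without developer involvement per exhibit.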
I hit the plus button and I have a variety of choices, whether it's an image carousel or an image with text to the side of it, and I can start building a page with the objects that are in my exhibit. Then I go to the dashboard, where there are a variety of tasks I can complete. We've built in Google Analytics, so it automatically configures your site with Google Analytics. This is a demo site, so there's not much traffic, but it shows you basic traffic information and your top hits, so your curator can see those immediately when they go to their dashboard. There's the ability to change the title and subtitle and customize the overall appearance. You can see here the masthead of the site: it takes an image from the site, you can choose a different image, it configures the size, and you can basically custom-make the theme or masthead of your site. Here are the metadata configuration controls. You see a list of all the metadata represented in the Solr index, and now I can change the order in which it's displayed on a record page, or I can just click my mouse and change the label that will appear on a result page. Same thing with facets, right? I can configure the search and customize the facets that appear on the left: their labels, whether they appear or not. So there's a high degree of control over the user interface for the exhibit builder. In the curation section, I have my list of items. I'm going to talk a little more in a minute about the buttons in the upper right, the add non-repository items feature. This is basically the ability to take any image, even an image that's not in your repository, and add it to your collection, either using a single image upload or a spreadsheet that points to the image resource on the web and has some basic metadata in the spreadsheet. Here are our browse categories, again, custom searches that I can then turn into custom pages with my search results, and then the ability to add new about pages or feature a theme page.
So that's just a quick video walkthrough of the basic features. What's new? We started developing Spotlight in 2014, and we've had 25 week-long sprints' worth of effort. The major improvements have been with the visual design, what the end user's experience looks and feels like, adding widgets that are associated with objects in the collection, but also a variety of enhancements to the curator's experience. One of the big pieces of feedback, and I think this was a misrepresentation, maybe poor communication in the early days, was a conception that you needed a Fedora or Hydra repository to use Spotlight. And while Stanford has in fact integrated our Spotlight exhibits platform with our repository, it's not a requirement, right? If you can spin up Blacklight and a Solr index, you can use Spotlight. In response to that, a lot of folks said, hey, you know what, we don't have our repository well built out, and it would certainly be difficult to integrate it, but I have a lot of faculty or curators who want to build exhibits from things they have either on their desktop or in some other web-based environment. So we created the add non-repository items pipeline to enable creation of exhibits from items that might not be in an integrated repository. We're currently working on another wave of engineering on Spotlight right now. Some of our aspirations: much of the work right now is to accommodate non-image resources. If our image resources have full text associated with them, we want a way to do some basic full text searching, and affordances for non-image resources like audio or video or geospatial objects. We understand that the Avalon project has some aspirations to do some media integration as well, so I think that's forthcoming. There's also the ability to add any IIIF-compatible resource to an exhibit, which we're working on today, and I'll talk a little more about IIIF in a minute.
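The spreadsheet side of that non-repository pipeline is straightforward: one row per item, with a URL pointing at the image plus whatever basic metadata you have. A minimal sketch, assuming hypothetical column names (check your Spotlight version's documentation for the exact headers it expects):

```python
import csv
import io

# One row per non-repository item: a web-resolvable image URL
# plus basic descriptive metadata. Column names are assumptions.
rows = [
    {"url": "https://example.org/images/photo1.jpg",
     "title": "March on Washington", "date": "1963"},
    {"url": "https://example.org/images/photo2.jpg",
     "title": "Delano grape strike", "date": "1966"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["url", "title", "date"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()  # upload this CSV through the curation UI
```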
So there's still a wave of work happening now. Use cases: why do we use Spotlight? What have we seen? At Stanford, we certainly are using Spotlight to build companion exhibits to physical exhibits, or pure virtual exhibits, to highlight gems of the collection. We're also using it to create enhanced portals to large collections of tens of thousands of objects, not just specialized exhibit-type features. We're also seeing interest, though, in a couple of other uses that we maybe didn't intend or think of early on. One is the use of Spotlight in the digital humanities for a new form of digital storytelling: faculty want to use resources that are in the library's repository to tell a narrative. We've also been contacted by graduate students and faculty about using Spotlight as a teaching tool, assigning undergraduates to build a customized page using some image resources on a specific topic as a form of instruction. So that's been a really exciting and somewhat unanticipated development. There are a variety of other institutions, maybe some of them in this room, who are either experimenting with or intending to use Spotlight, and we can talk in a few minutes about ways to get engaged. So this is Spotlight. I think I might hold questions for both projects until we get to the end. I'm going to transition, maybe not too smoothly, into project number two, which is Mirador. So how many people, by show of hands, went to Tom Cramer's IIIF session this morning? Okay, a handful; early this afternoon, rather. So Mirador lives in the ecosystem of software that is compatible with the International Image Interoperability Framework, and I'll talk briefly about that in a second. But it was really driven by a use case of more sophisticated comparison and annotation of images at disparate repositories. Spotlight has some ability to take in images from disparate repositories, and we're working on IIIF integration now.
But Mirador from the beginning was about bringing image collections together from disparate repositories and doing comparison. The driving use case I like to use to explain this: about a year ago, or maybe less, I had the good fortune of going to the Rijksmuseum, and I saw the physical exhibit they had just installed on late Rembrandt. It was startling to me to think about the effort it takes to assemble these rare and amazing works from private collections and museums and galleries around the world into one physical space. So it made me think of the use case of the art historian who might want to compare a Rembrandt to a Frans Hals to a Vermeer that are in three different museums. Digitization helps our cause, because the Rijksmuseum and the National Gallery of Art and the Met all have digital versions of these images. But still, I'd have to go to the Rijksmuseum website, figure out how to use it, maybe wade through a little bit of Dutch, and figure out whether I can download the thing. Then I would have to go to the National Gallery of Art and find the Frans Hals, and the interface is going to be a little different, or it might not be. And then I would go to the Met website. So now, as a scholar, just to compare three paintings, I've got to go to three different websites with three different user interfaces, download images to my hard drive, maybe open them up in Photoshop, and on we go, right? Imagine if we could take these three images and put them in the same interface, where I could zoom them side by side, annotate them side by side, and manipulate them side by side without having to do all that. This is what IIIF enables, and this was the driving use case of Mirador. So Mirador is open source JavaScript software. It is truly a community project; I'll talk a little more about who's contributed to it in a bit.
It is compatible with open standards like IIIF and the Open Annotation standard. And it is interoperable, so it's able to work with images from a variety of distributed sources. So, the International Image Interoperability Framework: Tom spent probably an hour or more this morning talking about it, so I'm not going to go back into an in-depth explanation, but it is a standard, a technical framework, that enables exchange of images across repositories with compatible software. Mirador lives in this ecosystem of IIIF-compatible software. There are image servers that you can install as infrastructure that support IIIF. There are basic viewers like OpenSeadragon and OpenLayers that are already IIIF-compatible. And then we're talking about this top level of applications that can do more sophisticated things with image objects: the Internet Archive BookReader, Diva.js, and the Universal Viewer, a software project sponsored by the Wellcome Library. Mirador lives in that layer of the IIIF ecosystem. So I am going to show you two videos to demonstrate the core features of Mirador. The first video covers the basic features, and it moves kind of fast, so let's see what happens here. Okay, so you hit the plus button, and already you see a variety of image objects from several institutions: Harvard, the National Library of Wales, Yale, the Bodleian. We get an interface that has my image in the center, and I can zoom and pan deeply. This is an experience we've grown to expect now: high performance interactions with large images. I can navigate; this happens to be a book object, so I can navigate with the thumbnails. If I have table of contents information encoded in my IIIF manifests, I can navigate the book using a table of contents. I can also hide the table of contents and the thumbnail viewer. There are a variety of other views.
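The deep zooming and panning that these viewers share all rests on the IIIF Image API, where every request for a region of an image at a given size is just a URL with a fixed segment order: `{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}`. A small sketch of composing such URLs (the server and identifier are made up):

```python
def iiif_image_url(base, identifier, region="full", size="full",
                   rotation=0, quality="default", fmt="jpg"):
    """Compose a IIIF Image API 2.x request URL from its five
    path segments: region / size / rotation / quality.format."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# The full image, versus a zoomed-in detail scaled to 400px wide:
full = iiif_image_url("https://iiif.example.org", "page-001")
detail = iiif_image_url("https://iiif.example.org", "page-001",
                        region="1000,2000,800,600",  # x,y,w,h in pixels
                        size="400,")                 # width 400, height auto
```

Because any compliant server answers the same URL pattern, a viewer like Mirador can request tiles from Harvard, Yale, or the Bodleian with identical code.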
So if it's a book object, and I'm told it's a book object, I can go to a two-page view, which allows me to zoom synchronously on both pages at the inner margins. I also have a thumbnail view, again a different perspective on a multi-image object, to scan it. And then I have an information panel that gives me metadata, a logo of the host institution, some of the rights information, et cetera. I also have annotation capabilities. I hit the annotation control, and when I hover over pre-existing annotations, I can see them. I can zoom into an annotated region and hover and see the annotation. And I can also make new annotations that will be created in an open-standards kind of way. I click the control, I draw the rectangle, and I can make a simple annotation. I also have the ability to format the annotation, and I can embed video or audio annotations as well. These are saved in the Open Annotation standard. We've also seen some examples of transcriptions of objects using Mirador. The UI isn't perfectly suited for transcription, but in the pipeline are some new designs and new aspirations to build a more tailored transcription interface as part of Mirador. So that's the basic functionality, but really the comparison use case is the one we were striving for. My colleague Ben Albritton, who runs our manuscripts program at Stanford, has written a wonderful blog post about an interesting comparison of Chaucer's Canterbury Tales. If you just Google "Canterbury Tales Stanford Library," it will be one of the top hits, and it gives you an explanation of how you can do this comparison on your own if you want to test it out. There are two manuscripts, one from the National Library of Wales and one from the Huntington Library, that he wants to compare, and we'll show you how Mirador enables this level of comparison. So, we start with the Huntington.
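In the Open Annotation model that those drawn rectangles get saved in, an annotation is a small JSON-LD document: a body (the commentary text) plus a target, where the rectangle becomes an `#xywh=` media fragment on the canvas URI. A minimal sketch, assuming a hypothetical canvas URI (real annotations typically carry more fields, such as an `@id` assigned by the annotation store):

```python
def region_annotation(canvas_uri, x, y, w, h, text):
    """Minimal Open Annotation targeting a rectangular region of a
    IIIF canvas via an #xywh media fragment."""
    return {
        "@context": "http://iiif.io/api/presentation/2/context.json",
        "@type": "oa:Annotation",
        "motivation": ["oa:commenting"],
        # the body: what the annotator wrote
        "resource": {"@type": "dctypes:Text",
                     "format": "text/html",
                     "chars": text},
        # the target: which region of which canvas
        "on": f"{canvas_uri}#xywh={x},{y},{w},{h}",
    }

anno = region_annotation("https://example.org/iiif/canvas/p1",
                         100, 250, 400, 300,
                         "Note the decorated opening initial.")
```

Because the target is a URI rather than pixel coordinates baked into one application's database, any other Open Annotation-aware viewer can display the same annotation on the same canvas.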
We've already seen how you can get a simple object into your window, but on the left I have this add new object control, and I can add a slot to the right, add a slot to the left, or add a slot above or below. I click the plus button on the second space and I find the National Library of Wales example. And I can find the opening page of both versions of the manuscript, and then my user can zoom in to the opening initial and compare them side by side. The next thing that you're going to see, and I'm going to play this one more time, is a new feature in IIIF, a new part of the spec, that allows me to find an object in a repository that's IIIF compatible and just drag an icon into Mirador and drop it in. One of the big questions people ask us is about discovery: how do I find IIIF items, and if I find them, how do I get them into a viewer? Stanford so happens to have an illustration of Chaucer in one of our manuscript collections. So what you'll see here is that I can go up to the change layout icon and create all kinds of sophisticated layouts depending on how big my screen is, and I can manipulate the workspace. I go to Stanford's Chaucer portrait, I find my IIIF icon, and I drag it into Mirador in the empty space, and now I've got my two versions of the Canterbury Tales on the left and my illustration of Chaucer on the right, and I've got this really interesting comparison of three objects from three different institutions. So this is kind of the special sauce of Mirador. We've seen a variety of use cases: comparison of objects side by side, annotation, transcription. Both Stanford and Harvard are using Mirador in support of MOOCs, online courses where thousands or tens of thousands of students are comparing objects from different repositories and creating annotations. And one of my favorite use cases for IIIF and Mirador is this idea of reunification.
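What actually travels in that drag-and-drop is essentially a manifest URL: the IIIF Presentation API describes an object as a manifest containing sequences of canvases, and a viewer walks that structure to build its thumbnails and table of contents. Here's a sketch with a pared-down manifest inlined as a Python dict (the URIs and labels are invented; a real viewer would fetch this JSON over HTTP):

```python
# A pared-down IIIF Presentation 2.x manifest, inlined for illustration.
manifest = {
    "@type": "sc:Manifest",
    "label": "Canterbury Tales (sample)",
    "sequences": [{
        "canvases": [
            {"@id": "https://example.org/iiif/canvas/f1r", "label": "f. 1r",
             "images": [{"resource": {
                 "service": {"@id": "https://example.org/iiif/img/f1r"}}}]},
            {"@id": "https://example.org/iiif/canvas/f1v", "label": "f. 1v",
             "images": [{"resource": {
                 "service": {"@id": "https://example.org/iiif/img/f1v"}}}]},
        ]
    }],
}

def canvas_labels(manifest):
    """Walk sequences -> canvases, collecting the labels a viewer
    would use for its thumbnail strip or navigation."""
    return [canvas["label"]
            for sequence in manifest.get("sequences", [])
            for canvas in sequence.get("canvases", [])]
```

Each canvas in turn points at an image service, which is where the Image API URL pattern takes over for the actual pixels.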
That is, taking an object that has been dispersed from its original form and reunifying it virtually, and there are a handful of examples of this emerging. One of the earliest examples we've talked about: Otto Ege was a 20th-century biblioclast. What he did was buy whole manuscripts, cut them up, and sell the leaves in portfolios of 50. You can actually find Ege leaves on eBay, and they're distributed across many repositories around the world. So the same fellow I talked about earlier, who runs our manuscript program, Ben Albritton, has endeavored to reassemble Ege 1 virtually. This isn't a very interesting screenshot, but what you're seeing is a virtual reunification of this manuscript from, on this screen, at least three different institutions: Stanford, where we have about 15 or 16 leaves, the University of Mississippi, and the University of North Carolina at Greensboro. And so that's a really powerful, I think transformative, use of some of these technologies. So IIIF is truly a community project, and Mirador is truly a community project. It started out with development at Stanford. Last year Harvard, because they were running at least two or three MOOCs using Mirador, invested quite a bit of software development. We've had contributions from Yale and from Princeton, and from the Bibliothèque nationale de France through a manuscript project there called Biblissima. We have a community call every two weeks, and we get at least 10 to 15 institutions represented on each call. So development of Mirador continues, and we've got some unfinished business. We're at version two. We're not quite at pace with the entire IIIF spec or the latest versions of the APIs, so that's a really high priority objective for the next wave of development. There's also support for image overlays, or multiple variations of the same image, so you can toggle.
If you have multi-spectral images or X-ray images, you can toggle among multiple variations of the same image while you're zoomed in. There are also image adjustments, brightness and contrast and those kinds of controls, and support for right-to-left and top-to-bottom objects. The user interface is almost entirely internationalized. We have a little bit of work to do, but Yale has helped us with the internationalization framework, and they've got at least a Chinese translation; I know there are also German, French, and Dutch translations of the interface. So that's one way people are contributing. We are good friends with the folks who make the Universal Viewer, which is one of the other IIIF-compatible viewers, and we're actually talking about sharing some components at the API level. I know the folks who build the Universal Viewer are building a 3D viewer that will eventually be IIIF-compatible, when IIIF actually gets to 3D. The Conservation Space project, which is being driven by a variety of institutions to help build conservation documentation, has chosen Mirador as its embedded image viewer and annotation framework, so they have committed several engineers over the next six months to do some significant Mirador development, primarily in the space of annotation. And then we have fully-baked designs and user stories for transcription and translation, and I know that Yale has a project that requires a transcription interface in Mirador, so that's coming. There are a lot of threads happening with Mirador development. With both of these projects, community participation and community adoption is the sustainability plan. So please get involved and stay in touch. And I've tried my hardest to leave 10 minutes for questions, and I think I've done it. So, questions.