Hello! How's it going? All right, psyched. It's great. Well, if you haven't been in here today, I'm sorry for your loss in missing a wonderful day of programming. I'm really excited to cap the day of programming with these lightning talks. We've had lots of great people step up and volunteer to do this, and I appreciate it. So we have Dave Rice, Hannah Frost, Erik Piil, Kathryn Gronsbell, Casey Davis, and then we have Karen and Mark. Mark, what's your last name? I'm sorry. Bussey? Okay, Bussey, yeah. So thanks to all of you for agreeing to do this. With that, I think we'll just kick it off. Dave Rice, where are you?

All right, once again, Dave Rice. I usually use the half hour before my presentation to prepare for my presentation, but I don't have any. So what's this going to be? Up to seven minutes per presentation. We'll have extra time at the very end for general Q&A discussion, so we won't do Q&A following each presentation.

All right, so back in my baby archivist days at Democracy Now! we used LTO tape, and it was LTO 3, and it was really, really hard. And then a few years ago this thing happened called LTFS, which I'm going to talk to you about. LTO tape version 5 supports this thing called the Linear Tape File System, so you can write a file system onto a tape as if it's a hard drive. It makes it really nice: you can plug your tape into your deck and mount it as if it's a hard drive. There are things you can't do. You can't copy two files off of it at once very well; it's linear, so the files are in different physical places and you can't get to them independently as well as you can on a hard drive. But it makes access to digital storage quite cheap and quick. An LTO 5 deck runs right about $1,700 right now, and the tapes are about 25 bucks for a terabyte and a half.
So you've got some storage problems; sometimes LTO helps you out. One thing I want to point out about LTO is that almost all the vendors, or possibly all the vendors, that make LTO decks right now release these tools. I'm not sure if they're totally open source or a little mysterious — I'm a little suspicious about them — but they all share the same tools that help you manage LTFS. These are command-line tools, but they present a GUI called LTFS Manager. There's one called ltfs that just lets you mount the tape, there's one called ltfsck for diagnostic issues, there's one to erase it. But basically this lets you mount the tape, and one thing that turned out to be a big bonus is that the file system is expressed in an XML, where you have a big nest of all your directories and files, and you get the normal file attributes like name, modification date, and size. So in my job at the City University of New York, when we write out LTO tapes, when we've got some preservation going on, we take these XMLs and have them uploaded into our database, and that gives us access to all this file system information.

I feel like LTO is something that is a lot more approachable than people think. I mean, there are options out there from free to enormously expensive, and I feel like the free stuff is often quite overlooked. But almost all the vendors, or possibly all the vendors, put out the software on multiple platforms — Linux, Mac, and Windows — that lets you mount tapes kind of the same way you would treat hard drives, push files in there, and get an XML back that says what you did, what's there, what's left. Yeah, that's lightning-talkish, right? Did I go too short? That's like what? Can I go back to QCTools then? And I can't take any questions, right? There's a rule about that. Sweet. All right, somebody asked a question — I'm going to call on James Snyder. Bring it on, baby, bring it on.
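[Editor's note: a minimal sketch of pulling that file-system information out of an LTFS index XML so it can go into a database. The element names here — `directory`, `contents`, `file`, `name`, `length`, `modifytime` — follow the LTFS index schema, but treat them as assumptions and check them against the XML your deck actually writes.]

```python
import xml.etree.ElementTree as ET

def walk_ltfs_index(elem, path=""):
    """Recursively collect (path, size, mtime) tuples from an LTFS index tree."""
    records = []
    for d in elem.findall("directory"):
        name = d.findtext("name", "")
        contents = d.find("contents")
        if contents is not None:
            records.extend(walk_ltfs_index(contents, path + "/" + name))
    for f in elem.findall("file"):
        records.append((
            path + "/" + f.findtext("name", ""),
            int(f.findtext("length", "0")),
            f.findtext("modifytime", ""),
        ))
    return records

def parse_ltfs_index(xml_text):
    # The root element is <ltfsindex> in the LTFS schema.
    root = ET.fromstring(xml_text)
    return walk_ltfs_index(root)
```

The resulting list of tuples maps straight onto a database table of filename, size, and modification date per tape.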
I mean, some people like BRU — it's kind of mid-range, like 500 bucks I think, it's just BRU. It supports both LTFS and the older-school method of just piping a tar file right out to a tape, so it's kind of a mid-level one. And then there are certainly really expensive ones, or companies that try to pretend that LTO tapes are videotapes and stripe MPEG-2 across it all day. But no, I've heard good stuff about BRU, but I haven't used it myself. For the most part the tools we're using for LTO are really low-end. We're just using command-line tools with little shell scripts to help make it a little easier. But I work at a television station and we probably have about 15 users who know how to look up a tape in the database, grab a file off of it, and have a nice day.

Yeah, it's true, sometimes you'll want to copy off file A and file B and it'll be like a half mile between them on the tape, so often you'll try to copy a file off and nothing will happen, because the tape is rolling up to the file, and then it'll start moving pretty quickly — usually I'm getting about 120 megabytes per second off LTO 5 tape right now. So the best is to read on and off of it in bulk, especially when you're writing onto it.

One thing — we work in a Mac environment, and one piece of advice I'd really recommend: if you are mounting LTO tapes into the Mac file system, the Mac file system and operating system anticipate that everything is going to be as responsive as a hard drive, whereas with LTO some things take a while to actually perform because of the delay. And one of the things the Mac likes to do is make little preview thumbnails for all your files.
So if you have a hard drive full of QuickTimes, it'll try to make little tiny pictures of each one of them, and if it's trying to do that on an LTO tape it's going to be so sluggish, because it has to move from one section of tape to the other. So in the Finder preferences you can say: shut off all the icons, I don't want to see any of that, I'm only going to enable file attributes that actually live in the LTO XML — and that's not thumbnails. And that makes LTO much more responsive. But on forums all the time I see people like, "Oh, it's so slow, it's horrible. Oh, these little thumbnails are showing up, though, that's nice." Oh, sorry, guys. Thanks. Is this your phone? Eric, I need you. Okay. Hi, everybody. Hannah. Okay. Eric, I got nothing. This is not taking part of my time, is it? "Microsoft PowerPoint has encountered a problem." Let's try it again. Drat. I don't have to do it in presenter mode. Yeah, maybe just export as a PDF. Yeah, that'll work. Talk amongst yourselves.

Okay. So if you saw my talk earlier today, you know that I'm increasingly interested in what to do with all this content that we've been working to get into digital form and get into repositories and share. And I recently have been — thank you — have been learning about open annotation, and it really strikes my fancy. So what is annotation? This is a scholar's marked-up version of Ulysses. Look at all those annotations. They had a lot to say about this thing, right? I recently was at the Rock and Roll Hall of Fame, and outside they have this — I took a picture of it, I posted it to my Facebook page, and a few people liked it. A like is a form of annotation. We do it all the time. So it turns out annotation has been part of the web, at least conceptually, since the beginning: Marc Andreessen had put it in Mosaic and then took it out. And there's this whole timeline around annotation. So why do people annotate?
A bunch of reasons: highlighting, bookmarking, commenting, tagging, classifying, studying, questioning, replying, editing, all these things. Like I said, we do it all the time. So what is an annotation? This is kind of the official definition for what we're talking about in this context: a set of connected resources, typically including a body and a target, where the body is related to the target. Here's the basic data model. Do I need to explain this? Here's the body — the substance of the comment, or the annotation itself — the target, the thing that you're commenting on, and then this relationship expressed through RDF. There's a web link here if you want to find out more about this.

So, lots of scholarly applications for this: peer review; just organizing and bookmarking stuff that you're working with, if somebody's looking at a data set and saying, hmm, I should bookmark this, I should use this; or tagging stuff out on the web — some astronomer comes along, sees this picture, and is like, oh, I think I see a planet out there, let's tag that.

So in terms of open annotation, what does it mean for video — or audio for that matter, any kind of media resource? Well, video or audio can be the target, the thing that is being annotated. And people do this all the time on YouTube, not in any kind of formal way, but it's there. But video could also be the format of the annotation: you could do a little recording of yourself talking about how Ulysses is interesting. Open annotation will support segments of a video — you can comment on a part or the whole thing. And you can do rectangular regions, a part of the frame, or of course the whole frame. So — oh, this is the problem with PDFing it: it unhid all the things I was hiding, because I borrowed these slides from somebody. So I'm going to go back to this. Thanks for your patience. It just doesn't like this. Too bad. Here we go.
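[Editor's note: a sketch of what that body/target model looks like on the wire, as a JSON-LD annotation targeting a segment of a video via a media fragment. The video URL is made up, and the `@context` URL is the W3C annotation context; treat both as assumptions rather than exact values from the talk.]

```python
import json

# A minimal annotation: a text comment (the body) about
# seconds 30-60 of a video (the target), expressed as JSON-LD.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "Interesting camera move here.",
        "format": "text/plain",
    },
    "target": {
        # Hypothetical video URL; the media fragment t=30,60
        # selects just the 30-second-to-60-second segment.
        "source": "http://example.org/video.mp4",
        "selector": {
            "type": "FragmentSelector",
            "conformsTo": "http://www.w3.org/TR/media-frags/",
            "value": "t=30,60",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

A rectangular region of the frame works the same way, with a spatial fragment (e.g. `xywh=...`) instead of the temporal one.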
Here. Okay, yes. Okay, so can you see that well enough? Okay, so it is becoming a W3C standard. It has a very strong community effort. It's one of the most successful W3C community groups ever, apparently: 138 participants, the fifth largest out of 179, and anyone can join — even you. We are doing a pilot right now, kicking it off this week. I'm not involved in this, but I'm here. We're going to be storing annotations about digitized medieval manuscripts in a Fedora 4 repository — that's the latest version of Fedora — so we're testing it as a large-scale repository but also testing this model. And this is part of the Linked Data for Libraries project that Stanford's currently involved in, and there's a link if you want to hear more about that. The real expert on this is not me; it is my colleague Rob Sanderson, who recently joined the Stanford staff. He's a brilliant guy, he's a community group co-chair, and he's going to make this happen. And it's really going to change the way we do all of our work. That's it.

This is Eric — I'll be quicker than seven minutes. I'm going to start with this. It's kind of like the wobbling pivot of moving image archiving: is technology going to shape and control the archive, or are we going to shape technology to suit the archive? We find ourselves doing this all the time — DIY projects, and re-engineering specific technology to make it suitable to our ends for preservation. So I see the DIY scene and the preservation community as being very intertwined. As an individual, I ask a lot of questions about technology and I do a lot of experiments, and — maybe a bad thing — I don't accept rules and restrictions that don't really make any sense to me. And there are a lot of them out there. So we try to tackle a few at a time with the resources that we have.
One of the problems that I had — I should mention my affiliations; I have about five of them, but I primarily work at Anthology Film Archives in New York. I specialize in analog video preservation and restoration, and one of the bigger problems we have with the video collection is that 90% of our collection is very sticky. There are going to be a lot of conservation techniques applied to our collection before digitization, and it's not really cost-effective to send it out to a vendor. I also happen to be a vendor, and I know how to do this stuff professionally, which not many people do. So I want to bring that knowledge in-house and make it affordable and not a major time-waster.

Just an example of oxide — everyone's seen this before. Now, if anybody knows the tape-cleaning scene, there are already a lot of DIY practices in there. It's basically not cost-effective for us to buy one machine that does one format. And I respect RTI as a company — I'm not knocking them here per se — but we just can't really afford that when we have multiple formats that need cleaning and baking. And you need to clean after you bake. Having an oven in an archive is not a solution to this; you need to have both of these things working in concert with each other.

So about a year and a half ago, I just started designing different tape paths and scenarios using different tape cleaners as models. I wanted to encompass multiple formats — half-inch, quarter-inch, VHS, U-matic, Betacam, things like that — and then started to pool a lot of things that were just available to me. So for example, these are old DC motors pulled from U-matic decks, and a power supply, just to see how much on the cheap I could build a cleaner. And the underlying technology that I was using is all open source.
It's Arduino with a motor shield on top of it, so it's able to control up to four DC motors and two continuous servo motors. What I decided to do was get a little bit wonkish with this: use the servos to control a Pellon system on both sides of the tape, and then use the motors to control the take-up and supply reels. So for about a year and a half I was just starting to experiment and mess around, trying to hit a few benchmarks, working beyond full time. I premiered a little bit of this at AMIA last year and then dug back into it and decided to scrap the whole cassette-based idea of tape cleaning and just use specific spindles. So we actually modified — when I say we, I mean Maurice Schechter from DuArt, and Dave from CUNY and his engineer Flip — we actually modified several spindles so you just put one on top of the other. You can just put your cassette spindles on top of it, which was actually a much more graceful solution than designing a cartridge that would have an elevator, per se. So here's a little bit more cleaned-up version of it, with the Pellon rolls, take-up, supply, and the tape path — I showed these last year. And then here's a breakdown of the budget that I worked with. Super cheap. I mean, if anyone's bought a tape cleaner for one format — it's definitely a fraction of that. I don't know what fraction, but just think about that. In thinking about openness and open source technology, there are definitely niches within the field, points that we can exploit using open source software, and I'm trying to grab some low-hanging fruit here. I'm sure there are plenty of other sophisticated technologies that we can apply open source technology to. Secret note. Okay.

Hi, I'm Kathryn from AVPreserve.
And I'm also here to talk about something else from AVPreserve. It's a tool called Fixity, which, surprise surprise, has to do with fixity. So just to give you an overview — you may have heard of it or seen it or maybe even used it, and if that's the case, please come talk to us. We'd love to hear how you're using it, what you're doing, how it's working. Essentially, it's a free, open source utility that provides automated documentation and review of stored files. It does this through the creation and validation of either MD5 or SHA-256 checksums, and it also allows for the monitoring of file attendance. What that means is: are the files that you expect to be there, there? Are there any new files? Have they been changed in any way? Have their locations or filenames been changed? It's currently, as I said, in version 0.4, and it's available for Windows and Mac operating systems as of this new version. The link is available at the end of this presentation — or if you just use that internet search that shall not be named, according to Ian, search AVPreserve and Fixity and it'll pop right up.

So before I really get into demonstrating what it does and how it does it, I want to talk about why it was created and why it's available to the community. There is a huge issue with fixity. It's something that everyone talks about; everyone knows it's a central tenet of digital preservation, along with authenticity. And because of that, it's obviously in all of our crosshairs. But even though some organizations or individuals may be generating checksums, there is essentially a very small group that is actually doing periodic, controlled, managed validation of those checksums in any capacity. And this is understandable, because the current tool set that was available didn't really allow that to happen, for any number of reasons — the most important one being that the tool sets were really for super IT-tech-heavy folks.
And while that's great, and that's totally usable by them, that's not always the situation you're in when you're in an archive or a library or any other organization that has content. So those tool sets were for a very specific group of people, and if you weren't in that group of people, you had to rely on that group of people. And as we're all aware, sometimes resources are stretched a bit thin, and sometimes you can't get your IT department to give you the appropriate amount of attention for what you need to do. And then adding in a whole other layer of managing data fixity is just adding to existing issues. So these are all kind of the background reasons why there was a need for a really simple, easy-to-use, non-IT-tech-heavy tool, and that's what we have with Fixity. The other part, before I move on, is that the tool sets that were available didn't integrate checksum fixity checking with file attendance. And I think it's kind of important to know what files you're checking the integrity of, right? Like, that's something that you would be interested to know in your collections. Yes? Can I get one head nod? Yes. Okay, great. On to the next slide.

So how Fixity actually does this is, as I mentioned, creating and validating checksums. You can choose MD5 or SHA-256, as I mentioned, and a manifest is created of the full file path and the associated checksum. Then you can set a schedule for when that manifest is checked against, and every time Fixity runs a scan, according to a schedule that you set as you deem appropriate, it will check it and it will tell you what happened. So that's what happens over here. This middle part is actually the report. This is a tab-separated value, or TSV, file. It contains a lot of things, but we'll get to that a bit later — but, as I mentioned, the checksums and the file paths, and then an indication of whether anything has been altered.
So moved, missing, new files, renamed, what have you — it's going to be in there. And the really cool thing about it is it saves it locally to a dedicated folder, and then it also emails designated users — designated participants just put their email in there, and it shoots it off any time you schedule it to.

So this is the interface. Super, super simple. I'm actually going to come back to this, because I want to show you the report first. Maybe hard to read, but essentially what is in here is header information saying it's a Fixity report: the project name, the algorithm used, the date of the scan, the total files scanned, and then a summary in this area of confirmed files, moved or renamed files, new files, changed files, and removed files. And then below here, sort of row 11 and down, as I mentioned, the file paths and what actually happened to them. So you have the high-level summary information and then you have the file-level information. Also note, in the moved-or-renamed categories — for example, row 14 — you have the original file path, and then "changed to" and the new file path. So super easy. You don't have to have any technical knowledge about anything to understand this; you just have to have some kind of non-denominational spreadsheet reader. I learned a lot today in that keynote.

Another thing that I don't have an example of — it kind of looks the same, it's just another spreadsheet — is the history. There's a history directory in the Fixity file structure, and what it does is provide snapshots of your data at any given scan moment. And this is really useful. It doesn't have the summary information; it's literally just what your data looks like at that moment. And it also includes another thing that I didn't mention, but is available as a preference.
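[Editor's note: this is not Fixity's actual code, but the create-validate-compare loop it describes can be sketched in a few functions: hash everything under a directory into a manifest, then diff an old manifest against a new scan to get exactly the confirmed / new / missing / changed buckets the report shows.]

```python
import hashlib
import os

def checksum(path, algo="sha256", bufsize=1 << 20):
    """Stream a file through MD5 or SHA-256 without loading it all into RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def scan(root, algo="sha256"):
    """Build a manifest: relative file path -> checksum."""
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            manifest[os.path.relpath(full, root)] = checksum(full, algo)
    return manifest

def compare(old, new):
    """File attendance plus integrity: what's new, missing, changed, confirmed."""
    both = set(old) & set(new)
    return {
        "new": sorted(set(new) - set(old)),
        "missing": sorted(set(old) - set(new)),
        "changed": sorted(p for p in both if old[p] != new[p]),
        "confirmed": sorted(p for p in both if old[p] == new[p]),
    }
```

Run `scan` on a schedule, keep each manifest as a history snapshot, and email the `compare` result — that's the whole idea in miniature.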
If you want to filter out certain files — for example, if you're scanning a directory full of access and master image files, and you have TIFFs and JPEGs and whatever, and you're like, wait, I don't want to scan JPEGs — you just type the extension into Fixity in a certain menu, and it will skip over those. And if you're just worried about TIFFs, that's what your report's going to cover. So yeah, thank you. This is the link. That's my information. Come talk to me. Thank you.

I'm Casey Davis. I work at WGBH. I'm the project manager for the American Archive of Public Broadcasting. Today I'm going to give you a demo of our Archival Management System, which was developed by AVPreserve. The Archival Management System is a tool that is tracking our digitization project and all of our metadata. So let me log in. Okay, so basically the Archival Management System is how we manage the workflow and the progress of our digitization project. There are nearly 100 stations that are having materials digitized through the American Archive of Public Broadcasting. So the Archival Management System, or AMS, is the central location where station admins go, and where the American Archive team goes, and where Crawford, our digitization vendor, goes to manage the workflow, manage the project, and stay up to date on the progress of the digitization. This is the homepage, or the dashboard, that an admin lands on when you log in. It shows how many hours have been digitized, what percentage of content has been digitized of the 40,000 hours. It tracks the digitization by region, how many scheduled assets have been digitized, by format, and by radio or television. And then if you scroll down, you can look and see all of the different formats that are being digitized and how many are being digitized.
So, as I said, there are three main users for the AMS. There's the American Archive team; there are station admins, who log in and are able to view all of their metadata records and all of the proxy files that have been generated through the project; and there's Crawford, our digitization vendor. The AMS doesn't manage the media itself, it manages the metadata, and there's a player within the detail of each of the records that points to the proxy file that's being stored on the server at Crawford.

So this is the records page that one would land on. You'll see the blue highlighted tab is assets, and there are instantiations. We have 2,160,000 assets. And then if you click on instantiations, you're seeing a table of all of the instantiations. There are more instantiations because there are about 56,000 assets being digitized, and three video instantiations are being created for each digitized video, and then two instantiations for all audio. I should say that the AMS runs on a LAMP technology stack. There are 92 or 91 tables in the AMS schema, based on the PBCore data model.

So, obviously, one may want to search the AMS. There are different browsing functions available, and then there's a keyword search. And I should also note that you can filter by what's been digitized: if I click "reformatted", all of the digitized assets appear in the records or assets table. Also, nomination status is what we're using to nominate assets based on priority of digitization. Anything that was nominated first priority is probably going to be digitized in the project, unless it failed to digitize; second priority are materials that we would like to digitize in future projects. There are a lot of organizations participating, so as an admin of the system, I can view all of the different stations that are participating.
Any station user that logs in can only see their own records. So I will go and click on just one record, and this will take you to the detailed view of the page for one asset. The blue highlighted tab is the asset information, and below it are all of the instantiations. There's a video player or audio player so that one can view or listen to a proxy file. You can also edit assets — you can edit the metadata about one individual asset manually — or you can add a new physical instantiation: if you find another tape, you can add a new instantiation of that asset. And here you see all of the intellectual content metadata for this asset.

So the first instantiation below the asset information is always going to be the master tape, and in this case it was a quarter-inch audio tape. Below it you'll see all of the event metadata that's created by Crawford and is ingested into the AMS via the Google Docs API. And below it are the preservation master — the digital file — and the proxy file. There are two instantiations, as I said, created for audio, so there are only going to be two additional instantiations at this point. And here's all of the technical metadata about the preservation file, and below it is the MediaInfo metadata that's also being generated by Crawford and ingested onto our server. And then, obviously, there's the proxy instantiation and all of its technical metadata as well. You can also edit instantiations — currently in the AMS, you can only edit physical instantiations.

Oh, I have a lot more to cover in one minute. So we have all of these assets, but we have been adding new assets, and you don't have to just manually add new assets or instantiations — you can batch import assets and instantiations via MINT, which is an open source mapping tool created by the University of Athens in Europe.
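[Editor's note: the mapping idea — CSV in, PBCore XML out — can be sketched roughly like this. MINT does this through a GUI; the CSV column names here are hypothetical, and only two PBCore elements are shown, but the namespace is the published PBCore one.]

```python
import csv
import io
import xml.etree.ElementTree as ET

PBCORE_NS = "http://www.pbcore.org/PBCore/PBCoreNamespace.html"

def row_to_pbcore(row):
    """Map one CSV row (hypothetical 'identifier'/'title' columns)
    to a minimal PBCore description document."""
    ET.register_namespace("", PBCORE_NS)
    doc = ET.Element(f"{{{PBCORE_NS}}}pbcoreDescriptionDocument")
    ident = ET.SubElement(doc, f"{{{PBCORE_NS}}}pbcoreIdentifier", source="local")
    ident.text = row["identifier"]
    title = ET.SubElement(doc, f"{{{PBCORE_NS}}}pbcoreTitle")
    title.text = row["title"]
    return ET.tostring(doc, encoding="unicode")

# Tiny in-memory CSV standing in for a station's spreadsheet export.
sample = io.StringIO("identifier,title\nWGBH-001,Evening News 1974\n")
for row in csv.DictReader(sample):
    print(row_to_pbcore(row))
```

A real mapping would carry many more fields (format, dates, nomination status, instantiations), but the shape is the same: one row, one description document.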
And we've integrated that, so we can import CSV or PBCore XML, map it to the AMS PBCore schema, and import it into the AMS. And if you're going to import assets and instantiations, at some point you may want to export. So you can export limited CSVs, which is what most of the stations want, because a lot of the stations don't use PBCore XML. The limited CSV has just the GUID, the local identifier, the title, the format, and the nomination status. And then you can also export a bag, which has the full found set of records from your search. It has a PBCore XML record with checksums, and it has the PREMIS metadata as well, if it has been digitized. And I believe that's all. Okay, thank you.

We're going to — I'm going to tell you what we're going to do. We're going to show you a demo of WGBH's HydraDAM, based on Sufia, based on the Hydra tech stack. We're going to do a live demo, so hopefully all of you guys will stop being on the internet so that we can get a clean link. We're all right here. I don't trust the internet. Mark is brave enough to do a demo here. One thing that we didn't mention earlier, when I was showing the slides, is that it does have the ability — well, I think Mark will show you — the ability to tag things for access. So things that you ingest can be open access or private access; it has those controls. But you can't take my notes, because I have to know what I'm showing. Oh, also, this is based on the Penn State Sufia solution bundle; DCE, Data Curation Experts, which Mark is part of, helped build out their version of this for us. There are actually pieces from many parts of the community. I happened to be looking at code for another project, and I realized how much of the PBCore export that we're going to show you is actually the work of Adam Wead during his tenure at the Rock and Roll Hall of Fame.
If we were to go through and comb through this code, there are pieces from all across the Hydra community. So as Karen said, the main design is from Penn State's ScholarSphere, which is a self-deposit institutional repository. Their goal was to make it easy for folks in the scholarly community to give content to them, and that was a key driver for WGBH as well. So to get content in, I just find something I want to upload. Here's my movie, and I say start upload. I get to choose a little bit of metadata about this asset. Wait, I can't spell in front of people. There we go. Oh, we'll keep this private. So one of the things Karen was talking about was the fact that, as I upload content as an administrator or contributor, I can select whether I want this to be available open access, limited to the institution — we're set up for WGBH, but you can install this for whatever institution — or keep it private to me, so that you have the ability to manage the content, but it's not being displayed and shown. And that's in sort of harmony with what rights assertions I'm making about the object and who might be able to use it. So I'm actually going to make Kevin open access.

While I'm doing that, we'll see that I've just uploaded something; while it's uploading and being characterized, it stays private, and later on it'll become open access. The system was primarily envisioned to handle audio and video, but because we inherited it from this other arena, we have the ability to pull in images, text, all kinds of other things. So for instance, if I have, oh, perhaps a video that I might have the transcript of — in addition to bringing in the video, I happen to have a PDF of the transcript, so I can bring in and download the transcript, which turns out to be handy. So you saw that there was a very limited amount of metadata; I could have actually opened up a form and given much more metadata as I uploaded the object.
But I also have the ability to go back at a later date and edit a fairly extensive set of PBCore metadata. And once I've done that — so you just saw that I had lots of fields filled — we actually have the ability to export this to PBCore, so that it can be interchanged with any other system that's capable of reading PBCore. Although I was noticing that we're on 1.3, because of when this was minted, and it's time to do some updates. And I already talked about rights and access. So again — actually, let's go back. My latest isn't here yet. Let's look at Karen's dashboard; I'm Karen right now. There, Kevin and the two. So one of the things that will happen — oh, we got flipped — is we'll get a thumbnail of the initial still. We have a little playback; you can play it bigger. There's a person in that, too. I can't flip that, but you all have two suits, right? So again, having done this, I can go edit this. Kevin might not like this being shown, so I'm going to make it require you to log in and be a member of WGBH. So for instance, right now as a user, I can search for Kevin and the two. If I log out now, because we have gated discovery and I'm not logged in, we won't see that any longer. That's my list of things to show. We just wanted to show it live, because there were lots of screenshots of it earlier. But if you have any questions, come up and talk to Karen and me.

All right, so let's open it up for Q&A for any of the people who presented. Go ahead and shout it out. I have a question for Eric if nobody else has a question, which is: are you documenting that? Are you putting that in, like, GitHub or anything? Or is this just something — ? Yeah, I'm going to, once the code's a little bit better. The coding has been really hard to pick up, and especially to relate to things that I want to do through Arduino, because I'd probably do better with just regular hardware hacks.
Like, for example, the pellet will have to move very slowly, and somebody was like, well, maybe you can use a motor from a clock or something like that. And I want to incorporate all of the motor shield work within Arduino. So the code's going to be there. Cool. Yeah, I guess that's a good question for everybody: if you didn't mention it, where can people find what you presented on today? The AMS is on a server, at /ams, and one's on GitHub, right? Yeah — the thing with the AMS is that you can use OpenRefine to batch-refine data, which is really important. Our data is terrible. HydraDAM is on GitHub under Curation Experts, and Fixity is also on GitHub. The open annotation stuff is on the W3C website. Okay, great. Are you looking for collaborators on your open hardware? Collaborators now, more like coders, right? That would be really helpful. Yeah, I have all the ingredients, but it's a matter of, like, good chefs. And I imagine that's true for the others. Yeah, I'll put the list and the code that I'm working with online, like, this week. If there are other questions, please shout them out. Yeah, I have a question about the Fixity reports — they come out flat like that; it doesn't look like they're structured in a way to do more dynamic reporting. I don't think the folks in the back can hear, so the question is: the Fixity report — is it just flat like we saw, or is it more structured? It's flat but structured. I mean, if you want to parse it out, it's thoroughly structured, so you could pull it apart and create something from it. So it's flat, but easily parsable. Yeah, it is multi-platform. Somebody — I think it was the Artefactual folks — mentioned that they had gotten it to run on a bunch of platforms. I was unable to do that, so I'm curious to hear how they did that.
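On the Fixity question above: a "flat but structured" report can be pulled apart with a few lines of Python. The column layout here is purely hypothetical — it is not Fixity's actual report format — but it illustrates why a flat, delimited report is still machine-parsable:

```python
# Parsing a flat, tab-delimited integrity report into structured records.
# The columns (path, checksum, status) are an illustrative assumption,
# not the actual Fixity report layout.
import csv
import io

report = """\
path\tchecksum\tstatus
video/kevin.mov\tabc123\tconfirmed
video/demo.mov\tdef456\tchanged
"""

rows = list(csv.DictReader(io.StringIO(report), delimiter="\t"))

# Once the rows are structured, dynamic reporting is a one-liner:
changed = [r["path"] for r in rows if r["status"] == "changed"]
print(changed)  # files whose checksums no longer match
```

That is the distinction the speaker draws: "flat" describes the file on disk, not the data, which keeps enough structure to feed a database or dashboard.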
It's Mac and Windows currently; the scheduling is different. Any other questions? If not, I'd love to hear if other people have things they want to mention that they're working on or know of in this area. Yes, good. Hopefully this time next year, I'll know how to retrofit LEDs into film viewers and projectors — so you replace the bulb involved with a new LED. Do you want to talk a little bit more about that, or you could come up... no, no, no. Chris, I completely forgot to distribute the piece of paper to collect people's ideas, but maybe we can just do it now. Yeah, sure. So today in the open source committee meeting, we were talking about how we'll do a more in-depth survey after the conference about what people like or want to see more of. But we thought it'd be useful, while people are in a room, to just get the low-hanging fruit: thoughts on what people are hungry for more of, on presentations for next year, or things in the same domain — or workshops. You were talking about maybe there being a need for an introductory workshop for the conference, maybe before packaging, so that you feel more comfortable going into my thing. Things like that. Thoughts on that? Yes. How many people use open source software in their organizations? How many have actually done development on open source software? There are a lot of users; maybe 25%, 30% do development. So, can I make a shout-out? I'm part of the Hydra group. One of our biggest challenges is documentation, and part of the problem is that the folks who are leading the building of the stuff already know it, and don't know what's not obvious to folks using it. As a user, you can be tremendously valuable to whatever project you're working with by volunteering a little time. It's two paragraphs — two paragraphs that we've never gotten before.
I know from our project we would love you, and I can imagine that's true for every other project. So that's Mark's plea. Yeah, last year, actually, one of the winners of hack day was a documentation project on FFB. Was there more about this this year? What happened with that? Well, there was more documentation this time, but last year's documentation was posted to GitHub, and I think the link was distributed after the conference. It's under the open source committee GitHub account; it's also on the hack day page, at the bottom. It's well documented. We will post that again. What are the other hack day events, for the people that do not know? Oh — with this conference, with the judging. So tomorrow evening we'll be judging, so the folks who participated will be judged by a panel of the same judges, some of whom are in the room. And then all the results will be presented on Saturday morning. Unfortunately, we only have 30 minutes to present — probably like two minutes per project — and then the winners are announced. And doesn't that include voting by moving your body to a place in the room? But it's kind of sad that we don't really get to show off the tools that well — that's a little bit important. Because it's just like a lightning talk. Yeah. I mean, on the plus side, it's a plenary, so we get to be in front of the whole organization; on the downside, it's very limited time. Maybe that's something we can push for next year, if you want to see more. Okay, any other thoughts or questions? Just wondering, in general, will the slides from today's presentations be available in a central place?
Yeah, I think the answer to that question is yes, for most if not all of them. We certainly have a video recording, and I have yet — I was slacking on distributing release forms. There's also other video recording that happens at AMIA, which actually goes into AMIA's archive; that's not made accessible. For everybody who agrees, we will make the video publicly accessible, and I imagine people will make their presentations available. Okay, well, I guess like the lightning talks, we don't have to use all of our time. So thank you so much, everybody, for attending. Thank you to the presenters, and I look forward to seeing everybody around.