So, this evening we have Debbie Bond, who is the Electronic Records Archivist at Washington State Archives, with us, and she's going to talk to you about the Washington State Archives digital repository and about her position there. But the first thing she wanted to do is share a little bit about the Washington State Archives through a YouTube video. In 2001, Washington Secretary of State Sam Reed came up with the radical notion of creating the first ever state-run digital archive. As more and more public documents were born digital, it was clear we needed a facility that could house these documents as well as digital photos so people could access them on their computers. In 2004, the archive went online. The success of this pioneering effort exceeded all expectations. We see between 40,000 and 50,000 unique users every month, and in one year the records have been accessed by more researchers than in the whole history of the Washington State Archives. Government agencies are only beginning to grapple with keeping track of their electronic records, with being responsible for electronic records, and with archiving them. But it's highly important; they can't afford to ignore it. In order to fulfill our statutory mandate of preserving records for the people of Washington, we must preserve electronic records. This is not an option: 90% of records are produced electronically, therefore we must preserve them. So, a lot of people say, well, why should I start a digital archives now, because it's going to be outdated in four or five years. Digital formats are constantly evolving and changing over time, so we are constantly re-architecting and rewriting, and that's what we found we had to do with our back end process that preserves our data. Recognizing the work they had accomplished, in 2007 the Library of Congress gave them a grant to partner with other states and share their knowledge. It's beneficial for other state archives to use because of the groundwork that's already been laid.
We have already dealt with so many issues. The infrastructure they developed was utilized to create and host digital archives for other states. This project was fundamentally transformative for Indiana because it put in place a repository and a system to allow the public to access archival information online that had never before been available to the citizens of this state or any other. So that's what made this such a pioneering effort. We broke down all kinds of barriers and provided a very effective delivery system. As states, we also shared the same custodial legal role that the state archives in Indiana had or that the state archives in Nevada had. It not only helps states make their records available to their constituents, but it helps smaller communities and agencies within the states do the same. Smaller communities have a mandate to manage their electronic records, so if they're creating records, no matter what format, whether it's paper or electronic, they must follow the laws in managing those records. So the Washington State Digital Archives provides that help for them with their electronic records. We also assist them with education for managing their electronic records. I am in awe of what the staff has done here at the Washington State Digital Archives, and I'm just amazed to see how tightly people work together and what they have accomplished. We have preserved over a hundred million digital objects. It's as large an archive as exists in the world. I came from a background of working exclusively in paper records. I'm now on a multidisciplinary team. We have developers, a network administrator, people from the IT world, and then archivists. We've just created something new and it's very exciting. Archiving is not just about history. It's about sustaining and preserving records for the future. What I hope in ten more years is that this archive is not unique. Failure is not an option.
The model of the Washington State Digital Archives has proven to be successful and scalable for the next generation. Archive 2.0. Thank you, Dr. Franks. I'm going to turn my speaker down here so I don't get feedback. And hello everyone, thank you for having me tonight. I believe that video is important, and I usually show it at the beginning of most of my presentations because I think it serves to help you understand the background and the mission of the Washington State Digital Archives. It also allows you to see our facility. We are in that big beautiful building that you saw at the beginning of the video. You were able to see the server room and also how the staff interact with each other. I'm going to give you an overview of the preservation strategy. I want you to understand that the repository that I work for is unique in scope, content, and the disciplines involved. Most repositories that are trying to preserve electronic records are much smaller institutions, many of which are universities; remember that we are talking about all repositories here, not just public records repositories. And then of course you have historical societies and museums and all types of archives. Tonight I'm going to talk a little bit about the Digital Archives itself and then our preservation strategy. And I'm going to try not to wander into the access portion of what we do here, because that poses the greatest number of challenges for us. And then we'll talk about the challenges that we face for preservation. And then I want to spend a couple of minutes explaining what our next big projects are going to be. So that will be the looking ahead portion. For just a minute, let me tell you about the model that we have for the Washington State Archives. We are under the Secretary of State umbrella here in the State of Washington. We do archives and records management.
We do state agencies and local government, which is a little different than where I worked in California, when I worked for the California State Archives. The California State Archives helps preserve the records of only state government. It's a very big government, and so they don't have a lot of resources to go out and help all of the local governments and support archives for them. In Washington, however, we do both state and local government, and we have a branch system. We have five regional branches that preserve the paper records of local government in their region. And then the main state archives is located in Olympia, which is our state capital. The digital archives is located in Cheney, Washington, which is way over on the east side of Washington in Spokane County. And this location was chosen to build the digital archives because it has fewer risk factors from the physical environment. And this is an important consideration in preservation, not just a technological consideration: we don't want to be in the high-risk earthquake zone, since the fault line runs up the west coast. So we are way over where it's a little bit safer. Here in our current org chart, you can see up at the top we have the State Archivist, and that's our administration, and they are located over in Olympia. And then below that is the layer of the chief applications architect, June Timmons, and myself, the electronic records archivist. We're very much a multidisciplinary team. We are the two leads. And then the third lead is Harold Thor, who's our network administrator. And we all wear very many hats here. But in the core mission of preservation and access of Washington's public records, my duties include acquisitions and metadata creation and then access. June, the chief applications architect, is mainly in charge of preservation, which I'm going to be talking about tonight on her behalf, and then system functionality.
And then our network administrator is in charge of hardware and system security. Well, not just system security, but physical security as well. So let's look at some statistics. We have been live for 11 and a half years. And as you can see by these stats, we've undergone a lot of growth. We started out preserving three and a half million records, and those were indexed digital objects. The way we got to that point at the very beginning was that during the building of the digital archives, for two years prior, our branches had volunteers come in and scan the most frequently requested records from the paper branch holdings, mainly the vital records and property records that people request most often. So these were scanned and indexed just so that they could populate the databases before the digital archives went live. Even though the main purpose was to preserve born-digital records of government, the creators of the digital archives knew that government agencies wouldn't be ready to just start transferring records at the very beginning. So we opened in 2004, and by 2005 we had 3.5 million records. I was actually hired in the summer of 2008 as the first electronic records archivist here. The digital archives was developed by computer science experts. And I came on board, like I said, in 2008. By 2011, we had 99 million records preserved. And this was because we had gone out and started helping the county auditors, who are the recording officers in the state of Washington, to start preserving their recording systems. Now, today in 2016, we are preserving over 173 million records. And you can see the little snapshot I took this morning of our website where it says records preserved, records searchable, and new this month. On the new this month, I think what happens is maybe at midnight or something, the number that we have ingested for the day rolls over into that. And it's updated every day. Right.
So let's jump into the meat of the presentation, which is going to be our preservation strategy. I'm going to frame the information that I present to you as the answers to six different questions. And those are: What do we preserve? Where are the files stored? What ingestion tools do we use? What processes occur during ingestion? How are the preserved files protected? And have we always done it this way? And you can probably guess that the answer is no, we have made improvements along the way. Let's start with what do we preserve? And the answer is many things, including born-digital archival records. And this is based on the records retention schedules for state and local governments. Some examples of these are reports, publications, even photos, and administrative documents such as, let's say, city council minutes, ordinances, and resolutions. We also have two very special categories, and those are the county recording systems and the superior court records. We have 39 counties in Washington, and today I have helped 23 of those counties become actively involved in monthly transferring their recording systems to the digital archives. And then we have four county clerks transferring their superior court records. We also preserve access copies of frequently requested holdings from our branches. And these are primarily vital records like birth, death, marriage, a lot of naturalizations, the old census records, and historical property record cards. And then we do take annual snapshots of state and local government agencies' websites. We do that for historical purposes, much like the Wayback Machine does, but we do it for our government agencies. We do have a web crawler that does that, and if an agency is going to pull their website down, they can give us a call and ask us to manually capture that site, which we do. When we crawl an agency's website, we pick up the HTML only. We don't go after linked documents. We can't do that.
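To illustrate that HTML-only crawl, here is a minimal sketch in Python of how a snapshot pass might decide which links to follow. This is not the actual Digital Archives crawler; the extension list and function names are hypothetical, purely to show the idea of capturing page HTML while skipping linked documents.

```python
from html.parser import HTMLParser

# File extensions a snapshot like this would deliberately skip: linked
# documents must come in through a separate records transfer, not the crawl.
SKIPPED_EXTENSIONS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".zip", ".tif"}

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags in one page of HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawlable_links(html_text):
    """Return only the links an HTML-only crawl would follow."""
    parser = LinkExtractor()
    parser.feed(html_text)
    return [
        href for href in parser.links
        if not any(href.lower().endswith(ext) for ext in SKIPPED_EXTENSIONS)
    ]

page = '<a href="/minutes/2016.html">Minutes</a> <a href="/reports/annual.pdf">Report</a>'
print(crawlable_links(page))  # the PDF link is skipped
```

The point of the sketch is the filter at the end: the crawler keeps following page links, but any href that points at a document format is simply left behind, which is why agencies still have to transfer those documents separately.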
So if they have archival material such as reports or any kind of documents that are accessible through their website, it is their responsibility to transfer those to us under a separate agreement. We don't pick up any transfers through the web snapshot. When agencies transfer their records to us, we ingest them through an interface that we call Archivist. That's the name of our application, Archivist. And then the engine that runs in the back end is called Autotod, which stands for Archive Utility to Optimize Transfer of Digital Documents. It's the internal process that automates the ingestion of all electronic records transfers that come through Archivist. So why do we call it Autotod? First, I can tell you that working on a multidisciplinary team, and this has been my first experience working with developers, they like to name everything. And Todd is the name of our ingestion coordinator. And as I will talk about in a little bit here, there was a time when he had to do everything manually. And that was very time consuming for him. So they created a new engine, and they wanted to name it after Todd because they were automating Todd. So that's where the background for that name came from. If you look at the little screenshot I have here, and I don't know, it might be a little blurry, you might not be able to read the fine print. But this is one of the screenshots from walking through the Archivist process that the agencies use when they send us records. Now this shot is not the first screen. What happens is they log in; the IP address that they transfer records from must be registered with us as an Archivist user, because obviously we are trying to protect our system from intruders. But they log in and it recognizes which agency they're from, because that metadata is all set up in the system. And then in the next screen they will select the type of transfer data they're going to submit.
Whether it's a particular county's recording system or one of the county clerks' superior court systems, or if it's going to be digital objects only, or if it's going to be digital objects that are going to be accompanied by a spreadsheet or a delimited file of indexing metadata. The screenshot that you're looking at is the page for the transfer of digital objects that are accompanied by indexing. So in this screen you would select your metadata format. Let's say that I'm sending digital objects as TIFFs, 100 TIFFs, with a spreadsheet with all 100 TIFFs indexed on there. My metadata format is going to be an Excel spreadsheet. So I will select that from the dropdown. And then I'll browse to my metadata path and pick up the file. Just like you attach an Excel spreadsheet to an email, you're going to do the same thing: browse to it and select it. And then we're going to go to the digital object format and select that format from the dropdown. It might be TIFF, it might be PDF, whatever. I'm going to select that from the dropdown, do the same thing, browse to my folder that holds my digital objects, and attach those. So there are a number of screens. I think there are six or seven different screens that a user goes through in order to attach their data. And they go through the remainder of the process like this. And the next thing I want to do is explain what happens after they get everything attached and they click what amounts to the send button. So what processes occur during ingestion? And I will say that this is where your SIP turns into your AIP. For those of you who are familiar with the OAIS model, a SIP is a submission information package, which is the data that is submitted by your user or by your record creator. Then we attach all this stuff and do all this stuff to the submission information package, and it becomes an archival information package. So I believe from looking at Dr.
Franks's syllabus that if you're in the ERM and preservation class, you'll be learning about that later this quarter. So once the records are sent, Autotod first does virus scanning, because if it detects a virus, that just stops the ingestion process. And it's probably no surprise that emails are where we find most of the viruses. It does validation, which means that Autotod checks all kinds of things. It checks for correct data. It will check for any index file errors or index file rows that don't match a digital object. And I'm going to give you an example. This is a super simple example. Let's say that I'm going to send three photos through Archivist, and they're going to be processed by Autotod. Three photos. And I want indexing metadata to go with those so that they're searchable on the website. So I will attach an Excel spreadsheet that has three rows of metadata, one row for each photo. And for one of the photos, the file name is My Cat: M-Y, space, C-A-T. However, when I index it, let's say that I accidentally typed the file name as MyCat, without the space. Well, the validator is going to check that and it's going to tell me: you have a digital object for which there is no matching index row, and you have an index row for which there is no matching digital object. So it does that one-to-one correspondence matching. Even if you have a spreadsheet that has 4,300 items on it, it's still going to do the one-to-one matching. It's not a matter of how many; it must match exactly, no matter how many. It will also send an error when, say, a county auditor sends a document through with its recording system for which Autotod doesn't recognize the document code. And the example I'm going to give you on that is that last year, there was a new law in Washington regarding transfer on death deeds. And so people had to record these at the county recorder's office.
And when you record something here, the recording officer has to attach a document code to each recording, indicating what type of document it is. Every county recorder, when they create a new document code, needs to let me know so that I can apply that to their account in our system. If they don't, and they try to send a record through with that document code, Autotod will tell them: you can't send this through, we don't recognize that document code. So I had county after county getting these errors when the law said that people had to record transfer on death deeds. And I just knew every day I was going to have a new county auditor call me and say, I tried to send last month's recordings and everything from transfer on death deeds, TOD, did not go through. So those are the kinds of things that the validator checks. It also does checksum creation, and we use the MD5 hash method. And then some really exciting stuff happens. This is what June, our chief applications architect, calls unpacking the files, and what I call applying the business rules. What happens is it opens the HTML string for a file and starts reading it; by it, I mean Autotod. Autotod starts reading and applying the business rules for that particular file. Autotod asks: what titles do you belong to? Are you restricted? Is your digital object restricted? Are any of your metadata fields restricted? There are many business rules that apply, and that's a pretty complex process, so I'll leave it there. The next thing that happens is the creation of the manifest data, and that's really administrative metadata. An example of administrative metadata might be that Autotod will assign an ingestion session ID number and an accession number. And then it will record the IP address of the computer that's transferring the file. It will record the name of the person whose account is logged in that is transferring those files. It will record the checksum value.
It will also put a date and time stamp on the transfer. I want to say basically that's the who, what, when, and how of the transfer. And that's really important for the chain of custody. Now, before I go to the next slide, I want to say that there is one point that I forgot to put on the slide. And that is that the system will also create a presentation copy of the file. We always store a preservation copy in its native format, but we also create and store a presentation copy. Okay. So what I'm showing you here is a snapshot of our ingestion log. The staff has access to the live ingestion log; this is a snapshot from last week. If you look in the column on the far right, it will tell you the date that the transfer happened. But it runs live all day long, and all the staff are able to monitor the ingestion log. And we do it for different reasons. I monitor it because whatever comes in, I want to QC it or assign one of my staff members to QC it. And also, I need to update the descriptive metadata if I see something that's an accretion to an existing series. We have a lot of series, and I pretty much have them all memorized, so I can recognize when something is coming in that's an addition to something that's come before, or if it's a brand new title. And then if you look at the column on the left where it says view, if you want even more details on an ingestion session, you just click on view and then you get all the details for that particular session. So before we go to where are the files stored, I'm going to ask if there are any questions. We can probably spend about five minutes answering questions if we have any right now. Thanks so much, Debbie. This is really awesome. Just a really quick question. Is Autotod a homegrown, in-house built system, or is it a vendor tool that you purchased? That's a great question. We are a homegrown shop. We don't use any canned applications.
We do get a lot of really good ideas out there from open source software. Now one thing that I will say, there are a few tools from Microsoft that we use. No surprise, because we're in Washington. But just about everything is created from scratch here. And that also is one of the things that makes us unique. I see in the chat box too that Edward has a question. How do you catalog? Do you use DACS or RDA? Well, we do use DACS, because we are in the U.S. and so we use the U.S. standard, which is DACS. Danielle has a question I think as well when you're finished. I have. Okay. Yes. When I do my cataloging I use DACS, but the initial description system was not DACS. I'll remind you that for the first four years that the digital archives was live, they did not have an archivist. So when I started, I can't remember how many records were online, but to my great dismay, everything that you looked at online in the description field said coming soon, because they had no one to create the intellectual access to the records. So I created a DACS model for the front page, or for the results screen at the item level, which is another thing that was difficult for me to get used to, item-level retrieval. From the paper world, I was just used to aggregate description of records. Nevertheless, I did create a description model based on DACS, and right now the only record series that I have displaying in DACS are minutes, ordinances, and resolutions. My graduate students are the ones that usually work on the description for me. So they're in the process of taking some more of our records and putting them in the DACS model. I notice that you have a hundred and seventy-three million records preserved and about sixty-nine million records that are searchable. Is it your goal to have everything searchable, or is that not practical? Right now it's not really practical. Right now I'm concentrating on the most frequently requested records being searchable on the website.
And I liken it to having open stacks and closed stacks. The open stacks are, of course, the website, where it's all self-service, although we do have archivists available to answer questions if people need help. I feel that the website was well designed, but there are some navigation challenges, so we're here to help people if they need that. The closed stacks are all the digital objects that are being stored in our databases with really no public-facing intellectual access to them, which is something that we're going to address in the future. So right now the staff are pretty much the only ones that know what's there. We have so much to do, and we just have to bite off little chunks at a time. So, yeah, it's just not feasible right now for us to make everything searchable. And I do have to say, too, that a lot of the records that are stored are index records for reference purposes. For example, we do subscribe to the Social Security Administration's master death index. The only ones that we make available are those associated with the state of Washington, and we don't preserve any of the index records for the other states. What I'm going to tell you next is very, very high level, because it's a very complex process. And, of course, this is June's domain, not mine, and so I'm probably not able to articulate it as well as she could. But let's talk about the database structure. At a high level, the databases are stored on different servers, and that's for the protection of the records. One of the servers holds the digital objects, and that's both the preservation copies and the presentation copies. There is another server that holds the discovery metadata for the digital objects and also the indexing, the searchable metadata that I told you about. And that's for records available on the website. Right now on our website, we have 34 artificial collections. And by artificial collection, I mean, you know, in archives, traditionally, records would be organized by provenance.
But we can't really make them accessible that way, or that's not the way it was built. And so they were put into artificial collections based on a single document type with common metadata. For instance, marriage records are an artificial collection because your metadata is bride, groom, marriage date; for birth records it's father's name, mother's name, birth date, baby's name. So that's why these are arranged in artificial collections. Like I said, we have 34 of those. And in the discovery metadata database, there is a table for each one of these collections that holds the indexing. And then we have the administrative metadata, which is on yet another server. That contains the manifest information and a copy of the archival transmittal form that goes to the transferring agency, and everything that we need to assure that we can prove that we have authentic copies. It documents the chain of custody for all the files. All right. Let's go back past the ingestion logs and talk about how the preserved files are protected. Intrusion prevention is, of course, a very important topic. But it's also very complex, and it's the job of the network administrator. His office is right next to mine, and I laugh every time I walk past his office. We have glass offices, and he has got a huge bank of monitors that he's always watching, looking for intruders coming into the system from bad IP addresses. He has to constantly block IP addresses, because he subscribes to services that advertise, for lack of a better word, bad IP addresses that are trying to hack into systems. So he monitors that at the high level, all the way down to the staff clicking on something that they shouldn't, whether it's an email or attachment or something in a website. So we have to be really careful here. In the 12 years that we've been live, we've never had a security incident. And sometimes I say to him, Harold, you've never been compromised.
Our system has never been compromised. And I guess that's the wrong language, because he tells me, well, that doesn't mean someone's not in there. It just means I'm not aware that somebody's been there. But I do know that they haven't done any damage. So, I wouldn't want to have his job. There's so much responsibility in what he has to protect, and he does a really good job. Another way of protecting the preserved files is back-ups, back-ups, and back-ups. And that is partly the job of the database administrator, who is one of the positions under June, and also the network administrator. What the database administrator does, pretty much on an hourly basis when he's here, is monitor the ingestion and back up all the newly ingested records to disk. And then what the network administrator does is take a quarterly back-up of the entire system to tape. And of course they make numerous copies of those and distribute them in various physical locations, which is the best practice for back-ups. So in order to give you kind of a visual of what this means, think about the 173 million records that we are preserving right now. 173 million digital files. Think of how many pages that is, and let's just say that all 173 million of them are single-page documents, which we know they're not, but at a minimum they're going to be single-page documents. How much physical space and storage would that take up if you had 173 million pages to store? Now compare that with those same records in electronic format saved to tape: one and a half cubic feet. Such a huge difference. And that amount of space is actually getting smaller as the density of tape increases. Tapes now hold terabytes; you get tapes that hold a huge amount of storage in the same physical size, and they keep getting cheaper. So that's a nice part of his job. This next page is an FYI only. It's a handout of the Digital Archives network architecture.
It's the public handout, because our network administrator certainly doesn't want to show everything, and I could not possibly explain everything on here to you. However, we usually have people in the audience who do understand technology at this level and network architecture, so I've given this to you for whoever is interested in it. And the final question for the preservation strategy is, have we always done it this way? Well, of course, the answer is no. There were actually two earlier ingestion methods. In the beginning, it was very much a manual ingestion process from hard drive. Staff would go pick up records from the records creators and put them on a hard drive, and the ingestion coordinator would manually ingest the records. And he would have to deal with all the problems. This is the real Todd. And Todd would have to deal with all the problems of the data that came in. So the next step was for the staff to create what was called the one-click wonder. And that allowed agencies to transfer remotely via FTP over SSL using an interface called the data transfer wizard. But the problem with that was there were no standards for the data that went through. There was no validator. And so even though remote transfer via FTP was possible, Todd still had to deal with all the problem data that came through. So in 2010, our developers released Archivist, which is the tool that I explained earlier. And because we have a validator in there, incorrect data can't go through; the problems are then given back to the agency to do the data cleanup themselves, and then they can resubmit the records. And we're always here to help them: they'll get errors and they'll call us and say, I got errors, I don't understand why I got errors, and then we'll help them with their data cleanup. So that explains, at a very high level, our preservation strategy. Now quickly, what are some of the challenges that we face?
Well, format and standards compliance and restrictions for Archivist. One of the biggest problems we have with that, and it's related to the next point, being faced with decades of e-records that weren't managed for eventual transfer to an archival repository, is huge, huge file path names. We have a very limited number of characters that people can use: we limit file paths to 172 characters, because more characters are added to the file path during the ingestion process. We also take only certain formats if the records are going to go on the website. If it's not going to go on the website, right now we'll take any format, though if it's proprietary, that may cause us some problems in the future. And then the third bullet point here, proprietary systems, that's a whole other story in itself. The county recording systems and the county superior court record systems are proprietary. It is a lot of work, working with their vendors to get each one of those to export their files in a format that's compatible with the digital archives system. I just wrote a case study about that, which is going to be published in a textbook in June, I believe, edited by Phil Bantin. So look for that. It's on trusted digital repositories. I would say that a couple more challenges that we have are data sets. We have not taken any data sets yet, and we're not sure how we're going to handle those. Email is a problem. It's our biggest nightmare. Somebody even says the E word and we all run screaming. And then social media has become a little bit of a challenge. And then looking forward, and I'm going to say in about the next two years, what we're faced with is that I will be responsible for doing kind of a self-audit of our compliance with the trusted digital repositories ISO standard, which is ISO 16363. And that's going to be a multi-year project for me. And then June's project is going to be coming up with a migration plan, because we have not migrated any of our records yet.
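Before moving to questions, two of the ingestion steps described tonight lend themselves to a short sketch: the one-to-one matching between index rows and digital objects, and the MD5 checksum that gets recorded in the manifest as the fixity value. This is a minimal Python illustration, not the actual Autotod code; the function names and error wording are hypothetical.

```python
import hashlib

def validate_transfer(index_rows, object_names):
    """One-to-one matching: every index row needs a digital object
    with exactly the same file name, and vice versa."""
    rows = set(index_rows)
    objects = set(object_names)
    errors = []
    for name in sorted(rows - objects):
        errors.append(f"index row with no matching digital object: {name}")
    for name in sorted(objects - rows):
        errors.append(f"digital object with no matching index row: {name}")
    return errors  # an empty list means the transfer validates

def md5_checksum(data: bytes) -> str:
    """Fixity value recorded in the manifest at ingestion time; it can be
    re-computed later to prove the preserved file has not changed."""
    return hashlib.md5(data).hexdigest()

# The "My Cat" example from earlier: the spreadsheet says MyCat.tif,
# but the file on disk is "My Cat.tif", so the validator reports both sides.
print(validate_transfer(["MyCat.tif"], ["My Cat.tif"]))
```

Because the match must be exact, a single misplaced space produces two errors, one for the orphaned index row and one for the orphaned digital object, which is exactly the behavior described in the three-photos example.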
I am going to stop there because I want to leave at least 10 minutes for people to ask questions. That was my presentation in a nutshell. What else would you like to know? Edward, go ahead. Okay, let's see. Edward asks: if you were starting from scratch in 2016, would you do it the same way, and would you use a third-party repository like DSpace? No, I don't really want to use something like DSpace. I like the flexibility we have of doing everything in-house and looking at different models with open-source software. I'll tell you what I would have done differently: I would have hired an archivist at the beginning. It has been a real challenge working with a system that was architected without advice from archivists, so there are some things where our hands are tied. And I have to say, too, that June was hired in 2006, so the digital archives was two years old when she became the chief applications architect and four years old when I became the archivist. There are things that we would do differently, but because of the scope of our holdings, I would still do it from scratch. Hope that answered your question. You're welcome. Dr. Frank, I see that you mentioned that, yes. Let me explain what's happening. For the trusted digital repositories ISO, those of us who are familiar with it know that there's the checklist and the criteria. So we have the framework for it, but nobody, as far as I understand from Phil, has yet written any extensive literature on the implementation of a trusted digital repository. So this book is theory and practice: it will have different chapters on various aspects of the trusted digital repository, and each chapter will have a theory section and then two or three case studies. When I was asked to write for the book, I was given my choice of which chapter I wanted to write for.
And I chose access, simply because of the big challenges I had, and I won't give this away because I want you to read it. The huge challenges I had were working with these proprietary systems for the county auditors, because there are no recording standards. Every single county gets to do it the way they want; however, they all have to send their records to us. So that was quite a challenge. I'll be looking forward to seeing that. I've got five minutes to take more questions. Yes, I understand it's going to be published in June; that's Phil's plan. Any other questions? I believe the name is trusted digital repository. Okay. Yes, Stephanie, no recording standards right now. I think Edward must have a question again; I see it in the chat box. He asks, will we be moving to the RDA model from DACS eventually? Not in the foreseeable future. I haven't even thought about it, and I will say not on my watch, just because I have the next five or more years mapped out of what I need to do, and that is not high on my priority list. You have to remember, too, that searchability of the record is a great enhancement to any descriptive metadata we put up there. The descriptive metadata certainly serves as an intellectual access point for the record, but once you get in and find what you want, you can also do a lot of keyword searching, because for many of our records we do OCR as a post-ingestion process. All right. Anything else? You're welcome. Does your five-year plan accommodate upgrades to hardware or software? Oh, yes, we're constantly upgrading the hardware; I think the last hardware upgrade was a million and a half dollars. And I should say that our day-to-day operations, including any hardware, software, and licensing, are funded by a $1 recording fee in the counties' recording offices. So for any type of document that you record, you're also going to pay an extra $1 fee to support the digital archives.
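The access workflow she describes, where descriptive metadata gets you to a record and OCR'd text then supports keyword searching inside it, can be sketched as a simple inverted index over OCR output. This is a hypothetical illustration of the general technique, not the Digital Archives' actual search system; the document IDs and text are invented.

```python
# Minimal sketch of keyword search over OCR'd record text using an
# inverted index. Illustrative only; the real system's search engine
# is not described in the talk.
import re
from collections import defaultdict


def build_index(docs):
    """Map each lowercase word to the set of document ids containing it.

    `docs` is a dict of {doc_id: ocr_text}.
    """
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(doc_id)
    return index


def keyword_search(index, query):
    """Return ids of documents containing every word in `query`."""
    words = re.findall(r"[a-z0-9]+", query.lower())
    if not words:
        return set()
    result = index.get(words[0], set()).copy()
    for w in words[1:]:
        result &= index.get(w, set())  # intersect: all words must match
    return result
```

The point of the design is the one she makes: even sparse descriptive metadata is enough to locate a record set, because full-text keyword search over the OCR layer handles discovery within it.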
Is funding stable? Thanks, Mary, that's a great question. One of the reasons our staff is so small compared to the amount of work we have to do is that in 2008, when the real estate industry took a huge nosedive, the number of records recorded in the county recording offices really went down, and that drastically reduced our funding. The digital archives was built with the intent, I think, of starting with ten full-time employees. That's just starting, you know, preserving three million records with ten employees. Well, we just barely have that now, and in the foreseeable future I don't think we're going to be able to expand our staff, which is why I hire a lot of my graduate students. I'm also the manager of the regional branch here, the paper archive, so I use people down there; I use whoever I can in the building, and we have to really stretch that funding. It's not that stable. Okay, we've got a couple more minutes left. I just put my email address there if anybody wants to contact me. I'm very supportive of this program. I was in the first MARA cohort, which was quite a while ago. I did not finish the program, as I was explaining to somebody earlier; I have a master's in public history with a concentration in archives. When I decided I wanted to become an electronic records archivist, I had heard about the MARA program, which was going to be launched in a year. So I started the San Jose State MLIS program and did that for a year while I was waiting for MARA, and then I was in the first cohort of MARA, which I think was in 2008. Is that right, Dr. Frank? Yes, it is: 2008. I believe the program has changed somewhat since the first cohort; we were really guinea pigs. It was a very rigorous program: three solid years, even during the summer, no time off. And then during the third year, my father became ill and I had to take care of him in hospice.
And so I didn't finish the program, which I wasn't too worried about; I learned what I needed to learn, and I already had a master's degree. But I'm still very supportive of it. I offer internships for MARA students, and I will answer any of your questions; just send me an email. Thank you, Debbie. I know that your time is short and you're going to have to leave us, so I'm going to officially turn off the recording.