 Hello, everyone, and welcome to today's brown bag talk. I'd like to start with a land acknowledgement. The Archaeological Research Facility is located in Huichin, the ancestral and unceded territory of Chochenyo-speaking Ohlone people, successors of the historic and sovereign Verona Band of Alameda County. We acknowledge that this land remains of great importance to the Ohlone people, and that the ARF community inherits a history of archeological scholarship that has disturbed Ohlone ancestors and erased living Ohlone people from the present and future of this land. It is therefore our collective responsibility to critically transform our archeological inheritance in support of Ohlone sovereignty and to hold the University of California accountable to the needs of all American Indian and Indigenous peoples. I'd like to point out that next Wednesday, we have the pleasure of hearing from Nico Tripcevich and Chris Hoffman, who will give a talk, "Small Objects and Big Screens: Exploring Artifact Collections and Sites on the CAVE 3D Visualization System," in the ARF atrium. So with that, welcome everyone to today's brown bag talk, "Data Conversations: Good Practices, Ethics, and Outreach," with Dr. Leigh Lieberman. Dr. Lieberman is a research associate of the ARF and is director of strategic partnerships at the Alexandria Archive Institute and Open Context, where she's building institutional partnerships with libraries, museums, and other cultural heritage organizations in order to develop sustainable initiatives. She's also an archeologist whose research explores how and why artifacts and spaces were recycled and repurposed, especially in the ancient Roman world. She's taught extensively at the university, secondary, and primary levels in both the United States and Italy. 
From 2018 to 2020, she directed the development of the Digital Humanities Initiative at the Claremont Colleges, a Mellon-funded program that aims to create a robust curriculum in digital methods for faculty, staff, and students across the seven-institution academic consortium. She currently serves as the manager of data and information resources for the Pompeii Archaeological Research Project: Porta Stabia, where she's leading the publication of the artifact assemblages from the excavation; as the head of materials for the Tharros Archaeological Research Project; and as the data management director for the American Excavations at Morgantina: Contrada Agnese Project. So today, Leigh is going to give us a talk to introduce the Data Conversations workshop series, which will explore various topics related to how data impacts our research, teaching, and professional development. But before turning the mic over to Leigh, I want to take a couple of minutes just to give a brief background to her talk. Let me just get my slides started. So this work is supported by a grant from the Andrew W. Mellon Foundation as a match to an NEH challenge grant that the Alexandria Archive Institute received in 2019. The AAI, which I direct, is a local nonprofit organization working to improve research and teaching through innovative uses of the web, specifically with Open Context, an open-access data publishing service for archaeology. The Mellon and NEH funding supports a new data literacy program aimed at providing scaffolding to guide professionals, students, and lifelong learners in thoughtful engagement with research data. It also supports the exploration of sustainable initiatives around digital data management, public humanities, and data literacy, with an eye to sustainability, which is the work that Leigh is involved in. 
So this is just an overview of the topics that Leigh's going to cover in today's talk, but I wanted to take a brief opportunity to do some introductions and give, as I said, a little bit of background. I'd like to introduce you to the rest of the team that we're representing today. If you're able to attend subsequent programming in the Data Conversations workshop series, we're hoping that some of them might be able to make an appearance, especially concerning the current work in our data literacy program, which is led by Megan and Paulina here. And this is the launch of the Digital Data Stories project. If you're immediately curious about this project and want to find out more about it, definitely check out the blog on our website for more information. So why be data literate? Through our work as archeologists, we produce, clean, analyze, and translate data. This is well represented by the types of activities that most of us perform, from artifact analysis to aerial survey to excavation, et cetera. It's also well represented in the types of records that we create, from notebook narratives to lists of finds, from photographs of specific artifacts to illustrations of entire typologies. It follows, with so many opportunities for data creation, that we should be data literate, or at least well-versed in the practices that involve data. As many of you may know from recent headlines, and probably from signing petitions and taking other action, many archeology programs, and the humanities more broadly, are struggling with losses of institutional support. This is well illustrated by news headlines like these, which seem to be a more frequent occurrence of late. People who control flows of money are suspicious of the humanities and social sciences, and they push a narrative that we're irrelevant to a high-tech society and economy, but we think that's exactly wrong. 
A high-tech, data-driven society absolutely needs perspectives from the humanities and the social sciences. The humanities and social sciences teach appreciation for diversity, complexity, and agency, as well as skepticism about biases and ideological blind spots. That kind of awareness needs to be brought to data. So our data literacy program seeks to demonstrate the value of humanities and social sciences perspectives in interpreting data. That's what we mean by data literacy. It's not just about technical skills, but thinking about how to use those technical skills appropriately and with much greater self- and contextual awareness. Another aspect of this work is just very practical. We're trying to build capacity in archeology to create and reuse data more effectively. There are so many horror stories, costs, and missed opportunities around data. So data management, sharing, and reuse all need greater professionalism. A bottleneck that many projects currently face is relying on just one data manager position, which is often filled by a student who eventually moves on, and then everything falls apart. We all collect data, so we should all have some skills at working with data, rather than outsourcing the role to a single individual. Furthermore, broader skills-based learning outcomes will enable us to attract students that may not otherwise share our passion for material culture or the ancient world. This is especially important because many of our students are not gonna become archeologists in the end. In fact, it's arguably a good thing to have more back-and-forth circulation between the academy and other career sectors. We want our academic, humanities, and social science perspectives to have a wider impact in other institutional settings. We should also welcome people to come back to academic roles after other types of work experience. Data literacy thus promises to help us in higher ed become less insular. 
So with that, I will turn the mic over to Leigh Lieberman to talk about the what and the why of data literacy. Great, thank you, Sarah. While I get my slides set up, I just wanna note I am zooming in today from my classroom on the Pomona College campus, and I wanna acknowledge the Tongva as the traditional caretakers of the land that I am on right now. As someone who resides on unceded Indigenous land, I recognize that I have benefited and continue to benefit from the use and occupation of this land, and I honor not only the history of this land, but also the Tongva people who are thriving members of our community today. So Sarah provided a fantastic introduction to why we think this is so important and why we're embarking on this data literacy campaign at Open Context. I wanna build on that by starting off with just a few definitions. Everyone likes some definitions to ground the work that we're gonna do. The first one is just a definition of data. It's probably a word we throw around all the time, but we might not know exactly what it means. We consider data an object, variable, or piece of information that has the perceived capacity to be collected, stored, and identified. Data literacy, it follows, is the ability to engage constructively through and with data. In other words, data literacy refers to the ability to read, to work with, to analyze, and to argue with data. Data itself can be structured or unstructured. Data can be big or small. Data can be quantitative or qualitative. And one of the aspects of data that we're going to keep coming back to over the next 45 minutes or so is its broader social context, and how data without collaboration, data without a social understanding, is really missing a lot of its point. So Sarah spoke earlier about what's at stake for us as archeologists with respect to data literacy. 
And I just wanna build on that a little bit to stress that data literacy really isn't an all-or-nothing badge that you're going to earn at some point in your career. Ultimately, I think one of the most important takeaways is that data literacy is not about learning specific tools or techniques, but rather about developing comfort around data and a familiarity with good data practices. Data literacy, to that end, does require practice. A great metaphor that one of our colleagues, Paulina, shared with us last week is that learning the principles of data literacy is like learning the alphabet: you're still going to need to practice writing words, and you're still going to need to practice writing sentences. So it's all about that practical and practice element. Also, I think it's important to remember that you don't need to know how to do everything, but I think you owe it to yourself to start to familiarize yourself with the range of what is possible around data, the kinds of vocabularies and tools and systems that can help you learn how to develop these skills. That way, when the what and the how of data literacy change, the why and the principles that ground us can stay consistent for us. Turning from the what and the why, then, I want to look at open and restricted data. Open data refers to research data that are freely available on the internet, permitting any user to download, copy, analyze, reprocess, or reuse them without any barriers other than those inseparable from gaining access to the web itself. The principles behind this definition of open data have been and are currently being defined in the context of big international conversations about open access to scholarly resources more generally. And we're tapping into those conversations when we talk about the importance of data literacy for scholars in our profession. It's worth reflecting on some of the positive outcomes that can result from our participation in an open data network. 
Open data promotes greater visibility of your data, thereby encouraging reuse and encouraging citation. We all love it when our articles get cited and when our chapters make their way onto a colleague's syllabus at the start of the semester. So why wouldn't we want the same kind of reach for our data? It makes a lot of sense. Moreover, if you have access to other people's data, you've got access to models for your own data sets, which can promote better data creation. Open data is also going to democratize knowledge, as your data now have the potential to be shared with audiences that would otherwise run into the paywalls that many of us have come up against. And finally, open data can promote collaboration between groups that might not otherwise know about each other's work. Think about all of the different disciplinary foci that are represented in a place like the ARF. All too often we get stuck in our own disciplinary silos, and just having the ability to access data and to really explore what other people are publishing and putting out there promotes collaboration in ways that we might not have even been able to think through yet. However, all of these potential outcomes require training; they require an understanding of the ethical landscape around data practices, data curation, and access to data. An organization like Open Context provides services for publishing data that can and should be open, but we readily acknowledge that open isn't a possibility for all data. In some of the upcoming workshops in the Data Conversations series, we'll dive into this more fully, examining issues of context so that we can make sure that data are protected when necessary and shared responsibly when appropriate. So I'm really looking forward to those conversations over the next couple of months. Linked data adds another layer to our discussion of this open data network. 
Linked data is a set of design principles that promote interoperability by using web identifiers, or URIs, to refer to shared concepts. Linked data principles encourage you to think about how your specific data set relates to a wider world of data. It's not just the technical infrastructure that supports this kind of work; rather, it's a broader methodological framework that really brings you into conversation with other people doing work in this area. If you think about traditional scholarly publishing, you would never quote or reference another source without proper citation, and linked data relies on those same principles. So when standard vocabularies have been published, linking to those standards, like some of the ones represented here on the screen, in your own data set not only saves you the time and effort of reinventing the wheel (i.e., you don't have to come up with the definitions yourself), but it also allows you to participate in a wider network of data-driven scholarship. And then we turn to the question of privacy. Should all data be open? We've seen a number of articles that encourage us to think about this with respect to the objects and materials themselves, but this is a topic that we may be less familiar with in the data landscape. One example that I like to highlight comes out of my own dissertation work, where I had a lot of trouble finding and then thinking about how to share details about the locations of tombs at a site that I was studying, Lentini in Sicily. These data have not been regularly shared previously, and still aren't in many respects, because of the existence of looters in the area. Scholars and community members have decided that it's more important to protect the integrity of the sites than to let other scholars know precisely where they might find these remains. 
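To make the linked-data idea concrete, here is a minimal sketch of a single record expressed as JSON-LD, the web-standard serialization for linked data. The record URI and the vocabulary concept number below are hypothetical placeholders (the Getty AAT is a real published vocabulary, but the specific concept ID here is invented for illustration):

```python
import json

# A minimal linked-data record for one find, as a JSON-LD document.
# Instead of redefining a concept like "amphora" ourselves, the record
# points at a term in a published vocabulary via its URI.
record = {
    "@context": {
        "label": "http://www.w3.org/2000/01/rdf-schema#label",
        "type": "@type",
    },
    # Local record URI: a placeholder, not a real data set.
    "@id": "https://example.org/finds/pottery-211",
    "label": "Pottery 211, rim sherd",
    # Placeholder Getty AAT concept URI standing in for a shared type.
    "type": "http://vocab.getty.edu/aat/300000000",
}

print(json.dumps(record, indent=2))
```

The point of the sketch is the citation-like move: the `type` field doesn't contain a locally invented definition, it links out to a shared identifier that other data sets can also reference.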
It's really important to have conversations around these concerns with your local community and with your collaborators, to learn about the local and international standards that govern these kinds of questions, and then to establish policies for your own research and your own field projects. Knowing about these things is just the start. And again, these are conversations that we're much more used to having around the objects themselves and the sites themselves, but with respect to the data, this opens up a whole different set of concerns that we should start to feel comfortable talking about with our colleagues. So with that brief background about open data and restricted data, I wanna turn to a discussion of our data collection methods as archeologists. Remember, data and data literacy don't necessarily rely on technology, and they don't necessarily rely on advanced computational tools. Data can be collected using analog forms that you may or may not digitize later. And likewise, data literacy, as we've said, is more about a general attitude and familiarity with good data practices than expertise in a specific tool or a specific method. So as archeologists, we might come to data through fieldwork. In the field, you and your team determine the research questions, and you determine the means by which you're collecting that data, or the data architecture for your project. That data then consists of observations that are recorded by you and members of your team. And necessarily, that data contains your own biases and evidence of your own priorities and your own interpretations. Biases aren't necessarily a bad thing, but they do need to be explicitly acknowledged. And that's very important when we're trying to think of our general approach to how we consider data trustworthy and what we might look for in a trustworthy data set. Outside the field, you might also come to data through library research. 
Many of us have probably worked on what we consider legacy projects. This term often refers to older projects where the data were collected in a way that might not meet today's best practices, but data that we still nevertheless want to use and take advantage of for our own research. When we are using the resources that we find published in libraries, or in archives for that matter, we are applying our research questions and our data architecture to information that might not be structured in a way that is conducive to how we want to question it. Moreover, we're applying our own biases, our own priorities, and our own interpretations on top of those of our predecessors, on top of those that are already infused into the data set. And again, this isn't necessarily a bad thing, but it does require an understanding of all of these different layers in the ecosystem that we're creating and then using in our own research. Whether you are in the field or in the library, your work may require a certain amount of digitization. Digitization practices are governed by their own set of standards, usually managed by our colleagues and collaborators in libraries, archives, and visual resource collections. And this, I think, really highlights the fact that in our movement towards becoming data literate, we are not in this alone. It goes back to the idea that you don't have to be an expert in everything, but it helps to know what the possibilities are and what the range of resources out there is. And I think that working with your librarians and your archivists to make sure that the data you are digitizing are in the appropriate form and at the appropriate standard is an important step in this process. You may decide for any given project to use a comprehensive database to collect and organize your data. A database, in its most basic definition, is an organized collection of data. 
Databases will have a back end that is meant for the development of the structure, and databases will have a front end that is meant for input and for access to that structure by a user. Databases are not necessarily a requirement for any kind of work involving data, so you don't necessarily have to leave this talk today and go learn how to build one. But again, this is one of those areas where it helps to have models, and it helps to think about the ways in which you've accessed data in the past: things that you've liked about the experience, things that you haven't liked, and how you might want to improve it for yourself and for people that might use your data down the line. If you do decide to build a database and use that structure to organize the data you're collecting, there are some pros and cons to decisions that you will make throughout the process, especially regarding the type of platform that you use. One question that you wanna think about is cost. How expensive will it be for you to maintain a digital infrastructure over time, and who's going to pay for that cost? Because it can add up, and including these costs in initial grants is something that we usually forget to do. The data often end up being an afterthought when, in fact, they should be one of the first things we think about when applying for funds to start an excavation or a research project. There's also the question of transferability, or translatability: if the specific software you're using stops being supported, how are you gonna be able to access your data? This is really important too when we think about the idea of the front end and the back end. A lot of people will want to produce their own flashy website that communicates their research and their data to wider audiences, but all too often those flashy websites come and go, and the durability and strength of the project is really based on the data architecture and the stability of the platform on the back end. 
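The back-end/front-end split described above can be sketched in a few lines with SQLite, which ships with Python. The table and column names here are hypothetical, chosen only to illustrate the division of labor:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# "Back end": developing the structure that the data will live in.
conn.execute("""
    CREATE TABLE finds (
        find_id   TEXT PRIMARY KEY,   -- unique identifier for each record
        material  TEXT NOT NULL,      -- e.g. 'ceramic', 'bone'
        context   TEXT,               -- excavation context of the find
        recorder  TEXT                -- who recorded the observation
    )
""")
conn.execute(
    "INSERT INTO finds VALUES (?, ?, ?, ?)",
    ("PT-211", "ceramic", "Trench 3, SU 105", "A. Recorder"),
)

# "Front end": the kind of query a user-facing interface would run
# to input and access records without touching the structure itself.
rows = conn.execute(
    "SELECT find_id, material FROM finds WHERE material = ?", ("ceramic",)
).fetchall()
print(rows)  # [('PT-211', 'ceramic')]
```

Note that the schema (back end) and the queries (front end) are separable: a website or entry form can come and go, but the data architecture underneath is what gives the project its durability.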
And those are the questions that are, again, often treated a little short-sightedly in these initial starts. You've also got to consider the learning curve for the creator. If you yourself are gonna be building this for your own research project, a dissertation, a postdoc project, how hard is it gonna be for you to learn the ins and outs of using a specific platform? And then, if you're thinking about how these data are going to be accessed by users later, how easy is it going to be for them to understand the interface that you've created? So those are just a few of the considerations that come up when you're thinking about databases and about how to organize the data that you're collecting for any given project. But again, this isn't an all-or-nothing process. You don't need to leave this talk and learn how to build a database this afternoon. And in fact, the back-end image of that database there kind of ties my stomach in knots, so I wouldn't blame you if you didn't wanna do that this afternoon. But nevertheless, setting yourself up for success, especially during the initial phases of any data-driven project, by thinking through these important questions can really pave the way for the point where you do want to build a structure like this. And there are many tools out there to do it. So experimenting early in the process and finding models that you like and that you find accessible and easy to use is one step that's going to get you there. When you're working on any data-driven project, you might also be interested in figuring out how to access open data archives. Where do I find data related to my research that are clean and trustworthy? Data repositories and data publishers all have different strengths, and they cater to different types of formats. Some are discipline-specific, and some are not; some collect data from a whole range of disciplines. 
And it helps here to familiarize yourself with some of these resources so that you know where to look to find data for your research questions, and so that you can anticipate where you might want to deposit your data when you've completed your work. And remember, having access to data means that you have access to models that you can use to structure your data at the beginning of, or at any phase in, your project. So, do you see good practices that you would like to emulate? Are there any relevant data that you should be citing and referencing? Again, participating in this open data landscape requires citation. Are there vocabularies or typologies or recording practices that you should cite, even if that particular data set doesn't have records that are relevant to your specific topic? These are all questions that you can pursue just by tooling around in some of the repositories and publishers that are available. It can be a really eye-opening experience and might have you asking questions that you didn't even know you had. So from that conversation, let's dive a little bit into this idea of the background information around the data that we're creating. This will surely come up in our next workshop on data trustworthiness, but we're gonna chat for a little bit today about metadata and unique identifiers. Metadata we can think of as data about data. And we might imagine this as the tip of the iceberg that can give us access to the heart of the argument, the data below. There are three main types of metadata, if we wanna organize them based on the things they can tell us about the data: descriptive, administrative, and structural metadata. Descriptive metadata might answer questions such as: who created this data? What is this data about? When was this data created? And what is its unique identifier? 
Administrative metadata might answer the questions: who owns this data, who gives access to this data, and how can this data be used? And structural metadata might answer the questions: how is this data set organized? What version of the data is this? And what other resources do you need to interpret or access this data? Again, these are the sorts of questions that you'll begin to think about, and begin to suss out whether a data set answers, when you start to question the trustworthiness of a particular data set and when you start to design your own data sets to be trustworthy. It's also useful to think about traditional knowledge labels. Traditional knowledge labels are an important resource that can either supplement the way we think about metadata, under the definition that I provided, or actually serve as an alternative to metadata. Traditional knowledge labels identify and clarify community-specific rules and responsibilities regarding access and future use of traditional knowledge. And these were created in sustained partnership and testing with Indigenous communities across multiple countries. TK labels thus offer community-based alternatives to the metadata standards that you might see in traditional catalogs and traditional repositories. And it should be noted that TK labels are a relatively new thing, so this is something that we're still trying to figure out how to work with in the open data landscape that we are trying to participate in. Ultimately, TK labels acknowledge the fact that data cannot be created and managed solely from the outside, so to speak. And they allow us to really zero in on the wider social impact of, and the roles played by, research data. Overall, metadata and TK labels can inspire confidence, and this is what we're going to get into in more detail later in this workshop series. They can enhance discoverability, they can encourage reuse, and they can promote ethical standards. 
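The three metadata types described above can be sketched as a simple record. The field names here are illustrative, not drawn from a formal standard such as Dublin Core, and all of the values are made up:

```python
# A sketch of descriptive, administrative, and structural metadata
# for a hypothetical data set; every field and value is a placeholder.
metadata = {
    "descriptive": {
        "creator": "Example Field Project",       # who created this data
        "subject": "Ceramic finds, 2019 season",  # what the data is about
        "created": "2019-07-15",                  # when it was created
        "identifier": "dataset-2019-ceramics",    # its unique identifier
    },
    "administrative": {
        "rights_holder": "Example Field Project", # who owns the data
        "access": "open",                         # who gives/gets access
        "license": "CC-BY-4.0",                   # how it can be used
    },
    "structural": {
        "format": "CSV",                          # how the data set is organized
        "version": "1.2",                         # what version this is
        "requires": ["codebook.pdf"],             # other resources needed to interpret it
    },
}

print(sorted(metadata))  # ['administrative', 'descriptive', 'structural']
```

Checking a found data set against a skeleton like this is one quick way to gauge its trustworthiness: every empty field is a question you can't yet answer.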
And it's important to know, again, that you're not expected to become an expert in this. We've really got a lot on our plates thinking about data literacy, and metadata literacy is a whole different field on top of that. So, in collaboration with representatives from data repositories and from libraries, we get the idea that we're not in this alone. When we work within this broader community that includes experts in data management, and when we take advantage of those personnel resources, collaborating whenever possible, we really improve our practices around data. So this is just one more step along the path towards data literacy. Unique identifiers are another element that can really inspire confidence in a data set, in a variety of ways. Unique identifiers ensure the individuality of each record in the data set that you're creating or the data set that you are using. Think about this in the context of an archeological project, where you might have a ceramicist who is numbering sherds and a zooarchaeologist who is numbering bones. If they both have a numerical sequence of objects, that can get incredibly confusing, especially for individuals who are not directly affiliated with the project. So unique identifiers allow us all to stay in a system where we understand what the other is talking about. Unique identifiers come in the local variety, and this you can think of as a number that is internally defined and internally consistent. These are often human-readable, so that even as an outsider, I could come to this site and probably understand that pottery 211 refers to a particular vessel or a particular sherd. Universally unique identifiers are a step above this; they draw from standards that are published by the Open Software Foundation. These are typically not human-readable. 
I wouldn't wanna refer to a sherd by that long string of numbers every time I talk about it, but these are numbers that you more readily see on the back end of any database structure, keeping the back end organized but not necessarily meant to be read by users. As for persistent identifiers, here you've got an example of an archival resource key. These have two really important things about them. The first is that, in a technical sense, persistent identifiers facilitate interoperability: they give clues about the data and the concepts it may reference, and about recording schemas that may be shared between different data sets. So technically they are very important, but also, in a social sense, persistent identifiers are useful for citation, one of the goals in all of this, and they allow us to situate our own data within a wider context of related information, which is ultimately not only our professional but also our ethical responsibility as researchers. Let's use that, then, to think briefly about some of the ethical considerations that go into this discussion that we've had today. We've touched on some of these already, the big questions like: can I collect this data? Should I publish this data? Am I being collegial in how I'm citing others and how I'm using established community standards to organize my data? Data ethics is a field in and of itself, and again, while we're not required to become experts in this area, understanding and building up a familiarity with some of the principles that I'm going to highlight is really incredibly important for us on our journey towards data literacy. And we may already have some background in this area. As an archeologist, I was much more familiar with some of the conversations around what kinds of objects can and should be displayed in museums. And it makes sense that similar ethical principles may apply to the data concerning those objects as well. 
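The three flavors of identifier just described can be seen side by side in a few lines of Python. The local ID and the ARK below are made-up examples; the UUID is generated by the standard library:

```python
import uuid

# Local identifier: human-readable, internally defined and consistent
# within a single project ("pottery 211" is a hypothetical example).
local_id = "pottery-211"

# Universally unique identifier: generated to a published standard,
# effectively guaranteed never to collide with anyone else's IDs,
# but not meant to be read aloud by humans.
record_uuid = uuid.uuid4()
print(record_uuid)  # something like 1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed

# Persistent identifier: an Archival Resource Key (ARK) adds a
# resolvable, citable layer on top. This one is a made-up example,
# not a registered ARK.
ark = "ark:/12345/pottery-211"
```

A common pattern is to use all three at once: the UUID keeps the back end unambiguous, the local ID keeps conversation on site readable, and the persistent identifier makes the record citable by outsiders.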
So I wanna end with a discussion of just two sets of framing principles that can guide how we work with data. The first is the FAIR principles. FAIR is a stakeholder-driven initiative to make data findable, accessible, interoperable, and reusable. And these are great, wonderful rules to strive towards. But in many ways, this isn't enough. FAIR has rightly been criticized for focusing exclusively on increased data sharing while ignoring power dynamics and historical contexts. This creates tension, especially for Indigenous peoples, who are also asserting greater control over the application and use of Indigenous data and Indigenous knowledge for collective benefit. So in light of those criticisms, the CARE principles come out of this context. The CARE principles focus on collective benefit, authority to control, responsibility, and ethics guiding our work around data. And learning more about these is an aspect of any data literacy trajectory. But again, it's not all or nothing. Making small adjustments to your habits and your practices now, to make sure that they reflect these principles, will go a long way toward you fully mastering them down the road. And the key point here, again, is the social context of data. An understanding of context really needs to drive our decisions around making data open or keeping them restricted. And community archeology practices may be one of the best ways to gain that contextual understanding: to know the kinds of communities that you're working with and the standards that they put forth for how they handle not only the objects and the sites, but also the data around those things. Just like with my slide before, it doesn't all have to be digital. It can be digital and analog; we can still do good things if we're using paper. And the same thing is true about data practices. Open isn't inherently good, and it's not inherently in opposition to promoting Indigenous data sovereignty. 
It's gonna boil down to the context around all of this, thinking through the particulars of any situation and being open to asking these questions. This familiarity with the kind of work that is required to do data well and to become data literate is really the first step in our journey. So how can you move on from here? Some takeaways and some next steps. Some things that would be really easy for you to look into immediately after this talk ends today: explore an open data repository, and start to find data that are relevant to your research and models for your project. You can identify or establish a standard system of local identifiers for a data set that you've produced or are working with, right? Set that up for success down the road. And then brainstorm the kinds of metadata you would want to be sure to include with your published data set. This isn't a project that you need to undertake on your own. You will have help and collaborators working with you down the line, from libraries, from data repositories, from archives, but being familiar with the landscape around this kind of question is a good start. There are also a number of articles that have informed my thinking about this and that can definitely get you started on the way to becoming data literate. The scholarship out there might not always come from disciplines that we're necessarily familiar with. There's a lot of work being done on this by data scientists and data experts, and we as archeologists get to benefit from that work and from the theories behind it that we may be less familiar with. And one other thing you can do is make sure you attend some of our upcoming workshops in the Data Conversations series. We've got three planned for the next couple of months, about data quality control, data cleaning and dissemination, and then storytelling with archeological data. 
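Two of the takeaways above, establishing a standard system of local identifiers and brainstorming metadata for a published data set, can be sketched together in a short Python example. Everything here is an assumption for illustration: the `PRP-2024-CER-0001` naming pattern, the sample sherd descriptions, and the Dublin Core-style field names are hypothetical, not a scheme from the talk.

```python
def mint_local_id(project: str, year: int, category: str, n: int) -> str:
    """Build a human-readable, sortable local identifier.

    Hypothetical pattern: PROJECT-YEAR-CATEGORY-SEQUENCE,
    with the sequence zero-padded so records sort correctly.
    """
    return f"{project}-{year}-{category}-{n:04d}"

# Pair each record with basic descriptive metadata. The "dc:" fields
# follow Dublin Core naming conventions (an assumed choice here).
records = []
for i, desc in enumerate(["rim sherd", "base sherd", "handle"], start=1):
    records.append({
        "local_id": mint_local_id("PRP", 2024, "CER", i),
        "dc:title": desc,
        "dc:creator": "Example Excavation Team",
        "dc:type": "PhysicalObject",
    })

for r in records:
    print(r["local_id"], "-", r["dc:title"])
```

The design choice worth noting is the zero-padding: `0001` through `9999` sort the same way alphabetically and numerically, which keeps spreadsheets, file names, and database exports consistent down the road, exactly the kind of small habit the talk recommends setting up early.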
And I think that in each of those we're going to be able to dive a little deeper into some of the things that we introduced today, and I'm really excited to continue those conversations. Attendance at those is going to be free, but we do ask that you register, and if you have any questions you can email me and I will try to point you in the right direction. On behalf of myself and the colleagues who have helped me form these ideas on data literacy and the introductory thoughts we presented today, I just wanna thank you all for coming, and I think there's a little bit of time left for questions. Yeah, thank you so much, Lee, for that presentation. And yes, if anyone who's watching live wants to type a question into the chat, this is your chance. There's a little lag, so I'll leave a few moments for any questions to come in. But in the meantime, I had a couple of questions and comments that I wanted to add. I really appreciate your discussion of the TK labels and the FAIR and CARE principles. I think the points that you made about community archeology are really important, and the practices around it are exactly what's needed to implement the TK labels appropriately. At least in my own experience in this area, I feel like the FAIR principles are just something that's starting to be discussed in archeology, and people are trying to explore how we can address these, and the CARE principles are absolutely something that's needed along with the FAIR principles. I just haven't seen them discussed quite as much, and I haven't seen the application of TK labels thus far in these types of contexts that we work in. And from an Open Context perspective, we could totally implement TK labels, and we are actually in discussion with some colleagues now about starting to do that. 
And to see how they would work: it's metadata on top of additional metadata that is really needed to address these sorts of community archeology questions. I just really appreciate that you commented on that. Yeah, and I think it's so discipline-specific too. As someone who works in modern-day Sardinia, these are things that were totally new to me when I started planning this talk, and I think that does us a real disservice, because knowing about this full landscape and the range of work being done on these issues can only enhance our work. Even if they're ultimately not going to be applied in their entirety to the work that we're doing, having that background information is never a bad thing. Definitely, and I assume this is something that you're gonna dig more deeply into with the participants, maybe in the data cleaning and dissemination workshop, the second one? Yeah, I think especially the dissemination part is where it's gonna come in for sure, because that whole conversation about what can be made public and what should be shared openly, how we make those decisions, and how we inform ourselves about why those decisions are the way they are, that's gonna be really exciting to talk about. Yeah, so I would encourage listeners to attend that workshop on December 1st; you can sign up at the link that Lee shared at the end of her talk, and please weigh in on this discussion. We have a few questions in the YouTube chat. Nico is asking: in terms of linking to other archives, are there initiatives in other countries that Open Context connects with, and are these as successful as the US archives? Yeah, I can address that. That's a great question, I think. 
So, Open Context has an arrangement with the German Archeological Institute, the DAI, and we have mirror hosting from them. That actually hosts our data somewhere else, but it also speeds up access to the data because it's being hosted by another service. We also have archiving agreements: we work with the California Digital Library here at UC, but we also have some archiving services that are provided by Zenodo, which is based in Switzerland, and we also leverage the Internet Archive to do a lot of archiving of images and that kind of thing. So we do have lots of partnerships. Open Context itself is not a data repository; we're a data publishing service, so the archiving has to be provided by others. We use a network of partners to do different services for us because we are on the smaller side, and we leverage those partnerships to do different jobs as part of this work. Thank you. So it looks like there are no other questions coming in. Lee, did you have any final comments you wanted to make? Just that I'm excited to see people in these follow-up workshops. I think that they're gonna be really exciting, and they're gonna give us a chance, especially with a workshop that's tailored for the ARF community, to really dive into some of the topics that you all think are important. We can get into a lot of the particulars of how these apply to your own research and your own teaching, and I think it'll be a really fun experience. Yeah, I agree. And I think this is also a nice opportunity for people to come in and share their own data stories and experiences. Sometimes it's sort of therapeutic to come and talk about your struggles with data. With your peers, exactly. And I know as a zooarcheologist I have a lot of stories to tell about those challenges. Just one other point I wanted to make is that I appreciate your talking about that. 
Basically, you don't need to know how to do everything, but it's important to understand how the ecosystem works and who does what. So for instance, data management plans: it's not just a checkbox compliance thing that you write and want to be done with. It's something that really should be discussed among the team. Everyone should talk about their approach to data, what the project's approach to data is, who's providing what data, and what their needs are, and it should really be a collaborative process of development that is revisited frequently. And I feel that there's a lot of work that could be done to improve data management planning in that way. Yeah, it's the sort of collegiality that you would bring to other aspects of field work and other aspects of research, but I feel like people get so scared away by data and the technical background that it sometimes assumes. To get over that hump, you realize that you don't have to do everything and you don't need to know everything, but you have to be open and start developing this familiarity through practice and through time. That's the first step in the process, I think. Sorry, real quick, there's one more question coming in, asking about locations for where things are archived. I think that must be referring to the new workshops, where you discuss options for archiving your data when you get to that point in your research process where you're ready to disseminate. Is that gonna be something that will be covered? Yeah, I think that will come up more fully in the conversation about dissemination and what that entails. But in the meantime, something that the question asker and others can do is go back to the slide in the recording that showed those different data publishers and data repositories, and explore those to see what's best. 
A lot of institutions will have not only their own scholarship repository but also their own data repository, which could be a very easy option for people who have that kind of access to those resources. But that may not be the right way to get your data out to the audiences you want to share it with. So looking at some of the discipline-specific repositories, or at one of the bigger institutions like Zenodo, could be really informative as you think through what you might wanna do with your own data. Okay, well, thank you very much, Lee. We'll look forward to the follow-up workshops. Thanks for coming to join us today. Great, thank you, Sarah. Okay, bye-bye.