Well, thank you, Allie, for the invitation to come and speak to you all here today. I will try to get through this; I've never spoken for as short as ten minutes in my life, so let me move right ahead. Research data management is obviously something everyone here at BIDS is very familiar with, and it is a very large and complex task. As researchers, you face an increasing number of new requirements: not only do you have to carry out your primary research activities, but there is a host of new obligations coming from funders, from publishers, from our academic institutions, and from the evolving norms of scholarly best practice, all involving almost administrative kinds of activity set apart from your primary research. What makes this even more difficult, I think, is that this whole host of new activities should not be seen as single-point-in-time tasks. Instead, they properly have to be considered across the entire life cycle of research data management, and that is something that falls under our purview at the CDL and the UC Curation Center. For those who may not be familiar with us, the California Digital Library is a centralized service unit, administratively attached to the office of the system-wide president, President Napolitano. And although we still have "library" in our name, we consider our stakeholder community to be the entire UC community, not just the libraries, although of course we have very strong relationships with the libraries. At the CDL, the UC Curation Center is the unit I'm affiliated with, along with my colleague Stephanie Simms. This is the unit that deals with solutions surrounding the long-term viability and utility of managed digital assets of all sorts, certainly including research data. As such, we are currently deploying a suite of services that span the full research life cycle.
We don't quite cover each of those eight areas yet, but as you'll see, we actually have services associated with the majority of these activities, which I'll go through very quickly, much too quickly, I'm afraid. To begin with, we think the foundational activity in all of this is comprehensive planning. This is of paramount importance because it is always better to be proactive rather than reactive with respect to the management of any type of intellectual asset, and because it is much better that important decisions are made explicitly and deliberately rather than tacitly or in an ad hoc fashion. To that end, we have a service called the DMPTool, the data management planning tool, which is basically an online wizard that allows you to construct a comprehensive data management plan. In particular, it is configured with templates for, I think, 29 public and private funding agencies, all of whom have slightly different requirements for what a data management plan should contain. Beyond that up-front planning, another foundational data management activity is collection, that is, moving a particular digital asset into a managed environment. This is important because while collected content may still be susceptible to some future risk or failure, content that is never collected at all is almost certain to be. So the most important thing is to be able to transfer your data into an appropriate environment. For that to happen, the mechanism for that transfer should be as easy and as flexible as possible, with eligibility as open as possible, and with as many of the messy technical details as possible hidden from view.
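One example of a messy technical detail that a good transfer mechanism hides from view is fixity checking: computing a checksum for each file so the receiving repository can verify that the bytes arrived intact. The sketch below is purely illustrative Python, not Dash's or Merritt's actual ingest code; the function name and manifest shape are assumptions for the example.

```python
import hashlib
from pathlib import Path

def fixity_manifest(paths):
    """Return {filename: SHA-256 hex digest} for each file, so the
    receiving repository can confirm the transfer arrived intact.
    (Illustrative helper, not a real Dash/Merritt API.)"""
    manifest = {}
    for p in paths:
        h = hashlib.sha256()
        # Read in chunks so arbitrarily large datasets fit in memory.
        with open(p, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        manifest[Path(p).name] = h.hexdigest()
    return manifest
```

A repository would recompute the same digests after ingest and compare them against the manifest; any mismatch signals a corrupted or incomplete transfer.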
Our platform for doing this is called Dash, and although it gives the appearance of being a full-fledged repository in itself, it actually isn't. It is really just a set of user-facing interfaces that have been optimized for use by individual scholars and researchers such as yourselves. Currently, it sits on top of our Merritt curation repository, which I'll talk about in a moment, but we have work underway that will make the Dash platform applicable to any standards-compliant repository. Once things are collected, it is very important that they get associated with scientifically meaningful description, or metadata, as well as some form of persistent identifier that can unambiguously identify a dataset and can be used for its retrieval. Dash provides both of these functions. For scientific description, we offer the opportunity to supply description in terms of the DataCite metadata schema, a fairly general schema for describing scientific assets of all sorts, covering textual description as well as, for the large body of materials for which it is appropriate, geospatial description. Dash will also automatically assign a DOI to every dataset transferred through it; the DOI is the gold standard among identifiers for purposes of long-term identification and retrieval. There are many options now available for preservation, or what might be termed the long-term hosting of digital materials, but in making a selection, we think it is really important that you understand that preservation requires both a technical and an organizational answer, and that you select your organization very carefully.
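To make the description step concrete, here is a minimal sketch of a dataset record using field names from the DataCite metadata schema (titles, creators, publisher, publication year, and an optional geospatial bounding box). The helper function itself is hypothetical, not Dash's actual submission API; the real schema has many more optional properties.

```python
def datacite_record(title, creators, publisher, year,
                    resource_type="Dataset", geo_box=None):
    """Build a minimal DataCite-style metadata record.
    geo_box, if given, is (west, east, south, north) in degrees.
    (Hypothetical helper for illustration only.)"""
    record = {
        "titles": [{"title": title}],
        "creators": [{"name": c} for c in creators],
        "publisher": publisher,
        "publicationYear": year,
        "types": {"resourceTypeGeneral": resource_type},
    }
    if geo_box is not None:
        west, east, south, north = geo_box
        # A bounding box enables the geospatial search described below.
        record["geoLocations"] = [{"geoLocationBox": {
            "westBoundLongitude": west, "eastBoundLongitude": east,
            "southBoundLatitude": south, "northBoundLatitude": north,
        }}]
    return record
```

A record like this, once registered alongside a DOI, is what makes a dataset unambiguously identifiable and retrievable over the long term.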
We think it is very important that we give you tools that allow you to maintain control over your research outputs, rather than just handing them over to other, possibly commercial, interests or other types of organizations that don't necessarily share our mission with regard to long-term stewardship and open-data principles. So, as mentioned, our main curation repository, which provides functions for both preservation and access of digital materials of all sorts, is called Merritt. It has been in operation now for about eight or nine years. We are closing in on about 40 terabytes of materials, which sounds large but is actually still fairly modest; however, if you look at the growth curves, they are ominously exponential. Merritt is the repository that sits underneath the Dash platform, and it also functions as a member node on the NSF-funded DataONE data grid network for additional replication of materials. For purposes of discovery, you can use Merritt's own built-in mechanisms, but we think it is actually simpler to use the Dash interfaces, which again have been optimized for individual use. Dash offers a very comprehensive set of faceted search and browse functions; you can do both textual search and geospatial search if the descriptive metadata includes those types of data elements. Now, once data is retrieved and reused, it becomes very important for data owners to get some sense of what impact their data is actually having. With that in mind, we are collaborating with the DataONE network as well as PLOS, the Public Library of Science, on a research project called Making Data Count, which is trying to develop what we are calling a data-level metrics platform. Essentially, we want to be able to quantify the use and impact of data in a way that is very similar to what we have traditionally done with the scholarly literature.
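The core idea behind data-level metrics can be sketched as a simple tally of usage events per dataset identifier, with views and downloads counted separately, much as article-level metrics count reads and citations. This hypothetical helper only shows the shape of the computation; the actual Making Data Count platform aggregates real repository usage logs.

```python
from collections import Counter

def data_level_metrics(events):
    """events: iterable of (doi, action) pairs, where action is
    'view' or 'download'. Returns per-dataset usage tallies.
    (Hypothetical sketch, not the Making Data Count implementation.)"""
    views, downloads = Counter(), Counter()
    for doi, action in events:
        if action == "view":
            views[doi] += 1
        elif action == "download":
            downloads[doi] += 1
    # A missing key in a Counter reads as zero, so datasets with only
    # one kind of event still get a complete tally.
    return {doi: {"views": views[doi], "downloads": downloads[doi]}
            for doi in set(views) | set(downloads)}
```

In practice such tallies would also be de-duplicated and filtered for robot traffic before being reported, but even this simple aggregation gives a data owner a first-order sense of reuse.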
All of this is a big job, a hard job, and it is not something we try to do by ourselves. In fact, we always look for partners and are always open to interesting collaborations. We certainly have a very long-standing relationship here on the Berkeley campus with both the Berkeley Library and central IT, particularly its Research IT initiatives, such as the new joint RDM program, which I understand will be rolling out some new services of its own next Monday, I believe. We have also been involved with the Research Data Alliance from its inception. We have been an original partner on the NSF-funded DataONE project, in both its initial five-year phase and its current five-year phase. And we were a founding member of the DataCite Consortium, which is a provider of DOIs for datasets. That was a real whirlwind tour, so I will just leave you with a few key thoughts. One is, again, to emphasize the notion that research data management will only be efficient and effective if it is understood in the context of the larger scholarly life cycle. We think it is important, in thinking about these things, to look on all of your research outputs, and certainly your research data, as comprising a very important part of your intellectual legacy. Hopefully that will become an enduring legacy, which means you need to look for comprehensive solutions. Luckily, there are some, of which we have given you a real glimpse here, and there are certainly many others. As well as solutions, there are also many partners, and enabling these kinds of partnerships is, I think, a large part of what BIDS was created to do. We at the CDL and the UC Curation Center certainly stand ready to work with you in partnership on these data management activities. Thank you.