Hi, everybody. I think we'll get started now, if you don't mind. I know it's right after lunch, so people will trickle in, but we only have an hour and there's a lot to cover. So with that: today we're going to be presenting a project called BIBFLOW and its results to date. Your presenters are me — oops, sorry, I've got two computers here — I'm the University Librarian at the University of California, Davis, and the PI on the project. Joining me today are Carl Stahmer, who is the Director of Digital Scholarship at UC Davis and the project manager, a long-time digital humanist and a long-time advocate for linked data and the potential it brings to digital scholarship. And finally, Eric Miller is the co-founder and president of Zepheira, our development partner on this project, who has been working with the Library of Congress and other organizations on BIBFRAME for quite a few years now. The three of us are going to be presenting this project in basically three parts. Eric is going to talk a little bit about BIBFRAME and the potential of linked data for this community; then I'm going to talk a little bit about the drivers for this particular project and what we're trying to accomplish with it; and then Carl will do the majority of the presenting, talking specifically about what we're doing in the project, what we've found to date, and demonstrating some of the results so far. So with that, I'm going to turn things over to Eric — I think your slides are queued up, so just take it away.

Got it. Yeah — you have the... I've got it? You've got it. Magic. Good morning, good afternoon, good evening, everyone — not sure where everyone's coming in from, so it's always nice to cover the bases. I'm going to talk a little bit about linked data and BIBFRAME, but I'm going to start by talking a little bit about our perspective. Many of us are looking at this from the Zepheira perspective.
We've been fortunate enough to be part of a lot of the low-level underlying standards and technology that are part of the web. A good chunk of all the poorly named acronyms that underpin the web we've either been directly responsible for or have had a hand in — including the naming, sorry. We've also been very fortunate to work with the Library of Congress, OCLC, and other national libraries on the strategy and design of linked data implementations: the underlying vocabularies, large-scale implementation projects, large-scale deployment projects, and really ground-up thought exercises and designs related to next-generation libraries. So we're looking at this from a lot of different perspectives: the technology, the social and cultural dynamics, and the market opportunities.

Some of the shared observations from this perspective: a library is far more than its underlying collection. Linked data is "linked" first and "data" second — I'll talk a little bit about what that means going forward. The web of data is here now, being used very successfully for a variety of commercial, collaborative, and sharing efforts outside the library community, and libraries can dramatically participate in this with just a couple of key changes.

I like to roll this back when we start thinking about linked data, BIBFRAME, and libraries. Chuck Gibson, the director and CEO of the Worthington Public Library, has characterized the problem of libraries and the web in general very well. In short: when my community searches the web for something that we have, we'd better show up as an option. This is an issue that is challenging public libraries in particular, but libraries in general: when people go and search the web for stuff, whatever that might be, we literally just don't show up as an option. In part this is because we're not speaking in the way the web understands. We
have a tremendous amount of valuable cultural information — fantastic information — but we're encoding and sharing it locked behind legacy, closed technologies. From our perspective, this is what BIBFRAME is addressing head-on. The Library of Congress several years ago launched an initiative around a new bibliographic framework, building on the foundation of MARC but embracing the fabric of the web. BIBFRAME is that initiative. From our perspective — the web's interpretation of that perspective — it's a way of defining controlled web points for more effectively sharing bibliographic information, collaborating, and navigating among it. And it's designed, very intentionally, to enable simple, replicable patterns that can allow us to describe a wide range of assets: make the simple things simple and the complex things possible. From a vocabulary and modeling perspective, there are a lot of atomic, replicable patterns that can be used in a lot of different ways to describe very traditional assets, very complicated assets, and everything in between.

If we look at what this might further enable in the particular context of cultural heritage — museums, archives — we have a tremendous amount of descriptive information and descriptive standards that we, as various communities, have come together and used to arrange our assets. Part of what we're looking at in linked data, and in particular BIBFRAME, is a way of taking that and projecting it into a form the web understands, and recognizing this very important continuum between descriptive standards for organizing assets and discovery standards for using the web as a platform to drive people back into the assets that we have. There's a lot of really important work happening at the discovery layer: the search engines around schema.org, Facebook
re-architecting around Open Graph, and in fact lots of different initiatives really about using the web to help people find what they're looking for. Part of what we see as an opportunity here is using BIBFRAME as a way of projecting our valuable assets into more effective discovery layers — the ones the web understands now — but also, going forward in this very exciting, dynamic space, allowing us to be well positioned to project into whatever comes next.

I want to be very concrete about this. This is a way in which we can take our existing MARC records, model them as BIBFRAME, and use that as a way of seeing how the data starts to connect together. This is a small collection of assets that the University of California, Davis and George Washington University were using to experiment around Jane Austen: taking 70 MARC records and materializing the people, the places, the concepts — all the different kinds of things that are inside of a MARC record — in a way that allows us to link them together. We can navigate in on the work and start to see what we might traditionally think about as the work. Taking just the top one — again, this is looking at this not as a new discovery system, but as the raw data behind it — we can start to extract the creators, the contributors, the genres, the subjects, the fact that this particular work is part of a collection. We can choose any one of these things. In particular, navigating on the person Jane Austen, we can see all the things that she created and all the areas in which she's a focus — which means the areas in which she appears in the context of subjects. Take one of those subjects, one of those concepts, see how it connects to the various authority services, and keep pivoting around how people and places and concepts and works interconnect. This is a simple way of
taking our existing data and exposing it to the web. And then there are ways in which applications can consume that. So what you were looking at wasn't necessarily a human interface to MARC data, but a machine interface. Part of what we do is very similar on the web: making the things that humans understand, but with a strip behind the system that machines pick up on — being able to take that machine-readable data and start to do interesting things with it, such as populating the things you see on the right-hand side of consumer web search engines. The Jane Austen panel on the right-hand side there is generated not from a single web page, but from aggregating data from lots of different trustworthy, credible sources to create a comprehensive overview. Part of what we're doing in linked data and BIBFRAME is surfacing our credible assets in the library community so that other applications can take advantage of them, merge them, and drive more people back to the library, where that credible information is available.

So, in the context of those particular exercises — and that very quick overview of the potential of linked data, BIBFRAME, and libraries — rewinding back to the specific observations again: a library is more than its collection. Linked data is about linking first and the data second; being able to connect things together is how the web works. The value of anything on the web is proportionate to the number of things that link to it, not link from it. We have a tremendous amount of linkable data in our libraries — we're just not exposing it in the way the web understands. But MARC via BIBFRAME, and the work the Library of Congress is doing here, really allows us to project into the web a really powerful library substrate of credible information that the web wants to consume and that our patrons are using as a way of finding things. It's incredibly valuable outside of just our
catalog. We have a tremendous amount of local information that, exposed in these global standards, provides new control points we can start connecting to. One of the last bits worth mentioning here is that not all of our library data needs to be linked. There's a lot of discussion about linked data and its potential, but in the context of libraries and library systems, trying to understand which critical parts of our assets can be used more effectively outside of our libraries, or more effectively across system boundaries, and which can in fact stay local to specific applications, is in part what we're trying to explore in BIBFRAME. So in that context, I'm going to hand it back over to MacKenzie to talk about some of the drivers and goals behind this project, and how we've taken these standards and general technologies and started exploring, in a very practical way, what that looks like for a university.

Thanks, Eric. Now I'm going to take you back in time a little bit — let's see if I can get this to work. Once upon a time there was a project I worked on at MIT called SIMILE, with Eric, in his CSAIL and W3C days. This was a long time ago, back around 2003, for six or seven years, and we were really trying to figure out: what could the Semantic Web do for libraries? The project produced a lot of tools to do things like create linked data, merge linked data, visualize linked data, navigate linked data, and so on. There are other projects working on this now — the LD4L project is a great example of "what can you do if the world is linked data?" — and Eric just spent ten minutes telling you about some of the potential in the linked data space. But I was at the time running a large library IT operation that looked like this. It's a Rube Goldberg machine, right?
You know: the interdependencies, the historical legacy systems, millions and millions of MARC records, people who only knew how to use particular tools, this huge ecosystem that we're part of. So the problem I had is: how do you get from what he just showed you when you're starting here?

We asked ourselves two questions. With the complex system of interdependencies that we're all living with, how could the library community imagine adopting BIBFRAME — the library community, not any one library, but all of us? (The Library of Congress is just one player in a very large ecosystem.) And what might adoption of BIBFRAME mean to a typical research library and its technical services operations — the operations that we're spending literally millions of dollars on every year? Those were the key questions that drove the BIBFLOW project. We're focusing on academic library technical services operations — the processes that we do all day, every day: acquisitions, licensing, cataloging, and so on — and we're exploring the impact of the new standards on those operations and those workflows.

Okay, so we have to ask ourselves a lot of really hard questions. How do existing software systems and workflows inhibit adoption of new standards? I have Ex Libris, right? That is my ILS. It does not deal with linked data. So what a lot of people are doing is taking their data out of these legacy systems and dropping it into custom pilot systems — but you can't really transform a library operation with that kind of model. How effective is simple conversion — if we just take MARC and dump it into BIBFRAME, what is the bias in that? What could we imagine a next-generation library management system doing if it were linked-data native? How might these workflows work in a wider library ecosystem, where we have organizations like OCLC and the Library of Congress, and vendors like YBP? What investments should we make? I'm a university librarian;
I have to decide when I'm going to take the leap and what I'm going to invest in — and we're talking about very large investments that we make every year. Is incremental adoption feasible? Is there some way we could inch our way into this, or does it have to be a big, giant, join-hands-and-leap-off-the-cliff kind of thing? And could libraries adopt this technology at different times? If I'm ready to go now but the other nine UC campuses aren't, would that work? This is just one of many, many questions we're trying to tackle with this project — which is why we need a community of people like you to help us figure out what the questions are and what the answers might be.

So we're looking at the whole ecosystem. We're looking at organizations that are key in our environment: the Library of Congress, but also vendors, OCLC, YBP, many, many others — you'll see a picture in a little while of just one example of that kind of interdependency. We're also looking at the range of metadata that we deal with in the real world every day. It's not just MARC; there's a lot of other metadata too, and one of the things I remember talking to Sally about when BIBFRAME was being cooked up is: could we come up with an umbrella that would let us integrate all of that metadata into one standard, not just MARC? Workflows — you'll hear more about this soon, too — we have many, many workflows in the library, and probably fifty to a hundred staff who live or die by those workflows. Those are not going to change overnight just because there's a new standard out there. And then many, many tools: tools that are available today, and tools that are being developed by ours and other projects like LD4L. So we're looking at all of that, but in the context of this fundamental question: what is a research library going to do? What is the roadmap for us to be able to adopt this kind of technology at scale?
So with that, I think we're going to talk about what we've actually done so far. I'll let Carl take it over here, and then we'll come back to what's next.

All right. As MacKenzie said, the real deliverable of this project is that roadmap. Our goal is to figure out how we get there and to make some suggestions that are really community-driven. In order to do that, though, we needed to test some things to figure out what would work and what wouldn't, and this is the slide that MacKenzie alluded to. This is the modeling of the Rube Goldberg machine, where we took the Davis library and went through it. All of these are different software systems that belong to us locally at the UC Davis library — these are the Davis ones; this is the university as a whole; we've got the University of California systems, because we're part of a larger system; and we've got our external vendor systems. There are some forty-odd different software systems out there that in one way or another touch our bibliographic data — the work that we do in the library. So it is a very complex universe. In the worst-case scenario, we'd have to build a little tool for every single one of those, and in some ways that's kind of what's happening now. Without a roadmap, there's this sort of ad hoc "okay, I need to do this, do that" — and that's an equation for disaster, because inevitably you're going to miss something. So the whole point is to test: how do we change this out as effectively as possible? One of MacKenzie's questions I think we've already answered, which is that it has to be an incremental rollout. There's no universe in which we're going to flip every service over — you know, on June 23rd, 2016, the entire library universe is going to go linked data.
For a variety of reasons, that's just not an optimal solution here. So what we have to do is figure out a sort of phased-in approach.

One of the things we did to start the project — the part that's already been completed — is go through each one of those software systems. As MacKenzie alluded to, each has multiple people whose jobs are devoted to working in that system, so this gets exponentially large when we talk about actual bodies, and each of those people has a particular workflow that they go through. So the first thing we did was start documenting those workflows, in a very detailed way: this person gets a book, they go to this piece of software, they type these things in, then they send it there. So we had a real sense of what our ground-truth baseline data set was.

Where we're at now is a sort of development and testing phase. To have a system that we could test in, we're using Kuali OLE as our system. I think most people here are probably familiar with Kuali OLE; just real quickly, for people who aren't, it's an open-source, very community-driven project to replace, essentially, an ILS. It's quite consciously not called an ILS, and there are good reasons for that: it's designed to be very modular, in a way that an ILS — which is, you know, an integrated system — isn't, so it has a much more object-oriented and modular design. The part that we're specifically concerned with for this project is the Describe function, and I'll get to that in a moment. But OLE provided us a way to fork all the code, so it gave us a ready-made code base for a package that handled all the various functions in the library, and it also talks to the other Kuali Rice and accounting functions.
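That modular design is what makes this kind of experiment practical: if the bibliographic storage sits behind a narrow interface, one backend can be swapped for another without touching the rest of the system. Here is a minimal sketch of that idea — all class and method names are hypothetical, for illustration only, not actual OLE code:

```python
from abc import ABC, abstractmethod

class BibStore(ABC):
    """Narrow storage interface the rest of the system codes against."""
    @abstractmethod
    def save(self, record_id: str, record: dict) -> None: ...
    @abstractmethod
    def load(self, record_id: str) -> dict: ...

class MarcStore(BibStore):
    """Legacy-style backend: keeps field-oriented, MARC-like records."""
    def __init__(self):
        self._rows = {}
    def save(self, record_id, record):
        self._rows[record_id] = dict(record)
    def load(self, record_id):
        return self._rows[record_id]

class TripleStore(BibStore):
    """Linked-data backend: decomposes each record into (s, p, o) triples."""
    def __init__(self):
        self._triples = set()
    def save(self, record_id, record):
        for prop, value in record.items():
            self._triples.add((record_id, prop, value))
    def load(self, record_id):
        return {p: o for (s, p, o) in self._triples if s == record_id}

def catalog(store: BibStore):
    # Client code never sees which backend it is talking to.
    store.save("rec1", {"title": "Henry IV, Part 1",
                        "creator": "Shakespeare, William"})
    return store.load("rec1")
```

The point of the sketch is just that `catalog` behaves identically against either backend, which is what makes an incremental, module-by-module rollout possible.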
So it really provided a good test bed, and we're doing a lot of development in OLE. We're trying to do that with an eye towards the community as a whole, and the idea that some of the work we do could ultimately move out of its own separate fork and become part of OLE. But I want to emphasize that the real purpose of this project isn't to develop another software platform. It's to give us a testable environment so we can produce our roadmap.

So this is OLE right now. If you want to describe something, it's what you'd expect: you go to the MARC editor and you see the things you'd expect — all your MARC fields. It's very field-based, because MARC is very field-based — because the card that MARC was based on was very field-based, right? So you get this monster form and you say, okay, I want to put this data in that field, and that's the way it works. The data store behind that — the system architecture for the current version of OLE — is patently not linked-data oriented; it's based on that field model. I don't say that as a criticism. There's a good reason for it, which is that people needed to implement it in the MARC world, and that's how it lives. The rough shot here is a very high-level view. It has this DocStore, which is what actually stores all our bibliographic data. It's got another database — and I actually don't know now whether it's the same database engine serving it — that holds a lot of data, but that's just its own business data, what it needs to manage itself. But it maintains a DocStore.
That was a Jackrabbit system at one point; now it's SQL-driven, and that's where your stuff lives. Then it has the various modules that communicate back and forth with it. One of the things it has that's really nice is a very robust API system for talking in and out, for putting your different front-ends and discovery layers on. So what we are doing here, for our bibliographic data in this project, is replacing that: we're not going to the SQL store, we're going to a native triple store. For right now, we're not touching the other modules — they're still communicating through the old channel — and we're just using the persistent ID to bridge the gap between the two. I'll talk later about where we think that could go in the future, but for now that's where we've drawn the line: we're just dealing with our bibliographic data, and we're developing towards that.

So with that, I think this is my movie slide. We've got a very early version of this that we just completed — I say "completed"; I mean it's an early version and there's a lot more work to do — but we're going to show where it's at now. And because I don't have the same level of stress tolerance as a lot of the other presenters here, I'm not doing this live; I recorded a video of me working in the system and cataloging something, and that's what I'm going to be showing you. I want to preface it by saying that I am a data guy and a semantic web guy, not a cataloger. I'm also notorious for making typos, and I consciously didn't redo this video ten times until it was perfect, because it betrays the ways in which working in linked data actually helps with the fact that I make a lot of finger errors, and reduces the problems that come with them. So we're going to watch it in all its error-filled glory, and I'm just going to narrate and talk through it a little bit.
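The persistent-ID bridge described above can be sketched roughly as follows: the bibliographic description moves into a triple store keyed by the same persistent identifier the legacy modules already use, so those modules keep working unchanged. The identifiers, property names, and data below are made up for illustration; they are not actual OLE or BIBFRAME structures:

```python
# Legacy side: modules (circulation, acquisitions, ...) know records
# only by a persistent ID and their own business data.
legacy_items = {
    "ole-12345": {"barcode": "31175009123457", "location": "Shields Library"},
}

# Linked-data side: the bibliographic description lives as triples,
# keyed by the same persistent ID.
triples = [
    ("ole-12345", "bf:title", "Henry IV, Part 1"),
    # Object is an authority URI, not a string (Shakespeare's LC name record).
    ("ole-12345", "bf:creator", "http://id.loc.gov/authorities/names/n78095332"),
]

def full_view(persistent_id):
    """Join the two worlds on the shared persistent identifier."""
    bib = {p: o for (s, p, o) in triples if s == persistent_id}
    bib.update(legacy_items.get(persistent_id, {}))
    return bib
```

Because both sides key off the same ID, either store can be migrated or replaced independently, which is exactly the incremental-rollout property the project is after.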
There's one point where I do want to pause. But so, basically: we go to the Describe function, we go into what still says "MARC editor," but instead it now takes us to a totally different interface, based on Scribe — this is Zepheira's Scribe, which we've ported into OLE. I'm just going to have to talk fast. We get to choose which linked data sources we go to. The first thing you do is choose what kind of thing you're going to catalog; based on that, it loads a profile for that type of object, so the screen changes depending on what we're clicking — you can see a different set of fields show up here. We've predefined which set of BIBFRAME fields goes with which type, and it loads the appropriate fields. Now, what I'm going to do here is catalog a critical edition of Shakespeare's Henry IV. What happens here is I still get to type in my local title — this is just the normal stuff, right? For this part I'm not doing anything linked data; I'm just typing it in. But you'll see, when I get to author, is where the magic happens. As I start typing, it starts to auto-fill based on lookups from the various linked data sources we've indicated we want to go to, and ultimately it lets me say: okay, that's the guy I want — it's William Shakespeare, that's my guy — and it adds that.

As we keep going down, I'm going to do the same thing for my editor. The key thing here is that while we're still seeing "Shakespeare, William," what the computer is seeing is the URI — a persistent identifier for William Shakespeare. In its brain, that's what it knows; it's only displaying the literal string for my benefit as a human user. And that's a big key to the whole system: it's how we link to those authorities and make sure our records can talk to one another. So you move down the system, and it's pretty robust already in terms of talking to the various controlled vocabularies that are there.
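That type-ahead authority lookup is the core interaction: the user sees labels, the record stores URIs. In production this queries remote linked-data authority services (the Library of Congress exposes suggest-style lookups at id.loc.gov for exactly this purpose); the sketch below fakes the service with a tiny in-memory index and entirely hypothetical URIs, just to show the pattern of returning (label, URI) pairs and persisting only the URI:

```python
# Stand-in for a remote authority service; labels are real-looking
# headings but the URIs are hypothetical placeholders.
AUTHORITY_INDEX = [
    ("Shakespeare, William, 1564-1616", "http://example.org/auth/shakespeare-william"),
    ("Shakespeare, John",               "http://example.org/auth/shakespeare-john"),
    ("Austen, Jane, 1775-1817",         "http://example.org/auth/austen-jane"),
]

def suggest(prefix):
    """Return (label, uri) pairs whose label starts with the typed prefix."""
    prefix = prefix.lower()
    return [(label, uri) for label, uri in AUTHORITY_INDEX
            if label.lower().startswith(prefix)]

def select(label, uri, record):
    """The UI displays the label, but the record stores only the URI."""
    record["creator"] = uri
    record.setdefault("_labels", {})[uri] = label  # cached for display only
    return record
```

Because finger errors are resolved against the authority list before anything is saved, a typo never makes it into the stored record — only a vetted URI does.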
I'm going to wait till we get to the next one — can I speed this up a little bit? ... Good call. Dang it, I just did— somebody try to speed that up for me. Eric, can you get on there? Can you drive? See, this is why I didn't go live, and now I'm dead in the water already. There we go. Okay, let me just go here — I think I've got it. I'm going to click down here. Look at that: I ended up right where I was. Okay, so here I'm still a string literal. There are some things here that we're already seeing that will continue to change. Right now you can see where we've got our nice lookup on different fields, but I still have to sort of start over with each one, right? We think — and at the end I'm going to talk more about this — that there are ways in which, given what I was doing right up at the top with Shakespeare and Henry IV, linked data could let the system go grab data and pre-populate — make suggestions to me about what should go in the other fields. We can chain these things so that it gets dramatically more efficient, and that's an end-game goal here. It's not just to realize the benefits of linked data; it's to say: can we actually build an interface — and we believe we can; we've seen a little bit of it in here — that actually makes everything faster, more efficient, and better all at the same time?

This part I do want to mention specifically. When we get down to this area, we're now in stuff that isn't very MARC-like at all, right? These are our more FRBR-like BIBFRAME sections, which let us put relationships into things. That was my mistake — I decided this went somewhere else. So here I'm saying what this is an edition of, and as I keep typing, it's going to find me a Henry IV. It shows up: okay, there is a Henry IV, Part One; that is what this is an edition of. That's the point — we're there. If we played with this interface and had that first, right?
We could start to pre-populate a lot of the data that's there. The publishers — I show this one because it brings up a particular function which, I will say, we haven't connected the linked data dots on yet: a function by which we can add a new provider that we maintain locally. We're trying to look up to the local authorities, or to the remote authorities via linked data, and grab the name from there; in the case that we can't, we create a local version of it. We haven't worked through the details, but what we imagine is that ultimately we're going to need an interface — a sort of new kind of work — where we've minted a local URI for this new thing that we're only using locally. And we've had discussions: should we just put the string literal in the data that we're publishing for the moment? But ultimately we have to connect those dots, right? Eventually that entity will show up in an authority list, and then we're going to have to make note that this thing we entered locally is that thing, and do a sort of same-as reference.

Here's now, though, the part that I do want to talk about. What I'm going to do next is put in an ISBN. We've done some initial playing with this, and I truly believe it's possible: we type in the ISBN — which I'm going to do in a second — and based on that ISBN, we could look things up. Here's where we could really chain things. We take that ISBN, I can go to LC — I can go to the Library of Congress — I can start to chain all of these things together and actually pre-populate about 90% of this. In theory I could pre-populate almost all of it; all I'd have to do is my local variance: my local title is this, or some other things, like the fact that this copy has an autograph in it, right?
That's the type of stuff. The level of efficiency there is extreme, and in this project we're going to experiment with actually using our phones — we're going to do the poor man's version: just scan the barcode with our phone, extract the ISBN, do the lookup, and chain the whole scenario.

Here I'm going to export it and then show you. So we've gone through, and again, what we — the human catalogers — have been seeing is the strings all the way along, but what the computer sees is the URIs, and that's what's reflected in our BIBFRAME. Here we have it serialized as RDF/XML, and you can see that all of those things are perfectly good, nice, authority-controlled URIs. That's the key to all of the stuff Eric was talking about — how we can connect our records to everybody else's records. So that produces our final end game here.

As I said, this is very, very, extremely beta. This was our first pass at that user interface, and it will improve in several ways. First of all, ultimately we can see bringing more than just the bibliographic data into this universe. This is a discussion we have — people say, well, why would you want to do that? Well, for example, and this is a true story: on Friday I got an email from a sociology faculty member who is doing a study and wants to look at circulation data around a particular text in our library. Right now there are some privacy issues with that, because we can't have human names attached to that data. But if he wants to know that, other people want to know it as well, and I could make a compelling argument why that data belongs in the triple store — because it is now attached to this record, this literal thing. So we will ultimately be thinking about how we could move other things into that universe. What we will definitely do here is continue: we'll complete our initial phase, the beta version of the software. That's where we are right now.
We're not done, but we're working towards completion of the initial linked-data version. Once we have that testing environment in place, the next phase of the project is local testing and enhancement. By local testing, I mean we are going to sit a whole crew of our catalogers down in front of the new system and say: do your work. You have the work you would normally do in the old world; now do it this way, and then tell us — was it better? Was it worse? What worked? What didn't? Based on that feedback, we will hone the user interface and try to hone the system. Once we finish that phase, we'll go to external testing and enhancement. We have a first round of that already planned: it's UC-internal, for all the UC libraries, where we're going to run virtual sessions and have people at the other UC campuses work in the system and tell us what they liked and what they didn't. Based on that, we will do another round of fixing and tweaking, and then we will go to fully external testing. We've already started to form relationships with NLM in particular, and some other institutions externally, who want to work with us and help with this testing. And we're open — we would love to hear from more people who want to sit in at that phase and bang on it. We're doing that not just to enhance the product itself. The goal here isn't just "test and fix the product"; it's: what do we learn from this situation? And that goes back, finally, into the roadmap.

So I'll give, anecdotally, one thing we've learned already. When we went back — everyone can just imagine; I'm not going to click back to it — to that interface where we're doing the cataloging, people were immensely concerned — and this was a shocker to me — with what the labels on each of those fields are. It really matters to people. To me, it was a no-brainer: I can change that label in ten seconds, right?
We can make it whatever it is. But it didn't occur to me that that was going to turn into an hour-and-a-half discussion where I would have to talk people back out of some panic when they first saw it. And that's a huge point of learning, actually, for the roadmap and, ultimately, if we continue to develop the product. There's a solution for that, which is to make the labels easily, externally configurable, so people who want a certain set of labels can have theirs and others can have their own. So there's a solution, but noting it, documenting it, and working through it tells us a lot about how the community is going to have to move forward in terms of the final roadmap.

So for us, success on this will be the fact that we're able to take what we've learned in all of these stages and produce a successful roadmap, as in a literal document, and communicate it well to the community at large. That rests on this phase. If we don't have community participation as part of the process, then whatever roadmap we produce will suffer to the extent that we don't have it. The wider that participation, and the more people we can get involved in the project at that level, the better the chance that we actually produce a roadmap that will work for the wider community. And it's something MacKenzie said, and I sort of alluded to it earlier: we don't think everybody has to flip over at the same time. It's not necessary that every library do that; there's naturally going to be a progression. But if the people who are progressing that way aren't moving in a direction that everybody else in the community wants to go, then it's a failure. We have to solve the business and sociological problems really before we tackle the technology problems.
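The configurable-labels fix described above can be sketched as a small override mechanism: the UI ships with default field labels, and each institution drops its own overrides into a config file. The field keys and the filename here are made up for illustration; they are not BIBFLOW's actual configuration format.

```python
# Sketch of externally configurable field labels, under assumed field names.
import json
from pathlib import Path

# Defaults shipped with the UI (illustrative field keys).
DEFAULT_LABELS = {
    "bf:title": "Title",
    "bf:contribution": "Contributor",
    "bf:subject": "Subject",
}

def load_labels(config_path: str = "labels.json") -> dict:
    """Merge an institution's label overrides (if present) over the defaults."""
    labels = dict(DEFAULT_LABELS)
    path = Path(config_path)
    if path.exists():
        labels.update(json.loads(path.read_text()))
    return labels

def label_for(field: str, labels: dict) -> str:
    # Fall back to the raw field key so an unmapped field is still visible.
    return labels.get(field, field)
```

An institution that wants "Author" instead of "Contributor" would put `{"bf:contribution": "Author"}` in `labels.json`: no code change, the ten-second edit described above, and every library gets its own vocabulary without forking the software.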
So with that I'm going to turn it over to MacKenzie to wrap us up.

We've deliberately kept this presentation pretty short and to the point, because I'm hoping we can have a bit of a discussion during the Q&A period. We're sort of determined that we at UC Davis will, you know, eat our own dog food and really try to make this work in a production environment, a very typical kind of environment, and try to understand what would get in the way. What we haven't started to do yet, and will begin soon, is looking at other workflows beyond just cataloging. (Oh, I drank your water!) So: other workflows like circulation, interlibrary loan, all of those kinds of things. And to Eric's point, not everything has to be linked data, but beyond the bibliographic data, what does need to be linked data in order to create a technical infrastructure in a library that works? Because we don't want lots of parallel systems that we have to maintain over time, right?

So we're getting beyond the point where we do the discovery layer that Eric showed, where we just take our MARC data out of our ILS and throw it over the wall to the linked data system. We want to make sure we're thinking about all the kinds of data and all those interdependencies, and starting to reach out to some of the organizations in the community that we all depend on, like OCLC. Now, OCLC has been very involved in the linked data world too, but there hasn't really been a forum for lots of research libraries to talk to the organizations like OCLC that are doing good work in this area. So that's kind of the next phase of this: bringing the community together. We can do a roadmap for my institution, but that wouldn't necessarily have the effect that we need community-wide. So we do want to produce a roadmap that would be of benefit to other institutions, and that's the question.
I'm coming to that: the partners we're officially talking to are the Library of Congress, OCLC, Kuali OLE, and NISO, but there are a lot of other projects working in this space. I mentioned LD4L, so that's one, but, you know, I think, Eric, you interact with lots of organizations. So really, as a community, how can we begin to knit some of this stuff back together? And in particular, if you're in a library organization, what could we produce from this project, other than code in OLE, that would help you understand what you need to do, and when, in order to join the party and get the benefit of linked data, assuming you're convinced that there is a benefit, which I am?

So I think I'm going to stop with that and thank our funders, the IMLS, who made this project possible, and then ask you if you can give us any advice. Certainly we're open for questions, but what would you like to see that we could do in the next year that would really help you understand what your choices are and what investments you need to make? And with that I will stop, and we'll take questions.

All right, well, we're going to close now. We're around, but thank you all for coming. I appreciate your time.