Hello, and thank you all for joining us today for our webinar, Sharing Omeka in the Web of Digital Scholarship. Before we get started, just a few logistical things. This webinar is being recorded and will be made accessible shortly, so you should all receive a follow-up email with a link to the recorded version. Additionally, you will see a chat feature in the lower left of the screen. This chat is being monitored, and you can use it to ask questions as we go along. We'll look at the agenda in just a moment, but there will be time for discussion and Q&A at the end, so feel free to ask questions as we go.

Moving along, this is our agenda for today. We'll begin with a brief introduction, followed by some information about the pilot project. We'll get into the results of the pilot project and then the discussion and Q&A that I mentioned before.

Our presenters today: I am Cynthia Hudson Vitale, Head of Digital Scholarship and Data Services at Penn State University. We have Michael Roth with us as well. Michael is the NEH-funded SHARE intern with Penn State University Libraries. He is the former Digital Scholarship Lab coordinator at Fenwick Library at George Mason University. He recently received his master's degree in Applied History from George Mason and a graduate certificate in Digital Public Humanities. He also has a master's degree in Education and Multimedia from California State Polytechnic University. Also speaking today, you will hear from Dr. Heather Froehlich. She is the Literary Informatics Librarian at Penn State University. She received her PhD from the University of Strathclyde, where she studied representations of social identity in Shakespeare and other early modern London plays. Before that, she studied English and linguistics at the University of New Hampshire. And finally, facilitating the Q&A, we have Judy Ruttenberg, who is the Program Director for Strategic Initiatives at the Association of Research Libraries.

Just to provide a little bit of background about SHARE: SHARE is a free and open data set, or database, and a set of tools that aggregates metadata about research and scholarship across the research lifecycle. So, more than just publications, SHARE includes information and metadata about funders, funding, data sets, code, preprints, and more. It harvests, normalizes, stores, and links that metadata, and it allows individuals to query it and build off it, given its open nature.

In September of 2017, SHARE received a planning grant from the National Endowment for the Humanities to investigate requirements: for scholars to link all the components of their work, for librarians to have a means to accurately track usage of all the components of a DH project, for scholars and students to quickly find the relevant scholarship and primary sources they need, and for new project leaders to quickly gain an understanding of all the existing content and tools at their disposal. To address these goals, we took a mixed-methods approach. We began with a survey of DH researchers and content creators to understand the current state of DH projects. We held a workshop with DH stakeholders to wireframe solutions to common DH stewardship issues. And then we also conducted on-campus focus groups with DH centers to really dig into DH discovery and access requirements. Today we'll be reporting out on a prototype that has been developed for this project, and Heather will be giving us more details shortly.
So this is the project team for the larger NEH grant. It includes myself, Judy Ruttenberg, Matthew Harp from Arizona State University, Joanne Paterson at Western University, Rick Johnson at the University of Notre Dame, and Jeff Spies of 221B Consulting. And I should acknowledge that SHARE has been around since approximately 2013-2014, through the generous sponsorship provided by the Alfred P. Sloan Foundation and the Institute of Museum and Library Services, and most recently this planning grant through the National Endowment for the Humanities. So with that, I am going to turn it over to Heather to lead us off.

Great. Thanks, Cynthia. So, as many of us know, the discoverability of DH assets is both distributed and difficult. There are lots of platforms, and previous attempts have represented various forms of failure. Some of you will be familiar with, for example, the TAPoR and DiRT directories. Lots of these have become, let's call them, graveyards of projects that are hard to maintain. When we were talking about what we wanted to do with this project, we really wanted to focus our attention on just one platform and just one issue, seeing as there's a whole range of projects that digital humanities can cover. In doing so, we wanted to focus specifically on a platform like Omeka, which is widely adopted by a number of different DH practitioners, and non-DH practitioners as well, doing a variety of different things. We were also really curious about how we could make this scalable, exportable, generalizable, customizable, and multilingual, and, in case that wasn't enough, to offer a graphical user interface that almost anyone could use at a very low price point. So, you know, free to use, perhaps free as in beer, but also free as in something that would be accessible to the widest variety of people.

So, one of our first questions for you is, oh, I've lost the question. Here we go. Our first question for you is: are you currently using any of the following content management systems? And we have options including Omeka, WordPress, Drupal, Joomla, Canvas, Moodle, Blackboard or any other similar system, Jekyll, or other. And I believe that voting is open. Is voting open? Amy, do we need to reset the question?

I'm attempting to do that. We may need to ask people to answer in the chat instead, but give me a minute to try to reset the poll.

This would not be a Digital Humanities event if something didn't go wrong, so you're always in that fun moment. One second. Someone can hum the Jeopardy song to themselves.

Heather, if you go to the next slide, you should see the poll again, and I apologize for the problem.

Okay, yeah, no problem. Here we go. So, if we could please quickly vote on whether we're using any of the following content management systems, and you're welcome to choose more than one here. Okay, so I'm going to close the poll in a second. So, final call for votes. All right. Well, thank you all very much. That's really interesting. You can see that Canvas, Moodle, and Blackboard are far and away the most popular, with Drupal in a very close second. Fascinating. Okay. Well, now that we know that, I think at this point we're going to hand this over to Michael to tell us about what he's been doing about scraping Omeka projects. And Michael, are you ready?

Yep.

Okay. Michael, take it away.

So, once again, I am Michael Roth.
I am the NEH SHARE intern here, working on how to scrape Omeka projects. I personally was drawn to Omeka as a grad student at George Mason in my first year of studying history, because it allowed me to organize a bunch of different digitized artifacts into one location and then exhibit them alongside any content that I found. What I like about it is that you can have all these artifacts with their metadata attached to them and still have all the narrative content along with it. The system has been widely adopted; as you saw, about a third of you responded that you have used Omeka or are currently using it.

One of the problems that I found was that Omeka's website has a fairly comprehensive list of projects, but at this point it's just the title and a brief one-line description. So I needed to find a way of extracting more descriptive information about the projects, not just the title, but anything on the about page, any contributors, rights information, all that sort of thing. To do this, we decided to focus on two web scrapers that use HTML and CSS to extract information from live websites. The first one is called ParseHub. It is a downloadable application for Mac and PC. The second one is called WebScraper.io, which is a Chrome browser extension that can be installed by anyone who has the Chrome browser. What I like about both of these is that both have free-to-use options, and they also have very detailed documentation and forums to help with any issues that might arise.

So, our first step was to find a wide range of Omeka projects that we could use to test those two systems. Our goal with that was to see, A, what's out there, and B, to gather sites that have different styles of organization or different elements for presenting the data. We found four main types. The first is exhibit-based, which, as a historian, is what I'm used to: a lot of narrative content that uses items to help visualize the narrative that's being presented. The second most common were collections-based sites, which tend to be special collections or archives or anyone that has a wide range of artifacts that they just want to post publicly for people to come and find. A third, less common, type uses maps as the main viewpoint and focus; they tend to use a map as the main layer, and then you click on different elements that bring up an item or other content. And the fourth type, which we found sporadically, are sites built by university professors, mainly for students to come in and learn how to input items, or just generally learn what Omeka does and get their feet wet with what a digital humanities tool does.

For our purposes, we decided to focus on the collection- and exhibit-based sites, mostly because they tended to be well-developed projects with enough variety within them to test how those elements affected the web scraping. So these are the two that I tested the most. The first is Colored Conventions, which is still being updated, so it's very much active and alive. It's a fairly straightforward site from an HTML standpoint. It focuses on a series of conventions organized by free Black Americans during the 1800s, and it's really centered around the documentation that was generated during those conventions.
Most of the information is organized into individual items, but the site also has a large number of exhibits that help provide context for them and discuss the topics covered in these conventions. The second one that we tested is the Florida Memory site. It is currently the main website for the State Library and Archives of Florida, and it's organized a little differently. It has six main thumbnails on the homepage, half of which are specific media types, and within those you have other projects, like exhibits and collections. So it's a bit more complicated in that sense, in that you have all these different branching websites.

And then these are just two other examples. Building History is an example of one that's map-based; in this screenshot, the one in yellow is the one that I clicked on, and as you can see in the top right corner, it brings up a couple of photographs and information about that building. And the Gothic Past website is presented by the University of Dublin. I like that one because it has a wider range of both collections and exhibits. These are just some of the other websites that I found and tested in the scrapers, along with some more on the next slide. All of these are linked, so once you have the presentation, you can go and look at them to see what they're like.

All right, and we have our next question. Okay, so we're wondering: do you include any of the following information on your DH project sites? Options include license for reuse, data type, the temporal span of the project, funder, open accessibility and resource reuse, code, and APIs; other options are available. We'll leave this open for a minute or two. We have a question about what exactly we mean by open accessibility, and we're thinking broadly along the lines of open access resources, ways in which you've made different kinds of data available, things like that. I think you can be very broad in that answer. Okay, all right. Should we close the poll? Is everyone ready? All right, let's see. So it looks like license for reuse is quite big. Information about funding is quite big. Interestingly, open accessibility, which is quite broad, is also quite high up there. And data type is obviously quite important as well. So, hmm. All right, that's interesting framing for what's going to happen next. So I'm going to give it back to Michael at this point. Michael, ready?

Yep.

Okay. All right. Well, once we decided what projects we wanted to use for testing, we needed to go in and actually set up how to extract the information. Like I said in the intro, we came across both ParseHub and WebScraper.io. Both systems utilize sitemaps, which are basically the umbrella that holds these things called selectors. And selectors are the things that tell the scrapers what to do. Normally a selector says: I want you to extract this particular text, or click on this link, or scroll down the page, things like that.
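As an illustrative aside, the idea of a single "extract this text" selector can be sketched in a few lines of Python using requests and BeautifulSoup. The URL and the CSS selectors below are hypothetical placeholders, not anything from the actual pilot; a real Omeka theme would use its own element IDs and classes.

```python
# Minimal sketch of what one "extract this text" selector does.
# The URL and CSS selectors are assumptions for illustration only.
import requests
from bs4 import BeautifulSoup

url = "https://example-omeka-site.org/"            # hypothetical Omeka homepage
soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

title = soup.select_one("#site-title")             # assumed element id
description = soup.select_one("#site-description") # assumed element id

print("title:", title.get_text(strip=True) if title else None)
print("description:", description.get_text(strip=True) if description else None)
```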
With ParseHub, some of the benefits are that it's pretty easy to use. It has a very friendly user interface, so it's easy to find buttons and selectors, and it's easy to add individual information to the selectors. And once you're done and you've scraped the data, you can export everything either as a CSV file or as JSON. There are, however, some drawbacks. On the free version, you can only have five total projects at one time. The other caveat is that, since it's the free version, your projects can theoretically be viewed publicly on ParseHub's servers by anyone else who's using it, so there are some copyright or privacy issues that might come into play. Another thing is that ParseHub runs on templates within the sitemaps, which typically translate to a specific web page. So you'd have one template for the home page, another template for an about page, a third one for a collections page, and so on. And getting them to talk to each other, just to make sure that each page is getting scraped, can get a little confusing. So that's one of the less user-friendly things about ParseHub. Also, the export file doesn't have as much information, and navigating through it is a little confusing as well.

So this is just a view of loading up ParseHub. For this one, it's my sitemap for the Colored Conventions website. It has the project you're working on in the big window, so you can see what you're doing. Off to the left are the templates and all the selectors, so you can see what you've done and which one you're working on. And in the bottom window, if you have one of your selectors clicked on, it gives you a brief view of what information it's looking at. And just to give more of an example of what a selector is: with this window, which you get to by clicking on the plus button to say you want to add a selector, you have all these different options in ParseHub. So you can say, I want you to select this information, or I want you to click on this, or hover over something. Each one of those becomes its own selector in the sitemap. One of the main issues with the templates is that, since each page has its own, you usually have to add a selector at the end of a template telling it to go to another template. That's really the only way to make sure that the system will go through the entire website and scrape all the information that you want.

One thing I do like about ParseHub is that if it sees a list of things, usually with links (here it's a list of items, but it could be anything from a list of exhibits to a list of items), and you click on the first one, it will register the rest of the list. If you want to select everything, you just click on the main text of the next one and it will select all of them. Or, if you want to go one by one, you can click on the X or the check mark to say yes, I want it, or no, I don't, and go through the list choosing which ones you want. Once you have your sitemap built, you have to actually tell the scraper to go ahead and scrape the information. ParseHub does have a way to test the templates, just to make sure everything is set up right. So you can use the first box, the blue one, to go out and test them, and it will go through all of your templates just to make sure everything is there. And once that's all set up (sorry about that, there's some noise in the background), once your sitemap is built, you hit the run button, and that's what tells ParseHub to go through and actually extract the information.
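To make that template-to-template handoff a little more concrete, here is a rough Python equivalent of scraping a list page and then following each link to the page behind it. Again, the URL, paths, and selectors are hypothetical assumptions, not the actual selectors used in the pilot.

```python
# Sketch of the "one template hands off to another" idea: scrape a list page,
# then follow each link and scrape the page it leads to. The URL, path, and
# CSS selectors are hypothetical and would differ per Omeka theme.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

BASE = "https://example-omeka-site.org"

def get_soup(url):
    return BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

# "Template" 1: the exhibits list page, collecting every exhibit link.
list_soup = get_soup(BASE + "/exhibits")                        # assumed path
exhibit_links = [urljoin(BASE, a["href"]) for a in list_soup.select("h2 a")]

# "Template" 2: each exhibit page, pulling its title and first paragraph.
for link in exhibit_links:
    page = get_soup(link)
    title = page.select_one("h1")
    blurb = page.select_one("p")
    print(link,
          "|", title.get_text(strip=True) if title else "",
          "|", blurb.get_text(strip=True)[:80] if blurb else "")
```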
All right, so once we were done going through ParseHub, we moved on to WebScraper.io. Like ParseHub, it is free to use; it's a Chrome extension, but it does not limit how many projects you have, so you can have as many as 50 different projects going at one time if you want to build that many. One thing that I like about it is that you can add multiple levels of HTML. Say you have the title of an exhibit followed immediately by a description, and it's listed as a header tag followed by a paragraph tag; you can tell the system to grab both of those at one time. Also, when you select a link, it will pull not just the name of the link but also the URL, and it will give you both in separate columns in the exported file. One drawback is that the HTML is very present in the selectors, so if you're not really familiar with HTML, it can get a little confusing. This is just another screenshot of a sitemap that I built. You can see in this column here that this is the specific code that it's pulling from the website, so each column gives you some specific information.

Hey, Michael, do you want to hand over the slides for a minute so you can take care of your puppy?

Yeah, yeah, sorry about that. He's usually very quiet.

It's all good. Heather, do you want to take over for a few minutes? Heather, I think you're still muted if you're trying to talk.

Sorry, folks. Yep, sorry about that. I forgot I muted myself. So Michael really dove into WebScraper.io. As you can see, he found that he was able to select multiple tags, including paragraph tags, H3, and others, and you can do multiple instances of each one in every selector. There's a set selector for the next-page button, so projects that include exhibits, items, and collections of stuff that are spread over multiple pages can be accounted for and sort of aggregated. And images are recorded as URLs, so any information within the images is not scraped; it's only what appears in the HTML code, so it would only take the URL for each individual image source. So this is just to say that it's a slightly more robust version of what we were just hearing about with ParseHub. And as you can see, the WebScraper.io selectors let you do slightly more granular work. I don't know if you can see very clearly, but there are different levels of information that you can include here. Where are we? Right. There's also something called the selector graph, which is a way to visualize the hierarchy of selectors, and here's an example that Michael created from the Gothic Past page. It covers a variety of different levels, so you can visualize at what level different metadata can be extracted. You can see that there's a root, which you could call the main website, and then there's a variety of different levels within that.
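For readers who want a sense of what one of those sitemaps looks like on disk, here is an illustrative sketch of a WebScraper.io-style sitemap written as a Python dictionary so it can be dumped to JSON. The start URL and CSS selectors are hypothetical, and the field names are only meant to convey the general shape of an exported sitemap; treat the whole thing as a sketch rather than an authoritative example of the tool's format.

```python
# Illustrative sketch of a WebScraper.io-style sitemap, written as a Python
# dict so it can be dumped to JSON. The URL and CSS selectors are hypothetical,
# and the field names approximate the general shape of a sitemap export.
import json

sitemap = {
    "_id": "example-omeka-project",
    "startUrl": ["https://example-omeka-site.org/"],
    "selectors": [
        # A link selector records both the link text and the URL it points to.
        {"id": "exhibit_link", "type": "SelectorLink", "multiple": True,
         "parentSelectors": ["_root"], "selector": "div.exhibit h2 a"},
        # A text selector scoped to the page each exhibit_link leads to.
        {"id": "exhibit_description", "type": "SelectorText", "multiple": False,
         "parentSelectors": ["exhibit_link"], "selector": ".exhibit-description p"},
    ],
}

print(json.dumps(sitemap, indent=2))
```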
Sorry, I'm just looking at my notes here to make sure I'm getting all the correct information to you. Okay. So, when we export data, both ParseHub and WebScraper.io export it as a CSV, and the selector label or ID, depending on which program we're using, becomes the column headers. ParseHub and WebScraper.io each organize the CSV file slightly differently, which is of course an issue for us as a sort of aggregation problem. In particular, the ParseHub CSV data separates the templates into different blocks, and each template appears below and to the right of the previous template, which I understand is hard to visualize, so thankfully there's an image of that on the next page. Michael, are you back?

Yeah, I'm back.

Okay. We've just made it to the end of slide 35.

Okay. Yeah. So we just talked a little bit about blocks. Since ParseHub uses templates as its main building block, the data is organized the same way, so each of these blocks is its own template. This first one would be all the information straight from the homepage, and then the second block, which starts at row 34 in column F, looks like everything on the exhibits page, and they continue to build like that for the rest of the templates. So it's a little confusing, because it's all these blocks with a bunch of white space in between them.

WebScraper.io does it by hierarchy instead. This is where that selector graph comes into play, because anything on the level with root comes first in the columns, and those values get copied for each row. Anything that comes after them gets tacked on to the right. I do have another screenshot, because it is a little confusing. It starts out with the information from the actual sitemap, so it tells you when it created that particular row and the website it started at, and then it goes into whatever selectors were on that first page. For this one, that would be the creative work, which is the title and the project's about information, and then it delves into anything that was linked to that, like the contributor page, then a contact page, and so on. So it builds from the left, which is the first set of dots in that window, and then to the right are those extra dots that branch out from it.

Once we tested both of these systems, we gathered our thoughts about what we liked about each, and we decided at this point to use WebScraper.io, mostly because we weren't limited in terms of how many projects we could use, there was no issue about privacy or any of that, and you also get so much more information in that export file; for us, it really gave us all the information we could use.

Okay, I'm back with another question, in case you're not tired of hearing from me yet. What do you think the necessary next steps for a DH project aggregator would be? We have options including: prioritizing Omeka sites for harvest, developing community standards, metadata curation and remediation, scaling to larger DH projects beyond Omeka, and other; we're open to suggestions on what other could be. Okay, just another minute here, very quickly. All right, let's see what the responses are. Developing community standards is very high up there, followed really closely by metadata curation and remediation. Interesting. I think scaling to larger DH projects seems to be in third place. All right. Michael, are you ready for the next slide? All right, take it away.

Okay. So at this point, we decided to focus on generating a metadata dictionary, just for us to use from that point forward, and this was really to create a standardized list of terms for us to use within these scrapers. It really involved comparing the SHARE schema with what we found in these projects, and, since Omeka uses Dublin Core for a lot of its creation, also seeing what was common across all three of those, as well as terms that only appeared in one. So what we did was, once we had our list of terms, we created a definition for each of them, like what is the title, what is the description, and what we found was that there was some duplication, as well as some terms that don't have a direct translation. Some of the ones that got duplicated were title and description, because those appeared not only in the overall project but within an exhibit or within a collection or within items.
Other things that we came across were terms that were not quite obvious in their definition, like: at what point is an organization a contributor, or, to flip that around, is a contributor also part of a larger organization? Also, since items are really their own thing, they have their own set of metadata embedded within them, so we had to figure out what to do with that. And the last thing was that we had to add anything we had missed.

Okay, so this is basically what our dictionary looks like. We started off with the higher-level information; for us, the SHARE term for a title was creative work, so that became our consistent word for that higher level of information, and then anything underneath that became a project, and that includes both collections and exhibits. We also wanted to provide an example from our scrapes, just to give a more concrete explanation of a real-world application, so for each term, for a title we input a title, for a description we put in a description, just so we were clear about which one we were talking about.

Once that was completed, we had a couple of updates to make to the scrapers. First of all, we had to change any other terminology that was already entered into the web scraper, so it was mostly changing labels from whatever was in the project itself to this metadata dictionary. There were a few that I didn't add initially, like a publisher or any rights information, so it was also going back and adding any of those terms that hadn't been scraped yet.

Then, once we were finished with all of our metadata, we had a couple of updates to the overall project. The first was that we decided to focus only on the upper levels of information. This means not trying to scrape items or any content; this was partly to help manage the amount of information we were getting and partly for time management, because the amount of time it takes to scrape an entire website went up dramatically once we added all the items. Each scrape took about 15 to 20 minutes to pull all the information, up from about a minute or two at most for just pulling the titles and descriptions. It also helped us keep everything manageable in terms of the information and helped us get rid of some duplicates, in terms of titles and descriptions.

During all of this, we had a question in the back of our minds about what happens if the project is not in English. What I found, in trying to figure that out, is that Chrome can give a rough translation of any foreign language, so once you have the browser translate the page, you can set up a WebScraper.io sitemap like any other English website; the process doesn't change. And what we found was that, once we went to extract the data, it kept the original language. Once you enter your information about the selectors, Chrome will not translate any page unless you specifically tell it to, so when the scraper goes in to load the page for scraping, it's still in that original language.
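To make that dictionary step a bit more concrete, here is a minimal pandas sketch of renaming whatever labels a sitemap happened to use to one consistent set of terms. The file name, column names, and dictionary entries are illustrative assumptions, loosely modeled on the creative work and project levels described above.

```python
# Minimal sketch of applying a metadata dictionary to a scraped export:
# rename whatever labels each sitemap used to one consistent set of terms.
# The file name, column names, and mappings are illustrative assumptions.
import pandas as pd

metadata_dictionary = {
    "site_title": "creative_work_title",
    "site_about": "creative_work_description",
    "exhibit_link": "project_title",
    "exhibit_description": "project_description",
}

scraped = pd.read_csv("example_omeka_scrape.csv")       # hypothetical export
normalized = scraped.rename(columns=metadata_dictionary)

# Keep only the columns the dictionary knows about, in a consistent order.
keep = [c for c in metadata_dictionary.values() if c in normalized.columns]
print(normalized[keep].head())
```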
There were a few general problems along the way. The most pressing was that when we transitioned to the metadata dictionary, the exhibits and collections pages got combined into one term, so there was a little bit of head-scratching trying to figure out how to set a selector to go to both pages. What we found is that WebScraper.io will let you type code, which I believe is CSS, into the selector, and it basically says to the scraper: I want you to go to this page and this page, and not the other one. Unfortunately, that doesn't really work for the more complicated sites. As I said at the beginning, in Florida Memory the top three thumbnails each have at least two other links, so right off the bat you have six different pages to link to, and that really confuses the web scraper. The solution that we came up with was to keep everything separate, so each page would have its own unique name, and then, once it's exported, to use data-cleaning tools like OpenRefine to collapse those columns under the one term. This is just to show you how complex that Florida Memory project is in the scraper, because you have the main page with eight different selectors on it, and more than half of those have other links under them. For our naming structure, we started off saying this is a project, then adding an underscore saying what kind of information this is, so it became project_photographs or project_audio, and then tacking more specific information onto the end as the scraper was built up.

Another issue that we came across was in trying to scrape images. It's most common with logos of organizations, because if you're trying to figure out who helped create the project, the organizations are really helpful for that. The problem is they usually only provide a logo, which doesn't have its own textual information, and the scraper can only see the URL or HTML information. If that's all you need, that's great; you can just go in and say, I want you to get the HTML code for that image. If you need the specific text within the image, the scraper can't really do that, so you would have to manually download those images outside of the scraper and use OCR software that can read that text. Another issue with images is that sometimes, about half the time really, they're also set as links to those organizations, which can also confuse the web scraper. The solution I found is that when you're setting up the selector, you need to set those as a link selector rather than an image selector, because that tells it this is a link rather than an image, and that can give you the same information as any other link.

Now, in terms of expanding this to other, non-Omeka projects: since both ParseHub and WebScraper.io use HTML and CSS, in theory they can scrape any public website. So those of you who use Drupal, WordPress, any of those, they should be able to load those projects, and you can select the information that you want from those websites. The only thing to keep in mind is that those platforms do organize their data differently than Omeka, so just be aware of that when you're creating your sitemaps and extracting your data. Keep an eye on your terminology: if you have a schema already built, use that whenever possible, and if you don't, just keep it consistent throughout your sitemaps, and if you're scraping more than one project, try to keep everything the same wherever you can. Our focus for this project was just this top level of information, but it doesn't have to be; if you want to scrape just the items linked to collections, it can do that fairly easily, just be aware of any copyright considerations and keep that legality in mind. And one thought that I had through all of this was that if you wanted to create an archive of the final iteration of a project, you can, or, if you want to see how a project progressed through its creation, you can set up your sitemap along the way as you're building the project and periodically do a scrape, just to see how the information changed throughout its lifecycle.
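As a rough equivalent of the OpenRefine cleanup step Michael describes (collapsing the separately named branch columns back under one term after export), here is a short pandas sketch. The file name and column names are hypothetical and simply mirror the project_photographs / project_audio naming convention above.

```python
# Sketch of the post-export cleanup: collapse the separately named branch
# columns (photographs, audio, ...) back into a single "project" column.
# The file name and column names are hypothetical illustrations.
import pandas as pd

df = pd.read_csv("florida_memory_scrape.csv")            # hypothetical export

branch_columns = ["project_photographs", "project_audio", "project_video"]
present = [c for c in branch_columns if c in df.columns]

# Take the first non-empty value across the branch columns for each row,
# then drop the originals, mirroring what OpenRefine would do interactively.
df["project"] = df[present].bfill(axis=1).iloc[:, 0]
df = df.drop(columns=present)

print(df[["project"]].head())
```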
Okay, thank you very much, Michael. Before we get to questions, we have a question for all of you: after hearing this webinar, what support do you think the DH community needs most? Is it A, recommendations on how to handle abandoned digital humanities projects in, for example, Omeka, WordPress, other HTML sites, etc.; B, best practices for scaling the production, stewardship, and project lifecycle of DH projects more generally; C, project management strategies; or D, assessment methods for digital humanities project success, value, and/or impact? For this one we're going to make you choose something, so if you could all head to the polls. Okay, we've got 26 votes. All right, in the interest of having a lively discussion, we need to close the poll in a second; last call. Wow. Okay, far and away, the question of value and impact is very clearly the highest, with the recommendations on how to handle abandoned projects coming in basically tied with project management strategies. So it sounds like this is a very divided audience. Very interesting. Okay, well, at this point we would like to thank you all very much, and we'd like to open the floor to questions. If you can please type those into the little chat box, we will have them sent to us, and I believe Judy is going to read them to us.

Yes, thank you so much for that wonderful presentation and very graceful handoff and teamwork. We have a couple of questions queued up already, for anyone on your team, I think, to answer. With respect to providing community guidance for metadata standards, the question is: are we talking about digital-project-level metadata that is laid on top of a project's existing metadata standards, for example, for Omeka, on top of DCMI and VRA Core?

This is Cynthia; I can take that question. Yes, we were interested in getting a lot of the descriptive metadata about a project at the high level. As Michael outlined in some of the earlier slides, we were interested in getting information that would map to the SHARE metadata schema. The SHARE metadata schema is heavily based on DataCite, so it can get very granular, but for this instance it was just looking at some very high-level information about the project overall, which was in many cases distributed across many pages within an Omeka site. Is there anything anybody else wants to add?

Okay, thank you, Cynthia. We also have a question with respect to the scraping of, or having to manually download, images: shouldn't all the logos or funders have alt text? Otherwise it represents an accessibility fail, and isn't this something that Omeka site creators need to be aware of?

All right, I'll take this one. Theoretically, yes, they should all have alternate text, but not all the projects I found took the time to do that. What I found is that the projects that were aware of it usually have the alternate text on all their images, and the ones that didn't enter it anywhere usually didn't have any alternate text at all throughout the entire project. So it could be a matter of those creators not being aware that it's an option.
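As a small follow-up to that accessibility point, here is a quick sketch of how a site creator might audit their own pages for images missing alt text. The URL is a placeholder; this is just an illustration, not part of the pilot workflow.

```python
# Quick sketch of auditing a page for images that lack alt text.
# The URL is a hypothetical placeholder.
import requests
from bs4 import BeautifulSoup

url = "https://example-omeka-site.org/"
soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

for img in soup.find_all("img"):
    alt = (img.get("alt") or "").strip()
    if not alt:
        print("missing alt text:", img.get("src"))
```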
We also have a question here, and this is interesting: what is the added value of web scraping over sharing a project on GitHub, for example? If all of your code and assets are in one spot on GitHub, and the GitHub repository is shared via SHARE, or maybe the OSF, where's the value in the web scraping?

I think the value in the web scraping is that it's currently not the norm to put code and assets all in one spot. What we found when we did our survey is that, as we suspected, people put their different assets in different places, and so going to each of those places to retrieve them is often necessary to show the entire cycle of the project. It would be great if everybody kept things in one place, but as it is, things are highly distributed or, again, locked down on one small website or one big website.

Okay, thank you. Then one more question in the queue, and we've got a couple of minutes if there are more: what tools are being used, or planned to be used, for analyzing this scraped data?

So, analysis isn't really the next step. What I think we are looking at, as far as next steps, is trying to map and ingest this into the SHARE data set so that it's discoverable and then potentially linkable with other assets that are already there. We don't have anybody from the highly technical side of the SHARE project on the call today, I don't think, but they do have regular community calls where they discuss future technical developments and those sorts of things. So as far as next steps for this project, that's where it's headed next.

And one thing for me that would be quite interesting, I think, would be looking for similarities across lots of different kinds of websites and different kinds of projects, you know, what is consistent across lots of different kinds of digital humanities projects, and I imagine that would be something to observe at scale rather than at an individual level. Michael, do you want to add anything?

Not really. I like Heather's idea about seeing the trends over a wide swath of projects, just to see what's similar and what's different, and seeing if there are any differences between the exhibit-based and the collection-based sites, just to see what sort of story these different types are telling.

Great. Well, thank you so much to Michael, Heather, and Cynthia for a wonderful presentation, and thank you all for joining us this afternoon. This webinar, as Cynthia indicated, has been recorded and will be made available. I just want to thank you again for sharing such a wonderful project with us today.