Hello and welcome everyone. My name is Eric Franzen. We would like to thank you for joining us today for this webinar, a production of DataVersity, with our speaker, David Booth, of Hawaii Resource Group and Rancho Biosciences. Today David will be discussing the Yosemite Project for Healthcare Information Interoperability: standardizing the standards and crowdsourcing translations. Just a few quick points to get us started: due to the large number of people that often attend these sessions, you will be muted during the webinar. We will be collecting questions in the Q&A box in the bottom right-hand corner of your screen. At some points during today's presentation, depending on the system you're using, the layout of your screen may change. This is due to the type of media that the presenter needs to show and the system that he's using. If that happens, please be aware that a drop-down navigation panel will appear at the top center of your screen. You will still be able to access the Q&A, the chat module, and all those other modules you see over on the right-hand side; again, that panel will appear in the upper center of your screen. As always, we will send a follow-up email within two business days containing links to the slides, the recording of this session, and any additional information that may come up during the webinar. This webinar is actually the first in a series. At the end of today's session, David will be speaking about how to access the next installment. Those following installments will not be hosted through DataVersity, but they will be part of that same series on the Yosemite Project. Now, without further ado, a few words about our speaker, David Booth. David is an independent consultant and senior software architect at both Hawaii Resource Group and Rancho Biosciences, using semantic web technology to make healthcare and biomedical data interoperable between diverse systems. He previously worked at KnowMED, that's K-N-O-W-M-E-D, using semantic web technology for healthcare quality-of-care and clinical outcomes measurement, and he worked at PanGenX, applying semantic web technology to genomics in support of personalized medicine. Before that, he worked on Cleveland Clinic's SemanticDB project, which uses RDF and other semantic technologies to perform cardiovascular research. Prior to that, he was a software architect at HP Software, where his primary focus was emerging technologies. He was a W3C fellow from 2002 to 2005, where he worked on web services standards before becoming involved in semantic web technology. He has been programming for many years using a variety of programming languages and operating systems. He holds a PhD in computer science from UCLA. He's a wonderful speaker, and I'm thrilled to have him with us today. David Booth, welcome. Thanks, Eric, and thanks for having me. And thank you, everyone, for joining. Eric, if you could give me the ball, I will start sharing my desktop, and we will get started on the slides. I'm now sharing my desktop, and you should now see, coming up, the title slide: the Yosemite Project for Healthcare Information Interoperability. Is that visible, Eric? That is there. Okay, so here we go. So first, I'm going to talk briefly about the mission of the Yosemite Project, and then I'll briefly talk about the foundation of our strategy, which is based on RDF, and I'll say a few words about why it's our foundation. And then I'll get into the roadmap for interoperability itself, which has three main components.
Standardize the standards, crowdsource translations, and incentivize. So imagine a world in which all healthcare systems speak the same language with the same meanings, covering all of healthcare. Imagine what we could do if we had that situation. That would be semantic interoperability between computer systems. I've grabbed this definition from Wikipedia. You could define it different ways, but I like this one because it's pretty concise and to the point: the ability of computer systems to exchange data with unambiguous, shared meaning. So we're talking about computer systems exchanging data. Unfortunately, today we have pretty much a Tower of Babel with the information systems that are involved in healthcare. There's a huge variety of them, and they don't interoperate well. So the mission of the Yosemite Project is to achieve semantic interoperability of all structured healthcare information. When I say structured healthcare information, we are excluding the unstructured stuff like doctors' prose notes, which are very important as well, and in some cases can be turned into structured information. And things like images, which also are unstructured, but again, there may be important structured information associated with them. But the focus here is on structured healthcare information, achieving semantic interoperability, and not just a small part of the structured healthcare information; we want to address it all. Here's the roadmap that I'm going to be going through, and I'm going to talk about each of the components of this roadmap in turn. So first is RDF as a universal information representation. That's kind of the foundation of our roadmap here. RDF stands for Resource Description Framework, but it's really more instructive to think of it as standing for reusable data framework. It's a language for representing information content. It's a standard that's been around for more than 10 years, developed by the W3C. It's been used in a lot of different domains, including biomedical and pharmaceutical domains. It allows information to be written in various forms and its content to be captured. And I'm going to show you here, if you're interested in what it looks like, what some English equivalent might be, and then what the RDF equivalent would be down below. So if we want to say that some patient 319 has a name, John Doe, and has a systolic blood pressure observation, OBS001, which in turn has a value of 120 millimeters of mercury, then we could formally, in other words in a computable way, express that in RDF as written in the lower left. And if we want, we can visualize that in what we call an RDF graph. That's just kind of a pictorial way of displaying the information content. Now what I captured here in the RDF is the information content that was implied by that English statement. That's basically what it's about. I'm not capturing information about the English itself, I'm capturing its content. Now one of the important things about RDF, which is going to come into play in the rest of this presentation, is the fact that RDF captures information content. It doesn't care about syntax or data formats. It's actually data-format independent. And in fact there are multiple syntaxes for RDF. One is called Turtle, another is called N-Triples. There's a JSON-based format, there's an XML-based format, and there are some others as well. This means that the same information content can actually be written in different formats in different ways.
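To make that concrete, here is a minimal sketch in Python, using the rdflib library, of roughly what that patient example might look like, and of writing the same graph out in several RDF syntaxes. The namespace and property names are illustrative assumptions, not drawn from any real healthcare vocabulary.

```python
# A minimal sketch of the patient / blood-pressure example using rdflib.
# The ex: namespace and property names are hypothetical, for illustration only.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/med/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Patient 319 has the name "John Doe" and a systolic blood pressure observation OBS001...
g.add((EX.patient319, EX.name, Literal("John Doe")))
g.add((EX.patient319, EX.systolicBP, EX.obs001))
# ...and that observation has a value of 120 (millimeters of mercury).
g.add((EX.obs001, EX.valueMmHg, Literal(120)))

# The same information content can be written out in multiple RDF syntaxes:
print(g.serialize(format="turtle"))     # Turtle
print(g.serialize(format="nt"))         # N-Triples
print(g.serialize(format="json-ld"))    # JSON-based syntax (rdflib 6+ or the json-ld plugin)
print(g.serialize(format="xml"))        # XML-based syntax
```

Running this prints the same three triples four times, once per syntax, which is exactly the point: the content stays identical while the format varies.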
And this is the more critical point: it means that any data format can actually be mapped into RDF in order to capture the information content of that data format. So here's a little illustration on slide 14: different data source formats resulting in the same RDF. On the left, here's a little example of a bit of HL7, version 2-point-something. It's got these cryptic vertical bars as separators between the data strings. And on the right is a little example of an emerging standard called FHIR, Fast Healthcare Interoperability Resources, produced by HL7, and it uses an XML-based syntax. But both of these may express the exact same information content, which we could capture in RDF as illustrated at the bottom. So why does this matter? Well, it matters because it puts the emphasis on the meaning of the information, which is really where it should be. Who really cares about data formats and things like that, as long as you can understand them? What we really care about is the meaning of that information. It means that RDF can act as a universal information representation. It also means that we don't have to throw away our existing data formats. We can still use our existing data formats, but each one can have an implicit RDF equivalent that captures its information content. So there is no need to actually exchange the RDF format explicitly, per se, as long as the format that is exchanged has a standard RDF equivalent available that can then be understood in kind of a universal way. In other words, this is not about getting everybody to change their existing data formats or existing data representations. It's instead about being able to understand what those data formats and representations mean. So there's a lot more detail about why RDF is a good basis for this and why over 100 thought leaders have endorsed RDF as the best available candidate for this purpose. Another webinar in two weeks will get into more of the details of that; we won't have time for that now. So let's get into talking about standardizing the standards. Again, I said we're going to go through this roadmap and talk about each one of these things, so let's talk about standardizing the standards here. Today we have over 100 standard vocabularies in the UMLS, the Unified Medical Language System, which is under the National Library of Medicine. That's a lot. Now, they are not all used to the same degree, and so the Office of the National Coordinator, the ONC, has put out its standards advisory recommending a certain subset of these to be used for the most common kinds of healthcare exchange. So it recommends around 30 or so of these standards, plus a whole bunch more clarifications to the standards for the many cases where the standards weren't clear enough or didn't nail things down enough to actually achieve real interoperability. But the problem is that this is still a patchwork of different standards that use different data formats, different data models, different vocabularies, and that are even defined in different ways. They're not defined in a common, uniform, computable form, in a computable way. So you might think that the answer to this is to come up with a new standard that will kind of cover them all, but you're going to run into a trap if you do that, and that trap is illustrated very nicely by this XKCD cartoon, where you start off with 14 competing standards, and then people complain, well, that's ridiculous, we've got to have one universal standard that covers everyone's use cases.
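As a rough illustration of that "different formats, same RDF" idea, here is a small sketch in Python with rdflib. The pipe-delimited segment and the XML snippet below are simplified stand-ins for HL7 v2 and FHIR, with made-up field layouts and property names; the point is only that two different lift functions can land on the same triples.

```python
# A sketch of lifting two different source formats into the same RDF.
# Both inputs below are simplified, hypothetical stand-ins, not real HL7 v2 or FHIR payloads.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace
from rdflib.compare import isomorphic

EX = Namespace("http://example.org/med/")  # hypothetical namespace

hl7_v2 = "OBX|1|NM|8480-6^Systolic BP||120|mm[Hg]"                                   # pipe-delimited style
fhir_xml = '<Observation><code value="8480-6"/><value value="120"/></Observation>'   # XML style

def lift_hl7(segment: str) -> Graph:
    """Lift the pipe-delimited segment into RDF triples."""
    g = Graph()
    fields = segment.split("|")
    g.add((EX.obs001, EX.code, Literal(fields[3].split("^")[0])))
    g.add((EX.obs001, EX.value, Literal(fields[5])))
    return g

def lift_xml(doc: str) -> Graph:
    """Lift the XML document into RDF triples."""
    g = Graph()
    root = ET.fromstring(doc)
    g.add((EX.obs001, EX.code, Literal(root.find("code").get("value"))))
    g.add((EX.obs001, EX.value, Literal(root.find("value").get("value"))))
    return g

# Two different syntaxes, one information content:
assert isomorphic(lift_hl7(hl7_v2), lift_xml(fhir_xml))
```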
Yeah, and pretty soon you have 15 competing standards. So why does that happen? Well, developing yet another standard is not necessarily going to get you where you want to go. And the problem is that each standard tends to be a bit of an island. Each one has its own sort of sweet spot of use, the set of use cases that it was designed for, and there's a lot of duplication between them. So by using RDF and OWL, and RDF has kind of a family of technologies that go with it, and OWL is one of them, by using RDF and OWL to define the meanings of these standards, we can have kind of like semantic bridges between the standards. So the overall goal then is not to do away with these different standards, but to have a cohesive mesh of standards that act as a single comprehensive standard, rather than a bunch of islands that are partially overlapping and inconsistent and things like that. So we need to standardize the standards. The way we need to do that is to start off with the use of RDF and family, meaning OWL and a few other things that go with it too, as a common computable definition language for those standards. That allows us to semantically link those standards in a computable way, and it also facilitates the convergence of those standards on common definitions, so that they will then form a single cohesive mesh of standards, rather than a haphazard patchwork of partially overlapping and inconsistent standards. Now, in order to facilitate this standards convergence, it would be very helpful to have what I'm calling here a collaborative standards hub. We don't have this yet, but I'm identifying the need for it here. You could imagine it as a cross between the existing BioPortal site, which is funded by the NIH, I believe, GitHub, WikiData, WebProtégé, and similar repositories. There are a few things that you could imagine it being a bit like, and maybe it would even be the next-generation BioPortal. The point is that it would act as a collaborative hub for both the developers of standards, the committees that are creating, managing, and updating these standards, and the implementers of those standards to access and use. It would act as a repository. And it would hold RDF and OWL definitions of the various data models and vocabularies and terms that are used in those standards, and thereby facilitate and encourage the semantic linkage between those definitions and those data models and those vocabularies. And implicitly, by exposing the similarities and differences in a machine-processable way, it will also facilitate their convergence into a consistent set of definitions and data models and vocabularies. So some of the things that it might do: it might suggest related concepts based on natural language matching, for example. It might check for and give notice of semantic inconsistencies by using a reasoner, both within a given standard and also across standards. And it would be helpful, also, if it could be accessed either through a regular browser interface or through a RESTful API, so that any committee or person that didn't like the browser-based interface and wanted their own interface could create their own interface, and it would just talk to that RESTful API and still use the same backend. People like to use their own interfaces sometimes.
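Here is a minimal sketch, in Python with rdflib, of what one of those semantic bridges might look like under the hood: an OWL (or SKOS) assertion linking a term defined in one standard to the corresponding term in another, so that software can follow the link. The two vocabulary namespaces and term names are hypothetical.

```python
# A sketch of a "semantic bridge" between two independently defined standards.
# The vocabA/vocabB namespaces and class names are invented for illustration.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, SKOS

VOCAB_A = Namespace("http://example.org/vocabA/")   # hypothetical standard A
VOCAB_B = Namespace("http://example.org/vocabB/")   # hypothetical standard B

bridge = Graph()
# Strong, logic-level statement: the two classes mean exactly the same thing.
bridge.add((VOCAB_A.SystolicBloodPressure, OWL.equivalentClass, VOCAB_B.BPSystolic))
# Looser, terminology-style mapping, stated with SKOS instead.
bridge.add((VOCAB_A.SystolicBloodPressure, SKOS.exactMatch, VOCAB_B.BPSystolic))

# A consumer of standard B can now discover which of A's terms correspond to its own:
for a_term, b_term in bridge.subject_objects(OWL.equivalentClass):
    print(f"{a_term} is equivalent to {b_term}")
```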
And furthermore, it would be very helpful if this collaborative standards hub could also scrape or reference definitions that are held elsewhere, because for one reason or another, various standards committees might want their own places where they keep their standards, and they might not want to keep the original or the master in this hub. That's fine, too, as long as there's a way to reference it or access it. And it could also provide both objective and subjective metrics. For example, the size of the standard, the number of views that it had, how much it's linked to other standards, how many people downloaded it, that sort of thing. And it might have subjective ratings, too. And the point is that it would use RDF and related technologies like OWL under the hood to semantically connect these standards and make them computable. But it would not have to put RDF or OWL in people's faces. These would be hidden under the hood, so that the users of this hub could be domain experts without knowledge of RDF or OWL. Now, the closest thing I have seen to this so far is a tool called iCAT that was produced by the Protégé team at Stanford. This was the tool that was used for creating ICD-11. Here's a little screenshot of it. In the three years of its use, it was used by over 270 domain experts around the world. They defined 45,000 classes, 260,000 changes were made in it, and there are 17,000 links to external terminologies. So it was quite a successful effort and a demonstration of the ability to have a broadly collaborative tool that used RDF and OWL under the hood. I want to mention also a similar effort that's going on right now in the financial industry. It's called FIBO, which stands for Financial Industry Business Ontology. There's a group that is defining various financial-related standards in RDF. The effort is somewhat similar to the Yosemite Project, but it's narrower in scope. It's limited to financial reporting and policy enforcement, defining ontologies for those so that they have standard, computable ways to represent that information. And they're using things like GitHub and other tools to help with this collaboration. Another way that RDF helps with standardizing the standards is that it helps avoid what we call the bike shed effect; if you haven't heard of that, I'll explain it in just a moment. The reason it helps is that it allows each group to use its own favorite data format or syntax or names. Remember, RDF doesn't care about data format or syntax or even the names of things. It just cares about the information content. So, the bike shed effect: it's also called Parkinson's Law of Triviality. Parkinson observed in 1957 that organizations tend to spend a disproportionate amount of time on trivial issues. And he gave this salient example where a committee had three items on the agenda. One of the items was a nuclear power plant, which was going to cost $28 million, and another item was a bike shed that they wanted to put up, which was going to cost $1,000. And sure enough, they spent two and a half minutes discussing the huge item, the nuclear plant at $28 million, and 45 minutes discussing the bike shed. Well, standards committees unfortunately often do this as well.
They often have a tendency to spend hours deciding on things that really are inconsequential or completely irrelevant at the computable, machine-processing level, irrelevant to the computable information content. Things like data format, syntax, and naming. Things that really don't matter to the computer or to the content, the information content. Okay, the next part of the roadmap I want to talk about is the crowdsourcing of translations, this one here. So how do you achieve semantic interoperability? How do you make anything interoperable? Well, there are basically two ways to do it. One approach is you can make everybody speak the same language. In terms of computers exchanging information, this means using the same data models and the same vocabularies. The other approach is that you can translate between languages. In terms of computers communicating, this means translating between data models and vocabularies. Those are the only two ways you can do it, right? Standardize or translate. Now, obviously, we would prefer to use standards, because that's a much more efficient way to do it; you avoid the translation. But there are some fundamental limitations. One is that standardization itself takes time, and the more comprehensive you try to be in the standard that you're developing, the longer it's going to take. So there's really what I call a trilemma here, in that you can pick any two of these objectives. The standard can be completed quickly, meaning timely. Or it can be high quality. Or it can handle all the use cases, meaning comprehensive. And you can choose two out of three of those, but you can never simultaneously achieve all three of those goals. Okay, a second fundamental limitation of standards is that modernization itself takes time. So even if you do come out with the latest and greatest standard that's going to improve the world, it takes time for existing systems to be updated. They simply cannot all be updated at once. So there's going to be a variety of states of adoption across different systems, and you need to accommodate that. Okay, a third fundamental limitation of standards is that one size does not fit all. There are diverse use cases that need to be addressed, and especially in healthcare, there's just a wide variety of use cases that need to be addressed. And I've illustrated this with a picture about the need not just for different data, but for different granularities of representation. On the left, here's a little representation of a blood pressure measurement that glues the systolic and the diastolic together into a single string, 120 over 70. And on the right, it has been broken apart into separate numbers, 120 and 70. Now, you can imagine that the representation on the left is going to be easier, or more convenient, I should say, just for display to a doctor, because they're very familiar with seeing it that way. But the one on the right is going to be more convenient for other kinds of machine processing, where you want to do some kind of comparison of the differences over time, maybe. And furthermore, the one on the right has a finer granularity than the one on the left. It has captured the body position as well, which for many use cases won't matter, but for other use cases it may matter. So the point is that we don't have a one-size-fits-all situation. There's such a wide variety of use cases in healthcare and medicine that different representations are needed.
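As a toy example of translating between those two granularities, here is a short sketch in Python with rdflib that turns the combined "120/70" display form into separate numeric systolic and diastolic values; the namespace and property names are made up for the example. Note that going the other direction would discard information such as body position, which is exactly the granularity point.

```python
# A sketch of translating a coarse "120/70" display string into finer-grained values.
# The ex: namespace and property names are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/med/")  # hypothetical namespace

coarse = Graph()
coarse.add((EX.obs001, EX.bloodPressureDisplay, Literal("120/70")))

def split_bp(g: Graph) -> Graph:
    """Translate the combined display form into separate systolic/diastolic triples."""
    fine = Graph()
    for obs, combined in g.subject_objects(EX.bloodPressureDisplay):
        systolic, diastolic = str(combined).split("/")
        fine.add((obs, EX.systolicMmHg, Literal(int(systolic), datatype=XSD.integer)))
        fine.add((obs, EX.diastolicMmHg, Literal(int(diastolic), datatype=XSD.integer)))
    return fine

print(split_bp(coarse).serialize(format="turtle"))
```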
Finally, another fundamental limitation of standards is that standards themselves do not stand still. As Doug Fridsma likes to say, the only standard that isn't changing is one that nobody uses. So, you know, we have to be able to accommodate when a new version of a standard comes out. So here's some data that Rafael Richards at the VA came up with. He looked at the published information from these common terminologies listed on the left and what their rate of change was from year to year. And this is what he found: on average, they were changing at around 4% to 8% per year in general. This is according to the publishers of those terminologies. So that has to be accommodated. So the point is that even though standards are always preferable whenever you can use them, and you don't want to do translation if you can avoid it, translation is still unavoidable. It allows newer systems to interoperate with older systems, it allows different use cases to use different data models, and it allows standards to evolve. So we can't get around the need for translation. A realistic strategy for semantic interoperability needs to address both standards and translations. Now over time, the expectation is that the amount of interoperability that we get will go up, and the amount of interoperability that is achieved with standards will become a larger and larger portion, but the amount that's achieved through translations will never go to zero, because of those fundamental reasons that I just discussed. So there are some other reasons why RDF also helps with translation, and one of the key reasons is that it supports inference, and inference is a key thing that can be used for translating when needed. It also is helpful for supporting translation because it acts as a universal information representation, as I said previously, and this means that it will allow data model and vocabulary translations to be more easily shareable, because you have kind of a uniform basis that you can work against. So here's a little model of how translation of patient data can happen. Let's suppose you get some source data in one representation, and I've just illustrated it here as HL7 version 2.5, some flavor of that, and on the right, the target wants to receive the information in this format called FHIR. Now, the way this can be done in a uniform way is, step one is what we call the lift to RDF, which is essentially a format translation. It's just getting the data out of its existing format and into RDF. It's a very direct kind of translation. At the other end, step three is the inverse of this. It's the drop from RDF. It's, again, a fairly straightforward, direct syntactic mapping from RDF into the target format. Now, the really important work actually happens in the middle, the translate step, step two, and this is RDF-to-RDF translation, because the RDF that you get out of the lift, even though it is RDF, is not necessarily semantically aligned with, or using the same data models or vocabularies as, what you need on the right when you want to drop out of RDF. So this middle part is known as semantic alignment or model alignment. It is an RDF-to-RDF translation. So by viewing the process this way, we have neatly factored out the direct lift and drop, which are just simple syntactic mappings in and out of data formats. We've factored that out from the semantically meaningful part of the translation, which is in step two, the RDF-to-RDF part. So how do you do step two? Well, translation is not easy.
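To make step two a little more tangible, here is a minimal sketch in Python with rdflib in which the RDF-to-RDF translation is expressed as a single SPARQL CONSTRUCT query. The "v2-style" and "FHIR-style" namespaces and property names are stand-ins invented for this example; a real semantic alignment would involve many more rules than this.

```python
# A sketch of the lift / translate / drop pipeline, focusing on step two.
# The src: and tgt: models are hypothetical stand-ins for HL7 v2-style and FHIR-style RDF.
from rdflib import Graph, Literal, Namespace

SRC = Namespace("http://example.org/v2model/")    # hypothetical source data model
TGT = Namespace("http://example.org/fhirmodel/")  # hypothetical target data model (used in the query below)

# Step 1 ("lift") would have produced RDF shaped like the source model:
lifted = Graph()
lifted.add((SRC.obs001, SRC.obxValue, Literal("120")))

# Step 2 ("translate"): RDF-to-RDF semantic alignment, here as one SPARQL CONSTRUCT rule.
translate = """
PREFIX src: <http://example.org/v2model/>
PREFIX tgt: <http://example.org/fhirmodel/>
CONSTRUCT { ?obs tgt:valueQuantity ?v }
WHERE     { ?obs src:obxValue ?v }
"""
aligned = lifted.query(translate).graph

# Step 3 ("drop") would map this target-model RDF into the target syntax.
print(aligned.serialize(format="turtle"))
```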
It's not easy to do this model translation. There are a lot of different data models and vocabularies that may be in use. And currently this kind of work is typically done in proprietary, black-box integration engines. But now, by using RDF as this common semantic layer, as this uniform information representation, we have the ability to have shareable rules, or shareable executable translators, that can then be crowdsourced and reused, mixed and matched. So this brings up the notion of having a crowdsourced translation rules hub. Now, it doesn't actually have to be a hub. It could be some kind of distributed thing. But the point is that it's crowdsourced and shareable. Here's an illustration. Again, it might be based on GitHub or WikiData or BioPortal or WebProtégé or something else. The idea is that it would host translation rules. And when I say rules, I'm using the term in a pretty loose way. It's really just any executable way of transforming from one form of RDF to another form of RDF, or it could do the lift and drop as well. So this rules hub should be agnostic about the rules language. And I've illustrated here that it might accept a wide variety of rules languages: OWL 2 rules, N3 rules, SPARQL rules, SPIN notation, or even just Python or Java code. Anything that knows how to translate from RDF to RDF could be put into this translation rules hub and reused. So there would be various metadata that would go along with the translation rules. It would indicate the source and target languages or classes that you're translating from and to. It would indicate the language of the rule set or translator. It would indicate any dependencies it has. It might have test data that goes along with it and validation information. It might have license information, ideally free and open source for crowdsourcing, but maybe some other model would emerge as well; there might be commercial efforts too. It would indicate who maintains it. There might again be metrics on it, both subjective and objective, like how many downloads, who wrote it, how many people like it, things like that. And it could even have digital signatures of endorsers, of parties that endorse it. Very much like the way Red Hat produces Linux: even though Linux is open source, Red Hat as an organization produces their endorsed version of Linux and sells that commercially. Now, one thing I want to emphasize is that this crowdsourced translation rules hub is only holding the rules for doing translation. It's not doing the translation itself. So there's no patient data getting uploaded into that rules hub. The rules get downloaded for individual use. Okay, the next component of the roadmap is this center part: incentivize. The reason this is on here is that the sad fact is that there is no natural business incentive for any healthcare provider to make its data interoperable with its competitors'. That's just the way it is. And the bottom line is that there really have to be some kind of carrot-and-stick policies that propel healthcare providers to make their data interoperable with their competitors. It ain't going to happen naturally. Now, this is not the focus of the Yosemite Project, but it is an absolutely essential thing that needs to be addressed, that policymakers need to address, and that's why we specifically call it out on the roadmap. So the final thing I want to get to is the question of what semantic interoperability will cost. So here are my wild guesses.
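As a purely hypothetical sketch of what a metadata record in such a rules hub could look like, here is a small example in Python with rdflib. The hub vocabulary, URIs, and values are all invented for illustration; no such hub or schema exists today.

```python
# A sketch of metadata that a translation rules hub might record for one rule set.
# The hub: vocabulary and all URIs/values below are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS

HUB = Namespace("http://example.org/ruleshub/vocab/")                 # hypothetical hub vocabulary
rule_set = URIRef("http://example.org/ruleshub/rules/v2-to-fhir-bp")  # hypothetical rule set

meta = Graph()
meta.add((rule_set, HUB.sourceModel, Literal("HL7 v2.5 OBX (RDF form)")))
meta.add((rule_set, HUB.targetModel, Literal("FHIR Observation (RDF form)")))
meta.add((rule_set, HUB.rulesLanguage, Literal("SPARQL CONSTRUCT")))
meta.add((rule_set, HUB.testData, URIRef("http://example.org/ruleshub/tests/bp-cases")))
meta.add((rule_set, DCTERMS.license, URIRef("https://opensource.org/licenses/MIT")))
meta.add((rule_set, HUB.downloadCount, Literal(42)))

print(meta.serialize(format="turtle"))
```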
I've given a very broad range here. For the standards part, initially it might take 40 to 500 million dollars to get that going. For the translations part, it might take 30 to 400 million to get that going. And on an ongoing basis, it might take 30 to 400 million a year to maintain that kind of standards work and 20 to 300 million to maintain that kind of translations work. So a total of maybe 50 to 700 million a year. That feels like a lot of money, doesn't it? And these are just my wild guesses, right? I'd like to know what other people's guesses are. But the main point here is not what the precise numbers are. The main point is that the cost of not doing this is enormous. One estimate that I saw put the cost of non-interoperability, which is the current situation now, what it is now costing us to not have interoperability, on the order of $30 billion a year. That's $30,000 million per year. Way more than it would actually cost to do this interoperability. Okay, that brings me to the end of this. I want to bring to your attention some upcoming webinars also; I'll just list them here. And I'm going to open the floor for questions now, too. So, Eric, do you want to... and I'm going to put the roadmap slide back up. Eric, do you want to open the floor for questions now? Sure. So before we do that, and thanks so much, David, that was a really wonderful presentation. Before we start the Q&A, and while we allow some more people to type in their questions, I want to let you all know about an opportunity to meet David face-to-face later this summer at the 2015 Smart Data Conference, taking place in San Jose, California, August 18th through 20th. David will present key things you need to know about RDF and why they are important. So if you've had your interest piqued about RDF today and would like to hear David speak about it more in depth, it's promising to be a very important and interesting presentation on that topic. Also, a reminder that attendees of this webinar are eligible to receive a 20% discount to that conference. Just use the code webinar. And as we give you some more time to type in some questions, let's get started with these. David, you had one really compelling slide pretty early on about how there are currently over 100 standards in the National Library of Medicine's repository right now. Right. How is that reflected globally? Are any of those standards global, do you know, or is there a big global effort to make healthcare information interoperable? Is that primarily a U.S.-based effort at this point? Well, the National Library of Medicine is a U.S. effort. By the way, here's the URL for these slides in case people would like to look at the slides later or download them. I just put it into the chat window. Yes, and we will also post those along with the recording of this webinar and send everybody who attended today a link to those. But go ahead. Yes, so the National Library of Medicine is a U.S.-funded effort, so it is going to be U.S.-centric, but I think a lot of these are international as well. I don't actually know exactly what the coverage is internationally. I know at least a number of them are international, but I don't know how broad the international coverage is. You laid out this wonderful grand vision and you talked a little bit about some of the challenges. Do you see the most difficult hurdles at the moment being technical, cultural, financial, regulatory? Where do you see the biggest hurdles at this point, getting movement toward this vision of interoperability?
Okay. Ultimately, the most important one is providing the motivation for doing interoperability, because if there is insufficient motivation for doing it, then no matter how good the technical solution is, it simply will get nowhere. So that's why we explicitly called out the need for carrot or stick incentives, or a combination, for interoperability, and this is being addressed, at least to some extent, already. So I would absolutely put that as number one. But beyond that, there are significant technical barriers as well in this sort of patchwork of standards that we have already. There are so many different standards, and they're defined in so many different ways. So I think that there are multiple things that are important here. It's kind of hard to say, beyond that, which one is number one. Do you want to speak a little bit about the Yosemite Project as a project? What organizations are involved right now? How might listeners be able to get involved themselves? Sure. So the Yosemite Project is kind of a bottom-up effort that came out of a workshop we did a year and a half ago at the Semantic Technology Conference that explored the idea of using RDF as a universal healthcare exchange language. And that was in response to the need that was identified by the President's Council of Advisors on Science and Technology, the PCAST report in 2010, which called out the need for a universal healthcare exchange language. And a number of us who had already been looking at RDF and had already been thinking along those lines, recognizing that RDF could be a very good basis for that kind of thing, put together this workshop to explore it. The workshop had, I think, about 25 or so attendees, and it went very well. And the culmination of that was a proclamation that we put out called the Yosemite Manifesto, which said, in essence, that we think that RDF is the best available candidate for this purpose, and said a few other things. So a year later, at the next Semantic Technology Conference, we decided to follow that up by creating more of an action plan on how to do it. And this was the genesis of the Yosemite Project. The Yosemite Project is a collaborative, bottom-up effort by people that are interested in achieving semantic interoperability in healthcare. At the moment, it is not funded from the top down. We have not pursued funding for it yet, although we may at some point. But it's really a group of people that are interested in and dedicated to achieving interoperability and who are involved in these efforts on a day-to-day basis. So if you want to get involved, probably the easiest way is to email any of the people on the Yosemite Project Steering Committee. If you go to yosemiteproject.org, at the bottom of the page, you can look for the Steering Committee and email us. Great. Okay. Well, I do want to give people the opportunity to ask questions. This is sort of a last call for questions at this point. I want to remind everybody that the next Smart Data Webinar in this series will be on August 13th and will be on the topic of one of the syntaxes that David was speaking about a bit earlier for serializing RDF. It's one of the newer syntaxes; it's called JSON-LD. And the presenter for that webinar will be Brian Sletten of Bosatsu Consulting. Brian is a wonderful instructor.
And so if you're curious at all about how you might begin to model RDF using what a lot of developers consider a very familiar-looking format, JSON, it's a good opportunity to start looking at that and learning how to use JSON-LD. So that's Thursday, August 13th. If we don't have any questions, and it looks like we don't have any at this point, David, if people do have questions after the webinar, how can they reach you? Yeah, just email me. You can look up my contact information; if you go to yosemiteproject.org, there's a contact link at the bottom of the page. You can contact us that way. Let me also just point out that today's webinar was kind of the intersection of two webinar series: the Smart Data series that Eric was just mentioning, and then the start of the Yosemite Project webinar series as well. So I just wanted to point that out in case people were confused at seeing other dates on the slide here. Indeed. David, thank you so much for a really great presentation. Thank you to everyone who tuned in today, and we look forward to seeing you again online. Great. Thanks very much, Eric, and thanks all for joining us.