So let's look at our agenda if we could, Michael. We're going to talk about MISMO and FIBO and what they are. We'll proceed from there to talk about semantics and FIBO, and then our process of building FIBO for loans, including our objectives, our approach, and the use case that we're working on today. We'll then conclude with our lessons learned. Let's go to the next slide. So MISMO, what is it? MISMO stands for Mortgage Industry Standards Maintenance Organization. It's a term that describes the organization as well as the standard that the organization produces. It's a fully owned subsidiary of the Mortgage Bankers Association. But today we're going to talk about what specifically it does and how it is different from FIBO. MISMO is a mortgage industry data standard that is used to exchange data between business partners, usually when there is a mortgage-related transaction to be had, whether it be a loan acquisition or a request for a credit report, for mortgage insurance, or for underwriting. These are all transactions that MISMO can support, where you have a request coming from one business party and being sent to another, who would in response provide output or a response. It's based on XML Schema, and its structure is based on traditional data modeling concepts. The messages are structured explicitly, meaning they contain just the information that's necessary and has been agreed upon between the two business partners to conduct the transaction at hand. So usually interfaces and services are designed to expressly support those messages, those transactions, and their responses. The actual model content for MISMO reflects the needs of the industry that created the standard, which is the domestic United States mortgage industry. It focuses on all the things that need to happen behind the scenes to make the mortgage industry work.
If we go over to FIBO then on the next slide, we have the Financial Industry Business Ontology, FIBO. It is managed by the Enterprise Data Management Council, or EDM Council. It's at a somewhat different scope: a standard set of financial concepts and their definitions. It's based on RDF and OWL, semantic technology. The structure is not based on traditional data modeling concepts but on triples that are used to make assertions, and we'll look at that in a more detailed way in a couple of slides. The data content that FIBO is used for is typically expressed in web pages, published ontologies, or other content on the web, generally not transactions between specific parties for a specific type of business function. The model reflects business concepts, financial concepts, used by industry partners across the financial industry internationally as well as in the United States. Going to the next slide, we have a few graphics to really put the differences between the two standards into perspective. MISMO is based on the mortgage industry, so it's focused on the US domestic market. It's expressly data needed for transactions, and you usually have a defined sender and receiver. The MISMO standard has been in place for around 20 years, so a pretty long time. It has a very mature process and infrastructure for creating the standard and has iterated through several versions prior to the one that it's working on now, which would be version 3.4.1 or 3.5, depending upon what needs to go into the next version. FIBO, on the other hand, is focused on the wider financial industry and is certainly international in scope; we have participants in several nations. It reflects all financial market concepts. So today we're going to look at loans, but there are certainly other areas of FIBO that you've probably learned about today that reflect different domains and different areas in the financial industry.
And FIBO is used to interpret disparate information, or to harmonize data, as we like to say sometimes. So you can really see the difference in scope: one is more vertical, focusing on the mortgage industry specifically in the US, while FIBO is more broad, covering all of finance, and is not only a domestic concern. Moving to the next slide, you'll see we've done the same thing with some technology differences. MISMO is XML Schema-based versus FIBO being RDF/OWL-based. MISMO is used to create structured messages, where FIBO is used to interpret unstructured data as well as structured data. With MISMO, generally speaking, you're transmitting a transaction between two business parties, where FIBO is really focused on conducting reasoning and generating insights over existing data. So, in a nutshell for the next slide, Michael: when you're requesting a service from a specific party or submitting data to a specific party, looking to transmit transaction detail or a detailed request for some type of service, you're probably looking at a MISMO-like structure if you're in the mortgage industry and looking to use a standard. If you're looking at something more broad, maybe you want to integrate your enterprise data across several different domains of financial instruments, you would be looking at FIBO. If you want to stress machine inference or integration of multiple data sources, then you're probably looking at FIBO as the go-to standard for your purpose. Thank you, Lynn. So as Lynn pointed out, there are some important differences between MISMO, which is an XML standard, and FIBO. And FIBO really is about semantics and meaning. Look at the lower-left picture. We have an estimate, when they were established, how high in feet, population numbers. You can add up all those numbers and get a number, but what does that number mean?
But is that good enough for legal contracts and databases, et cetera? As Lynn pointed out, XML is about structure. It's a standard that was quite revolutionary at the time: it used to be that data sat in fields a human couldn't look at and understand, and XML changed all that. You can look at the data fields and they have names that humans can understand. But the machines can't understand the meaning. The meaning in an XML data schema resides in the names of the words, and the human has to figure that out. Whereas FIBO takes us to the next step and says let's represent some of that meaning in ways that allow machine-level processing to add value. So we're now going to look at this notion of a triple and see how this actually shakes out, how the semantics works, and how the ontology works. We've been talking about FIBO, but what does this ontology really look like? We'll get a little bit of a close-up view. So these are triples, all right? I can't actually point on the screen, but that's okay. We have triples, say from the lower left to the upper middle. There's a data item, right, which says that Town and Country is a borrower from the Fed. So there's some loan contract where the Fed is lending money to this Town and Country bank, right? That Town and Country bank exists in a national registry. Then we have the ontology as the metadata. So this is a loan contract; it characterizes a class, like an entity in an ER diagram. And then we say, well, these are two examples of loan contracts, okay? The other part of this, the key aspect of a loan contract, is the parties. The borrower and lender are the two main parties. So we have something called an independent party, and it could be a person or an organization, okay? So this is a quick little snapshot of what this would look like under the covers.
So you have the schema and the data, and it turns out they're all in the same underlying representation. It's all triples, okay? And here we describe at the meta level how a loan contract is connected to these parties, right? It necessarily has a borrower and it necessarily has a lender, and at those points, those things must be independent parties. So this is an example of some actual OWL, some actual triples, in a graphical visualization syntax, okay? Now what's really important is, while it's true that this ontology layer is a conceptual model, it describes the meaning, we don't have to have multiple layers, conceptual, logical, physical. We can actually use the conceptual model directly and create our triples against it. So this is an important thing that you have the option to take advantage of. If you're a bank or any kind of institution, the triples can come from anywhere. They could come from a different database, and because you have globally unique URIs, you can actually do that and snap things together. Whereas typically in a major organization, it's hard enough to share data with, you know, the next group or the next database, because you don't have the technology to achieve that global sharing of both data and metadata. Okay, so triples are the common denominator, and they can come from a variety of places. You can take XML and extract out triples. You can take relational databases; there are tools out there which automatically extract triples from relational databases, and you can even do it virtually. You don't have to move the data, no ETL; you can just create a virtual triple store over that data. You can extract directly from web documents; there's lots of natural language technology to pull out triples automatically, and of course there's lots and lots of available information on social media. So let's expand out this example a little bit more.
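The idea that schema and data share one underlying representation can be sketched in plain Python, with each statement as a (subject, predicate, object) tuple. The names here are illustrative stand-ins, not actual FIBO IRIs, but the pattern matches the slide: ontology-level and instance-level triples side by side in one store.

```python
# RDF-style triples as plain tuples: schema ("metadata") and instance data
# live together in the same store. Names are illustrative, not real FIBO IRIs.
triples = {
    # Schema-level triples: the ontology describing loan contracts.
    ("LoanContract", "hasBorrower", "IndependentParty"),
    ("LoanContract", "hasLender", "IndependentParty"),
    # Instance-level triples: actual data about one loan.
    ("loan-MC-123", "isA", "LoanContract"),
    ("loan-MC-123", "hasBorrower", "TownAndCountryBank"),
    ("loan-MC-123", "hasLender", "FederalReserve"),
    ("TownAndCountryBank", "isA", "IndependentParty"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Who is the borrower on loan MC-123?
print(query(subject="loan-MC-123", predicate="hasBorrower"))
```

Because triples from different sources share this one shape (and, in real RDF, globally unique URIs), stores from different databases can simply be unioned together, which is the "snap things together" point above.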
So here's the loan contract and these are actual triples. This is the actual data that you would have in your triple store; the values are obviously made up. So we have the borrower and the lender, right? And then for a mortgage loan contract, which is a specific type of loan, there's a security agreement for the collateral, okay? So there's the actual collateral, which is going to be some real estate; real estate has some value, and there's an appraisal that's an estimate of that value. So you have all these key pieces that are part of a loan contract. Over to you, Lynn, to explain the XML schema. Yeah, so the things that you're seeing here on the screen are two views of essentially the same thing. In the lower right-hand corner, you see a conceptual picture of a deal. The XML model is structured such that it uses containment as a concept to establish relationships, as well as XLinks, which are a way of making explicit relationships between the various containers that are within the model. In the blocks that you see, we have an instance here of a loan, and it includes collateral as well as other information like parties and other loan-level information. If you look over at the XML side of things, what you're seeing in the middle of this slide is a snapshot from XMLSpy, which is an XML editor; I guess Protégé would probably be a similar or parallel tool from an OWL perspective. Here you can see that we have explicitly described data. We have what we call the deal, which is sort of a transactional level of container, and we have multiple instances of loan at the bottom that refer to loans as a specific snapshot in time, so that you can have a temporal or event-driven perspective on the data.
And then we have an entire structure that looks at collateral and understands that you might have collateral, a subject property, that is securing the loan, the mortgage, and that you also may need to describe other types of collateral that isn't real property, such as personal property, in a deal, and so we have the ability to do that. If you go to the next slide, you'll see an XML snippet that reflects the same concept Michael just talked about. You have the deal at the top, and it's holding containers related to the collateral. Subject property is a type of collateral, and we know that the actual property listed here in the yellow address line text is the subject property and is being used as collateral for the loan, because it is expressly contained within the transaction, the deal. We are able to convey property information such as the estimated value amount separate from the actual appraised value amount, because we have expressed data points that explicitly state that that's what they are. When people look at a MISMO XML schema or message, they note that it's a very expressive language and that all of the tagging is human-readable. However, that makes it very verbose in its definition of concepts: usually you're describing some basic concepts but also then clarifying and adding additional information to build up to the business concept that you might be looking for. If you work back down this XML message, you'll see that we close the collateral container and that there is then information about the loan. We have added some identifiers here to be able to say that the loan MC-123, which was the same loan that Michael just referred to in his example, is related to the underlying collateral. We know the actual number of the loan, and we know this expressly, based on the way the message is structured.
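The containment idea described above can be illustrated with a small, simplified fragment. The element names below are abbreviated for readability and are not the actual MISMO 3.x tag names; the point is only that the subject property is known to secure the loan because it sits inside the same DEAL container, not because any inference was run.

```python
# Simplified MISMO-style message: relationships are expressed by containment.
# Element names are illustrative, not the real MISMO schema tags.
import xml.etree.ElementTree as ET

snippet = """
<DEAL>
  <COLLATERALS>
    <COLLATERAL>
      <SUBJECT_PROPERTY>
        <AddressLineText>123 Main Street</AddressLineText>
        <PropertyEstimatedValueAmount>250000</PropertyEstimatedValueAmount>
      </SUBJECT_PROPERTY>
    </COLLATERAL>
  </COLLATERALS>
  <LOANS>
    <LOAN LoanIdentifier="MC-123"/>
  </LOANS>
</DEAL>
"""

deal = ET.fromstring(snippet)
# Containment alone tells us this property is the collateral for the
# loans inside the same DEAL; no reasoning step is needed.
prop = deal.find("./COLLATERALS/COLLATERAL/SUBJECT_PROPERTY")
print(prop.findtext("AddressLineText"))
print(deal.find("./LOANS/LOAN").get("LoanIdentifier"))
```

Note also the separate, explicitly named estimated-value data point: the distinction between estimated and appraised value is carried by distinct tag names, exactly as described above.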
So we don't need to infer that this is a mortgage or that it has collateral, because we can expressly see that that is so. And this is something that you would want. If you're transmitting data between business partners about a specific transaction, you don't want to infer detail for this type of use case, right? You want to be able to say exactly what it is you're requesting. You want to be very exact about the characteristics that you're providing for pricing, let's say, or for risk assessment, as another example. And this structure works very well for a lot of the loan origination and processing applications that have been the foundation of the mortgage industry. Thank you, Lynn. So, Michael, you're going to go on and talk about formal logic and inference. Right, we're running a little bit short on time, so I'll run through this a bit quickly. Okay, so what does it mean to be formal logic? It's kind of the underpinning of the semantic representations. Essentially, you represent meaning in a way that specifies what's in people's heads, okay? And what you can do then is increase the reliability of the information, and you can compute things automatically. That's what inference is about: drawing conclusions from information that's already in your database, and it's used in a number of ways. Very importantly, it can detect subtle inconsistencies. An XML schema will do data integrity and format checking, but it can't follow the subtle chains of inference that detect problems you otherwise wouldn't find. So it goes beyond XML validation capabilities. Another thing you can do is automatic categorization. You don't have to come out and say it's a mortgage. You can say it's a loan contract, you can say it's collateralized and that it uses real estate, and then the system knows it's a mortgage. So this is a way to add a bit of value as well.
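The automatic categorization just described can be shown as a toy rule over triples. We never assert "MC-123 is a mortgage"; a rule derives it from the facts that it is a loan contract collateralized by real estate. The predicate names are illustrative, and this hand-written rule stands in for what an OWL reasoner would do from class definitions.

```python
# Toy automatic categorization: infer "Mortgage" from loan contract +
# real-estate collateral. Predicate names are made up for illustration.
facts = {
    ("loan-MC-123", "isA", "LoanContract"),
    ("loan-MC-123", "hasCollateral", "property-1"),
    ("property-1", "isA", "RealEstate"),
}

def infer_mortgages(facts):
    """Apply one rule: a loan contract collateralized by real estate is a mortgage."""
    inferred = set()
    for (s, p, o) in facts:
        if p == "isA" and o == "LoanContract":
            for (s2, p2, o2) in facts:
                if s2 == s and p2 == "hasCollateral" and (o2, "isA", "RealEstate") in facts:
                    inferred.add((s, "isA", "Mortgage"))
    return inferred

print(infer_mortgages(facts))  # the system concludes MC-123 is a mortgage
```

Contrast this with the MISMO message earlier, where the same conclusion was carried by explicit structure; here it is computed, which is what lets the system also flag subtle inconsistencies that format checking alone would miss.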
All right, so now we're going to talk about the process of building FIBO, and then we're going to go into a little detail on the HMDA use case, but we're going to have to run through this a little bit quickly. Okay, so what was the point of the loans part of the ontology? Why did we have it? Well, we wanted to define loan concepts, in short, so that the industry can have a standard way to represent information, and an important constraint is regulatory compliance. As we all know, loans, mortgages, were a big part of why the economy blew up. So this is going to help track that better. Okay, and another key thing is to support the industry, not just to throw it out there and say "have at it, industry," but really to support and help the industry make use of it so that we can actually get the value we're all looking for. All right, here's a quick run-through of what we actually did. We first educated the team: here's what we're doing, here's why we're doing it. Then we defined the scope; we'll talk a little bit about that. Then we have the work team, and we say, okay, team, we're going to get started. We discuss the scope. We get various contributions from people who know a lot about loans; they contribute their information. Then we established a core model, and then we essentially iteratively build that out, right? So we do this cycle of scoping, more scoping outside the core; we author it, and then we send it back to the SMEs and review it. All right, so this next piece is that review cycle. We have the working scope, we select a key subset and model that out carefully, right? And we use whatever tool the people prefer to use; there are lots of tools out there to author in RDF and OWL. Then we create materials which can be more easily consumed by the subject matter experts; some of them may be familiar with OWL, some may not.
So we create some pictures and spreadsheets, things that make it easier for them to consume. Then we get some feedback, and we go through a little cycle there until we settle on it. When we've got that scope modeled out and we're happy with it, we move on: we extend the scope a little bit, right? And here's a simpler level of abstraction of the core model concepts that we saw an example of before: the contract itself, the obligations to fund and to repay, the different parties, Borrower or Lender. This expands out a little bit from what we saw earlier, okay? So then we had to integrate the loans ontology with what already existed. We don't want to start from scratch, and there was quite a lot of work done already. There's a foundations element; there's business entities, all kinds of organizations. And there's a whole bunch of ontologies, and they're in a network of importing each other. On top of that, there's an area called business finance and commerce. So this is what existed before we came along and started building loans, right? Then what we do is say, okay, don't reinvent the wheel; let's not just create new concepts for what's already out there, but connect up what we have with what's already there. So for example, a loan contract is a financial instrument, right? I'm not sure what happened here; anyway, it's a financial instrument, which is a written contract, which is a contract. As David said earlier, everything revolves around agreements and contracts. A credit check, that's something that happens, right? It's an occurrence. Universal identifiers: there's a financial instrument identifier, and there's a more generic concept of identifier. So this is the kind of thing that we do, and that continues for a variety of other concepts. In the whole process of connecting up to existing FIBO, there are a number of things that can happen.
Maybe there's a nice connection that we can just grab onto, or maybe there's not a connection, and then we have to make suggestions to the other working groups to say, well, here's something that's missing, let's add it in a good way, and then we'll come back and hook up to that new evolving piece, okay? So now over to you, Lynn; we've got five minutes left, including questions, so try to go through this as quickly as you can without losing the essence. Lynn, are you on mute? Hello, Lynn. Yes, I was on mute. So let's pass this slide and look at why we would look at HMDA. What is HMDA, in any case? HMDA is the Home Mortgage Disclosure Act. It's a rule that was promulgated by the Consumer Financial Protection Bureau in the U.S. last fall in its final state. And it is a rule that basically requires every United States lender to report information to the regulators, describing not only key risk assessment factors but also demographic information about borrowers. It is required so that the regulators and the public can assess any particular market participant's ability to lend in a fair and equitable way and to provide equal access to credit. So why would we look at this from a HMDA perspective? Let's take that to the next slide, Michael. One of the reasons we wanted to look at this is that the file that we use to send to the regulators is being pretty much completely rewritten. So lots of lenders and processors will have to retool their infrastructures as they look at complying with this new rule. The actual data that is required in the new rule is significantly expanded: what used to be a handful of data points is now stretching to very granular data, about 130 data points to create the report. The reporting also requires a lender to integrate data across different lines of business or transactional silos.
So integration and harmonization of data take a focal point here, in the sense that every lender will have to take data across their business lines and integrate it to create these reports. And then lastly, the new rule adds quarterly reporting on top of the annual reporting requirement for large filers. So for large organizations like my own, not only do you have to be able to process more data across your business lines in an expansion of transactional scope, but you have to do it four times as fast as you used to. If you move on to the next slide, Michael, this is a picture of our use case. It really is depicting what I just described: you have multiple lines of business in the illustration, each of them creating specific data sets based upon their own native formats, passing through a FIBO translation layer to generate an integrated report for the regulator. It's called the Loan/Application Register, the LAR, for the CFPB. We have been working on this use case for the last several months, among other reasons because of the amount of data that's required, because it has direct application for a number of the parties already participating in our working group, and because it showcases some of the capabilities of the semantic technologies very clearly, right from the start. So we've been working on this in working team meetings that occur every week, and we look forward to anybody who would like to join and participate with us. Thank you. I will hand this back over to Michael to wrap up and talk about our lessons learned. Okay, so we're out of time, but I'm just going to call attention to one thing, the last bullet point: you can directly use some of the FIBO ontologies as a data model. You don't need to have multiple layers. You can if you want to, but this has specifically been designed so that you can take it out of the box and create data.
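The use-case picture Lynn describes, multiple lines of business feeding one translation layer to produce a single regulator-facing report, can be sketched roughly as below. The field names and the two per-line normalizers are hypothetical, standing in for the FIBO-based harmonization layer in the actual architecture.

```python
# Rough sketch: each line of business has its own native record shape;
# a translation layer normalizes both into one integrated report.
# All names here are hypothetical stand-ins for the FIBO mapping layer.
retail = [{"loan_id": "R-1", "amt": 200000}]
wholesale = [{"LoanNumber": "W-9", "Amount": 350000}]

def from_retail(rec):
    """Normalize a retail-line record into the common report shape."""
    return {"loan_id": rec["loan_id"], "loan_amount": rec["amt"],
            "line_of_business": "retail"}

def from_wholesale(rec):
    """Normalize a wholesale-line record into the common report shape."""
    return {"loan_id": rec["LoanNumber"], "loan_amount": rec["Amount"],
            "line_of_business": "wholesale"}

# One integrated report across business lines, ready for quarterly filing.
report = [from_retail(r) for r in retail] + [from_wholesale(r) for r in wholesale]
print([row["loan_id"] for row in report])
```

The design point is that adding a new business line means writing one more normalizer into the shared vocabulary, rather than reworking the report, which is what makes the quarterly cadence tractable.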
So key takeaways: MISMO and FIBO are not two alternatives where one should replace the other; both can live on, and each has a role to play. And I'll just leave time for a question or two; I'm around afterward as well. Okay, so the comment, if I understand it, is that there are lots of other standards out there, and how many others are we going to work with? That's a good question. We're working with HMDA right now. Lynn, I don't know, you may have a comment about that. Are you on mute? Is the question why we focused on MISMO? No, the question is that there are other standards out there; can we work with other standards as well as with MISMO? I can't quite make out the question, so I will just give my two cents. I think that as other standards come along, if people want to use FIBO and want to know how it interacts with a standard, we can work with them and try to find those connections. Okay, the next question: the asker isn't very familiar with MISMO, but he acknowledges that FIBO is very large, and asks what problems we had in actually mapping the two. So we're not actually directly mapping the two at this point in time. Lynn, do you have a comment on that? Yeah, so what we are doing is letting MISMO inform our work. Given that MISMO has a good amount of adoption and industry penetration, you don't want to recreate the wheel. We want to take the lessons we've learned using MISMO and its structures and let them inform our work. And it's certainly not the only standard that we refer to; we also look at ISO 20022, and we look at other standards, like those related to insurance or the ACORD model, as relevant. The reason we look primarily at MISMO as sort of the model is that a lot of our working group participants cross over from both groups.
And so we have a base of knowledge there that helps to inform what we're doing.