Thank you very much, David. Appreciate it, and good afternoon everyone. So when I was asked to come and present at the FIBO conference, I had to make sure that I was going to be doing the right thing. Did I need to turn up and show off my rippling muscles at the world's premier fitness and bodybuilding conference in Germany? Or did I need to teach our kids to find their hidden artistic creativity at the FIBO Kids Art Academy? Or maybe I needed to earn my fortune by applying Fibonacci mathematics to the financial markets? Or maybe I needed to come along to this conference to talk about the Financial Industry Business Ontology. So my point is that really understanding the information, understanding its meaning, is essential to ensure that you don't turn up to a business conference wearing your Speedos. Within the enterprise we have this huge amount of diverse information, and what we're helping our customers to do is to unify all of that information that exists in different systems in order to get an understanding of risk at an enterprise level, such that they can control and manage that risk, but also to gain a real 360-degree view of the customer and to gain insights into the financial markets. So we see FIBO and FIBO-V as a really powerful enabler in addressing these key business drivers for the banks. And you might think that semantic technologies and ontologies are really bleeding edge, and certainly they are new, but actually we have hundreds of customers within the Fortune 1000 that are using semantics today. So this has moved away from bleeding edge into real technology that's being applied at enterprise scale to solve real business problems.
And in banking we see the ontologies being used to support the massive burden of regulatory compliance that the banks are subject to, to automate business processes that are very rich in content, and to gain new insights within the business analytics layer, because they're able to unify information and deal with new information or a wider range of information types. So the problem, particularly for those organizations that have consolidated and grown through acquisition, is that they have all of these different systems in different divisions of the bank. All of those systems use different schemas and they all use different vocabularies, so everything describes the same thing in a different way. And for the end users, this means that they struggle to really understand the data, to find the data that's meaningful and important to them, and to know its real heritage and trustworthiness. It's also very difficult to work with all the unstructured content, such as agreements and contracts, that exists within the organization. Sometimes banks will try to address this through traditional ETL processes. The problem with those is that they're very time-consuming to build, but more so they're very brittle: as the data itself changes, or the types of questions people are trying to answer change, these ETL processes can break down. So FIBO provides a real solution to a lot of this problem, because when combined with other taxonomies and vocabularies covering issuers and counterparties and securities and instrument types and all those other things, if you can combine those things together, link them in a way that makes sense for the bank, and enrich them for the purposes of the bank, then you've got a really powerful source of truth that can be used across the enterprise.
So that can be integrated into different line-of-business applications and data sources, but it can also be used to enrich the transformation of data as it comes into data warehouses and data hubs, to make that information well described, consistent in its vocabulary and really meaningful to end users. Data columns that are similar can be identified and combined across multiple systems. Data filled with different vocabulary can be mapped to a consistent set of vocabulary: a single meaning for a single thing. The data becomes much more traversable and understandable by end users because it's well described, and the facts and information that are contained in unstructured documents such as contracts and agreements can be brought into the mix as well and combined with the structured data to answer important questions for the business. So as you'll see in the live demonstration, assuming everything goes well, there are three parts that make up Semaphore. First is a platform for ingesting, customizing, sharing and managing taxonomies and ontologies, and we use the SKOS-XL format, and we've focused very much on making it really easy for business users, subject matter experts and analysts to work with the vocabulary and structured data, but also to be able to extract facts and relationships as RDF triples from unstructured data, which can be stored in the triple store of your choice. And then we also provide a whole set of interfaces and components for knowledge discovery and for surfacing all of this information for end users. Semaphore can run on-premise or in the cloud. Just a couple of examples before we go into the demonstration. The first of these is Thomson Reuters, and I use this case not only because they have a lot of financial markets information, but because they've grown through a lot of acquisitions, and the data that's behind all of those TR applications actually exists in many, many different systems which have been acquired over time.
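The "single meaning for a single thing" mapping can be sketched in plain Python. Everything here is an illustrative stand-in, not taken from FIBO-V itself: each concept carries one preferred label plus a set of alternate labels drawn from the source systems, and incoming field names are normalized against them, in the spirit of SKOS prefLabel/altLabel.

```python
from typing import Optional

# A minimal sketch of SKOS-style prefLabel/altLabel normalization.
# All URIs and labels below are invented for illustration.
CONCEPT_LABELS = {
    "http://example.org/ext#SettlementDate": {
        "prefLabel": "settlement date",
        "altLabels": {"settlement", "settle_dt", "settlement_date"},
    },
    "http://example.org/ext#Counterparty": {
        "prefLabel": "counterparty",
        "altLabels": {"cpty", "counter_party"},
    },
}

def normalize(term: str) -> Optional[str]:
    """Map a term from any source system to its single preferred label."""
    needle = term.strip().lower()
    for concept in CONCEPT_LABELS.values():
        labels = {concept["prefLabel"]} | concept["altLabels"]
        if needle in labels:
            return concept["prefLabel"]
    return None  # unknown term: a candidate for extending the vocabulary

print(normalize("SETTLE_DT"))  # settlement date
print(normalize("cpty"))       # counterparty
```

In a real deployment the label sets would come from the managed vocabulary rather than a hard-coded dictionary, but the lookup shape is the same.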
And the problem that they face is that it's very hard to understand what data is in there, what the data really means and how to use it. Even for developers it's very difficult to build new applications or to enhance existing products for their customers, because the data is so opaque. So what they did was build an information layer which allows them to describe all of that underlying data, to organize the data into hierarchies so it's easy for people to traverse and access, and to create rich metadata that really helps people to correctly understand that data. On top of that semantic layer they're now able to build new products and enhance existing products much, much more quickly, but also to give their end users much better access to report on, query and analyze that information. So rather than ripping and replacing a whole set of systems, which was simply deemed too expensive and risky, they've been able to abstract the problem away. The second example is with KPMG, who are providing a service by which tier-one and tier-two investment banks can onboard customers. The process of onboarding a customer involves hundreds of different document types with thousands of different information data points required to actually onboard that customer correctly. And so what they've done is use semantics and ontologies to automate the processing of all of these different documents as part of the onboarding workflows, to cross-reference all the data from these different documents to make sure that everything lines up and there's no suspicious behavior, and then as a result onboard the customer in a fraction of the time and create all of the reference data that then feeds the other banking systems involved in those counterparty interactions. So it ensures compliance, it cuts time and cost, but it also creates all of the reference data required.
So just to summarize, we provide a platform by which you can ingest FIBO-V very easily, share that around the organization to get feedback and collaboration from different parts of the bank, apply it to all of the downstream systems to provide a single source of truth based on FIBO-V, and automate the process of getting that data into data lakes in a consistent and harmonized fashion. So I'm going to hand over to my colleague for the live demo. We're making the URL for this instance of Semaphore in the cloud available, so if anybody would like to actually access FIBO-V through the software, they can do so just from this URL, which we'll provide at the end. Okay. So what we want to do now in this section is give a live demo of a few different things. First the FIBO-V itself, to be very specific, and then secondly some of the other things that you'll want to model as you go about expanding FIBO-V into specific use cases across the organization. Some of the things I'll show are things that we did on projects like the Thomson Reuters one and the KPMG project. But we're going to start with FIBO-V. So as you come into the tool, of course, you can have a look at all of the information that FIBO-V has. And I think a lot of people have already said this, that there's a challenge at the moment with adoption. So how do we get everybody to adopt FIBO-V? By exposing it to as many people as possible, in the most usable way possible, hopefully that's going to aid in that adoption. So as we come into the ontology we can of course see all of the concepts that have been converted, through those SPIN processes, from the OWL into SKOS-XL. And as we look at any of these we can then start to see all of the definitions that form the model itself. What we can also do in Semaphore is additionally visualize that.
So we can walk through that more as a graph, more of a chart view. We can dive into lots of the different areas and see that. And then we can see all of the additional metadata again. So, proof that a live demo is going to go wrong; let's try that. There we go, it's just a bit slow. So what we've got there is a nice way that people don't have to understand all of the technical details of the semantic technologies, of SKOS, of RDF, of OWL constructs. They can come in and very easily navigate around the ontology, find out what things are, find out what things mean. And you can see here the definition of a contract, all the different concepts that it's related to, and how it's specifically related to them. So that's the first point in adoption: let's get people to see it, let's get people to start to understand it, start to see if it's going to be useful in their environment. The next piece, and I think it's been alluded to in a couple of other talks, is then how do we get comments back. As you spread it across the enterprise, there are features in the tool, for example, to leave comments, so you can get all the people in different departments collaborating together on the ontology, and potentially these comments and suggestions could also then be fed back directly to the EDM Council. So you've got mechanisms in the tool, potentially automatically, to feed that back. But then the next stage is always going to be extension. There's always going to be a few things that are different in my environment, or new, that we need to create and control. So what we can do in the next step is actually start to customize the FIBO-V vocabulary. And we don't really want to alter the main vocabulary; what we want to do is link to it, so we can reuse as much information as possible.
So what we're going to do here, instead of creating anything from scratch, is bring in my FIBO-V model and then have an instance where I can just extend it for my own use cases. So if I come in here and have a look at the model now — well, hopefully; it's thinking about it. Here we go. So we now have the model linked in. What we can see there — we will again in a second, because I clicked it too fast — is a link in front of each concept. So that means it's just linked in; it's a reference to the original FIBO-V vocabulary. We're not duplicating anything here; anything we do here is going to be directly linked back. And then we might want to extend it. There are certain things that our clients and prospects have talked about, and if we want to, we can go into things like dates — specified dates, maybe. We might want to specifically add in a piece of information; it might be called a settlement date. Let's do that again. Here we go. So what we can do here is add in things like the settlement date. That might be a new thing that we want to monitor or talk about in the organization, and we can add that. And then we can also add additional definitions to it. So all of the information inside the ontology is completely customizable; you can put whatever relationships and information you want here. And what I'm going to do with that is add a definition, and actually a definition origin, which happens to have been provided to me before. So I've got this page, or a specific section of a page, from the MSRB, and I want to come in here and add that as our definition origin. So it's very easy to come in and extend it, to define it in ways that are more suitable for the applications that you're using. Once we've done that link, one of the interesting capabilities we've got here is to bring in that information directly — that page that we saw just a second ago.
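The link-don't-copy extension pattern can be sketched as plain (subject, predicate, object) triples. All URIs here are placeholders I've invented for illustration — the real FIBO-V namespace and concept identifiers differ — but the shape holds: the new concept lives in the bank's own namespace and points back to the base vocabulary, which is never modified.

```python
# A sketch of extending a linked-in vocabulary without altering it.
# All namespaces and URIs below are illustrative placeholders.
SKOS = "http://www.w3.org/2004/02/skos/core#"
FIBO = "http://example.org/fibo-v#"   # stand-in for the FIBO-V namespace
EXT = "http://example.org/bank-ext#"  # the bank's own extension namespace

triples = set()  # (subject, predicate, object)

# Link the new concept under the existing "specified date" concept.
triples.add((EXT + "SettlementDate", SKOS + "broader", FIBO + "SpecifiedDate"))
triples.add((EXT + "SettlementDate", SKOS + "prefLabel", "settlement date"))
triples.add((EXT + "SettlementDate", SKOS + "definition",
             "The date on which a trade is due to settle."))
# The definition origin is linked, not copied (an illustrative URL).
triples.add((EXT + "SettlementDate", EXT + "definitionOrigin",
             "http://www.msrb.org/Glossary"))

# Nothing in the base vocabulary is modified: no triple has a FIBO subject.
assert all(not s.startswith(FIBO) for s, _, _ in triples)
```

Because every statement is made about the extension concept, refreshing the base vocabulary to a new release leaves the bank's additions intact.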
So if I actually review this in the side panel — again, we'll see if my internet connection is going to allow me to do this — what it's going to do is automatically look up that URL and bring back the page and the specific definition from it. So again, we're trying to be as interoperable as possible. We don't want to be copying lots of data from here, there and everywhere; we want to link either directly to the FIBO-V vocabulary, or to the external references as well. So you can see the page being displayed there, specifically to say this is the definition for that concept. So that's our starting point. We can start to adopt the specified vocabulary, but then again we may want to extend it. One of the big challenges for Thomson Reuters, for example, was actually detailing and knowing what information lived in which sections of which system. So again, they added that to the model. For settlement date, for example, what we could do is say that in a specific system it might be referred to just by "settlement" — that might be its title. So what we could do there is specify an alternate label to say that in our reference sources, this is how we might identify a column or field. And of course, you'd have many, many different ones of those for all the different systems you have, all the different data types that you have. And again, it might not be done manually; you might have profiling tools that actually build this information into the ontology. But the idea is there: it's customizable, you can put this information in, and then as you go to query the ontology, you get all of this information back so you can easily connect back to all of those different sources that you're describing. So as we then look to extend it even further — and as I think the previous conversations mentioned — what you really want to do then is extend it with vocabulary: the terminology that backs the fields that we've got.
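The "which field in which system carries this concept" idea can be sketched as a small registry. The system names, field names and concept identifiers below are all invented for illustration; in practice they would be populated from the ontology (or by profiling tools, as mentioned above).

```python
# A sketch of a per-system field registry: (system, field) -> concept.
# All names below are invented for illustration.
FIELD_TO_CONCEPT = {
    ("trading_system_a", "SETTLEMENT"): "ext:SettlementDate",
    ("warehouse_b", "settle_dt"): "ext:SettlementDate",
    ("crm_c", "CPTY_NAME"): "ext:CounterpartyName",
}

def locations_of(concept: str):
    """List every (system, field) pair that carries a given concept."""
    return sorted(loc for loc, c in FIELD_TO_CONCEPT.items() if c == concept)

print(locations_of("ext:SettlementDate"))
# [('trading_system_a', 'SETTLEMENT'), ('warehouse_b', 'settle_dt')]
```

Querying the ontology for a concept then hands back every source location at once, which is exactly the connect-back-to-sources behaviour described above.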
So once we know where everything lives, we now need to know how to standardize it individually. In one system, you might have an issuer name or a counterparty name. In another system, you might have a Dun & Bradstreet number. You might have a different way of referring to exactly the same thing. So once we've normalized where things are, we now need to normalize what the information inside those systems is. And again, we can do that. In this case, I've got a different ontology — and there are many different sources for these — a counterparty ontology. And again, as we go in here, we'll see all of the information; in this case, it's just been imported again. So we can come in here, we can see the additional information, and as we come in, we'll see all of the counterparties. And we'll see, again, a lot of additional information that's relevant to any of those counterparties. So if we click on one of the companies, we can see that we have, excuse me, a number of related concepts: the industry that the organizations are operating in, the locations that they're operating in — which of course links off to currency exposure — many different ways of modeling this information. But then you've also got all of the alternate labels. Again, the terminology, the tickers, the other ways of identifying these organizations. As we look down the list, we've got things like the D-U-N-S numbers — other information that again might be relevant when we start to query. So with Semaphore, you've got a pretty easy tool to adopt FIBO-V, to build it out, to comment back to the standards council and internally so you can build out the right structures, and then to build out the terminologies that you're also going to use when you standardize the information. So again, as Toby mentioned, the goal of the design here is to make it as easy as possible, to make it accessible to the business, and that's why we've made a lot of the design choices that we have.
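Resolving the many ways of identifying the same counterparty can be sketched as a lookup over one canonical record. The company, URI, ticker and D-U-N-S number below are all invented examples.

```python
# A sketch of resolving tickers, D-U-N-S numbers and alternate names
# to one canonical counterparty URI. All records are invented examples.
COUNTERPARTIES = {
    "http://example.org/cpty#AcmeBank": {
        "prefLabel": "Acme Bank plc",
        "altLabels": {"Acme", "Acme Bank"},
        "identifiers": {"ticker": "ACME", "duns": "123456789"},
    },
}

def resolve(value: str):
    """Return the canonical URI for any known label or identifier."""
    for uri, rec in COUNTERPARTIES.items():
        if (value == rec["prefLabel"]
                or value in rec["altLabels"]
                or value in rec["identifiers"].values()):
            return uri
    return None

# Ticker, D-U-N-S number and alternate name all resolve to one URI.
assert resolve("ACME") == resolve("123456789") == resolve("Acme Bank")
```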
But then we've covered two scenarios so far. We've covered the ability to adopt the standard, maybe customize it a little bit for a specific organization, and then maybe extend it to understand which fields are using which data and what data is being used in each field. So we've modeled that out. So what's the next step? All of that, and unstructured content. So now you've got all of the structured content dealt with, you need to get in there and analyze all of the other bits: all of the contracts, anything else that comes in that's not already structured. So again, what we've got in here is a different model for dealing with that kind of capability. What we're doing in Semaphore now is defining a way of extracting information from each of those contract types. As Toby mentioned, there are hundreds of different documents that come in, and there are potentially hundreds of different fields in each of those documents. And we want to be able to take them out of that unstructured text and make them structured, so we can now again incorporate them in our analytics and our queries. So this model is all about the document structure, and not really about terminology anymore. But as we look through some of these, what we'll find is that they actually link very nicely back into the definitions in FIBO. A lot of the elements that we might want to extract from each of these documents actually tie nicely back into the contract elements that are defined in FIBO. So what we can see here are all of the different document types we can work with — obviously, there are many pages here — and you can see all of the things that we might want to extract. So as we go through an article of incorporation, we obviously want to know the company that it's incorporating. We want to know who the directors are, what the addresses are, maybe what the purpose of the company is, or any clauses if we're going through a contract.
So we can use the ontology editor again to maintain all of this information, to make it very easy for the business to use. But then as we look at the results, we actually look at content coming in. What we might have are very different documents. So as we come in, there are a couple of examples of articles of incorporation. One is very textual, very decently formatted; it's come in in electronic form already, so that makes it slightly easier for the extraction. When we look at the other example, of course, that's very, very different. That's come in as a scanned document that's been through OCR. And then we're going to try the same extraction from that document as well. So as we do that, we can come into the classification engine. We can literally — obviously you wouldn't do this one by one in real life, but we do this just for a demonstration — come in here, put those documents in and then see the kind of extractions that come back out. So as we see on the right-hand side, we've got the representation of the document. On the left-hand side, we've got all of the things we tried to extract and all of the things that actually were successfully extracted. And the key behind this extraction is not that we're just looking for entities and things in a document. We're looking for entities in a specific way, in specific sections, so it means exactly what we want it to mean. And so as we look through this example, we can see immediately that we've actually identified the correct document type: it's an article of incorporation. That's the first step, the document fingerprint. Then once we know that, we know the fields that we want to extract from this document. So from the article of incorporation, we definitely want to get an entity name. And again, we've got that entity name very easily extracted.
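The two-step flow described above — fingerprint the document type first, then apply only that type's extraction rules — can be sketched crudely with signature phrases and regular expressions. The phrases and patterns here are invented for illustration; a production engine would use far richer, section-aware rules.

```python
import re

# A crude sketch: (1) fingerprint the document type from signature
# phrases, (2) extract only the fields defined for that type.
# All fingerprint phrases and patterns are invented for illustration.
DOC_TYPES = {
    "articles_of_incorporation": {
        "fingerprint": ("articles of incorporation",),
        "fields": {
            "entity_name": re.compile(
                r"name of (?:the|this) corporation is ([A-Z][\w ,.&]+)"),
        },
    },
}

def classify_and_extract(text: str):
    lowered = text.lower()
    for doc_type, spec in DOC_TYPES.items():
        if all(phrase in lowered for phrase in spec["fingerprint"]):
            fields = {}
            for name, pattern in spec["fields"].items():
                match = pattern.search(text)
                if match:
                    fields[name] = match.group(1).strip()
            return doc_type, fields
    return None, {}  # unrecognized document type

doc = ("ARTICLES OF INCORPORATION. The name of the corporation is "
       "Florida Greenways and Trails Foundation, Inc.")
print(classify_and_extract(doc))
```

Classifying first is what makes the extraction mean "exactly what we want it to mean": the entity-name pattern is only ever applied to a document already known to be an article of incorporation.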
So when we do these extractions, we've got a couple of different options: we can actually then manage that and put it back into the context of the ontology. So where we had that counterparty ontology a second ago, that Florida Greenways and Trails Foundation might have already been a known instance. So instead of just building a piece of text here to say this is what the document's about, we can give it a URI; we can say this is your reference in the ontology to your counterparty. So again, making it very connected. We also then extract just pure entities: things like dates, things like people's names that we might not be controlling yet. Dates we probably never want to control, but people's names we might want to monitor over time. So again, that could be a source of information to feed back into the ontology itself. And then there's also information that might be purely textual. The purpose of the organization, for example, might not just be a simple thing; it might actually be a complex piece of text that needs further processing, further review. So what you've also got, around the core Semaphore tools, is that you're going to want to set up the workflow that actually helps you validate all of that information that's being extracted. So it's definitely not a 100% tool; it's a way of reducing the effort of manually processing content in contracts and then building ontologies, building semantic data warehouses, with that information. So hopefully — those demos mostly actually worked, if a little bit slowly, and I think that's a success in its own right. So really, in summary, the Semaphore tools can help with a lot of things.
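The post-extraction linking step can be sketched as follows: values that match a known instance in the counterparty ontology become URI references, while everything else stays a plain literal for later review or feedback into the ontology. The instance URI, the field names and the date are all invented for illustration.

```python
# A sketch of linking extraction results back into the ontology.
# The URI, field names and values are invented for illustration.
KNOWN_INSTANCES = {
    "Florida Greenways and Trails Foundation": "http://example.org/cpty#FGTF",
}

def link_entities(extracted: dict) -> dict:
    linked = {}
    for field, value in extracted.items():
        uri = KNOWN_INSTANCES.get(value)
        # Known instance -> URI reference; otherwise keep the literal,
        # which may later feed back into the ontology as a candidate.
        linked[field] = {"uri": uri} if uri else {"literal": value}
    return linked

result = link_entities({
    "entity_name": "Florida Greenways and Trails Foundation",
    "filing_date": "2015-03-02",  # an invented example date
})
print(result["entity_name"])  # {'uri': 'http://example.org/cpty#FGTF'}
print(result["filing_date"])  # {'literal': '2015-03-02'}
```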
What we really want to do is help people adopt FIBO, so trying to make it easy for them to see it, to view it, to understand it; then to use the ontology editor to expand on that, to make it relevant to themselves, to extend it with the reference vocabulary that they need; and then to be able to process the unstructured content to bring out all of those facts and information, so that all of the data is described in exactly the way you want it to be. And that — do you want to talk about the trial? Yeah, the URL is available here. We've also got some fact sheets, and the URL is on there. If you want to take a look at FIBO-V loaded up in Semaphore on the cloud, all you need is that URL, so feel free to grab a copy of that. When you've decided that you can't live without it and you want to buy it, there's my email address, and I'm happy to take any questions. Dean? So Dean, perhaps I'll jump in and actually answer the question, because I think that's really on us, in a sense. The question is: how do you really transition from the FIBO vocabulary? This was a great demonstration of the operational use of the vocabulary itself without going to the RDF/OWL implementation. However, if you did want to transition to the RDF/OWL implementation, how would you go about doing that, and what would be the migration path? And we're actually providing that link, because what we're publishing in the FIBO vocabulary — which we didn't see here, but it's rolling into our basic release of the FIBO vocabulary — is the link to the actual element in the RDF/OWL ontology, so that we'll be able to show the linkage and interoperability. So just by having that already, we're well positioned to align with the RDF/OWL. But Toby, Stuart, did you have any other comments to extend beyond that? Okay. And there's one back there who has a question.
So I think it's an evolutionary process, and many organizations, as Dean alluded to earlier, are really not ready to get their arms around the much more expressive but at the same time complex RDF/OWL representation of the same concepts in FIBO. The SKOS representation is much closer to a vocabulary that organizations can more easily assimilate, because the model is much more coarse-grained, if you will, in many ways, and aligns much more easily with the notion of a vocabulary or a glossary. It also drops very easily into vendor products that have positioned themselves to support SKOS vocabularies, and it provides the visualization. And this was a very nice demonstration showing the multi-dimensional visualization of concepts with their superclasses and subclasses in a broader/narrower relation, as well as the related properties and attributes and the metadata. So as far as an evolutionary process goes, this provides a very good initial ability to be immediately valuable for a BCBS 239 type of business need, which will allow companies to begin to assimilate the value of ontology and then use this as a stepping stone to migrate to the RDF/OWL. And it's not an either/or; based on use case, we will support both. Both our vocabulary and RDF/OWL have operational benefits for organizations. So with that, we are out of time. I want to very much thank Toby and Stuart from Smartlogic for succeeding with a live demo, which was really awesome, and thanks everyone. So what we're going to do is resume at one o'clock after lunch. Now, for those of you who have been speakers this morning, Dennis wants to get a picture. So if you're a speaker this morning, stick around and we'll get a quick picture and then send you off to lunch; otherwise, one o'clock.