So just to get started, a couple of real quick slides. I haven't seen some of these statistics this week, and I think it's important to note which banks are considered globally systemically important. Of course, Wells Fargo is one of those, and most of the top ten are in the U.S. But in the European Union there is another group of banks considered globally systemically important, including Nordea, which we'll talk about in a few minutes. From the Basel III risk data reporting requirements and recent reports that have come out of the regulators, it's pretty clear that the number of banks not meeting the Basel III requirements is fairly high. One of the reasons is that there are certain things they are simply not doing well. Throughout the assessments that have been done so far, and especially in the European Union, they're falling down with respect to data architecture and IT capability, the accuracy and integrity of their data, data governance, timeliness, and adaptability; all of those things are missing. Nordea and a number of the large banks in the EU have been fined heavily for that, and they are not happy about being fined. So what they are really starting to understand, and to organize around, is that they need to do something about this. It turns out the lowest area of compliance across the board, according to the ECB, the European Central Bank, is data architecture and IT infrastructure, and this is certainly a big pain point for Nordea. A close second is governance. So what we have been doing is working with Nordea to build a capability within the bank to address some of those issues. A lot of them involve changes in personnel, but others involve education and training, rearchitecture of IT systems, and a number of other things. They have to do a better job of meeting those requirements.
So, Nordea Bank is the largest financial services group in the Nordics, which include Norway, Sweden, Denmark, and Finland, and Nordea actually also does business in Russia and the Netherlands, and they have offices in New York. They have roughly 11 million customers and roughly 650 billion euros under management. They are a very, very large institution, even though many of us in the U.S. have never heard of them. They're the largest bank in the Nordics, and they typically are compared with Deutsche Bank and a few others in the European Union. I don't think they're quite as big as Deutsche, and they may not be as big as Credit Suisse, but they're in that league. So, the work area we're talking about here involves capital markets, and part of what we've been doing with them is to look at their business architecture and their value chain to understand where data governance needs to play. Some of the main areas involved in that work include product valuation, position management, and P&L reporting, all very basic things, you would think. Except that there are 35 systems for trading. There are 17 systems that serve as sources for market data. There are another 50 systems that report in with respect to the P&L process. So the challenge is that they have all these siloed systems, many of which do not speak to one another, and then within the bank they speak five different languages, none of which is the bank's official language, which is English. If you walk down the hall, what you hear is a bit of Danish, a bit of Swedish, a bit of Norwegian, Russian over here, French over there, because they hired people from all these other banks, and a lot of German. And the challenge for them in a multilingual setting, with respect to a common language that they can use for governance purposes, is just to speak, to communicate.
We looked at the data management maturity model and which aspects of it were really business-facing, the ones they cared about in order to meet the regulatory requirements. So, for us as a starting point, we're talking really about their data management strategy and specifically about governance, which is where FIBO comes into play. Working with the business side, getting them to understand that they own the governance process, that it's not owned by IT, that it's something the business has to champion, and that they have to own the data sources that reflect their capabilities and the value streams they're enrolled in, even getting past that point has taken some work. But I think we're getting closer. The objectives for the program that's been established include putting the principles in place for supporting the business data owners, creating their policies and knowledge base for data governance and metadata management, ensuring that the metadata meets the requirements of BCBS 239 and the other regulatory requirements they're faced with, and then getting IT to collaborate with them so that the metadata is completely accurate and their data schemas can be aligned with it. So there are a number of things involved in that process, and FIBO plays a critical role in the middle. Nordea came to the U.S. about two years ago; in the first part of the year they went to Wells Fargo, they went to JPMorgan Chase, they went to see people at Citibank, several banks in New York, in order to figure out what they could do and what American banks were doing to address some of these challenges, and it was through that that they got involved with FIBO and were introduced to us.
So, we start from the business architecture, because they are still having challenges understanding what they do, and so really working with a strong business architecture to identify who their stakeholders are, who the managers of the data and the data owners are, what the relevant processes are for a particular area, and so forth, has been critical for extracting business vocabulary from the resulting business architecture. That process is the one that's followed by the books from the Business Architecture Guild, and for any of you who are familiar with that, it's affiliated with the OMG. Unfortunately, my partner was not able to be here to tell you more about that. He's a founder of the Business Architecture Guild and a leader in the field, a retired IBM Distinguished Engineer and a fabulous business architect, and Jim unfortunately had a family member pass away late last week, so he's not with us today. But what we do is put together a series of value chains. So what you see here is the 'Produce a P&L' value stream, which has a number of capabilities associated with it. The stages of the value stream are across the top, and you have the various capabilities in the pink boxes down below that, things like value control and reviewing the valuations for all kinds of instruments that are on the books, et cetera, in order to come up with the final P&L for the day. Out of that work, once we've had the value streams and capabilities identified, we've built an information map that describes what pieces of information are required to deliver each capability and what comes out of that capability. And again, more terminology can be derived from the information that's in that map and also from the interviews that are done to develop it.
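The capability and information map just described can be sketched as a simple data structure. This is a minimal illustration in Python; the value stream, capability, and information item names are hypothetical stand-ins, not Nordea's actual artifacts.

```python
# Minimal sketch of a capability/information map: for each capability in a
# value stream, record which information items it requires and produces.
# All names here are hypothetical examples, not actual bank artifacts.

value_stream = {
    "name": "Produce P&L",
    "capabilities": {
        "Value Control": {
            "requires": ["market data snapshot", "trade positions"],
            "produces": ["validated instrument valuations"],
        },
        "P&L Reporting": {
            "requires": ["validated instrument valuations", "prior-day P&L"],
            "produces": ["daily P&L report"],
        },
    },
}

def terminology_candidates(stream):
    """Collect candidate vocabulary terms from the information map,
    mirroring how terminology is derived from the map and interviews."""
    terms = set()
    for capability, info in stream["capabilities"].items():
        terms.add(capability)
        terms.update(info["requires"])
        terms.update(info["produces"])
    return sorted(terms)
```

Running `terminology_candidates(value_stream)` yields a deduplicated, sorted list of candidate terms that can then be checked against a standard vocabulary or added to a glossary.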
So the overall strategy with respect to metadata, once we have at least an idea of the capabilities for a particular area we're working with, is to reuse standards and best practices where possible, because Nordea had so much trouble with a common language, given the number of languages they speak, given how they overload English terms incorrectly, and all of the other things that were going on in the bank, that they felt starting with standards was really critical; it was the only way they were going to succeed. And so we've been using FIBO as the basis for that, and in fact some of the knowledge that's come out of the work with Nordea has already been integrated back into some of the original FIBO standards, including the business content. In addition to that, we derive information and metadata from the business architecture, as I mentioned, and we've been working with Nordea to tailor the policies and principles derived from the data management maturity model in order to put their own program in place. And finally, there's education of the IT people so that they can actually adapt their systems and data architecture to comply. So the governance program that is in place now at Nordea includes a tremendous amount of stakeholder analysis, because again, they're new to the game and they're really trying to get it right and understand what it is that they need to do. They all participate in active sessions for developing the value streams and identifying the capabilities involved in the business architecture. From those, we've been working with them to look at the business process models they already have in place, identify where those need to be modified, and map those onto the capability and information map and onto the other vocabulary, so that everything starts to line up and they're using the same terms across all of those artifacts.
We've created with them a glossary that extends the set of standards we're using, from FIBO and elsewhere, for any of the elements or metadata identified by the business architecture and through their use cases. Right now that's all collected in an Access database; we'll be moving it to Stardog, hopefully this coming fall, and we'll talk about that in a few minutes. Once we have at least a glossary for the terms we can't find in FIBO, we look at the relationships between those, try to establish connections with FIBO, and put together at least a taxonomic and high-level logical ontology. We work with their team to document the business rules associated with the elements in the business glossary that comes out of that work, and then work with IT to connect that to their IT artifacts, including the business logical data model. From a terminology perspective, this is one of the requirements of the BCBS 239 standard: they're asking for integrated taxonomies and architecture across the banking group, a huge challenge for a bank that has a retail bank, a wholesale bank, a wealth management group, an insurance company, et cetera, et cetera. They are small in comparison to many of the large American banks, but the vocabulary is what provides a basis for ground truth and allows them to meet those requirements. It's comprised not only of metadata but of a unified, logically consistent approach to concept identification, meaning representation, and modeling, so that they can do data governance using that vocabulary. We've been building the vocabulary and terminology with them using an ISO 704 approach. How many of you have ever seen a version of ISO 704?
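As a rough illustration of the kind of annotated entry this ISO 704-style terminology work produces, here is a minimal sketch in Python. The field layout and the sample entry are our own illustration, not a normative ISO 704 schema or Nordea's actual definition.

```python
from dataclasses import dataclass, field

# Sketch of an annotated glossary entry in the spirit of ISO 704: a term and
# definition plus supporting annotations (source, examples, usage notes).
# The field layout is illustrative, not a normative ISO 704 schema.

@dataclass
class GlossaryEntry:
    term: str
    definition: str
    source: str = ""                       # where the definition came from
    examples: list = field(default_factory=list)
    usage_notes: list = field(default_factory=list)
    synonyms: list = field(default_factory=list)

    def is_reviewable(self) -> bool:
        """An entry is ready for SME review once it has both a
        definition and a recorded source for that definition."""
        return bool(self.definition) and bool(self.source)

# Hypothetical example entry, not an actual bank definition of "trade".
entry = GlossaryEntry(
    term="trade",
    definition="An agreement between parties to exchange financial "
               "instruments for consideration.",
    source="working session notes; checked against FIBO",
    usage_notes=["Distinct from 'transaction', which also covers settlement."],
)
```

The point of the structure is that the annotations travel with the definition, so reviewers can always see where a definition came from and how the term is meant to be used.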
So it's the terminology work standard in ISO, which may be something you want to go out and read if this is important to you. What that involves is a rigorous process of building up definitions: ensuring that they have a standard format, making sure that you have annotations that support them, including what sources were used to define things, examples, usage notes, and a myriad of other annotations we use in order to define something so that people at the bank understand what you mean by a particular concept. There's a process identified here, and I won't go through it since we're publishing the slides for you to grab, that we use in order to come up with a really good glossary with annotations that is input to the ontology development piece. Once we have that, we do the analysis of the relationships known from the information map, from their current schema, and from what we know from FIBO, to put together the additional pieces that are part of the ontology. We started with them on a pilot program involving the definition of trade, which resulted in 200 terms, in addition to the word trade itself, in order to actually meet the goal. Some percentage of the concepts were already in FIBO, not all of them, but we contributed many of them back. A second pilot included market valuation, and collateral optimization is where we're working at the moment, and these are very large areas as well: 500 concepts related to market valuation, and we're still counting with respect to collateral optimization. Many of those are not yet in FIBO, but Nordea is committed to contributing back what they can. We had some tooling challenges, and I'll go through this quickly because we're running out of time, but there were no tools we could find to support the vocabulary piece; there are taxonomy tools, but most of them didn't support the level of annotations we needed. They're more comfortable with Excel, so we ended up building a prototype in Access that we're now planning to
move to Stardog. There were also few tools to support collaborative review of the terminology, not desktop tools but over the internet for SMEs to look at, so we've built a prototype tool on Stardog, which Evren is going to talk about in a moment. We're looking at additional tooling, and we've tested some SPARQL queries early on for looking at their logical schemas and validating those via the ontology, making sure that the definitions line up; we're using SPARQL queries against Stardog to do that and comparing the results against their logical data models, which are in an IBM framework we mentioned earlier; I forget what the acronym stands for. So this is a view of the prototype tool for terminology entry, which is pretty straightforward; you can see these in the slide deck. This is an extension to FIBO for market valuation calculations, shown in a Protege view of the ontology, and what we've done here is that only the things in black needed to be added to FIBO; everything else comes directly from FIBO, and we are using it to define market value calculations, and then specific calculations will have additional parameters. And with that, I'm going to hand it over to Evren. As much as Protege is a great tool for editing and viewing ontologies for experienced users, it's not really something you can put in front of the subject matter experts at the bank, so we built this tool mostly to display and visualize the ontology contents in a way that's a little more accessible; aesthetics was not the top priority, functionality was, for us. What you see here is the details about a certain concept: annotations about that concept, the formula that represents the concept, the part of the class hierarchy related to that concept, and also a natural language representation of the axioms in the ontology, so rather than showing the ontology language, it's a more readable, natural language representation of the definitions. So behind the scenes, what's
happening is that it's a JavaScript-based application that's sending queries to the Stardog database. The ontologies that have been developed in Protege are loaded into the database, and the front end is sending these queries live; all the links you see here are clickable, so you can navigate through the ontology definitions while the content is being pulled from the database as you browse through the definitions. So I'd like to talk just a little bit about the internals of the Stardog database and link it to our use cases. Stardog has the standard capabilities you'll find in any database of this kind: the core is an RDF database with a SPARQL query engine, so you can load your private ontologies, or any ontology that's represented in RDF, into the Stardog database, FIBO of course being the most common ontology used in the financial domain, in addition to private ones. And there's a reasoning component, so you can run your queries and have all the ontology reasoning happen as the queries are being answered. It supports rules as well, so you can add rules on top of your ontologies to infer relationships and get them in your query results. But the ultimate goal with Stardog, which we are developing at Complexible, is to make it a data unification platform: not just handling structured data represented in RDF, but also indexing natural text that can be part of the data. So if you have some text blobs or descriptions, we index them, and then you can use free-text queries to get those results. Or if you have spatial data encoded somewhere, coordinates or locations, we can index those as well, and then you can run queries like "find me the instances that are this close to a location." You can link to legacy data that's in relational databases and use R2RML mappings to represent that data as RDF, so you can have this RDF view over data that's being stored in a SQL database, run your SPARQL queries, and they are
automatically translated to SQL and answered. And there's also a feature that is going to be available in the upcoming release, which is extracting text from documents like PDFs or Word documents. You can feed these kinds of documents into Stardog, where metadata will be extracted and stored as RDF, the text inside the documents will be indexed as well, and you can also configure storage for the documents themselves. So whenever you run a query and find, for example, a policy about a particular domain and want to know where it is coming from, you can go and trace it back to the original PDF document where those policies were described. So, next steps in this work: we'll be continuing the development of the vocabulary, and a preliminary deployment is planned for later this year on the Nordea intranet, with the focus on developing the vocabulary around the trade-oriented concepts and collateral optimization. On the interface side, we'll be working on the search and review process; as you've seen, we just have a proof of concept, but we want to build a better tool for subject matter experts to see the definitions, verify them, and validate them in a better way. We are going to be using the text extraction and indexing capabilities I just mentioned, so we can link the vocabulary terms to the original documents and have tracing back to the original policy document. We'll be prototyping a schema validation tool for the data warehouse project, which is what Elisa mentioned before: as the schemas for the data warehouse are being generated, we'll be using SPARQL queries to validate those schemas, and we might also be using the R2RML mappings to pull in data as well. And we are planning to use the text indexing capabilities on the interface side too, so that the business architecture terminology that users are seeing can be linked as well. So, just a few final thoughts. Selling this within the organization is still slow
and it's difficult, and getting people on board when there are a lot of reorganizations within an organization is also challenging, so being able to show early wins is what's led to the success we've had so far. The other key has been the socialization of the governance process and of the metadata, so that all of the different organizations within the bank are using that same vocabulary. So that's all we have; thank you very much, and the slides should be available. As I said this morning, vendors are really, really important for us, as are the use cases, and so I asked the bank to bring the vendor, which is exactly what I wanted. Tomorrow we have a whole session, two of them, dedicated to vendors, and that's going to start off by showing an evaluation form that we've been developing and asking vendors to fill out. And then tomorrow evening, after the last keynote, in this room, any vendor who wants to come, and I've invited all the ones who expressed an interest, you're welcome to come here, and we're going to talk about more cooperation in 2016. So, are there any questions? So the question is: are we attempting to build a multilingual glossary for Nordea, and the answer is eventually; we are not starting there. Having common definitions in English is the first goal, and then once we have those, and people stop overloading the English terms, we are intending to represent the ontology in Swedish and Danish. I'm not sure whether they want to do Russian and Finnish yet, and I need native speakers to help me with that, but yes, eventually that's the intent. So the plan, from the perspective of the metadata they have: they have a number of tools they're using, so we're taking the ontology and, using SPARQL queries, driving the metadata back into, I think, the IBM business glossary tool, whatever that is, and into some of their other stores. And with respect to the logical schema, they're still early on in defining a new warehouse, so we're at the front end, and we can tell them
what terminology to use and then use the SPARQL queries to validate that that's what they've done. So we're still early enough that changing their existing data is not such a big deal.
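The schema validation idea from the Q&A can be sketched as follows. This is a minimal illustration in Python, assuming the governed vocabulary terms have already been retrieved from the ontology (in practice via SPARQL queries against Stardog); the column names and the normalization rule are hypothetical.

```python
# Sketch of validating generated warehouse schemas against the governed
# vocabulary: flag column names with no corresponding vocabulary term.
# In practice the vocabulary would be retrieved from the ontology store;
# here it is a hard-coded, hypothetical list.

def normalize(name: str) -> str:
    """Fold 'MARKET_VALUE' and 'market value' into one comparable form."""
    return name.lower().replace("_", " ").strip()

def validate_schema(columns, vocabulary):
    """Return the columns that have no matching vocabulary term."""
    known = {normalize(term) for term in vocabulary}
    return [col for col in columns if normalize(col) not in known]

vocabulary = ["trade", "market value", "counterparty"]
columns = ["TRADE", "MARKET_VALUE", "CPTY_ID"]
unmatched = validate_schema(columns, vocabulary)  # ["CPTY_ID"]
```

Unmatched columns like `CPTY_ID` would then either be renamed to a governed term or proposed as new glossary entries, which is the sense in which the vocabulary acts as ground truth for the new warehouse.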