Okay, thank you for coming, everyone. I'm responsible for the financial services industry at MarkLogic, and I will be talking about liquidity risk management, within the more general framework of liquidity management, as a case study of how semantics and ontology can actually enhance that process, rather than dealing with very specific aspects of the FIBO ontology and semantic technology. I'll be speaking more about what liquidity risk management is and where some of its problems can be addressed with the technology we're all talking about today. So you're dealing with a new way, or at least I'm proposing a new way, to look at how liquidity management and liquidity risk management can be reframed and rebuilt.

The primary difficulty we have right now is the pressure on financial institutions, especially the mega-institutions that have come into being, both to use liquidity management as a competitive edge vis-à-vis one another globally, since they all have global reach, and also to respond to the regulations. Almost every regulation that takes up the headlines today has a particular requirement about risk management, specifically about credit risk and liquidity risk, and about accounting for the internal funding requirements of the institutions. Okay, this is the original deck, but I'll continue with this.

Operational management stands on three legs, basically. First, funding management: the internal funding requirements, identifying funding gaps, and then responding to them. Second, managing the cash flow and collateral operations, that is, cash flow management and collateral operations, so you get the data into the decisions about funding. And third, the risk, and that risk is two-fold: one part is operational, getting the funding where it needs to go within the company, plus its implications on the balance sheet through the asset liability management process.

And there are three components. The yellow ones are the application stack, which may be homegrown or commercially available, and the bottom one is the process that harvests data from many databases. The databases underneath are legacy, transactional infrastructure, and there are many. Part of the reason is the mergers and acquisitions we've seen over the past 10 years: everybody accumulated smaller companies with their own, and sometimes incompatible, technology infrastructure. The funding part, as a business process, draws on the results that the collateral manager application and the cash flow manager generate, and on the liquidity risk application or module. So we've got data silos everywhere, but everybody wants a 360-degree view of where the funds originate and how they travel to their target budgets. And as they do so, they need to generate reports about cost of capital usage, the audit trail, and so forth.

So the liquidity risk management function, as this diagram explains it, is a continuous process, and the boxes at the bottom indicate what it is that we're missing and what we would like to have built into these boxes. The first one is essentially collecting the data that explains to us the behavior of the transactional systems and the data they generate, and understanding where the risk exposure originates in terms of risk groups.
The second box is the calculations, where we do the simulations and come up with the results, and the third box is the reporting. It's entirely sequential, and it was overlaid on a transactional infrastructure where you dip in, extract some of the transactions and flow of data, use it, and then pass it along to the next stage.

The other thing that is generally not captured in that diagram, or in the legacy systems, is the event signal data. That's generally outside the transactional systems, but events occur through news, market activity, volatility data during the course of trading, the things you may harvest from social media, and the payments, the irregularities in payments, and the payment supervision that comes through the fraud surveillance systems. All of that has generally been outside the transactional streams that feed existing liquidity management applications.

When we look at event stream management today as a technology stack, we're using complex event processing technology, messaging technology, transaction flows, XML, and derivative instruments traveling as documents, but all of them are independent of one another. They do exist in their own time dimensions, but temporally it's very difficult to relate them until they get accumulated in a data warehouse somewhere, and we don't establish the causal relationships at the point in time when they occur. We're waiting too long to start making sense of them in unison.

What we miss, the opportunity and the information we miss in this case, is that those independent streams, which may be client transactions, regularly published economic indicator data, market data, market activity information, trading data, and news, all have causal and temporal relationships. We can capture them at the point in time when they occur, make them travel together, and make them arrive at a destination, into an analytical box, as if they were one unit of work, one unit of information, and exploit it very early, rather than going through this sequential process of moving data, accumulating it, and then attempting discovery.

So when we look at event streams, we have to realize it's much more than data; it is contextual, operationally significant information, and these streams affect one another. We just don't know how to quantify those effects as they happen, but we should be doing that, and we should be doing it in the context of regulatory supervision, institutional governance, and paying attention to the P&L aggregation across lines of business.

So the proposition we would have is to look at events and capture them as RDF. I'll describe what the repercussions of that would be if we were thinking of a liquidity management infrastructure and a liquidity risk management infrastructure in that context. We would treat cash flows, right from the source origin, as events that we could transpose into RDF triples and manage through the flow as RDF. Same thing with collateral updates, updates to collateral valuations. And then we would actually establish the relationships between those using ontologies and semantics as they travel.

My examples are very simple; they're meant to be illustrative. All of this will be familiar to you, but I want to get to a point where I can relate what happens along the time dimension and the kind of analysis we need to be making at those points in time.
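To make that concrete, here is a minimal sketch, assuming Python and the rdflib library, of capturing a single cash-flow event as RDF triples at the source. The namespace, property names, and risk-group identifier are hypothetical illustrations, not drawn from FIBO or any production vocabulary.

```python
# A minimal sketch: one cash-flow event captured as RDF triples at the source.
# The ex: vocabulary (CashFlowEvent, memberOfRiskGroup, etc.) is hypothetical.
from datetime import datetime, timezone
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/liquidity#")

g = Graph()
g.bind("ex", EX)

event = EX["cashflow-0001"]
g.add((event, RDF.type, EX.CashFlowEvent))
g.add((event, EX.amount, Literal("250000.00", datatype=XSD.decimal)))
g.add((event, EX.currency, Literal("GBP")))
g.add((event, EX.sourceSystem, EX["payments-hub"]))
g.add((event, EX.relatedCollateral, EX["collateral-asset-2"]))
g.add((event, EX.memberOfRiskGroup, EX["risk-group-3"]))
# Timestamp the event at capture so downstream consumers can correlate
# it temporally with other streams (news, market data, and so on).
g.add((event, EX.observedAt,
       Literal(datetime.now(timezone.utc).isoformat(), datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```

The point of the sketch is only that the event carries its risk-group membership and its timestamp with it from the moment of capture, instead of those being reconstructed later in a warehouse.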
So, First Bank is in London, and London is in England, and ideally I capture these in RDF. If we are looking for banks in England that are interested in the collateral lending business, we will infer that First Bank actually is in England, because it's owned by a company in London, and therefore it's subject to regulations in the UK. And I would actually treat all collateral lending operations of First Bank as if they were subject to UK reporting regulations.

The workflow that defines the ontology is going to be based on relating the processes to the reporting requirements. The green ones are either internal reports or regulatory reports. The blue ones are the processes that I'm describing, which will actually use RDF, and we will correlate the calculated results that go into reports via the semantic relationships we establish as the RDF triples flow.

This is my view of how we would actually establish a FIBO framework specifically for liquidity risk management. If you go clockwise, starting with the boxes, we would be determining the relationship of a cash flow to its related collateral. In practice, in liquidity management, we establish risk groups. A particular cash flow is part of a risk group that is defined by sensitivity to, say, interest rates, or household income, or variations in other economic indicators. A collateral valuation is part of a risk group that is exposed to, say, the real estate market in Southern California or the currency fluctuations in Europe. We would actually establish those relationships, and then travel around into the delta calculations we would make when we do have changes in the underlying parameters or underlying data. And that would involve the calculation engines I talked about earlier, which would not only do the simulations incrementally as data changes but update the risk profile as they go along.

What I would do, detailing out the cash flow management and collateral processes, is actually insert the ontology into the cash flow manager, where the predictive forecasting capability lies, so that the cash flow manager is enriched with the ontology and mapped to a reporting hierarchy. Each reporting hierarchy is, in practice, mapped to a risk group, but today we do a lot of data transformation and data movement to make that happen. What we should be doing is giving the risk group a reporting path all the way to the management level on its own, and we would invoke that reporting path from the cash flow manager when the semantic relationship indicates a change in the underlying relationship. Same thing with the collateral manager: it is the host for the ontology definitions related to collateral valuations and the risk groups that different collateral assets fall into.

So instead of that middle tier that had a lot of ETL movement and relied on transactional systems for movement of data, we would actually insert the facility that builds the RDF, captures the data as RDF, and builds the ontology, and then move forward into the cash flow manager and collateral manager, which would activate the specific calculations we would do. This would not be sequential; these would be concurrent processes, depending on what the signal data is telling us.

And that's the short version of the presentation. So I'll stop there; that leaves us a lot of time for Q&A. There were a couple of slides that went into detail about how to construct risk groups and relate them to specific calculations in liquidity risk management.
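As a rough sketch of that location inference, reusing the same hypothetical vocabulary, a SPARQL 1.1 property path can stand in for a full OWL reasoner; rdflib evaluates the query directly over the triples.

```python
# A sketch of the First Bank inference: a transitive property path over
# ex:locatedIn substitutes for an OWL reasoner. All URIs are hypothetical.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/liquidity#")

g = Graph()
g.bind("ex", EX)

g.add((EX.FirstBank, RDF.type, EX.Bank))
g.add((EX.FirstBank, EX.locatedIn, EX.London))
g.add((EX.London, EX.locatedIn, EX.England))
g.add((EX.England, EX.subjectToRules, EX.UKReportingRegulations))

# locatedIn+ walks the chain FirstBank -> London -> England, so a bank
# recorded only as being in London still surfaces as subject to UK rules.
query = """
PREFIX ex: <http://example.org/liquidity#>
SELECT ?bank ?rules WHERE {
    ?bank a ex:Bank ;
          ex:locatedIn+ ?place .
    ?place ex:subjectToRules ?rules .
}
"""
for bank, rules in g.query(query):
    print(f"{bank} reports under {rules}")
```

Nothing in the data says First Bank is in England; the transitive path derives it, which is exactly the kind of relationship the reporting workflow would key off.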
But I can go into detail about those if that's of particular interest to anyone. So, any questions? Dean.

Right, that was one example. The question is about bi-temporality and the significance of bi-temporal capability in the context of liquidity risk management, with an example of where it would help. Say I have cash flow one from source one, and it belongs to risk group three; that cash flow is related to collateral asset two, which is also part of risk group three; and the cash flows are essentially mortgage payments from a debtor in county D. At some point I get a news report that says that for three quarters in a row, household income in county D declined by three percent. So at that point there is the regular time dimension, and then there's another time dimension, after the fact, in which I know there has been a household income decline in that county. Should I move my cash flow from one risk group to another? Should I move my collateral from one risk group to another? And suppose I know that the payments are going into a mortgage-backed portfolio, into a tranche that is interest payments only versus payments against the principal. Therein lies the point: I need to keep whatever I do in two different times. One is after I know household income has declined, when I have to look at my risk grouping differently, versus the state in which I would have continued, assuming nothing had changed and the payments were coming. There is no irregularity in the cash flows, except that there's this news that changes the parameters of the way I look at risk grouping. That's where bi-temporality would help: when did I know what changed, and what did I do after I knew what had changed? (A code sketch of this bi-temporal bookkeeping appears at the end of the transcript.)

Yes, we have customers who use that function for very similar portfolio optimization type applications. There is also a reporting application for internal surveillance at trading desks where it applies, which is very significant, though not as quantitative, but yes.

If you're capturing these as RDF, what are we using to leverage the RDF information? Well, it will depend on the ontology, but first of all, you can go SPARQL or OWL; you can use either. A lot of the customers have very specific requirements for what they're watching for. The change event, or the exception event, that they capture is going to trigger an application that looks at the entire RDF store, the entire history, because history is also stored, and does some analysis based on that exception. The exceptions are the kind I was describing, the ones that change the parameters that are part of the ontological relationships we've established. For example: this cash flow is in this risk group; did something related to that cash flow change, so that we have to disqualify it from that risk group and put it somewhere else? That's the logic, depending on what it is you're doing, credit derivatives structuring or mortgage-backed assets. Yes, you can use SPARQL, but that's the interface to the data; XQuery is very particular to my logic, but that's how you interface with the data. The application itself that leverages that data varies depending on the line of business, but it's essentially, most of the time, maybe 80% of the time, proprietary logic that the customers have.

Mike Bennett. Right, so what Mike's asking is: is there an event ontology?
There's an event ontology that we're building for different lines of business with customers. The talk that follows mine is from SmartLogic, a partner we work very closely with, and we have a number of projects that have gone into production already. But it is not a global event ontology at this point; it is very specific to the things we do, like credit derivatives structuring, fund portfolio optimization, or econometric forecasting, things like that. Okay, anything else? Anything else? Okay. Thank you. Thank you.
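For reference, here is the bi-temporal sketch mentioned in the Q&A above: a minimal, self-contained Python illustration of keeping both valid time (when a fact holds in the world) and transaction time (when we learned it), so that the risk-group question can be answered as of any knowledge date. The record layout, dates, and risk-group names are all hypothetical; a production system would rely on a database's bi-temporal facility rather than an in-memory list.

```python
# A minimal bi-temporal sketch for the county-D mortgage example.
# Valid time: when the assignment holds in the real world.
# Transaction time: when we came to know and record it.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class RiskGroupAssignment:
    cashflow: str
    risk_group: str
    valid_from: date    # when the assignment is true in the world
    recorded_at: date   # when we learned it (transaction time)

history: List[RiskGroupAssignment] = [
    # Original belief: mortgage cash flow sits in risk group 3.
    RiskGroupAssignment("cashflow-1", "risk-group-3",
                        valid_from=date(2013, 1, 1),
                        recorded_at=date(2013, 1, 1)),
    # News arrives on 2013-10-15: household income in county D has been
    # declining for three quarters. We reassign the cash flow retroactively
    # (valid from the start of the decline), but record the change today.
    RiskGroupAssignment("cashflow-1", "risk-group-5",
                        valid_from=date(2013, 1, 1),
                        recorded_at=date(2013, 10, 15)),
]

def risk_group_as_of(cashflow: str, valid: date, known: date) -> Optional[str]:
    """Which risk group did the cash flow belong to at `valid` time,
    according to what we knew at `known` time?"""
    candidates = [a for a in history
                  if a.cashflow == cashflow
                  and a.valid_from <= valid
                  and a.recorded_at <= known]
    if not candidates:
        return None
    # Latest recorded knowledge wins, then the latest valid time.
    best = max(candidates, key=lambda a: (a.recorded_at, a.valid_from))
    return best.risk_group

# Before the news broke, we believed risk-group-3 ...
print(risk_group_as_of("cashflow-1", date(2013, 6, 1), known=date(2013, 9, 1)))
# ... after the news, the same valid date is retroactively risk-group-5.
print(risk_group_as_of("cashflow-1", date(2013, 6, 1), known=date(2013, 11, 1)))
```

The two queries ask about the same moment in valid time but different moments in knowledge time, which is exactly the "when did I know what changed, and what did I do after I knew" distinction from the Q&A.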