So welcome, everyone. My name is Leo Labeis. I'm the CEO and founder of the technology company REGnosys. We've been working as ISDA's technology partner, and more recently ISLA's and ICMA's, on first developing the Common Domain Model, and what we want to show today is a very direct production implementation of the CDM, which is actually an industry implementation of the CFTC Rewrite. So we'll get right into it.

Why does it matter, and what role have open source software, and FINOS for that matter, played in making it happen? First of all, the CFTC Rewrite in figures: in a nutshell, it's big and it's costly. Here are a few fun facts. It consists of 175 fields, of which a number are reusable, thankfully. It's spread over 230 pages scattered across three documents. A good order of magnitude for a large tier-one institution's implementation is in the order of 10 million dollars. And it went live, as luck would have it, on Monday this week.

What I'm about to show directly demonstrates what happened for the CFTC Rewrite, but it's not confined to that. The SEC is changing too, in a closely aligned way. Canada is also changing more or less in the same way, with 13 extra regime-specific fields. EMIR Refit is going to go live in April 2024, with even more fields to report, more than 200, though thankfully about half actually overlap with the CFTC. And there is an array, or a barrage shall we say, of further changes coming across the G20 in the next few years.

Now, what is DRR, Digital Regulatory Reporting?
It's effectively a groundbreaking, industry-led RegTech program that lets financial institutions slash billions of dollars of reporting costs and risk by working together in open source. What that means is that DRR firms are digitizing, which means effectively coding, the reporting logic, and they are sharing that code with others, hence the circle on the slide. It has a few logos on there; in reality, more than 30 financial institutions have been participating in the program, including a number of global banks as well as trade repositories, which are on the receiving end of those reports.

Now, here's where open source is important: all of this has been open source at heart and from the get-go. The DRR ecosystem leverages two FINOS projects. The first one is the CDM, as I mentioned, and the second, the clue is in the name, is Rosetta.

About the first one: the Common Domain Model provides a standardized representation of trades and events through the transaction lifecycle. You've probably heard that over and over, ad nauseam, since this morning. Those transaction data are effectively inputs into the reporting process. Rosetta is a language, also known as a DSL, a domain-specific language, that allows business and regulatory experts to translate reporting rules into executable code. Both those projects, the CDM and Rosetta, apply a software-engineering technique called domain modelling, and they apply that approach to the regulatory reporting domain.

Interestingly, both the CDM and Rosetta are in the process of being contributed to FINOS, as you've heard earlier. The output of DRR is also open source, although not under FINOS at this point in time. And upstream from there, the CDM and Rosetta are themselves based on a modelling framework which is held by another open source foundation, the Eclipse Foundation. We use EMF and Xtext, to spare you the acronyms, but effectively it is open source at heart.

OK, so there are two sections in this presentation.
We're going to talk first about how DRR got built, and then about how DRR can be consumed by users. So first, building DRR. This is a collaborative industry effort that is being run by industry experts.

So, how are rules coded? We're going to step a little bit into the anatomy of a reporting rule in DRR. Effectively, a reporting rule binds two components together. The first component is the functional logic of the rule, which is machine-executable: non-ambiguous, and can be executed by a machine. The second, equally importantly, is a specific reference to a document or set of documents that support that logic. This can be regulatory text, but it can also be guidance, technical specifications, you name it. And effectively, the Rosetta DSL is a purpose-built syntax that allows regulatory experts and business analysts to read and write that functional logic. You see an example of those two components on the right-hand side: on the left, the regulatory reference, and on the right, the functional logic, bound together.

So how do we go about building an entire report like the CFTC Rewrite? Three components: what, whether, when. "What" is what to report: those are all the reportable fields, captured by field rules. "Whether" to report is eligibility rules: should I report that trade or not for that specific regime? And finally "when" to report, also called timing rules. This is what you see at the top on the right: what, whether, when.

The field rules are bound together into an overall report data type. That's what you see on the right at the bottom; in this case, it's the CFTC report data type. That data type contains all the fields as attributes, and it also contains its own validation logic: is this field optional or mandatory? How many do I need to have? Are there specific conditions linking the presence of some attributes to others? What is the type of each field, and so on?
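The rule anatomy just described, executable logic bound to a regulatory reference, can be sketched outside the Rosetta DSL. The Python below is purely illustrative: the class names, document references, and field values are all invented, and real DRR rules are written in Rosetta, not Python.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class RegulatoryReference:
    # Pointer to the text that supports the logic: regulation,
    # guidance, or technical specification
    document: str
    section: str

@dataclass
class ReportingRule:
    # A rule binds machine-executable logic to its regulatory reference
    reference: RegulatoryReference
    logic: Callable[[dict], Any]

# A hypothetical "what" (field) rule: extract the notional amount
notional_rule = ReportingRule(
    reference=RegulatoryReference(document="CFTC Part 45", section="Appendix 1"),
    logic=lambda trade: trade["notional"]["amount"],
)

# A hypothetical "whether" (eligibility) rule: is this trade in scope?
eligibility_rule = ReportingRule(
    reference=RegulatoryReference(document="CFTC Part 45", section="45.3"),
    logic=lambda trade: trade["asset_class"] in {"InterestRate", "Credit"},
)

trade = {"notional": {"amount": 10_000_000}, "asset_class": "InterestRate"}
if eligibility_rule.logic(trade):
    field_value = notional_rule.logic(trade)  # value destined for the report
```

Because each rule carries its reference, a generated report can always be traced back to the exact text that mandated each field, which is the property the talk emphasizes.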
DRR follows a test-driven approach, and that is fundamental. What that means is that every bit of logic implemented in DRR is run live on test packs. Test packs are synthetic data sets; in fact, they are anonymized data sets provided by the digitizers, the contributors to the project, and those test data are used to test the reporting logic in real time.

Each test sample comprises two components. First is the input: those are the trade events. This is where the CDM comes in; the trade events are represented as CDM data. Second is the output, which is the expected report as per the regulation. And this is driven by the data type that we just saw earlier, so it's being projected into that data type, in this case the CFTC Rewrite report data type.

Now, what Rosetta does is abstract all of the machinery required for real-time execution away from the user, so that the business analysts and regulatory experts can focus on implementing the logic, providing the test pack, and making sure everything fits together.

OK. Finally, on constructing DRR: how does this get distributed?
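Before moving on to distribution: the test-pack mechanism just described, pairing a CDM-style input with an expected report, can be sketched as a tiny harness. This is a hypothetical illustration, not DRR's actual tooling; the function, fields, and samples are invented.

```python
# Hypothetical test-pack harness. Each sample pairs a CDM-style input
# with the expected regulatory report, and the reporting logic is
# replayed over the whole pack whenever the logic changes.
def build_report(trade: dict) -> dict:
    # Stand-in for the generated reporting logic (illustrative only)
    return {"notional": trade["notional"], "currency": trade["currency"]}

test_pack = [
    {"input": {"notional": 5_000_000, "currency": "USD"},
     "expected": {"notional": 5_000_000, "currency": "USD"}},
    {"input": {"notional": 250_000, "currency": "EUR"},
     "expected": {"notional": 250_000, "currency": "EUR"}},
]

# Any sample whose actual output diverges from the expectation fails
failures = [s for s in test_pack if build_report(s["input"]) != s["expected"]]
assert not failures
```

Note that, as the talk mentions later, a firm can use the test packs alone, without the generated code, to benchmark its own implementation: run the same inputs through the in-house system and compare against the expected outputs.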
Again, open source at heart and open source in its output. The first part of the output which is open source is all the model components: rules, data types, report definitions, and also the test packs, which are extremely important. What is also open source is all the generated source code. By default, code is generated into Java, but effectively that code is directly usable to build compliance systems, and directly usable either by end users, which can be the financial institutions, the reporting firms, or by vendors in the regulatory reporting space.

Not open source, but freely available and very important: the reporting code that we just saw is also packaged as hosted APIs. Those hosted APIs are provided to the industry freely, for testing purposes. If someone wants to get going, test an API, send the data, and get a report back, they can do so within a matter of minutes. But it's not for production systems, because it doesn't have the latency, and there is throttling that rules it out for production purposes, for obvious reasons.

OK, so that's how DRR got built. Now, a lot of you in the room are probably also consumers of it: how can I make use of that great asset that has been built in open source? Fortunately for you, we have an example of a live production implementation use case, and this actually went live on the 5th of December, so Monday this week. And it went live smoothly, which may be a first for a regulatory project.

OK, so the reporting pipeline: this is how firms consume DRR. Basically, you need to get to the end-to-end in four steps: translate, enrich, report, and validate. Effectively, at each step, what Rosetta allows you to do is abstract the business logic away from the application layer. We're going to step into each of those steps in a second. So first, translate. Whoops, sorry. Let me dwell on that slide just for a little bit.
Translate is how you map your internal data models to CDM. Generally, the industry is not natively CDM; firms all use external, existing data-messaging formats. Those can be mapped to CDM using a built-in feature in the CDM and in the Rosetta DSL.

Second, you need to enrich those data, with either public or private data sources, and the API by which you do the enrichment is actually specified in DRR, which means it's standardized, and again you can do that using the Rosetta DSL.

The third part is the report. The functional expression of the reporting rules in the regulatory text is what allows you to report, and effectively it uses the Rosetta DSL's ability to express logic; all of that gets translated into executable code.

And finally, validation. The validation logic, which is often contained in the technical specification, can be encoded in the model, again using the Rosetta DSL's ability to express logic.

The next slide shows how all this can be put together in an actual implementation. This is a live implementation, and here I need to make an important disclaimer: it is using Rosetta services. Some of the components on that slide, and I'm going to say which ones they are, are not open source.
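The four pipeline steps just described, translate, enrich, report, validate, can be caricatured as composed functions. The Python below is an assumption-laden sketch, not the generated DRR code; every name, field, and the LEI lookup in it is invented for illustration.

```python
def translate(internal_msg: dict) -> dict:
    # Map the firm's internal (e.g. FpML-style) message to a CDM-style trade
    return {"notional": internal_msg["notionalAmount"],
            "party_lei": internal_msg["partyId"]}

def enrich(trade: dict, lei_lookup) -> dict:
    # Pull static reference data; the data source is pluggable,
    # only the call's shape is fixed
    return {**trade, "party_name": lei_lookup(trade["party_lei"])}

def report(trade: dict) -> dict:
    # Project the enriched trade into the report data type
    return {"notional": trade["notional"], "counterparty": trade["party_name"]}

def validate(rep: dict) -> dict:
    # Minimal stand-in for the report type's validation conditions
    assert rep["notional"] > 0, "notional must be positive"
    return rep

raw = {"notionalAmount": 1_000_000, "partyId": "LEI123"}
final = validate(report(enrich(translate(raw), lambda lei: "ACME Corp")))
```

The point of the abstraction is that each step is pure business logic: the application layer (messaging, persistence, scheduling) sits outside these functions and can be swapped without touching them.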
Those components are provided on a commercial basis. But as I said, everything that gets distributed, or is freely available, is as per what I mentioned on the previous slide about how DRR gets distributed.

From left to right, the first step is that the client's staff use the Rosetta service to build source code for each of the required data pipelines. A data pipeline, think of it as how you get data from end to end. The CFTC Rewrite is one data pipeline: I'm getting CDM data as an input and I'm reporting to the regulator as an output. In this case, we have in fact three data pipelines: translate, enrich, and report are each a data pipeline.

All of those get coded as source code, as a model, in a client private repository. That's the second step: all that source code is now hosted in a private repository at the client's, and the pink box that you see all around it is effectively the ability to do that and to develop their own private extension of the model.
This is where it requires some services that are provided on a commercial basis. In the third step, on the right-hand side, you have how those data pipelines get deployed. Many deployment options are available; one popular option we see is that data pipelines are packaged and deployed as containers. We use Artifactory as a software-artifact registry, and effectively the client can pull that software and embed it on-premises, within a microservices architecture for instance. That software gets embedded directly within their compliance implementation. Rosetta or the CDM does not provide the compliance implementation; the client builds that, but they can directly embed all of the logic as part of it, so they don't have to re-implement it. In that particular example it was deployed on-prem, but some clients, hopefully in future, will be keener to migrate even to a hosted basis in the cloud. And that's it; that's a way by which a firm went live with the CFTC Rewrite earlier this week.

So we're going to talk a little bit about each of the steps: translate, enrich, and report. First, how do you translate from firms' internal models to CDM? As I said, this is a built-in feature in the Rosetta DSL that allows mapping to external models. Very important: the CDM is distributed with a set of synonyms and test packs that allow you to ingest from public model sources. An example is the Financial products Markup Language, or FpML, which is widely used by most firms, generally in their confirmation systems. They generally use variations of FpML, but essentially it's based on FpML. And effectively, firms can extend these public synonym sources and the test packs to map to their own models. That's the first step.

The second step is enrichment: how do we enrich CDM data with static reference data? Why do we need to do that?
Typically, front-office transactions, which is effectively the data at source, usually need to be enriched with static data before reporting. This can be legal-entity reference data, which would be publicly available, but it can also be privately held: for instance, in many reporting regimes you need to report personal information about the traders who did the trade, passport number, etc. Those would typically sit in HR systems.

So DRR leaves the choice of data source to the implementers. However, the API, that is, the inputs and outputs, can be standardized and specified in the model, and this is exactly what we do. Three steps to do that. First, we define a special annotation that marks external APIs in the model; it's called "external API", very original. Then we define that external API by its inputs and outputs. The example shown on screen is a GLEIF call: a call to the public legal-entity database, which is called GLEIF. You pass in a string as an input, which is effectively the LEI, and you get back information from that database about the name of the entity, the country, and other useful information. And finally, you can use that API as a function in the model to enrich the data. In the example shown on screen, which is elided for obvious reasons, I pass in a transaction input, which is not enriched, coming from the front-office system, and I have logic that calls the enrichment API connected to GLEIF, enriches that data with LEI data, and it goes through the rest of the process.

So we went through translate and enrich, and finally, report. Whoops, well, actually that was the step we saw earlier: that's where people can either deploy it on-prem or hosted, and if they deploy on-prem they just pull from our software registry.

Which brings me to the conclusion, and hopefully we will have time for questions. What's next for DRR, and how can FINOS help?
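Before the wrap-up, the external-API enrichment pattern described above, the GLEIF lookup, can be sketched as follows. Again this is illustrative Python, not the Rosetta model syntax; the function names, the field names, and the stubbed GLEIF response are all invented.

```python
from typing import Protocol

class LeiLookup(Protocol):
    # The model fixes only the shape of the call: LEI string in,
    # entity reference data out. The data source behind it is the
    # implementer's choice (GLEIF publicly, or a private system).
    def __call__(self, lei: str) -> dict: ...

def enrich_with_entity_data(trade: dict, lookup: LeiLookup) -> dict:
    # Use the declared API as a function in the enrichment logic
    entity = lookup(trade["party_lei"])
    return {**trade,
            "party_name": entity["legalName"],
            "party_country": entity["country"]}

def stub_gleif(lei: str) -> dict:
    # Stubbed response; a real implementation would call the GLEIF
    # service and return the record for this LEI
    return {"legalName": "ACME Corp", "country": "GB"}

enriched = enrich_with_entity_data({"party_lei": "LEI-EXAMPLE-001"}, stub_gleif)
```

The same shape covers the private case from the Q&A later on: swap `stub_gleif` for a call to an HR system (employee number in, trader details out) and the enrichment logic is unchanged.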
Well, in a nutshell, it's all about collaboration. The image on the right-hand side is effectively my lame attempt at representing a quote that I like: the difference between stepping stones and stumbling blocks is the way you use them. What happens in the next couple of years is that a wave of regulatory changes is afoot across the G20. Next is EMIR Refit, compliance date April 2024; many in the room will be all too acutely aware of that. Also anticipated in 2024 are six or seven other regimes across the globe. Those could be stumbling blocks, and usually they have been stumbling blocks for the industry.

Now, the pitch that I'm giving you today is to say: no, no, this is actually a stepping stone. This is a massive opportunity for the industry to change the game once and for all on regulatory reporting. And how to do that? Through open source collaboration. This has all been about how the industry comes together and collaborates in open source to solve that problem once and for all, and it's been proven on a production use case. This has involved firms, but what's interesting, and we are seeing very positive vibes at the moment on this, is that increasingly it will involve regulators.
I would hesitate to go so far as to say that in a couple of years' time the regulators will directly provide regulations as code. But that being said, there are some regulators that have been toying with the idea, in particular the European regulators and also some in Asia. That would obviously bring substantial benefits on both sides. Regulators have a self-interest in making it happen, because it is the way for them to ensure absolute data accuracy, quality, and comparability across the reporting regimes they supervise.

And that's it; I'll leave you with that thought, and I'll take any questions. Oh, sorry, I forgot to say: if you want to see a live demo of this in action, it's at table number five, and my colleague Nigel is also at the table with me. Those are QR codes if you want to scan them, if you want to connect with us on LinkedIn. Questions?

So I'm just going to repeat the question. The question is: can this be used for internal risk-management purposes, if a firm wants to embed rules, for instance within trading limits? The short answer is yes, absolutely yes. This particular example uses all of the functionality that you've seen to build something that is common across the industry, and the benefits are obvious, because it means everybody can collaborate. But the same technique can absolutely be used to implement and enforce rules, almost at source, in systems, and those may be private to a particular organization. So yes, absolutely. Obviously it doesn't necessarily have the same extra kick of that industry collaboration that makes everything even more efficient. But yes, definitely: having that transparency, knowing this is a rule and it's enforced natively, and, by the way, because the rules are bound to specific text and references, I can actually link them to my internal policy as a firm. That would be a very powerful application indeed.

I need to ask about enrichment.
So I think there are a lot of challenges around enrichment, right? Can you hear me? Yeah, better, now you can hear me. I wanted to ask about the challenges around enrichment: what happens when, whether the source is external or internal, you cannot enrich, if the sources are in a separate repository? How do you handle recognizing when the data is available, to fix the reporting?

Yeah, so the question is about enrichment and essentially how it works, especially if we are enriching with a private API. We have that exact use case with some clients. The API was defined in the model, the inputs and outputs. I like to take the example of the HR systems: if we are to enrich data with private information about traders, name, passport number, and all this kind of thing, you can absolutely standardize that call: you need to pass me an employee number and I'm going to return you this set of information. Then we define an address for the API, and the system will automatically recognize the address of that API and make the call, and that call to the API gets embedded in the software that is being pulled in the container. So that's exactly it, and we make some calls not just to GLEIF, a public database, but also to internal APIs. We have an example of a firm that was also using their own eligibility engine, and basically we are making calls, as an API, to that eligibility engine, which is modelled in exactly the same way as what I showed for GLEIF, for instance.

How does it handle exceptions? I don't know, is the honest answer. I'm sure we've done something, but I'm not sure what it is. Nigel, do you know how we're doing it?
We write validation conditions, but ultimately we're delivering a model and a lot of software. If there are technical integration issues, then that's part of the testing and build, to make sure that those calls are going to work and don't fail. That's implementation.

Hi, thank you for this presentation. I think I have two questions. The first question is: you mentioned that there are some firms that went live with DRR. Can you talk about how many firms, how long the implementation took, and just the general experience? And my second question: as you may know, CFTC phase two is coming up, on the unique product identifier and also ISO 20022. Are you planning to use DRR for that? Thank you.

Thank you, yes, two very good questions. So what's interesting is that because it's an open source project, we don't necessarily know the exact number of firms who are using it and how, and that's the beauty of open source: we don't need to know. What I can tell you is that we know of at least one firm that went live directly with the code provided by DRR as their primary implementation. We know of at least one other firm that used the code for benchmarking: effectively, they have an internal implementation and they benchmarked it against what was provided by DRR. We know of at least another firm with an even more light-touch approach: they used the test packs as a way to benchmark their implementation. They didn't use the code, they just used the test packs: these are the inputs, this is the output; if I run that through my primary implementation, do I get the same?
And because all of this gets distributed in open source, it's open season, so we know of at least those three. The CFTC was the first new trade-reporting regime that went live, but all of the G20 is going to change across the next two to five years. On the dynamics at the industry level: as I said, there were more than 30 financial institutions around the table who contributed to building it, and only some of them went live with it first time around. But hey, it's the first time, so early adopters take a leap of faith, and others are looking to see what happens, based on the success of that first rollout. What we expect is that with EMIR Refit and the flurry of other APAC regimes coming next year, we're going to see, my lower-bound estimate is probably half a dozen, firms who are going to use this as their primary implementation, with the code. That's my floor; there could be more.

Sorry, you had a second question and I forgot what it was. Oh, ISO 20022, yes. OK, so as you know, reporting to the CFTC was initially planned to be ISO 20022-compliant, but the ISO 20022 technical specification was not there early enough for that to happen. So the CFTC effectively provided relief to the industry and said: even if we go live in December of this year, ISO 20022 will not be required until December of next year, I think. So, an indirect answer to your question: yes, and that's also going to apply to EMIR Refit.
Most of the next regimes are going to be ISO 20022, so DRR will also include that last mile, which is the translation into ISO 20022. That was actually one of the biggest drivers for why, at some point, we decided to change the way we were looking at the output of the report: it's a data type, with attributes, types, cardinality constraints, and validation, and we effectively have it as a model component, because we can then take that and map it to the ISO 20022 model. That was one of the drivers for doing that.

I don't have my glasses, I don't see what's at the back. Are we finished? Stop? Yes, we are. OK, thank you very much.