So hello everyone, my name is Rakesh Sharma and I am the founder and CEO of Crest Labs. Today, I'd like to talk about how we used the capabilities of OSDU to build and maintain consumption-ready data using our data quality management solution on top of the OSDU data platform. Alloy DQM is a platform with built-in data and business rules, derived from emerging OSDU standards and PPDM, that control the ingestion of data. Alloy DQM is based on microservices and is a cloud-native application orchestrated by Kubernetes. It is a highly scalable application, available for conventional and new energy streams.

Now, we know that to push data into the OSDU platform we can anticipate data migration jobs, ETL pipelines, CLI tools, and various other ingestion approaches to finally populate data in the platform. Since OSDU is data-centric, with the schema as its blueprint, data quality should be centrally defined and managed to build fit-for-purpose data, regardless of the ingestion source and approach. So we built Alloy DQM as data quality as a service on top of the OSDU data platform, to build and maintain reliable and fit-for-purpose data in OSDU. And we believe that data of known quality will certainly accelerate and promote adoption of OSDU.

Now I'll talk a little bit about the benefits Alloy DQM brings to OSDU. It elevates trust in data, with full data lifecycle APIs and built-in data and business rules derived from OSDU standards and PPDM. It helps the data governance team monitor and oversee data across the organization and intervene when required, which eventually helps to have consumption-ready data in your data supply chain. It reduces significant time wastage and possible risk, as QC otherwise has to be done manually by individuals who may not be qualified to do so. For instance, if a new entity or custom schema is added to the system, the associated data quality rules provide a gateway for the various publishers, and data of known quality can be guaranteed to the consumers of the data ecosystem. Again, with OSDU being data-centric and the schema being the blueprint, relevant business and data rules can be incorporated easily with our plug-and-play framework to produce reliable data for other streams like wind, solar, CCUS, and so on.

Now I'd like to highlight the major advantages of the OSDU data platform that helped in our Alloy DQM journey. Trusted core services are provided by the platform, which removed major effort to develop the solution; this saved a lot of time and reduced time to market. And, the platform being unified across conventional and new energy streams, we were able to develop a data quality management system for any stream supported by OSDU. With regards to interoperability, the quality attributes can be consumed in other applications, not limited to our UI, such as various analytics, data, and domain apps. We strongly believe that Alloy DQM on top of the data platform brings great advantages for consumers. Alloy DQM acts as a quality gateway to your data, so reliable data can be pushed and consumed in the data supply chain. Data-centric quality rules can be defined once and the relevant changes rolled out ecosystem-wide, which reduces effort and certainly reduces the time to market to produce fit-for-purpose data. With cost being largely a function of effort and time, substantial cost can be saved in building and maintaining a stable data foundation. As far as usage is concerned, Alloy DQM can be used in the pre-ingest, post-ingest, and enrichment stages.
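To make the plug-and-play rule idea above concrete, here is a minimal sketch of what such a rule could look like. This is an illustration under assumptions, not the actual Alloy DQM API: the `QualityRule` structure, rule id, and `evaluate` helper are all hypothetical names invented for the example.

```python
# Hypothetical sketch of a plug-and-play quality rule: each rule is a small,
# self-contained check registered against an OSDU schema kind and evaluated
# over a record. Not the actual Alloy DQM implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class QualityRule:
    rule_id: str
    dimension: str                      # e.g. "completeness", "validity"
    applies_to: str                     # OSDU schema kind the rule targets
    check: Callable[[dict], bool]       # returns True when the record passes

# Example: a completeness rule requiring a UWI on well records.
uwi_present = QualityRule(
    rule_id="well.uwi.present",
    dimension="completeness",
    applies_to="osdu:wks:master-data--Well:1.*",
    check=lambda record: bool(record.get("data", {}).get("UWI")),
)

def evaluate(record: dict, rules: list[QualityRule]) -> dict[str, bool]:
    """Run every rule against one record and return a pass/fail map."""
    return {r.rule_id: r.check(record) for r in rules}

if __name__ == "__main__":
    well = {"kind": "osdu:wks:master-data--Well:1.0.0", "data": {"UWI": ""}}
    print(evaluate(well, [uwi_present]))   # {'well.uwi.present': False}
```

Because each rule is an independent object attached to a schema kind, adding rules for a new energy stream amounts to registering more rules rather than changing the evaluation engine.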
To give you an idea about pre-ingest: with a rich set of APIs and a UI, data of known quality can be built and maintained in OSDU. For instance, we know that by OSDU design principles, data should enter the platform with very little friction. However, that obviously also lets raw, unchecked data into the system. In those cases, pre-ingest checks can be performed and various enrichment stages applied to have fit-for-purpose data. In the post-ingest stage, the governance team can keep watch over the corporate data store through data health checkups. We also have the capability to tap into notifications from the OSDU data platform, so with every insert or update the system can perform a quality scan to find potential issues, and it can then trigger various manual or automated workflows for enrichment or to auto-fix those issues.

Now let's proceed to a short demo. I have two instances up and running. This one is pointing to the Google pre-shipping instance, and the second one is pointing to an instance provided by another provider. For the sake of differentiation, I turned on dark mode on the other instance. To start with the dashboard: this dashboard gives data-store-wide data health check highlights, with the total number of records and the number that are consumption-ready; this shows the number of records scanned by the system and the potential quality issues. This is the scorecard for the complete store, and this is the quality over time. This simply helps the governance team check whether the quality of the complete data store is above a particular threshold level. We also have quality coverage highlights, which help the governance team see whether a particular entity was covered at the pre-ingest or post-ingest stage, and then a relevant scan can be initiated to have fit-for-purpose data in the system.

We also have a very good feature we call rule impact. This gives a preview of the major errors in the system. In this system, there are lots of broken references and other errors as well, and this obviously helps data owners and the governance team come up with a solution to fix those errors, and then plan strategies to avoid those errors in the future. This is another error impact area, where the system shows whether an error is repeated across multiple records. For instance, here this value is missing from the lookup type in about 26K records, so once this broken reference is fixed, most of the errors in that lookup type can be resolved. Users can also filter the dashboard by entity. Yeah, there you go. We can see here that for the Well kind we have some substantial errors, and users can go into these particular errors to find the relevant details. Let's try to pick this one. Okay, so that error is affecting 31 wells. Let's drill down. This screen presents the various dimensions calculated by the system. As you can see, the UWI and well name are missing for this well record. Apart from that, this section presents a scorecard for this individual dataset, the number of errors, and the timeline of business rules checked by the system. We can also see here that the system performed quality checks based on the schema. This is a very good feature: the system performs schema-based checks out of the box, so we don't need any configuration or mapping for that. The system checked all of the references based on the schema metadata, the patterns, and whether the required properties exist or not.
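The notification-driven post-ingest flow mentioned above could look roughly like the following sketch: a handler receives a record-changed message and queues a quality scan per affected record. The payload field names (`records`, `id`, `kind`, `op`) and the `queue_quality_scan` helper are assumptions for illustration, not the exact OSDU notification contract or Alloy DQM internals.

```python
# Minimal sketch of a post-ingest hook: on an OSDU record-change notification,
# queue a quality scan for each created or updated record. Payload shape and
# helper names are illustrative assumptions.
def queue_quality_scan(record_id: str, kind: str) -> None:
    # Placeholder: a real deployment would enqueue a scan job that fetches the
    # record via the Storage API and evaluates the rules assigned to its kind.
    print(f"scan queued for {record_id} ({kind})")

def handle_record_changed(notification: dict) -> list[str]:
    """Extract record ids from a record-changed message and scan each one."""
    scanned = []
    for change in notification.get("records", []):   # assumed payload shape
        record_id = change.get("id")
        if record_id and change.get("op") in ("create", "update"):
            queue_quality_scan(record_id, change.get("kind", "unknown"))
            scanned.append(record_id)
    return scanned

if __name__ == "__main__":
    msg = {"records": [{"id": "opendes:master-data--Well:1001",
                        "kind": "osdu:wks:master-data--Well:1.0.0",
                        "op": "update"}]}
    print(handle_record_changed(msg))
```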
So let's clear this. Now let's go to the quality runs screen. This screen shows the quality runs performed on the entire dataset, and users can filter the quality runs, or the records where particular business rules failed. Let's pick a very minor one: leading and trailing whitespace. That's a minor error, but eventually it affects the whole data supply chain, because we know that in practice various scripts and systems handle spaces inconsistently, so eventually it affects data quality. Let's check this. Right, so this data.Name has a space, and this can be fixed now that we know about the error.

Now let's go to the rules library I was describing. We have a built-in rule library, and that library is used to derive all of these results. To start, I'll pick the schema-based check. This is a library that performs the quality scan based on the schema metadata, and we don't need any configuration for that; it is the default rule set applied to all schemas. Let's pick another one. This one contains the rules required for the various validity, accuracy, and completeness checks available. Furthermore, more and more rules can be added to our built-in library because of its plug-and-play nature. Apart from the built-in library, we understand that to produce good-quality records there are corporate-specific requirements to perform specific checks. To cover that, we also have a section where custom rules can be created inside the application. This one is a custom rule I was using to filter records; let's pick it. Users can define custom rules based on simple expressions, select the relevant dimension, and attach those rules to any schema. These are the details that let us filter those business rules when applying them to a schema. You can also specify energy streams: as more and more energy streams are supported on the OSDU data platform, we will be adding various built-in and other data rules to the system.

Now let's go to schemas. We have a very good feature where the system scans the schemas from the OSDU instance. Apart from scanning the metadata, it also scans all of the attributes and whether a property is required or not. This is the section where users can assign business rules to the schema. Considering the schema as the blueprint of the data, the data quality rules should be assigned as close as possible to the data itself, and hence to the schema, rather than scattered across various other applications or utilities. Users can pick business rules from our library or the custom rules they added to the system. Now let's go to a dataset and try to perform a quality scan. For information, the instance is fetching data live from the OSDU instance; we are not caching any data. This is the button to initiate a scan, and sometimes it takes time because it needs to check all of the references. Once the system is deployed alongside the OSDU cluster, that time can be reduced drastically. Let's drill down into the detail. This section lists all of the quality runs performed on this dataset, and this highlights the quality over time for this particular selected dataset. As you can see, the system checked all of the references as defined by the schema metadata, and it also checked some of the other data based on the built-in library rules assigned to the schema.
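To illustrate the kind of schema-driven, configuration-free checks described above, here is a minimal sketch under assumptions: the simplified schema structure (`required`, `patterns`), the `schema_checks` function, and the example UWI pattern are invented for the example and are not taken from an actual OSDU schema or from Alloy DQM.

```python
# Sketch of a schema-driven check: given simplified schema metadata, verify
# that required properties exist, that string patterns match, and that values
# are free of leading/trailing whitespace. Illustrative only.
import re

def schema_checks(record_data: dict, schema: dict) -> list[str]:
    """Return a list of human-readable issues found in one record."""
    issues = []
    for prop in schema.get("required", []):
        if record_data.get(prop) in (None, ""):
            issues.append(f"required property '{prop}' is missing")
    for prop, pattern in schema.get("patterns", {}).items():
        value = record_data.get(prop)
        if isinstance(value, str) and not re.fullmatch(pattern, value):
            issues.append(f"'{prop}' does not match pattern {pattern}")
    for prop, value in record_data.items():
        if isinstance(value, str) and value != value.strip():
            issues.append(f"'{prop}' has leading or trailing whitespace")
    return issues

if __name__ == "__main__":
    well_schema = {
        "required": ["UWI", "FacilityName"],
        "patterns": {"UWI": r"\d{2}-\d{3}-\d{5}"},   # hypothetical format
    }
    record = {"UWI": "42-501-2023", "FacilityName": "  Well A "}
    for issue in schema_checks(record, well_schema):
        print(issue)
```

Driving the checks from the schema itself is what avoids per-entity configuration: when a new or custom schema is registered, the same logic picks up its required properties and patterns automatically.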
Now let's try to run a complete scan on a particular dataset. That's important because once we have data in the system, we may need to add more rules that need to be checked for that particular dataset, and for those cases we also have the capability to bulk-check the dataset. Let's initiate a scan and go back to the quality runs page. You can see here that a quality scan has been initiated for 426 records; let's refresh that. So that's how a bulk scan can be initiated. Apart from all this, as I mentioned earlier, the system also has a feature to tap into notifications from OSDU: for instance, if a record is updated or a new record is added to the system, Alloy DQM will perform the relevant scan automatically.

Rakesh, quick two-minute reminder.

Okay, right. So, to re-emphasize: Alloy DQM can be used across the complete data lifecycle to have consumption-ready and fit-for-purpose data on top of OSDU. This concludes my presentation and demo, and I'm happy to answer any questions you may have.