So, good. What we are going to look at today, from the Schlumberger side, is how we are taking on data curation to enrich the corporate data assets available in the OSDU data platform. In fact, this exercise is the result of work we did with a customer in Southeast Asia, who presented their vision and the business value of their work on OSDU at an OSDU community event last September, in 2021. They were highlighting the benefits of what we have seen during today's sessions: the flexibility of the OSDU data platform, landing the data in its original schema, transforming that data to reach the standard OSDU authority schemas, and the whole set of enrichment capabilities offered on top of it. Just as a reminder, last year, during the Mercury release, we launched our enterprise data management solution, which consists of two big pieces: a managed OSDU service, which takes OSDU from the community and puts our premium services, services native to the architecture of OSDU, on top of it; and a Data Workspace, which is nothing but a data management tool that allows us to do seamless data loading into the OSDU data platform, plus search, browse and visualize that data. From there, we have been working in the context of data quality and data completeness. Just to put us in context, and I am going to run through this pretty quickly: on completeness, we see a lot of our customers struggling with the fact that, despite bringing data from multiple sources to build these golden records, we still have issues with completeness. We bring the data in its original formats, transform it into the standard OSDU authority schemas, and create these new composite, best-of-breed data sets or records, as we saw during today's sessions.
Multiple companies, including ourselves, have been dealing with this challenge of creating not just the master record, but of combining deterministic as well as non-deterministic approaches to bring data from multiple sources and amalgamate it into a single master or working record. As you can see in the illustration, we have had successes, not just us but other companies too, in raising the completeness level, but some challenges remain. So this customer proposed the idea of bringing in data from unstructured sources, from reports, because, as they put it: we know for a fact that all this missing information is in our reports, and today we are completing it manually. Can we use a non-deterministic, automated approach to squeeze these insights out of the documents and put them into the shape of structured data? In a nutshell, we are trying to do exactly the same thing as in the previous illustration, but rather than starting from structured sources, we extract or create new records out of these documents, taking advantage of AI-enabled workflows and extraction to move them into the OSDU data platform. We then leverage the power of the platform to transform those normalized records into the OSDU schemas, and pass them on to create the master record. In the context of unstructured data, and this is something we also presented last year, we have been squeezing these documents in many ways.
We have leveraged not only in-house technology created by our innovation centers across the world, but also the best of the partners present on this call: being able not only to visually detect elements in documents but to extract them, like the table extraction we showed in last year's demonstration, and now, moving forward, even digitizing well log records from images, or digital records from those documents, into the OSDU databases, as proof of the evolution of this data management. Today we are going to talk about one of these workflows: the raw well record from unstructured data. This is a screen of the workflows we had available last year, where you could see not only the browsing capability of the platform over the unstructured data coming from the OSDU platform, but also taxonomy-based classification that could be customized to the customer's requirements or governance concepts, with tags based on data science provided by some of our partners, identifying what is in a document based on its content or its context. Since then, we have progressed a lot on all the different modules and engines that expose that information. But how can we use this in a canvas to execute what the customer wants: transforming it into valuable insights that better complete their assets in the OSDU data platform? One of the challenges we analyzed is that, when we review these documents, the data does not necessarily sit on a single page. It comes not only across multiple pages in a single sequence; it is scattered all over. So we have to be able to assess the relevance of each page whenever we find these attribute-value pairs that will be useful to recreate this raw record in the OSDU data platform.
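As a concrete illustration of the relevant-page idea described above, here is a minimal Python sketch that scores each OCR'd page by how many well-header keywords it contains and keeps the pages above a threshold. The keyword list, threshold, and function names are illustrative assumptions; the actual engines use trained models rather than a hand-picked vocabulary.

```python
# Illustrative keyword list; the real modules use trained classifiers,
# not a hand-picked vocabulary.
HEADER_KEYWORDS = ["well name", "spud date", "total depth",
                   "operator", "latitude", "longitude"]

def score_page(page_text: str) -> int:
    """Count how many header keywords appear on one OCR'd page."""
    text = page_text.lower()
    return sum(1 for kw in HEADER_KEYWORDS if kw in text)

def relevant_pages(pages: list[str], min_score: int = 2) -> list[int]:
    """Return indices of pages likely to hold attribute-value pairs."""
    return [i for i, p in enumerate(pages) if score_page(p) >= min_score]

pages = [
    "Geological summary of the basin ...",
    "Well Name: TAWHAKI-1  Operator: ...  Spud Date: 1991-07-03",
    "Appendix: mud log plates",
]
print(relevant_pages(pages))  # → [1]
```

The point of the scoring step is only to narrow the search: a multi-hundred-page report is reduced to the handful of pages worth passing to the extraction engine.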
There are different formats, tables, forms, schematics, and different nomenclature, so we also need natural language processing to accommodate the understanding of what is mentioned in these documents. We took a very basic approach here: first, text extraction leveraging the power of OCR technology, and then three or four big blocks: relevant page identification, understanding where those attribute-value pairs are, and converting them into a standard format that can then be aggregated to generate the single record that participates in what I call the amalgamation of multiple sources to create this golden record. As I mentioned, we have the Data Workspace, but what we also did was combine its power with two other important workspaces in our offering: the Analytics Workspace, for the analysis of all the data sets, dashboarding and analyzing the data as a whole and contextualizing the chapters we may want; and the AI Workspace, which is very important here, to do what we call the democratization of AI model creation, so everybody can take advantage of it and automate this evergreen insight generation. Our AI Workspace, which is powered by Dataiku, can easily plug into the OSDU data platform, orchestrate the data, and build our models over that data in a way that gives us the insight. The results then go back into OSDU, either in one of the standard authority schemas we use, or in a customized generated schema that can then be mapped by our well-known schema mapping services into the OSDU authority schemas.
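The blocks described above, OCR text in, attribute-value pairs out, aggregated into one record, can be sketched roughly as follows. The regex patterns and attribute names are hypothetical stand-ins for the NLP-based extraction, and a simple first-hit-wins merge stands in for the real relevance logic.

```python
import re

# Illustrative patterns for attribute-value pairs on an OCR'd page;
# the production engines use NLP models rather than fixed regexes.
PATTERNS = {
    "WellName":   re.compile(r"well name[:\s]+([A-Z0-9\-]+)", re.I),
    "Operator":   re.compile(r"operator[:\s]+([A-Za-z ]+?)(?:\s{2,}|$)", re.I),
    "TotalDepth": re.compile(r"total depth[:\s]+([\d.]+)", re.I),
}

def extract_pairs(page_text: str) -> dict:
    """Pull attribute-value pairs from one page of OCR text."""
    found = {}
    for attr, pat in PATTERNS.items():
        m = pat.search(page_text)
        if m:
            found[attr] = m.group(1).strip()
    return found

def aggregate(pages: list[str]) -> dict:
    """Amalgamate pairs scattered across pages; first hit wins."""
    record = {}
    for page in pages:
        for attr, value in extract_pairs(page).items():
            record.setdefault(attr, value)
    return record

pages = ["Well Name: TAWHAKI-1",
         "Total Depth: 3250.0 m  Operator: ACME Energy"]
print(aggregate(pages))
```

This is the "standard format" step: whatever page an attribute was found on, the output is one flat record ready to participate in the amalgamation with the structured sources.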
The good thing is that you can bring your own algorithms, or take advantage of third-party algorithms, open-source modules as well as third-party vendors' algorithms, in the same canvas. Not only can you work in that canvas to run the full process; you can also create a composable microservice that can then be consumed by an app. What I am going to do now is show you, very quickly, a flavor of this approach, starting with the dashboarding part. Here I decided to analyze an area of our New Zealand data set, and I took a couple of wells for the analysis. Some of them, based on the rules I have set, have passed QC properly, and some others have QC challenges. This is very basic, in this case powered by TIBCO Spotfire, but we could bring in Microsoft Power BI or any dashboarding capability. What we are doing now is jumping into the AI Workspace, where I have my project that does this blending of the data, and we are going to jump into that particular flow. Here you can see in the illustration what we are talking about: getting the inputs, getting the PDF, generating the artifacts, identifying the relevant pages, and then extracting the well header record. While the process is running, I am showing the documents the system has accessed. It is reading directly from the OSDU data platform and identifying the relevant pages, based on the algorithm, to extract those attribute-value pairs. There is a navigation where you can pick the documents, and in fact there is a module we are working on so you can add a single page that was not identified, or remove some pages, to reduce the clunkiness of the system. And last but not least, you can see the actual record that was generated.
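The final step of the demo, seeing the actual record generated, amounts to wrapping the extracted attribute-value pairs in an OSDU-style record envelope. A hedged sketch, where the kind string, the id pattern, and the field mapping follow the osdu:wks naming convention but are illustrative rather than taken from a specific schema version:

```python
# A sketch of the schema-mapping step: a custom extraction record is
# mapped onto an OSDU-style authority record.  Attribute names and the
# kind string are illustrative, not from a specific schema release.
FIELD_MAP = {          # custom attribute -> OSDU data attribute
    "WellName":   "FacilityName",
    "Operator":   "CurrentOperatorID",
    "TotalDepth": "TotalDepthMD",
}

def to_osdu_record(extracted: dict, well_id: str) -> dict:
    """Wrap extracted attribute-value pairs in an OSDU-style envelope."""
    data = {FIELD_MAP[k]: v for k, v in extracted.items() if k in FIELD_MAP}
    return {
        "id": f"namespace:master-data--Well:{well_id}",
        "kind": "osdu:wks:master-data--Well:1.0.0",
        "data": data,
    }

record = to_osdu_record({"WellName": "TAWHAKI-1", "TotalDepth": "3250.0"},
                        well_id="tawhaki-1")
print(record["data"])  # → {'FacilityName': 'TAWHAKI-1', 'TotalDepthMD': '3250.0'}
```

Once the record is in this shape, the same storage and schema-mapping services used for structured ingestion take over, which is what lets the unstructured path feed the master record like any other source.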
So, in a nutshell: pretty quick, pretty awesome, leveraging the power of AI in the context of the OSDU data platform. But that is not all. Because of the extensibility we provide in our Data Workspace, we can now contextualize which data is going to be QC'd and which data we are going to create directly. We can select a group of wells, or a single well. In this case, I took advantage of Power Apps and created a little app, hooked it to the OSDU data platform to read the data, and hooked it to the ML algorithm I created, packaged as a composable microservice, and embedded it into my Data Workspace. I intentionally left visible the context that I am passing to the Power App, so you can see how it reads the information and puts it in the context of a single environment. We have done this not just with Power Apps, but with a number of our partners present on the call. One of them is INT, leveraging the power of IVAAP to contextualize what users are looking at and take advantage of the workflows they provide, like the one we saw today. So this is how we unlock the power of your OSDU data platform. We have provided the first fully unified enterprise data management solution, with the OSDU data platform at its heart, adaptable to connect and enrich any existing and future data source, natively built on top of OSDU. The other point here is that we do not do everything by ourselves: based on customer requirements, we define who we are going to partner with to deliver the best of the best to our customers. And now we have demonstrated, as everybody has this morning, how we are generating new data-driven and automated workflows that leverage the power of AI and fully unlock the power of your OSDU data platform.
I wanted to close with an invitation to our Schlumberger Digital Forum, whose subject is Connecting for the Future. I really welcome you all to register; it is not closed to anybody. It is an open congress, an event and a forum for all of us to show you what we have, but also what we have been able to do with all our partners. Thank you very much.