Hi, good morning, everyone. This is Manish Parmar. I lead the upstream practice for LTI, Larsen & Toubro Infotech, and with me I have Siddharth Sinare, who is a geophysicist and domain consultant. Today we will be talking about a business solution for extracting obligation data from seismic contracts and ingesting it into the OSDU data platform using our LTI Fosfor solution.

Before we get into the actual solution, I just wanted to share a few insights about our company. LTI stands for Larsen & Toubro Infotech. It is a 2.1-billion-dollar company and part of our parent company, Larsen & Toubro, which brings in almost $5 billion of revenue in the oil and gas sector. We have almost 47,000 employees worldwide, and we offer our services to all the oil and gas majors globally. Our oil and gas practice is 20 years old, with almost 3,000 technical and domain practitioners, and we bring a lot of heritage from our parent company, which does a lot of business in hydrocarbon refining and petrochemicals. We operate across the complete oil and gas value chain, from upstream to midstream and downstream, and as we move into multiple sources of energy we have started a new practice there as well.

In terms of LTI's contribution to the OSDU Forum: we have been a silver member of the OSDU Forum for the last three years, and we are actively contributing to multiple projects of the OSDU release. We were part of the certification effort, providing guidelines as well as automating part of the schema validation. We do a lot of testing activities as part of overall OSDU data platform release management, so we are actively involved in the testing space from the domain side. We have also donated one of our solutions in the area of audit and metrics: we built a dashboarding solution that is now part of the core OSDU data platform. For the Entitlements service, the whole development for the initial pilot was done by LTI, and we still play some role in its development. We also played a role in OSDU production work, and we were part of the helicopter resource coordination pilot project, where the dashboard development was handled by LTI developers. This is just for your reference, as our contribution to the OSDU Forum.

In terms of the LTI solution for OSDU data platform adoption, we have built our own solution framework called the EPDM platform. It brings a lot of solution accelerators for ingesting data into OSDU, for delivering data to the various end-user applications and E&P workflows, and for drawing insights from OSDU data. That is where our own suite of data products, Fosfor, comes in, and in today's session we will demo one business case with its Aspect product. Overall, this framework helps with quick adoption of the OSDU data platform.

For today's presentation we will focus on the Aspect product of Fosfor. Fosfor is a suite of products that deals with data transformation, and each of its products, Spectra, Aspect, Refract and Lumin, brings a unique value proposition in data transformation along with a lot of actionable insights.
In today's presentation we will give a demo of the Aspect product, using a use case we have built to extract data from seismic contracts: how we can relate those attributes to entitlements and obligations, and how that helps in the OSDU data platform. That is the business case we will be discussing. With this, I would like to invite my colleague Siddharth Sinare to present the business case. Thank you. Siddharth, over to you.

Thanks, Manish. As companies adopt the OSDU platform, data entitlement and obligation becomes a very critical service. For modern sets of data, where most records are digital, this is not a major issue because the records can be retrieved very easily. But for older sets of records, much of this obligation and contractual information is captured in contractual documents that may be in a scanned, non-text-recognizable format, or even in paper-based documents. Our solution here, the Fosfor solution that Manish introduced, and specifically its Aspect component, is tuned to extract the entitlement and obligation parameters, that is, the important contractual terms: what the effective date is, between whom the contract was signed, and which areas and coordinates the contract is restricted to. That extraction can be done from paper-based or image-based documents using our solution. This is what we are going to present today. Manish, can you kindly go to the next slide.

Right. This is the business objective I just mentioned, along with the business benefits. First, of course, these are very important parameters for managing data obligations and contractual terms on any platform, including OSDU. Second is data usage rights management. This is especially critical for multi-client new-venture data companies, which have multiple data sets sold to multiple clients for definite periods of time across the world. Managing those contractual obligations and data access permissions is critical, and it can be done by extracting the data from the legacy paper-based contractual documents. Also, data sets are often shared with third parties for academic or other purposes; to prevent misuse, many of these data sets are watermarked to ensure the original proprietorship of the data is maintained.

With this in mind, I will skip this slide and move toward the main data sets, but before that let me speak about the user journey. The platform we are talking about, Fosfor Aspect, is an AI-based intelligent platform with multiple components. First is data ingestion. Second is OCR; we support multiple OCR engines for this purpose. After the OCR is run on an initial set of documents, the metadata parameter list has to be created, and the corresponding values from each report have to be annotated by a human in the loop. Once a sufficient number of documents have been trained on and the corresponding AI models are built, the application is intelligent enough to extract these values from further, future documents with a good degree of confidence. This I will show on the actual interface.
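To make that user journey concrete, here is a minimal sketch of the ingest, OCR, annotate, train, extract loop in Python. Everything in it (the Document and ContractModel names, the placeholder output) is an illustrative assumption, not the actual Fosfor Aspect API.

```python
# A minimal sketch of the ingest -> OCR -> annotate -> train -> extract loop
# described above. All names here are illustrative assumptions, not the
# actual Fosfor Aspect API.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    raw_text: str = ""                          # filled in by the OCR step
    labels: dict = field(default_factory=dict)  # human-annotated attribute -> value

class ContractModel:
    """Stands in for the AI model trained on human-annotated contracts."""

    def __init__(self) -> None:
        self.trained = False

    def train(self, seed_docs: list) -> None:
        # Retrained once the human in the loop has annotated enough documents.
        self.trained = any(doc.labels for doc in seed_docs)

    def extract(self, doc: Document) -> dict:
        # Returns attribute -> (value, confidence). Low-confidence fields are
        # routed back to the QC screen for human review.
        assert self.trained, "annotate and train on seed documents first"
        return {"license_number": ("PL-123", 0.87)}  # placeholder output

# Usage: annotate a seed document, train, then extract from a new one.
seed = Document("contract_001.pdf", raw_text="...", labels={"license_number": "PL-001"})
model = ContractModel()
model.train([seed])
print(model.extract(Document("contract_002.pdf", raw_text="...")))
```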
One more thing I would like to say here is that this Fosfor Aspect document processing system is an offshoot of our broader subsurface data processing solution, which can extract metadata attributes from well logs, from seismic, and even from well completion reports and so on. With this, I will start sharing the screen, Manish, for the demo part of the solution. Let me take up the screen sharing. Right, I hope you can see this.

This is the homepage of the Aspect solution we were speaking about. Aspect is one of the modules of the Fosfor solution; Fosfor is the more versatile offering, and Aspect is focused on extracting data from paper-based or image-based documents using OCR. It can also extract tables, it can process different document formats such as PDF, XLS, TIFF and PNG, and it supports multiple languages. It is likewise capable of extracting data from barcodes, tables and so forth. Within the Aspect solution there are multiple modules, each tuned for a particular data set; for example, there is an automated document classification system for geochemical data sets, for well completion reports, and so on. Today we will focus on the seismic contract module of Fosfor Aspect.

If I click it open, you see a set of files which have already been ingested and OCRed ahead of time to save time. If I open one of these ingested files, it will take one or two minutes to come up. Yes. This is the scanned image copy of the document that was ingested, and next to it is the OCR version of the same document. This is important, because otherwise we cannot extract the metadata attributes. Which attributes we can handle, and which major OCR engines we support, I will come to later in the presentation, on the Configure Solution screen.

Coming back to the main screen: on the right-hand side we have the list of metadata attributes that have been extracted from this document. We trained multiple documents of a similar type and then ran this one through the model to extract these values automatically. I will not go through all of them, but a few of the important parameters: the license number and license details; between whom the agreement was signed, for example one of the parties was the Petroleum Exploration and Development Board, and the licensee here is Green Park Energy Limited; where the data is located; and so on, plus the coordinates of the areas these contractual agreements cover. All of them are extracted from these documents.

As we understand, the quality of extraction depends a great deal on the quality of the original parent document, and many documents are also handwritten, so there can be issues in extraction. What we have here is a human-in-the-loop QC system: if any field has not been captured properly, the human in the loop, the domain person, can go in, make the necessary changes within the captured field, save the changes, and then download the document in either JSON or CSV format, as a before-QC document as well as an after-QC document.
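For illustration, a downloaded after-QC record might look something like the following; the party names come from the demo above, while every other field name and value is invented for illustration and is not the actual Fosfor Aspect export schema.

```python
import json

# Hypothetical shape of one extracted contract record after the QC pass.
record_after_qc = {
    "license_number": "PEDL-123",                    # hypothetical
    "parties": [
        "Petroleum Exploration and Development Board",
        "Green Park Energy Limited",
    ],
    "licensee": "Green Park Energy Limited",
    "effective_date": "1999-07-21",                  # hypothetical
    "area_coordinates": [                            # hypothetical polygon
        [52.10, -1.25], [52.10, -1.05], [51.95, -1.05], [51.95, -1.25],
    ],
}

# Export the reviewed record the way the QC screen's download button would.
with open("contract_after_qc.json", "w") as f:
    json.dump(record_after_qc, f, indent=2)
```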
Preferably, for the final version, we do the detailed QC, save it, and then extract the JSON output. That is primarily the function we are performing.

Now, coming to the configurations. Let me click on the screen here. First is the taxonomy configuration. We have a predefined list of metadata attributes, but if a particular company requires a particular set of documents or attributes, these can be entered and added, or entries in the existing list can be deleted. As with all AI-based solutions, though, adequate training on each unique type of document is required for the models to be trained well and for the data extraction to be of good quality. As for the text processor: this screen shows the OCR engines we currently support out of the box. What we demonstrated here was based on Google Vision, but the remaining three, Kofax, AWS Textract, and AWS Rekognition, are there by default. Some companies might require or use specialized OCR engines such as ABBYY. In fact, it is worth mentioning that we did a huge data digitization job for one of the world's leading oil and gas companies, for which we used a licensed version of ABBYY; it was integrated into this platform, and the outputs were definitely quite good. So by leveraging multiple OCR engines, we can extract the parameters from these documents.
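As a rough sketch of how several OCR engines can sit behind one common interface the way the Configure Solution screen suggests: the engine names below come from the talk, while the class and method names are assumptions made for illustration.

```python
# Pluggable OCR engines behind one interface; names are illustrative only.
from abc import ABC, abstractmethod

class OcrEngine(ABC):
    @abstractmethod
    def recognize(self, image_path: str) -> str:
        """Return the recognized text for one scanned page."""

class GoogleVisionEngine(OcrEngine):
    def recognize(self, image_path: str) -> str:
        raise NotImplementedError("call the Google Cloud Vision API here")

class AbbyyEngine(OcrEngine):
    """A licensed engine that can be plugged in for demanding digitization jobs."""
    def recognize(self, image_path: str) -> str:
        raise NotImplementedError("call the licensed ABBYY SDK here")

# The configuration screen effectively selects one entry from a registry.
ENGINES = {"google-vision": GoogleVisionEngine, "abbyy": AbbyyEngine}

def get_engine(name: str) -> OcrEngine:
    return ENGINES[name]()
```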
With this, I will go back to the home screen. Before I end this session, I would also like to say that this solution can extract metadata from seismic images, typical attributes being the line names, the basins and other information, and it can also extract data from subsurface well log data sets, especially from the header information. So wherever those data sets need to be extracted and the parameters put to use, that can very well be done using this solution. With this I will pause and stop my presentation and hand it over to you, Manish, for the concluding remarks.

Thank you, Siddharth. Maybe you can hold for a minute and we can ask for questions. So, members, with this we would like to conclude our presentation, and we can take your questions, if any.

Thanks, Manish. There is one question in the chat: is Fosfor the rebranding of LTI EPDM, or is it a new product line?

Yes, Tim. Fosfor is the rebranding of one of the solutions we had earlier, called Mosaic. Yes, you are right. Thank you.

Are there any more questions for Manish and Siddharth? Have you been able to take the extracted metadata and use it to populate entities in OSDU?

Yes, let me address this question. This solution specifically is tuned toward extraction of metadata and producing an output in JSON or CSV format. As Manish mentioned, we earlier did work on OSDU entitlements and obligations in which all these metadata attributes were used for managing the entitlements on OSDU. Manish, would you like to speak in more detail on that?

Yes. Once we generate the JSON file from this solution, we leverage our ingestion module together with the tags, or the data model, defined from the entitlement and obligation perspective for whatever the data element is; in this case we are talking about a seismic contract. All the seismic data will then be governed by the obligations data we captured from this contract document. That is how the legal and obligation-related aspects get attached to the data elements that reside in the OSDU data platform. That is the overall solution we are trying to present here.
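As a rough illustration of that last point, the extracted obligation attributes could be carried into OSDU as a legal tag that every seismic record ingested under the contract then references. The payload below follows the general shape of a LegalTag in the OSDU Legal service, but all values are hypothetical and the authoritative schema should be consulted.

```python
# Hypothetical mapping of extracted contract attributes to an OSDU legal tag.
legal_tag = {
    "name": "demo-seismic-contract-pedl123",        # hypothetical tag name
    "description": "Obligations extracted from a scanned seismic contract",
    "properties": {
        "contractId": "PEDL-123",                   # from the extracted JSON
        "originator": "Green Park Energy Limited",
        "countryOfOrigin": ["GB"],
        "expirationDate": "2029-12-31",             # contract end date
        "dataType": "Third Party Data",
        "securityClassification": "Private",
        "personalData": "No Personal Data",
        "exportClassification": "EAR99",
    },
}

# Every seismic record ingested under this contract then carries the tag in
# its "legal" block, so entitlement and obligation checks apply everywhere.
record_legal_block = {
    "legaltags": [legal_tag["name"]],
    "otherRelevantDataCountries": ["GB"],
    "status": "compliant",
}
```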