I'm going to showcase Accenture's journey around OSDU: what we're doing, what we're investing in and what we're building out. Essentially, we're really excited about the OSDU journey and we're fully committed, fully in. Accenture has been building capacity and capability around data-led transformation, and we see OSDU as a key part of the data-led transformation that many operators are going through at the same time. We've built out an OSDU Centre of Excellence and connected it into the wider Accenture network, which is nearly 700,000 people now, covering everything from innovation, agile development and change management, and coupling that with the domain expertise that we have. We're investing heavily, both in the forum and in tooling, and we're contributing to the forum with lead positions in the core services area around EDS and ingestion. We're also focused on platform assurance, and we have a plan to start putting people into the multi-region deployment side of things. So one of the investments we'd like to show you today is a demo of our one data platform. Before we get there, Terry, could you jump in and talk us through some of the challenges, what they are and why we've been building what we've been building?

Yeah, sure thing, thanks Paul. Flick over please, Jagan. I just want to take a moment to acknowledge that a lot of the software applications we're looking at are amazing, and that they're built on the premise of OSDU already being loaded and populated with data. A lot of the challenge we're actually seeing is being able to connect to the source systems that exist inside organisations, harvesting that data, applying validation and governance to it, and then populating it into OSDU. So the premise of the work we have in flight at the moment is: how do we access what it is that you've got, how do we get it into a state that's ready to be loaded into OSDU, and how do we apply the governance and validation rules that are going to be very specific to your organisation before you actually put it into OSDU?

If you think about the capabilities required to do that, we actually sat down, in partnership with people, and did an exercise: what do you need to be able to do in order to populate OSDU? If you flick over again please, Jagan. We outlined the capabilities you need in order to take your data from where it exists right now and push it into OSDU, and these are some of the points we looked at. The bits marked in yellow are things you'll get a view of right now, which Jagan and Paul will show you in a moment, and there are also bits we're working on, stuff that's in flight. One of the key parts is being able to take the schemas and structures that exist inside your systems and match them to the OSDU format, because there's a lot of interpretation, or potential interpretations, of how things could marry up. There's connecting to your systems as they exist now. There's validating that what you've got is the correct version, or what you consider to be the system of record or the source of truth. And there's the ability to generate the metadata database that characterises it, but also to extract, from certain file types, the meta-characteristics that you'll use to populate those metadata databases.
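As an illustration of that last capability, here is a minimal sketch, not the Accenture tooling, of pulling meta-characteristics out of one common file type, the header of a LAS well-log file, using only the Python standard library. The file path and mnemonics in the usage comment are assumptions for illustration; the real tooling covers many more formats and edge cases.

```python
# Minimal sketch: harvest "meta-characteristics" from the ~Well header section
# of a LAS log file so they can seed a metadata database. Illustrative only.
def extract_las_well_header(path: str) -> dict:
    """Return {mnemonic: value} from the ~Well section of a LAS file."""
    metadata = {}
    in_well_section = False
    with open(path, "r", errors="ignore") as f:
        for raw in f:
            line = raw.strip()
            if line.startswith("~"):                      # section marker, e.g. ~Well, ~Curve
                in_well_section = line.upper().startswith("~W")
                continue
            if not in_well_section or not line or line.startswith("#"):
                continue
            head, _, _description = line.partition(":")   # MNEM.UNIT  VALUE : DESCRIPTION
            parts = head.split(None, 1)
            mnemonic = parts[0].split(".")[0].strip()
            metadata[mnemonic] = parts[1].strip() if len(parts) > 1 else ""
    return metadata

# Hypothetical usage:
# extract_las_well_header("example_well.las")
# -> {"WELL": "EXAMPLE-1", "UWI": "1234567890", "FLD": "EXAMPLE FIELD", ...}
```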
Then there's applying some of that structural QC. I saw a question a little while ago about SEG-Y versions: you could look at whether it's an extended EBCDIC header, correctly identify that, and make that information available to people, but also run QC on the data to say which bits are missing, and populate your metadata database correctly before you actually load it into OSDU. I won't talk any more on this; I'll let the guys show you what we've been working on. If you have any questions along the way about any of this, please feel free to reach out. I think Jagan's going to take you through it now.

Thank you, Terry. Let me walk through this. Basically, we identified the pain areas customers are really going through. For any customer who adopts the OSDU platform, all the functionality is there for ingesting data through the different mechanisms, like manifests, CSV and the DDMS concept. Everything is in place, but the challenging first step is figuring out how the customer's data maps to the OSDU data schema; identifying that mapping first and then ingesting takes a long time. So Accenture decided to start there: before we ingest the data, we first need to map the schema to OSDU.

To show that, I'm taking the data we have right now as an example. The tool is built on a machine learning model that identifies which OSDU schema is relevant to the data the customer has provided. When we run it, an NLP program runs in the background and identifies close relationships between the OSDU schema and the customer data. For example, here we have a well ID, and it is able to identify the relevant attribute according to the schema we have in OSDU. Right now we're at around 70 to 80 percent accuracy. As you're all aware, the OSDU schemas are quite complex: you have to look at the master schema, that master schema has reference schemas, and those references in turn go to nested reference schemas. You have to traverse every element of each data type, understand it, and map the customer data onto it. That's quite difficult, because today it's all manual effort and you need functional knowledge. So at Accenture, functional experts and technical experts built this tool together, and it is able to identify the relevant mapping for the data we have.

For example, say I have a suffix number, something we got from one of the customers: we present the suggested option here, but the tool doesn't know the exact answer. So, as a functional expert, I can say that this is the correct mapping for this column name. From that customer feedback we are building another model, which we call the feedback model. Eventually it starts giving more accurate, one-to-one mappings. Once it's all done, we simply save this mapping into our platform.
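To make the mapping step concrete, here is a minimal sketch, not the production model: it flattens a nested OSDU schema's properties into dotted attribute paths and scores each customer column against them with plain string similarity. The real tool replaces the similarity function with the NLP model plus the feedback model described above; the schema variable, column names and example output are assumptions for illustration.

```python
# Illustrative sketch of schema mapping: flatten nested "properties" blocks,
# then suggest the best-scoring OSDU attribute path per customer column.
from difflib import SequenceMatcher

def flatten_schema(properties: dict, prefix: str = "") -> list:
    """Walk nested 'properties' blocks and return dotted attribute paths."""
    paths = []
    for name, spec in properties.items():
        if isinstance(spec, dict) and "properties" in spec:   # nested / referenced schema
            paths.extend(flatten_schema(spec["properties"], f"{prefix}{name}."))
        else:
            paths.append(f"{prefix}{name}")
    return paths

def suggest_mappings(columns: list, schema_paths: list) -> dict:
    """Suggest the best-scoring OSDU attribute path for each customer column."""
    def score(col, path):
        return SequenceMatcher(None, col.lower(), path.split(".")[-1].lower()).ratio()
    return {
        col: max(((p, score(col, p)) for p in schema_paths), key=lambda t: t[1])
        for col in columns
    }

# Hypothetical usage with a loaded Well schema and customer columns:
# suggest_mappings(["WELL_ID", "SPUD_DATE"], flatten_schema(well_schema["properties"]))
# -> {"WELL_ID": ("data.WellID", 0.8), "SPUD_DATE": ("data.SpudDate", 0.78)}  (illustrative)
```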
That's the first step we do for any customer. The second step is to generate the metadata. We know that after you receive the data from the customer, and before you ingest it, you have to categorise it: what is master data, what is reference data, what is miscellaneous data. That data classification is still a manual effort, but we are trying to identify the data in a more intelligent way, and then we construct the metadata. What you're seeing at this stage is that, after we receive the customer data, we populate these additional fields. The data source would ideally come from the data the customer feeds in; for the demo I've just kept dummy data here.

Once the metadata generation is in place, the next step is to ingest the data into the customer's OSDU subscription. We generate the manifests from the metadata, run the ingestion, and finally do the verification; a small sketch of the manifest step follows below. When I start this, it begins generating the manifests, then starts ingesting the data, and you can see the counts as it goes. The point is that you don't need to go to each individual component OSDU provides: for manifest ingestion you would have to execute the load commands, and for DDMS ingestion you would have to install components such as sdutil or wbdutil. This entire package can also be converted into a CI/CD pipeline, so if a customer wants to own the whole platform in their own tenant, one click installs the whole environment, with all the setup in the background and everything in place. The CI/CD side is already implemented here.

While that ingestion is running, let me talk about OSDU data verification. Whenever we ingest data into the platform, say the customer gives us 100 wells or 1,000 wells, then due to a network issue or an Airflow issue some of the data might not ingest properly. So we have defined some predefined rules: I want to know how many wells I received from the customer versus how many I actually ingested into the platform, so we can see what is really happening. The ingestion is still processing, but in the interest of time I'll jump to the validation part. Here we have already defined a set of rules. The data comes in, for example, as CSV, and as we all know Elasticsearch sits behind the final Search API, the delivery API people hit to get their counts. So we define these rules up front so we can validate the count the customer gave us against the count once the data lands in the platform. You can add rules if you want, and we've provided options to edit or delete them, so you can perform whatever action you need. When you click on data validation, it hits both source and target and returns the result.
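Going back a step, here is a rough sketch of the manifest-generation and ingestion piece, assuming the standard OSDU manifest ingestion workflow (commonly deployed under the name Osdu_ingest) is available on the target tenant. The partition, ACL groups, legal tag, base URL and AppKey are placeholders, not values from the demo.

```python
# Rough sketch: wrap classified master-data rows in a load manifest and submit
# it to the Workflow service. All identifiers below are placeholders.
import requests

def build_manifest(rows: list, partition: str) -> dict:
    """Wrap master-data rows in a load manifest."""
    return {
        "kind": "osdu:wks:Manifest:1.0.0",
        "MasterData": [
            {
                "kind": "osdu:wks:master-data--Well:1.0.0",
                "acl": {
                    "viewers": [f"data.default.viewers@{partition}.example.com"],
                    "owners": [f"data.default.owners@{partition}.example.com"],
                },
                "legal": {
                    "legaltags": [f"{partition}-default-legal"],
                    "otherRelevantDataCountries": ["US"],
                },
                "data": row,
            }
            for row in rows
        ],
    }

def submit_manifest(base_url: str, token: str, partition: str, manifest: dict) -> str:
    """POST the manifest to the Workflow service and return the run id."""
    resp = requests.post(
        f"{base_url}/api/workflow/v1/workflow/Osdu_ingest/workflowRun",
        headers={
            "Authorization": f"Bearer {token}",
            "data-partition-id": partition,
            "Content-Type": "application/json",
        },
        json={
            "executionContext": {
                "Payload": {"AppKey": "one-data-platform", "data-partition-id": partition},
                "manifest": manifest,
            }
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["runId"]
```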
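And a minimal sketch of one such predefined rule: compare the row count of the customer's source CSV with the record count the Search API reports for a given kind. The CSV path and default kind are assumptions for illustration.

```python
# Minimal sketch of a count-reconciliation rule: source CSV rows vs records of
# the same kind visible through the OSDU Search API (backed by Elasticsearch).
import csv
import requests

def source_count(csv_path: str) -> int:
    """Rows in the customer's source file."""
    with open(csv_path, newline="") as f:
        return sum(1 for _ in csv.DictReader(f))

def target_count(base_url: str, token: str, partition: str, kind: str) -> int:
    """Records of this kind reported by the Search API."""
    resp = requests.post(
        f"{base_url}/api/search/v2/query",
        headers={
            "Authorization": f"Bearer {token}",
            "data-partition-id": partition,
            "Content-Type": "application/json",
        },
        json={"kind": kind, "limit": 1},   # only totalCount is needed
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["totalCount"]

def counts_match(csv_path, base_url, token, partition,
                 kind="osdu:wks:master-data--Well:1.0.0") -> dict:
    src = source_count(csv_path)
    tgt = target_count(base_url, token, partition, kind)
    return {"source": src, "target": tgt, "passed": src == tgt}
```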
For the demo we've just kept some of the CSV records we ingested, and these are the queries we execute against source and target. If both match, it shows true; if not, false. Those are the features we have today.

As an additional feature: we all know there's a Postman collection you normally have to use to check what's happening all the time, so we asked why we should go to another tool, and we brought the features Postman provides into this platform; a small sketch is at the end of this section. Whatever configuration you would set in environment variables, we set internally through the admin page when you log in. So here you can see the details, such as the list of groups we have right now: you get all the groups over here, and if you want to search any of the data, you can just hit it from here, for example, how many well records do I have. Basically, everything you would do in Postman you can do right here.

We have some more functionality. On the DDMS side, we're in the process of implementing seismic DDMS ingestion as well. If you want to ingest into the wellbore DDMS, for example well log LAS files, wbdutil runs in the background in our Docker environment: we take the input and upload those files into the wellbore DDMS, so wellbore DDMS ingestion is also within this platform. The whole idea is that, where today people struggle to bring all the ingestion frameworks and the other functionality that sits in silos into one place, we've brought it into a single platform, to make sure all the data the customer has gets pushed into the customer's tenant.

The whole application I'm showcasing here is cloud agnostic: you can connect to any cloud, AWS or Azure, and it will run on top of that, with the configuration details handled in the admin screen. This is Airflow, which we link to so you can see what's happening during ingestion; you can see all the flags over here. The other feature is that we take advantage of Elasticsearch, which comes with Kibana. We don't have much data visualisation yet, but for the data we've ingested, all the counts you'd want to see are integrated with this dashboard, and we're building more dashboards based on the use cases that are in the pipeline. That's what we're working towards, and these are the things we've done as part of this platform.
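Finally, a small sketch of the "Postman built in" idea under the same placeholder assumptions for URL, token and partition: the environment is stored once and the common calls (listing entitlement groups, running an ad-hoc search) become one-click actions. The class name and defaults are illustrative; the endpoints are the standard Entitlements v2 and Search v2 APIs.

```python
# Small sketch: store the environment once and expose common OSDU calls.
import requests

class OsduClient:
    def __init__(self, base_url: str, token: str, partition: str):
        self.base_url = base_url
        self.headers = {
            "Authorization": f"Bearer {token}",
            "data-partition-id": partition,
            "Content-Type": "application/json",
        }

    def list_groups(self) -> list:
        """The 'list of groups' page: entitlement groups the caller belongs to."""
        resp = requests.get(f"{self.base_url}/api/entitlements/v2/groups",
                            headers=self.headers, timeout=30)
        resp.raise_for_status()
        return resp.json()["groups"]

    def search(self, kind: str, query: str = "*", limit: int = 10) -> dict:
        """The 'how many wells do I have' style query against the Search API."""
        resp = requests.post(f"{self.base_url}/api/search/v2/query",
                             headers=self.headers,
                             json={"kind": kind, "query": query, "limit": limit},
                             timeout=30)
        resp.raise_for_status()
        return resp.json()   # contains "results" and "totalCount"

# Hypothetical usage:
# client = OsduClient("https://osdu.example.com", token, "opendes")
# client.search("osdu:wks:master-data--Well:1.0.0")["totalCount"]
```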