So let's now move to the operator part of this adoption. I would like to start with Petrobras and Leonardo Mello, so please.

Hi everyone, I am Leonardo Mello. I'm a data consultant at Petrobras, and I'm also the tech lead for the OSDU project. My mission today is to guide you through our company's journey with OSDU. The first topic will be our upstream data architecture. We have quite an unusual architecture, so it's important to present it so you can understand some of the decisions we've made with OSDU. After that, I will present the journey itself: some objectives and the architecture that we implemented. The last topic will be the roadmap, so I can position in time the things we've been doing.

We have had an integration problem for many years. Around 20 years ago we received an order from the business: fix these integration problems. We had a lot of inconsistencies, and we had many databases holding the same data, so we didn't have an official database where we could get the master data. So, about 20 years ago, we started a project to build this upstream integrated database. This is the place for all official upstream data, and we started with a transactional component.
The transactional component is one big Oracle database; each application has its own schema inside it. We have well data, field data, oil rigs, and all of our master data is in there. We have around 20,000 tables in this database, and over 300 applications reading and writing data from it.

We also created a files component. That is where we store the seismic files, and also spreadsheets, PDFs, and images, and it's where we store the documents that we send to the regulator (ANP is the regulator in Brazil). Another interesting thing is that we have all the metadata on this files component, and everything is related to the transactional component, so I can say that a given file is related to a specific well in the transactional component. We have millions of files there, over 20 petabytes of data in this files component.

This is a really old architecture, but it fixed the problem of integration between internal solutions. We are going to OSDU to cover the gaps that this architecture still has. The main one is interoperability. We fixed the problem for internal solutions, but we still have a big problem with external solutions. We have a big team that extracts data from one tool and ingests it into another, so the end user can get the data in the tool he wants, at the time he wants, but it takes a lot of time. This is a big problem. We tried PPDM and OpenSpirit, among other tools, but we never had a good solution for this. We really believe that with OSDU we can finally fix it, since the community is working together, so I really think this is the time.

We also want to build a data lake: it doesn't matter where the data was created, we want all of it in OSDU so we can plug in a visualization tool and see everything that is there. We also want to provide data to data scientists; OSDU has a lot of APIs, so that's a good way to give data to data scientists. And the last objective would be data sharing between companies. When we need to share data, usually
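To make the "OSDU APIs for data scientists" point concrete, here is a minimal sketch of how a data scientist might query well data through the OSDU Search service. The endpoint path and `data-partition-id` header follow the OSDU Search API convention; the base URL, partition name, token, and the facility-name filter are placeholders, not values from our deployment.

```python
import json
import urllib.request


def build_search_query(kind: str, query: str, limit: int = 10) -> dict:
    """Build a request body for the OSDU Search service."""
    return {"kind": kind, "query": query, "limit": limit}


def search_osdu(base_url: str, partition: str, token: str, body: dict) -> dict:
    """POST the query to the OSDU Search API and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/api/search/v2/query",  # Search service endpoint
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "data-partition-id": partition,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example: search wellbore master-data records by facility name (placeholder values).
body = build_search_query(
    "osdu:wks:master-data--Wellbore:1.0.0",
    'data.FacilityName:"EXAMPLE-WELL*"',
)
```

A notebook would then call `search_osdu(...)` with real credentials and work with the returned records directly, instead of waiting for an export from one of the applications.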
we send spreadsheets or PDFs, and that's not a good way to use the data, because the other company will need someone to type the data into their systems before they can use it. We did a proof of concept on this with Shell Brazil: we ingested some data into OSDU and gave them access, and they visualized the data there. We think that's the best way to share data between companies, since everyone should be OSDU-ready. We plan to do that again, maybe next year, when we have a more definitive version of OSDU.

Talking about architecture: on the left side we have our integrated database. This is also different from what we've been seeing, because we didn't integrate the applications directly with OSDU. As I said, we have hundreds of applications there, and it would take a lot of effort to adapt all of them; even the ones that don't write data would need to be adapted to read data from OSDU. So we decided not to change that. Instead, we created an ingestion directly from our integrated database into OSDU. This is an automated process: we did a full load one time, and then we keep updating every day; now it's just a delta ingestion with the differences. All the master data is ingested directly from this database, and the external solutions ingest the project data. So master data comes from the integrated database, and project data comes from the external solutions.

The idea here is interoperability: one tool can ingest data and other tools can read it. That's what we want. We also want to use dashboards on top of OSDU and provide data to data scientists, as I said. In some specific cases, an application may need to get data from OSDU. For instance, we have CGL, which is an interpretation tool, so it would be nice if it were integrated directly with OSDU. But the other solutions that already write and read data from our integrated database don't need to get data from OSDU.
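The full-load-then-daily-delta pattern described above can be sketched as follows. This is only an illustration of the idea, assuming the source rows carry a last-modified timestamp; the row shape and the `last_modified` column name are hypothetical, and in the real pipeline the rows would come from the Oracle integrated database and be pushed to the OSDU ingestion services.

```python
from datetime import datetime


def select_delta(rows: list[dict], since: datetime) -> list[dict]:
    """Keep only the rows that changed after the last successful run,
    so each daily run re-ingests only the difference (the delta)."""
    return [row for row in rows if row["last_modified"] > since]


# Snapshot of the source table (in reality, read from the Oracle database).
rows = [
    {"id": "well-001", "last_modified": datetime(2023, 5, 1)},
    {"id": "well-002", "last_modified": datetime(2023, 5, 8)},
]

# Timestamp of the previous successful run; only well-002 changed since then.
last_run = datetime(2023, 5, 5)
delta = select_delta(rows, last_run)
```

The one-time full load is just the same pipeline run with `since` set before any modification date, so every row qualifies as part of the delta.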
So that's the plan: mainly external solutions ingesting and reading project data, and only some internal solutions that need interpretation data getting it from OSDU.

We deployed this on Azure, and we have three environments: development, testing, and production. We have these up and running, with all the ingestion on a daily basis, as I said, for well, wellbore, well logs, fields, and basin, and we intend to keep increasing the concepts that we are ingesting. Then we installed ADME, which is the managed version of OSDU, and Microsoft installed it on a US subscription and loaded the same data, so we have the pipelines getting data there as well.

The thing is, we hit a big problem installing this: we got a lot of errors running the scripts and the templates, so we set up a partnership with Microsoft to do the installation. That's when we decided we needed the managed version; if we had such a bad time doing the installation, the managed version would fix that, and whenever we need to change versions it will be a lot easier, since we'll just press a button and Microsoft does it. We also had some problems with the security rules in Petrobras, because every storage account and database needs to have private endpoints, and this is not expected by OSDU, so we needed to change the scripts and templates to do the installation and do some configuration afterwards. We were able to do it, so it's working now, but it was an extra step we had to take because of some security rules in Brazil.

We also did the data mapping. It was more complicated than we expected, because our data model is quite different from the one in OSDU. This is partly our fault, since we were not participating that much in the data modeling sessions. We intend to change that and be more active in the OSDU Forum, so we can get closer.
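As an illustration of the data mapping work, a minimal sketch of translating one internal well row into the general shape of an OSDU master-data record might look like this. The internal column names, the partition id, and the legal tag are all hypothetical, and the `data` attributes are simplified relative to the real OSDU well-known schema.

```python
def map_well_to_osdu(row: dict, partition: str) -> dict:
    """Map one row from the internal integrated database (hypothetical
    column names) into the general shape of an OSDU Well record."""
    return {
        "id": f"{partition}:master-data--Well:{row['well_code']}",
        "kind": "osdu:wks:master-data--Well:1.0.0",
        "legal": {
            "legaltags": [f"{partition}-default-legal"],  # placeholder legal tag
            "otherRelevantDataCountries": ["BR"],
        },
        "data": {
            "FacilityName": row["well_name"],
            # Simplified GeoJSON-style location for the example.
            "Wgs84Coordinates": {
                "type": "Point",
                "coordinates": [row["longitude"], row["latitude"]],
            },
        },
    }


record = map_well_to_osdu(
    {"well_code": "W123", "well_name": "EXAMPLE-1",
     "longitude": -40.0, "latitude": -22.5},
    "mypartition",  # hypothetical data-partition id
)
```

The real effort is in functions like this one, multiplied across every concept: deciding which internal column feeds which OSDU attribute, which is exactly the discussion that happens in the data modeling sessions.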
Now, just to show some results: these are two dashboards that we created. The left one shows some wells positioned on a map, along with some well logs taken directly from OSDU. So we did the whole process, the mapping and the automated ingestion, and we can see the results on the dashboards.

The final topic is the roadmap. We did a proof of concept with OSDU; it actually started in 2021. That was the one with AWS and with Shell Brazil, where we tested the data sharing. We also did a proof of concept with Azure, and we didn't see a big difference between the two solutions; both were really good. We decided to go with Azure just because our other data platform, for corporate data, was already on Azure, so it made sense to be in the same place.

The business prioritized the installation of OSDU in Q2; we got the confirmation that we would move in that direction, and then we installed OSDU on Azure in Q2. Then we started the ingestion: we ingested wells and well logs (well logs were prioritized by the business). That's when we connected the first internal solution to test the APIs. It's already connecting, but since CGL also has access to our integrated database, we first need more external solutions to ingest data there, so CGL can get value from this integration.

In Q4 we started the installation of ADME. It's on a US subscription, which is kind of a problem for us, because ADME is not available in Brazil yet; this is expected only in Q3, when ADME goes GA there. That's what we've been expecting from Microsoft, and that's when we will move to our definitive version of OSDU. When we install it, we intend to turn off the other versions: the ADME on the US subscription and the current OSDU we have been using. We have also started testing the seismic ingestion.
For now we are only ingesting the metadata, but we intend to start ingesting the files soon. We also started the integration with the data science marketplace, which is our internal solution for the data scientists, and we started some connections with Petrel and DSG.

In Q3 we expect the business to prioritize other internal solutions. For now we only have CGL prioritized, so we think we'll have one or two more solutions to integrate at the beginning of next year. In Q4 we intend to finalize the ingestion of seismic, so we can have this also up and running by the end of the year. This will be the mission for this year, to make this run, because the seismic files are really big and we have a lot of data as well; in the first tests it took a long time, so it's a challenge to ingest everything.

Q4 is also when we expect to see some value generation, because we will already have the master data ingested from our integrated database, we will have some external solutions with ingested data, and we will have our internal solutions, the data science marketplace and CGL, connecting to it. That's when we expect to generate value using the OSDU structure.

So, thank you.