My name is Sean Mackie. [Opening remarks partly inaudible in the recording.] We actually called it Terra Nova, since we do see this as new land, new territory that we are entering, and the vision we had was to really define the subsurface data foundation of the future: to get more hands-on with the OSDU, see where it works well, see where it doesn't, and then know what that means for our company going forward. In order to achieve that, we had five objectives that we wanted to demonstrate. The first was to bring well data, seismic data and documents into one central place, to see if we could reduce the search time for our data managers and our G&G users, to create an environment for quality control, and to start embedding some QC automation processes. We also wanted to show that we can improve the collaboration between our data managers and our G&G experts, and to see how different disciplines and their workflows can integrate with this platform. We wanted to enable faster and more efficient workflows, so we did a lot of testing of some of the new AI and machine-learning workflows on top of the OSDU, and also tried to enable better interaction between our data scientists and our subject-matter experts.
Last but not least, and this became a major focus towards the end of the project: all of this sounds nice and good, but you need to do it in a cost-effective way. For us it's not just an investment; it's actually a question of how we can use the OSDU to drive down costs in the company, and that's something we will touch on. We have a hyper-cost focus in our company at the moment, so for us to get to this next deployment phase the business case needs to be extremely solid, and we'll try to show you some of the input we have for that business case.

Looking over the past year, these were the work streams we had in order to achieve those objectives. There were six work streams, many of which ran in parallel, and five of the six are now completed. To give a summary: the first one, top right, took up the majority of the programme. Especially the first six months were dedicated to that work stream itself: implementing the OSDU in our own Azure tenant, and also utilising an enterprise data solution from SLB that enables our data managers to bring data from three assets across three of our business units into the OSDU, to visualise it and to do the data management workflows. A key point I want to mention here, as a mid-sized company: we don't have the luxury of data engineers, developers and so forth. We're really dependent on more traditional data management workflows, so we needed a solution that our data managers can use to bring data into the OSDU and do those workflows. That was one key reason why SLB has been a major partner in our Terra Nova programme. The second work stream: our most widely used subsurface application by far is Petrel, and if we have an OSDU environment and it's not working with Petrel, then it's not really going to go anywhere for us.
So we wanted to test how well Petrel can integrate with the OSDU, streaming seismic and also testing some of the new AI and machine-learning workflows that are available there. We also wanted to test some data science use cases. We have a data science team that gets use cases from the business, and we wanted to see how well that works on the OSDU: instead of getting their well data from various siloed databases, can they fetch it using the OSDU APIs, run their data analytics workflows, and enable a bit of collaboration between the data scientists and the subject-matter experts?

To summarise the last few: we also wanted to look at a couple of third-party applications that we use. One was Earth Science Analytics, for machine learning for geoscience, and the other was Geologic, for well planning, and we demonstrated the connection of those two applications to our OSDU environment. So then we had Petrel, Earth Science Analytics and Geologic all using the same data. I also want to mention that one work stream was focused on change management. We had a dedicated work stream on change management using the structured ADKAR model, and we found this vitally important, because we had more than 40 users across the business involved in the testing. Making sure you're getting the lessons learned from them, the feedback, the surveys, knowing where they are on the change curve: that's sometimes the biggest hurdle, and we tend to focus on all the technical stuff. So we put a dedicated work stream here, and I think that benefited us a lot: we were at least able to make sure the users felt they were part of the journey and that their feedback was taken on board. So that's, in a nutshell, what was done. Max will also touch a bit later on some of the outcomes, but bridging over to Max: he will now talk about our challenges around data.
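As an illustration of the data-science pattern described above (fetching well data through the OSDU APIs rather than from siloed databases), a rough sketch of a call to the OSDU Search service might look like this. The host, data partition, token and record kind are placeholders; the actual values depend on the deployment and the schema versions in use:

```python
import json
import urllib.request

OSDU_BASE = "https://osdu.example.com"  # placeholder host
PARTITION = "opendes"                   # placeholder data partition

def build_well_log_query(wellbore_id: str, limit: int = 100) -> dict:
    """Build an OSDU Search payload for well logs attached to one wellbore."""
    return {
        "kind": "osdu:wks:work-product-component--WellLog:1.*.*",
        "query": f'data.WellboreID:"{wellbore_id}"',
        "limit": limit,
    }

def search_well_logs(token: str, wellbore_id: str) -> list:
    """POST the query to the Search service and return the matching records."""
    req = urllib.request.Request(
        f"{OSDU_BASE}/api/search/v2/query",
        data=json.dumps(build_well_log_query(wellbore_id)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "data-partition-id": PARTITION,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp).get("results", [])
```

The point is that one search endpoint replaces a connector per silo database: the same call shape serves well logs, seismic or documents by changing the `kind`.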
So apart from being fashion models, I think we also created an OSDU PowerPoint portfolio, which you'll see in a second. Going back a bit to the why, why we've done this: we have a few pains in the company (that's me on the slide), and these will probably be similar for other companies or people here in the room. We don't have a master database for subsurface data; we have different databases. We don't have a connection between our legal documents and the data, so we're working with seismic but we don't know if we're still allowed to use it, for example. Quality control is very time-consuming, and we don't have a best practice for it. Our archive data is basically not searchable, because we don't have any tools running on top of it. We have data in the cloud, because we have some cloud applications as well, but we have no control over the data that we push into the cloud. We have a diverse landscape; we've heard that before today, that between business units there are different ways of storing your data, in databases or not even in a database but in a file structure. Where is the latest and greatest data? Is it in a project? Is it in a folder? Is it in a database, and which of the databases is it in? And we don't have standards, not even a simple naming convention; every time a person changes, for example, you get a new naming convention. So those are the pains, and the solution, in short, is then the OSDU (it's me again on the slide).

With the solution we of course also have to show the value. We have a centralised database, so we can dissolve some of our old databases: one point to go to now. We have legal tagging, so we adhere to the legal contracts that we have, mitigating the risk of not being compliant with the rules and getting penalties. We have automated QC. And what's maybe missing here is the metadata enrichment.
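To make the legal-tagging value concrete: in the OSDU, every record carries a `legal` block, and the data stays usable only while its legal tags are valid. A minimal sketch, where the tag names, ACL groups and kind are invented for illustration:

```python
# A minimal OSDU-style record skeleton; tag names and ACL groups are invented.
seismic_record = {
    "kind": "osdu:wks:dataset--FileCollection.SEGY:1.0.0",
    "acl": {
        "viewers": ["data.default.viewers@opendes.example.com"],
        "owners": ["data.default.owners@opendes.example.com"],
    },
    "legal": {
        "legaltags": ["opendes-survey-x-licence"],
        "otherRelevantDataCountries": ["NL"],
    },
    "data": {"Name": "Survey-X 3D"},
}

def is_usable(record: dict, valid_tags: set) -> bool:
    """Data stays usable only while every legal tag on the record is valid."""
    tags = record["legal"]["legaltags"]
    return bool(tags) and all(tag in valid_tags for tag in tags)
```

When a licence expires, its tag drops out of the valid set and the data is flagged automatically, instead of someone discovering years later that seismic was used outside the contract.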
So we make sure we don't have any duplication in there; we add metadata to it, and we also have automated QC now, and a scoring that we've never had before. The OSDU can catalogue, so even our data that sits in the archive system can now be made searchable. We have a standardised approach to storing, organising and accessing our data, whether it's cloud or on-prem, because the on-prem or archive data can at least be made searchable for the time being. We have a unified landscape, so we do everything the same way in every business unit, while still letting each business unit be the owner of its own data. Latest data: you have the golden record, so you don't have to go through the different versions, you know where it is. The last one is, I guess, the whole story, so I don't have to explain too much what the idea behind it is. And we have standards, so we finally have a technology that works for data governance as well.

Then, next. Yep, there we go. These are the workflows that we've tested, with the OSDU in the middle. I'll start at the top right. If you were in London, I presented there as well on what we've done. With the enterprise data solution we've trained 15 data managers, who did the ingestion, the visualisation and the QC-ing of the data, also testing it against the criteria that we had, and we basically concluded that we could use this tool as our future data application on top of the OSDU. It also supports all of our most important data types, apart maybe from one, 2D seismic, which will come early next year, I think. That will also be the moment where we say we go for deployment, which was also our advice, and Sean will come back to that a bit later.
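The automated QC and scoring mentioned above can be as simple as a completeness check over required metadata fields. This is a hypothetical illustration, not the project's actual scoring; the field names are invented:

```python
# Invented required-field list for a well header; adapt to your schema.
REQUIRED_FIELDS = ("FacilityName", "SpatialLocation", "Source", "VerticalMeasurements")

def completeness_score(metadata: dict, required=REQUIRED_FIELDS) -> float:
    """Fraction of required metadata fields that are present and non-empty."""
    filled = sum(1 for f in required if metadata.get(f) not in (None, "", [], {}))
    return filled / len(required)

def qc_flag(score: float) -> str:
    """Map a score to a traffic-light flag a data manager can filter on."""
    if score >= 0.9:
        return "green"
    if score >= 0.6:
        return "amber"
    return "red"
```

Running a rule like this at ingestion gives every record a score and a flag, so data managers can prioritise the red records instead of eyeballing everything.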
Then the second workflow that we've tested is Petrel PTS, where we had 10 geoscientists working on the workflows between the OSDU and Petrel: ingesting data, liberating data from Petrel into the OSDU, and also consuming the data. Petrel PTS also comes with some AI and machine-learning workflows that we tested, and in our business unit in Norway, where we did most of the testing, they came up with an efficiency gain of 50% for the workflows we have in combination with the OSDU and Petrel PTS. The third one is Data IQ, which is the data science part, where we had two use cases, one of which was to fetch data from the OSDU, do some reprocessing on the well logs, some machine learning or AI on them, then push that back into the OSDU and consume it with Petrel. Of course, Data IQ is just one option; you can use other data science tools on top of it as well. Then on the left side we have three other components, of which two are cloud applications that we use at the moment, and the third is a DELFI-native application called Opportunity Assessor. For Earth Science Analytics, for example, we push seismic into the OSDU and then consume it with Petrel, and vice versa; we also push well data from Petrel into the OSDU, and Geologic consumes it to show, for example, the well paths with the logs on top and some interpretation such as markers. I think I got everything on that one. Then this is basically showing our journey, the change journey; change has also been mentioned a couple of times today.
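The round trip described above (fetch, reprocess, push back) ends with a PUT to the OSDU Storage service, which versions records, so the original logs stay intact. A rough sketch, with invented ACL groups, legal tags and names:

```python
import json
import urllib.request

def build_derived_log_record(wellbore_id: str, curves: list) -> dict:
    """Wrap reprocessed well-log curves in a minimal OSDU storage record."""
    return {
        "kind": "osdu:wks:work-product-component--WellLog:1.0.0",
        "acl": {
            "viewers": ["data.default.viewers@opendes.example.com"],
            "owners": ["data.default.owners@opendes.example.com"],
        },
        "legal": {
            "legaltags": ["opendes-internal-use"],
            "otherRelevantDataCountries": ["NO"],
        },
        "data": {
            "WellboreID": wellbore_id,
            "Curves": [{"Mnemonic": c} for c in curves],
            "Name": f"Reprocessed logs for {wellbore_id}",
        },
    }

def put_records(base_url: str, token: str, partition: str, records: list) -> dict:
    """PUT records to the Storage service; prior versions are preserved."""
    req = urllib.request.Request(
        f"{base_url}/api/storage/v2/records",
        data=json.dumps(records).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "data-partition-id": partition,
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Because the derived logs land back in the same platform, Petrel or any other consumer can pick them up through the same search and delivery path as the source data.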
I don't think I've ever worked on a project where emotion was such a big factor every day. You're fighting against people with preconceived ideas and biased views, and your main enemy is fear: fear of something new, because we all like our old stuff; if it works, it works, and it's comfortable. But there is also the fear of good new tools replacing potential jobs, which is not the case, but that fear component is there, and the fear of a tool that's maybe not 100% working, despite it actually covering all your critical workflows. You can use that feedback, and that fear as well, to turn things around: if you know what the fears are, you can explain things in a better way, and you can use the feedback people provide. For example, with the enterprise data solution tool, the feedback we gave to SLB led them to improve the tool, and we brought that back to our users, making them a bit more comfortable that they are listened to and are part of the whole process. That's the change topic you need to keep in mind: always keep people informed and involved. Then there are also negative stakeholders, who maybe only listen to 10% of your story, and there's always 10% that's really bad and gets spread around, creating fires we need to extinguish. That's basically what you do on a daily basis. On top of proving the technology, you're fighting all these emotions, and unfortunately we're now in a phase of hard cost cutting in the company, so on top of that we also need to show a good cost picture. But in the end, with a motivated team and good change management, we got there, and we even managed to get all the critical stakeholders at the close of the project, that is IT, exploration, reservoir management, and also the managing directors of the business units, to recommend the deployment of the OSDU. They all said: let's go for the OSDU, let's push forward. I think I got that, yeah.

Maybe just to highlight: I'm sure many of you are going through some of this, and for us it was daily, weekly. With some stakeholders we thought, are we ever going to change their minds? But at some point, and I can't really put my finger on why, the tipping point did happen, and I would say our most skeptical and conservative stakeholders also changed their minds, I guess through Max demonstrating the workflows and the visualisation to them in super detail. It took many, many attempts, but it is possible, and we were thankful that in the end, as you said, all the key stakeholders not only were positive but also put forward a recommendation to deploy. I think that is a testament to the change management support as well; it helps along the way. Basically, what it shows is that with every issue you solve you think you're going up, but then there's a massive tumble down and you start again, and slowly you get up to the top. Yeah, thank you. One more slide there... oh yeah, I forgot one, sorry, it's still me.

The managing directors said: let's go for deployment. So we also showed them different deployment scenarios, of which one is the most simple one: a single source of truth. We replace some of our databases, go for the OSDU, and with that establish this single source of truth, with no third-party application integration for the time being. The second one is basically number one plus: we have the single source of truth, but we also connect our cloud applications to it, so we have greater control of our data in the cloud, also empowering our data managers to use these applications better and more. The third one, which is not something we want, is a hybrid environment, where you leverage your current infrastructure and investments and connect the OSDU to the on-prem application environment. Sorry, I shouldn't have drunk the Pepsi on stage. This comes with significant technical complexity, high cost and a poor end-to-end user experience, because you probably have to switch between virtual machines, and it's not going to help the whole case. Then there is deployment scenario number four, which is probably not really feasible at the moment; this would be our dream scenario, a more strategic way of looking at it, where we go fully integrated: we host the OSDU and the application environment on our Azure, to reduce complexity and enable full integration. Like I said, this is considered a strategic target for the future. Then there's also a fifth one, no deployment, where we showed some alternatives, with a cost picture even, where we utilise our traditional databases and basically go forward the way we did. That comes with risks: no open access, no modern search, no legal tagging, no automated QC, and it will be harder to enable AI and machine-learning workflows on your data, because it's all spread around. So that's definitely something we would not advise, but these are our deployment scenarios, and I think Sean is going to go more into our actual deployment scenario.

Yeah, sure. So just to highlight: it was a request from our steering committee and the managing directors of the business units to show all the scenarios, even no deployment, so that all the different scenarios they could choose from were fully transparent. In the end, the scenario we proposed was to get started with the smallest deployment step possible that is cost-effective, and that was scenario one: basically a deployment of the OSDU and the enterprise data solution for our business unit in Mexico, which is one of our major business units, with both operated and non-operated assets. They have some of the most fundamental data challenges that Max mentioned: there is no master repository in Mexico, all their data is unconsolidated on the file system, and it comes from pre-legacy mergers and acquisitions of companies. They're crying out for us to support them with the OSDU by creating a clean environment for the data managers and the G&G community, with modern search and visualisation that can support them in their daily business. As part of that deployment scenario, we'll bring their well-related data into the OSDU, together with the associated documentation, but we will likely keep the seismic data on-premise, just because most of their workflows are still in an on-premise environment and we don't want lots of transfer costs. We will, however, catalogue that seismic data and make sure it's searchable in the OSDU, with the file path there. So that's the first approach. The second step is that we're going to use the OSDU to rationalise roughly 10 corporate databases for well- and seismic-related data, some of which are not even maintained anymore, so we need to find alternatives. That rationalisation alone is close to 2 million euros of savings, which more than pays for the OSDU deployment for our business unit Mexico.

That was our deployment scenario proposal, but the feedback from our leadership was: we don't just want to do this for Mexico; if we're going to do this, we might as well consider what our global phased deployment will look like, which I will also touch on. Before I do, I want to mention that in order to achieve a more global phased deployment, you can't get away from having a cloud data and application strategy. That's one thing we've had to push, mostly from the OSDU project, and something we've been working on for the past year. We see that if you want to get to this fully integrated environment, you need to archive your data in Azure, you want your OSDU deployed in the same cloud environment, and you also want your applications running in the same cloud environment, even if they're not yet OSDU-compatible, just so that over time, as they become compatible, you're at least set up in the right infrastructure to enable that. For that you need a common cloud and application strategy, which is something we are finalising by the end of this year, and it's also how we see our IT and digital coming closer together. This is where the cost saving comes in: we see that this strategy can really enable storage savings, it can support the application rationalisation project that we have, it can help us build a more right-sized infrastructure, and it enables us to achieve our data-driven aspirations. Those four areas you see below are global projects now being initiated within our company, and we made it clear that you can't do this in isolation; you need to do this under one common strategy, and we now have buy-in that at least the OSDU is the way forward when it comes to our future data foundation.

So I think this is the last slide for myself, and these are now really the final pieces we are putting together for the deployment strategy. I think we have the all-clear to go for Mexico, but this is now looking at it in a broader context. The first thing being triggered is: how do we prepare our data for the OSDU? There is now a global cleanup being triggered to de-duplicate data, but probably most importantly, we want to shift most of our data into cheap archive storage, because we have something like 70% of it on high-performance machines, and the vast majority of that hasn't been used in a year. So we have major costs that we want to mitigate there. Then we want to rationalise our databases; as I said, that's 2 million from the rationalisation alone, and I would say multiple times that from the cleanup and moving data to cheaper storage. That's the precursor of what we need to do to then implement the OSDU for business unit Mexico, and we aim to do that by the end of next year. As Max said, there's one key feature we're waiting for, the 2D ingestion from the enterprise data solution, and we need this prep work done on the cleanup in order for us to do that. Once that's in place, and in parallel our business units Egypt and Germany are also doing a cleanup, we can then scale the OSDU and the EDS environment there. The next step after that is to test what our application environment looks like in the cloud: to test hosting our third-party applications in a new Azure environment, and then start to see how that works with the OSDU, not for all of them, but at least for a few of our key applications that carry the brunt of our workflows. From there it's a global scale-out.

That's what we're piecing together now, basically, and the main value we think we can deliver is related to rationalisation of applications and quite significant storage savings. We're looking at this from a total-cost-of-ownership perspective: what is the total cost of ownership today, and how does that look when you configure this new environment? So far we see quite a positive business case, especially in times when very severe cost cuts are going on, which makes a deployment extremely difficult, but it looks at least likely that we'll be going ahead with a deployment. The decision for that deployment is on the 24th of November, so it's not confirmed yet, but we already see positive feelings from the main stakeholders, and we've been verifying the key business case topics with them up front. So at least in 2024 we see the Mexico deployment as highly likely, and it will come with a cloud and application strategy to support and orient our IT and digital initiatives. So yeah, that's where we are today.
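One way to quantify the "unused for a year" figure mentioned above is to scan the file system for last-access times. A sketch along these lines, where the threshold is illustrative and access times must actually be trustworthy on the file system in question:

```python
import time
from pathlib import Path

SECONDS_PER_DAY = 86400

def is_stale(atime: float, now: float, days: int = 365) -> bool:
    """True if the last-access timestamp is older than the archive cutoff."""
    return atime < now - days * SECONDS_PER_DAY

def archive_candidates(root: str, days: int = 365):
    """Yield (path, size_bytes) for files untouched for `days`: cold-tier candidates."""
    now = time.time()
    for path in Path(root).rglob("*"):
        if path.is_file():
            st = path.stat()
            if is_stale(st.st_atime, now, days):
                yield path, st.st_size
```

Summing the yielded sizes gives the volume that could move from high-performance storage to a cheap archive tier (for example, Azure Blob Storage cool or archive tiers), which is where the "multiple times 2 million" saving estimate would come from.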