Hi, I'm Max de Groot from the Netherlands, based in Germany, working in our corporate unit in Hamburg, in D&T, the digital and technology team, on our OSDU project. In this presentation I'll give you, in a very tiny nutshell, an overview of Teranova. Teranova is the name we gave the project because we're stepping into unknown territory, I would say. I'll give you a quick live demo of the tooling we use, which comes from SLB, and how it works. Then we'll look at the journey of the data managers, the learnings and challenges, and finally the roadmap for the rest of our project.

This very tiny nutshell is a big slide, but basically what we've been trying to do is implement OSDU into our company, initially as a test. We do this together with SLB, with their SETS tooling, the SLB Enterprise Data Solution. We have ingested interpretation data, well data, documents, spatial data, and also completion and production data. We really did this from the data management point of view: we wanted to prove the data management perspective, show that it would improve our workflows, and prove the ability to prepare, ingest, analyze, QC, govern, search, et cetera. We want a system of record, creating the so-called golden record, your single source of truth coming from multiple sources. That's what our MVP was built on, and it actually ended a few weeks ago. In the second phase we're going into application interoperability, which you've now seen from a couple of other presenters; I hope that will also be a success. After that comes the cloud and application strategy: before we go all in on our cloud and application strategy, we need to prove that we can manage our data well in a cloud way, so this is really the start of that, I would say.

Now bear with me for the demo. When I prepared, I thought I could pause the video whenever I wanted, but that requires a mouse, so I'll have to speak a bit quickly over it; I'll do my best. Before I start, some introduction. This is the data workspace, as it's called, part of the SETS tooling from SLB. Within the data workspace you have your data spaces, and a data space is where you ingest your data into; we divided these by country, so we've ingested data for Mexico, for Norway, and for Germany. What you'll see in the demo, though, is data from New Zealand, which is public data provided by SLB; the video is also provided by SLB, by the way, so thank you for that. We'll show the process of ingesting a log file, a LAS file, and what comes with that: the legal tagging, the QC-ing, and then searching for the data, visualizing it, and some more data afterwards. But let's see if I can keep up with the video.
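To make the legal tagging step concrete before the demo: in plain OSDU terms this corresponds to creating a legal tag through the standard Legal service REST API, which every ingested record then references. The sketch below is a minimal illustration, not the SETS workflow itself; the base URL, partition ID, token, and all property values are placeholder assumptions, while the endpoint and property names follow the open OSDU specification.

```python
import requests

# Placeholder assumptions: your OSDU endpoint, data partition, and token.
OSDU_BASE = "https://your-osdu-instance.example.com"
PARTITION = "your-partition"
HEADERS = {
    "Authorization": "Bearer <access-token>",
    "data-partition-id": PARTITION,
    "Content-Type": "application/json",
}

# A legal tag whose expirationDate tracks the underlying contract; once it
# expires, records carrying this tag drop out of end-user search results.
legal_tag = {
    "name": "demo-newzealand-public",  # the service prefixes the partition
    "description": "Public New Zealand demo data",
    "properties": {
        "countryOfOrigin": ["NZ"],
        "contractId": "No Contract Related",
        "expirationDate": "2026-12-31",
        "originator": "OSDU-demo",
        "dataType": "Public Domain Data",
        "securityClassification": "Public",
        "personalData": "No Personal Data",
        "exportClassification": "EAR99",
    },
}

resp = requests.post(f"{OSDU_BASE}/api/legal/v1/legaltags",
                     headers=HEADERS, json=legal_tag)
resp.raise_for_status()
print(resp.json())
```

The point of routing every record through such a tag is that expiry is enforced centrally: when the tag lapses, compliant OSDU services stop returning the records, which is the contract behavior described next for Mexico.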
So, the data spaces. Now we go into the ingestion part: you browse for your data file, in this case the LAS file, and add it to a certain data space, here the New Zealand one. Then comes the legal tagging. In our case, for example for Mexico, every well comes with a contract, and the legal tag really helps us adhere to the contract that comes with it: if a contract ends next year, the data becomes invisible to an end user, so you won't be using data that you're no longer allowed to use. What's happening in the background now is that we're looking for the parent well, so we're going to link the data to a well, the Amokura well. Then you go to the ingestion phase, then standardization, which is the mapping in the background, and then there's a quality control step: before you continue with the ingestion, you can manually check your data. You have the summary overview and the JSON viewer, where you can look at all the metadata in the background, which is maybe not that interesting for an end user, but for data managers it is. Then technical assurance: you set it to trusted if you're okay with it, you push it, and then comes the approval phase. If you keep it at trusted, it's only visible to data managers; if you set it to certified, it becomes available to all end users. Voilà, I think I managed.

Now we'll try to visualize the data and find the LAS file we just ingested. First, with the filters, we look for the well; you can do this by just typing the name, and it will find it. Here you can see all the data that has been related to the well. On the right side you see the related digital entities, though it's going really fast. Then the log viewer: we go into the log viewer and look at the curves themselves, and if we have markers, we can add those as well. So you can visualize the data before you actually put it into your Petrel project or whatever you use. There's also the QC ruling: you implement some QC rules before you ingest, for example that the well needs to have a name, or the log needs to have a name, so you can make a preset of quality. Then there's the document ingestion. This document is linked to the well, and you can open it in the document viewer. It also recognizes logs, tables, and forms, so you can look at them quickly. Now we'll have a look at some seismic. Seismic is basically the same thing: you can look at the trace views, have a look at your seismic, scroll through it, inlines and crosslines, and change the coloring. You can basically have a look before you ingest it into your Petrel project, PaleoScan, or wherever you need it. Going back to the related digital entities: you can link documents, well data, the field it belongs to. So you really have basically all the data in one visualization: you can see the location of the well, go straight to the wells and documents, like I said, and the metadata is there as well. For a data manager, it's a little piece of heaven, I would say.
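The well lookup in that search step maps, in plain OSDU terms, to a query against the standard Search service. Below is a minimal sketch, assuming the same placeholder endpoint and headers as in the previous snippet; the kind string uses the well-known osdu:wks Well schema with a wildcarded version, and the well name is just the demo example.

```python
import requests

# Same placeholder assumptions as before.
OSDU_BASE = "https://your-osdu-instance.example.com"
HEADERS = {
    "Authorization": "Bearer <access-token>",
    "data-partition-id": "your-partition",
    "Content-Type": "application/json",
}

# Find the demo well by name; the UI's type-ahead boils down to a query
# like this against the indexed metadata.
query = {
    "kind": "osdu:wks:master-data--Well:1.*.*",
    "query": 'data.FacilityName:"Amokura"',
    "limit": 10,
}

resp = requests.post(f"{OSDU_BASE}/api/search/v2/query",
                     headers=HEADERS, json=query)
resp.raise_for_status()
for rec in resp.json().get("results", []):
    print(rec["id"], rec["data"].get("FacilityName"))
```

The related digital entities shown in the demo, such as logs and documents, can then be found by querying the work-product-component kinds that reference the well's record ID.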
Then the journey of the data managers. For the MVP, as I said, we started with the need to prove the data management part of OSDU, together with the SETS tooling, and show that it works. First, we established an experienced team of data managers. That means we wanted basically a specialist in every segment: a specialist in seismic, in well data, maybe in spatial data; a good, experienced team. Second, get them familiar with OSDU and SETS, and that's maybe one of the harder things, because every time a new person comes into the project, or you have to explain it to somebody, they mix up SETS and OSDU. It takes a lot of explaining, I have to say. Part three was to create testing criteria to validate SETS and OSDU against your daily workflows: we do daily work as well, and we want to make sure that it can be done, maybe even in a better way, in OSDU and SETS. Fourth, gather and prepare data sets across three assets and business units. We did this for Mexico, Germany, and Norway, where Germany is a very mature asset, Mexico is more of an exploration asset, and Norway is a development asset. Although the data in the end doesn't really differ, we wanted three different types of assets in. Number five, create data splits, so each data manager has a data set to ingest. We wanted representative data sets, and it was a very tedious job, I have to say, but with all the data we have, I tried to give every data manager a decent data set to ingest. Then, six, train the team on SETS, which was done in six sessions, and basically at the same time, seven (six and seven can be merged), create a playbook to guide the ingestion workflow, so you can go back to the manual and look up how it works.

I have to say that after the training, because it is a very new and semi-disruptive workflow compared to what you do on a normal daily basis, our data managers weren't feeling comfortable ingesting data straight away. So we decided to do some hands-on, in-person sessions: we went to every business unit, together with SLB, and did the initial ingestion all together, which worked perfectly fine. Then, document the test criteria and feedback: of course we had a list of criteria that we wanted to check, so we gathered the feedback on those, and also the personal feedback, because it's not always about technical things. And finally, during our ingestion workflows we reported bugs and enhancements, so that we can also improve the tooling; this is more about SETS than OSDU. And that should lead us to success.

Maybe on a more personal, human level, because we all talk about the technical part, but there is of course a person behind the technical work: your average subsurface data manager has attention for detail, a strong understanding of data management principles, preferably a background in geology, geophysics, or something related, and hopefully good communication skills. My own background is geology, but I've worked most of my career in data management as well.
And from my perspective, in the last ten years my work didn't change that much, apart from maybe a new Petrel version, a new GIS version, or a new OpenWorks version. In general, the work itself has not changed in the past couple of years, maybe even decades. So a data manager's work has not changed a lot, and you might even struggle to get some of them on board, because why would we need to change something that has been going quite well for years? You might get people not wanting to be part of it. But if you show them the potential, the benefits you could create, and the time you're freeing up, not for actual free time but maybe for more interesting work (somebody mentioned yesterday the Monday-morning work, those recurring tasks that don't bring any value), you might get them interested again. And this, of course, is not only for data managers; it goes for end users too, basically for everybody. If you show them the potential, you will have a very reliable colleague fully on your team, doing their absolute best to maintain a proper database. As I said, I think this goes for every part of the business.

Then some feedback summary; sorry, it's a whole load of text. The sentiment and trust are on the rise. Like I said, after the training people weren't feeling that comfortable with the ingestion itself, going off on their own with OSDU and SETS, but after the in-person ingestion sessions they felt a lot more comfortable. Some quotes from the feedback: the tools are easy to use, it's cool to see the naming conventions, and it's cool to see it work quite quickly. The naming conventions and the possibility to correct the items that you ingest need to be improved. One of the good things is that some of the bugs and enhancements we raised have already been implemented in the meantime. On the other hand, the seismic scanning and ingestion can be a bit buggy; that may be related to our data, which is maybe not 100% correct. Previous tools were less error-sensitive, whereas ingesting into OSDU is then a bit more difficult. Maybe the last one, where the quote doesn't fully reflect the point: it's a great ability to link, for example, documents to wells. One thing we realized for our very mature asset is that we have old physical reports that were photographed and put into PDFs, with a subdivision inside, and that's not supported at the moment.

Then the benefits, and I think it's a first in our case: having this golden record, merging all your data sources into one, having basically all the attributes attached to one single data item, your single source of truth, your golden record. On data quality: it always remains the responsibility of an end user or a data manager to go through the data and check it.
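To illustrate the golden-record idea in isolation: the sketch below is purely illustrative, not how SETS or OSDU implement it. It assumes a made-up precedence list of source systems and fills each attribute of the merged record from the most trusted source that actually has a value; all names and values are dummies.

```python
# Illustrative only: merge per-attribute values from several source systems
# into one "golden record", taking each attribute from the most trusted
# source that has a value. Source names and precedence are made up.
PRECEDENCE = ["corporate_db", "openworks", "vendor_delivery"]

def golden_record(sources: dict[str, dict]) -> dict:
    merged: dict = {}
    for name in PRECEDENCE:
        for key, value in sources.get(name, {}).items():
            if value is not None and key not in merged:
                merged[key] = value
    return merged

sources = {
    "vendor_delivery": {"well_name": "AMOKURA-1", "td_m": 3505.0},
    "openworks": {"well_name": "Amokura-1", "spud_date": "2004-02-01"},
    "corporate_db": {"well_name": "Amokura-1", "td_m": None},
}
print(golden_record(sources))
# -> {'well_name': 'Amokura-1', 'spud_date': '2004-02-01', 'td_m': 3505.0}
```

A real implementation would also keep per-attribute provenance, so a data manager can still trace every value in the golden record back to its source.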
It's not a silver bullet, but it will help us. The automated QC checks that you can turn on before you ingest your data are going to give us an initial sorting of data quality (a small sketch of such a rule preset follows at the end of this transcript). Then there's the upskilling, so that data managers can ingest and contextualize data themselves, industrializing data management. Like I said, this is quite disruptive, a completely different way of working, and I think we have not been upskilling colleagues at this level in a long time. And the last one is the data-driven part. What you could say there is "walk the talk", and I think OSDU will really help us walk the talk and put data first, data separated from applications, and help us make better decisions.

Am I still on time? Another couple of minutes? Okay, I'll be really quick.

Learnings: learning by doing was the most efficient way. Data management approaches with OSDU and SETS are fundamentally different from what we've been doing so far. Data mapping is very complex, as I think we heard yesterday as well; it requires dedicated people with expertise. I'm not sure we'll ever do that ourselves, but at least we need to be aware that it takes time: either we bring in an expert or we hire somebody to do it. And there is a huge amount of stakeholder management: you really need to keep people on board and keep them updated, because they lose track a lot and you need to keep explaining.

Then maybe some challenges. There are dependencies on proprietary DDMSs for some data types; for the interpretation data, for example, we currently use the SLB proprietary DDMS. And there's the technology readiness of OSDU and SETS: I think everybody, when it comes to OSDU, struggles with the fact that some data types cannot be ingested yet. OSDU is disruptive, with a huge impact on the cloud, data, and application strategy, but also on people.

And the roadmap, quickly. We're now in Q1 and have just finished the MVP: the data migration, the SETS testing, and the feedback. We're now going into the interoperability of applications. We have a larger data management project, or epic, coming up, and later on we want to make a migration plan, in case it's a success, of course, plus expert plan testing and the value proposition. We also talked to some application providers, like IP and tNavigator, which we'll be working together with. And by the end of August we want to make a decision: are we going to continue the OSDU storyline, or are we going to stop and wait until there's more improvement?

Thank you.
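And the promised sketch of a pre-ingestion QC preset, in the spirit of the rules described in the demo ("the well needs to have a name", "the log needs to have a name"). This is not the SETS rule engine, just the shape of the idea: each rule is a named predicate over the candidate record, evaluated before ingestion; the field names are illustrative, not the actual OSDU schema.

```python
# Minimal sketch of pre-ingestion QC rules. Field names are illustrative.
from typing import Callable

Rule = tuple[str, Callable[[dict], bool]]

RULES: list[Rule] = [
    ("well has a name", lambda r: bool(r.get("well_name"))),
    ("log has a name", lambda r: bool(r.get("log_name"))),
    ("log has at least one curve", lambda r: len(r.get("curves", [])) > 0),
]

def run_qc(record: dict) -> list[str]:
    """Return the names of all failed rules; empty list means QC passed."""
    return [name for name, check in RULES if not check(record)]

candidate = {"well_name": "Amokura-1", "log_name": "", "curves": ["GR", "DT"]}
failed = run_qc(candidate)
print("QC failed:" if failed else "QC passed", failed)
# -> QC failed: ['log has a name']
```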