All right. Hello, everyone. I've got Jamie with me on stage, along with Rob Bond from AspenTech and Dave Butcher from Halliburton. My name is Theresa Rannam, and I work in Equinor. We're here to present the collaboration that we've had ongoing since October last year. The idea actually came up at the last face-to-face meeting in Houston. We worked through March this year, and these guys are going to help me show some of the demos from that work.

In Equinor, we have traditionally had Petrel in our exploration workflows and DSG in our asset production workflows, so we wanted to see if we could use OSDU to share data between them. That's where we started. We didn't want to do the manual transfer anymore, because we're sick and tired of that. These are some examples of what we did, and we'll go into more detail. We had one ADME instance installed in the Equinor tenant, where we connected iEnergy and DELFI. What's also important for us is to see that we can get the data from these tools into our data science tools, so we also connected the OneSeismic app that we made in Equinor. We wanted to prove the interoperability concept, and we wanted to do it before February 1, so we focused on seismic. And we did prove it: we were able to stream the same seismic to all three of these, do some operations, put the results back into ADME, and they could all read it again. So that's the first step. Now I'm going to leave it to Dave to go through that first.

I'll take that one. So if you fire off the video, I'll say a few words about the project that we started off. Okay, you'll get some pointers on the screen about what we're doing, but the challenge really was to integrate our DecisionSpace applications, our DecisionSpace Geosciences suite, and our Seismic Engine applications into Equinor's OSDU instance. Here you're looking at Seismic Engine, which is our seismic attribute calculation engine. We're browsing Equinor's OSDU instance, so we're connecting an application on iEnergy into Equinor's instance. We're selecting data, and Seismic Engine is a cloud-native application that allows you to run very complex seismic attributes and any seismic post-processing cleanup. You can create very complex flows. It's all cloud-native and completely scalable: fire up as many compute nodes as you like, all through a very simple web interface. We have no dependency on any of our traditional technologies or data stores, so it's been very easy for us to layer this on top of OSDU. In its design we're leveraging some of the principles that Chandra outlined in terms of data federation and data virtualization. We've run it now, and we're doing some work on the UI so that we can actually show some of those metadata attributes back to the user; we can't do that today. Now, this very exciting piece of the demo: we're actually in Postman, running queries to show that we're populating the correct metadata and following the OSDU standards in how we populate data back when we do those writes. But we're running against the native VDS through the APIs; we're not doing any data translation into our traditional formats. We're streaming the data.
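The Postman step described above is essentially a metadata query against the OSDU Search service. For readers who want to try the same kind of check outside Postman, here is a minimal sketch, assuming the standard OSDU Search API; the host, partition id, token, and exact schema version are placeholders and will differ per deployment:

```python
# A minimal sketch of the metadata check shown with Postman, assuming the
# standard OSDU Search API. Host, partition id, token, and the exact schema
# version are placeholders and will differ per deployment.
import requests

OSDU_BASE = "https://<your-osdu-host>"   # hypothetical host
PARTITION = "<data-partition-id>"        # your tenant's data partition
TOKEN = "<access-token>"                 # obtained through your OAuth flow

def search_seismic_trace_data(free_text: str, limit: int = 10) -> list:
    """Query the OSDU Search service for SeismicTraceData records."""
    body = {
        # Well-known kind for seismic trace data; the version may differ.
        "kind": "osdu:wks:work-product-component--SeismicTraceData:1.*.*",
        "query": free_text,
        "limit": limit,
        "returnedFields": ["id", "kind", "data.Name"],
    }
    resp = requests.post(
        f"{OSDU_BASE}/api/search/v2/query",
        json=body,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "data-partition-id": PARTITION},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

for record in search_seismic_trace_data("*"):
    print(record["id"], record.get("data", {}).get("Name"))
```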
The other thing I'll say is that the AI/ML tools we're bringing to market now for automated and semi-automated interpretation, for faults, for horizons, for salt body detection, all sit on top of and run through this, so they're all OSDU-native by design, bringing AI/ML to users' fingertips. The second piece was getting the data into DecisionSpace. DecisionSpace is more of a traditional application, so how do we make that play with this? At the moment we're showing that we do the write-back into DecisionSpace, so this is translating the data back into the traditional data store. Our commitment to Equinor, which we're continuing to work on and will deliver shortly, is to deliver the same native read access to OSDU VDS data in any format, whether it's VDS or VDS+, in the application, without doing any data translation into our traditional brick format. You can still do that if you want, but as we move workloads to the cloud, we really don't want to replicate data. Thank you very much.

Thanks very much, Dave. So I'm here on behalf of Chi Moorfeld. I'm a poor substitute for Chi, but I'll discuss what we've got here. Yep, go ahead. So again, we're focusing on this seismic streaming use case, which is an incredibly valuable early use case. What we can see here is our Petrel application connecting straight to the Seismic DMS, with data that has been ingested into OSDU, stored and managed in OSDU, and streaming into Petrel without making a copy of that data. This is one of the key patterns we want to establish: ingest your data once into the platform, then have it accessed seamlessly from your applications and your workflows, without cumbersome imports and exports. One of the lovely things about the seismic use case is that you get so many benefits from this. You can actually get much higher performance than reading from traditional block storage off disk, so it's cheaper, faster, better, which is an incredible place to start. It's also very important that when we talk about data, we're not just talking about input data; we use this as a way of writing results back to the platform as well. If you're doing your work, if you're creating an attribute volume, you can write that back to OSDU from within the application. You've not had to do any data management here. This is all integrated into the end user's workflow, and we've got tools for loading data into the platform easily, but otherwise end users access it seamlessly. Their workflows remain simple and intuitive with the tools they've always used, while also getting access to the next generation of tools connected directly to the platform, around automated AI and ML interpretation. So we have seamless integration of seismic data into the platform and into the end-user environment: high performance, lower cost, and of course the data tools that allow you to browse the standard information, the standard record formats, so you can discover, access, and work with the records and the bulk data from a seamless end-user interface, with the spatial view and the ability to look at the full hierarchy. When we load data into the platform, we start from original formats like SEG-Y, process those, and allow you to transform them into ZGY or VDS, index them, catalog them, and calculate all of those complex OSDU records.
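As a small illustration of where those catalog attributes come from: the talk doesn't name the tooling behind the SEG-Y conversion step, but Equinor's open-source segyio library is one way to scrape from a SEG-Y file the basic survey geometry that such records describe. A sketch, under that assumption:

```python
# Illustrative sketch only: the talk doesn't name the tooling, but Equinor's
# open-source segyio is one way to scrape from a SEG-Y file the basic survey
# geometry that the OSDU seismic records end up cataloging.
import segyio

def describe_segy(path: str) -> dict:
    """Pull the geometry attributes an OSDU seismic record would describe."""
    with segyio.open(path) as f:  # expects a regular, sorted 3D SEG-Y
        return {
            "inline_range": (int(f.ilines[0]), int(f.ilines[-1])),
            "crossline_range": (int(f.xlines[0]), int(f.xlines[-1])),
            "sample_count": len(f.samples),
            "sample_interval_ms": float(f.samples[1] - f.samples[0]),
            "trace_count": f.tracecount,
        }

print(describe_segy("survey.segy"))  # the path is a placeholder
```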
I think there are probably a lot of people in the room who have invested in a lot of scripts and code to manufacture these schemas. We can do that in an automated way now, while recognizing that the end consumer doesn't have to be exposed to any of this complexity. They can carry on working in their regular end-user desktop tools. So I think that's what we managed to show with this collaborative demo: the platform as the hub for seismic.

Then I'm going to go through this demo that Equinor made. We'll see; I need to stand in front of here to see what's coming. So here we're going to show searching for and finding seismic in the Seismic DMS and streaming it into the OneSeismic app. This is all Python, and it's pre-made, so it's just a matter of putting in what you need, and then it's ready for you. If the data is there, it will print the results. The other part is how we look for data in the Seismic DMS: when you find what you're looking for, you get the URL. So this is just a simple tool for our data scientists. In the OneSeismic app you can do simple operations like downsampling, just to get a quick view of your data, and you can print time slices, inlines, crosslines, whatever (there's a small sketch of these operations after this section). And this is the actual streaming. The next example here is filtering. You can see there is a ghost in the data, and that can also be shown in an inline and a crossline. The point of building this application was that you don't have to go into a commercial tool to see all the seismic that you have in OSDU. It's quick, as you can see. And it's open-sourced, so it's actually available to all of you here.

Right. What you saw might seem trivial, but I think we did a lot of work in a very short amount of time. There were many details we had to sort out, and to do that we worked as one integrated team. We met every day. You can see our backlog here: a common backlog where all our risks and tasks, everything, was tracked, and we would report progress every day. At the top you can see our demo cases, the use cases we focused our work around, and the feedback we got from our partners was that this is very important: we need to understand what you guys are after. We also met face to face a few times to discuss these use cases and what they meant for everyone, and we had to iterate on them and break them down into smaller, bite-size pieces. So one of the features in one of the use cases was "stream my seismic"; one was "create an attribute"; one was "write it back". Doing it that way meant we got to have a lot of celebrations, which was good: we built trust in the group, and yeah, we had a lot of fun.

But of course we're not going to get our digital subsurface by streaming seismic alone. That's not going to take us where we want to go. We want more. So in March we decided to start something new: sharing seismic interpretation data through the RDDMS. It's not available in ADME yet. So Microsoft, and sorry I didn't mention Microsoft when we came up on stage, they've been the enabler here. Frode, I think you're here. Yeah, there he is. He's also been working with us every day. And this is Eire Keugom, his colleague, who put up this standalone version of the RDDMS for us to start the functional testing.
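Coming back to the OneSeismic quick-look demo above: the client API isn't shown in the talk, so in this sketch fetch_volume() is a hypothetical stand-in that fabricates a synthetic cube, and the sd:// path is purely illustrative. The slicing and decimation are the operations described: quick previews without moving the full volume.

```python
# The OneSeismic client API isn't shown in the talk, so fetch_volume() is a
# hypothetical stand-in that fabricates a synthetic cube here. The slicing and
# decimation below are the quick-look operations described in the demo.
import numpy as np

def fetch_volume(sdms_url: str) -> np.ndarray:
    """Hypothetical stand-in for streaming an (inline, crossline, time) cube."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((251, 301, 501)).astype(np.float32)

volume = fetch_volume("sd://tenant/subproject/cube")  # sd:// path is illustrative

preview = volume[::4, ::4, ::4]        # cheap downsampling for a quick view
inline_section = volume[100, :, :]     # one inline
crossline_section = volume[:, 150, :]  # one crossline
time_slice = volume[:, :, 250]         # one time slice

print(preview.shape, inline_section.shape, crossline_section.shape, time_slice.shape)
```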
And there's a blog post, user instructions, and everything, so you can also get hold of this and start testing. This is also where we onboarded AspenTech. That was very useful: we got the experts in, they were sitting there answering questions, and things just flew. So we set a new goal, which was to share surfaces between these applications before April 1st. Today is April 19th, so you might wonder if we made it, and we did. These are four happy guys from about four weeks ago, when we had just shown live that one application could pick up a surface that another one had liberated, just by changing the data space: no manual transfer, only code. I thought that was super cool, and we had another celebration. So now I'm going to give it to Rob to go through some more of the details of what AspenTech brought into the collaboration.

Thank you, Theresa. For those who don't know me, my name is Rob Bond. I'm a director of product management at AspenTech. I'm proxying today for Elise Chambin, who was acknowledged yesterday as part of the awards ceremony. Elise, along with other colleagues, is responsible within AspenTech for the delivery and open-sourcing of the Reservoir DMS. Before we start the movie, I just want to mention, with regard to seismic, because seismic is my topic, that we take the same approach our colleagues on stage have mentioned: we want to leave the data, particularly when the data is on the cloud, where it is. We don't want to make local copies, and we don't want to transform it into our own formats unless absolutely necessary. So for those who are interested, if you would like to see a demonstration of our streaming, please come and see me afterwards.

Okay, so back to the Reservoir DMS. This is quite a whistle-stop presentation; I'll try to keep up with the movie, though I'm not sure I'll do as well as Tatiana from Telltale did yesterday with hers. What we're going to show is aspects of interactivity with the RDDMS. We're going to use two modeling applications, Aspen RMS and Aspen SKUA. We're going to bring data across from the RDDMS into RMS; write data back to the RDDMS using a Python interface, with tools within Aspen RMS; transform that data, add value to it, and create some new products; try a new modeling scenario; then interrogate that data using the open-source command line, the openETP client that we have open-sourced to the community along with our RDDMS; and finally bring data into Aspen SKUA from that RDDMS instance.

Okay, let's see if I can keep up with the movie. You'll see why I did the description up front: I've got about five seconds to describe this workflow. The workflow I've described is what we're going to follow. We're starting in RMS. We launch a Python job, a Python script, to bring data from an RDDMS data space into RMS. In this particular case we're bringing in interpretations, horizons, faults, and well data, directly from the RDDMS. We've copied the script across, run it, and brought that data across. We're focusing on the interpretation here, but we can also bring across 3D grids, modeling grids, polylines, et cetera. We're viewing that data now in RMS: we can see the fault sticks, the wells, the 2D grids, the horizons.
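The workflow narrated here and in what follows is a data-space round trip: read from a data space, create a new one for the scenario, push RESQML objects back over ETP, and count what landed. No client library is shown in the video, so the sketch below uses a purely hypothetical in-memory stand-in whose only purpose is to fix the shape of that workflow; it is not any real ETP API.

```python
# EtpDataspaceClient is a purely hypothetical in-memory stand-in that mirrors
# the shape of the narrated RDDMS round trip; it is not a real ETP library.
from dataclasses import dataclass, field

@dataclass
class EtpDataspaceClient:
    """Hypothetical stand-in for an ETP session against an RDDMS."""
    url: str
    _spaces: dict = field(default_factory=dict)

    def create_dataspace(self, name: str) -> None:
        self._spaces.setdefault(name, [])

    def put_objects(self, name: str, objects: list) -> None:
        self._spaces[name].extend(objects)

    def count_objects(self, name: str) -> int:
        return len(self._spaces[name])

client = EtpDataspaceClient("wss://<rddms-host>/etp")  # placeholder endpoint

# New scenario -> new data space (like "Emerald 2" in the demo).
client.create_dataspace("demo/Emerald2")

# Push interpretation data (horizons, faults, grids) as RESQML object types.
client.put_objects("demo/Emerald2",
                   ["HorizonInterpretation", "FaultInterpretation", "Grid2dRepresentation"])

# Interrogate the data space, as the openETP command line does in the demo.
print(client.count_objects("demo/Emerald2"), "objects")
```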
Now what we're going to do is emulate a new scenario. We're creating a new data space in the RDDMS, because we're going to modify that data, create new data, and augment what we had in the previous data space. Again, we use a Python interface to push the data we've loaded back into the RDDMS. Now we're showing the openETP client, and from the command line we'll interrogate what's in this new data space and take a look at the data types we've pushed across. You can see the new data space we've created, Emerald 2. We'll take a look at what's in there. This will just flash up on screen very quickly, but I can tell you that all the modeling data types you would expect are there; the key number is 660 objects in that data space. Now, within RMS, we're just going to do something very simple. I'm going to create some new horizons, gridding up some well tops constrained by existing interpretations: new data that we want to push back to that new data space in the RDDMS. We use the Python tool again to push back just the data we've changed. I should mention that all of this is being done through the Energistics Transfer Protocol, and it's all based on the RESQML open standard coming from Energistics, both in the transfer protocol and in the storage model that lies within the RDDMS. We've just run the export, so we go back to our command line and interrogate the data space again, and we'll see it's grown: we've got more objects than before. But now we're into another modeling application, Aspen SKUA. We're searching for the various ETP tools we have within SKUA. We create a new session and log into the data space I mentioned before. We don't need the command line here; we select from the user interface, go to that particular data space, click OK, and bring all the interpretation data across from it. So now we have transferred data between two modeling applications through the RDDMS, using the RESQML and ETP protocols. As I said, I'm proxying for Elise today, but if you have any further questions, you can either approach me afterwards or contact Elise directly. Thank you.

All right. On this slide I just want to say a little about our main learning from this collaboration. Being open and working together in a collaborative manner: I think that is what's going to take us to the next step and accelerate the implementation of OSDU, and what we're presenting today represents part of the cultural change that we need to push through. We are now scoping the next phases of this kind of work in Equinor, and we shouldn't forget our main objective here, which is to make all our subsurface guys smile like this one. Thank you.