I'm Jean-François Rainaud, CEO of GeoSiris. Today's presentation will explain how we are on the way to building an OSDU/RESQML shared earth modeling workflow platform, working with RESQML and OSDU together. We are a small, innovative company, and maybe not all of you know what GeoSiris is. We founded this company a few years ago, myself coming from IFP Energies Nouvelles, together with other PhDs and people from the Ecole des Mines de Paris. We have worked on web architecture, cloud, AI, knowledge management, geometry and topology. What we want to do is facilitate a shared earth modeling approach across several companies. We intend to promote the combination of cloud-available microservices, orchestrated and choreographed by a cloud-native platform, which is really the direction in which OSDU wants to go; in fact, we wrote a book on that subject around ten years ago. The domain we work in is mainly the RESQML domain, which is the reservoir modeling domain for OSDU, and we can work on different entities, from interpretation to structural framework, reservoir characterization, basin modeling and reservoir dynamic simulation. Beyond subsurface exploration, we also work with geotechnical companies on roads, railways, tunnels, etc. We could apply this to carbon capture, to fluid-flow simulation in rock volumes, which we did in the past, and to geological risk assessment and the quantification of environmental impact. So what we are really doing: we work on data-handling tools, and we have tools around RESQML able to manipulate and validate the information; we have APIs and we carry out development services. We facilitate the use of the Energistics standards, as is done in the OSDU Forum, and also of the BPMN exchange standard.
As a plus, we have a geomodeling business activity. This activity mainly consists of creating implicit or explicit 2D and 3D object representations, from seismic for example, or from other interpretations. It also handles the transformations induced by geological evolution through time, which we can then formulate in a workflow. But what are we doing in the geomodeling business and in the OSDU Forum? As our historical domain is earth modeling, and because we were working with Energistics, we were onboarded into the OSDU Forum in early 2020. As our objective is to federate the small vendors and build a shared earth modeling workflow platform, which could also be open, using this standard, we actively participate in several Forum workgroups: reservoir data definition, data loading and execution, core concepts, the Reservoir DDMS, and a little in the enterprise architecture design on the side, for the time being. We did that because we were among the earliest contributors to the reservoir domain in OSDU. We started working on that part with Emerson, with ExxonMobil and others, and with TotalEnergies; we are among the first working in this domain. We did that to accumulate the necessary expertise to develop a product on this basis, and in the end we would like to propose an innovative workflow for geomodeling which could be shared by many companies. The first result of our work is our contribution to the data definition workgroup. We worked to ensure complementarity between RESQML v2.2, which is not yet released but will be issued by the OSDU Forum, part of The Open Group these days, and OSDU 1.1. We are sharing the same metadata design at a high level: RESQML features are mainly master data in OSDU.
We have RESQML interpretations, representations, properties and activities, which are work product components in OSDU. We have RESQML relationships, which are translated into kind/id reference links in OSDU, and RESQML enumerations, property kinds, property classes and units of measure, which are part of the reference data. So all these parts can be considered the metadata part of RESQML. But RESQML is completed by the detailed representation of the numerical values, which is handled for the time being by HDF5 files, but could later be handled in OSDU, in a Postgres database as is done in the Emerson RDDMS, or in an HSDS server used as an external data store. We handle the topology in the representation entities, and we handle the geometry and the properties, so we are able to capture the result of a key period of the geomodeling lifecycle and share it between applications. So what are we doing now? We start from geomodeling applications. We export the information into a RESQML ETP server. We use an ETP server collection with a consistent set of RESQML entities to be able to ingest all this information. We get the XML instances from the ETP server collection and store them, through the dataset service, into datasets. We have now created the RESQML-to-OSDU converter, which is able to create master data, work products, work product components, datasets, and so on. All of this is then ingested, because we do this ingestion with the manifest-by-reference workflow, and we also pass validation and integrity checks. In summary, we have this extraction showing the main manifest structure, in which you have the kind system, the reference data, the master data, the work product, the work product components and the datasets.
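The manifest structure described above can be sketched in code. This is a minimal, hedged illustration of how such a manifest-by-reference payload might group reference data, master data, work product, work product components and datasets; the kind names follow the `osdu:wks:*` pattern, but the exact type names, versions and IDs here are illustrative assumptions, not the talk's actual schemas.

```python
import json

# Hedged sketch of an OSDU manifest grouping the categories described above.
# All kind/id values below are illustrative assumptions.
manifest = {
    "kind": "osdu:wks:Manifest:1.0.0",
    "ReferenceData": [],  # RESQML enums: property kinds, units of measure, ...
    "MasterData": [{      # e.g. a RESQML feature mapped to master data
        "id": "partition:master-data--GeologicalFeature:example-uuid",
        "kind": "osdu:wks:master-data--GeologicalFeature:1.0.0",
        "data": {"FeatureName": "Top reservoir boundary"},
    }],
    "Data": {
        "WorkProduct": {
            "kind": "osdu:wks:work-product--WorkProduct:1.0.0",
            "data": {"Name": "RESQML collection export"},
        },
        "WorkProductComponents": [{  # e.g. a RESQML interpretation
            "kind": "osdu:wks:work-product-component--HorizonInterpretation:1.0.0",
            "data": {"Name": "Top reservoir interpretation"},
        }],
        "Datasets": [{               # pointer to the physical payload
            "kind": "osdu:wks:dataset--File.Generic:1.0.0",
            "data": {"DatasetProperties": {
                "FileSourceInfo": {"FileSource": "meta.xml"}}},
        }],
    },
}

print(json.dumps(manifest, indent=2)[:60])
```

The point of the sketch is only the grouping: one manifest carries all the record categories so a single ingestion run can register them together.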
So you can see that we are not just talking but really working in depth, executing everything that needs to be done by the system in OSDU. I will now show you the different manifests produced from a RESQML collection. I will go very quickly, just to show you how it works; you can come back afterwards if you want, have a look, and check whether everything is fine for you. For example, you can look at the master data and see here the relationship between this master data and the UUID in RESQML. Any OSDU object and its RESQML object share the same ID and can be checked and exchanged together. They also carry an identical version number in order to keep a complete relationship between them. Then we have information about the dataset to which we attach this master data. What is interesting to look at is the result of the collection, which is transported into a work product component. You have, for example, the list of what we have in RESQML and the same view in OSDU: the local boundary feature is the same thing. We can have a horizon interpretation, and it is very interesting because all the metadata worth capturing is stored here by OSDU. The same for the fault interpretation, and you have a way to represent information about the generic interpretation, and you have all the information about the files that you can collect at the end. [Audience question, partly inaudible, about OSDU permissions and tags.] Yes, that's right. Sorry, thank you. I can come back to all of that afterwards if you want; we can explain it in detail later. So here is the dataset, and the interpretation ID, for example, is the link from the representation to the interpretation.
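The ID parity just described, where the RESQML object UUID travels verbatim inside the OSDU record ID so the two stores can be cross-checked, can be sketched as below. The `partition:type:uuid` pattern is the usual OSDU record-ID convention; the type name used here is an illustrative assumption.

```python
# Hedged sketch: embed the RESQML UUID in the OSDU record id, and recover it.

def resqml_to_osdu_id(partition: str, osdu_type: str, resqml_uuid: str) -> str:
    """Build an OSDU record id that carries the RESQML UUID verbatim."""
    return f"{partition}:{osdu_type}:{resqml_uuid}"

def osdu_id_to_resqml_uuid(osdu_id: str) -> str:
    """Recover the RESQML UUID from the last segment of the OSDU record id."""
    return osdu_id.rsplit(":", 1)[-1]

uuid = "a1b2c3d4-0000-0000-0000-000000000001"
record_id = resqml_to_osdu_id(
    "opendes", "work-product-component--HorizonInterpretation", uuid)
assert osdu_id_to_resqml_uuid(record_id) == uuid  # round trip holds
print(record_id)
```

Keeping the version number identical on both sides (not shown) plays the same role for versions that the shared UUID plays for identity.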
And the indexable element count gives you the number of elements in this representation. For the dataset, we use the standard dataset--File.Generic kind, and we have either the link to the whole EPC container with its HDF5 file, or the link to the XML file containing the metadata of the RESQML object. This is ready to be delivered afterwards. When we run our scripts we pass the validation step, in which everything we did was successfully validated, and we also have an integrity test, in which the integrity checks on what went into the system are done. So what exactly are we doing? These are proofs of concept and implementations we have started to work on. Our short-term technical objective is to redo what you have seen in this demonstration in several ingestion contexts. The first is RESQML EPC plus HDF5 coming directly from the application and ingested into the platform. The second is to go through a v2.2 ETP server used as an external data store, and then to complete this with search and delivery from the data platform using the ETP APIs. Our mid- and long-term technical objective is to connect the OSDU data platform to all components of an earth modeling workflow, and the overall objective is to develop partnerships to strengthen the solution with other people working on OSDU. So what you see here: we have a geomodeling application which sends the EPC plus HDF5 in RESQML 2.2. We can also create a collection, enrich it with properties, create the records in the dataset and the complete manifest, and put that into the platform through ingestion by Airflow. That is for applications existing now. If we have a RESQML ETP server, which is like an RDDMS in fact, we take a collection, our converter is invoked, and it sends the manifest to ingestion by Airflow. That is the demo you have just seen.
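The dataset record linking to the physical RESQML payload (the EPC container plus its HDF5 arrays) might look like the sketch below. The field names follow the general shape of an OSDU File.Generic record, but the exact schema version, paths, and the `.epc`/`.h5` pairing convention are assumptions for illustration.

```python
# Hedged sketch of a dataset--File.Generic record pointing at the RESQML
# payload. Kind version and file paths are illustrative assumptions.
dataset_record = {
    "kind": "osdu:wks:dataset--File.Generic:1.0.0",
    "data": {
        "DatasetProperties": {
            "FileSourceInfo": {
                "FileSource": "/osdu/model.epc",  # whole EPC container
                "Name": "model.epc",
            }
        },
        "Description": "RESQML 2.2 EPC with companion HDF5 arrays",
    },
}

# Conventional pairing of the EPC with its HDF5 companion (an assumption):
file_source = dataset_record["data"]["DatasetProperties"]["FileSourceInfo"]["FileSource"]
companion_hdf5 = file_source.replace(".epc", ".h5")
print(companion_hdf5)
```

Whether the numerical arrays live next to the EPC, in a Postgres-backed RDDMS, or behind an HSDS/ETP endpoint, the metadata record above stays the same; only the `FileSource` target changes.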
And now, if we want to go back, we have search and delivery from the OSDU platform. This selects a certain number of work product component elements, and we will have an OSDU-to-RESQML converter, which simply creates a collection. As everything is already known, with its address in the RESQML server, it will be delivered to the application. So the end of the story is that we want to experiment with the progressive replacement of the native operations. We take this idea from the OSDU Forum and work on the exchanged files, but not only that, because we also work on the APIs, etc. We have developed a RESQML management system based on RESQML, a system able to provide some services to orchestrate the workflow, and some user interfaces; in the end, that will be the shared earth modeling workflow platform. As a conclusion, we are working to federate shared earth modeling activity by exchanging OSDU entities between these activities. The objective afterwards will be to aggregate small-vendor activities to compete against monolithic proprietary workflows through a combination of orchestrated microservices in a cloud-native workflow. When we started with OSDU it was fine, because we saw it as a game changer and we started to participate. It was a very good opportunity because we contributed to ensuring the complementarity between RESQML and OSDU, which is important for us. OSDU is a broader, safer and more universal cloud strategy, in a larger ecosystem than we had before, and it brings opportunities to collaborate with partners and customers. Let me take a breath: for more detail on our future workflow platform, we will present something more detailed at the EAGE in Madrid, a cloud-native, standards-based geoscientific workflow architecture for improving geomodeler collaboration. I thank you very much for listening to me to the end.
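The search-and-delivery round trip described above, selecting work product components and turning them back into a RESQML collection for the ETP server to deliver, can be sketched as follows. `search_records` is a stub standing in for a real OSDU Search API call; all record shapes and UUIDs here are invented for illustration.

```python
# Hedged sketch of the OSDU-to-RESQML round trip: query for work product
# components, then keep only the RESQML UUIDs for the ETP collection.

def search_records(kind_filter: str) -> list:
    """Stub for the OSDU Search service; returns canned records."""
    return [
        {"id": f"opendes:{kind_filter}:uuid-{i}",
         "kind": f"osdu:wks:{kind_filter}:1.0.0"}
        for i in range(3)
    ]

def build_etp_collection(kind_filter: str) -> list:
    """Select WPC records and extract the RESQML UUIDs they carry."""
    records = search_records(kind_filter)
    # Because OSDU ids embed the RESQML UUID, the last segment is enough
    # for the ETP server to resolve and deliver each object.
    return [r["id"].rsplit(":", 1)[-1] for r in records]

collection = build_etp_collection(
    "work-product-component--HorizonInterpretation")
print(collection)
```

This is why the shared UUID matters: once the search results are reduced to UUIDs, no further conversion is needed on the ETP side, since every object is already known under that address in the RESQML server.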
And I hope somebody will be interested in following up on what we are doing and what we will present at next year's annual show.