Thank you very much. This is Kadri, and I have with me today Lucy from our enterprise architecture team, and there is a group of Microsoft people here that you can reach out to and discuss with. I want to change gears a little bit and talk about how we can implement this interoperability. You have seen the interoperability applications themselves, and I think there are a couple of things we need to do to enable that interoperability at scale. That is what I am going to talk about, with a couple of examples of how it works. I would see this presentation as a good discussion starter, and then we can continue the conversation within the OSDU community on how we can enable it.

The first point is that the OSDU data platform is really extensible. As you might have seen during today's presentations, whether you look at the foundational platform or the data platform itself, the way we liberate data, make it high quality, and make it accessible is very mature, and we are ready to deliver on that front. What comes next is a couple of things. One is interoperability, which is what we are going to talk about today: enabling different applications to read and write the same data that sits in OSDU. Probably even more important is something we have been asked about over and over again: enabling workflows that include multiple applications, different data types, and perhaps tools in between, to support the work that you do in the OSDU environment.
As you have seen in the last two sessions and will see in others, the OSDU platform is extensible enough to enable those use cases, but what we need to do is automate this and provide the right tools and frameworks within the OSDU community. The other important thing is that we all need to look into this together. The OSDU Forum alone will not be able to scale all of this extensibility. So what we should probably do is define the standards for how to build these extensions, and how to certify and validate them. Then we can work with the whole community here, and even with people outside this community, to build extensions that will work with the platform, driven by well-defined and verifiable extensibility. There was a conversation in this morning's workshop around exactly that: how can we build those extensions, and how can we verify them? That was one of the things we were discussing and one of the outcomes of that part of the planning, so I will talk about that a little; it is going to be important. Lastly, I am going to talk about different modes of extensibility. I will give an example from the world of building drivers for operating systems. Following a similar model, we might have different modes of extensibility, which is also a thought-provoking idea for how we can position these different extensibility points.
Let me start with that idea. In the operating system world, we used to have user mode and kernel mode for device drivers and sometimes applications. The main difference is that in kernel mode, the code runs directly as part of the Windows (or other) operating system processes. In some cases you have to build device drivers or other extensions this way because they need to perform. The caveat, of course, is that if something goes wrong, it can bring down the whole operating system: you see the kernel panic messages, or the blue screens. The other option is to build the extension in user mode. In user mode, if the process dies, it dies, and it does not affect the OS. The caveat there is that if you need high-performance extensibility, user mode might not deliver that performance.

Following the same model, we can look at the OSDU physical architecture. There are external APIs for OSDU against which you can build applications, extensions, and SDKs. That is the equivalent of user mode: it is an external API, and you can easily build against it.
It is very easy to test, and it really does not affect the system. Then there is the other kind, shown with the red box, which runs inside OSDU itself. That is the equivalent of kernel mode. In some cases you have to do it that way, and there are some of these that we have done. Let me show an example: the Seismic DDMS. There was a good video in the previous session, streaming seismic data into Petrel directly from OSDU. That is a good example of a kernel-mode, OSDU-integrated DDMS: it uses the Seismic DDMS as the back end, and it has to be part of OSDU itself, because if you ran it outside of OSDU it might not perform. The Reservoir DDMS was another one, and there are other OSDU-integrated extensions that we will need to develop over time, mainly for performance, but sometimes for certain functions and features. As I said, this kind of extension is more critical: if you build it as part of OSDU, it can bring down the whole system, so these extensions need to be more thoroughly validated and certified.

The second example I want to give is the Power BI connector. This is a good example of a user-mode extension, one that runs outside, because at the end of the day the back end uses the query API: the query API goes to OSDU, queries the data, gets it back, and presents it in a Power BI dashboard.
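The "user mode" pattern behind the Power BI connector can be sketched as a plain external client calling the OSDU Search (query) API over HTTPS. The sketch below builds such a request; the endpoint path and payload shape follow the OSDU Search service, while the host name, partition id, token, and record kind are placeholder assumptions, not values from the talk.

```python
# Sketch of an external ("user mode") client calling the OSDU Search API.
# Host, partition id, token, and kind below are hypothetical placeholders.
import json
import urllib.request

def build_search_request(host, partition_id, token, kind, query, limit=10):
    """Build the HTTP request a dashboard-style client would POST."""
    payload = {
        "kind": kind,    # e.g. "osdu:wks:master-data--Wellbore:1.*.*.*"
        "query": query,  # Lucene-style query string
        "limit": limit,
    }
    return urllib.request.Request(
        url=f"{host}/api/search/v2/query",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "data-partition-id": partition_id,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_search_request(
    "https://contoso.energy.azure.com",  # hypothetical ADME endpoint
    "opendes",
    "ACCESS_TOKEN",
    kind="osdu:wks:master-data--Wellbore:1.*.*.*",
    query="data.FacilityName:BIR*",
)
print(req.full_url)
```

A connector would then send this request and flatten the returned results array into rows for the dashboard; because everything goes through the external API, a failure here cannot take the platform down.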
That is a good example of what we have today, and this kind of extensibility is probably easier, because it just uses the external APIs and there is no risk of bringing down the whole OSDU infrastructure.

Let me talk a little bit about the extensibility enablers. If you look at OSDU, with Azure Data Manager for Energy in between, there are certain touch points where we read and write data from OSDU. At the top there is the SDK: our colleagues at Equinor have developed an SDK for Python, which sits on GitHub where you can find it. That is a great example, and within the OSDU community we are building a similar SDK for reading and writing data directly from OSDU. There could be other SDKs as well. The ball on the left is the Power BI connector, which can be considered an SDK too: with Power BI connectivity you can write M-language extensions to read data from OSDU, and others might be developed.

At the bottom there is extensibility and customization. We could potentially consider an SDK for building the kernel-mode extensions that run within OSDU itself, or SDKs for data governance, where you could consider plugins for enrichment, data quality, or AI-based document attribute extraction. Within OSDU we also have the Notification service: with the Notification service you can catch when a certain new data type is ingested and then go and process it.
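The Notification-driven pattern just mentioned, reacting when a new data type is ingested, can be sketched as a small handler that filters the record-change messages pushed to a subscriber and picks out the records a plugin should process. The message fields (`id`, `kind`, `op`) and the target kind below are illustrative assumptions about the message shape, and the enrichment hook itself is hypothetical.

```python
# Sketch of a subscriber-side filter for OSDU Notification messages.
# The message fields and the target kind are illustrative assumptions.
TARGET_KIND = "osdu:wks:work-product-component--Document:1.0.0"

def records_to_enrich(messages):
    """Pick out newly created records of the kind this plugin handles."""
    return [
        m["id"]
        for m in messages
        if m.get("kind") == TARGET_KIND and m.get("op") == "create"
    ]

# Example record-changed batch as a subscriber endpoint might receive it:
batch = [
    {"id": "opendes:doc:001", "kind": TARGET_KIND, "op": "create"},
    {"id": "opendes:doc:002", "kind": TARGET_KIND, "op": "update"},
    {"id": "opendes:well:001",
     "kind": "osdu:wks:master-data--Well:1.0.0", "op": "create"},
]
print(records_to_enrich(batch))  # → ['opendes:doc:001']
```

An enrichment or data-quality plugin would then fetch each returned record id through the Storage API and write its results back as regular records.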
That is another point we are looking at. The other one, on the right, which I think we were discussing this morning, is around how we do analytics with the platform. There are certain questions with OSDU that we do not have an answer for yet. One is: how do you take the data sitting in OSDU and expose it without copying it, so that you can run AI, machine learning, or large language models on it? And with that SDK, you still have to honor the entitlements and policy obligations; you should not just copy data out so that people can read and write around them.

Lastly, to be able to drive all of this, we should be looking into certification or validation for interoperability. Again, one of the things we were discussing earlier this week was: how do we test these? How do we test an application, or an extension? There are some examples here. The one on the left-hand side is from the Microsoft Azure Marketplace policy: you have a policy engine, and there are certain extensions, for SharePoint, for Power BI, and so on. You build the policy; there are some basic checks, like whether the extension has a name or a logo, and then there are application-specific validation and test routines as part of the policy engine. So you can put the extension in a VM, run some security and cybersecurity testing on it, and come up with results.
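The policy-engine idea above, a set of basic checks plus extension-specific ones, can be sketched as a list of named checks run against an extension's manifest. The check names and manifest fields below are purely illustrative, not any real marketplace or OSDU schema.

```python
# Sketch of a minimal policy engine for validating extension manifests.
# Check names and manifest fields are illustrative assumptions.
def check_has_name(m):
    return bool(m.get("name"))

def check_has_logo(m):
    return bool(m.get("logo"))

def check_declares_apis(m):
    # A verifiable extension should state which OSDU APIs it calls.
    return bool(m.get("osduApis"))

POLICY = [
    ("has-name", check_has_name),
    ("has-logo", check_has_logo),
    ("declares-apis", check_declares_apis),
]

def validate(manifest):
    """Return the names of all failed policy checks."""
    return [name for name, check in POLICY if not check(manifest)]

manifest = {"name": "Sample Connector", "osduApis": ["search", "storage"]}
print(validate(manifest))  # → ['has-logo']
```

A real pipeline would layer the heavier, extension-specific routines (deploying to a VM, security scans) behind these cheap structural checks, so submissions fail fast on the basics.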
The other examples you see here are testing Petrel plug-ins, and there is also the Windows Hardware Lab Kit, where you can run a driver and test it. But to be able to do this kind of thing, you have to have a reference implementation that can be discussed, and on that reference implementation you need test data, and with that test data you should be able to run test scripts. As I said, the external, user-mode testing is relatively easy, but the kernel-mode testing is probably more complicated.

With that, I want to give the floor to my colleague Lucy, who has just completed a new kernel-mode extensibility exercise.

Good afternoon, everyone. My name is Lucy Liu. I am a principal software engineer in the Microsoft Cloud for Industry energy division, and I am very happy to present a real-world example showcasing the interoperability of ADME and external customer applications. Since this work was completed very recently, we did not have the opportunity to upload any slides for this presentation, so I will just talk through some highlights of the work.

First of all, what is OpenWorks?
As many of you may already know, OpenWorks is an important component of the Halliburton Landmark DecisionSpace software suite, and it supports complex, cross-domain workflow requirements. Recently, Halliburton collaborated with Microsoft and successfully tested the interoperability between OpenWorks and Microsoft ADME. The test was done on a customized version of ADME in public preview.

The high-level workflow of the interaction between ADME and OpenWorks is as follows. The business data of the various domain entities stays within the OpenWorks data management system. The metadata is crawled into the ADME instance to enable easy discovery of the business data using the standard, unified OSDU APIs. URL retrieval instructions for the business data can then be generated by the Dataset service, and those retrieval instructions can be used by any client app to retrieve the business data from the OpenWorks data management system. That completes the whole workflow between ADME and OpenWorks, enabled through the interoperability and extensibility of ADME. That wraps up this working example of interoperability. Thank you.

And with that, I think we want to open up for questions if we have time. In the interest of time, we should probably let the other two presenters proceed, so for any questions, please go to Kadri and Lucy. We are around, and there will be further discussions. Thanks.
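The OpenWorks workflow Lucy described (discover the crawled metadata through the OSDU APIs, ask the Dataset service for retrieval instructions, then fetch the business data) can be sketched as the three calls an external client would make. The endpoint paths below are assumptions based on the OSDU Search and Dataset services; the host and record id are hypothetical placeholders.

```python
# Sketch of the three-step OpenWorks/ADME client workflow.
# Endpoint paths are assumptions; host and record id are placeholders.
def search_url(host):
    # Step 1: discover crawled metadata via the Search API.
    return f"{host}/api/search/v2/query"

def retrieval_instructions_url(host, record_id):
    # Step 2: ask the Dataset service to generate retrieval
    # instructions for the dataset record found in step 1.
    return f"{host}/api/dataset/v1/retrievalInstructions?id={record_id}"

host = "https://contoso.energy.azure.com"  # hypothetical ADME instance
record_id = "opendes:dataset--File.Generic:ow-log-001"  # hypothetical id

print(search_url(host))
print(retrieval_instructions_url(host, record_id))
# Step 3 would issue a GET against the signed URL contained in the
# returned instructions, pulling the business data from the
# OpenWorks data management system.
```

The point of the pattern is that the heavy business data never leaves OpenWorks until a client actually asks for it; ADME holds only the metadata needed for discovery.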