So, Nikki Chang. I'm with Chevron, and I'm on our subsurface digital platform, in our data insights product line.

And I'm Jenny LaRue. I work on the same platform as a product owner, and of course in the Data Definitions section as the petrophysics lead.

So we're going to talk about some specific use cases today and how we've solved some of our problems at Chevron. We'll give a quick reminder on some past examples we've shared at the forum, and then Jenny's going to talk about some more specifics within our rock and fluids data types.

We've taken a prioritized, workflow-driven approach to our implementation. You've heard the common themes across the different presentations today: accessibility of data, so that we can improve decision making, is really the driver in a lot of this space. The example on the left is where we have been in the past: data all over the place, messy, hard to get at, hard to know what to trust or what to use for your different solutions. Where we'd like to be, on the right, is a much more streamlined, simplified view of all of our data.

Okay, so just a quick refresher on some presentations we've done at past forums. Irina Prestwood talked about our PVT solution. For fluid analysis, we had a use case where an internally developed Chevron app needed access to PVT data, and we didn't have a system of record for PVT. So this was one of our early use cases for OSDU, and one of the first OSDU-aware data products we had created at Chevron. Irina presented that in November of '21 if you're interested in more details; I'm sure you can find it online.

Then at the last forum, I talked about another use case: decommissioning a legacy system called our Well Production History System. It was a very complex system that had been in place at Chevron since the late '90s. It was our well header system of record, and it encompassed all kinds of data types. It's legacy code, and we have limited OC, organizational capability, left at the company that understands the solution and the code. It was also on-prem, and we are of course moving everything into the cloud. We've had multiple attempts at decommissioning this application, and OSDU will finally allow us to do it. We're not quite there; the slide says April, but we've had to push that a little due to some last-minute hurdles. We will hopefully decommission it in the May-June timeframe, and we'll all celebrate when that time comes.

Okay, and then I'll let Jenny talk more on rock and fluids.

Thanks, Nikki. The first thing I'll go over is the data manager. With all of the data that's in OSDU, we needed a way for people to access it, a way to make it a little more discoverable. The data manager allows for that discoverability, but it also lets users ingest data and create the data collection without having to write code; some folks within our company have had to write back into it as well. So that's what we've done for the data that's currently inside of OSDU (a sketch of the kind of search query this discoverability sits on top of follows below).

But the hard part, for a lot of us (I'm a geologist myself, like one of the previous presenters), is what to do with all the data that has just been stored on O: drives, or in random files in Bob's office, when Bob retired 15 years ago.
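For reference, here is a minimal sketch of the kind of query OSDU's standard Search API supports, which is what a data-manager-style discoverability layer sits on top of. The host, token, data-partition ID, record kind, and wellbore ID below are illustrative assumptions, not Chevron's actual values:

```python
import requests

# Illustrative query against OSDU's Search API. Host, token, partition,
# kind, and wellbore ID are placeholders, not real deployment values.
OSDU_HOST = "https://osdu.example.com"
HEADERS = {
    "Authorization": "Bearer <access-token>",   # from your identity provider
    "data-partition-id": "opendes",             # example partition
    "Content-Type": "application/json",
}

query = {
    # Scope the search to a record kind (schema); version wildcards are allowed.
    "kind": "osdu:wks:work-product-component--RockSampleAnalysis:1.*.*",
    # Filter on an indexed attribute, here a hypothetical wellbore reference.
    "query": 'data.WellboreID:"opendes:master-data--Wellbore:1234"',
    "limit": 10,
}

resp = requests.post(f"{OSDU_HOST}/api/search/v2/query",
                     headers=HEADERS, json=query)
resp.raise_for_status()
for record in resp.json().get("results", []):
    print(record["id"], record["kind"])
```

The point of the data manager is that end users get this discoverability without writing calls like the one above themselves.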
So we needed a place to get that data prepared. What we did is build a core analysis area, a Power BI report for rock and fluids, that makes it easily accessible, and we have a way to search for a particular file, kind of like a work product component. If you're looking for a particular file within the business unit, or you're drilling a new well and the data hasn't been parsed into OSDU yet, or it's just not in an easily accessible location, you can come in here and look it up by business unit, asset, wellbore name, material, study, however you'd like. That way you can find the original document and still use it while you're waiting for everything to come to fruition, right? We keep talking over and over with everybody about how long this is taking, but at the same time we're moving as fast as we can, and faster than I think a lot of people expected. On a day-to-day basis, if I'm drilling a well, I need that data now, not next week, right? So this lets people go ahead and get it. In the future, once we're able to get everything up and running, geologists will be able to go straight to OSDU, or straight into their individual Petrel, and look at that data without having to go somewhere else to download it, or download it from an unknown source.

So what do we do while we're trying to get everything ready, while we're trying to get this ingestion acceleration going? Right now we have challenges: old PDFs, old Excel files we're finding in various places, scans, et cetera. I do want to mention that all of the data shown here is fake; we do not have a grain density of 3.11, for those of you in the room to whom that might look a little weird. When we find core analysis data, we need SMEs, business unit people, to help us identify what that data is. We need to make sure we understand whether this is, say, RCA data or special core analysis data. As they find it (and I'm going to jump from the first part of the slide all the way down to the bottom, where it says CoreConnect), we have a way for people to tag that metadata so it can start going into the core analysis file area you were seeing. They'll tag it with things like the study and the material type, whether it's a whole core or a rotary sidewall core, et cetera.

But then how do we get the data out of that PDF or Excel file when we decide to start parsing it? We've started using Microsoft's Azure Form Recognizer for that. It has definitely reduced our cycle time for getting data parsed out. Like many people, we started off with a lot of Python code, trying to figure out how to handle all these old templates, the 1950s formats versus the '60s formats; I'm sure just about everybody in here has all different types of formats you're trying to deal with. This has really reduced that cycle time. The one caveat is that, as part of the pre-processing, you have to convert your Excel files to a PDF or a PNG (one way to do that conversion is sketched below).

So what do you do in Form Recognizer? You want to have five examples of whatever you're looking at. So if it's RCA, routine core analysis, you want to have five of those templates, and you're going to go in and label what you're seeing.
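The speakers don't say how they do the Excel-to-PDF conversion; one common approach, sketched here under the assumption that LibreOffice is installed and its soffice binary is on PATH, is to shell out to its headless converter:

```python
import subprocess
from pathlib import Path

def excel_to_pdf(xlsx_path: str, out_dir: str = "converted") -> Path:
    """Convert an Excel workbook to PDF so Form Recognizer can ingest it.

    Assumes LibreOffice's `soffice` is on PATH; this is one common
    approach, not necessarily the one used at Chevron.
    """
    Path(out_dir).mkdir(exist_ok=True)
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "pdf",
         xlsx_path, "--outdir", out_dir],
        check=True,
    )
    return Path(out_dir) / (Path(xlsx_path).stem + ".pdf")

# Usage (hypothetical file name):
# pdf = excel_to_pdf("rca_1957_template.xlsx")
```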
So in this one here, you're looking at some permeability, some porosity, some grain density, and you're going in and labeling it. It seems pretty simple, right? Even the SMEs who are identifying the data can do this, which takes all that work off the development folks, so they can be developing things that maybe feel a little more important to them than labeling porosity and permeability. I know we geologists think that's really important. We know it's important. And recently they released the auto-label feature, which is a great help to those of us with wrist issues: we can auto-label the permeability, the core depth, et cetera, and then click the Continue button.

Once you have everything labeled, so you see all the green boxes and a couple of red boxes, in the upper right-hand corner there's a button that says Train. A "train a new model" dialog pops up, and you type in a model ID like "RCA" and choose "neural" as the build mode. That way, when you do have a slightly different variation of the form, it might still recognize it. Sometimes it does; actually, most of the time it does. Every once in a while it doesn't, and I'll get to that in just a second.

When you hit Train, it comes back and you see the results here, and when you click the Analyze button you'll see, in the first row here, where it says vendor number one, dry method number one, a reliability score for how confident it is that it's right. Most of the time we find that it is right, because in the lower right-hand corner, where it looks like there's some wonderful code, you'll see a word that says "confidence." That confidence is 0.993. We find that anything around 0.8 and above seems to work (a sketch of that confidence check follows at the end of this section). Now, you do need, like I said, at least five fairly close templates; we like to do about nine. But this takes maybe 15 minutes for somebody who's been doing it for a little while. So an NMR SME can come in, do an NMR template, get to this point, and see a confidence. We don't know yet whether we'll have that person do the full parsing, but they can go ahead and see the confidence and figure out whether it's working, or whether they have to add more examples to get that confidence up.

So we're working on getting our data in faster, so we can get it into our database, so that when we're ready, with our schemas in that petrophysics section, and when the Rock and Fluids DDMS comes out, we're ready to start putting that data in. So thank you very much. I do also want to recognize someone we didn't mention at the beginning: Andre Mosley. You've already heard his name once today. He has greatly helped all of us out in this process.
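The speakers describe training and analyzing in the Form Recognizer Studio UI; calling the resulting custom model from code and applying their ~0.8 confidence cutoff could look like this sketch, using the azure-ai-formrecognizer Python SDK. The endpoint, key, model ID "RCA", and file name are placeholders, and the review routing is an assumed workflow, not Chevron's confirmed process:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Placeholders: real values come from your Azure resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
KEY = "<your-key>"
CONFIDENCE_CUTOFF = 0.8  # the ~0.8 threshold mentioned in the talk

client = DocumentAnalysisClient(ENDPOINT, AzureKeyCredential(KEY))

# Analyze a document with the custom model trained in the Studio UI
# (model ID "RCA" and the file name are hypothetical).
with open("rca_report.pdf", "rb") as f:
    poller = client.begin_analyze_document("RCA", document=f)
result = poller.result()

for doc in result.documents:
    for name, field in doc.fields.items():
        if field.confidence is not None and field.confidence >= CONFIDENCE_CUTOFF:
            print(f"{name}: {field.value} (confidence {field.confidence:.3f})")
        else:
            # Low-confidence extractions could be routed to an SME for
            # review, or signal that the model needs more labeled templates.
            print(f"{name}: needs review (confidence {field.confidence})")
```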