I'm Ted Kernan, director of cloud architecture at Geologic Systems, which is based out of Calgary in Canada. A quick agenda: we're going to have an overview of why we're talking about the cloud and what we're trying to accomplish, then I'll give a demo. I'll also speak about our architecture and implementation, and of course take any questions that you might have.

A quick introduction to Geologic. Geologic has been around and based in Calgary for over 30 years. We have a variety of different products, such as GeoScout, which is a platform that's used in the Western Canadian basin. We also have a premium data service called GDC. In 2020, Geologic acquired Subsurface.io, which might be familiar to some of the attendees here at OSDU, and it is now part of the data management, or PaaS, system that Geologic is offering. The platform is called Tevio, and it's an enterprise system that can be installed behind operators' firewalls.

One of the reasons that platforms like this are important is that they solve the problem of exponential growth in data. Traditional solutions grow linearly, but data, as we all know, has been growing exponentially, especially with the new insights that can quickly be created with cloud computing. So we need cloud architecture in order to solve the problem of how to capture all of this data and extract its information. Cloud architecture can be summed up as a microservices architecture: different components all connected over a network, as opposed to a monolithic architecture where all the components interact together in one system. In a microservices architecture, every component is scalable, and we ask how, instead of if, a service or platform can accomplish a certain task. Tevio is enterprise software from Geologic. It's cloud native and microservices oriented, so it has high scalability.
It's also data agnostic, so it can work across multiple forms of data input and connect out to multiple data sources. OSDU is one of the data sources that we connect to: we can ingest data from OSDU, which I'll show now, and we're also working on egressing data into OSDU. We can also connect out and help organize companies' data sets into the OSDU format. In the process, we have a flexible data schema and quality control process that allows both data management users and regular interpretation users to look at data in its natural habitat, through maps and dashboards online. The concept is to have data discovery for the entire company, from traditional geoscience and subsurface interpreters all the way to executives, lawyers, financial analysts, anybody that needs to look at and understand the data that a company relies on to run its business. So it is an easy-to-use, lightweight application that allows insights into the data that a company has and breaks down silos between those different data sets.

A use case for Tevio is a product put out by Geologic called GDC Cloud. We moved Geologic's approximately one million wells to the cloud, migrated them from a PPDM-formatted schema to an object-oriented schema, and enabled browser-based access for all users, both internal and external, using a robust entitlement service. GDC Cloud is currently being offered to clients in the Western Canadian Sedimentary Basin, and it's actively being used by hundreds of users every day as one of the products in Geologic's suite.

Let's go through a quick demo of how the OSDU connection works and what the interface looks like. Let's skip over to my browser here. Here we go. Tevio is a map-based interpretation tool. You can see here that I've loaded all of the Canadian wells and the OSDU sample data set into the same platform. In the data stores tab, which a management user can see, we see the different connections to data stores.
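As a rough illustration of what ingesting from OSDU involves, here is a minimal sketch of building a request against the standard OSDU Search API. The endpoint path and request body follow the OSDU platform's public Search service; the base URL, partition ID, and record kind used below are placeholders for illustration, not Tevio internals.

```python
# Sketch: build a request for the standard OSDU Search API to list
# wellbore records for ingestion. Base URL, partition ID, and kind are
# illustrative placeholders.
import json

def build_osdu_search_request(base_url, partition_id, kind, query, limit=100):
    """Return (url, headers, body) for an OSDU Search API query."""
    url = f"{base_url}/api/search/v2/query"
    headers = {
        "Content-Type": "application/json",
        "data-partition-id": partition_id,
        # "Authorization": "Bearer <token>"  # supplied by your auth flow
    }
    body = json.dumps({"kind": kind, "query": query, "limit": limit})
    return url, headers, body

url, headers, body = build_osdu_search_request(
    "https://osdu.example.com", "opendes",
    "osdu:wks:master-data--Wellbore:1.0.0", "data.FacilityName:*")
```

The response pages through matching records, which an ingestion job can then map into the platform's own schema.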
We have our Oracle connection to the GDC database, the OSDU endpoints that are hosted by Wipro, as well as an AWS S3 bucket that we've connected here. We can add data stores using this GUI. If I go over here to the API and to the OSDU entry, you can see that we have an easy way of inputting the different endpoints for an OSDU instance and then connecting those to the Tevio front end. We can also connect directly to the different data sources here. That kicks off a data import process that I won't run through right now, but it basically allows an admin to import data into Tevio easily.

Once the data is on the front end, we allow users to search on various different qualities of the data. This is completely configurable for each Tevio installation, so if you have an internal field of data or an internal key that you want to search against, we configure it here in this front end. Right now I'm going to come over here and search by an LAS property: search for all the wells within the system that have gamma ray. I apply that filter, and we can see all the wells in Canada, and in the North Sea data set that has been loaded here, that have gamma ray. I can also refine that and select all the wells with gamma ray that were drilled by Chevron. I add that in here, and now I've refined that layer. I can save that layer, so I'm going to call these Chevron gamma ray wells and put them into a new folder. I'll save that layer for this user and also share it with other users. If I zoom into the North Sea here, I can search by a polygon, and I'll get a table in the bottom tab that shows me all the wells within this area, and only the three wells with gamma ray that are available here as well.

The other data set that we display is seismic data, so I'll come over and turn on my seismic layer. We can search for a seismic layer in the same way: over here, go to seismic, and open up the survey country.
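To give a sense of how attribute and polygon filters like the ones in the demo could be expressed against a search backend such as Elasticsearch, here is a small sketch. The field names ("curves.mnemonic", "location") and the polygon coordinates are illustrative assumptions, not Tevio's actual index schema.

```python
# Sketch: combine an attribute filter (wells with a gamma-ray curve) with
# a map polygon filter as an Elasticsearch bool query. Field names and
# coordinates are illustrative, not Tevio's real schema.
def wells_in_polygon_with_curve(mnemonic, polygon_lon_lat):
    """Build an Elasticsearch query for wells that carry a given curve
    mnemonic and whose location falls within a polygon."""
    return {
        "query": {
            "bool": {
                "must": [{"term": {"curves.mnemonic": mnemonic}}],
                "filter": [{
                    "geo_shape": {
                        "location": {
                            "shape": {"type": "polygon",
                                      "coordinates": [polygon_lon_lat]},
                            "relation": "within",
                        }
                    }
                }],
            }
        }
    }

# Example: gamma-ray wells inside a box over the North Sea (lon, lat pairs,
# first point repeated to close the ring).
query = wells_in_polygon_with_curve(
    "GR", [[2.0, 56.0], [4.0, 56.0], [4.0, 58.0], [2.0, 58.0], [2.0, 56.0]])
```

Adding the Chevron operator filter from the demo would just mean appending another `term` clause to the `must` list.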
I put in Netherlands with a wildcard, and the data set within that polygon gets highlighted. To preview the seismic data, I can choose the seismic feature selector on the left, click within the seismic volume, preview the header information, and also step through inlines or crosslines within the seismic cube itself. If I want to move forward in inlines, I can click on these navigation buttons, or switch to crosslines. Then, in order to preview that seismic, I click on View Seismic, and we process the seismic on the fly and show a preview of what the seismic file looks like here. There are many other functionalities that GDC Cloud and Tevio have, but that was a quick demonstration, so I'll go back to my presentation now.

So how do we install and work with operators that would like to use Tevio? We have actually dockerized the platform, so it's a container that can easily be stood up on site or in the cloud. It's easy to upgrade the software itself using Docker, and it's Kubernetes-ready. What we deploy in a customer's cloud tenant are the different Tevio servers, a flexible schema, and an Elasticsearch instance. Then we can connect to any source of data, be it OSDU or other sources of data that are in the cloud or on-prem, and we allow users to search and discover that information. Creating users, managing accounts, doing the entitlements, all of that can be done by the administrative users themselves without our stepping in or having to do any consulting work. It's also highly configurable: as I said, we can configure the search, and we can configure the front end so that it looks and feels the way that a company would like its data discovery tool to look. We support various subsurface and surface data types, including all the different well logs, XRD data, seismic, point data, and image files, and we also support land data and pipelines. Anything that's geographic we can import into the platform.
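The on-the-fly inline preview shown in the demo is possible because SEG-Y is a fixed-layout format: a viewer can seek straight to the traces of one inline instead of loading the whole volume. Here is a minimal sketch of that offset arithmetic, assuming a regular, inline-sorted volume; the geometry numbers in the example are purely illustrative.

```python
# Sketch: byte offsets of the traces that make up one inline in a SEG-Y
# file, so a preview can read just those traces. Assumes a regular,
# inline-sorted volume; all geometry values are illustrative.
SEGY_FILE_HEADER = 3600   # 3200-byte textual + 400-byte binary file header
TRACE_HEADER = 240        # fixed per-trace header size
BYTES_PER_SAMPLE = 4      # 4-byte samples (e.g. IBM or IEEE float)

def inline_trace_offsets(inline, min_inline, n_crosslines, samples_per_trace):
    """Byte offsets of every trace on one inline of an inline-sorted cube."""
    trace_size = TRACE_HEADER + samples_per_trace * BYTES_PER_SAMPLE
    first = (inline - min_inline) * n_crosslines  # index of the inline's first trace
    return [SEGY_FILE_HEADER + (first + x) * trace_size
            for x in range(n_crosslines)]

# Second inline of a toy cube: 3 crosslines, 1000 samples per trace.
offsets = inline_trace_offsets(inline=101, min_inline=100,
                               n_crosslines=3, samples_per_trace=1000)
# first offset: 3600 + 3 * (240 + 4000) = 16320
```

A preview service can then `seek` to each offset, read one trace, and render the section without touching the rest of the (often very large) file.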
For a Tevio implementation, we first do a discovery phase where we establish a test use case and gather installation requirements. Then we do a pilot phase where we connect to one or two different data sources and train users on using Tevio. We also do a configuration phase where we scope additional data sources, determine timing constraints, and establish the scalability and cost. And finally, we implement and allow users to run the enterprise software themselves. For further information, you can use the email tevio@geologic.com or call the numbers listed below. I'd also like to share a website that's available if you search for Tevio or GDC Cloud, where you can find more information on GDC Cloud; anything that you see there is part of the Tevio system as well. This is what we stand up behind your firewall and load your data into. That's the end of my presentation. Thank you.