track the embodied carbon emissions through a supply chain: capturing and pulling all that data, verifying that data, and then certifying that data for use in financial marketplaces as well as by end consumers and government regulators. Our general, pragmatic viewpoint is that if you want to get corporations to decarbonize, you need to pull as many levers as possible in unison to get that cost-revenue equation to a place where they're willing to take the risk of adopting some of these decarbonization technologies. So you can imagine, when I started learning about the work you're doing, how interested I was, and how interested I think the SIG will be, because we're definitely playing in the same sandbox. I'll go ahead and hand the floor over to you. Thank you so much for sharing; we're very excited to learn more.

Awesome, thank you for the introduction. It's a pleasure to be here with you today. It's an exciting time to be working on what we're working on: empowering the Web3 ecosystem as well as, equally, the traditional financial system, for all the purposes you just outlined, which I'll dive a bit deeper into. With that being said, I can dive right in if you're ready. Please. So, can I share my screen? He should have access. Let me know once everybody can see this. Yeah, we can see it. Okay, awesome.

Thank you, everybody, for tuning in today to learn a bit more about the importance of accurate climate data, validating the process, and the toolkits we use within Web3, between blockchains and oracle networks. I want to set up some preface here so we can dive deep into a lot of the gaps we see in the market right now. Generally speaking, there's a major lack of climate data out there, and when it comes to the data that does exist, it's very difficult to find accurate data, let alone data that can be proven and validated to stand true. I typed out a little example here from the recent IPCC report, which contains a really alarming statistic highlighting a margin of error between 70 and 90%. When this first came across our desk, it set off alarm bells for us. We said, wait a second, this is terrifying. We're seeing huge emerging markets like ReFi, and we're seeing the entire financial system, from a policy point of view, fundamentally redesigning itself to green the financial system. But how are we going to do this with such a terrible error margin in the data that's out there in the ecosystem? I think this has really been highlighted over the last year, with ESG coming under scrutiny: wait a second, there's so much self-reporting and so much greenwashing taking place; how do we know what's actually real and what's not? That's what I'm going to focus on today: picking apart the systems and the toolkits we have available to us today to be able to do this the right way.
Diving a little further into the preface, I want to highlight what's on the horizon: what is coming and what is taking place around the world, really led by the EU and Switzerland. We're seeing a fundamental restructuring of the central banking system to green the financial system, and, in my opinion, some outlandish numbers being thrown around: trillions and trillions of dollars committed, and trillions worth of assets that are now going to have to be proven green, really greening the reserves of the banks themselves. On the other hand, with emerging carbon markets, there are so many carbon credits today, but how can we measure the impact of those assets? And before we can measure the impact, how can we accurately price them? So we're seeing policy changes demanding these disclosures, saying, hey, we need systems available today that can prove the emissions footprints of corporate entities, or even of large asset managers' fund portfolios, and validate that process instead of self-reporting it, essentially saying, hey, we're this green because my shirt's green and I say so. That self-reporting system has failed, and I think everybody can agree on that now. The truth is, we basically have market disabilities due to a lack of infrastructure and of data that's accessible to the financial system, and that's really what we're all focused on today.

When we first started diving deep into this, it rang even more alarm bells for us. If we're going to green the financial system, in the traditional system just as much as in emerging markets like ReFi, how are we going to do this with such flawed data? And what are the chances that we are about to walk ourselves, and even fund ourselves, into a situation like the 2008 financial crisis? I'll completely oversimplify it, so take it with a grain of salt. But if we completely oversimplify 2008, we had really good financial products and standards being built, and it worked really well for a while. The problem was that the foundation level was not being validated or refreshed. So we built a giant house of cards that eventually collapsed, because it was sitting on a foundation of faulty information. Today, with what we're seeing around the green agenda, we're aware of that before we've gotten there. So we have the opportunity, and we finally have the technology and the toolkits, to build that foundation to support all of these different initiatives around the world. That's what we're hyper-focused on: being able to do that and empower standards to be built on top of this dynamic data.

Another big gap, and I'll say that a lot because there are a lot of them, which really frightens me, is the lack of communication between the scientific communities and atmospheric agencies and the financial sector. What we fundamentally believe is that in order to do this right and achieve the goals that everyone is ambitiously committing to, we need to ensure that these capital markets keep scientific standards at their core. So I start at the beginning: where is the data actually coming from? What does the data mean? And how do these data flows actually work?
Because we first have to understand that before we can begin to build financial systems using that information. Just as a little disclaimer: the visuals on the next couple of slides are intentionally oversimplified. I call them napkin graphics: if you were sitting in a coffee shop explaining a complex situation to someone and they didn't get it, you'd draw a few things on a napkin, and sometimes all of a sudden it clicks. So the next couple of slides are oversimplified on purpose, and then we'll dive into more in-depth technical things later in the presentation.

It's really important to understand what happens within the atmospheric agencies. Today I'm going to highlight mainly Copernicus, ICOS, and NOAA. That being said, underneath that umbrella there are multiple other agencies; in Switzerland, for example, there's EMPA, and you have different smaller groups all over the world. What happens right now is a very manual process that has worked very well within the scientific community for scientific purposes, and we'll get into the challenges it faces in the financial system. NOAA is collecting manually, standardizing manually, and applying their own scientific models and methodologies to the data they collect from groups such as ICOS and Copernicus. This process takes a long time, and it's not as simple as just gathering the data: they're actually doing things to the data manually, with their scientific knowledge, changing and altering things. So the process takes anywhere from six months to a year. Once that's all done and they're confident, they publish it through an open API. That API is not meant for enterprise usage; if it starts getting thousands of calls, it can't support that, but it does work very well within the scientific communities. By the time this manual process is completed, it's been a year, and only then is the data made available. So really, the soonest you can get access to this data is a year late. If you're building and trading on what's happening today, but you're using data from, at the earliest, a year ago, you have a problem right there; very logically, you can see there's an issue. The financial system needs to be able to access this data dynamically, at much more rapid intervals. Right now, this is basically the way it works: it's a very legacy system, very Web 2. You have an API, and capital markets scrape the data, grab it, and build what they want with it.

Now, as this goes on, multiple people are using a single data set for various different things. I'll focus around carbon, but it's applicable to all sorts of different GHGs and ODSs. Within the capital markets umbrella, you've got ESG firms, banks, emerging markets like ReFi, and then you have models, and the list goes on and on. They're all using a single data set, doing their own things with it, adding their own models and calculations on top of the data. So essentially what begins to happen is you have one origin data set that is used by multiple different organizations, and none of it is connected. Essentially, it's like a fractionalized data set.
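To make that concrete, here is a minimal, hypothetical TypeScript sketch of the fractionalization problem: several consumers copy one origin snapshot and silently transform it, and only a shared, deterministic fingerprint reveals which copies still match the origin. The station names and values are invented for illustration.

```ts
import { createHash } from "crypto";

type Observation = { station: string; ppm: number; timestamp: string };

// Deterministic fingerprint of a data set: stable ordering, then SHA-256.
function fingerprint(data: Observation[]): string {
  const canonical = JSON.stringify(
    [...data].sort((a, b) => a.timestamp.localeCompare(b.timestamp))
  );
  return createHash("sha256").update(canonical).digest("hex");
}

// One origin snapshot published by an agency (values invented).
const origin: Observation[] = [
  { station: "CH-TOWER-01", ppm: 417.2, timestamp: "2022-06-01T00:00:00Z" },
  { station: "CH-TOWER-01", ppm: 418.0, timestamp: "2022-06-02T00:00:00Z" },
];
const originHash = fingerprint(origin);

// Each consumer copies the snapshot; one applies its own "correction".
const esgFirmCopy = origin.map((o) => ({ ...o, ppm: o.ppm * 1.01 }));
const bankCopy = origin.map((o) => ({ ...o }));

// Without comparing fingerprints, nobody can tell which copy still
// reflects the origin data set.
console.log(fingerprint(esgFirmCopy) === originHash); // false: silently diverged
console.log(fingerprint(bankCopy) === originHash);    // true: still matches
```

The same fingerprint idea is what the hash validation discussed a little later builds on.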
And so, all of a sudden, one original data set fractionalizes and becomes anywhere from 5 to 100 different data sets floating around out there, and it all starts to change. It starts to veer away from the truth of what actually happened in that data set, which is very alarming and troubling. Fifteen years ago, we didn't have a way to solve this; the system worked, and we also didn't have the demand for this data that we do today. But today that landscape has changed. So we have multiple gaps throughout the system that we need to fix, and ultimately it's an infrastructure problem.

To dive further into the atmospheric agencies, there's an issue that arises beyond the single data set becoming fractionalized. Say a specific data set from two years ago is being used: products are being built on it, or an ESG firm is using it for their calculations, or you name it, you get my point. NOAA will discover something today, new data sets that change their understanding of the science, and they will go back into the historic data in the database and alter it, to correct it and make it more accurate. That creates a big problem for the financial system, because you have a data alteration, a new data set that's created, but it's not connected. There's no dynamic aspect to it, so it doesn't refresh and update the continuous value chain the data is being used in. Suddenly, the moment that data is altered, for all of the groups that previously used that data set, it's no longer valid, but they might not even be aware of that fact. They might not know it's been changed. That only makes the problem of these fractionalized data sets, as I call them, worse. So now not only do you have fractionalized data sets, but they might be 100% invalid and you don't even know it. And there's no malicious player here, no fingers to point; it's simply a flawed system. That's one of the main reasons we see such an issue with the accuracy of climate data floating around the markets today: this is the system that's been in place for years, and this has been happening every day. No wonder, at this point, a lot of the data floating out there in the market is essentially trash, because of this exact problem.

So what can we do to begin to address this and fix it: to empower capital markets with access to this data closer to real time, while ensuring it's validated and that every step of the data is logged and essentially has a hash validation on the blockchain, so it's accountable and traceable, and you can see what data was changed, when it was changed, and who changed it? What this requires is really the topic of today's discussion: blockchains and oracle networks. Using an oracle network enables us to do a lot of the validation and aggregation, then internally the standardization and building additional infrastructure on top of that, as well as using the blockchain for the hash validations. I'll get deeper into that. But this, again, is just a napkin diagram comparing the old way to a new way, where all of these different atmospheric agencies can validate, aggregate, and standardize their data.
And now it's no longer manual; it's an automated system that uses oracle networks and blockchains and empowers all of these different groups within capital markets to access this data. If anything gets changed in the backlog of any of these data sets, it automatically refreshes throughout the financial system using a smart contract system, which is possible through the oracle network.

To move away from the napkin drawings and get a little more technical, we'll dive into the next system, which is really the validation and collection system. When we focus on the Web3 world, a really beautiful example of what oracles are used for is price feeds. One of the advantages you have with price feeds is multiple different sources of information, so you can collect that data and come to a consensus on the most accurate answer. What we don't have in the climate world is multiple NOAAs; we don't have multiple Copernicuses or multiple ICOSes. There's still a single source for each. So what can we do to validate this process and still ensure it can be trusted, and ideally get to a trustless ecosystem, which this diagram breaks down, so that you don't have to trust one entity, whether that's an atmospheric agency, an ESG firm, or a bank, to know that this data is what it truly is? Here you can see the Hyphen oracle network, which we'll get into a little later: a Chainlink-powered decentralized oracle network with our partners, G-Link Solutions and Northwest Nodes. What we do here is gather the data as soon as it's made available from these different agencies. All of this is put on-chain and run through Avalanche, as well as going into the Chainlink infrastructure and onto a blockchain where you have hash validation: at each step of the data's value chain, any alteration or change gets a hash validation stored on the blockchain, so you have an immutable ledger of all of this information. That way, when things change, on the one hand it's automatically refreshed, and on the other you have a backlog so you can see what's been going on historically as well. This way you can really trust the system and know: this is the data I need, and it's accurate. And if it's been changed, you can see who changed it and why, so we have a much more accurate understanding of what's actually going on with the data and where it's coming from.

The next slide breaks this down in a different type of visual, more of a technical stack. We have the real world, where all of this data is collected today by various atmospheric agencies through a combination of satellites, IoT towers, oceanic submersibles, and all sorts of different systems around the world, all run by different atmospheric agencies, so they all operate in different silos. By using an oracle network, with the validation on blockchain I already described, we now have the ability to take all of this real-world information and empower the Web3 ecosystems and your off-chain, legacy systems equally. The beautiful nature of an oracle network is that you can run these systems on-chain as well as off-chain.
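As a rough illustration of the hash-validation idea described above, here is a hedged TypeScript sketch of the producer side: each revision of a data set is fingerprinted, and the fingerprint is anchored on-chain, so any later alteration shows up as a new, attributable entry. The registry contract, its interface, and the addresses are hypothetical stand-ins for illustration, not Hyphen's actual deployment.

```ts
import { ethers } from "ethers";
import { createHash } from "crypto";

// Hypothetical registry interface: one entry per data set revision.
const REGISTRY_ABI = [
  "function recordRevision(bytes32 datasetId, bytes32 revisionHash) external",
  "function latestRevision(bytes32 datasetId) external view returns (bytes32)",
];
const REGISTRY_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

async function anchorRevision(rawDataset: string, datasetName: string): Promise<string> {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);
  const registry = new ethers.Contract(REGISTRY_ADDRESS, REGISTRY_ABI, signer);

  const datasetId = ethers.id(datasetName); // keccak256 of the data set name
  const revisionHash = "0x" + createHash("sha256").update(rawDataset).digest("hex");

  // Anyone holding a copy can later re-hash it and compare against this
  // on-chain record to prove it is unaltered, or see exactly when it changed.
  const tx = await registry.recordRevision(datasetId, revisionHash);
  await tx.wait();
  return revisionHash;
}
```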
So it's really interoperable, and it works for any pre-existing system as well as any future system. As I described in the layer-two section with your DApps, all of these different groups can now begin to access this data through smart contracts, whether on-chain or off-chain, but it's not giving them the upper hand compared to the legacy world: both can access this in real time, know that it's truly validated, and know the source of the data.

To go further, what does this really enable? Now that this system's in place, how do we know? Should we trust Hyphen or should we not? I really stay away from saying we should trust an entity, or trust it because it's blockchain; those are the problems. The truth is that we're not fixing this problem just by putting it on-chain or using an oracle network. There's much more to it: building additional infrastructure to collect and aggregate this data in real time, then running it through these blockchain systems to validate that process, as well as making it available and easily accessible through the oracle network. The Hyphen oracle network is a partnership between Hyphen, G-Link Solutions, and Northwest Nodes, so that it's truly decentralized governance. This way you don't have to trust one entity, and there's no bottleneck. What this provides is much more security and reliability, with no downtime. And you know the data isn't coming from a single source; it's always being cross-validated, so anyone who calls on this data knows it's exactly what they expect it to be.

Where are we gathering data from? Within Hyphen and our partners specifically, it's mainly the different climate and atmospheric agencies around the world. But we also work with external partners: the private sector is really growing, especially around space agencies, satellites, and things of that nature. We also work, which I can get into further at the end, with corporate entities themselves, on data they gather for supply chain purposes, for scope one, two, and three emissions. It's really important to have both sides of that. Using atmospheric data and specific methodologies, you can establish baselines and calculate the emitting locations as well as the sink locations of different GHGs in the world, which gives you a very accurate baseline to compare a corporate entity's footprint against. What we find is that you see what's being reported, and then you see what the actuals are, and there's still a large gap there. The goal is to narrow that gap as much as possible, and through that ability is really where the carbon markets will flourish, by reducing those emissions. So what do we empower now, with the ability to access all this data? It's a very wide range: I have a fraction of it listed here on the right side, customers like smart contract DApps and financial institutions; I don't need to read it, it's right there for you. But now the financial system has the ability to access data, in some cases in real time and in others in near real time, to build more sophisticated financial products, knowing they have the most accurate data to do this and that it's constantly being refreshed.
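The consumer side of the same hypothetical registry might look like the sketch below: before trusting a copy of a data set, a client re-hashes it and compares it against the on-chain record; a DApp could do the equivalent check on-chain. Again, the contract interface, address, and URL are illustrative assumptions.

```ts
import { ethers } from "ethers";
import { createHash } from "crypto";

const REGISTRY_ABI = [
  "function latestRevision(bytes32 datasetId) external view returns (bytes32)",
];
const REGISTRY_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

// Fetch a data set, re-hash it locally, and compare with the on-chain record.
async function verifyDataset(datasetName: string, dataUrl: string): Promise<boolean> {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const registry = new ethers.Contract(REGISTRY_ADDRESS, REGISTRY_ABI, provider);

  const raw = await (await fetch(dataUrl)).text();
  const localHash = "0x" + createHash("sha256").update(raw).digest("hex");
  const onChainHash: string = await registry.latestRevision(ethers.id(datasetName));

  // A mismatch means the copy in hand was altered, or a newer revision exists.
  return localHash === onChainHash;
}
```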
So the entire system you're looking at right now, the diagram of the system, is a completely dynamic and automated system that ensures the data is being refreshed. In my earlier analogy comparing it to 2008: now that foundation layer is validated, it's constantly being refreshed, and if there are any alterations, you know exactly what's happening. It's all accounted for and validated on the blockchain, so you can see it.

On to the conclusion; I'm not sure where we are on time, and I might be cruising through this too fast, but I'm also looking forward to getting into more of a discussion with everybody. The first thing I really want to highlight is that it's very important that we validate the entire value chain of the data: from the moment it's created at various IoT devices, whether that's an IoT tower, a weather station, or a satellite; moving into the hands of one of the atmospheric agencies, bringing validation and transparency there to see what each agency is doing to the data, their alterations; and then finally getting to the steps where it's made publicly available and people can access this data. As they access this data, they know it's constantly being refreshed, and they have a place to go back to, to make sure that by the time the data gets into their hands, it matches and is validated, the same as it originally was. This builds that foundation of dynamic trust.

The second thing is ensuring this data is made available as quickly and as easily as possible. I mentioned earlier this gap of about a year: if we're launching green bonds, or tokenizing various carbon assets and creating carbon credits, it's important that we can use the data of today if we're pricing the assets of today. So it's really important that we not just unlock this data and make it more accessible, but that it's as real-time as possible. The truth is, in some cases and in some regions it's better than others; there is a lack of physical infrastructure in the world to actually gather this data. It's not as simple as just grabbing information from satellites or from weather towers. There are very specific methodologies within the scientific communities that triangulate a combination of those things to understand exactly what's happening and why. That's what we need to make available to the financial system so that they can reach the climate goals everyone is hooting and hollering about.

The other thing is making this available in preferred formats. The Web3 world is going to want to see things in a very different format: on-chain, or made available in smart contracts. A bank will want to see this in a completely different format. And lastly, it really empowers the scientific communities, because now they have a system that makes their own processes more efficient, so they have to do less manual work. There's always going to be some manual work, because you actually need the scientific brain and knowledge in the process. But this way we make their systems more efficient, so we can increase the speed at which this data is made available to everybody, and they can use it to build whatever financial products they want.
The last thing I really want to highlight, speaking for the team at Hyphen with their various backgrounds as well as our partners at G-Link and Northwest, is that the truth of the matter is that a lot of what I've described today is simply not possible without using blockchains and oracles. I try to say, as clearly and as nicely as possible, that it's not about disrupting the financial system or the traditional system, or saying, oh, Web3 is more powerful and better and we don't need the old system. I try to find the equilibrium between the two, using the best toolkits out of the Web3 and DLT ecosystem to upgrade the legacy system to run more efficiently, validate those processes, and automate processes. With that being said, that's basically it; we can dive into Q&A and discussion. If you'd like, I didn't add it in this deck, but since I know you're pretty focused on supply chains, I do have one extra slide at the end where I can talk a little bit about the supply chain side of things that these systems enable.

Yeah, if you have the slide on hand, that might help inform the discussion too. Just to make sure we have enough time, we have about another 30 minutes left, so I'd love to quickly hear about how you empower supply chain tracking, and then we can open the floor to some questions.

Yeah, awesome, that sounds great. One second while I click over to it. So, on to one of the things these systems enable. Again, this works better in certain regions than others, and in certain industry verticals as well. Depending on the region, we can establish accurate baselines of what's going on: what particles are actually in the atmosphere, what is happening, how much of that is inflow from other regions, and how much is actually coming from the region itself. Being able to understand that is crucial to then understanding what a company's footprint is. So you can establish very accurate baselines, and once you do that, in some cases it does require implementing more IoT devices. But if you look at some of the work the OGCI has done, they've actually invested quite a bit of resources in putting infrastructure in place to measure different GHGs within the energy sector. So in this system, what we have, which is highlighted more on our website and less in our Web3 initiatives, is the ability to track scope one, two, and three emissions. The situation shown here is a typical plastics value chain coming from oil: the scope one and two emissions become the next participant's scope three, and so on throughout the supply chain, from the point source to the point of sale. What this system allows is that once you establish the baselines, you can begin to monitor data from devices that we or partners install, or even data collected by the companies and institutions themselves, to understand what their footprint is and compare it to the background information from the baselines. That gives you a really accurate ability to understand what their scope one and two emissions are, which become the next participant's scope three.
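A toy TypeScript sketch of the scope chain just described: each step's scope one and two roll into the downstream steps' scope three, accumulating from point source to point of sale. The step names and figures are invented, and real scope three accounting is considerably richer than this.

```ts
type SupplyChainStep = {
  name: string;
  scope1: number; // direct emissions, tCO2e
  scope2: number; // purchased-energy emissions, tCO2e
};

// Walk the chain: everything emitted upstream becomes this step's scope 3.
function withScope3(chain: SupplyChainStep[]) {
  let upstream = 0;
  return chain.map((step) => {
    const scope3 = upstream; // inherited from all upstream steps
    upstream += step.scope1 + step.scope2;
    return { ...step, scope3, cradleToHere: upstream };
  });
}

const plasticsChain = withScope3([
  { name: "oil extraction", scope1: 120, scope2: 30 },
  { name: "refining",       scope1: 80,  scope2: 40 },
  { name: "polymerization", scope1: 50,  scope2: 25 },
  { name: "retail",         scope1: 5,   scope2: 10 },
]);

// Retail's scope3 = 345 tCO2e: the sum of every upstream step's scope 1 + 2.
console.table(plasticsChain);
```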
The information captured from the scope one and two is then put in a tamper-proof smart contract framework using the oracle network, which essentially creates a virtual container that lives in the digital world and follows that supply chain. This way you have an accurate understanding of the scope one and two emissions, and then the scope three, from each step throughout a supply chain. I do think it's really important to highlight that this is more easily done in some verticals than others. In heavy industry, in oil and gas, energy, things like green hydrogen, LNG, RNG, this is very doable today. The things we as an organization try to stay away from are, as an example, something like a fashion supply chain. It's very difficult to put an actual machine-to-machine framework in place there without self-reporting taking place; there are too many variables moving around in the clothing industry, there are dyes, there's self-reporting. Those are the things we stay away from. But within heavy industry, agriculture, and energy, these systems are completely doable today.

This also ties into my second or third slide around policy: starting in 2023 in Europe, you have CBAM coming into effect, the Carbon Border Adjustment Mechanism. Whether you're a green hydrogen producer or you're mining lithium in, let's say, Chile, if you want to be an exporter from there and bring that into Europe, you're now going to have to prove those scope one, two, and three emissions. This is a system that can automate that process for you, which makes it much easier. It gives you the ability, once you come across that carbon border, to actually prove the greenness or the brownness of the underlying asset. I believe starting next year you have to do this; if not, you'll start to be penalized and run into fines and things. I think it's still slightly up in the air, but as we get closer to 2026, this is something that will be mandated, and if you can't prove this, well, you're not bringing it into the EU. So I think this is a really important framework that has a lot of demand to be used right now. It really comes down to which verticals are ready to start doing this and are under the most pressure to do it, but we have the technology today to achieve these missions.

Fantastic, thank you. I'm really glad we made time for you to walk through that last component; it's definitely in line with a lot of the work that we're doing. I've got some questions, but I'd first like to open this up to the SIG to see if anybody has any questions. So please, raise your hand. I see George right there.

I've got a quick question, which would probably be a good place to kick off the discussion. But first of all, thank you; I thought it was an excellent presentation, and one that really described the wide range of things that you're doing. My question has to do with the data and the data flows themselves, which you touched on a little. What units are you dealing with? Are you dealing with tons of carbon dioxide equivalents, or is it a range of data types? Could you describe a few of these and perhaps give some examples of where the data ends up?
Yeah, absolutely. The truth is, it's a lot of data. In our system, I think on an average day we get an inflow of 40,000 different data sets. It's all made dynamically available, but it's a combination of things, from different isotopes to different levels in different regions. ICOS, in particular, is mainly focused on tower infrastructure throughout Europe. These towers have the ability to measure different GHGs and ODSs in the atmosphere, as well as the isotopes, in either parts per billion or parts per million. This is made available, but in a data format that no one really understands and that's hard to work with. That's why it's important to standardize and aggregate it, and actually begin to create averages, so that you can call, say, a daily average. That's difficult to do, just because sometimes the data computation isn't there and it's not produced daily, but we do have the ability to do weekly averages, monthly averages, things like that. Copernicus is much more focused on satellite data, but you need the combination of the two to understand where the emitting locations are versus the sink locations of different GHGs, and then use the IGAS methodology to understand what's taking place. If there's a large region of forestry, how much carbon sequestration is actually taking place there, or not taking place? So to answer your question a bit more directly, it's a very wide range of data. Some of it is more useful to the private sector than other data. As an example, carbon flux data is really important to carbon markets, as that's really the ability to measure the carbon sequestration taking place in certain regions.

Sorry, what did you call it? Carbon what data? Carbon flux data. And that's measured on these platforms that you talked about. Is it flux data for individual facilities? No, it's really for different regions. That's where you can use it to establish more accurate baselines, so you can compare, say, a corporate's footprint to that region and area. Brilliant, thank you very much. Thanks, Miles.

All right, are there any other questions? Okay, well, I'll jump in; I've got some. Actually, there's one from Anna Sender, who had to drop, but she wanted to ask: the automation process is excellent; where are you currently with the execution and onboarding of the relevant parties? She mentioned NOAA and ICOS. Or, she asked, is this still at a planning phase?

Yeah. So everything that you've seen today is built and it's there. We actually made an announcement today about the Hyphen oracle network with G-Link Solutions and Northwest Nodes. So this is available; it's built today, and you can access it. One of the things about our ethos as a company is that we prefer to talk about what we've done, what we've built, and what we have today to empower different organizations, and we stray away from talking about future plans and things of that nature.

Okay. So with NOAA, for example, are there active conversations with them about integrating their data onto the platform? How did that process go?
Yeah, absolutely. We have a really good working relationship with the different atmospheric agencies you saw today. It's really important that when we work with these groups, we're not just coming in as the private sector and saying, oh, we need this, help us, we're here to take this data and help the financial system. I really believe it's a two-way street. As much as they can do to help us, we also have a very big initiative to help them, as I briefly mentioned, by making some of their systems more efficient and more automated to make their lives a little easier as well. We've been working with these groups for over a year now.

All right, I have one question. Yes, please. I liked the presentation, and I like the whole idea of the disruptiveness of that particular solution. However, my question is more on the emissions calculation side. We know that there are specific variations in carbon intensity from country to country. Is that taken into consideration in your app, or is it a one-size-fits-all calculation system?

No, it's not one-size-fits-all. And I think it's important to draw a distinction here: at Hyphen, we're not doing any of our own models and calculations on what we think is happening in an area. We work within the data that we're given and have access to, to empower, say, an ESG firm with dynamic data that we know is accurate and meets scientific standards, so that they can sharpen and improve their models. And to the first half of your question, some of that depends on the region. Europe, for example, has the infrastructure in place to do this much more accurately than other locations around the world. That's again where I highlight that there's an infrastructure gap in the world: there needs to be more infrastructure to actually gather this information. But the good news is that with the majority of the infrastructure that exists today, we can empower groups with this information so they can sharpen their models, increase their accuracy, and compare that against specific corporate entities or certain regions of agriculture, etc.

All right, well, we need to have a chat. I'll try to connect with you on LinkedIn; could you put your LinkedIn details in the chat for us? Yeah, I can do that, let me see. It's very innovative, your solution. Thank you very much, I appreciate it. The most important thing to us, as these markets grow so quickly and so much capital is pushed behind them, is that as these different products are built, we keep the scientific standards and this data at the center of these markets, because I fundamentally believe that is the key to doing this the correct way. And if we don't do it the correct way, as I referenced early in the presentation, which some people might call a little extreme, we might unknowingly walk ourselves into another 2008 situation. Not intentionally; there are no fingers to point, there's no good guy or bad guy.
It's just the lack of infrastructure and the lack of collaboration between the private sector, capital markets, and the scientific communities. We've got to work together on these issues, and finding that equilibrium is how I think we can all do this the right way. Cheers, thanks very much. Thank you.

Also, fielding a question from Kyle Robinson: can you describe more about the technology used? He says it appears to be Hyperledger Fabric. Also, what technology would input systems use to add data to the ecosystem?

Yeah, so if somebody wanted to add data to the ecosystem and enable the Hyphen oracle network to access that data, we do that by working with you hands-on. It's really important to us that the data we bring in is very accurate and fits those scientific standards. So on the one hand, if someone is interested in providing data into the network, we would take a meeting with them; we're always open to those conversations and welcome anybody to have those discussions with us. On the other hand, if there are people out there running and operating their own infrastructure, specifically for Chainlink, we welcome them too. People are welcome to join our network as validators, as long as they meet the standards of running and operating those systems.

And are you operating on Hyperledger Fabric? Currently, no. I think we can change that pretty soon.

Another question, from Gregory Saylor: are you already working with Ocean Protocol and dClimate? Yeah, we're great friends with Ocean Protocol as well as dClimate. I think those are both really great platforms and marketplaces where people can go to access this data and use it for their own systems. We're really a data supplier; we don't have our own marketplace or things of that nature. So I'd highly recommend both Ocean Protocol and dClimate.

Fantastic. All right, are there any other questions? One question that I have: this is a space we've been thinking about quite a bit, obviously, and it sounds like you're developing solutions for supply chain scope one, two, and three, and also developing digital assets to allow the financial markets to come up with different financial products to drive decarbonization. Our philosophy is that this is such a large problem, with so many different players that need to be involved in such a small timeframe, that an open-source approach makes it easier to bring all the players together to build the different solutions and to bring together all the standards and protocols that need to be in place for this global climate accounting system to operate effectively. It sounds like you're going the commercial route. So I guess my first question is: how does it scale, given the magnitude of the challenge and the complexity of the moving parts? I'll ask that question first.

Yeah, absolutely. Within the infrastructure we built, as it stands today, one of the most important things from the beginning was ensuring that it's very scalable.
We've cracked jokes from time to time on calls about how some of the APIs that exist today for accessing climate data, whether from NOAA or ICOS or somebody else, were not built for enterprise use. They're not built to handle thousands of calls a day; if that happens, they have their own issue there, which is just a technical issue. So we built a very robust infrastructure to ensure we can support full scalability. Whether we're getting two data calls a day or hundreds of thousands, our system can handle it, and we're prepared for that. I hope that answers the first part of your question on making sure this is scalable: we can empower everybody with this data as soon as we get it, it's a dynamic system, and it's built not to be overwhelmed. There's another aspect as well: with it being a truly decentralized oracle network with G-Link Solutions and Northwest Nodes, if, say, Hyphen's nodes went down, there's still G-Link and Northwest. So it adds not just redundancy to the validation, but security as well as reliability, to ensure to the best of our abilities that there's no downtime.

Yeah, so from a technology standpoint that makes sense. I guess the question I have is: standards alone for tracking embodied carbon emissions through a particular industry vertical require each vertical to bring together the stakeholders to decide on the standards they're using, how they're tracking data, and how they're verifying data. How does your solution enable that kind of collaboration and governance?

Yeah, absolutely. Fair warning, I'm about to give you a long answer to your question, but I think it requires that detailed an answer. First, when it comes to scalability on the corporate side and in the different verticals, whether that's the energy sector, agriculture, or heavy industry, the truth is it's going to require these corporates to put their money where their mouth is. Everyone has pledged a lot of things; at the end of the day, they're going to have to start implementing the infrastructure to get more accurate data and move away from self-reporting. I personally believe the groups that begin doing that sooner will have an advantage in the marketplace over the others. When it comes to our system for tracking scope one, two, and three emissions, again, it's really important, not just to our brand but to the way we think about the climate crisis, that we don't onboard self-reported data. We are highly focused on machine-to-machine frameworks. As an example, in the oil and gas industry, some of these companies have infrastructure in place where they can, and do, measure the different GHG emissions they're releasing, mainly methane. We want that to be a machine-to-machine framework, so we can use the data.
We already have accurate baselines established, so we collect the data, not from them telling us or filling out a spreadsheet, but machine to machine through IoT devices, and then look at the differences between those two data sets to accurately understand what their individual footprint is. So when it comes to scalability on the corporate side, as you mentioned, it's something I actually think about quite often: okay, we know what we need to do now; at this point we just need to get up and do it. I can't force a corporate to start doing this, but there's a lot of pressure now, depending on where in the world these organizations are, and that has changed things. I think a lot changes in 2023, when these corporates are going to be forced to start doing those things, and they should. I think that's the only way we can really achieve this goal when it comes to scope one, two, and three emissions and tracking those for different corporate entities.

On the other hand, when it comes to things like carbon markets, we see a lot happening right now in ReFi, where a lot of groups are focused on tokenizing different carbon credits, basically tokenizing carbon assets. Something we really stress the importance of at Hyphen: to tokenize, and I'll use forestry as an example, to tokenize a tree is not super difficult. The technology is there, we can do it, and I love that groups are doing this. But how do we know how to accurately price that? The first step, in my opinion, is actually measuring the performance of those carbon assets. What is the impact measurement? Being able to measure that according to scientific standards, and provide that data in a tamper-proof, dynamic manner, enables these different trading exchanges and groups tokenizing carbon credits to prove the performance of those regenerative projects or carbon assets, whatever the underlying asset is. Once you can measure and prove that impact, that's when I think you can accurately price it. And I think the groups that begin to use that type of system will be able to charge a premium for the credits they're producing, because they can prove exactly how well it is or isn't working compared to the person next to them. That's very important, not just for the financial benefits, which are obvious, but for ensuring that this works and that we are making the world a more sustainable place. Because we know there's a lot of science involved in that process. It's not as easy as throwing up a little IoT device in the forest and saying, oh, the number went down, so we're doing a good job. There are a lot of methodologies that need to be put in place. Are there inflows? What are the emissions coming from other regions? It gets very scientific, and that's why it's important to ensure not just that you're using quality data for your background, but that the right IoT devices are implemented and put in place, if they're even needed in those regions.
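A deliberately oversimplified sketch, in the napkin spirit of the talk, of comparing machine-measured emissions against a regional baseline and against a self-reported figure. Real attribution uses inflow corrections, sink modeling, and scientific methodologies far beyond this; all numbers and units here are invented for illustration.

```ts
type Reading = { tCO2e: number }; // one hypothetical IoT measurement interval

// Subtract the regional background from each reading so that only the
// site's own emissions remain, then total them.
function attributedFootprint(
  siteReadings: Reading[],     // machine-to-machine data at the facility
  baselinePerInterval: number  // regional background for the same intervals
): number {
  return siteReadings.reduce(
    (sum, r) => sum + Math.max(0, r.tCO2e - baselinePerInterval),
    0
  );
}

const measured = attributedFootprint(
  [{ tCO2e: 14.1 }, { tCO2e: 13.8 }, { tCO2e: 15.0 }],
  2.0 // invented background level
);
const selfReported = 25.0;

// The gap the speaker describes: what's reported versus the actuals.
console.log({
  measured: measured.toFixed(1),
  selfReported,
  gap: (measured - selfReported).toFixed(1),
});
```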
So, not to sound like a broken record, but it's so important that we keep scientific standards at the center of these emerging markets and enable these different groups to access the data and use services to measure the impact of what they are and aren't doing. Because, unfortunately, if you plant the wrong type of tree in the wrong geolocation at the wrong time, you could actually be doing a disservice: it might not be sequestering carbon, it might be a carbon emitter for the first few years. There are all these different things you need to take into account, which again makes me emphasize the importance of bridging the gap, on more of a human level, between the scientific communities, the financial system, and the different groups, so that we know we're all doing this the right way.

I think we have time for one more question; we're about to run out of time here, and we're just going down the list. The next question that makes sense to me is: how is Hyphen governed? Is there a DAO or other structure to maintain standards and transparency?

As it stands today, there's not a DAO. There are plans down the road to put things of that nature in place. Right now, where we really emphasize decentralized governance is in the heart of our system, the oracle network. It's important that we don't just spin up an oracle network off a bunch of our own nodes and say, oh, it's decentralized because we have different nodes, but they're all run by Hyphen, so trust Hyphen. That's why I was quite excited today, when we made the official announcement of the Hyphen oracle network, partnered with G-Link Solutions and Northwest Nodes, who run their own Chainlink nodes within our oracle network. That way it is truly decentralized, and you're not just trusting the single point of Hyphen; you have other groups in the network, so it's not just decentralized technology but truly decentralized governance among different operators and validators within the network.

Thank you for that. It looks like we have time for one more question, from Elizabeth Green. She asks: what are your requirements for standards? Who are you working with to develop them? And can you use standards developed with our standards working group?

The first thing I'll say is that we're very collaborative. We want to work with a lot of people, and I think coming together is how we will achieve these goals. So I welcome any discussions with anybody working on standards. There are a few groups we're working with on different pilot projects to essentially establish new standards, all of which originate from scientific standards around the data. We're less focused on branded standards and more on standards that can be proven through the scientific data: this does work, look at the science, don't just trust me. A lot of the ethos of what we're doing here is focused on trustless systems, and we want to take that same ethos and apply it to working with different groups to build new standards. Fantastic.
Well, we're just at the top of the hour. Really appreciate it, Miles; this was a fantastic presentation. I think we all have a lot to chew on and think about. Thank you for your time, and congratulations on the progress made so far. Hey, thank you, I really appreciate you having us here today. If anyone has any further questions, I'm available on LinkedIn, or you can shoot Hyphen an email at info@hyphen.earth. And again, thank you to the Hyperledger team; I appreciate you having me on today. Absolutely. All right, thanks, everybody. Thanks, Miles.