Well, great. Thanks, everybody, for joining. Hopefully, as we were just talking about, everybody's got a little energy left for this last session of Hyperledger Global Forum 2021. As Brian was just saying, they did in fact save the best for last with this hosted discussion about our experience with OpenIDL building industry-wide applications. So this is a discussion, certainly open for questions. As I go through the presentation, I can't easily watch the chat while presenting, so please use audio or chime in and get my attention. I'm happy to pause for questions, as well as for translations as we go between insurance jargon and technology jargon. Hopefully, at this point, we're here for a couple of very specific things: to start building and really start delivering on the promise of blockchain and Hyperledger, and to be able to do that in a big fashion, to go industry-wide. Hopefully we'll have a good discussion here and understand, from our experiences and what we've done to get to this point, how we think OpenIDL can help a lot of the other initiatives we saw over the course of the last couple of days. By way of introduction, I'm Truman Esmond. You can hit me on email, you can hit me on Twitter, and I'm pretty easy to find on LinkedIn as well. Just reach out, let me know you're a human, and I'll be happy to connect and see what we can do together. My primary job is Vice President of Membership and Solutions at AAIS. A bit about AAIS: we're a not-for-profit association of member insurers, so we're kind of a natural consortium. We've been around for almost 90 years. We are a licensed intermediary for data for the property-casualty insurance market in the United States; that's homeowners, auto.
All the insurance that's not health insurance or life insurance; pretty much everything else falls under P&C. My job as VP of Membership and Solutions is to engage our member insurers, work with them directly to understand their pain and their opportunity, and build solutions both within AAIS, helping our carriers, our member insurers, do the same thing, and then more broadly to fulfill our mission for the insurance industry, which leads us to OpenIDL. With OpenIDL, I serve as the lead architect. I'm also the chair, since joining the Linux Foundation, of the Technical Steering Committee. OpenIDL stands for the Open Insurance Datalink. You'll often hear me reference it as the Open Information Datalink as OpenIDL moves beyond insurance. But the first thing that we have today is a regulatory reporting data network, and that's the part we're gonna be able to leverage to start building more industry-wide applications, both for insurance and far beyond. So with that, we're gonna start with a quick "why" of OpenIDL; with blockchain technologies, and really any efforts like this, it's critical to start with the why and the purpose first. Then hopefully a five-minute architecture overview: the types of nodes, what's inside a node, and how the network itself works. We'll look at that through three different application examples that we've used for OpenIDL, looking at the data flows, how the design reinforces privacy, the sorts of logic, a bit about how the UIs interact, and the work underway. The three examples we have today: one is the nine-state COVID-19 proof of concept we just completed; then regulatory reporting using auto; and then flood and catastrophe response, which is an emerging threat as we deal with growing issues like climate change.
And then as we look beyond, we wanna make sure we hit on industry-wide applications and how you, as a Hyperledger developer or a technical constituency, can leverage OpenIDL, now a Linux Foundation project, to build industry-wide applications, maybe leveraging solutions you may have already built on Hyperledger. So the purpose of OpenIDL, the real reason, is here, and I wanna make sure we start with the takeaways first in case we run out of time or get into deeper discussions; I do have a tendency to go down rabbit holes. It's really important to understand that OpenIDL is a mechanism that allows us to ask and answer very important questions. The community is ready and open for business under the Linux Foundation. Now that it's official, you can go to the website, look at the wiki, join the social forums and Slack, and then hit us up on GitHub, look at the repositories, and test out the reference code that is out there and currently being developed. Like I said, it's now a Linux Foundation project. The technology is stable; the underpinnings have been stable for the better part of a year. And the key thing we are providing with OpenIDL is true data privacy and information delivery through governance networks, a strategy OpenIDL fits right into at the Linux Foundation. The key differentiator of OpenIDL is the regulatory connection through property-casualty insurance, and the breadth of property and risk that allows us to connect a lot of the other things we have up in the air and in the marketplace going forward. A quick introduction to AAIS: I won't read this to you, but as a consortium, an association of insurance companies, we serve three functions that we are licensed by each state to perform, since insurance is state-based. The first is as a statistical agent: in that role, we aggregate and anonymize information that is supplied to us by our member carriers.
We summarize that and then distribute it in the form of data reports to the state regulators. So all the data is kept anonymous, and the regulators get a general understanding, from a pricing and coverage perspective, of what is fair, what is adequate, making sure the industry is not being unfairly discriminatory and people are getting what they pay for, with insurance being a regulated market. Beyond that, we have a couple of other important functions. As a rating bureau, we take that data and reflect it back to the industry. It's a limited antitrust exemption, essentially providing the cost of goods sold for insurance, allowing new products to be developed as well as experience to be absorbed by the industry. And then we're an advisory organization that builds actual insurance products on that as a standard, for homeowners, auto, business owners, things like that, so that the industry is better prepared to offer risk transfer mechanisms where there may be less experience in the marketplace. The challenges to our mission in providing those functions are some really recurring problems. For data owners, it's critical to keep that data private and in their control. As information seekers ask for more and more detailed data, more and more frequently, that privacy issue becomes increasingly important. And as a result, in order to get to the quality that's needed, we need to make sure that the purpose, and the people participating, are aligned around that purpose and the goals of that network data. In building OpenIDL, we hit a couple of key things in the development that you've probably heard reflected over the course of the last few days: requiring collaboration and empathy across stakeholders who typically don't work together, who definitely don't trust each other; and we're talking about regulators and insurance companies.
Then establishing a community, which is a network, but a network for a particular purpose, in this case regulatory reporting. Then establishing the standards, the rules that everybody agrees to work by, and a transparent governance mechanism that allows those rules to be changed over time. And of course, allowing all that to be delivered on an open technology platform, which we have with Hyperledger Fabric as a foundation. That allowed us to build OpenIDL with some really critical features for the stakeholders. Insurers get data privacy, and they see a solution to the trajectory of regulatory reporting and really a mechanism for an external data strategy across the industry. Regulators get an opportunity for much more reliable, timely, and enhanced information, so they can develop more responsive and more effective policy for their job around fairness, adequacy, and discrimination. And AAIS is the intermediary: we can do our job far better. We can stop moving and storing data, deliver information faster, and serve more of the industry in a way that'll give us the new experiences that we'd like; as I often say, so we can get to flying cars. I'll stop really quick and see if there's any questions. Give me one second and I'll check on that. It doesn't look like it yet, so I'll keep going until somebody interrupts me. Let me pop up the cover a little bit and talk about the architecture and data flows. As I mentioned, the network serves three major roles. The network governance role, in this initial case with AAIS as the statistical agent, enforces our current responsibilities today. The data owners, in this case the insurers, deploy data to their own peer, and that data remains private and secure. And again, it's important to understand that while the data is deployed to the peer, the data itself is not on the network, on the blockchain, or on the ledger.
Only the evidence of that data being present, and having gone through the requirements to be used for the purposes of regulatory reporting, goes on the ledger. Then there are the information seekers, in this case the state regulators, who prepare the data calls, the questions they'd like to ask of the network; those are the transparent, governed interactions with that data. When they issue those data calls, carriers can agree to them and permit those trusted interactions, and the information seekers, the state regulators, get assurance that the information they're receiving is based on the actual, real information that was present in the system, and that it maintains integrity. So let's look at what's happening on each of the nodes. Each node in the OpenIDL network is based on Kubernetes: a cluster providing a suite of tightly linked, trusted services that we call "OpenIDL in a box." First and foremost is the Hyperledger Fabric peer, including private data collections, which we leverage extensively; we'll talk about that. Then the application cluster, which provides the user interface that humans interact with to consent to data calls and to draft and issue them. Under the hood are the applications: the insurance data manager, the smart contracts that bring the data in and make it usable for regulatory reporting; the data call processor, which runs the extraction patterns that leverage the data for reporting; and the OpenIDL data call app, which is the framework that deploys those things, essentially the chaincode and the smart contracts. There's an additional cluster for ETL, because for regulatory reporting we're often talking hundreds of gigabytes of data on a monthly or quarterly basis. So we've got a robust ETL process that can vary depending on the needs of the particular node, and that exists in a trusted cluster.
Then we've got different applications that provide services that may be cloud-dependent, like certificate managers or application security protocols, as well as the actual data stores themselves; in this case we're using MongoDB for the carrier nodes. Digging under the hood a little more to see what technologies serve those different components: they are all designed to be replaceable depending on the deployment and the needs of that particular node. As mentioned, we use MongoDB for the harmonized data store, a key component holding data that is known to have met the community's guidelines for regulatory reporting. That could be a different data store in an analytics node; it might be Oracle or something like that. For the harmonized data store extraction logic, MapReduce could be simple SQL-style queries, or it could be syndicated logic that we deploy as chaincode or as trusted applications that are part of the network. Similarly for the data transformations: we started with NiFi, and we've also leveraged different implementations, for our reference implementation as well as for AAIS's own use, that allow us to scale to a much higher degree of service and throughput. Node.js serves the user interfaces, and of course Hyperledger Fabric, using IBM Blockchain Platform for some management efficiencies. But ultimately it's not the Fabric node itself that's the critical component; it's the key included features, especially Hyperledger's private data collections, that we built OpenIDL on. When those nodes are deployed across the network, it looks a bit like this. There are really three types of nodes out there. Starting at the bottom, we've got carrier nodes, the data owners. Each carrier deploys a node, or nodes, that contain their referential data and their own harmonized data stores.
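To make the map-reduce-style extraction logic concrete, here's a minimal sketch in Node.js. The record shape and field names are invented for illustration, not the actual OpenIDL harmonized data model; the point is that an extraction pattern boils high-resolution private records down to anonymized aggregates before anything leaves the node.

```javascript
// Hypothetical harmonized records on a carrier node (illustrative fields only).
const harmonizedRecords = [
  { state: 'ND', line: 'BOP', premium: 1200, covidClaims: 0 },
  { state: 'ND', line: 'BOP', premium: 900,  covidClaims: 1 },
  { state: 'CT', line: 'BOP', premium: 1500, covidClaims: 0 },
];

// "Map" step: emit only the de-identified fields the data call needs.
function mapRecord(rec) {
  return { key: rec.state, premium: rec.premium, covidClaims: rec.covidClaims };
}

// "Reduce" step: aggregate per key so no individual policy is identifiable.
function reduceByKey(mapped) {
  const out = {};
  for (const m of mapped) {
    const agg = out[m.key] || { policies: 0, premium: 0, covidClaims: 0 };
    agg.policies += 1;
    agg.premium += m.premium;
    agg.covidClaims += m.covidClaims;
    out[m.key] = agg;
  }
  return out;
}

const result = reduceByKey(harmonizedRecords.map(mapRecord));
console.log(result);
// e.g. result.ND aggregates both North Dakota policies into one summary row
```

In the real system this logic would run as an extraction pattern against the MongoDB harmonized store; the in-memory version above is just to show the shape of the computation.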
AAIS deploys a multi-carrier, or multi-tenant, node. This allows us to serve in our traditional capacity, reporting on behalf of companies who provide us data directly; they can participate through our trusted, multi-tenant node and still get all the benefits of the broader network in a much more facilitated way. Then there's the analytics node, which is the destination of the data retrieved from each of the data calls, out of each of the private data collections on the carrier nodes. As that data is output and visualized, it could be visualized in a workbook, a Tableau workbook for example, resident on that node, or the output data could be delivered through more traditional, what we might call legacy, mechanisms in today's insurance regulatory reporting world. How it generally works is a two-phase commit. The carriers load data to their nodes on a periodic basis, maybe daily or weekly; we've asked for at least monthly. That data is high resolution. It's got key data points in there, like address and primary keys, but it's completely private, completely within the carrier's control. On the other side of the field, regulators are creating data calls, questions they would like to ask the industry to help them build policy and understand what's happening in the market. They create those data calls, publish them over the OpenIDL network to carriers, and issue them for participation. As carriers agree to participate, they consent: not consensus in the blockchain sense, but agreement to participate. Those transparent interactions with their data are executed, and harmonized, anonymized, de-identified data is delivered to the analytics peer and ultimately to the regulator, with integrity transactions occurring at each step, ensuring that the resulting information provided to the regulator has provenance and traceability all the way back to the source data.
But the real key for carriers to participate is that carriers maintain control. Carrier-identified or address-level data, or really any data they don't choose to share, never leaves company control, never leaves their node. They have control over all of the ingestion, storage, and transformation operations, leveraging trusted code with integrity verified through the network, and all of it is transparent. So there are no surprises, and no opportunity for them to be identified on the back end and have their experience traced back home. Let's walk through a data call really quick and look at how it actually works in terms of how each participant plays their role. As I mentioned, carriers upload data into the local harmonized data store on their node, and at the same time regulators are creating and drafting data calls. Carriers can log into their user interface and "like" or "unlike" those data calls, indicating the likelihood that the industry would respond favorably should the data call come into play. Using online and offline mechanisms, regulators may incorporate that feedback and issue multiple drafts of a data call. When they're satisfied, they can issue the data call to the market. It's not quite ready to run yet, because the stat agent (in this case AAIS, but in the future it really could be any organization) still has to develop the extraction pattern: the code that interacts with and queries the data across those carriers, does the data transformation, and produces the input for the report. That extraction pattern, that code, is attached to the data call and logged within the ledger, so that there's no way for any code other than that to be run when the time comes. Carriers can then review the data call and see that extraction pattern. And if they're satisfied with the test runs, they see how their data is gonna respond, and they're willing to do it, they can consent to that data call.
At that point, the system executes the extraction pattern and sends the result to the analytics node using a private data collection. That information goes to the statistical agent, in this case AAIS, where we run additional quality checks and ultimately deliver the report to the state regulators in whatever mechanism they prescribe. Let's pop open the hood on the extraction pattern, because this is really where the rubber hits the road, and where, as I say, we're really starting to play with some sharp objects: we're dealing with trusted interactions with very sensitive data. For P&C insurance, it's things like address, maybe location information, as well as coverage information and claims experience; as this grows into other insurance mechanisms, it could easily be much more PII. So a carrier consents to the data call, and that is logged onto the ledger. The regulators are aware of an additional consent, but they don't know who actually performed the consent. The extraction pattern is retrieved from the ledger, to make sure nothing other than what was approved is executed. Then the extraction pattern is run against the harmonized data store, again on the carrier node, in the context and control of the carrier; they can have a big engine that runs as fast as they want, or they can run it at whatever their pace is. In addition, we have the opportunity at this point to not just query the data, but to run additional applications and decorate that data, relating it through those key fields to private data or even external data stores, like the Paycheck Protection Program. In our first example, we'll talk about how we related private data, a policy record, to a business loan application record through the address, and how that was really powerful.
Once that functionality is performed and the data is extracted, matched, and decorated appropriately against related data stores, it's placed in the private data collection and replicated to the analytics node. The data is ultimately delivered, in this case, to the statistical agent. On the analytics node, we do additional quality checks, and then it can feed a delivery mechanism: a reporting database, potentially a Tableau UI or report, or a data set output. So let's talk about the purpose today, the industry-wide applications, how we leverage this moving forward, and a couple of the applications themselves. The first one is the COVID-19 business interruption data call that we performed. Obviously regulators had a keen interest, as they were issuing orders to close down businesses, in what kind of interruption, what kind of impact that was gonna have on their small businesses: how were bills gonna get paid, how was payroll gonna be met? So they did a data call that gathered some high-level information about policies, premium, and COVID-19-related claims and losses. Well, the insurance industry had long ago recognized that pandemics and viruses and things like that are generally excluded from typical policies; they only happen once in a hundred years, and the costs would be disproportionate to the potential value of covering them, so those are typically excluded risks. So the initial data call that was issued indicated that there was very little coverage out there for pandemic risk in business interruption, and virtually no claims were paid as a result. We had nine states, including California, Texas, Maryland, Virginia, North Dakota, Connecticut, and New Jersey, do a proof of concept issuing the same data call using OpenIDL.
This allowed us to leverage data that was already available in the marketplace, not as a one-off query of the industry, but leveraging information that we captured from statistical data and then augmenting it, using the data privacy capabilities of OpenIDL, by adding some keys. We could now associate claims to policies, where we couldn't before: we had a bucket of policies and a bucket of claims, and they weren't associated, because of course that would indicate a particular policy's experience, very proprietary and enterprise information. We were also able to decorate the policies with the policy address. Again, something very private: historically we would have zip-code-level information, but now we can get it at the street level, and keep it fresh. Through those associations, we were able to connect the policy street address to the Paycheck Protection Program database and definitively answer the question of the gap: how many loans business owners took out to replace their payroll, even though they had business interruption insurance, because that insurance wouldn't have covered the risk. And what it shows is a dramatic gap between the PPP loans taken out and the premium paid to the industry for business interruption; there was just not enough premium being paid to remotely close that gap. But it gives us a mechanism, from a regulatory perspective, to see that gap and close it, so that good policies can be made encouraging good behavior. That's a really powerful one that just completed; we're finalizing the major report for it. But the whole reason this was built is for cases like that, and really for the day-to-day business of regulatory reporting. And what we created along the way is the regulatory reporting data model, which replaces a highly complex and variable system for regulatory reporting across the P&C industry.
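The address-level join described above can be sketched in a few lines. The data and field names here are invented for illustration; in OpenIDL the policy records stay on the carrier node and only the aggregated, de-identified result leaves it.

```javascript
// Hypothetical policy records (private to the carrier) and PPP loan records
// (external data set), joined on a normalized street address.
const policies = [
  { address: '12 MAIN ST, FARGO ND',   biPremium: 800 },  // BI = business interruption
  { address: '9 OAK AVE, HARTFORD CT', biPremium: 650 },
];

const pppLoans = [
  { address: '12 MAIN ST, FARGO ND', loanAmount: 42000 },
  { address: '77 ELM RD, AUSTIN TX', loanAmount: 15000 }, // no matching policy
];

// Join on address, then aggregate so no single policyholder is identifiable
// in the output: only counts and totals leave the node.
function premiumVsLoanGap(policies, loans) {
  const byAddress = new Map(policies.map(p => [p.address, p]));
  let matched = 0, totalPremium = 0, totalLoans = 0;
  for (const loan of loans) {
    const policy = byAddress.get(loan.address);
    if (policy) {
      matched += 1;
      totalPremium += policy.biPremium;
      totalLoans += loan.loanAmount;
    }
  }
  return { matched, totalPremium, totalLoans, gap: totalLoans - totalPremium };
}

console.log(premiumVsLoanGap(policies, pppLoans));
```

Real address matching would need normalization and fuzzy matching; the exact-string join here is only to show how the gap between PPP borrowing and business interruption premium can be quantified without exposing any individual policy.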
Carriers have to respond to 1,600 of these data calls, including the annual reporting they have to do just for doing that sort of business in a particular state, and more are being created every day. That hairball in the upper right-hand corner of the slide represents the data model covering all the different data required by all the different data calls that are issued, because each state issues its own; even though they're asking the same questions, they do it in different ways, necessitating that carriers respond very tactically. Our regulatory reporting data model flipped that around. We looked at all the data calls that were being issued, boiled down the data architecture, and created a model such that, if a carrier ingests the data for the regulatory reporting data model, which touches most of the content areas of an insurance business, they can now do regulatory reporting essentially at the click of a button. And that model is changed through mechanisms in the industry that are permitted today, keeping data private and allowing a much more streamlined flow of information, as well as products and benefits to reporting carriers in terms of benchmarking and collaborative products. There are some great opportunities there, and really the first big value proposition is operational efficiency, with some companies looking at reducing 900 or so processes and removing potentially 90 or so steps in those processes. We're able to put a very fine number on those, to see what sort of internal efficiencies can be gained when enterprises take advantage of OpenIDL. Then we start to get into the future cases: where are we going next? One of the big challenges we've had as an insurance industry is certainly flood and catastrophe, the changing risk environment, and the need to get far more precise in our management of those risks. We know where water is gonna go when it comes down.
For example, we can use those models while incorporating data providers, third-party aggregators, machine learning tools, and even automated data sources like IoT devices. Building a model for those gives regulators a mechanism to leverage tools, like catastrophe models, that they might not have been able to afford otherwise, and to start getting very specific value out of them: understanding what their risk looks like on the first day of hurricane season. Two of our initial members, KatRisk and RMS, two leading flood and catastrophe modelers in the industry, have proposed allowing regulators specific access to those models, so they understand the risk and can develop policies and mechanisms to collaborate with the industry to both mitigate and respond to that risk, to help those events become less impactful. That includes new building construction codes, for example, as well as infrastructure, to help mitigate and respond in a collaborative way. That's really important. And as those types of cases work through, regulators are already saying that they are data owners as well. They have rich experience that has been reported in the past. They have lots of information across their networks: licensing, whether it's the department of motor vehicles, transportation, street maintenance, as well as the permitting they do for everything from pilots to home daycares. All of those data stores need to be connected, and they're valuable back to the insurance industry, as we get to regulatory dashboards and more dynamic policies. That, again, will allow us to get to flying cars, where the interactions and the risk context need to be exchanged through trusted systems without sharing the raw data; one car might produce 40 terabytes of data over the course of a day.
Keeping that data in control, keeping it proprietary, executed in a trusted way for a purpose connected to the regulatory environment, is how we actually get some of these things done. So as we go beyond what we're doing today, we certainly have lots of other working groups going. Hopefully everybody can see how this fundamental ecosystem, this baseline privacy technology, can facilitate all the different applications we've seen leveraging Hyperledger today, and also how OpenIDL, as a fundamental technology and community, through the Linux Foundation's network-of-networks and governance-network strategy, allows us to develop immediate industry-wide applications. Within the insurance industry, that means supporting specific processes and new stakeholders like reinsurers and agent networks, as well as those data providers. It means enterprise deployments for dramatic efficiencies within organizations, as well as mechanisms to integrate their enterprise data strategy with the external data strategy, so they can make real technical investments for the enterprise and their own purposes. It means product verticals they can start to integrate across the industry, and particular policyholder and customer experiences that will be broadly impactful, which will require those regulatory inroads, pun intended, for regulators to be involved, allowing the broader impact that we know distributed ledger technologies can provide, including the things regulators need to be accountable for, like smart roads and street maintenance, and rules around when we can have autonomous vehicles and flying cars based on weather or other factors. Beyond that, OpenIDL is at a point where it's ready to start being introduced to other communities and networks, insurance-related or not.
I was gonna say that almost everything is related to insurance to one degree or another, but the underlying technology allows communities to establish themselves and set the boundaries of their purpose, and then OpenIDL allows them to interoperate for completely new experiences and value across these communities of untrusted stakeholders. So, getting started: join the Linux Foundation, and for most folks who are already there, join OpenIDL and the community, and start to get busy. We've got all kinds of resources out there, as I mentioned, on GitHub, on the wiki, and of course on the website. Please reach out to me, or anybody at AAIS or the Linux Foundation, to help you get started. So with that, I will stop sharing, and we can get to some questions. Did any questions come through? I'm trying to get my Q&A panel up here. No, just trying to give you some moral support as well. I had a couple of people in the audience; does anyone have any questions? I mean, I'll just say I first heard about this at the last Hyperledger Global Forum, which is where Truman and Joan and I met, and we started talking about ways the Linux Foundation could help with this project: help with open-sourcing it and with a bit of the governance model for it as well. And so we've been working with them to put this project together and roll it out, and it's live now. Oh, it looks like there is a question in the Q&A. Yes, go for it. Yes, the question is about regulatory compliance monitoring and how to get regulatory bodies on board with blockchain in the first place. It's actually a great question.
Regulators have actually been very on board with this, as they recognize the challenge they have and the fights they kind of pick with the industry when they're trying to get new information; the COVID-19 data call is a great example of that. As I said, we're starting to play with some sharp objects, getting near some sensitive areas, when we talk about things like paying claims on a broad basis, which is very touchy for insurance companies. And for compliance, that's actually our role as AAIS, and where this is going: to have objective rules for compliance, and to allow those rules to change, get faster, get more detailed. Regulators see the opportunity there. As we mentioned, they're actually kind of pushing the envelope in our industry, because they want to say yes; they just need a mechanism to do so, and what they see with OpenIDL is a way to build that. It may not be there yet for flying cars, but there's a pathway to get there, and that's what they see, so they can allow this activity. But it's a paradigm shift. And actually, while blockchain is critical to the underlying technology, as somebody said, we know it's going somewhere when it's being applied to very boring use cases. And that's what this is: a process that's been going on forever, but at the end of the day, it's how regulators allow new risks to be taken when there's a risk transfer mechanism. Again, they're not gonna allow flying cars, 2,000 pounds of metal flying around over a city, until they can permit that and they know that if a wheel falls off in the air, it's gonna be okay; there's a mechanism, and it's not just gonna get fought about in court, people are gonna get made whole quickly. And that's what we need to do for all these different things that are changing: cyber is a great example, securing our pipelines and food supply, maybe.
Maybe if I could add a little bit, Truman. As we look at the use of blockchain technology, there's the technology itself, which we have a lot of control over, and then there are the communities that we need to build with the technology — that's harder to do; you have to bring that community together. And then if you're in a regulated industry, like the insurance industry, you need to bring the regulators to the table as well. We have been collaborating with the regulators all along. In the example that Truman gave earlier around the COVID-19 data call, the regulators really do want to move in this direction, where they can begin expanding how we bring data together. How do we take that closely held private information that companies have — it could be a bank, it could be an insurance company — and link it to new data sources, to get to an understanding of what's going on in a market that we just can't reach with the way we've collected data in the past? So the regulators have been very active participants, and there was legislation passed in North Dakota about two months ago that allocated funding to continue looking at using OpenIDL to solve some serious problems in states, such as uninsured motorists — which cost good motorists a lot of money, because for those who aren't insured, it costs all of us — and it's a pressing issue for the regulators. So I think that's a really important point: we have to have the regulators there. The other part is that there are millions of dollars a year spent on all sorts of market conduct and financial auditing by the government, and OpenIDL is a platform that enables that to be automated. Instead of sending large blocks of data to a regulator, a simple dashboard would be all that's needed. So that's why, again, to echo what Truman said: come join the OpenIDL project, because it's more than just regulatory reporting.
There are many possibilities for extending this platform beyond regulatory reporting — and for doing regulatory reporting in all kinds of different ways, again, giving regulators a way to say yes and allow all these things we wanna do. Or keep them off the roads. Have the regulators had a heart attack when you used the word blockchain at them? Okay, good. No. Some do, but what really perks them up is what we're able to show them, because the industry protects itself from discrimination — so really sensitive things like race, income levels, and geographic disparity are kept at arm's length, which makes it very difficult for regulators to see whether those things are being handled correctly. That's the conundrum. Yeah — to clarify your question, Brian, in terms of the technology itself, I don't think they're concerned about it, and they're even less concerned when they understand it's not about cryptocurrency; it's a different type of blockchain technology. Exactly. But we've seen that even if they don't really understand the technology, they understand the potential to unlock information — not data, but information. They do see that. And we've had great partners: there were eight state commissioners that joined in and supported the COVID-19 data call. Nine. Oh, nine — almost 10. Yeah. For a state regulator to pay attention and say "we support this" — it is important to them. No matter what the technology, what we were trying to accomplish was what really mattered to them. Yeah, because we're gonna need them to change the laws — how they do regulatory reporting, the requirements they put on an industry, and how that industry responds is at least policy if not actual law. So it's not something that happens overnight, and it's kind of a yes-and thing: all the things we're doing now with OpenIDL that are really cool, we still have to do the old way too. Yeah, and that's a good point.
I think, again, for all of the groups participating in building these types of applications, there are many things we can imagine we would like to do, but in regulated industries we can only do them if the regulators understand what we're doing. For the things that are important to them — a fair, open, transparent marketplace — they need to come along with us, because we can do great things with technology and then get stopped short because they have no insight into it. A good example of that is catastrophe models, which have been in the insurance industry for a long time, but they're black boxes. And the states said: no, we cannot approve the use of these as just black boxes. We have to look under the covers and make sure we really understand what you're doing and that it's fair to the consumer. The same is true here with the exchange of information. Jennifer, feel free to share your audio and video and come on if you wanna ask other questions too. It's good to hear that you're working in the same space and that it resonates. It sounds like you're somewhere between water utility control and critical infrastructure cybersecurity — certainly a hot topic since the pipeline hacks and other things that have happened. So anyway, feel free to ask more questions as well. And I see another one from Iwana: could the models for climate change evolve soon? Where does this plug into risk models — the models for where weather events happen, both in the macro and the micro — and how does this get to things like parametric insurance?
Yeah, the first thing it'll do is enable parametric experiences for communication efforts. For example, it may not be paying claims or creating a whole bunch of first notices of loss — which would mean a whole bunch of claims that might get paid — but it can start to do things like: we know this policy is here, we know this weather is there, so we can send notifications — batten down the hatches, here's some sandbags, things like that. That'll happen parametrically, even if it's not actually underwriting, issuing insurance, or paying claims. Those will come as we build more trust in the network and more capability that the regulators will permit, on an increasingly broader basis, right? They'll allow different things for high-risk areas — houses on stilts on the coast — than they might allow in underserved urban areas, for example. I see Jennifer came online. Anything else you wanted to ask or comment on? Well, first, thank you for inviting me to be part of it. I'm kind of in a unique position where, as you can see behind me, I'm a water nerd. I'm just dipping my toe — for lack of better phrasing — down the rabbit hole of blockchain and the entire industry. I've been in wastewater treatment for about 11 years now, and over the course of that time I kept finding myself saying: there has to be a better way of doing this when it comes to the flow of information. So I eventually learned what blockchain was. I heard about the cryptocurrency part of it first, and I was like, okay, that's cool, but... And then I heard about the supply chain management aspect and I was like, oh, that has potential. So I started — my utility had some issues with our industrial users.
Every wastewater utility has stuff being sent to it — whether residential, everything that gets flushed down toilets or poured down a drain, or industrial, chemical manufacturing and all that — it all gets washed down and sent to us for treatment. Nine times out of ten, everything's fine. Every once in a while, at like three o'clock in the morning on a Sunday when no one is really here paying attention, we get something that just wipes us out — it's equivalent to a disaster situation. So in addition to all the physical work we have to do to prepare for that, there's also paperwork that has to follow, primarily billing paperwork between the utility and the industrial user. We have a hard-copy contract with them right now, where our attorneys battle it out and come up with something like: if you're under this threshold limit, you pay one fee; if you exceed the threshold limit, you pay another. Right now, nobody trusts each other. If something happens, nobody has any proof of it, and there's no way for anybody to get financial compensation — it's kind of a mess. So I started thinking: okay, I had recently learned about blockchain technology and smart contracts and their use cases in things like Walmart's shipping — what if we could apply something like that to this? That's how I got involved and started learning about it. And that eventually morphed: after the cyberattack in February at the water treatment plant down in Florida, and especially after the pipeline hack, cybersecurity in critical infrastructure has really come to the forefront of national headlines. And nobody likes making headlines for all the wrong reasons. Not that way, right? Exactly. So can we, again, apply that kind of supply chain management thinking to water, and by extension to basically all of critical infrastructure?
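The threshold contract described above — one fee under the limit, a surcharge above it — is exactly the kind of objective rule a smart contract could encode, so both parties share one tamper-evident record of what was discharged and what is owed. A minimal sketch in Go, with hypothetical names (`DischargeReading`, `assessFee`) and made-up fee numbers; this is an illustration of the rule only, not chaincode from any real OpenIDL or utility deployment:

```go
package main

import "fmt"

// DischargeReading is a hypothetical sensor record an industrial user
// and a utility might both agree to trust.
type DischargeReading struct {
	FacilityID string
	MgPerL     float64 // measured pollutant concentration
}

// assessFee applies the two-tier rule from the contract: a base fee
// when the reading is at or under the permitted threshold, plus a
// surcharge proportional to any exceedance above it.
func assessFee(r DischargeReading, thresholdMgPerL, baseFee, surchargePerMg float64) float64 {
	if r.MgPerL <= thresholdMgPerL {
		return baseFee
	}
	return baseFee + (r.MgPerL-thresholdMgPerL)*surchargePerMg
}

func main() {
	normal := DischargeReading{FacilityID: "plant-a", MgPerL: 40}
	spike := DischargeReading{FacilityID: "plant-a", MgPerL: 90}
	fmt.Println(assessFee(normal, 50, 100, 25)) // within threshold: base fee only
	fmt.Println(assessFee(spike, 50, 100, 25))  // 100 + (90-50)*25 surcharge
}
```

In a real deployment this function would run inside chaincode invoked by both parties, so neither side can later dispute which tier applied — the "proof" that the speaker notes nobody has today.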
I mean, the industry is moving towards digitizing all of that. Like you said, with all this paperwork floating around and all this calling for information — can we streamline that? Can we digitize it? It could be the future of truly smart and sustainable cities. Most people, around this point in my little spiel, think I'm trying to develop Skynet — and I apologize in advance if it leads to a post-apocalyptic sci-fi series. We can manage that too. See, that's the cool thing: we can manage AI using the same technology, one that provides for everybody — especially if we keep the human element at least somewhat involved, and if we do it smartly and through collaborative efforts like Hyperledger. Again, I'm still learning all of this as I go. Through things like this, I've taught myself enough for the overall architecture kind of stuff — I know what needs to go in place. But I am not a coder, so I'm trying to find developers. Shameless plug: if anybody's interested in trying to create not-Skynet, reach out to me. My LinkedIn profile is attached to my profile here. It's one of those things where this could work. We have to do it right, we have to do it smartly, but it could really work, and it could have so much potential benefit — for the same reasons it would benefit the insurance industry. Well, those public-private partnerships are really the critical piece, right? Water infrastructure, energy infrastructure, utilities — again, I mentioned street maintenance and things like that. We've got to have objective rules that say, again for autonomous vehicles: you can drive on it when it's dry; you can't drive on it when it's wet. You can drive on it if the trees have been trimmed in the last month; you can't drive on it if they haven't.
And it's never more important than with water infrastructure, energy infrastructure, and the food supply to have those collaborative mechanisms, because nobody wants — we used to talk about a water main break, and that seems so quaint now. Yeah. Now we're talking about gas lines and meat supply chains and, of course, water treatment, right? I mean, that's real time. It's one thing being out of power; it's another thing having bad water go down the pipe. Oh, sure. We work very closely with the National Fire Protection Association, the association for all the fire protection professionals in the country, and they have echoed the same pain points we were echoing around the sharing of information. You just described, in your case as well, needing to share information among a lot of folks who don't trust one another, but who have to work together — with a lot of old manual processes and barriers to getting good information just because of the way it's being done. In the case of the NFPA, they work with every fire department in the country, and those departments use a wide range of systems to collect information, which then has to be moved up to a national repository. With OpenIDL, they can continue to use those systems — because those don't turn over in a matter of months or even years very often — but they can put the data from the system they capture it in today into the harmonized data store for OpenIDL. Then it's available to be linked to federal data resources and the data the NFPA itself collects, and they can keep their data private, because, again, the fire stations don't want to share all their information.
So when I think about the problem you just described — each of the entities you're working with, from the businesses that may send something into your water treatment facility by accident, to those who have to clean it up, to those who have to test, monitor, or govern — each of you has your own systems, but you do have points where you want to connect that information together. What OpenIDL offers is a way for those companies that are going to be nervous about collaborating with you and others, maybe municipalities: go ahead and continue using the systems you've been using, continue to keep your data private, but find a way to answer the question using the mechanisms of OpenIDL. Within the network, only the existence of the data is registered — we're here, we have the data available if you need to query it — and then each entity agrees to answer a question using a smart contract, where the data itself is never stored on the network, only the ability to ask the question. So it's all about trust, right? You mentioned there's no trust between the parties. This enables a first step: go ahead, leave your data where it is. We know we don't trust one another, but if we do things this way, we can at least start to answer some questions together. And even then it can be a jumping-off point to future things, where we are able to share that information. It's a good first step. Great — well, come join us. Yeah, absolutely. Please do. Yep. Hey, I'm figuring out my way through Hyperledger at the same time I'm figuring out my way through blockchain in general. So yeah, definitely — anywhere we can help you, we're going the same direction. Yeah, definitely. Well, it looks like we are just over time, so I probably should wrap up with that. I really thank you, Truman, for presenting the OpenIDL vision. This will be recorded and published online in a little bit of time.
I don't know that it's put up there instantly, but it'll be up pretty soon, so the world will be able to see. And thank you all for coming. Any last thoughts or comments, Joan or Truman? I'll say thanks — feel free to reach out, anything we can do to help. I think we've got something that can plug into pretty much everything we've seen this week, so I think we can help. Cool. Okay, thank you all for attending. We'll see you all next year, hopefully in person. Yeah, right? Or sooner. Right, thanks everyone. At a meetup near you, right? Yeah.