So thank you to everybody who's dialing into today's virtual Hyperledger meetup on Private Data Objects: smart contracts for data access and more. We have two presenters from Intel, Mick Bowman and Prakash Morthy. As we were just saying, we want this to be an interactive discussion, so if you have questions or observations, please feel free to share them in the Zoom chat as we go along. And with that, let me hand it off to Mick and Prakash. All right. Just by way of introduction, I'm Mick Bowman, a senior principal engineer in Intel Labs. I've been there for, geez, a quarter of a century now, which is really kind of frightening. I was the original architect for Hyperledger Sawtooth and served as one of the Technical Steering Committee members for its first couple of years. We started the Private Data Objects work after we had transitioned and handed off responsibility for Sawtooth. The PDO Hyperledger Lab was, I think, the second of the Hyperledger Labs to be approved, so it's been around for quite a while. For the most part, it's been an environment where we can experiment with a number of new ideas, and historically we have farmed those ideas off into other projects. We'll talk a little bit more about the relationship between PDO and other projects. For example, Hyperledger Avalon was basically a fork of the PDO code, and there's an ongoing project in Hyperledger Fabric called Fabric Private Chaincode, which takes a number of the ideas and some of the code out of the PDO code base and combines it with work done by the IBM research team as well.
The situation we're in right now is that we've matured the platform to the point where we're really interested in seeing whether PDO itself, rather than being just a breeding ground for new ideas and concepts that we hand off to others, has become useful enough to start pushing as a potential incubation project. And from our side, what we've found is that much of our work has transitioned from the core platform to the applications, and what we most want to share with you today is some of what we've been seeing and building at the application level. So the overall agenda: we're going to start with the scope of the problem, what it means to do computation with confidential data. We'll run through a consistent example throughout, one of the applications we're building for access to confidential data and confidential trained models, and how we can create marketplaces for that information. We do have to go back and set at least a basic foundation on trusted execution environments. And one thing I will say up front: yes, we are aware of side channels; yes, we are aware of potential attacks on trusted execution environments; the technology has been designed to address those. I'd rather not address those problems during the main portion of the meeting, but we have plenty of time set aside at the end for Q&A, so let's leave those questions for later. Then we'll spend most of our time on the architecture of Private Data Objects and the application deep dive at the end. All right, so what does it mean to do computing with confidential data? Well, first off, there are many different kinds of confidential data.
They can come in the form of proprietary information that's owned by an organization; that information has value, and the organization wants to protect that value. Trade secrets are another example of the kind of thing we might consider confidential data. Traditionally, especially when we use the term privacy, we've also thought of personally identifiable information. There's a lot of information about you that is controlled, both in terms of what's acceptable to share according to law and regulatory controls, and in terms of what you want to share based on what it exposes about you. And then there are all kinds of things like regulated data as well. But there's one more thing in here which we don't often think about as confidential data, and that's system state. One of the things we have focused on a lot in the past is protocols for things like fair exchange or confidential auctions. We did a demonstration application for Fabric Private Chaincode of a wireless spectrum auction. What's interesting about that application is that the state of the system itself, and control over the state of the system, is a confidential piece of information that needs to be protected from visibility. When we start to deal with confidential information, the fear of exposure tends to be something that keeps us from sharing it. But one of the things we've learned in this work is that most organizations see value in that confidential data when it can be shared appropriately. And by appropriately, we mean that there's a set of policies for what is acceptable use of the data, and as long as we can ensure that those policies are met, then we're willing to share access to the information. It may be that we expect a certain differentially private interface to a database that contains PII in order to protect the individuals.
Or it may be that we have to make sure that the queries are sent to a particular database because it contains information that can't be migrated or exported. A concrete example that we're going to use throughout is policy-based access to machine learning models. When we look at how these ML models, or classification and inferencing services, are created, oftentimes there's a significant amount of proprietary data that's fed into the trained model, and that model does some interesting classification of objects. We want to be able to get value from the model we're exporting, but at the same time, as the creator, we don't want to expose all of the proprietary data that went into it, or the way in which the model was trained. So we want to protect the model; we want people to use it, but we want it protected. As the model user, there are a number of things we'd like as well. We want to be able to submit our queries to the inferencing service and know that the service that's hosting it is not looking at the queries we're submitting. That is, the request itself represents confidential information. So all of this comes into play when we start talking about how we share machine learning models that are based on proprietary data. And so we have a number of questions. How can we flexibly specify and then evaluate the appropriate-use policies we've talked about? How do we bind the policy to the data to ensure that enforcement takes place? And let me just point out that while we like to talk about data in terms of ownership, really the policies are about appropriate use. What we want to do is to assure use of the data, not possession of the data.
That may mean things like ensuring that a differential privacy policy is enforced on the data. We want to make sure it's extremely difficult to reverse engineer the model, so there's a certain threshold for the number of queries that can be applied to it. We want to expose information about the results, but not the particular details of how a result is ranked against other potential results — again, trying to prevent reverse engineering. From the model user's side of things, we've already talked about the fact that preserving the confidentiality of inferencing inputs is extremely important. But there's also a sort of bi-directional attestation that goes on here as well: the user wants to understand the provenance and the quality of the information. There's a big question coming up these days about asking LLMs questions — is the information that comes back covered by, for example, copyright? Being able to do that sort of bilateral attestation is extremely important here. So what we really want is a trustworthy inferencing service that can enforce the policies, and a trustworthy policy manager that allows us to express what acceptable use is. The approach we're taking with PDO for the implementation of this is to use a smart contract to encode and enforce the policies, ensuring confidentiality for the information that's being moved back and forth, and then to wrap the inferencing service with a guardian protocol that ensures the only way to access the data is through the policy. Prakash will give us a nice demonstration and a deep dive into this later. Alright, so that's the problem we're trying to solve here. As far as questions go, when I hand off to Prakash, feel free to raise anything you'd like.
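As a toy illustration of the query-threshold idea mentioned above, a contract might track a per-user query budget and refuse requests beyond it. All names here are illustrative, not part of the PDO API:

```python
# Hypothetical sketch of an appropriate-use policy: a per-user query
# budget, one of the thresholds described above for deterring model
# reverse engineering. Not the PDO API -- names are illustrative.
class QueryBudgetPolicy:
    def __init__(self, max_queries):
        self.max_queries = max_queries
        self.counts = {}

    def authorize(self, user_id):
        # Deny once the user has exhausted the budget; this makes
        # systematic model extraction via repeated queries expensive.
        used = self.counts.get(user_id, 0)
        if used >= self.max_queries:
            return False
        self.counts[user_id] = used + 1
        return True

policy = QueryBudgetPolicy(max_queries=3)
results = [policy.authorize("alice") for _ in range(5)]
# the first three requests pass; the remaining two are denied
```

In a real deployment this check would run inside the contract, in the enclave, so neither the host nor the user can tamper with the counter.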
I believe we can do a hand raise in Zoom, and we'll try to pick up your questions at that time as well. Alright, so trusted execution environments — what are they? Well, I'm sure we've all heard of how we protect data with encryption. We have encryption for data at rest — that's encryption for data being stored. We have encryption for data in motion, or in transit — for example, SSL and the protocols that allow us to move encrypted data around. Trusted execution environments add a third aspect to that, which is encryption of data in use. The reason this is important is that it provides a means for protecting data while it's being computed on, so that the host environment itself can be untrusted with respect to the data. The owner of a server or the owner of a virtual machine does not have the ability to look into a trusted execution environment and see the data that's passing back and forth. We can take data from encrypted storage, use an encrypted channel that terminates inside our trusted execution environment, and protect that data at all times. Confidentiality is the benefit most people look to trusted execution environments for. But there's another aspect as well, and that is the notion of integrity. What we get from TEEs — or from many of the TEEs on the market today — is the ability to prevent hardware attacks and malicious modification of the execution as well. What that means, in combination with hardware attestation, is that we can verify that a particular computation happened: I applied this function to this data and got this result. Because it's running in a trusted execution environment, I can prove that to somebody else.
For things like Hyperledger, what that means is we can take a counter, decrement its value inside the trusted execution environment, and then turn around and prove to somebody else that we decremented that counter correctly, without giving them access to the counter itself. There are a number of different vendors offering TEEs. Intel has SGX and TDX, AMD has SEV, the RISC-V community is building Keystone, ARM has TrustZone, and there are a couple of other efforts underway. Each of these has different characteristics in terms of the threat models it addresses, but all have the same basic properties, and any one of them would be appropriate for this. Most of our work historically has been on SGX, but the architecture we built in PDO should allow other trusted execution environments to be used. I'm going to go through this part pretty quickly, because it isn't really the focus of what we want to talk about today, but the benefits of trusted execution in general are things like scalability and efficiency, in addition to confidentiality. What we mean by that is that with traditional decentralized trust, as opposed to centralized trust, we replace single computations with multiple, redundant computations, and then we vote on the result. If you've looked at the architecture of something like Hyperledger Fabric, it's basically a set of parallel computations that create endorsements, and if we collect enough endorsements to match the policy, then we can turn around and commit. Each one of those endorsers has to have access to the information, we have to perform all the computations in parallel, and we're somewhat limited in what we can do.
What trusted execution environments allow us to do is replace that redundancy in the computation with a trusted piece of hardware that can provide integrity for the computation, so in theory we execute once. Now, it's not strictly once — and this is where I'm referring back to what I said earlier — we do allow for the possibility of attacks, and we provide some level of resiliency to those attacks with some level of redundancy, but it's a much reduced level. So what we get by going to the TEE is something that's private, efficient, and scalable. Private Data Objects, then, is an off-chain smart contract technology: we take the data, encrypt it, and send it through an encrypted channel into a contract execution environment based in a TEE. The contract is evaluated — the execution of particular methods on the contract is performed inside the trusted execution environment — and the resulting change to state is re-encrypted and sent back. Then, in order to verify it, the entire transaction is submitted to a ledger, which basically maintains the commit log for updates to those objects. So we execute the smart contract in the TEE, we create the transaction for the state change, and we commit that transaction to a blockchain, and that blockchain serves as the basic root of trust. This is how we do off-chain execution of smart contracts: we have separated the notion of correctness, which is determined by the trusted execution environment, from commitment, which is determined by the blockchain itself. The summary on this part is that with traditional smart contracts, all participants see all the data, and those who are participating in the endorsement policy must see the data necessary for computing the endorsement. If you're looking at Ethereum, you're looking at something like 10,000 servers per transaction.
If you're looking at a smaller environment like Fabric, you may see 10 to 15 servers per transaction. And in each of these cases, because the redundancy is the thing that becomes the bottleneck, more servers really result in more resiliency, not more transactions. With TEE-based smart contracts, only enclaves see the data, so we can hide it from all participants in the computation. We get one-ish server per transaction, so it becomes much more efficient. And when we add more servers — this is one of the unique characteristics we like about the PDO capabilities — we get the ability to perform more and more complex transactions. And just a reminder, like I said, all of this work has a long history with Hyperledger. Avalon basically started as a fork of the PDO work, and then there's the Fabric Private Chaincode effort. We have shipped code into a number of other projects as well, including some of the federated learning work and some applications we've done with Microsoft on improving their CCF. Okay, so quickly on the PDO architecture: there are four basic components to Private Data Objects. The contract provisioning service creates a key that can be shared across multiple execution environments using multi-party computation, so that it is protected from all participants — that is, even the owners of the contract do not get to see the encryption keys for the contract. There's a decentralized storage service that goes along with this, to provide storage for short periods of time. And then the two really interesting parts of this: the contract enclave services are the places where the smart contracts are executed inside one of these trusted execution environments, using an enclave hosting service. As for the contracts that are implemented — we'll come back to this in a couple of minutes — the interpreter that we have is a WASM bytecode interpreter.
We do have some limits on WASM — for example, in order to ensure determinism, no threading is allowed, and we don't allow I/O out of the contracts — but for all intents and purposes, it's a pretty full environment for WASM. We're able to run OpenSSL and a variety of JSON interpreters inside the contract, so anything that can compile down to WASM can be incorporated into your contracts. And then the final piece is the distributed ledger that we were talking about. This is where we have the commit log for updates to the different contract objects. For our implementation, we started with Sawtooth and ran PDO with Sawtooth for many years. But what we found was that we needed a notion of finality in order to implement some of the kinds of contracts we wanted, for protecting against rollback. Microsoft CCF, their confidential blockchain, provided a means both of protecting access patterns to the ledger and of getting a high rate of commits with finality, and so we've been using Microsoft CCF for the last three or four years. The ledger, beyond just being a commit log, gives us some basic graph-based semantics that allow us to do multi-contract transactions, and these turn out to be very interesting and challenging when we're doing things like fair exchange. And I guess the final thing is that, at some level, it is our identity registry. As for architectural principles — what guided us to the decisions we made about how to build PDO — first and foremost, any distributed system where we're trying to do multi-party compute has to have some set of roots of trust. These notions of zero-trust environments — zero trust is a design principle for reducing the number of roots of trust, which is great, but there's always a root of trust.
So for us, what we wanted for the authoritative state of the object is to go back to what we've been doing in this community for the last decade and use the ledger as the root of trust. If you want to know what the current state of an object is, you can always find out from the shared ledger. We've already talked a little bit about separating correctness and commit. We want to determine the correctness of a transaction using the trusted execution environments. And it's kind of nice, because we can do things like inferencing and other complex computations inside the contracts without limiting or blocking the execution of the overall ledger. So we can do very complex things in the contracts and the TEEs, determine the correctness of it, and then determine commitment separately. If a change has happened while we're performing an additional computation, the commit may not be accepted; or if we haven't performed all of the other transactional updates that are necessary, the commit may not be performed either. What that's done is allow the ledger, which is the typical bottleneck for performance, to be made extremely simple. Most of our transactions are the hash of the state that went in, the hash of the operation that was performed on it, the hash of the state that came out, and the signature of the trusted execution environment where the computation was performed. And simple transactions mean the ledger can go extremely fast. Like I said, I'm not going to talk too much about the techniques we're using to prevent attacks on TEEs, but I will say there's a much longer presentation that I do on this; if others have interest, we could use one of the development meetings to go through the details.
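The transaction shape just described — hashes in, hashes out, plus an enclave signature — can be sketched roughly as follows. The record layout and names are hypothetical, and an HMAC stands in for the enclave's hardware-backed signing key:

```python
import hashlib
import hmac
import json

def h(data: bytes) -> str:
    """SHA-256 digest as hex -- the only view of state the ledger gets."""
    return hashlib.sha256(data).hexdigest()

def make_transaction(old_state: bytes, operation: bytes,
                     new_state: bytes, enclave_key: bytes) -> dict:
    # Illustrative ledger record: the ledger never sees plaintext state
    # or operations, only fixed-size digests plus the enclave's signature
    # (HMAC here stands in for hardware-backed signing).
    record = {
        "state_in": h(old_state),
        "operation": h(operation),
        "state_out": h(new_state),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["enclave_signature"] = hmac.new(
        enclave_key, payload, hashlib.sha256).hexdigest()
    return record

tx = make_transaction(b"old-state", b'{"method": "decrement"}',
                      b"new-state", b"enclave-secret")
```

Because each record is just a few digests and a signature, verifying and appending it is cheap, which is what lets the ledger "go extremely fast" while the heavy computation happens off-chain in the enclave.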
But the principles we've learned for building trusted-execution-based applications are, first, no big targets: there's no single key that will give you access to the entire information in the ledger, so we keep the information that any given component holds to an absolute minimum. And then we do trust-and-verify. In comparison to something like the Fabric endorsement policy, where you collect a set of endorsements and then submit that set as proof of correctness for the final commit, what we've done is shift to an optimistic mode, where a single trusted execution environment can say: I looked at this and I found it to be reasonable and correct. But if other trusted execution environments perform the same computation and get a different result, then they can present that evidence for the purposes of revoking the transaction. The cost of a successful attack on TEEs is sufficiently high that we think we can go with an optimistic policy. One of the things I've already stated as a principle is that the state is always encrypted outside the TEE. One of the interesting effects this has is that the owner of the data is actually the contract — not the owner of the contract, but the contract itself — so the contract becomes the final arbiter of what can and can't be done with the state included in it. That's what's allowed us to do things like fair exchange, organizational votes, and confidential auctions: the state can't be manipulated, because no party has access to that information. One of our design principles for communication is that the most motivated party is the one in charge. This includes everything from invocation and provisioning to state replication for storage and how communication channels are established. And, as we've said, we have this notion of design for trustless.
So we designed the architecture in PDO to accommodate rigorous threat models. However, most of us, when we deploy our systems, have much weaker threat concerns than the worst case. So we allow for optimizations: for example, consistent access to an object may be cached on a particular enclave service in order to increase the cache hit ratio. We design for trustless — you can do the worst case — but if the community agrees, you can optimize performance in a lot of nice ways. Alright, so just an observation. We talk about smart contracts wrapping data, but in fact, in the implementation that we have, the smart contract is part of the data. This is one of the nice characteristics of WASM bytecode: we can just treat the application as data. So this entire package, including the contract itself, gets encrypted with the shared encryption key. The execution is basically taking that bundle we created, mapping it into our trusted execution environment, and letting the WASM interpreter decode the package, extract the smart contract, unpack the appropriate pieces of the data, and perform the execution. All of that happens inside the trusted execution environment. And the high-level flow should be pretty obvious at this point. The gist of it is that the contract user pushes the state to the enclave service, performs some invocation, and collects the results back; included in the results will be the new state, and the state hash gets pushed out to the ledger as part of committing the invocation. Okay, so that's a really high-level overview of the architecture. I'll pause here for just a moment and say that, as far as the implementation goes, we really have three different views of how people interact with and participate in PDO.
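The decrypt-execute-re-encrypt flow just described can be sketched as follows, using the counter-decrement example from earlier. The toy keystream cipher and all names are illustrative stand-ins; a real deployment uses proper authenticated encryption with keys provisioned to the enclave by the provisioning service:

```python
import hashlib
import json

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream for the sketch only; real PDO state encryption is
    # done with standard ciphers, not this.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

unseal = seal  # XOR with the same keystream is its own inverse

def enclave_invoke(sealed_state: bytes, method, key: bytes) -> bytes:
    # Everything in this function stands in for work done inside the TEE:
    # decrypt the state, apply the contract method, re-encrypt the result.
    # The host never sees the plaintext.
    state = json.loads(unseal(sealed_state, key))
    new_state = method(state)
    return seal(json.dumps(new_state).encode(), key)

key = b"contract-key"
sealed = seal(json.dumps({"counter": 5}).encode(), key)
sealed2 = enclave_invoke(sealed, lambda s: {"counter": s["counter"] - 1}, key)
```

Outside the enclave, only `sealed` and `sealed2` (plus their hashes, which go to the ledger) are ever visible.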
Role number one is hosting services — hosting the provisioning service, the storage service, the enclave service, or the ledger. We have containers for all of those. Production systems should be run on a trusted execution environment, and the containers are set up to simplify that. It's also relatively easy to run the trusted execution environment in simulation mode, so that you can do application development without a trusted execution environment; most of our application development is with TEEs running in simulation mode. Most of the time, when we've done production-level work, it's been done with the TEEs running in one of the confidential compute clouds, like the Azure confidential compute service. The second role is to develop contracts. We have a toolchain for building the applications that's based on the WASI toolkit and the WebAssembly Micro Runtime (WAMR) interpreter. This includes a set of common contract methods. The model for building contracts is a bad implementation of inheritance, in the sense that you can grab contract methods from other contracts and pull them into your implementation, which makes reuse easy. The semantics still need a little bit of work, but for the most part it's pretty straightforward to do contract implementation these days. The final role is those who want to use the contracts, and this is a client-only installation; no trusted execution environment is required at all. You're simply submitting invocations to existing services. Application packaging is still a work in progress. If you go to the PDO contracts repository, we have been working quite a bit on how to package contract interactions with Jupyter notebooks and make them as easy as possible to share.
As far as existing applications go, we have the exchange contract family — your traditional blue-marble, red-marble digital assets, with high-integrity issuance of the assets, including things like fair exchange. We have a collection of applications around digital assets; these allow us to do tokens for sharing confidential assets. Right now, the sample implementation generates images and allows you to show those images. And we now have a new inferencing application, which uses tokens similar to the digital assets, but it's really a transition from a sense of ownership to a sense of purchasing, or tokenizing, the use of the access — and that allows us to actually put the policies in. That's what Prakash is going to talk about in just a minute. In progress, we have some work on identity and on abstracting policy decisions for distributed trust, and we've done a fair amount of work on the software supply chain: being able to represent software artifacts with a private data object that provides provenance and some additional information about the artifacts themselves, and provides some additional protection for those artifacts. Both of those are works in progress. Okay, so I'm going to pause here for a second before I turn it over to Prakash. Are there any urgent questions? I'll go through the chat in just a minute. Prakash, I'll keep clicking until you get to the demo. Okay, that sounds good. So, thank you, everybody. I also work at Intel, as a scientist; I've been working on the Private Data Objects framework for about five years, along with Mick and the team. What I'm going to be talking about is the applications that can be run on top of Private Data Objects. Specifically, let's go back and revisit the same policy-based access to ML models that Mick talked about.
And so I'll give you an overview of the architecture of the application and some of the algorithms that go into it, and then we'll go into a quick demo of how the application runs with a set of deployed PDO services. This is the same slide that Mick showed you earlier, where we have a high-value machine learning model, and what you want to do as a model author is give a model user the ability to use the model, but under a set of policies that should be respected. Starting from the next slide, I'm going to show you what this PDO contract and the PDO data guardian look like. Can we go to the next slide, Mick? Okay, so what you see here — I've taken the two dark blue boxes from the last slide and rotated them. The PDO smart contract is on the left, and the inferencing service is on the right. Let's start with the PDO smart contract. The PDO smart contract is where the ownership is defined and verified, and also where the policy for how to use the model is defined and verified. By ownership, like Mick said, what we really mean is the ability to invoke operations on the model, subject to policy checks. That's essentially the PDO contract — we call it the token object. To the right, what you see is what we call the model guardian service, which is essentially what you get when you take the inferencing service and wrap it with a PDO data guardian. A key requirement here is that we should be able to use an off-the-shelf commercial inferencing service. It is not our intent to write replacements for PyTorch or OpenVINO or the like to do the actual inferencing. We should be able to take standard inferencing libraries, or even inferencing services, wrap them with the PDO guardian, and put together the whole system. In our demo and in our implementation, we use the OpenVINO model inferencing service.
For those of you who haven't seen OpenVINO: OpenVINO is an inferencing toolkit authored primarily by Intel. It's open source, and it supports various low-level frameworks such as PyTorch, Keras, TensorFlow, and so on. You should be able to download the OpenVINO software stack from Intel, bring your own favorite underlying low-level model into OpenVINO, and deploy an inferencing service — this is regardless of PDO. What we have done is author a very generic PDO guardian that forms the layer between the token object and the inferencing service on the right. The data guardian is fairly generic: there's only a single Python module in it that you need to adapt for the fact that the inferencing service is OpenVINO. If you replace it with, say, an Azure inferencing service, it's fairly straightforward to use the same PDO guardian. At a high level, what's happening is that the token object, after checking policy and ownership, transforms the user request into a capability package that gets securely transmitted from the token object to the model guardian service; the generic PDO guardian front end decrypts it, verifies it, and repackages it into an API call that gets sent over to the backend inferencing service, which processes it, and the result comes back to the caller. Essentially, the whole protocol here is built to enforce confidentiality of the request, including confidentiality of the actual processing using the asset itself. Note that the token object is a PDO contract, but we're not enforcing a specific architecture for how you build and deploy the model guardian service. It could be just a web service; there could be smart contracts there. We're not enforcing any of that, at least in this application. Next slide, please. Okay, so this is just a high-level overview of how the boxes you saw in the last slide are consumed by the users.
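The token-object-to-guardian handoff just described might look roughly like this. The capability format and all names are hypothetical, with an HMAC standing in for the encrypted, authenticated channel between the token object and the guardian:

```python
import hashlib
import hmac
import json

# Stand-in for the key shared between the token object and the guardian.
GUARDIAN_KEY = b"shared-guardian-key"

def token_object_issue_capability(request, owner, policy_ok):
    # The token object verifies ownership and policy before issuing a
    # capability for the guardian (hypothetical structure).
    if request["user"] != owner or not policy_ok(request):
        return None
    body = json.dumps({"op": "infer", "input": request["input"]},
                      sort_keys=True).encode()
    tag = hmac.new(GUARDIAN_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def guardian_handle(capability, backend):
    # The generic guardian front end verifies that the capability really
    # came from the token object, then repackages it for the backend
    # inferencing service.
    expected = hmac.new(GUARDIAN_KEY, capability["body"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, capability["tag"]):
        raise PermissionError("capability not issued by the token object")
    return backend(json.loads(capability["body"])["input"])

cap = token_object_issue_capability(
    {"user": "alice", "input": [1, 2, 3]},
    owner="alice", policy_ok=lambda r: True)
result = guardian_handle(cap, backend=lambda x: sum(x))  # toy "inferencing"
```

The point of the pattern is that the guardian only ever acts on requests the token object has already policy-checked, so the policy and the model stay bound together.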
Right, so there is a model owner who is going to specify the policy. There are four very quick high-level steps, right? So there's a model owner, which we called the model author in the last slide. This person is going to specify the policy in the token object and transfer ownership of the token object to the prospective model user. And recall that by ownership we always mean the ability to use the asset subject to policy; it is not the ability to download the raw model, right? Then the model user makes the inference request to the token object first, and the token object checks the policies. The way to think about it is that the token object acts as a delegate of the model user to do the actual inferencing computation with the model guardian that sits on the far right. And in step four, the result comes back, and there may be post-processing that gets carried out — for example, logging to capture model state or model historical information, to keep track of a model scorecard and things like that. All of that can be confidentially tracked via the token object if the policy allows it. And then the final result gets sent back to the model user. Right? So this is an abstract overview of how a PDO contract can be used alongside a commercial off-the-shelf inferencing service to put together a very realistic application that enables policy-based access to high-value machine learning models. Okay, so, Mick. And this slide essentially says that the architecture you saw in the last slide is not restricted to machine learning models as the asset. It's very flexible and very general. The asset could be any arbitrary digital asset. It could be data sets. It could be prompts that you use to interact with an LLM.
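The four steps above — author sets policy, transfers ownership, the token object checks owner and policy on each request, then delegates to the guardian — can be sketched as a small class. This is purely illustrative; the names and the in-memory state are assumptions, not the actual PDO contract interface.

```python
class TokenObject:
    """Hypothetical sketch of a token object acting as the user's
    delegate: it enforces ownership and a simple usage policy before
    forwarding work to the guardian."""

    def __init__(self, owner, max_inferences):
        self.owner = owner
        self.remaining = max_inferences   # usage policy set by the author

    def transfer(self, caller, new_owner):
        # Step 2: the author hands ownership to the prospective user
        if caller != self.owner:
            raise PermissionError("only the owner may invoke this method")
        self.owner = new_owner

    def infer(self, caller, image, guardian):
        # Steps 3-4: check ownership and policy, delegate the actual
        # computation to the model guardian, relay the result back
        if caller != self.owner:
            raise PermissionError("only the owner may invoke this method")
        if self.remaining <= 0:
            raise RuntimeError("policy: inference budget exhausted")
        self.remaining -= 1
        return guardian(image)

token = TokenObject("author", max_inferences=3)
token.transfer("author", "user1")
result = token.infer("user1", "zebra.jpg", lambda img: {"label": "zebra"})
print(result)  # {'label': 'zebra'}
```

Note that after the transfer, a call by the original author fails the ownership check — exactly the behavior shown later in the demo.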
So the algorithm that we've developed to bind the token object to an asset guardian service, including the architecture of the PDO guardian, is very flexible, in the sense that the underlying asset that you're tokenizing and giving policy-based access to could fall within a large family of asset classes. All of the code and algorithms for this architecture can be found under the exchange contract family of the private data objects contracts repository, whose link is what you see here. And the digital asset contract family is, like Mick said, an example of this core architecture for the use case where the asset guardian service itself is a PDO smart contract, where the PDO smart contract is managing a confidential image and the token owner gets the ability to perform some operations on the image. And the inferencing contract family, which is what I'm going to be doing a demo of, is an example where the asset guardian service is not a PDO smart contract; it's a web service with a front end and a back end, where you have a machine learning model and you get access to do inferencing on it, right? And one key thing, which I'll quickly flash later: we think of this architecture as something we can apply as a foundational layer for deploying NFTs, non-fungible tokens, for policy-based rights to usage of high-value assets. So typically, when you think of NFTs in today's world, you have an image-like artifact, you store it in some off-chain store, you mint a token on Ethereum or Solana, and hand it off to somebody else. And the NFT owner gets unlimited access to the raw bytes of the underlying image, right? So what we can do with PDOs and this specific architecture is to generate NFTs where... Prakash, did we lose you? Mick, next slide? Yeah. Okay. We lost you there for a minute. Oh, really? So where did you lose me? Yeah, we're good. Just keep going.
Okay. So before I go into the demo, what I thought I'd do is give you a quick overview of the token guardian protocol. This is basically the algorithm that forms the core of the implementation for this application; it's what is implemented in the exchange contract family, right? And there are basically four steps to the protocol. I'll show you the high-level steps here and then give a quick demo of the execution of all four steps. So let's say that I'm the asset owner and I want to deploy my asset with an asset guardian service and mint a token using the token object. We just talked about the token object smart contract, but there's also another smart contract that we use in the algorithm, which we call the token issuer. Basically, think of the token issuer as the root of trust for any number of token objects that can be minted for the same asset, right? So the first step, as part of the initialization, is for the asset owner to deploy a token issuer smart contract, upload the asset to an asset guardian service, and establish a bidirectional attestation between the token issuer smart contract and the asset guardian service, thereby helping these two boxes set up a secure channel between themselves. So like Mick said, note that it's a smart contract: it cannot make direct RPC calls to the asset guardian service. So whenever you see an outgoing call from a smart contract to another box, those calls are always mediated via the actual user; the user is just not shown here, for the sake of simplicity. So that's the first step: to ensure a one-to-one correspondence between a token issuer and the asset guardian service. So you do that. Mick, next. The next step is where you actually deploy the token object itself. You could deploy any number of token objects for the same asset; there is no restriction.
It's like saying that I'm going to license the same asset to more than one person, right? So as the owner, I deploy my token object. Then, because the root of trust is the token issuer, I get an attestation package of the token object submitted to the token issuer. The token issuer verifies the attestation package of the token object and deems it appropriate. Then the token issuer tells the asset guardian service: there is a new token object for the asset. So basically what we want to do is generate a set of capability keys that the asset guardian service can use to securely communicate with the token object, right? So what we want to establish in the third and fourth steps — I think there is a slight typo on the slide; the labels should be three and four instead of three and three — is for the asset guardian service to generate a pair of encryption keys whose public key gets sent to the token object. And the token object is going to use that key to encrypt any capability packages it generates for the asset guardian service. So thereby there is a one-to-one correspondence, a secure channel, between the token object and the asset guardian service itself. Okay, so let's keep going. Now that's all done, the next point is: okay, I'm the asset owner, but I want to transfer the usage ownership to a new owner. How do I do that? The first step is, of course, that I go to the token object. As the asset owner, I make an invocation to the token object and say: can you hand over the ownership of the token object to a prospective new owner? I do that. Then the new owner has the ability to request a reset of the capability keys from the system, right?
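The issuer-as-root-of-trust step — verify the token object's attestation, then introduce it to the guardian so capability keys can be minted — might be sketched like this. Everything here is illustrative: the class names, the code-hash check standing in for full attestation verification, and the direct call between issuer and guardian (which, as noted above, is really mediated by the user).

```python
import secrets

class Guardian:
    """Asset guardian side: holds one capability key per token object."""
    def __init__(self):
        self.capability_keys = {}

    def register_token(self, token_id):
        # Steps 3-4: mint fresh keys for a token the issuer vouched for
        key = secrets.token_bytes(32)
        self.capability_keys[token_id] = key
        return key

class TokenIssuer:
    """Root of trust: verifies a token object's attestation package and
    then introduces it to the guardian. The simple hash comparison here
    stands in for real attestation verification."""
    def __init__(self, guardian, expected_code_hash):
        self.guardian = guardian
        self.expected_code_hash = expected_code_hash

    def approve(self, token_id, code_hash):
        if code_hash != self.expected_code_hash:
            raise ValueError("attestation verification failed")
        return self.guardian.register_token(token_id)

guardian = Guardian()
issuer = TokenIssuer(guardian, expected_code_hash="abc123")
key = issuer.approve("token-1", "abc123")  # token object now has a channel
```

A token object presenting the wrong attestation never gets registered, so the guardian will only ever accept capabilities from tokens the issuer approved.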
So what the new owner is accomplishing there is to ensure that no past owner can come and use the system after the ownership has been changed. This is one of the ways by which we enforce revocation of past capabilities that were approved but not yet processed, right? So the system has been built to address revocation, which is essentially done by the new owner asking the system to reset the capability keys. And we're doing this primarily because the new owner can in turn transfer the ownership to a third person, which is not shown here. Because it's a token — think of NFTs: you sell the NFT to person A, person A can sell the NFT to person B, and so on. Even here there's a chain like that, and in that chain we want to make sure that once you sell your token, the old token owner does not have any more rights to use the asset. And that's what's being addressed here. Okay, so that's done. And finally, the fourth step is where you actually get to use the asset. This is exactly what we saw in the first overview slide: if you're the current owner of the token, you make your asset use request directly to the token object PDO contract. The contract will check that you are indeed the user, check the policies, generate the encrypted capability package, and send it over to the asset guardian service, which decrypts and processes the request, does any post-processing, and gives the result back to the current owner, right? One important thing to note here: from the picture it might appear as though every request gets routed via the token object, even data-intensive invocations — for example, a batch inferencing operation. But there is no need to pass the actual inferencing data via the token object.
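The revocation-by-key-reset idea described above can be sketched in a few lines: the guardian keeps one capability key per token object, and resetting it invalidates any capability package issued under the old key. This is a hypothetical sketch, not the actual PDO guardian interface.

```python
import secrets

class GuardianKeys:
    """Sketch: per-token capability keys at the guardian. Resetting the
    key for a token revokes all capabilities made under the old key,
    which is how a new owner locks out the previous owner."""
    def __init__(self):
        self._keys = {}

    def issue(self, token_id):
        self._keys[token_id] = secrets.token_bytes(32)
        return self._keys[token_id]

    def reset(self, token_id):
        # Requested (via the user) by the new owner after a transfer
        self._keys[token_id] = secrets.token_bytes(32)
        return self._keys[token_id]

    def accepts(self, token_id, key):
        return self._keys[token_id] == key

keys = GuardianKeys()
old_key = keys.issue("token-1")   # capability key known to the seller
keys.reset("token-1")             # new owner resets after the transfer
print(keys.accepts("token-1", old_key))  # False: stale capability rejected
```

Any capability the previous owner had approved but not yet exercised now fails verification, which is exactly the revocation property being described.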
It's only the control data, or some metadata such as the hashes of the batch, that needs to be passed via the token object. But again, that's policy: the policy can entirely determine which aspects of the actual inferencing data need to be approved by the token object. We have a storage service sitting alongside the asset guardian service to which you can directly upload the actual inferencing data, so that we can do fast inferencing; it's just not shown here. Okay, so that is this slide. And what you're seeing here is what I hinted at earlier: the token guardian protocol can be used as a foundational layer to implement PDO-based NFTs for high-value confidential assets. The two boxes that you see at the bottom layer are what we saw until now. So basically, instead of just using a layer-one blockchain, as is typically done, to mint and transfer NFTs, use a combined layer-one/layer-two architecture, where the NFT smart contract, instead of being implemented on a layer-one chain, is implemented on a layer-two chain. In this case, we are proposing private data objects as the layer two. You might continue to use layer one for payments; layer one may not even be a blockchain, it could just be a payment network, right? And there is some sort of receipt from layer one that gets absorbed by layer two, and once the receipt is checked by the token object, the actual change of ownership happens in layer two. And we can use the token guardian protocol to process the requests made against the NFT. And of course, to make all of this work, we need marketplaces to support NFTs that get minted this way. So we are working with our academic partners from Arizona State University to see how to adapt the Ocean marketplace to add support for NFTs minted on private data objects.
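The batch-metadata idea — only hashes travel through the token object, while the raw data goes straight to the guardian's storage service — can be sketched as follows. The helper names are hypothetical; the point is just that the guardian can check the directly-uploaded batch against the metadata the token object approved.

```python
import hashlib

def batch_digest(images):
    """Compute per-item and whole-batch digests. Only this small
    structure needs to pass through the token object; the raw image
    bytes are uploaded directly to the guardian-side storage service."""
    hashes = [hashlib.sha256(img).hexdigest() for img in images]
    root = hashlib.sha256("".join(hashes).encode()).hexdigest()
    return {"count": len(images), "items": hashes, "root": root}

def guardian_check(uploaded, approved):
    """Guardian side: verify that the batch uploaded out-of-band matches
    the metadata the token object approved."""
    return batch_digest(uploaded)["root"] == approved["root"]

batch = [b"image-bytes-0", b"image-bytes-1"]
approved = batch_digest(batch)          # this goes via the token object
print(guardian_check(batch, approved))  # True
print(guardian_check([b"tampered"], approved))  # False
```

This keeps the token object on the control path without putting it on the data path, which is what makes fast batch inferencing possible.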
So, okay, at this point I'm ready to give you a quick overview of the demo. But before that, I'll just pause and ask if there are questions on the slides I've shared so far. And Prakash, I'm going to stop sharing so that you can share your screen for the demo. Yeah, sure. Okay. Do you see my screen? Do you see my Linux terminal? Yes. We do not see the Linux terminal. I see. No, I think you're sharing the PowerPoint, not your screen. Oh, I see. There you go. Now we got it. Okay. Now that you see the Linux terminal, that's good. So what do I want to show? I want to show you how the end users — the model author and the model user — would interact against a set of deployed PDO services. And the demo is going to be based on a bunch of Jupyter notebooks that we have authored, just for demo purposes, that could be used by the users of the system. So I already have a bunch of services running; I'll show you what I have running. So what is this number three that shows up here? I don't know if you see it. So basically, PDO CCF is our ledger that is running; it's deployed as a Docker container. And all the PDO services — the eServices and the pServices — are deployed for the demo in one Docker container. And for the asset guardian service, this is the OpenVINO model server, also running as a Docker container. This is basically the stock OpenVINO image that you download from the OpenVINO repository, and the asset author has configured it with a specific model for which the author wants to create a token and sell it, right? Okay, and I also have the PDO guardian. So basically, all the services that we want for this demo are up and running. So let me start the notebook, and let me see, let me just copy this. Hey, do you see my notebook now?
I just want to make sure that I'm sharing the right notebook. Yeah, token family factory? Yes, token family factory, right? Okay, good. So I'm now going to act as the model author, and I'm going to deploy the token issuer. The PDO services, the ledger, and the asset guardian are already deployed. I did not show you how to deploy them; we have scripts for all of that that you can use to deploy them as Docker containers. It's fairly straightforward, and it's all there in the PDO contracts repo. If there is interest, I can show you that later, but let's see how we interact with these services now, right? So the first step: as a user, I need to set up the environment in which I can interact with the whole system. This is basically to initialize the Jupyter setup itself. Okay, so I'm going to input some of the parameters that are required for me to generate the token issuer contract as well as interact with the whole system. I'm going to give my own identity; I'm going to call myself user one. Token class: what kind of tokens am I going to mint? Let's say a model for image classification. Service host: this is asking where my services are running. Well, I need my IP address for that. Okay, so this is my IP address, and I do all of that. So this is where the PDO services are running. This could be a bunch of services; as of now, for the demo, everything — all the Docker containers — is running on a single node, but there is no reason to assume that's the only way to deploy it, okay? So basically, this is a factory notebook in Jupyter that allows me to make the token issuer notebook, right? This is the notebook that I'm going to use to make the token issuer smart contract. Okay, I'm running it.
So the first step here: some of the steps that you see in all the notebooks are going to be exactly the same. Basically, you have to initialize the notebook, you have to initialize the PDO environment as a user, and you also have to initialize the application context, right? So regardless of who you are, and at what point you interact with the system, you have to do these initialization steps for every notebook that you run. So here we are initializing the Jupyter notebook. And this next step initializes the PDO environment: basically, I need to give the location of my services — which eService, pService, and storage service I'm going to use to deploy my contract. That's essentially what this step does. I've already done it, so I don't want to recreate that file. And once I do all of that, I'm going to initialize the Jupyter environment against the specific set of services. So this PDO create-service-group step captures the infrastructure details, and the contract context is basically the application context. The application is not just a single contract; usually it's a bunch of contracts, right? So you need the context file to describe all the contracts that go into the application and the interdependencies between those contracts. Every user of the application must have a copy of the contract context file. So here I made the context file in model image class context.toml. Okay, so I did all of that. Up until this point, what I did was initialization of the whole system as a user. And now is the time at which I'm going to deploy an instance of the token issuer contract that you saw in the initialization phase of the token guardian protocol, right?
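For illustration only, a contract context file of the kind just described — one file listing all the contracts in the application and their interdependencies — might look roughly like the sketch below. The section and field names here are hypothetical; the actual schema is defined in the PDO contracts repository.

```toml
# Hypothetical sketch of an application context file (not the real
# PDO schema). One application spans several contracts plus a guardian.

[token_issuer]
contract = "token_issuer"
service_group = "default"        # which eService/pService group to use

[token_object.token_1]
contract = "token_object"
issuer = "token_issuer"          # dependency on the issuer contract

[guardian]
url = "http://localhost:7900"    # asset guardian service endpoint
```

Every user of the application works from a copy of such a file, so that each notebook can resolve which deployed contract instance an operation should target.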
I do that, and the initialization phase, like we discussed, will establish a one-to-one correspondence with the guardian that's already deployed; the two will communicate, mediated via this user, to establish a secure channel between themselves. I'm not showing all the details here. So at the end of the day, I have a token issuer contract that's been created. Now that's done, the next step is to mint the token objects themselves. For now, I'm going to generate only a single token object for my asset. Note that there is a count here; I could change it to mint as many tokens as I want, but for now I'll just mint one. And it says that I managed to mint one token for the asset. Now I need to interact with the token, so I'm going to make a notebook that allows me as the owner, or even a new model user, to interact with the token itself. So let me make this... sorry, it should be false, yeah. Okay. So this is a new notebook that allows me to interact with the token smart contract. Recall that I have not transferred ownership up until this point: I initialized the token issuer, and I also minted the token. Now I'm going to interact with the token. So let's again initialize a new notebook, and again initialize the same PDO environment and the contract context. Like I said, these are steps that you need to do every time you interact with a specific contract via a notebook for that contract, right? Now, okay, I've done all the initialization, and this is where I'm going to actually operate on the contract. Note that I have not handed over the ownership to anybody else yet. So let's interact with the contract. The asset here is an image classification model, right?
And the operation that the token allows me to do is an inferencing operation against the model for an image that I provide. It only tells me the final class of the image; it doesn't tell me the soft scores. What the actual model does is compute the probability that the image belongs to each of, say, 100 different types of object. But I don't get the probability spectrum; I just get the most probable type, right? So thereby you're able to hide a lot of the information that the model would otherwise leak if you had raw access to it. And that's all controlled by policy. Okay. So I have to give an image, and its details, that I want to send to this contract for use against the inferencing service. So let me see where I have an image. Okay, I have an image that I've got from Wikipedia; it's an image of a zebra. And I need to give the path as well. So let me see. Yeah, okay, so this is where the image is located. I do that, and I'm going to execute the inferencing operation against this image. And what does the token return? I'm making this request to the token, and the token behind the scenes communicates with the inferencing service and tells me: hey, that's an image of a zebra. I can actually show you what I sent: this is the image that went to the inferencing service, and it says that, hey, this is a zebra. It could be anything, right? Okay, so I did that. Now let's go further. Of course, so far I invoked the operation as the original owner of the asset itself. Now let's give ownership to somebody else. So let's go ahead and transfer the ownership to a new user; let's call it user two. Of course, I should have the public keys of user two available with me; they should be available in a specified folder on the system.
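The output-hiding policy just described — the model computes a full probability spectrum, but the token releases only the most probable label — is easy to sketch. The function and policy names here are illustrative, not the actual contract code.

```python
def apply_output_policy(scores, policy="top1-label-only"):
    """Sketch of the post-processing policy described: the raw model
    emits a probability per class, but under this policy the token only
    releases the single most probable label, hiding the soft scores."""
    if policy == "top1-label-only":
        return {"label": max(scores, key=scores.get)}
    return dict(scores)  # a more permissive policy could release everything

# Raw guardian-side output (never shown to the token user):
raw = {"zebra": 0.91, "horse": 0.06, "okapi": 0.03}
print(apply_output_policy(raw))  # {'label': 'zebra'}
```

Withholding the soft scores matters because the full probability vector leaks far more about the model's decision boundary than a single label does, which is part of what makes raw model access valuable.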
I'm not showing those details. So I have it here; that's why I'm able to enter user two here, and that's why it works. And I'm going to go ahead and transfer the ownership of the token to user two. That means that from this point on, only user two can make inferencing operations against the token, or against the asset. Right, so let's test the ownership. We're still user one, so I am the actual owner of the underlying asset, but I'm not the owner of the token object. So if I make an inferencing operation now, it should fail. I'm going to do the same operation that I did above with my zebra image, and it fails, saying that only the owner may invoke this method. This error is actually thrown by the token object smart contract. It doesn't say who the owner is; it doesn't give any more information other than: hey, you're not the owner, you don't get to do anything with the asset. Now, ideally, user two is not on this system; the user sits on a remote system, their own system, and they should be using their own notebook to make a call to the same token object. But for the purposes of this demo, I'm going to continue to use the same notebook on the same system and act as though I am user two, and make a call to the same token object. So here, this is the same inferencing call, but here I explicitly say who the caller is. So I'm going to explicitly say: hey, I am user two, and I'm going to use the same image and path. That's totally unrealistic in the real world; like I said, I would have my own node, my own notebook, I would have initialized everything myself, and I would have made the inference call with my own image. For the purpose of this demo, assume that it's the same system. I make this call, and now, look, I'm able to get the result. That also works. And like I said earlier, think of tokens as similar to NFTs.
That means if I am the current owner, I can go ahead and sell the token to somebody else. So if I'm user two now, I can transfer the ownership of the token to somebody else — say, user three. So why would you want to do that? Basically, maybe what's happening is this: there is user one, who has an asset, and what he or she wants to sell is the ability to do 1000 batch inferencing operations, where each batch is of size 100. And that's what user one sells to user two, for a certain amount, let's say $1000. Then user two, after he or she has used up 500 batch inferencing operations, realizes: hey, I'm done with this asset, I don't need it anymore, but I still have 500 batch operations that I paid for that remain unused, and it would be nice to have the ability to resell them at some cost. So user two can go to the same NFT marketplace and resell the asset at a different cost, assuming that the policy allows it. The policy can again be used to dictate how a resale is authorized. So that's what we are doing here. And now it says that the ownership has been transferred to the third user. And now, if you go back and execute the inferencing operation from the perspective of user two, it should fail. So I'm going back and executing cell number 12; if you see that, it again fails, saying the same thing: I'm not the owner, and it fails. So basically, that's what I wanted to show you; that should give a high-level overview of how you interact with the contract. And we have other APIs that you as a user can use to download the metadata of all of the contracts. So, yeah, this is information about the contract itself: the contract ID, the contract author ID, information about the contract code itself, and information about the actual enclaves where the contract is now deployed.
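The resale scenario just walked through — a token sold with a 1000-batch budget, half consumed, the remainder resold — can be sketched as a small class. The names and the in-memory state are illustrative assumptions, not the actual token object implementation.

```python
class BatchInferenceToken:
    """Sketch: a token carrying a batch-inference budget. The remaining
    budget travels with the token on resale, and a policy flag controls
    whether resale is authorized at all."""

    def __init__(self, owner, batches, resalable=True):
        self.owner = owner
        self.remaining = batches
        self.resalable = resalable

    def run_batch(self, caller):
        if caller != self.owner:
            raise PermissionError("only the owner may invoke this method")
        if self.remaining == 0:
            raise RuntimeError("batch budget exhausted")
        self.remaining -= 1

    def transfer(self, caller, new_owner):
        if caller != self.owner or not self.resalable:
            raise PermissionError("transfer not authorized by policy")
        self.owner = new_owner

token = BatchInferenceToken("user1", batches=1000)
token.transfer("user1", "user2")   # initial sale
for _ in range(500):
    token.run_batch("user2")       # user2 consumes half the budget
token.transfer("user2", "user3")   # resale of the unused remainder
print(token.remaining)  # 500 batches left, now usable only by user3
```

After the resale, a `run_batch` call by user two fails the ownership check, mirroring the failing cell shown in the demo.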
So there are three enclaves where the contract has been deployed, some information about the application policy that we have set for this contract, and some details about where the replicas are. So this is a bunch of information about the infrastructure where the contract is running. And then you can also quickly get details about the application context itself. This gives information about all the various contracts that go into this application: there is an asset guardian, there's a token issuer, and the token object. So, details about where each is deployed, what code you're using in this notebook to interact with the guardian, and, if it's a contract, where the local contract save file is, and things like that. So there's a whole bunch of features that you can use to interact with contracts. Are there questions about what I showed you here? I know that I haven't gone into a whole lot of detail about how you would write a contract, or how you would write a client for the contract, and all of that. But those are things we could go into in more detail, probably during one of the developer meetings, if there is interest. Hello. Yeah. Yeah. The tokens which we are referring to here — the ownership of the token, and the data with which those tokens have been created — when a token is created, it is handed over to the user who created it, is that right? Yes. So you are the original... I'll go back to the slide itself. So basically, here, you are the asset owner or the model owner, the person who is going to mint the token. Right. Yes. And what you get is a contract ID; you get an ID for the token. Right. And then there is an out-of-band mechanism, probably enabled by an NFT marketplace, which we have not shown here, whereby the model user purchases that token. That's something you can enable with this architecture. Right.
So the token author goes to the NFT marketplace and puts it up for auction, somebody comes and purchases it, and there would be calls from the NFT marketplace made directly to the token object to initiate the transfer of ownership from the original author to the prospective new user. Right. So when the ownership transfer has been done... Yes. Then the token name by itself is not being changed? No. No. You don't change anything; you don't change the token name or anything. Inside the token there is a field for who the owner is, and that alone changes. Okay. Thank you. Any more questions? One more. When these tokens are created for PII data, as in Mick's presentation — assuming the tokens are being created to represent PII data — in that case, do the tokens also work in the same fashion as you've shown? Yes. Yes. So, yes. Sorry. Yeah, let me just step in for this one. So the policies that we can implement on the tokens — I mean, it's code; they can be as complex as we want them to be. What we imagine in particular for PII, and there are others who have been working on ways to do this, would be, for example, that the token contains a budget for a differentially private interface that's implemented by the guardian. And so, you know, we want to be able to export information — we've got a differentially private database — but we need to make sure that we limit the remaining budget in order to ensure that a repetition attack is not going to allow information to be exposed. So the policies that we implement inside the tokens are a reflection of the characteristics of the data that's being exported by the data guardian and what constitutes acceptable use of it. A database that's exporting PII may be very different from one that's exporting image classification services. Exactly. Okay. So thank you so much.
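The differential-privacy budget policy Mick describes — the token tracks a privacy budget and refuses queries once it is spent, so repeated queries cannot accumulate information past the limit — can be sketched as follows. The interface is hypothetical; a real implementation would live in the token contract and guardian, and would also add the calibrated noise itself.

```python
class DPBudgetPolicy:
    """Sketch of a token policy holding a differential-privacy budget.
    Each approved query spends some epsilon; once the budget is gone,
    further queries are refused, bounding what repetition can leak."""

    def __init__(self, epsilon_budget):
        self.budget = epsilon_budget

    def approve(self, epsilon_cost):
        if epsilon_cost > self.budget:
            raise RuntimeError("differential-privacy budget exhausted")
        self.budget -= epsilon_cost

policy = DPBudgetPolicy(epsilon_budget=1.0)
policy.approve(0.4)   # first query: budget drops to 0.6
policy.approve(0.4)   # second query: budget drops to 0.2
# a third 0.4-epsilon query would exceed the remaining budget and be refused
```

The budget accounting is the policy's whole job here: the guardian adds the noise, but it is the token that guarantees the total privacy loss stays bounded across the token's lifetime.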
So I think I'm ready to hand the screen sharing back to Mick, if that's okay with you, Mick. Yes, sir. We'll just go ahead and wrap up. Yeah. So I just stopped sharing. Mick, you might want to share. Yep, I got it. Okay. So we've gone through quite a bit; I just want to summarize a little. So, you know, PDO is a project that's been around, like I said earlier, for five or six years at this point. It's been a really nice test bed for ideas for us, and a way to influence other projects in Hyperledger. But we've gotten to the point where our focus has really shifted from the infrastructure to the applications, and the applications are really starting to grow in interesting ways. We still need to continue to mature the core platform, and there are a bunch of issues on the PDO repo that point to some of the things that need to be done. What we really wanted to do today was to invite others to start participating. We think the project has matured enough, and is capable of enough, that we're open to a transition to an incubation project. But we know we're going to have to build a much stronger community around it. So we've been talking about things like developing a pilot for a confidential data marketplace that would be based on some of the work that Prakash talked about today. But we're also looking for other kinds of applications that could leverage this confidential smart contract infrastructure. I'll just leave this up here and open the floor for questions. Thanks to David for setting up the Discord channel and the development meeting for us. We have two repos, one for the infrastructure and one for the applications. All right, any questions? Yes, I have a question. This is Byron.
My question was about the reusability you were going over — multiple contracts being deployed to handle potentially many transactions. Does that happen by, like, read-only inputs being able to look at another transaction, like you were speaking of using methods from — is that like a read-only feature? So there are a couple of aspects that go into this. Performance-wise, an individual contract becomes the bottleneck. So let's say that we have an issuer that's managing all of the currency — it's a red-marble-based currency, or blue-marble, or whatever it was in the first Fabric examples. So, a blue-marble-based currency. If all transactions have to go through that, then the contract itself becomes a bottleneck, because it serializes all of the updates. So good contract practice breaks contracts into smaller and smaller units, each one of which becomes responsible for a smaller portion of the ledger or the exchanges. And what we found is that rather than having the issuer be the ultimate authority on current balances of ownership of blue marbles, we allow wallet-to-wallet exchanges, for some of the prototyping work that we've done. And that allows us to optimize the parallelism across the contracts. The ledger itself, the CCF ledger, we've tested with thousands of transactions per second, so the ledger is no longer the bottleneck; it's really the serialization of the updates to an individual contract. Now, that being said, as you point out, one of the nice things is that we can do quite a bit by differentiating between transactions that modify state and transactions that simply read state. And for things that simply read state, the performance can be significantly better, and all the parallelism that we need can happen. A typical contract is going to be provisioned — Prakash was talking about those notions of service groups.
A contract can be provisioned over a set of enclave services, and it doesn't really care which one you actually run it on. You may get better performance by having all of the executions from different users happen on a single one of those services. That was my comment about how you can trade off security for performance based on what you understand about your application and its value. So like I said, the serialization of an individual contract becomes the bottleneck, but we can allow parallelism for read-only executions. There's more behind it: you can actually do PDO-in-PDO, and you can have contracts that absorb updates and batch them in order to send them up to the ledger in batches. Again, there are trade-offs that go along with the performance improvements. So Byron, did that answer your question? It did. That makes sense, especially when you mentioned the wallet infrastructure, because that's basically someone signing what they want to be done, but the logic is being handled separately from any wallet that's inputted, I guess. The logic is handling it. That makes sense, yes. Yeah, and that's exactly it. I mean, that's the principle behind what we started with PDO: we wanted to break the computation into smaller and smaller pieces and spread those pieces out as broadly as we could. And by smaller pieces, I mean smaller transactional pieces; the work behind a transaction may be significant, but the transaction itself is small. So that's the gist of it. Yep. If you're going to eat an elephant, they say, do it one bite at a time, right? That's kind of the idea. Yep. The guardians that get the tokens, are those CCF smart contracts? So are the guardians smart contract implementations on Microsoft CCF, or? So the guardians, in their implementation right now, are not. If we want to extend the confidentiality out, okay, let me step back for a second.
In the digital asset contracts, yes, the assets and the guardian are implemented in contracts. For the inferencing service that Prakash talked about today, right now it is not implemented as a smart contract. And what that means is we're challenged on the verification of the bidirectional attestation. As you point out, there might be some interesting things we can do if we hook more directly into CCF for that information. So what are the guardians, then, in the machine learning context? How are they implemented? The guardian is really just a wrapper around an existing inferencing service. Right. So we take OpenVINO, and you can see this in the instructions for the inference contract family on the repo, but you're basically running OpenVINO, and the guardian is a relatively thin wrapper around the outside of the inferencing service. Okay. Got it. Thanks. I have a couple of questions here. One is that your team seems to be open to inviting other third parties to join in the pilot program. If you can share more details about how one can participate in the program, that would be appreciated. And the second question is, for the PDO infrastructure to be hosted on a particular TEE, is there a particular specification for the processor environment or the CPU that is required? So the answer to the first question, regarding the pilot: that's very much up in the air, and we want the community to help define it. So we're really looking for people to join us in the developer meeting, and the main objective of that meeting is to figure out how to set up some communal test networks and build the applications that are necessary for it. So that's number one. With regards to the second question. Okay. Can you ask the second one again? Sure. What's the baseline for the attestation? Yeah, for the TEE environment. Yes. So most of the vendors of trusted execution today have an ability to do hardware-based attestation.
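As a rough sketch of the kind of check being described, the following Python is a hypothetical stand-in, not the real SGX quote format or PDO's verifier: real quotes carry a hardware-rooted signature verified through the platform vendor's attestation infrastructure, which is stood in for here with an HMAC under a shared verification key. The shape of the check is the same, though: is the report authentic, and does the measurement of the loaded software match the build the registry expects?

```python
import hashlib
import hmac

# Hedged sketch only: quote format, key handling, and names are invented
# for illustration. A real SGX quote is signed by the hardware and checked
# through the vendor's attestation service, not an HMAC.

EXPECTED_MRENCLAVE = hashlib.sha256(b"contract-enclave-build-v1").hexdigest()

def make_quote(enclave_code, verification_key):
    """Enclave side: bind a hash of the loaded software into the report."""
    mrenclave = hashlib.sha256(enclave_code).hexdigest()
    sig = hmac.new(verification_key, mrenclave.encode(),
                   hashlib.sha256).hexdigest()
    return {"mrenclave": mrenclave, "signature": sig}

def verify_quote(quote, verification_key):
    """Registry side: is the report authentic, and is the software hash
    (the MRENCLAVE) the one we expect?"""
    expected_sig = hmac.new(verification_key, quote["mrenclave"].encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, quote["signature"]):
        return False  # report is not authentic
    return quote["mrenclave"] == EXPECTED_MRENCLAVE  # right software?
```

In PDO terms, the registry portion of the CCF ledger plays the role of `verify_quote`: an enclave service whose measurement doesn't match an approved build never gets registered.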
So for SGX, the manufacturing process and the current state of the fuses can be sent back to a service that will verify the authenticity of the hardware. It doesn't give you any details about the individual processor; there's no processor identifier number in there. But it can verify the authenticity of the hardware. So it basically says: you have the most current version of the firmware loaded, and you're running a legitimate Intel processor with SGX version 2 on it. And then, in addition, locally, what we call the MRENCLAVE, which is the hash of the actual software running inside that enclave, is bound to that attestation. And then, when we start an enclave service, that enclave service has to prove its legitimacy to the CCF ledger. So that's the other part of the ledger, the registry part, and it uses those attestations as a means of establishing its legitimacy. I see. Okay. By the way, SEV has a similar kind of protocol. Keystone, I haven't followed closely enough to know what their attestation protocol is, but I know both SEV and SGX will do it. Sounds good. Thank you. Any other questions? I have one, but I want to make sure I'm not cutting anybody else off, so I'll wait. Why don't you go ahead, Byron? Earlier, you were talking about the machine learning models being used in the process of the smart contracts, or what gets taken out of it after the smart contracts. The statistics are verifying pictures, like the zebra that Prakash showed. It looked like the front half of it was a zebra, but the back half didn't have the stripes on there. So how would the statistics of machine learning, because machine learning is really just statistics to me, how would it vote on the percentage that it was assigned? Yes.
Whether this is the correct animal versus not, based on that picture that was used; the whole point of that kind of confuses me. Yeah, I'm not going to go into the idiosyncrasies of machine learning and the training. Exactly. We grabbed that image. It confused me when he said, there we go. In fact, that's the whole point of the policy object, right? The policy object can be used to determine how much information the model tells about itself to the caller. So you can give very fine-grained information, like, what are you asking? It looks like a zebra at the top but not at the bottom, something like that. Or you can give only coarse information, like, it's a zebra, and not say anything else. Yeah. And to be clear, Prakash is talking about how the model itself can give you information about its confidence in the decision that it made. And the problem with exposing that information is that it makes it easier for somebody to turn around, reverse engineer the model, and create one of their own off of it. So if I don't really care whether you reverse engineer it, then I may give you the confidence information. But if I care, then I may basically say, well, I may give you the wrong answer, and I'm not going to give you all the information that I have, but I know the information I give you, you're not going to be able to use to rip off my model. Yeah. Any other questions? I think David's put in the information about the developer calls, the calendar invite for that. I'll email it out to everybody, too. Okay. And I think we've got the Discord channels in as well. Yeah. Everybody, please join us. This is just one opportunity to discuss this, but there will certainly be others. So if there are no additional questions, then I'll simply say thank you very much for giving us an opportunity to talk to you. If you have any questions, I will do my best on Discord.
This is like my fifth or sixth different developer channel tool right now. But I'm always on email, and you can always send me a note or something. Great. Well, thanks, everyone, for dialing in. Thank you, David, for hosting. Of course. Yeah. Talk to you later. Thanks, all.