David Pitt, one of our principals, is also here involved. And I think he'll just kind of kick things off, and we'll let Mick take it from there. Sounds good. Thanks, Matt. Yeah, hi, everyone. This is Dave Pitt, Keyhole Software. We started up the meetups, I think, during the pandemic. We dialed that back a little bit, but we're going to start initiating those again. And we play around with Hyperledger and do some stuff with it. I personally have some experience with it, but we're excited to hear about Private Data Objects and Intel. I think some of this stuff is tied to on-chip private processing, too, for proof-of-work stuff. Am I getting ahead of myself on this? Or is this in that area that I've read about? Yeah, we'll come back to that. This work doesn't have much to do with some of the consensus work we did originally. I just remember reading something on that and it piqued my interest. So I'm looking forward to hearing what you have to say, and I appreciate you giving us your time and knowledge. And I'll pass it on over to you. Great. Thanks, Matt. So just a little background on this. Private Data Objects is a Hyperledger Labs project. I think we were either the first or the second, one of the very early approved Hyperledger Labs projects back in the day when we were just getting started. This work is very much research. It's very much a Labs-ish project because it gives us an awful lot of opportunity to experiment with new ideas. But the technology here has been transferred into several full projects in Hyperledger. Hyperledger Avalon, the original code base for Avalon, was a fork of the PDO code. And we've just been working with the Fabric team to incorporate a thing called Fabric Private Chaincode, which is another thing that was based on a lot of the ideas out of here. And it was a collaboration among several of us to bring the same capabilities to Fabric. 
And just as background, I was the original architect for Hyperledger Sawtooth. As Dave mentioned, I built the original TEE-based consensus algorithm called Proof of Elapsed Time, which is a very efficient decentralized consensus algorithm based on trusted execution environments. OK, so all of that's background. So what I want to talk about today is a thing called Private Data Objects. Like I said, it's a Hyperledger Labs project. And at its core it's really about how we can do community-based or decentralized computing on data that has confidentiality and privacy requirements. So let me step back for a second and go through the motivation for this. What is it that prevents data sharing right now? Obviously, there are some issues with access, that data is not always available to us. We can't get at it. There is risk that's associated with it. And that risk is inappropriate use or misuse or monetization of the data in ways that are inconsistent with the intent of the data's creators or owners. And then there's this ongoing sense of loss of control. Our data sharing interfaces and APIs tend to be fairly low level. If I give you something, you can do whatever you want with it, even if it's not what I intended for you. Take cars as an example: they're generating huge amounts of data about how you drive. And that creates this kind of high-dimensionality, highly detailed profile of you as a driver. Well, it'd be nice to be able to use that in order to get personalized insurance quotes. And we've all seen the little dongles that we stick in our cars, and I don't know which one, it's not Geico, but it was one of the insurance companies that talks about good-driver quotes and insurance for that. The problem in general, with insurance for these short-lived interactions, like when we have a number of rental cars and other things, is that we can't really use that profile in order to get our insurance quotes. 
So what we'd like to be able to do is use a profile that we generate, and be able to get a quote that's representative of what we are as a driver. But at the same time, we really don't want the very personal information about us, the places we've been and the things we have done that could be inferred from that profile, to be leaked out to the insurance company. They have no business knowing it. They have a reason to know how good a driver we are, but not where we've been. So how can we do those kinds of computations? And it brings up a bunch of questions, the kind that come up with all our data sharing issues: where do you store the profile? Who gets access to the profile? And then there's a bunch of these sort of integrity things like, well, if I give the insurance company my profile, why would they even believe that it's an accurate representation? I may be lying just so that I can get a better rate. And sort of the converse is that if I give my profile to the insurance company, why do I believe that they'll delete it when they're done, or if I choose a different insurance company? And then there are a bunch of other things that are related to this. Like, OK, so the profile that I create is this useful thing for the insurance quote. But maybe there are other ways that I can monetize it in the end. As we look at the emergence of NFTs, that question of monetizing data becomes a very, very interesting question for us. So what we would really like to be able to do is to have this sort of trusted party that we could give our data to, and the insurance company could give their analytics algorithms to. And everything takes place there. And the trusted party, we believe, will not leak any information. It verifies the integrity of everything. And it generates this insurance quote. So the ideal is that we have this trusted party that we can create that has those properties. 
And it turns out that this sort of notion of a trusted third party ends up in a lot of different situations. So we did some work with IBM on alternative ways of doing blockchain-based sealed-bid auctions for wireless spectrum. And it turns out to have some of the same characteristics: the government kind of wants to get out of the business of managing all the sealed bids. So how can they create this trusted third party that could actually manage the entire auction and ensure that the auction is fair, ensure that the correct bidder wins, but never expose the information to any of the parties who are participating? And we're going through the Hyperledger election right now. This has actually come up several times, some of the problems related to organizational elections. And you'll notice I'm keeping public elections out of this, because those have a whole bunch of other problems associated with them. But these kinds of organizational elections, like electing a Hyperledger technical steering committee, which we're going through right now. Again, what we would really like to be able to do is to remove the election staff from that trusted computing base and have some kind of third party which can enforce the rules, sees all the data, and which everyone trusts to generate the right results. And so we go back and look at this trusted party. The properties that we would really like are things like, well, we want high-integrity data. We want to know that the data is trustworthy. And a lot of times that has to do with things like provenance. We want to be able to prove or verify that the data that's coming in has the appropriate history. And we talk a lot about things like privacy and confidentiality. The lab I work in is the Security and Privacy Research Lab. But a lot of times we really don't understand what that means. And in fact, privacy and confidentiality are really set in context. 
And so really what we want is not necessarily privacy and confidentiality, with their sort of broad spectrum of definitions. What we really want is for the data to be used in the way that we intend for it to be used. We want to be able to say, it's okay if you use my profile for generating an insurance quote; it's not okay if you use my profile in order to figure out where I've been and send me advertisements for those places. So we want all of these properties, and we want to ensure all of these properties even when there's no institutionally trusted third party. In a lot of these computations, like the sealed-bid auction, there is no one that you would trust, because anyone who has the information has the ability to manipulate the auction. And there are a bunch of these other properties as well which we're not gonna talk about today, like commensuration, which is how we can make data monetizable. And then things like scarcity have to be a part of it as well. We can come back and talk about NFTs another time. By the way, if there are questions in the chat, because of the way the setup is right now I do not have access to the chat. So please just break in and ask, all right? So, anyway, those are the kinds of problems that we're trying to solve: can we create that trusted third party using technology, in particular combinations of smart contracts and blockchain and other things, in order to create that trusted third party that allows us to do these multi-party confidential applications? So, as David mentioned at the very beginning, there's this special kind of hardware that's really been the core of our work. It first showed up in the Sawtooth consensus algorithms, the Proof of Elapsed Time consensus algorithm that Sawtooth started with. 
It has shown up in a number of other blockchain-related projects, from some of the work done at Cornell on off-chain layer-two contracts to some work on secure, high-integrity oracles that are moving data out of the real world into blockchains. So these trusted execution environments are being used in a lot of different places. And basically what a trusted execution environment is, is a way for us to execute code on data so that even the owner of the server can't look at the computation and can't look at the results of the computation. We talk about encryption being used to protect data in transit, and encryption being used to protect data at rest. Well, this is really encryption being used to protect data in execution, data in use. So we can create this little enclave, and inside that enclave we can store keys that nobody who owns the machine can see into. We can add data in there that can be processed inside this enclave, and the owner of the machine doesn't have the ability to actually look at it. The second really useful property of these trusted execution environments, the first is sort of the big one everyone talks about, the confidentiality part of it, but the second part is integrity. And what that means in this case is that I can perform this computation and, assuming the computation is sufficiently deterministic, I can prove to you as a third party that I did the computation and that I did it in the way that you were expecting me to do it. And I can prove that to you without giving you all of the details of the computation itself. And that's kind of a useful thing when you're working with these decentralized blockchain systems: I can perform a computation, I can do a transaction on state, I can take some old state and perform a transaction to produce new state, and I can prove to you that I did the right thing with that transaction. 
This is kind of the basis for some of R3's work with Corda as well, being able to prove this. So there are a bunch of advantages in performance and efficiency: we only have to do this computation once, rather than like Ethereum, where we have to do every smart contract transaction on every single client that's participating in validation. But it also means that the smart contract execution doesn't have to be on chain. It could be someplace else. And then we use the blockchain just as a means of recording the results. And so that's really what Private Data Objects is: basically taking a chunk of data, encrypting that data inside one of these trusted execution environments, and binding a smart contract with it so that the only way that you can access and update that data is through the operations that are exported by the smart contract. You can trust the results because the trusted execution environment basically signs off on the transactions, and you get all the confidentiality that you need. So I can put confidential data into it, I can perform some computation on it, I can get some results back out of it, and at all times that data is protected from access. So really what it comes down to is, Private Data Objects is smart contracts for confidential data access. As I said before, we encrypt the data. It's not sufficient just to have the data; you have to have the key. The key is placed inside the enclave in a way so that nobody gets access to the key. We wrap the data with a smart contract. This allows us to formalize the access and update policies. So that contract represents the agreement between us about what is an acceptable use for that data. Remember, when we share data right now we tend to share it at a very low-level interface, and we lose control. 
And this ability to wrap the data with a smart contract is basically the contract between us about what constitutes acceptable use for that data. We actually keep the smart contract in the TEE, which gives us the integrity and the scalability in the execution. And by the way, since it's off chain, it means we're really not limited in the kinds of things that we can do. We can perform very complex computations inside these smart contracts. We have some examples that are doing inferencing on images in order to be able to do object detection. So we're able to do some fairly complex things in these contracts as well. And then we use the blockchain as the root of trust. So the blockchain serves a couple of purposes for us. One, it becomes a registry of all of the objects and all the components and infrastructure pieces, so that we can verify the identity and the integrity of the components that are being used. The second is that it becomes kind of a transactional log of state changes. And what a state-change transaction on the blockchain is for us is basically a hash of the data that was going into the operation, some information about the operation that was performed, and the hash of the data that came out. So there's no real data about the PDO that goes onto the chain; just the hashes are in the transaction. And then there's a signature by the TEE that performed that computation. So it means that we're moving all of the smarts off the blockchain, so the blockchain itself can be very, very efficient and very fast. It doesn't have to do a lot of semantic processing. It just has to basically say, yes, this is a valid transaction from a state that we know about to a new state. It gives us an authoritative representation of what the object is. And I will say, PDO, we started with our implementation based on Sawtooth. 
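The transaction shape described above, hash of the input state, a description of the operation, hash of the output state, all signed by the TEE, can be sketched roughly as follows. This is a toy illustration, not the actual PDO wire format: the field names are invented, and an HMAC stands in for the enclave's real ECDSA signature.

```python
import hashlib
import hmac
import json

def state_hash(state_bytes):
    # The ledger only ever sees hashes of the (encrypted) PDO state.
    return hashlib.sha256(state_bytes).hexdigest()

def build_transaction(old_state, new_state, operation, enclave_key):
    # Hypothetical shape of a PDO state-change transaction:
    # hash in, operation info, hash out, plus a TEE signature.
    txn = {
        "old_state_hash": state_hash(old_state),
        "operation": operation,
        "new_state_hash": state_hash(new_state),
    }
    payload = json.dumps(txn, sort_keys=True).encode()
    # Stand-in for the enclave's ECDSA signature (HMAC for the sketch).
    txn["enclave_signature"] = hmac.new(enclave_key, payload,
                                        hashlib.sha256).hexdigest()
    return txn

def verify_transaction(txn, enclave_key):
    # The ledger's job is just this check: does a registered enclave
    # vouch for this old-state -> new-state transition?
    body = {k: v for k, v in txn.items() if k != "enclave_signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(enclave_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, txn["enclave_signature"])
```

Because the ledger only validates hashes and a signature, it never needs to see the object's contents, which is what keeps the chain fast and the data confidential.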
We did some experimental work backing it with Ethereum. And right now we're currently using the Microsoft CCF ledger as the core for most of our work on PDO. And the reason why we're using CCF is that it gives us 10,000 transactions per second for the commits. And so we can do some fairly complex things without being slowed down by the ledger itself. Okay, so I'm gonna stop here for just a second to see if there are any questions. We're going kind of fast. Good, it's very interesting, thanks. Hey, Mick, there was one question in the chat. How are private data objects related to private data collections in Hyperledger Fabric? So they are attempts to solve similar kinds of problems, but there are several different assumptions. Private data objects basically assume that, unless you are specifically given permission, no one has the right to see the information that's collected. With private data collections, there's essentially an access control list for everyone who gets access to and shares that data. And any chaincode that needs to be able to compute on that private data collection has to have access to the PDC. So really what a PDC is, is a collection of data that is off chain and allows computation on chain without exposing the specific details of that computation to anyone other than the parties that are explicitly allowed to access it. So private data collections don't so much prevent anyone from computing on the data; they basically say, here's the list of people that are allowed to, and we'll allow them to perform the computation on it. But, for example, any peer that needs to run chaincode on that data has to have visibility into that information. And the peers as they currently exist are not really protected. 
So the Fabric Private Chaincode piece that we're working on extends the notion of private data collections to something closer to what the PDOs provide you right now, which is full confidentiality even from the peer that's performing the computing. Any other questions? All right, then we'll keep going. And again, like I said, feel free to interrupt if there's anything that you'd like to talk about. All right, so we can go back to this example that we started with, the driver profile. And again, I apologize, my camera's on one side and my screen's on the other side. It bugs me to no end when I have to look one direction or the other on these Zoom meetings. So I will just apologize for that. So the idea behind the driver profile, remember, is that we wanted to create that trusted third party that enforced the rules of computing the contract. And so what we'd like to be able to do is to move all of that information into a private data object that can actually do the computation. So we create a PDO for the driver profile, which gives us a way of verifying the integrity of the profile. We create a PDO for the insurance quote analysis. And then we provide a means to bind those two together so that the insurance contract can perform operations on the driver profile data without ever exposing that information back to the insurance company. Since the evaluation of my profile happens inside the PDO, at no point is my profile exposed. In theory, I can examine that insurance contract and see that it never leaks any information, so that I can establish my own trust in it, that I'm willing to give my profile to it. And the insurance company, through the evaluation contract, can go back and look at the driver profile and verify that the driver profile does not provide hooks so that the driver can cheat by rewriting their history. 
That is, a driver can't go in and remove an accident from their history just so that their profile would look good. So we get this sort of bidirectional integrity check, and then all of the computation happens inside these objects. So we can get a high-quality quote knowing full well that the information that we're providing is not misused. It's used only in the way that we have authorized its use. So in that way, the PDOs create that trusted third party that we were looking for. All right, so architecturally there are a bunch of different moving parts here. We have to be able to basically create and provision the contract objects onto a set of these trusted execution environments. There's a decentralized storage service that allows us to actually share the current state in a way that others can pick it up. So we don't have any persistent storage, but what we've been able to do is to show that we can guarantee availability of state updates for a certain period of time, and that allows anyone who's participating in the contract to guarantee that they can get at the most recent authoritative copy of the state. So this is how we manage storage in an off-chain system. The enclave hosting service is really where we perform the execution of the operations on the private data objects. And so this is a collection of servers running trusted execution environments that we actually move the private data objects into. There is an interpreter for the contract language. The original version of PDO supported Scheme, which was really the simplest language with the smallest TCB that we could implement. But surprisingly, no one wanted to write Lisp-like programs for contracts. And so we've recently replaced that with the WebAssembly Micro Runtime. It's a WASM interpreter. And that means you can use anything that can compile into WASM as your development language. 
So most of us are doing our contracts in C++, and we get a restricted version of the standard library to execute in there. So it's actually a very functional contract development environment. All right. Any questions on the pieces? Yes. It looks like there's some questions also coming through on YouTube. So we had one around PDO: where is the PDO's data stored? Since the data for the objects is stored off chain, there's really an availability agreement between everyone who's participating in a particular contract that they're gonna keep a copy of the state. So when I wanna perform an operation, I grab the current encrypted state of the PDO and I hand it to a contract enclave service and I say, this is the operation that I wanna perform on it. That execution completes. There's a new version of the state. As the one who invoked the operation, I retrieve all the pieces of the state. I commit the transaction to the ledger, but as part of committing the transaction to the ledger, I have to provide proof to the ledger that I made the state available in one of the public locations so that others can download it. So anyone who's participating in that object can go to one of these storage services. Now, we implement our own storage service with the PDOs, and it turns out that all of the contract enclave services also have a storage service connected with them, so that's sufficient. But there is no reason why you couldn't, as a collective, store the state in IPFS or in S3 on Amazon as well. And then another question was, do you leverage an attestation server to validate client certs to decrypt client data? We do use the attestation service. The current version of PDO is implemented with the Intel Attestation Service for SGX. I mean, we have tried in the code to at least move the TEE-specific details into modules so that you could implement with a different TEE. 
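The invocation flow just described, fetch the encrypted state, hand it to a contract enclave service, publish the new state, commit to the ledger, can be sketched like this. Everything here is a simplified stand-in: `StorageService` is an in-memory toy (real deployments could use the PDO storage services, IPFS, or S3), and the availability "receipt" is just a content hash rather than a real availability proof.

```python
import hashlib

class StorageService:
    """Toy stand-in for a PDO storage service: content-addressed blobs."""
    def __init__(self):
        self.blobs = {}

    def put(self, blob):
        # The content hash doubles as a (very simplified) availability receipt.
        h = hashlib.sha256(blob).hexdigest()
        self.blobs[h] = blob
        return h

    def get(self, h):
        return self.blobs[h]

def invoke_operation(storage, state_hash, operation, contract_enclave):
    old_state = storage.get(state_hash)             # 1. fetch encrypted state
    new_state, result = contract_enclave(old_state, operation)  # 2. run in enclave
    receipt = storage.put(new_state)                # 3. publish the new state
    # 4. the invoker would now commit (old hash, operation, new hash,
    #    availability receipt) to the ledger -- elided in this sketch.
    return result, receipt
```

The key point the sketch captures is that the invoker, not the ledger, is responsible for moving state around; the ledger only ever checks hashes and the proof that the new state was made available.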
So it's the contract enclave hosts, the contract enclave services themselves, that we provide the attestation for. And those attestations are registered in the blockchain so that they can be verified. They are verified on entry into the blockchain, but anyone who wants to see them is able to take a look at the attestations for the contract enclave hosting services. Once we have that, we bind an ECDSA key to the enclave, and all other signatures, all other attestations, are made with that ECDSA key. So we have a way of following the chain of trust back to the Intel Attestation Service for the SGX enclave that we're currently using for the implementation. Good question. Okay, and we just had another one come in: how do you manage CA life cycle? We're not, right now. What we're really doing is basically putting policies on the enclave registrations themselves that require them to be revalidated and re-verified periodically. And the period over which those enclaves have to be revalidated is a configuration parameter. So basically what it says is: we've taken the TEE, we've gone to the Intel Attestation Service and said, here's what we're running, give us a verified report on that. We take the verified report and store that on the blockchain so anybody can look at it. And since there's a timeliness to that, as in the TEE may have been added to the revocation list, or it may have additional patches that haven't been applied to the firmware, we periodically require the enclave service to re-register itself. So we don't really have certificate authorities in the traditional sense of X.509 certs, but we are still building sort of the same basic capabilities through the attestation service and the blockchain. And it looks like a follow-up to that: how far did you go on ACL regarding the PDO? ACL? ACL, capital ACL. Access control list, maybe. 
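The expiring-registration idea, an attestation binds a signing key to an enclave, and the binding lapses unless the enclave re-registers, might look roughly like this. The class and field names are invented for illustration; in PDO the verified report would come from the Intel Attestation Service, while here it is just assumed valid.

```python
import time

REVALIDATION_PERIOD = 24 * 60 * 60   # a configuration parameter, e.g. daily

class EnclaveRegistry:
    """Toy sketch of the ledger-side enclave registry: an attestation
    binds a signing key to an enclave, and the binding expires."""
    def __init__(self):
        self.entries = {}

    def register(self, enclave_id, verified_report, signing_key, now=None):
        now = time.time() if now is None else now
        self.entries[enclave_id] = {
            "report": verified_report,     # assumed already IAS-verified
            "key": signing_key,            # the ECDSA key bound to the enclave
            "expires": now + REVALIDATION_PERIOD,
        }

    def signing_key(self, enclave_id, now=None):
        now = time.time() if now is None else now
        entry = self.entries.get(enclave_id)
        if entry is None or now > entry["expires"]:
            return None    # enclave must re-register before txns are accepted
        return entry["key"]
```

An expired entry behaving exactly like a missing one is the point: transactions signed by an enclave that has not recently re-attested are simply not accepted.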
Yeah, as I said, we don't really have access control lists on anything; it's all policy-based. Okay. And then one of the questions that came in... That is, to be clear, if you want access controls, that's just a set of methods that you would implement in a contract object. So we don't enforce access controls. That becomes part of the smart contract and how you would implement the smart contract. Okay. And can you touch on the new-org onboarding story? So as far as onboarding goes, again, we've really tried to stay out of policy for whether or not a new organization can participate. There's a standard process for adding new contract enclave services to the mix. There's a process for adding new provisioning services that goes through this sort of registration. But what constitutes an acceptable registration is actually entirely up to the organization that's running the ledger. So there is a policy object in there, and again, like I said, our ledger dependencies are very limited. There's basically an acceptable transaction, and then there is admission to the enclave registry. So you must be registered as an enclave before we'll accept any of the updates that you provide on a contract object. So whatever the community decides is the acceptable policy for allowing new enclaves in, that becomes our onboarding story. Mechanistically, it's not very difficult to do. Deciding on what the appropriate policy is, well, that depends on what you want to accomplish. Right now, our policy is very simple, which is: if you present a valid IAS-certified SGX report, then we'll accept you. Valid, in this case, means it has to be running the appropriate version of the contract enclave service. Sounds good. I think we'll let you get back on track. Okay, good. And these are really good questions, right? All of the right kinds of things to ask. 
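"Access control is just a set of contract methods" can be made concrete with a small sketch. This is an invented example contract, not PDO code: the framework enforces nothing here; every check lives inside the contract object itself.

```python
class ProfileContract:
    """Sketch of access control implemented *inside* a contract object:
    the PDO framework enforces nothing, the contract methods do."""
    def __init__(self, owner):
        self.owner = owner
        self.readers = {owner}   # the contract's own notion of an ACL
        self.data = {}

    def grant_read(self, caller, new_reader):
        if caller != self.owner:
            raise PermissionError("only the owner can grant access")
        self.readers.add(new_reader)

    def read(self, caller, key):
        if caller not in self.readers:
            raise PermissionError("caller is not on the reader list")
        return self.data.get(key)

    def write(self, caller, key, value):
        if caller != self.owner:
            raise PermissionError("only the owner can write")
        self.data[key] = value
```

Because the state is only reachable through these methods, and the state is encrypted to the enclave, the contract's checks are the access policy; there is no lower-level interface to bypass.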
So basically, just some of the architectural principles that we keep coming back to. As I said before, the ledger is the root of trust, but the ledger is not the source of all of the activities, right? What we wanna do is basically say that if we have created policies in the ledger for accepting a contract enclave service, or the registration of a particular contract object, it gives us a way of rooting the authority and the authoritative copies of all of the objects. So it provides a reference to what is the current authoritative state of an object. It handles and manages the policies, like we were just talking about: how to onboard new contract enclave services, what constitutes an acceptable enclave service, or what constitutes an acceptable provisioning service. So the ledger, the registry and transaction log, is our root of trust. It is not the root of activity; it's not the source of activity for many of these. And this is what I mean when I say that we separate out the notion of execution of a contract from the authority, or commit, of that contract. Authority and commit is the job of the ledger. Execution is the job of the contract enclave. So we can perform operations that cannot be committed, for example because somebody else has committed the current state, or because the execution returns an invalid transaction. But by separating execution out into the contract enclave services, it allows us to have scalability in the execution separate from the problems we have with scalability of consensus in the ledger. So we can do big computations in the contract. We can do inferencing. We've been talking about how to incorporate some of this work into some trusted federated learning architectures, where we might do some training there as well. So these are big, complex contracts, or they have the potential to be big, complex contracts. 
But the complexity of the contract, because the execution happens off chain, does not affect the scalability of the ledger transactions. The ledger's the bottleneck; keep it as simple as possible. That was our basic premise. For security, we've all heard some of the kinds of problems with trusted execution environments, the side channels against them. And while there is an ongoing process of making sure that they are as secure as possible, we never assume that they're completely secure, and we include some basic tenets in order to increase the resiliency to any form of attack on them. With confidentiality, you explicitly provision your contracts to known hosts. Anyone who wants to participate in that contract object can interrogate the properties of your contract object and see where it's been provisioned to. And the idea is that execution inside a tier-one CSP data center is still fundamentally different from execution inside an academic grad student lab, for the kind of security properties you get. And so we wanna be able to reflect that, and participation is the human's choice to assert a policy of trust on those characteristics. So everything about the contract can be interrogated, including where it can be executed and how it can be executed. The second part of this is, then, if the TEE could be attacked, how do we prevent those commits from becoming part of the authoritative log that gets into the ledger? And the approach we've taken is basically optimistic execution. Any attacks on TEEs are really hard. So what we wanna do is, rather than pay the price before we put something in, we allow anyone who has access to the contract to verify the integrity of any transaction that's been committed in the past. 
And so the idea is we can commit a new transaction to the ledger with a single trusted execution environment saying that it's good, but we can revoke that transaction if enough other TEEs attempt to perform that transaction and get a different result. So the idea is that we're very optimistic, which allows us to go fast, but we still have the ability to revoke something if we can prove that something's been broken. As I've already said, the state's always encrypted outside the trusted execution environment, which has some really interesting properties: the object itself has agency. So we can do things like create data that no human being will ever have access to. It's an aggregation of a bunch of other things, or we've created a fractal image inside it and we're trying to protect the image that's being created. Because of the way the encryption works, we can store data inside the object, and that data is never exposed to any human being or any host in that environment. And that makes for some really interesting possible applications that we can build on it. In general, our notion is that the most motivated party is the one who's in charge. Many of the trusted execution environments don't explicitly support trusted I/O. So we have to create authoritative channels from the execution environments to the different services in order to coordinate the protocols. And we basically trust the individual who is most motivated for it to work correctly to make that work. And as I said, we design the system overall with this notion of trustlessness. And when we start from a zero-trust assumption, everything is really expensive, because you have to establish the acceptable use policies from ground zero. 
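The optimistic commit-then-audit idea can be sketched as a replay vote. This is an assumed simplification: real PDO transactions carry hashes rather than raw state, and the actual revocation protocol is not shown; the sketch only captures "commit on one TEE's word, revoke if enough independent replays disagree."

```python
def audit_transaction(txn, replay_enclaves, quorum):
    """Replay a committed transaction on other TEEs; if at least
    `quorum` of them compute a different result, vote to revoke."""
    results = [enclave(txn["old_state"], txn["operation"])
               for enclave in replay_enclaves]
    disagreements = sum(1 for r in results if r != txn["new_state"])
    # Optimism: nothing blocks the original commit; this check runs
    # after the fact, only if someone bothers to audit.
    return "revoked" if disagreements >= quorum else "upheld"
```

This only works because contract execution is deterministic: honest TEEs replaying the same operation on the same state must reach the same new state, so disagreement is evidence of a compromised enclave.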
But what we've done at the same time is to say there are hooks in here for metadata that can be added to these services that allow us to create a much more efficient environment. So for example, in order to make sure that we get the appropriate resiliency for updates to a contract, anybody can run those contract objects on any of the provisioned enclaves. But it will run an awful lot faster if all of us who are doing it agree that we're going to centralize and execute a particular contract on one contract enclave service. We don't require that. But if we all agree that this is worth our time, then we can get much, much better performance by re-centralizing the execution of those contract objects. So it allows us to have a core of zero trust but allow for relaxing that assumption for particular applications. All right, and then the only other thing I'm going to say about this is that one of the things we finally realized is that we were being silly originally by keeping the contract and the data as separate things. The contract, it turns out, is actually just part of the data. And so we ended up moving the smart contract into the encrypted data, and now we have a complete package that we can execute. The contract enclave is really just a loader for this chunk of data. It's able to extract the contract, extract the data, and then perform the execution inside the interpreter. And I think we've talked a little bit about this already: whoever wants to execute a method pushes the state to the contract enclave service, performs an operation, gets something back from it, and then pushes that change set onto the ledger. At this point, we've got about 15 minutes left. I can go into some interesting stuff that we're trying to do with it, but let me pause again for questions and see if anybody has anything they'd like to ask. There was another question on the live stream.
Are any governance mechanisms offered to support this? We do not have governance other than the very simple execution that we have. But one of the things we get from Microsoft CCF is that it has a built-in notion of governance for the set of services that are part of the ledger. As the ledger is the root of trust, where we expect governance to come in is through the policies that are incorporated by CCF. So in some sense, my answer is that it depends a little bit on what the properties of the ledger are. We have tried hard to make PDO as ledger-independent as we can, but there are some places where the ledger's properties tend to bleed through, and governance is really where that happens. Anything else, Pat? No other questions at this moment, looks like. Okay. Well, let's talk a little bit about something kind of different, and this is the kind of application that we're trying to build with these PDOs. I said before that we can do inferencing. We've looked at how to run TensorFlow Lite inside one of the contract objects; it can compile into WASM, and so we're able to take advantage of that. It requires a little bit of fudging to make it work, but we were able to get it to work. And so the idea is that I can put this inferencing engine inside one of the contract objects and give it a model. Let's say that I build this model called a cat counter. The idea behind the cat counter is you invoke an operation on the cat counter PDO, and it uses its object-detection algorithm and counts the number of cats in that particular image. So what we end up with is, in some sense, a PDO that becomes almost a reusable library routine, and it has some really nice properties as a reusable library routine. We've all been looking at functions as a service and microservices and things like that, and how we do composition of smaller units that way.
But what I get out of this thing, once it's been completely initialized, is something that's effectively a server-independent encapsulation of code that preserves the confidentiality of the intellectual property of the cat-counter model inside it. So it has some really nice properties: it's reusable, it's serverless, and at the same time it's not just giving you bytes, it has behavior with it, which means that, for example, I could charge you every time you use it. If you want to invoke a method on it, you have to prove that you've paid for the right to invoke that method. So I can then take something like this cat counter object and say, well, I'm doing this exchange or this auction for NFTs, and what I'd really like to do is take this dog picture that I have and auction it off, but rather than being paid in cryptocurrency, I want you to give me a picture, and I want that picture to have as many cats as possible in it. Okay, this is a silly example, I know, but it captures the idea here. So again, like I said, the cat counter is not free; I want to make money off the work that I put into creating the object-detection model inside the cat counter. The way we're going to do this is that I bind my cat counter to a billing service, and that billing service is basically going to require you to pay for use of the cat counter. You get back essentially a receipt that says you paid for it. You provide the receipt to the cat counter along with your picture, and what you get back is an attestation from the cat counter object that a particular picture has a particular number of cats in it. And then you can take that attestation that came out of the cat counter and give it to the auction.
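The receipt-then-attestation flow above can be sketched with signed messages: the billing service signs a receipt, the cat counter verifies it before doing any work, and the auction verifies only the counter's attestation. Everything here is a hypothetical stand-in — the keys, the HMAC-based "signatures", and the `count` trick in place of a real object-detection model; a real PDO would do this with enclave-held keys and attested execution.

```python
import hashlib
import hmac
import json

# Hypothetical keys; in a real deployment these would be enclave-protected.
BILLING_KEY = b"billing-demo-key"
COUNTER_KEY = b"cat-counter-demo-key"


def sign(key: bytes, payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()


def issue_receipt(user: str) -> dict:
    """Billing service: sign a receipt proving the user paid for one use."""
    payload = {"user": user, "service": "cat-counter"}
    return {"payload": payload, "sig": sign(BILLING_KEY, payload)}


def count_cats(image: bytes, receipt: dict) -> dict:
    """Cat counter PDO: verify the receipt, then attest to the cat count."""
    assert hmac.compare_digest(receipt["sig"], sign(BILLING_KEY, receipt["payload"]))
    n = image.count(b"cat")  # toy stand-in for the object-detection model
    payload = {"image_hash": hashlib.sha256(image).hexdigest(), "cats": n}
    return {"payload": payload, "sig": sign(COUNTER_KEY, payload)}


def auction_accepts(attestation: dict) -> int:
    """Auction: trusts the counter's key, not how the cats were counted."""
    assert hmac.compare_digest(attestation["sig"], sign(COUNTER_KEY, attestation["payload"]))
    return attestation["payload"]["cats"]
```

Note how each party checks only the signature of the party it trusts: the billing contract, the counter, and the auction stay completely independent of each other's internals, which is the composition property the talk is after.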
And so we get the image, we get the attestation that the cat counter object says there are 25 cats in this one, and the auction can then decide what it wants to do with it. And the cool thing is that all of these services, other than the trust relationships between them, are fundamentally independent things. The billing service doesn't have to know anything whatsoever about how the payments are being used. Sometimes it can be used for a cat counter, and sometimes it can be used for something else. The billing contract is completely independent of the cat counter contract. And the auction contract does not care how cats are counted in the image, as long as one of the things it trusts can provide an attestation of the number of cats in the image. So what we end up with with PDO is, in some sense, a new decentralized programming model. It's compositional, confidential computing. We end up with this library of preconfigured composable objects. They're not bound to a particular provider; this is not a service that I get from Google or a service that I get from Amazon. Anyone can execute these anywhere these objects have been provisioned for execution. And so they have this notion of, going back to the programming-languages view, a closure on a set of data that is completely confidential and completely server-independent, subject to appropriate provisioning of the keys for those servers. And so it's really this new programming model for decentralized function as a service. And that's really the part that we're excited about right now: looking at how we can take that new programming model that we've been creating and apply it in different interesting ways. And this is where we go back to doing the sealed-bid auctions and institutional elections.
We're looking at new means of creating NFTs and creating monetizable data units as well. All right, so closing up then. As I said, private data objects is really a system for doing smart contracts on data. It's a decentralized TEE-based smart-contract execution environment that preserves privacy and confidentiality of the data. Prototype code is available through Hyperledger Labs. I will say it is labs code, it is research code. Most of the time it works, and some of the time it doesn't. The production-quality versions of these things are more in Fabric Private Chaincode and Avalon. We are continuously experimenting with new properties and new techniques inside PDO. If you are interested or have some features that you'd like to look at, by all means just let me know and we can have that conversation. As far as ongoing work, as I said, we have some notion of supporting additional contract interpreters, although we really like WASM for a lot of what it gives us in the bi-directional sandboxing. I'd say we're considering supporting more ledgers, but frankly, we really like the properties that we get from CCF as far as performance and confidentiality on the transactions. All right, and on that, thank you. Any questions? Mick, I've got one, it's Dave. So if you have an enclave of servers where a TEE is going to be residing, and then you install the private data object, are these servers going to be Intel-specific, or any kind of... Yeah, in the current implementation, and like I said, we designed it to be something that you could use, I'll give you the caveats in a minute. Right now, the implementation is based on Intel SGX, the Software Guard Extensions. All of the Ice Lake servers that are out now have SGX support.
The E3 Xeons for the last several years have supported it, and they're supporting the clients. If you want confidential services, you can get them from, for example, Microsoft Azure with their confidential compute service. IBM is offering some of their cloud servers that are bare-metal SGX as well. And you talked about that TensorFlow cat model; models can get huge, and I know this was just not a real... It's a toy. Yeah, it's a toy for sure. But it's an interesting concept that these AI-type neural-net models and processes could be in a TEE. Is that realistic, in terms of size, if I wanted to do something for real that way? Yeah, so the answer is yes and no. On the older E3s, the amount of memory that was available for the enclave was restricted, and so you had to be kind of careful with it. But the Ice Lake servers have increased that substantially, and it's enough. We've been able to show, in in-house experimentation, that there's enough memory to do some pretty substantial, interesting computation in those. Interesting. Very good, thank you, that was great. Appreciate it, very interesting. Any other questions? We have a couple in the chat. Can you please explain a bit about explicit and implicit private data use cases? So the notion of, yeah, so explicit. I will give you my basic approach to this thing, which is: if there's something that I want to be used in a particular way, can I create an explicit set of operations that export that data exclusively in the way that I want it to be used? I can put the data inside the PDO. The PDO manages all of the encryption and the confidentiality around it using the trusted execution environment.
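The explicit-use idea can be sketched as an object that only answers policy-approved queries and never exports its raw data. This is an illustrative sketch; the class, operation names, and policy set are hypothetical, and a real PDO would enforce this inside an enclave over encrypted state rather than in ordinary process memory.

```python
class PrivateDataObject:
    """Toy stand-in for a PDO: the raw data never leaves the object;
    only the explicitly allowed operations can be invoked."""

    def __init__(self, secret_values):
        self._secret = list(secret_values)   # held privately, never exported
        self._allowed = {"mean", "count"}    # the explicit use policy

    def invoke(self, op: str):
        if op not in self._allowed:
            raise PermissionError(f"operation {op!r} is not permitted by policy")
        if op == "mean":
            return sum(self._secret) / len(self._secret)
        return len(self._secret)
```

The point is that appropriate use is encoded as the only interface to the data: callers can learn the mean or the count, but there is no operation that returns the values themselves.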
And so in some sense, I have an explicit set of policies for what constitutes appropriate use, and I can encode them inside the PDO, and the PDO does the enforcement. There are other, more challenging problems with implicit information that gets exported. The transaction rates on a particular private data object, for example, may lead you to some understanding of the importance of that object or its properties. Or the protocols between objects, as we move data back and forth between them, may give you some information about the kinds of objects that are in there. There's an implicit expectation of confidentiality that could be compromised as a result of those transactions. This is one of the reasons why we like CCF. First off, we do anonymization of the transactions as best we can, so you don't get the same identity for the same transactions when they're being committed to the ledger. The ledger does not know anything about who's actually doing the committing; it just knows that it was the same individual that performed the operation. But because CCF itself is implemented using a trusted execution environment, the communication between the client and the ledger, and between all of the components of the ledger, happens through encrypted channels as well. So while you can see activity — we can't hide the network — you can't even see differentiation between which contract objects are being invoked. If I'm understanding your question correctly, those are really the two things that we're trying to prevent in this case. Okay, got one more here.
When revoking a transaction after getting conflicting results from other TEEs, are there any restrictions on the time period when a transaction could be revoked, or on which TEEs are trusted to disagree with the original result? Yes. When we did the implementation in Sawtooth, we were really looking at a number of blocks and block commits on the ledger. So you had to provide evidence of revocation within, say, a ten-block period of the original commit. It is a policy decision, and one of the things we were looking at was whether every contract object should have its own policy for revocation. That's probably the best way to do it, and in our design that's what we were looking at, but the implementation is much more primitive than that. So yes, there's a time limit on the revocation. One thing I'll point out here, which is a little different, is that when there is a disagreement between TEEs, we can infer much more about the problem, because they should never disagree with one another unless a compromise has occurred or unless there is some randomness in the contract that has not been removed. And so we can infer a little bit more, and the policies for handling invalid transactions, and how we audit them, allow us to do some things with that. All right. Sounds good. We're at one o'clock. There are a couple more questions on the live stream. Do we have time to go through those? I have another meeting I need to go to. I would suggest, if you have questions, send me email or join us on GitHub and either ask questions there or post issues. That sounds great. Really appreciate it, Mick. Nice job. And thank you everyone for attending. All right. Thanks very much. Have a great day, y'all. You too. Thanks, Mick. Bye-bye.