I'm from Microsoft Research in Cambridge, and I'm here to tell you about the Confidential Consortium Framework, which is our project. The link only appears on the last slide, which might have been a mistake, but it's open source on GitHub, so please feel free to check it out. What we set out to do is build a framework that makes it easy for people to construct multi-party applications with good guarantees. One thing that's often a problem is the lack of a governance mechanism, so we wanted a verifiable consortium governance mechanism: a system where you can have very well-defined governance rules for the multi-party application, which are easily verifiable by all participants — by the members themselves, and also by users — so they can see that the network, or the application, is run in the specific way it is promised to be run. We also want fine-grained confidentiality, of course. People should have very fine-grained control over their data, and it should be possible for application makers to arrange that you can only see or reveal what's useful and appropriate for business purposes. We want a very simple programming model — everyone wants a simple programming model. We want it to be approachable and to make it easy for people to get those guarantees. And of course we want high availability and high efficiency: it's no good to have an easy framework if you then have to do a lot of legwork to make your app available and fast, right? So broadly speaking, from a distance, this is what a CCF application looks like. You have a number of machines running trusted execution environments, or hardware enclaves, running anywhere: they could be in a cloud provider, across multiple cloud providers, or on-premise; it doesn't really matter where. A bunch of users interact with that service, and a set of members provides governance.
So the first thing the members do is endorse the service: they essentially endorse the identity of the service and make it so that everyone knows this is the service they're talking to. They also govern the service: they govern the membership set, they allow new members to join or leave, they can add and remove users, and all that. And they look after upgrades too: if they need to replace the code, improve the application, fix security bugs and so on, they're on the hook for that. Separate from them is an operator — someone who's just provisioning things. They provide the hardware, and they can spin up the instances of the nodes the application runs on, but they're not trusted; they're not inside the trust boundary. They spin things up and provide them on behalf of the members — maybe the members pay them to make this available — but they do not have any specific access to the application. They are not part of the trust boundary of the application.

So how do we bootstrap that? How do we get started? For that we need a number of trusted execution environments, and we need a way to connect them together and make them into a coherent network. (Sorry — yes, I can try to speak up.) So, trusted execution environments: mostly everyone is familiar with them — well, that person left, but mostly everyone. What we want is encrypted and integrity-protected memory. This is key to make sure that the operator — the person running the nodes on their machines — cannot see anything that's happening inside the node. We want cryptographic evidence over running code: the ability to assert at a distance, or to check at a distance, what's running. And finally, remote attestation: we want to be able to do this remotely.

This gives us distributed trusted computation. It allows us to spin up a number of enclaves and to check that they are running in enclaves, so that their contents — and anything we send to them — are not immediately visible to the people running them, and we know exactly which code they're running. Then we can make them part of the network, and we can distribute our application with confidence that information is not going to be leaked and that we're not going to have integrity issues.

So how do we do that? Here's a node overview. A single CCF node roughly looks like this: you have a client talking to the node over TLS. We don't support generic byte streams, only TLS — you have to talk to us over TLS. That goes through the host. The host is potentially malicious and can look at all the frames and so on, but these are encrypted TLS frames on the way in and out, so the host can't see or derive anything useful from that traffic; it's no different from someone intercepting the network traffic. The TLS session terminates inside our enclave, and inside the enclave there are essentially two parts: our framework, which provides a certain amount of functionality, and the user application code, which is the business logic.
So the enclave contains the application logic and state. It contains the governance code, which is what we referred to earlier, and which is what allows these enclaves to talk to each other. And it contains fault tolerance code: code that allows us to distribute updates across the network of enclaves and make sure we keep availability if a certain number of enclaves fail or go down — the application can still run.

For communication between the host and the enclave, we want to avoid ecalls and ocalls — that is, calling into and out of the enclave. There are a number of performance and security reasons why it's not such a good idea to do this on a regular basis; you want to minimize how many of those you do. So the way we do things is there's a single ecall at startup, when we provision the enclave, and then all communication between the enclave and the host is done over ring buffers: you put data on the ring buffer, the other side reads it out, and the same in the other direction. (I see a lot of people taking pictures — I want to point out the slides are on the website, so you don't have to.) So the communication is basically just the TLS frames in and out over the ring buffers, and also heartbeats. We'd like to have trusted time.
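Before we get to time: the single-ecall-plus-ring-buffers scheme just described can be sketched roughly like this. This is an illustrative Python sketch with made-up names — the real implementation is C++ shared-memory ring buffers — but it shows the shape: after startup, no function calls cross the enclave boundary; only opaque, already-encrypted frames do.

```python
from collections import deque

class RingBuffer:
    """Toy single-producer/single-consumer ring buffer (hypothetical sketch,
    not CCF's actual implementation). The host appends encrypted TLS frames
    and heartbeats; the enclave drains them without any ecall/ocall."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()

    def write(self, frame: bytes) -> bool:
        if len(self.items) >= self.capacity:
            return False          # buffer full: the writer retries later
        self.items.append(frame)
        return True

    def read(self):
        return self.items.popleft() if self.items else None

# Host side: pushes opaque, already-encrypted frames inbound.
inbound = RingBuffer(capacity=4)
inbound.write(b"<encrypted TLS frame>")
inbound.write(b"<heartbeat>")

# Enclave side: polls the buffer; TLS is terminated *inside* the enclave,
# so these bytes are only decrypted after this point.
frame = inbound.read()
```

The point of the design is that the host only ever sees ciphertext moving through shared memory, and the enclave never yields control to host code on the hot path.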
There's no such thing as trusted time, but we still need some notion of time, because if we're going to run distributed consensus, for example, we need something to drive timeouts and that sort of thing. So we take rough time from the host. The host is not trusted, and it could perform a denial of service by withholding time updates, or sending them too often, and so on. We claim this has no impact on confidentiality and integrity; it does have an impact on availability. If you're the host and you want to perform a denial of service, you certainly can — but then again, if you're the host, you can impact availability just by withholding all TLS frames from the enclave, and there's not much availability left there either. The way we get availability is by distributing: you have a number of enclaves running, set up with a consensus mechanism, and then someone has to take down at least half the enclaves if you're running CFT, or maybe a third if you're running BFT. But if you own all the hosts, then yes, you can stop the system from working completely.

Now that we have these building blocks, we can talk a bit about the join protocol — how a node joins a network that's already established, already bootstrapped. The way we start out is with a single node, and then we gradually bootstrap a network by adding new nodes to it. Each node creates a key pair that stays inside the enclave; that's the identity of the node. The node produces a quote over its state — the platform and the code that's running, and its identity. The quote contains platform information — microcode version, that type of information. It's sent across to the network, and the network then decides if this node should be allowed to join or not. There are basically two parts to that. One is checking the quote: making sure that the node that wants to join is in fact running SGX, running the right microcode, everything's up to date, and it's running the right code inside the enclave. The other part is where governance kicks in: a proposal is created and put in our state, and then our governance mechanism, which I'll talk about next, kicks in, and the members get to decide whether to allow this node to join. In some configurations you could decide to allow that all the time — if it's got the right identity, it's running on SGX, running the right code, and so on, maybe everybody's allowed to join. There are some reasons why you probably don't want to do that. Or maybe the members have set up some rules: maybe they want to check who's joining first, or just limit how many nodes join at the same time — there are various problems with having too many join at once: everybody comes in and then has to catch up, and that creates problems.

If the network decides that it likes the node and wants to add it to its configuration, it endorses the node's identity and sends it the ledger secrets, so consensus can kick in and we can start replicating state to this node — because now it can read the updates we send it. And we have a single network identity, which is distinct from the identity of the first node but is initially set up by the first node. This network identity is what gets distributed: you can produce a CSR for it and have it signed by a root CA, or it could be something the members distribute to their users. If you think of the members as a group of companies — maybe a group of banks — they could distribute this root certificate to their users, and when a user connects to a node over TLS, the fact that the node's certificate is endorsed by this root cert is how they get proof that attestation has been conducted properly and the rules of governance have been followed. Once the node has joined, it catches up on state and becomes part of the system.

That leads us into governance: how do we build a mechanism that makes it easy for members to decide how the network should be governed and operated? We have our consortium of members, and initially what they do is endorse the first prefix of the ledger and the configuration. There are a first few transactions that configure the system, and the members have to come in and stage a vote, basically to say: yes, we agree, this is the network we wanted to start — it's running the right version of the code, with the right membership, and we're happy with the identity of the node so far. They vote for that, and it's recorded. In general, this is just an instance of our generic governance mechanism, which is staged votes. You can stage votes to decide things like membership — who should be a member of the network, who should be a user — what the network configuration should look like ("I want to add a certain number of nodes, remove some nodes"), code updates ("I want to change the versions of the code that are allowed to run in the network"), or changes to the constitution itself. All the votes are governed by a constitution, which is a script — I'll show one on the next slide — and which decides how the votes are passed, essentially how the votes are counted. The constitution, being part of our state, is itself something we can update, probably with different, stricter rules.

So the constitution is a script, and the proposals and the votes themselves are scripts. A proposal could be "we want to make this change", and then all the members have to look at it and say "I agree" or "I disagree". But if all you can say is "I agree" or "I disagree", you have a time-of-check-to-time-of-use problem: things may have moved on, and maybe you care about other elements of the state. Maybe you're happy to add a node, but only if there aren't already more than ten nodes, because you're trying to restrict the total number of nodes. So your vote should be something executable: something that can look at the proposal, but also at the current state of the system, and say "okay, I agree with that", or "given the circumstances, given the state we're in, I don't want to accept that". Because your vote is executable, you can get rid of these concerns — and this is how we count things.

That's all the slides, so we should see a bit of code — hopefully it's big enough for everyone to read. Okay, I'll try to go through this quickly, but essentially this is a very simple constitution sample; some will be bigger and more complicated than that. Here, all we're saying is: for most things, all we're looking for is a majority of members agreeing. If someone stages a proposal — say we want to add a new member, or a new user — we just look for a majority of the members which are currently active and check that they agree. So we count member votes, and we only count votes from active members: members who have been retired, or who are still in the process of joining and aren't active yet, shouldn't be counted in the vote. We count all members the same, but we could make distinctions: we could have tables storing special members — operating members, senior members, whatever — and maybe they have special rights. Maybe they can veto some things; maybe their vote alone is sufficient to pass the whole thing. It's very flexible; we can do pretty much anything we want here. This is just a simple example. And there are some tables that store information that maybe shouldn't be so easy to modify.
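The vote-counting logic of a constitution like the one just described — a majority of active members for ordinary proposals, something stricter for sensitive ones — can be sketched roughly as follows. This is illustrative Python, not the real thing: the actual constitution is a Lua script running against the KV tables, and all the names here are made up.

```python
def pass_proposal(proposal, votes, members):
    """Sketch of a constitution's vote count (hypothetical names).
    `members` maps member id -> status; `votes` maps member id -> True/False."""
    active = {m for m, status in members.items() if status == "ACTIVE"}
    # Only active members count: retired or still-joining members are ignored.
    ayes = sum(1 for m, v in votes.items() if v and m in active)
    if proposal["kind"] == "set_constitution":
        # Replacing the constitution itself requires unanimity of active members.
        return ayes == len(active)
    # Everything else needs a simple majority of active members.
    return ayes > len(active) // 2

members = {"alice": "ACTIVE", "bob": "ACTIVE", "carol": "ACTIVE", "dave": "RETIRED"}
votes = {"alice": True, "bob": True, "dave": True}  # dave's vote is ignored
ok = pass_proposal({"kind": "add_user"}, votes, members)  # 2 of 3 active members
```

Because the constitution is just code over the governance tables, variants like veto rights or weighted senior members are only a few more lines of the same shape.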
For example, there's a whitelist table listing which tables are allowed to be affected by governance, and there's the governance script table, which is the table that stores this constitution. If people are going to replace the constitution, you're probably looking for something more than a simple majority, so in this case we enforce unanimity: the total number of member votes needs to equal the total number of active members. For everything else, if we have a majority, we pass; otherwise return false. The way this works is very simple: every time a member sends a vote, their script is executed, which decides their vote; then the constitution is run again and we count. If it's good, the proposal passes; if not, it's still pending. There's also a way to withdraw proposals, if you think there's no realistic chance of them passing. But that's roughly how it works.

[Question: Are the types of proposals fixed?] Not really, because your proposals are Lua, and the votes are Lua. If you can express your proposal as a Lua script that executes against the tables in the key-value store backing this, then you can put it to a vote. The other members will have to write a script that knows how to evaluate your proposal — and it might be tricky for them if you propose something sufficiently complicated; maybe they won't agree to it. Yes — your proposal is code, so you have to cover that.

Conveniently, we'll look at proposals now — some simple ones. The first one is just "we want to add a new user": the tables and the user cert are passed as arguments, and what we're saying is we want to add this new user to our store. The way this is split is that proposals are typically little bits of code, with the arguments templatized and parameterized — so here the user cert is passed in separately. The next proposal passes a new code digest and says we should be able to run this new code version in our system. And we can have a look at the ballot here — a vote to evaluate that. The vote looks at the changes proposed by the proposal: it checks that there's only one change proposed, that the proposal is a new-code proposal, and that the new code is this particular code ID that we have decided is acceptable. Typically what would happen is that the member would have communicated with the other members ahead of time and said: look, I want to deploy this new version of the code; here's what you need to reproduce the build. You audit the code, you reproduce the build, you get the same ID — the same hash over the code — and you decide: okay, I want to put in a vote for that. So you're not accepting the proposal itself; you're accepting this particular code ID. That's what you're accepting there.

So, code updates: it's essentially a simple three-phase mechanism.
You add a new supported code version through this vote mechanism, and everyone agrees to it. For a while after that, you spin up new nodes running the new version of the code, making sure the new and old versions are compatible. Then you can stage a vote saying: I want to remove the old version of the code, now that we've upgraded. Maybe the new version contained a security fix and you don't want to allow the old version to run anymore. So you stage another vote saying we should get rid of the old version, now that we have a good quorum of nodes running the new one. If that passes, the old code is removed, nodes running the old code can't be part of the network anymore, and you can tear them down.

Another thing we potentially need governance for is catastrophic recovery. Typically we run with a consensus algorithm, and there's a formula telling us how many failing nodes we can tolerate — that's f. If we're running CFT, crash fault tolerance, with 2f + 1 nodes, we can lose f nodes and still make progress. If we're running with Byzantine fault tolerance instead, we need 3f + 1 nodes, and then we can tolerate f failures. But if we have more than f failures, the network can't make progress anymore because there's no quorum: transactions can't be globally committed, and we're stuck. In that case we need to perform catastrophic recovery, and what happens is we go back to the members, who are ultimately the root of trust for this. We have a mechanism that uses key shares to do that — there's also an older mechanism that uses sealed keys — but essentially you need a way for the members to come together and put together enough information that they can read the old ledger and produce a new service that runs on the basis of the old ledger. And you want the new ledger endorsed by the old service: if people were connected to your service and you haven't distributed the new identity of the service to them yet, they can still connect, and they still know that this is endorsed by the old identity.

The members have quite a lot of power here, so it's up to them not to endorse a new service — or not to endorse multiple new services, for example — unless they're happy with the state of it. But it works much the same way as bootstrapping the first time: one of the members stages a recovery vote, the other members get to examine the state of the service, and if they're happy with it, they endorse it and things move forward.

And finally, we want verifiability: it's no good to have well-defined governance if no one can audit it, because then there's no trust — you'd have to trust that the code was written correctly and that all the votes were executed correctly. For that to happen, all the governance state — all the tables that hold the governance of our system — is public. Anyone who has access to the ledger, which should be anybody, can see what the governance tables hold, and they can essentially replay the whole governance story. If anything doesn't match — if, say, the total counted for one of the votes doesn't add up — they can say: hold on, there's a problem here. All governance transactions have to be signed; this is enforced by the service as well. So you can not only say there's a problem; you can also blame whichever member is responsible — whoever made a mistake, or lied about the way they voted, for example. This happens in the same order as other transactions: everything happens on the same KV, and governance is done over a set of tables which is part of that KV.
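Going back to recovery for a second: the key shares handed to members can be thought of as k-of-n secret sharing, where any quorum of members can reconstruct the material needed to read the old ledger. Here is a toy Shamir-style sketch of that idea — illustrative only; CCF's actual share format, parameters, and field are not shown here.

```python
import random

P = 2**61 - 1  # a Mersenne prime; a field this small is for illustration only

def split_secret(secret, k, n):
    """Toy k-of-n Shamir secret sharing: evaluate a random degree-(k-1)
    polynomial with constant term `secret` at points 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange interpolation at x = 0: any k shares reconstruct the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

ledger_key = 123456789
shares = split_secret(ledger_key, k=3, n=5)  # 5 members, any 3 can recover
recovered = recover_secret(shares[:3])
```

The governance point is that no single member — and no operator — holds enough to decrypt the ledger alone; recovery requires a quorum of members acting together.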
They're not separate: the versions — the commit IDs you get for governance changes and for regular transactions — are on the same timeline, so it's very easy to decide which business transactions should have been affected by governance. If a user, for example, has been kicked out of the system, and you're trying to figure out exactly until when they could have performed transactions — or should have been able to — you can do it easily, because it's part of the same version timeline. All of this is also recorded in a tamper-evident ledger, with fairly standard blockchain mechanisms — Merkle trees — to make sure that all our updates are chained, so you can't just go back and alter some part of the history without it being very visible later on.

It's all well and good to have this stuff, but it has to be easy to write applications for it. The first guarantee we want to make is that, unless you really go out of your way to log private information, all data in CCF is encrypted all the time. It's encrypted at rest: all the data we store on disk for recovery and audit is encrypted. All data sent around the network is encrypted on the wire. And in memory, we also get encryption from our use of enclaves: somebody with access to the machine cannot just look at what we're doing.

Roughly speaking, our application schema looks like this. The application code plugs into the gray box, and everything else is provided by the framework. Client frames come from the host — that's just TLS; it comes to us as TLS frames. We have a front-end component that takes care of all the TLS functionality, decrypting things, and authentication against the KV. Remember, thanks to governance we know exactly which users or members are allowed to use the system at which point in time, so we can check that, and if an identity is not allowed to do the operation it's asking for, we can reject it automatically — it's not something the application has to worry about. So requests reach the application with an identity that's been verified: transactions are authenticated and, at this point, authorized. The application engine then produces a read-write transaction against the key-value store. In their business logic, all state has to go into the key-value store provided by CCF. Applications can perform any operations they want, but for the state to be persisted, for the ledger to be recoverable, and for it to be distributed and available across the network, it has to happen as operations against the key-value store. Further downstream, we replicate this using consensus — so that again goes back out through the host as a bunch of encrypted frames, out to the ledger, to be stored in permanent storage. We also hash all the transactions, put them in a Merkle tree, and at regular intervals we sign the root; that goes to the ledger too, and this is how we get our proofs. So: we have encrypted TLS frames coming in and out of the application — that's the only thing that crosses the enclave boundary — we have an append-only ledger on disk, and all application state must be in the key-value store for the guarantees to hold.

We have a couple of consensus variants. Crash fault tolerance is the main one we use now; we also have a BFT implementation that's still a work in progress, but it's available in the build for people who want to try it. For crash fault tolerance we have an in-enclave Raft variant. If you have 2f + 1 nodes, you can have up to f failures and the network will still make progress. Because we've added signatures, you can blame compromised nodes: you can go back to the ledger and verify offline what has been done, and potentially take things out if you think some nodes have been malicious. But that's not happening online — if a node somewhere is acting maliciously, it can potentially go on doing it for some time; you have to check for that offline and remove the transactions that don't belong. Here we rely on the TEE for both confidentiality and integrity: if the TEE is compromised — if someone can, for example, tamper with the memory — we won't be able to detect it online; it will take an offline audit to detect it. If you use our PBFT variant — still a work in progress, but it's coming online — then with 3f + 1 nodes you can tolerate up to f failures, including up to f nodes being malicious. The nice thing there is that you still rely on the TEE for confidentiality — so if SGX is actually broken one day, your data is no longer confidential — but you potentially don't lose integrity if no more than f nodes are affected. If you're clever about how you distribute your instances, so they're not all subject to the same attacker — maybe they're geographically distributed, or spread across multiple cloud providers — and you manage to keep 2f + 1 nodes unaffected by the attack, you still keep integrity. Your attacker gets to see the data, but they don't get to manufacture transactions that are illegitimate.

The key-value store is a very simple storage interface; there are really just two things you can do with it: get things out and put things back in. CCF is implemented in C++ — we'll get to the supported runtimes — and you can put any old C++ type in and out of it; for other languages we have mappings. Transactions are very straightforward, like any local key-value store you'd use. We have strict serializability and opacity, so that even failed transactions only ever see a consistent state. There are really no tricks: it's very easy to use, and it behaves the way you'd expect. And you get app-driven confidentiality because of this very flexible key-value store model: the app decides completely how it wants to expose data — it's up to the application logic to decide how to do that.
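The key-value store interface just described — get and put inside a transaction, with writes landing atomically at a single version — can be sketched like this. A minimal Python sketch with hypothetical names; the real store is C++ with typed per-table maps, and this toy version deliberately omits conflict detection and replication.

```python
class Store:
    """Toy versioned key-value store: reads see a consistent snapshot,
    writes commit atomically at a single new version (sketch only)."""
    def __init__(self):
        self.data = {}
        self.version = 0

    def begin(self):
        return Tx(self)

class Tx:
    def __init__(self, store):
        self.store = store
        self.snapshot = dict(store.data)  # reads see a consistent snapshot
        self.writes = {}

    def get(self, key):
        return self.writes.get(key, self.snapshot.get(key))

    def put(self, key, value):
        self.writes[key] = value

    def commit(self):
        # All writes land atomically at one new version; in CCF this version
        # is also what receipts and governance share on the same timeline.
        self.store.data.update(self.writes)
        self.store.version += 1
        return self.store.version

store = Store()
tx = store.begin()
tx.put("balances:alice", 100)
tx.put("balances:bob", 50)
v = tx.commit()
```

In CCF, that commit version is exactly the commit ID mentioned earlier: the one shared between business transactions and governance, and the one a receipt refers to.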
There is no built-in support in the key value store to label things with certain degrees of privacy or anything I was completely left to the code And you can of course store your code in this in the store itself So you have the key value store and one of the things you can do is some of the values could be code themselves So if you want to upload scripts if you want to upload things that can be executed by the users Then of course you can't Another thing that the framework provides for you is transaction receipts so This is a little tricky for you for the applications to implement themselves for for a variety of reasons But basically applications should be able to or users that use the application should be able to get receipts from the application They're signed by the service that say that This out this outcome happened and it happened at this version inside a KV So if you're talking to people who are outside the system and you're trying to prove to them that Something did happen or you did do something and you want to give them cryptographic evidence that this thing happened And it should be something that can be verified offline They shouldn't be they should need to be a user that system is to talk to a system to verify that it's happened You can do that through the receipts And so you can write your CCF apps either in C++ directly which is probably what you do for maximum speed Although it's a bit tricky, but you know, it's not that bad. 
It's modern C++ Or you can use one of the runtimes that we've that we've built So the standard one that's that's better tested that governance runs at the moment is is Lua We also have a project called Evm for CCF which allows you to use any Evm compatible language like so easy to run smart contracts against the KV store Or recently we've added JavaScript support is still a bit experimental and There's there's probably the odd box here and there still but but it's coming as a supported option as well So that's mostly it and the code is on GitHub. So go check it out and Try it out any questions That you have remote station. Yeah Didn't hear any detail about it. So I wanted to ask is it possible to not be a member of the network and somehow do remote station and say, okay, this Continental network or whatever it is. It has system integrity. What kind of do you use for the remote station structure or some details? Because I didn't slide about it. Sorry as a working. Yes working The real test station. So, sorry, maybe something I should have made clear is we use the open enclave SDK And essentially at the moment. So the open enclave SDK supports SGX mainly. So our stations are just SGX at station There's a standard SGX station and open enclave is adding support for arm trust on coming pretty soon So once they do will hopefully be multi enclave. They don't have a mechanism for that Yeah, they don't have they don't have a mechanism for that now. Yeah, but at the moment if you if you run this Yes That's that's right, which is which is the Intel open enclave node and I'm not part of a confidential network. Yeah, I can still verify You don't need to use an open enclave node. So Open enclave provides a small utility library, but the Intel SDK does as well Checking the remote attestation provided by an enclave. It's just a matter of getting the right certs and the right CRLs from Intel Yeah, the format is public. 
I mean it's specifying in the Intel SDK documentation So we do and so if people want to do that So as a user you can of course trust the network identity because it's been given to you by members and it's endorsed by members But if you want to verify the enclaves for yourself before you connect One of the things you can do is there is an endpoint that allows you to get a list of nodes and the attestation for each node On you could yourself verify the attestation for each node and make sure that it's running the right version of the code Against an up-to-dates, you know into sgx enclave and so on. So you can do that yourself as well if you want to Yeah, you absolutely can do that You can absolutely Yeah, and so you'll be able to verify that basically these nodes are running With a given identity so they'll have the the public key for the pair where they hold the private key that's just kept inside Yeah, sorry what the measurements Yes, so it's the identity of the nodes the code that the node is running and the platform details Yeah, that's what sgx attestations provide to you So If you do this normally, then yes, you have to talk to the Intel servers to get the search and so on Intel has a mechanism to delegate this. So I know that in Azure if you use the Microsoft cloud for example Yes, you can use a Microsoft service and you can set up your own service to do that But you have to talk to Intel. It's probably not super easy, but it's something you could probably do Yeah, yeah, I have business cards and you come to me after talking Yeah, yeah, okay good Thank you So you have facility I'm sorry How scalable is the framework? So that's a really good question So the largest test we've run so far were with about 20 nodes So we've run that to do performance benchmarks that we put in the initial technical reports that we published about a year ago And this was across two regions. So Adam on Azure only supports sgx hardware in East u.s. 
and West Europe regions, so it was roughly ten nodes on one side and nine on the other. Can you scale it to a larger number of nodes? We hope you can, but the benefits are not super clear. If you use Raft, the CFT implementation, the only thing that more nodes lets you scale is reads. You can scale reads across the nodes, but writes still have to go through a single leader, so you're actually making the leader busier, because it has to replicate to more nodes. But if you have a read-heavy workload, it might still be a nice thing.

You can also offload signature verification in the case where you have proxies. If the commands that come in are signed, then because the nodes are verified and part of your trusted computing base, you could have them verify the signatures before they send things on to the primary. But that's not a whole lot you can offload to secondaries in that situation. And in PBFT, unfortunately, it's worse than that, because conceptually every operation is a write, since you need to get a proper consensus on it; it's not just a leader doing things and pushing updates out. So in that case you could scale up to a large number of nodes, but it's probably going to cost you a lot in performance.
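The read/write asymmetry just described can be put into a back-of-envelope model. The numbers below are purely illustrative, not CCF benchmarks: reads can be served by any replica, while every write passes through the leader, whose per-write cost grows with the number of followers it replicates to.

```python
# Back-of-envelope model of CFT/Raft scaling (illustrative numbers only):
# adding nodes grows read capacity linearly but shrinks the single leader's
# write budget, because each write costs it one replication per follower.

def capacities(n_nodes, per_node_ops=10_000, replication_cost=0.1):
    """Return (read capacity, write capacity) for an n-node cluster."""
    # Any node can answer reads, so read capacity scales with cluster size.
    reads = n_nodes * per_node_ops
    # Each write costs the leader 1 unit of work plus a replication
    # overhead per follower it must push the update to.
    write_cost = 1 + replication_cost * (n_nodes - 1)
    writes = per_node_ops / write_cost
    return reads, writes

for n in (3, 9, 19):
    r, w = capacities(n)
    print(f"{n} nodes: reads ~{r}, writes ~{w:.0f}")
```

Under this toy model, going from 3 to 19 nodes roughly sextuples read capacity while cutting write capacity by more than half, which is the trade-off the answer above is pointing at.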
It will give you good availability, so you'll be able to lose potentially a lot of nodes, but beyond that the benefits won't be great. There is also ongoing, slightly longer-term work, probably not something that's going to come out this year, to add sharding to CCF and really scale out to larger volumes. At the moment, if you run CCF applications, the best benchmarks we have for applications written in C++ are somewhere in the vicinity of 50,000 to 60,000 transactions per second. So you can run some things already; it's not extremely large scale, but you can run reasonably large applications, geo-replicated across two data centers and so on, and with sharding we're hoping to do more. I don't know if that answers your question.

OK, so the question is: if I'm a new user, do I have to sign anything? And the answer is you don't have to; it's standard TLS. As a new user, you won't automatically be allowed to take part in the service unless the service is configured to be very open. Typically it's TLS end to end, so what would happen is you'd have to talk to one of the members and say: I want to participate, here's my identity.
Can you add me? That member would stage a vote to add you, then the governance rules would apply, and, you know, you'd get in. There could also be schemes where you're automatically allowed to participate because your cert has been endorsed by some identity; the governance might decide to allow that. But basically, that's how the scheme works. Once you've been added, you establish a TLS session and you can start talking to the service, and that's good enough as far as the service is concerned.

We do support client signatures, to support use cases where people might decide to proxy things. If you are a client and you want to talk to CCF, but you're not able to establish a direct TLS connection for whatever reason, you could build a command, sign it with your identity, and have someone else send it over to CCF. They need to be allowed to talk to CCF as well, through their own identity, but the command would be recorded with your signature and with your correct identity. There is a trade-off there: it's slower to do that if you're trying to send a lot of transactions, but on the other hand it means you can route through anywhere you want, so that's also nice. And if you're trying to build workflows where you want verifiability for user transactions as well as governance transactions, maybe you would want to enforce that: if you have some very sensitive user operations, maybe you force the users to sign, then store the signatures with the commands, and people can verify things offline.

Sorry, I don't know if I'm running out of time. Vasily is getting closer; do we still have time? One final question.

[Audience] You have to trust the members, like you mentioned at the end. Could they basically also use this
to roll back the whole ledger?

So yes, that's a good question, and the answer is yes. The members could decide that they all agree that the last 20 transactions did not happen. Sometimes this is a good thing, if there's been a problem, but it could also be a bad thing, because the members could be colluding. And what protection does the user have against that? The answer is: if they've kept their receipts for those transactions, they can say, I did have this receipt, it was committed at this version and it was signed by your service, and although you truncated it post-recovery, I have proof that this was in the ledger. So you still have some defense against that if the members decide to truncate things because they want to remove an inconvenient, rather than illegitimate, operation.

But yes, when the members get together, the way it actually works is that you spin up a bunch of nodes, they look at all the ledgers, they verify all the entries and signatures that are in there, and they give the members back information about which ledger is the longest and has been globally committed. Then the members can decide to vote on that; they could decide to shorten the prefix for whatever reason. The signatures in the ledger, just to be clear, are done by nodes, so they allow you to attribute operations to particular nodes. If you know that some nodes were compromised, this is how you can decide to truncate various things.
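The receipt defense above can be sketched with a toy Merkle tree: a receipt is an audit path from a transaction up to a root that the service signed, so the holder can re-derive that signed root from the receipt alone, even if the ledger is later truncated. The layout below is a simplification for illustration, not CCF's actual receipt format.

```python
# Toy Merkle-tree receipts: a receipt (leaf + sibling path) lets anyone
# recompute the root the service signed at commit time, so a later ledger
# truncation cannot erase the proof. Simplified; not CCF's real format.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def receipt_for(leaves, index):
    """Audit path for leaf `index`: sibling hashes from leaf up to root."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))  # (hash, is_left?)
        index //= 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return path

def verify_receipt(leaf, path, signed_root):
    node = h(leaf)
    for sibling, is_left in path:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == signed_root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
signed_root = merkle_root(txs)       # the root the service signed at commit
receipt = receipt_for(txs, 2)        # user keeps this for tx2
print(verify_receipt(b"tx2", receipt, signed_root))  # True
```

Even if the members later agree on a ledger that omits `tx2`, the holder of the receipt can still show that the service once signed a root containing it, which is exactly the accountability argument made above.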