So, thanks for coming. I'm going to talk about what we've been doing with the Weaver project over the past year and a half since we open-sourced it, though it's a project that's almost three years in the making.

The value proposition of interop probably doesn't need much explanation for this audience. We have a lot of different networks out there running distinct business workflows, and these networks are built on permissioned DLTs. The problem is that once you build such networks, they acquire a life of their own, and you don't really want to expand or merge them with other networks, for a variety of reasons: privacy, regulatory, performance, auditability. But in the real world, the processes these different networks run, like a trade logistics network, a payments network, and a KYC network, are symbiotically related to each other. So these networks eventually discover reasons to interlink or interoperate with each other.

What we are trying to do with interoperability is enable the seamless flow of data and value across different kinds of networks built on heterogeneous DLT technologies, so that they can conduct transactions and do useful things without being limited by network boundaries, and we want to do this in a way that preserves their trust and security tenets. Being able to orchestrate or enable cross-network linking also enables scale without forcing mergers; that's really what I want to leave you with on this slide.

Now, different people have different views on how to achieve this, and people have expressed different opinions on the very definition of interoperability. In the previous panel I talked about my definition; let me just show you a couple of other views. Interoperability solutions can range along a spectrum according to the level of centralization, or the level of trust, they require when interlinking two networks.

If you look at the first pattern here: take multiple Fabric networks. Fabric networks are composed of peers owned by different organizations, so if a single organization participates in multiple networks, you can imagine it running a peer that has access to all the different networks' ledgers. You can then build an application that harnesses the information and the contracts on these different ledgers to do something useful. It's a form of interoperation, but it serves just the organization running the peer; it does not serve the networks collectively. That's one limitation.

Another pattern people have described as interoperability is the ability for peers running on different kinds of hardware and cloud infrastructure to communicate, and to achieve consensus on blocks, as part of a single network. That's another valid definition of interoperability, but it's not what we are talking about here.
One way people were advocating interoperability ought to be done, when we stepped into this field, was: why not have applications running on different networks, at layer two, the client layer, expose APIs, and thereby link the contracts via these applications? The problem is that, in effect, you have reduced a network to a trusted proxy, which is this application. The individual networks still run on decentralized principles, but once you bring two networks together this way, you have created a centralized scenario, and ideally we don't want that.

So what we wanted to achieve was a deeper level of interoperation, where networks can interact with each other in a decentralized manner without requiring any kind of trusted proxy. The applications would continue to run on their respective ledgers, but you would not use any of those applications, say one using a particular wallet identity, as a trusted conduit to a different network.

So what exactly are we trying to achieve? Our observation was that we could boil down the different kinds of cross-network scenarios to three patterns: data sharing, asset exchange, and asset transfer. Data sharing means the ability to communicate ledger records across two distinct ledgers, and thereby to link two different smart contracts together; I'll show you an example shortly. Asset exchange is simple to understand: you want to be able to swap assets across ledgers, and you want to do this atomically. Asset transfer: you have an asset on one ledger and you want to move it to a different ledger for some business purpose. How do you achieve that?

For data sharing, here's the model. You have two networks. In network A there's a contract, implicit here, and a data record on the ledger. Network B is running another contract or business workflow which, for some reason, needs this data record. What would you do ordinarily? Network B would ordinarily just have a process whereby some client is trusted to fetch the data record from network A and supply it in a transaction to network B, which could validate it before commitment. Instead, if we can have a process by which network B requests this data record from network A in an institutionalized way, gets back a response along with a state proof generated using network A's consensus, and then validates that proof, also using consensus, before consuming the information, that would be great. What we have achieved then is a direct pipe between the two networks through which information can flow between groups of parties without any trusted proxy in the middle, and the two networks can stay in sync. This can be very useful in a lot of scenarios; there's a small sketch of these steps below.
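To make that flow concrete, here is a minimal sketch in TypeScript-style pseudocode. The function and type names here (requestRemoteView, verifyViewProof, commitWithProof) are invented stand-ins for the steps, not Weaver's actual API, which I'll get to later.

```typescript
// Illustrative pseudocode of the cross-network data-sharing flow described
// above. These types and functions are invented stand-ins, not Weaver's API.

interface StateProof {
  signatures: Uint8Array[];               // attestations from network A's peers
  metadata: Record<string, string>;
}

interface View {
  data: Uint8Array;                       // the requested ledger record
  proof: StateProof;                      // proof under network A's consensus
}

// Stand-ins for the relay/driver plumbing and the local interoperation contract.
declare function requestRemoteView(viewAddress: string): Promise<View>;
declare function verifyViewProof(proof: StateProof, data: Uint8Array): Promise<boolean>;
declare function commitWithProof(data: Uint8Array, proof: StateProof): Promise<void>;

async function shareData(viewAddress: string): Promise<void> {
  // 1. Network B asks (via its relay) for a view of network A's ledger.
  const view = await requestRemoteView(viewAddress);

  // 2. Network B validates the accompanying proof against network A's known
  //    membership before trusting the data; no single peer is relied upon.
  if (!(await verifyViewProof(view.proof, view.data))) {
    throw new Error('remote state proof failed validation');
  }

  // 3. Only then is the record committed on network B's ledger.
  await commitWithProof(view.data, view.proof);
}
```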
One scenario I'll point out here is a trade finance network. This diagram looks complex, so I'll just give you the elevator pitch. The trade finance network processes letters of credit: an instrument by which, when a seller (an exporter) ships goods to a buyer (an importer), the buyer is obligated to make a payment to the seller.

Now, for the seller to ask the buyer for payment, the seller has to provide some evidence of having shipped the goods. Where does that evidence lie? In the real world, as it happens, networks have emerged to handle trade financing, things like processing letters of credit, separately from networks that handle the shipping part of the trade life cycle. Real-world shipping involves both processing consignment documentation and financing, but different networks do these different tasks. So the evidence of the seller having shipped the goods lives on the trade logistics ledger, in the form of a document called a bill of lading. On the trade finance ledger, you have the letter of credit, which needs a valid bill of lading before the buyer can be asked to make a payment to the seller.

So what can we do here? Look at this arrow, step number 11. If the trade logistics network can supply a bill of lading to the trade finance network, then the trade finance network knows the seller has dispatched the goods it was obliged to, and therefore the buyer needs to pay the seller. In the absence of that, the trade finance network would have to depend on the seller to provide a bill of lading, but the seller has an incentive to supply a fake bill of lading in order to get paid. By linking these two networks and having one fetch a data record from the other through an interoperation channel, we can avoid these kinds of hazards, where one party can hijack a business workflow. In this particular workflow we also added another step whereby the trade logistics network can fetch a valid letter of credit from the trade finance network. And this example is not limited to bills of lading or trade: it generalizes to any sort of data record, any artifact you would want to share across two different ledgers, across two different business processes.

Asset exchange is a very different case from data sharing, but it's also one of the building blocks of interoperation. In network A, party X owns an asset M; in network B, party Y owns an asset N. The outcome you want is that Y gets M from X in the first network in exchange for giving N to X in the other network. What is the challenge here? This needs to happen atomically: you don't want a final outcome where one transfer happened but the other did not, or vice versa. Doing that across two ledgers that have completely different governance and are completely independent of each other is a big challenge. As an example, there's delivery-versus-payment, which you heard about in the previous panel: one network manages bonds, another manages currency accounts, and if two parties A and B have accounts on both ledgers, A can sell a bond to B on one network in exchange for payment on the other, atomically, if this kind of asset exchange feature is available.

And finally, the asset transfer feature is one where you want a party in one network to be able to transfer an asset it owns to a different party on a different network.
In this case X and Y could be the same party, but in the most general case they can be different. What's happening here is that the asset gets expunged from one network and recreated in another, maybe in a somewhat different form, but an equivalent form, something both networks have agreed is equivalent. That is another basic use case of interoperation. As an example, imagine two central bank digital currency networks, with one party wishing to transfer CBDC from its account in one network to another party's CBDC account on a different network. And this generalizes to any other kind of asset.

Let me see how I'm doing on time. Okay. So, these interoperation modes: our claim is that any cross-network process interdependency can be realized as a combination of these scenarios. Data sharing, asset exchange, and asset transfer collectively cover almost all of the use cases you can imagine when you bring two different blockchain or DLT networks together. From a modeling perspective, think of it this way. For cross-network dependencies, you can have a unidirectional dependency, where a read in one network triggers a write in another; that is the data sharing use case. And you can have bidirectional dependencies, where a write in one network triggers a write in another, and both have to happen atomically; the two scenarios there are asset transfer and asset exchange.

Okay, so there are unique challenges we face in DLT interoperability that we did not face in the traditional centralized service interoperability case, mainly that the authority over state lies in a collective, and in the consensus protocol it employs to ensure that state's integrity. With a single party, as long as you trust the party supplying information or transferring an asset to you, you're good: interoperability is just a matter of ensuring that both of you follow the right message formats. With multi-party networks it's not so straightforward. You have to be able to trust the information coming from a network, and you cannot boil that down to trusting a single peer in that network, because a single peer can always lie, and every network is geared to guard against a single peer lying. But what does a foreign network do, given that it does not participate in the consensus protocol of the network it's talking to? That's the main challenge.

Now, when we started investigating, we saw solutions out there that followed this pattern: you build yet another blockchain, a settlement network or settlement chain, to which your network can plug in as a sidechain. Via the settlement network, which provides the assurances, two different sidechains can do the kinds of things I talked about: share data, transfer assets, or exchange assets. But the problem, or at least one drawback, is that you have to depend on the settlement chain, and the way you connect or plug into this ecosystem is intrusive: validators belonging to the settlement network have to be part of your network and be privy to your private information.
This kind of solution works if all of these networks are already open, public networks. But for private networks, this seemed to us like a suboptimal solution. How can we enable private networks to interoperate without having to surrender their sovereignty, in some ways, to another network, and while guarding their privacy? That was the challenge we set out to solve, and this is what we have done over the past three years: allow two networks to directly interact with each other, without any kind of settlement network in the middle and without any kind of trusted proxy. We feel that's a less constricting way of doing interoperation, and for permissioned networks it's really the more optimal way.

To give a brief history of the Weaver project: it began as an IBM Research project in 2019, when we identified the use cases that we thought covered the spectrum of interoperability. At the end of 2019 we built a prototype linking two different Fabric networks, modeled loosely on the TradeLens and we.trade networks, and we published a paper at the Middleware conference that established the basis and the engineering principles for such interoperability. In 2020 we extended support to Corda networks. You may wonder why we picked Corda after Fabric: it was because we wanted a DLT that was as different from Fabric as possible, just to show that the approach we were researching would work, and that the principles we were trying to enforce were applicable to any kind of DLT regardless of the tech stack it's built on. So we did that. Later in 2020 we built the first version of Weaver: we cleaned up the code, wrote proper RFCs, and made the code quite modular. And finally, in early 2021, we open-sourced Weaver under Hyperledger Labs. Since the latter part of that year we've been having discussions with the Hyperledger Cactus team, and as you all know by now, we're going to merge Weaver with Cactus to form Hyperledger Cacti. There are some links here; I'll show you the links toward the end as well.

Just a shout-out to our team: between us we have four researchers and developers working either full-time or part-time, and we have Vinayaka, our research manager and thought leader, who's been involved in this space for four years now.

Weaver is built on several design principles that are really important to understand, and we wanted to make sure we built a system that satisfies all of them. Inclusiveness means that we do not want a system that enforces a particular approach tied to a given DLT. We did not want Weaver to be a Fabric-flavored or Ethereum-flavored interoperability solution; we wanted it to be completely neutral, while accommodating Fabric, Ethereum, Corda, what have you, and any network you might envision creating in the future. We also wanted to ensure that the networks retain sovereignty over their own governance processes and their asset control rules.
If you imagine interoperability of the kind I talked about, sharing data, exchanging assets, transferring assets, there is some sort of intrusion happening into a network. No network, especially one built on permissioned principles, would want anybody to just poke in and, say, extract an asset. Whatever happens in a cross-network scenario has to be fully under the control of the network. So asset control is really a core, primary feature we wanted to provide to networks in cross-network scenarios.

Minimal trust: any interaction between parties in two networks should be private and confidential, and revealed only to the interested parties. We did not want to rely on any intermediary, whether a trusted third party or a common settlement network. Any shared infrastructure we relied on would be minimal. I say minimal because we ended up discovering that we need identity infrastructure, for example, in order to establish a trust basis for two networks to interoperate. That's something I don't think I have the scope to cover today, but I'll leave you with it, and we can have offline chats if you're interested.

We wanted an interoperability protocol that would leverage the native consensus protocols of the respective networks, rather than using any other kind of consensus mechanism. Whatever happens across networks should respect and follow the consensus rules that have already been set out; we did not want to impose any other consensus logic over what was already supported. What this ensures is that any interoperation enabled using Weaver is as trustworthy as the networks' existing transaction commitment processes. And finally, we did not want to require any changes to the core DLT platforms: Weaver does not require a fork of Fabric or Corda or Ethereum and so on. It's a slanted list of principles, but we wanted to provide a solution that ensures all of these, because we believe they constitute an optimal set of requirements for permissioned networks.

Okay, I have until 6, right? What time do we end? Let me see how much I can cover here. I'd like to mention that we have a tokens workshop tomorrow where we will be covering scenarios in which Weaver was actually used in a real experiment, involving not just Weaver but the Fabric Token SDK as well. So if I can't cover some protocols, please try to attend that workshop; we will be covering them tomorrow.

So this is the Weaver vision. You have a network, and what Weaver adds to it is something called a relay, along with a protocol driver, using which it can communicate or interact with a different network that similarly deploys a relay and a driver. The arms of the triangle simply show the three interoperation modes: asset transfer, data sharing, and asset exchange. And what exactly is the relay? The relay has two parts. It has a completely DLT-neutral part, a communication protocol whereby two relays can talk to each other, exchange messages, and do all the good things you expect from a message management entity, like message queuing. Then there is a DLT-specific portion called the driver, which is necessary either to query a particular network or to commit a transaction in a network. Those queries and commitments are very specific to a particular DLT: you query or commit a transaction to a Fabric network in a very different way than you would in a Corda network or an Ethereum network. So the protocol drivers understand the DLT protocol; the relay part does not. The relay just runs a completely neutral, DLT-agnostic protocol; the sketch below renders that split in code.
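One way to picture the split: the relay handles DLT-agnostic message transport, while each driver implements a small DLT-specific surface. Here is a hypothetical TypeScript rendering; these interfaces are my illustration, not Weaver's actual types.

```typescript
// Hypothetical rendering of the relay/driver split (not Weaver's actual types).

// The driver is the only DLT-aware piece: one implementation per platform.
interface Driver {
  // Resolve a view address segment into ledger data plus a state proof.
  query(viewSegment: string): Promise<{ data: Uint8Array; proof: Uint8Array }>;
  // Submit a transaction in the platform's native fashion.
  commit(tx: Uint8Array): Promise<void>;
}

// The relay never interprets ledger data; it just routes opaque messages
// between networks and hands local work to the registered driver.
class Relay {
  constructor(private localDriver: Driver) {}

  async handleRemoteRequest(viewSegment: string) {
    // DLT specifics stay entirely inside the driver.
    return this.localDriver.query(viewSegment);
  }
}

// A Fabric driver and a Corda driver would implement the same interface with
// completely different internals (gateway SDK calls vs. Corda RPC flows).
```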
Views are a concept we introduced, akin to the views you can imagine in a traditional database. In a database, a view is just some procedure you run to extract some data. We are extending that concept to cross-network scenarios: the core feature we have built into Weaver is the ability for one network to supply a view address, which looks something like a URL, and communicate it to a different network, and that network can then supply the corresponding information by parsing the view address. This is the feature whereby you can run queries across networks, and it's a building block of all the protocols that Weaver provides.

If you imagine this simplified network-to-network communication, you have two networks, each consisting of a group of peers, with the relays enabling communication across them. The relays support the ability to address remote views: this network can supply a view address to its relay, which then communicates with the other network's relay, which fetches the view corresponding to that view address and supplies it back. So you get an end-to-end request-response protocol that way. We also imagine doing more complex operations across networks, like having a contract in one network invoke a contract in another, or publishing and subscribing to events across network boundaries. So the relays act as communication modules, and the networks act as validation and commitment modules. Below is an illustration of what a view address might look like.
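Going back to the view address itself: for a Fabric network, one might be composed like the following sketch. The segment layout assumed here (relay endpoint, network ID, then a channel:chaincode:function:args view segment) is my reading of Weaver's RFCs, so check those for the authoritative grammar; the endpoint, network, and record names are made up.

```typescript
// Hedged sketch: composing a Weaver-style view address for a Fabric network.
// Assumed layout: <relay-endpoint>/<network-id>/<channel>:<chaincode>:<function>:<args...>

function fabricViewAddress(
  relayEndpoint: string,  // remote network's relay, e.g. 'relay.network-a.example:9080'
  networkId: string,      // e.g. 'trade-logistics-network'
  channel: string,
  chaincode: string,
  func: string,
  args: string[]
): string {
  const viewSegment = [channel, chaincode, func, ...args].join(':');
  return `${relayEndpoint}/${networkId}/${viewSegment}`;
}

// Example: ask the trade logistics network for a bill of lading record.
const address = fabricViewAddress(
  'relay.network-a.example:9080',
  'trade-logistics-network',
  'tradechannel',
  'shipmentcc',
  'GetBillOfLading',
  ['PO12345']
);
// -> 'relay.network-a.example:9080/trade-logistics-network/tradechannel:shipmentcc:GetBillOfLading:PO12345'
```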
Moving to what the networks themselves are doing: here's the set of operations each network needs to support in order to run the kinds of protocols you want. Access control; the ability to generate proofs of ledger state; and the ability to independently validate such ledger state proofs, so if one network generates a proof, the other network must be able to validate it without, say, getting privileged access to the first network's ledger. The networks must also be able to lock or pledge assets, and then be able to claim them. If you attended the interoperability formalization presentation a couple of hours ago: the ability to lock an asset, especially in a time-bound manner, is a key enabling feature for both asset transfers and asset exchanges. Without the ability to lock assets, a network cannot be deemed interoperable. But most DLT platforms today have that ability, so it's something we can rely on when we augment a network with Weaver capabilities. Our claim is that this is a complete set of building blocks to realize any cross-network dependency.

So, the three modes, the three kinds of use cases, that Weaver supports. Data sharing is simply the ability to generate and verify proofs, with a request-response protocol across the two networks to enable it: one network requests ledger data from another; the supplying network generates a proof associated with the data; and the consuming network verifies the proof accompanying that data. That request-response protocol achieves the data sharing property.

Asset transfer, the way we have engineered it, involves multiple such data sharing instances. We don't have time to go into the full protocol, which would take at least half an hour of discussion; please reach out to us or look at our documentation for the details.

Asset exchange is something we have built on the hash time lock contract (HTLC) mechanism. If you attended the Casper Labs demo earlier today, they talked about it: if Alice and Bob wish to exchange assets across two different ledgers, they can use a protocol that involves generating a secret and producing a hash of that secret in order to lock assets, and then claim them, in a way whereby neither Alice nor Bob can cheat the other; that's really how you enforce atomicity. We implemented that protocol in Weaver, and at present we are working on augmenting it, making it more fail-safe and automated. There's a sketch of the basic pattern below.
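To give a flavor of the mechanism, here is a minimal, self-contained sketch of hash time lock logic in TypeScript. This is not Weaver's implementation: the types and helpers are invented for illustration, and in practice the lock/claim/refund logic lives inside each ledger's smart contracts rather than in client code.

```typescript
import { createHash, randomBytes } from 'crypto';

// Minimal illustration of the hash-time-lock pattern (not Weaver's actual code).

interface HashLock {
  hashHex: string;      // H = SHA-256(secret), published by the initiator
  expiresAt: number;    // Unix ms timestamp after which the locker can reclaim
  claimed: boolean;
}

function makeSecret(): { secret: Buffer; hashHex: string } {
  const secret = randomBytes(32);
  const hashHex = createHash('sha256').update(secret).digest('hex');
  return { secret, hashHex };
}

function lockAsset(hashHex: string, timeoutMs: number): HashLock {
  // Asset is escrowed under hash H until expiresAt.
  return { hashHex, expiresAt: Date.now() + timeoutMs, claimed: false };
}

function claimAsset(lock: HashLock, secret: Buffer): boolean {
  // Claim succeeds only with the correct preimage, before the timeout.
  const h = createHash('sha256').update(secret).digest('hex');
  if (Date.now() < lock.expiresAt && h === lock.hashHex && !lock.claimed) {
    lock.claimed = true;
    return true;
  }
  return false;
}

function refundAsset(lock: HashLock): boolean {
  // After the timeout, an unclaimed asset goes back to its original owner.
  return Date.now() >= lock.expiresAt && !lock.claimed;
}

// Flow: Alice locks M on ledger 1 under H with timeout 2T; Bob, seeing that,
// locks N on ledger 2 under the same H with the shorter timeout T. Alice
// claims N by revealing the secret, which makes it public and lets Bob claim M.
const { secret, hashHex } = makeSecret();
const lockM = lockAsset(hashHex, 2 * 60 * 60 * 1000); // Alice's lock, 2T
const lockN = lockAsset(hashHex, 1 * 60 * 60 * 1000); // Bob's lock, T
console.log(claimAsset(lockN, secret)); // Alice claims Bob's asset: true
console.log(claimAsset(lockM, secret)); // Bob claims Alice's asset: true
```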
So the Weaver architecture consists of these components. Let me just mention what you would need to do as a network administrator and as a developer. As a network administrator, you have to deploy a relay and a driver, which come with Weaver: all you need to do is take the relay and driver from the Weaver code base, build them, and adapt a configuration file to your particular network. The interoperation module is built in the style of a contract or a dApp. In Fabric, it's built as chaincode, so all you need to do is deploy the Fabric Interoperation Chaincode on your channel, and your network is Weaver-enabled, interoperation-ready. In Corda, we supply the equivalent as a CorDapp, a Corda distributed application, which performs a similar set of features. What features? The ones I mentioned a couple of slides ago: generate proofs, validate proofs, lock assets, claim assets, and so on. Finally, there are application helpers, library functions, which allow any client application to trigger the transactions that make two networks share data, or to trigger an asset lock, an asset claim, or an asset transfer.

So this is the difference between a traditional Fabric application, with its smart contracts and layer-two application, and a Weaver-augmented one. You can see it on the slide; I won't go into the details, as I want to stop in a couple of minutes for questions. Maybe we can go into the other parts in the Q&A.

Let me just cut to the SDK that we provide. What does this look like exactly? Suppose you want to trigger a cross-network data sharing request. As a developer, you can use the Weaver SDK alongside the Fabric Node SDK. For Fabric networks we offer the SDK both in Node.js and in Go, and for Corda in Kotlin; these are the native languages you would use to program on the respective DLTs anyway.

In Fabric, to trigger a cross-network data sharing request, as a developer you just need to call this interopFlow function. Now, it has a laundry list of parameters and looks complicated, but it's just a single function; it's sort of like submitting a job. You create a view address, which is one of the parameters of this interopFlow function, and then the query goes via the relays to the other network, a proof is generated, the proof comes back, a local commitment is triggered, and the proof is validated and committed, assuming everything goes right. All of that happens via just this one interopFlow function; we are trying to make running cross-network data sharing as easy for the developer as possible. Similarly, this is the equivalent for a Corda network: the same interopFlow function, again with its list of parameters. I'm going to skip these next slides; a rough sketch of such a call follows.
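As a rough illustration of that developer experience, a call might look something like the sketch below. I'm paraphrasing the parameter list from memory, so treat the package path and the argument shapes as assumptions to verify against the Weaver documentation rather than as a copy-paste recipe; the endpoints and chaincode names are made up.

```typescript
// Hedged sketch of triggering cross-network data sharing from a Fabric client.
// Package name and parameter shapes approximate Weaver's Node SDK; consult
// the Weaver docs for the authoritative interopFlow signature.
import { InteroperableHelper } from '@hyperledger-labs/weaver-fabric-interop-sdk';

async function fetchRemoteBillOfLading(interopContract: any, networkId: string) {
  const viewAddress =
    'relay.network-a.example:9080/trade-logistics-network/' +
    'tradechannel:shipmentcc:GetBillOfLading:PO12345';

  // One call drives the whole flow: relay request, remote proof generation,
  // local proof validation, and commitment of the remote data.
  const result = await InteroperableHelper.interopFlow(
    interopContract,                // handle to the local interoperation chaincode
    networkId,                      // this (requesting) network's ID
    {
      channel: 'tradechannel',
      ccFunc: 'RecordBillOfLading', // local transaction to invoke with the view
      ccArgs: ['PO12345', ''],      // placeholder arg to be filled by the view
      contractName: 'tradefinancecc',
    },
    'Org1MSP',                      // requesting organization
    'localhost:9081',               // local relay endpoint
    [1],                            // indices of ccArgs to replace with view data
    [{ address: viewAddress, Sign: true, ccArgs: [] }],  // remote view requests
  );
  return result;
}
```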
So what is the status of Weaver so far? As you can see, we need some common appliances. The relay exists as a common appliance, and we need to build drivers for every DLT separately. Right now we have drivers for Fabric and Corda, and we are trying to build a driver for Hyperledger Besu; support for Fabric and Corda is more mature than for Besu at this point, but that's something we are actively working on. We recently added event pub/sub support for Hyperledger Fabric networks, so that from one Fabric network you can subscribe to events from operations that happen in a different Fabric network, and the other Fabric network will then channel those events, publishing them to the subscriber. Decentralized identity is something we've been working on as a trust basis for these interoperations, because what two permissioned networks need in order to generate and validate each other's proofs is to know each other's certificate authorities' certificates, and to validate members of those networks against those CAs. So we need the ability to communicate group identities and CA certificates across networks; that's what this feature is all about.

To use Weaver, you can just go to the documentation: we have tutorials that get the Weaver samples up and running from start to finish, and the instructions are all perfectly clear. If you have an existing network you want to adapt to Weaver, there are instructions for that too. If you want to contribute to Weaver or gain more information, please go to the RFCs folder or read the research papers on the overview page; I promise they are fairly easy to read.

In the real world (this is something I mentioned yesterday as well, and I'm not going to talk in detail about it), to find out about the use case involving CBDCs and the experiment done by the Banque de France and HSBC, do attend the tokens workshop tomorrow and you'll get the full download.

On Hyperledger Cacti: I think this audience by now knows what's going on with this. We have been talking to the Cactus team since late 2021, and I talked in the earlier session about what the merged framework will roughly look like. This is something that's still in the works, and it will probably take a few months to complete.

There is also a standardization effort going on: the secure asset transfer protocol, which Rafael and I couldn't talk about earlier. There are several links here for your inspection; I think this is the most important. We solicit involvement from anyone with an interest and an opinion on this topic, so please subscribe to the mailing list; you're welcome to join the weekly meetings at 9 a.m. Eastern time. To report on what's going on: the aim is to form an official working group under the IETF. We made a presentation in the Birds of a Feather session at the last meeting, just two months ago, and it was quite well received. At this point we're in the process of refining the charter and making sure it gets accepted as a working group at the next meeting. The goal of this effort is to offer network gateways compatible with a universal, DLT- and app-neutral protocol, whereby two gateways working on behalf of two different networks can communicate with each other and trade assets or communicate data. The gateways keep the networks behind them opaque and make sure we are not hamstrung by any particular DLT protocol: the gateway-to-gateway protocol, what we call SATP, is not specific to Fabric or Ethereum or Corda and so on. There's a small illustration of that idea below.
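Just to make the gateway idea concrete, here is a tiny invented data shape in TypeScript for what a DLT-neutral gateway-to-gateway transfer message might carry. This is my illustration of the concept, not the actual SATP message format; that is defined in the IETF drafts linked here.

```typescript
// Invented illustration of a DLT-neutral gateway-to-gateway message.
// This is NOT the actual SATP wire format; see the IETF drafts for that.

interface GatewayTransferMessage {
  sessionId: string;            // correlates the multi-step transfer session
  originGateway: string;        // e.g. 'gateway.network-a.example'
  destinationGateway: string;   // e.g. 'gateway.network-b.example'
  assetProfileId: string;       // agreed description of the asset type
  amount: string;               // decimal string to avoid floating-point issues
  // Note what is absent: no channel names, chaincode IDs, or contract
  // addresses. Each gateway translates to and from its own DLT internally,
  // keeping the network behind it opaque to the counterparty.
  originProof: Uint8Array;      // proof the asset was locked/burned at origin
  signature: Uint8Array;        // origin gateway's signature over the message
}
```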
So that's what we have, and we have several features in the pipeline. One of the major things we're going to be doing is merging with Cactus, so that's going to occupy our time for the moment. So thank you, and we're happy to take any questions if we have time, please. Can you come to the mic?

Audience: Yeah, thank you very much, that was very informative. For the Besu connector that you're working on, is that targeting EVM-compatible chains in general, or specifically Hyperledger Besu?

So far, specifically Hyperledger Besu, but in the long run, depending on how many people can contribute, we would like to be able to expand support to generalized EVM too. At this point I'm not exactly sure; I think Dhinakaran can probably offer a more informed opinion. We have been trying to understand the best way to build a driver for Besu, and it involves somewhat different logic from what we've used in both Fabric and Corda. So I don't have a complete answer to that right now.

Audience: The question I'd ask is, what would restrict it to Besu? Unless you're using private transactions or the privacy modes, if you just stick to JSON-RPC and the standard Ethereum calls and standard EVM contracts, and don't rely on any Besu-specific precompiles, it should transport to any EVM JSON-RPC system out there, whether it's Geth or Avalanche or some other chain like that.

That's great to hear, thank you. So at this point it doesn't seem like there is anything restricting it to Besu, but maybe we'll run the architecture and the plan we have by you. Thanks. Any other questions, please?

Audience: Have you thought about fault tolerance for these relays and gateways? Is this just one node, or can the gateways be replicated somehow?

Yeah, great question. I did not cover that part here; the engineering of a relay is a discussion that can go on for as long as you want. But yes, we can have multiple redundant relays, and we can have multiple redundant drivers. For fault tolerance and failover, in production you should ideally have multiple relays; you should not depend on just one relay. I mentioned early on that the relay was, according to our design principles, supposed to be a trustless component: it should not end up being a trusted proxy. So our protocols ensure that any data going over the relay is already encrypted and already signed, so that when you get the data at the other end, in the network across the other relay, decryption happens only past the relay, and the signature validation can also happen there. So the relay doesn't need to be trusted not to mount a man-in-the-middle attack, for example, because it can't, at least modulo the strength of the cryptographic algorithms. It could potentially mount a denial-of-service attack, but for that you should have enough redundancy in your relays, and Weaver allows you to configure multiple relays.

I'll mention one more thing. For the asset exchange protocol, we have built support for the HTLC, the hash time lock, protocol. That by itself has some points of inefficiency and vulnerability, and what we are trying to do, with one of the Hyperledger interns who's been working with us over the summer, is add more automation and failover to the HTLC protocol. This involves doing some research as well as some redesign. One of our goals is to build an augmented HTLC protocol that can actually work in production, because when you talk to different clients, companies, or governmental institutions, they are really concerned about the performance of asset exchanges, so we want a mechanism that can guarantee the exchange will complete. The basic HTLC protocol depends on Alice and Bob, the two exchangers, doing the right thing at the right time. Alice and Bob have an incentive to do so, but in the real world, at least when you're using this protocol for high-value financial transactions, you don't want to depend just on that; you want the system to be more robust. So to build a more robust asset exchange protocol, we are building capabilities that involve the relays communicating information about locks and proofs, and enforcing claims even if, let's say, Alice or Bob fails to do so. Again, this is a longer discussion, but in a nutshell: HTLC as we implemented it right now depends on users driving the protocol, and we are instituting more automation and failover.

Any other questions? Okay, I guess, yeah, it's been a long day, and we are over time. So see you all at the, Guinness, what is it called? Storehouse? Okay. Thank you, everyone. Thank you.