We're very appreciative of Peter and Rama for taking the time today to give an update on Hyperledger Cactus and the Weaver project and how they foster interoperability within the ecosystem. So at this point, I will hand it off to Peter and Rama for the presentation, and as David mentioned, we'll be happy to answer any questions; just feel free to put them in the chat. I'll also be keeping an eye on the YouTube live stream and bringing up any questions that are asked there as well. The floor is yours, both of you.

Thank you, Chris. A very quick point about the agenda. We have the intro, then Rama and I will keep passing the slides back and forth. We'll talk about interoperability in general, then we'll have two little lightning talks about the respective projects we represent, which is Hyperledger Cactus for me and Hyperledger Labs Weaver for Rama. And then, finally, we'll close out with a collaboration progress update, because what we're trying to do is bring the two projects together under the aegis of Hyperledger. And then we'll take questions, so there will be plenty of time for that as well.

A little bit about Hyperledger: it's an organization. The name has "ledger" in it, but the word Hyperledger actually stands for a set of projects, and you can find out more about it at the links. We also have a code of conduct, so please, everyone, try to abide by that. There's more information about how it's modular and part of the Linux Foundation. And hopefully we can get more people involved, so if you want to get involved with working on open source software, then definitely reach out.

And so, the two projects. I will not give detailed explanations about what they are, because we'll have plenty of slides for that after this. One of them is Cactus and the other one is Weaver. Suffice to say, they're both going in the same direction; some details are different, but the vision is kind of the same.

On the topic of blockchain interoperability, how do we define it? There will be more details from Rama about how and what it is, but the way I want to put it is this: if you have two different distributed ledger technologies, or ledgers, deployed, and you have use cases on those, then as a business person you can get a certain amount of value out of using them individually, which in my mini fake equation goes as V1 and V2. The way I define interoperability, in broad terms, is that anything you build that makes DLT 1 and DLT 2 somehow work together produces a value V3. If that V3 is bigger than the combined value you were getting from the two DLTs separately, then you have yourself an interoperability use case. That's a very 10,000-foot kind of perspective, but I'm keeping it simple for now intentionally.

And then, why do you want interoperability? Three super quick points. You want to address fragmentation, which is a problem especially for enterprise users who need stability. You also want to save developers from reinventing the wheel every single time they have to connect one ledger to another in some way. And then, back to the enterprise use cases again: you want to lower the risk of adoption, because the space right now is producing innovation at such a breakneck pace that it is risky to get in, in the sense that the technology you pick today could be obsolete in a couple of years. So we aim to reduce that risk as well.
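In symbols, Peter's "mini fake equation" for when you have an interoperability use case is simply:

```latex
V_3 > V_1 + V_2
```

where $V_1$ and $V_2$ are the values of using the two ledgers separately, and $V_3$ is the value produced by making them work together.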
One more note on fragmentation, just to put it in terms of numbers. The number of different ledgers, or DLTs, in existence is growing, and in the ideal case where you want all of them to be able to talk to all the other types, which is the intuitive default, the number of integrations needed grows quadratically with the number of ledgers. That means if you have 100 different ledgers and you want all of them to be able to communicate with each other, you need roughly 5,000 integration scenarios. And obviously, if you're trying to get some business application or use case out the door, that seems pretty daunting. And with that, I will hand over to Rama.

Thanks, Peter. Just to emphasize the point Peter made a couple of slides ago, if we look at the figure at the right: we have several different enterprise blockchain or DLT networks available today, and they all serve different purposes. The key is that each of them solves a very restricted portion of whatever business workflow needs to be run in a particular enterprise sector. One network could be devoted to maintaining know-your-customer records. Another could be devoted to processing payments. Another could be managing a trade logistics workflow, tracking the export of goods from one country to another. Another network could be managing insurance contracts, and there are others not mentioned here, like financing and so on. So you have all these different networks that have been built on permissioned ledger technology, and in the real world they cannot afford to remain isolated: their business processes are inevitably, inextricably interlinked.

At the same time, there were good reasons why the members of these networks chose to keep them limited and separate from other networks. So we don't want to force networks to merge and create some super-consortium; rather, we want to give networks the opportunity to interoperate with each other as they need. That's the point we're trying to make here: given all these networks, we want to enable the seamless flow of data and value across them in a manner that preserves trust and security. Just because we are enabling cross-network interoperation does not mean we would sacrifice the principles of decentralized trust that blockchains themselves were designed to provide in the first place. If we enable such cooperation, we will end up removing network data and value silos. We will end up scaling all of these small-scale blockchains into potentially one super-blockchain, but without the administrative and privacy-breaking task of forcing everybody to be part of the same network. This improves network effects and increases market sizes: all the good things you get in the business world. You can also build complex business functionality across networks. You have smart contracts running on these different platforms, and you can in effect build a super smart contract that encompasses several such networks. Next slide, please.

If you have been reading the literature on interoperability, different people mean different things when they use the word. So I'm just referring to a few views here before talking about what exactly it is that projects like Cactus and Weaver are trying to do.
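A quick sanity check of the quadratic-growth claim above, as a short TypeScript sketch (the exact pairwise count for 100 ledgers is 4,950, which is the "roughly 5,000" quoted):

```typescript
// Number of pairwise integrations needed for n ledger types,
// if every type must be able to talk to every other type.
function pairwiseIntegrations(n: number): number {
  return (n * (n - 1)) / 2; // "n choose 2" grows quadratically with n
}

console.log(pairwiseIntegrations(100)); // 4950
```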
If you look at pattern one at the top left, there's a view of interoperability that was espoused a few years ago, where you can think of different permissioned networks that share a common peer. That peer is managed by some enterprise which happens to be part of the consortium of network A as well as of network C. So interoperability in that scenario means simply writing some software that runs on the shared peer, orchestrates transactions across the different networks, and has access to the data of the different peers. In effect, you're adding some sort of integration software. That's one view of interoperability.

Another view goes in a different direction, which gives us pattern two. You want any blockchain or DLT network to be able to run on heterogeneous environments: if you have a peer network, it should not matter if some peers run on IBM's SaaS, others run on AWS, others on Oracle Cloud, and some on-premises with an enterprise. You want the blockchain or DLT software to maintain consensus over the peers regardless of what hardware and what cloud they're running on. That's another view of interoperability, and a very valid one, but that's also not what we are aiming for.

What we are aiming for is really, as mentioned in the previous slide and as Peter mentioned earlier: you have two different networks, just looking at two networks for a moment, and you want them to be able to share data and assets with each other, and also to allow some kind of complex business workflow to happen across the two networks that harnesses the data and the power that exist in both. Can we do that by simply representing a network by proxy? Every DLT or blockchain today, whether Hyperledger Fabric, Corda, or any Ethereum, offers you the ability to build applications over the network. Can you just have applications that have access to their particular networks talk to each other, in effect using the same service integration patterns that people are familiar with from the enterprise world? The thing is, then you're reducing a network to a kind of trusted proxy. A particular application in effect acts as a centralized party that represents the network, and that violates blockchain principles; it violates the principle of decentralized trust.

So what we want is something deeper. We want to enable two networks to interoperate, to communicate, at a network-to-network level. That is, the groups that comprise network A and the groups that comprise network B should be able to connect with each other as groups, not use any sort of trusted third party or trusted proxies, but be able to project their consensus views onto the other network, and thereby enable complex transactions and also share their ledger states with each other. We'll come to the details as we talk about the Weaver and Cactus projects later. Peter, next slide, please.

So the concerns of interoperability can be viewed in different layers. If you look at this layer diagram, think of the OSI stack for networking; this is sort of like that. At the bottom, you have data going over the wire, and we need to ensure that two networks are able to communicate over the wire with delivery guarantees, negotiating a session, and so on: standard networking.
Above that, you start getting to more complexity, to things that are specific to blockchain or distributed ledger networks as opposed to conventional networks. We would like to define particular protocols and particular payload formats that will be commonly used by two blockchain networks to connect. Further above, we want to be able to communicate the semantics of distributed ledgers. As opposed to a single centralized service or some ERP system communicating with a remote entity, we cannot just have some data structure being requested and communicated across the wire. That would suffice for conventional networking; here, in addition, what you want is some notion of the fact that the data reflects state that has been agreed to by a consensus of parties. So along with the raw data, you also need something else to convey the fact that the endpoint is a group of parties maintaining a shared ledger rather than a single party. That's where the notion of state, cryptographic tools, and identity come in. And above that, we can layer various protocols. There are different kinds of protocols, and I'll talk about these in the subsequent slides.

Further up, past the actual message communication, we'd like to think about standards that enable two networks operating in the same domain to interoperate. There's a standard called GS1, for example, which is attempting to create specifications for all blockchain or DLT networks that engage in any part of the trade ecosystem: trade logistics, trade finance, payments. Further up still, there are concerns about governance. Different networks put up different policies for how they govern their own networks, members joining and leaving, what the update policies for the ledgers are, and so on. These parts are a little more fuzzy at this point, and we need to think a bit more about them, but we put them here to show that those concerns are going to arise sooner or later. Next slide, please.

Okay, this diagram shows the different scopes of interoperability, the spectrum of possibilities we can achieve if we enable interoperability among networks. If you look at this connection here, think of two different ledgers running on the same network. That sort of thing is possible with, say, Hyperledger Fabric or Ethereum or Corda. Both of these ledgers have their own shared truth, and they are distinct from each other, right? So interoperability is important even for those two ledgers: even though they're part of the same network, you still need a way for them to share information with some assurance, as well as to run complex transactions. Similarly, you have a ledger on one network that needs to interoperate with a ledger running on a different network that uses the same DLT protocol; the DLT protocol on both sides could be, say, Hyperledger Fabric. Going from there, you also want the ability for ledgers running on a network built on, say, Hyperledger Fabric to interoperate with a ledger running on a different network built on, say, Corda or Hyperledger Besu. So there are different levels of interoperability. Now, when we actually build the mechanisms, it turns out you can build certain common mechanisms that are applicable to all of these different patterns.
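As an illustration of what such a common mechanism might look like, here is a hypothetical DLT-agnostic interface that each ledger-specific component could implement. This is a sketch for intuition only, not the actual Cactus or Weaver API:

```typescript
// Hypothetical DLT-agnostic surface that a ledger-specific driver could
// implement, so the same cross-network mechanism works for Fabric, Corda,
// Besu, etc. All names here are illustrative assumptions.
interface LedgerDriver {
  // Invoke a smart contract / chaincode / flow on the underlying ledger.
  invokeContract(contractId: string, method: string, args: string[]): Promise<string>;

  // Query ledger state without submitting a transaction.
  queryState(key: string): Promise<string>;

  // Return state together with a proof that it was agreed to by the
  // source network's consensus (the key requirement discussed below).
  fetchStateWithProof(key: string): Promise<{ state: string; proof: Uint8Array }>;
}
```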
So the interoperability model that folks like Cactus and we are building is quite powerful, as we'll show. Going to the left here, we would also like a distributed ledger to be interoperable with non-distributed systems like ERP systems, and also to rely on oracles and so on. And finally, there are vertical concerns like policy and governance, which are a bit more fuzzy at this point, but they will always arise when you have two different networks that are being operated by two different consortia. That is, the network operators have to determine when they need interoperation, when they will open up the network to interoperation, and when they will keep it shut to any outside connections. In the interest of time, let's move ahead. Peter, next slide. Yeah. Okay.

I think this is the last slide before we move into the projects. So, what are the unique technical challenges of interoperability? As I mentioned earlier, in the traditional services-based world it's fairly easy to communicate: as long as you define a protocol and you have services exposing a particular endpoint using, let's say, REST, you allow independent parties to interoperate with each other. But in a distributed ledger network scenario, our endpoints are multiple parties. The authority over the state that each network governs lies in a collective, and the protocol, that is, the consensus protocol they employ, ensures the integrity of that state. There are rules according to which any state becomes final, and all the honest nodes of that network are going to agree on whether a particular state is final. Now, when we want to enable interoperability, this integrity assurance has to be communicated whenever a transaction happens between two networks or data is communicated from one network to another. So when one network consumes state from another, it needs to establish the veracity of the state being supplied, and that veracity has to reflect the shared consensus view of the parties. The consensus views of both networks always have to play a part when you're talking about two networks interoperating.

So how do we enable this? We need some notion of proofs and verification. When any state is requested and consumed by another network, the state information has to be accompanied by some proof, and this proof has to be validatable without the foreign entity being able to observe the ledger of the source network. Remember, these can be two permissioned networks: a node in this network is not privy to the blocks of the ledger of that network. So any consumer has to obtain an independently verifiable cryptographic proof of the validity of the state. That's really the key here. The nature of this proof, as well as what denotes validity, can take different forms.

Then we also have to think about data versus assets. An asset we can think of as something that has a single instance: it's some piece of information recorded on the ledger which moves from place to place, so if it goes from one place to another, or if it changes ownership, there is always just one copy of it, rather than copies being made. Data, on the other hand, is something that can actually be copied from ledger to ledger.
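To make the proof requirement just described concrete, here is a minimal TypeScript sketch of a proof-carrying state response and its verification. All names, the endorsement shape, and the "count valid peer signatures" policy are hypothetical illustrations, not Weaver's actual wire format:

```typescript
import { createVerify } from "crypto";

// A view of remote ledger state, accompanied by endorsements from the
// source network's peers (hypothetical shape, for illustration only).
interface ProofCarryingView {
  state: string;                 // the requested state data
  endorsements: {
    peerCertPem: string;         // signer's certificate (PEM)
    signature: Buffer;           // signature over `state`
  }[];
}

// The consumer checks the proof against its own standard of validity,
// e.g. "at least minEndorsements valid peer signatures", WITHOUT ever
// reading the source network's ledger directly.
function verifyView(view: ProofCarryingView, minEndorsements: number): boolean {
  const valid = view.endorsements.filter((e) => {
    const verifier = createVerify("SHA256");
    verifier.update(view.state);
    return verifier.verify(e.peerCertPem, e.signature);
  });
  return valid.length >= minEndorsements;
}
```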
Coming back to data versus assets: when you move or copy data from one party to another, even on the same ledger, copies can be made. The notion of data versus asset is important because it impacts what protocol you would use for interoperation. For example, if you want an asset to move from one ledger to another, it has to disappear from the source ledger. If you want to copy data from one ledger to another, it won't disappear; you just copy it, but additionally you have to provide some assurance that it was valid state. Also, these activities must be coordinated without any sort of central mediator or clock: we cannot assume that the two networks are synchronized. Okay, so those are the technical challenges, and now we can go to the project details.

Let's skip this one in the interest of time, Peter; we can maybe talk about it towards the end.

So, on to the project-specific parts. I will say a few words about Cactus, and then Rama will say a few words about Weaver. The first thing I have to say about Cactus is the safe harbor: it is in incubation status within Hyperledger, which means it is not yet at active status, meaning there hasn't been a stable 1.0 GA release yet, and therefore it's not considered production-ready yet. We are hoping to get there as soon as possible, but at the same time, I don't want to claim something that simply isn't true yet. That's why it's important to note this at the beginning.

So what is it? It's a pluggable enterprise framework, and it should hopefully make it easier to transact on different ledgers without having to learn all the different ledgers that you or your project are dealing with. So it should save time, and it should bring all those benefits that I outlined in the beginning regarding interoperability. You can also sort of think about it as an SDK of SDKs, because it encompasses, or provides access to, all these other SDKs that the ledgers usually ship with themselves.

Then there's its position in the Hyperledger greenhouse, the greenhouse being the collection of projects that make up Hyperledger. That's where Cactus sits: under tools. But you could also make an argument that it's a library, in the sense that if you're an application developer whose scope on a business application is just to implement a specific feature connecting to a specific ledger or set of ledgers, then it will feel much more like just a library, quote unquote.

A few words about the design principles; I'll be very brief. Definitely the most important is secure by default. We're trying to avoid situations where the project gets popular, everyone deploys it, and then issues start to come up where insecure or unsecured defaults cause people to have data breaches or enable attacks on them. That's by far the most important. After that comes the plugin architecture, which in my personal opinion is the coolest part. We're trying to maximize the flexibility of the framework and make it future-proof, so that even if the DLT landscape changes over time, we can adapt instead of becoming obsolete along with older technologies that are no longer in use. And then the other important thing to say is that it's totally free, meaning there's no built-in mechanism to charge for usage. There are no gas fees or transaction fees, there's no bidding mechanism, and there is no cryptocurrency involved with Cactus itself whatsoever.
Obviously, if you want to implement something like that, if that is your use case, then you can make Cactus part of it, but Cactus itself does not do any of that, so that it's properly open source and easy to deploy. That leads me to low-impact deployment, which means that ideally you can deploy Cactus without having to make any special modifications to the distributed ledgers you're already running in production, if you have any.

A few additional design principles. Wide support means we don't want to just cherry-pick the most popular ledger technologies to support; we would prefer if a good chunk, like 90 to 95% of them, were supported. Obviously there's an insane amount of development resources required for this, but this is where the plugin architecture comes in. I'll talk more about the governance later, but for now, suffice it to say that anyone, anywhere, can create a plugin for Cactus that adds support for any ledger not currently on the officially supported list. What we are hoping to do there is leverage the open source community growth around the project.

Then, prevent double spending where applicable. This is important to say because people have expectations regarding what they can and cannot do, and I always have to highlight that there are public permissionless ledgers out there that do not have guaranteed transaction finality, for example ones that use proof of work as the consensus algorithm, where there's always the possibility of a fork. The probability of it decreases over time, but technically it's always possible. So Cactus is not a magic bullet: if you are doing some sort of transaction between two ledgers, and one of them is public and permissionless while the other, let's say, is permissioned and completely under the control of you or a consortium you're a member of, then there could be a situation where the public permissionless ledger forks and you end up either being out of the money, or someone ends up double spending.

Preserving ledger features means that opting to use Cactus should not limit what you can do with the ledgers. If the ledgers you chose are technologies that have some exotic, cool features no other ledger has, then you should still be able to leverage those in your application. Finally, horizontal scalability, which means we work very hard on the architectural design of the Cactus API server to make sure that you can deploy any number of them in a cluster, an auto-scaling group, or any other structure your cloud provider allows for. If you want to read more about these and the use case ideas we have, please visit the white paper that I linked below.

Now, on to our architecture decisions. The project itself is mostly written in TypeScript, and we do bundling with Webpack. Someone is asking for the link to the white paper; okay, let me just go back. So it is here. Oh, no, sorry, I cannot copy it right now, but I'll share it afterwards; I'll upload the slides and then you'll have the link as well. So, back to the bundling with Webpack.
We want that because we want to lower the barrier to entry: we want to be able to deploy Cactus in resource-constrained environments, maybe even cloud functions or lambda functions, where the name depends on your favorite cloud provider. And we use Lerna plus Yarn to manage a monorepo, which is the backbone of the plugin architecture, meaning there are separate packages within the same Git repository. That allows us to quickly add new ones, and to have code sharing between the back end and the front end. So we have packages that are cross-platform and universal, meaning they run the same way both in Node.js and in the browser.

The other big decision is to focus a lot on test automation. We have dozens of end-to-end tests that pull up a pristine ledger for each of our supported ledgers, such as Corda, Fabric, Quorum, Besu, Iroha, etc. So we have tests where pristine ledgers are pulled up in containers, and then the Cactus connector plugins are verified against those actual ledgers, instead of just doing mocking or stubbing or any other unit testing strategy, which is still good and has value, but is not as sure a thing as actually testing your code against the real ledger.

One more note on the plugin architecture: we don't know the future. We don't know which ledgers are going to be popular a year from now; we don't know if a new one is going to come along, or current ones are going to be deprecated, etc. That's why we have the plugin architecture, where we hope to be able to respond to these changes as time passes. We are basically in it for the long haul.

And then, what I promised a few slides earlier regarding the governance model. I want to hash this out because it's important. If you want to add support for any ledger to Cactus, you can just write a plugin. You don't even need to put that code in the Cactus repository: you can host it on your own and maintain it on your own, so there's no need for you to get any sort of permission from the Cactus maintainers if you don't want to. The pros and cons are there for you to weigh, and there's complete flexibility. If you'd rather have it in the central Cactus repository, that's also fine, but then you have to go through the review process, which is not as bad as it sounds. And with that said, I'll pass it back to Rama, who will give a similar quick intro about Weaver.

Should I share my screen? I'll go faster. You can, but I'm happy to just keep pressing the slides when you tell me. Okay, sure. Next slide, please. Before I talk about the system: the way we began, when we were thinking about what to build for an interoperability platform, was to ask what categories of use cases we want to satisfy. We call them interoperability modes here, and there are three of them: asset transfers, asset exchanges, and data transfers. Between them, we believe they cover the whole spectrum of cross-network transactions that you would like. I'll show them through examples. Next slide, Peter.

Asset transfer is indicated by the model you see in the top diagram here. You have two networks, and there's a party in network A which wishes to give an asset to a party in network B. So in the beginning, the asset is owned by X in network A, and Y in network B does not have it; at the end of the transaction, the asset is owned by Y, and X does not own it anymore. As an example, at the bottom, you can have two different retail central bank digital currency networks.
And if you have different banks on different networks, a bank may want to transfer digital currency from its account in one network to the account of another commercial bank on the other network. That's an example of the scenario we were talking about. Next slide.

Asset exchange is related to asset transfer in that something has to happen atomically in both networks, just like in an asset transfer, but it does not involve an asset actually moving across network boundaries. If you look at the diagram at the top, you have two parties, X and Y, that are members of two different networks, A and B. X owns asset M in network A, and Y owns asset N in network B. At the end of this exchange, what you want is for the assets to change hands: Y ends up owning M in network A, and X ends up owning N in network B. The key is that this has to happen atomically. Despite the fact that the two networks do not have any centralized coordinator or any central clock, you want the assets to change hands in both networks, or neither of those exchanges to happen. As an example, at the bottom, consider a network on the left which manages bonds on a ledger for different commercial banks, and on the right, different banks that maintain currency accounts on a central bank digital currency network. An example of this kind of asset exchange is delivery versus payment, where a bond gets issued by one bank to another in one network in exchange for a payment in the other direction in the other network. Next slide.

Next is data transfer, or sharing. As I mentioned a short while ago, data is different from an asset in that data is just some state information agreed to by the parties, the stakeholders of a ledger, of which copies can be made. That data can be important in driving forward the business process, or the smart contract, in a given network. The example below conveys this. It is a rather complicated workflow, but let me try to simplify it. You have a trade finance network on the left and a trade logistics network on the right. What is the trade finance network doing here? It's processing what's called a letter of credit, which is an instrument in the trading world and the banking world whereby the bank of someone who is importing goods promises a seller that if the seller ships certain goods and shows documentary proof that the goods have been dispatched by a carrier, then the bank is obligated to make the payment. The reason for having this letter of credit is the inherent mistrust between the seller and the buyer, the exporter and the importer: if the exporter ships the goods first, there's no guarantee he'll get the payment, and if, on the other hand, the importer makes the payment first, he has no guarantee of getting the goods. So that's what the network on the left, the trade finance network, is doing. The network on the right, the logistics network, is managing the shipment; it's tracking the shipping. You have a seller who is dispatching goods via a carrier, and once the goods are dispatched, a document called a bill of lading gets recorded on the ledger of the network on the right.
Now, if that bill of lading were to be shared with the trade finance network, then the trade finance network could enforce the payment obligation from the buyer's bank to the seller, or actually to the seller's bank. Today, when these two networks are distinct and separate, we need some sort of trusted intermediary, and usually that would be the seller. But the seller is an interested party; the seller has an incentive to dissemble about the nature of a bill of lading. So if we can build a connection, a data-sharing pipeline, between the two networks, whereby a document like the bill of lading can be shared with assurance, then that makes the cross-network transaction trustworthy. Next slide, Peter.

When we began building Weaver, we had several design principles. Several systems existed at that time, a couple of examples being Cosmos and Polkadot, and also systems like Ethereum Plasma, which were trying to build a common relay network that would enable settlement among different side networks or side chains. So you have a common infrastructure, itself built on blockchain principles, mediating the interaction between two different networks; that is a common approach to interoperability. We wanted to step away from that and instead avoid such reliance, and that led to our various design principles. One of them is inclusiveness: we want to avoid approaches that are specific to any particular DLT. Whether your networks are running Fabric or Sawtooth or Corda, it should not matter. The interoperating networks must retain sovereignty over their processes as well as the rules governing access: the networks should be able to determine when and how they interoperate with another. A minimal trust footprint is, of course, a classic security principle. Privacy by design means that any interaction between parties belonging to two different networks should be private and confidential, and only the interested parties should actually be privy to it. No intermediaries: we do not want any sort of intermediary, nor do we want to rely on trusted infrastructure. As it happens, in the Weaver project we do rely on some trusted identity infrastructure in order to facilitate transactions, but we do not rely on any infrastructure for the actual settlements. We believe this constitutes a minimal set of requirements that will facilitate adoption, and it can be applicable to interoperability in different contexts. Next slide.

The diagram at the left looks complicated, but at a high level, you have three networks. What Weaver offers is the ability for these three networks to run any kind of interoperation protocol drawn from the three use cases, the three modes, that I talked about: asset transfer, data transfer, and asset exchange. The mediating entity, or you can call it a gateway, that a network uses to interoperate is a component we call the relay. It's a service that every network owns and exposes to the outside world. What the left diagram shows is simply the different configurations by which networks can interoperate via these relays, and also with existing ERP systems. And if you look at the bottom, we have decentralized identity registries; those are also a key part of how you would engineer any sort of cross-network interaction.
If we have time to talk about the identity registries in Q&A, I'll do that, but otherwise I'll skip them for now. For every protocol here, that is, data transfer, asset exchange, or asset transfer, there are concerns we have to manage at the different layers of the stack, as we defined them earlier in the presentation. Next slide.

So what does this really look like? The relay is built using a microservice architecture, and this module is actually very similar in scope, objectives, and function to the Cactus connector. Peter will talk later about how he is at present trying to integrate the Weaver relay with the Cactus connector. The relay has two different parts: a DLT-independent part and a DLT-specific part. That's the left and the right, respectively. What does the relay do? As you can see at the left, the relays have to talk to each other, so they have to speak some sort of protocol, right? We want this protocol to be network-neutral: it should not be tied to any particular DLT, and it should not have anything to do with Fabric or Corda or Sawtooth or Besu and so on. But then internally, when you have two networks trying to communicate instructions, or requests for state, to each other, there has to be some DLT-specific component that can reach the peers and either run a smart contract transaction or look up some ledger data. That's what the boxes on the right are showing. These are pluggable modules, we call them drivers, and they are specific to a particular DLT. So for any network, what you need to do to make it Weaver-enabled and interoperation-ready is deploy a common relay, which does not have anything to do with that particular DLT stack, and then also plug in a driver which is specific to that particular DLT. At present, we have the common relay component built, and we also have drivers built for Fabric and Corda. We are working on one for Besu; hopefully it will be ready by the end of the year or early next year. Next slide.

The relay, as I mentioned, can be deployed in several different ways, and this is actually an important concern when it comes to network administration. I'm just going to show you how we envision this happening in Fabric. A relay has to reach the network, right? So it needs to have some credentials to be able to access the peers and the ledger content. In model A, a relay is issued credentials by one particular organization. As you may know, in Fabric, a network is divided into several different organizations, which are the fundamental units, the members, of that network. So any member of the network can build a relay, issue credentials to it, and offer it to the network. The reason this model works is that our relay does not need to be trusted: it does not actually matter which organization it belongs to, because we don't trust it for anything other than availability. Similarly, as model B shows, the relay can be part of the ordering service organization, or, as model C shows, it can be its own organization. Because of this trust-free nature, it doesn't really matter how you organize it.
And similarly, we can deploy, and we have examples for Corda networks as well, where a relay can be affiliated with a particular Corda node, which is one of the primary members of the network, or it can be its own separate node, or it could even be associated with the notary, for example. Next slide.

Okay, this again is a rather busy diagram; it's there to illustrate the data sharing protocol that I talked about. Without going into details: on both sides, you can see certain common modules. At the leftmost end, you have SDK libraries in the Corda network; in the middle, you have certain contracts; and at the right, you have a relay and a driver. Similarly, on the network at the right, which is representative of a Fabric network, you have a relay and a driver, and then SDK libraries used by the Fabric client applications. You also have what we call system contracts; that's not to be confused with Fabric system chaincode. What we mean by that is common contracts that you deploy on all the peers, in effect acting as library functions.

The steps of the protocol simply describe how a client in one network can make a request for data which is state on another network, and also provide a verification policy that tells the other network what standard of proof it's going to accept. On the other network, the request is handled by the driver, which will invoke the system contracts, as well as the application contracts indirectly. It will collect the data as well as the proof and return them to the client on the requesting network, which will then trigger a smart contract transaction, in the Corda network it's a flow, that can independently verify the proof against the verification policy and update its ledger. So this is a quite complicated protocol, but at its heart it's a request-response protocol. It's just that at the endpoints, particular things happen by consensus. On the source network, the network providing the data, the one on the right in this case, access control rules are run on the different peers to determine by consensus whether the request should be satisfied, and the data is then provided with endorsements, or signatures. And on the network on the left, the proof is independently validated by the different nodes of the Corda network, within the flow, to ensure that the state is actually valid. How they do this independent validation depends on some identity management, the sharing of the root and intermediate certificates, which is a part I haven't talked about here; there are more details in our project RFCs as well as in the paper that talks about it, if you go to our website. Next slide. Let's skip this one in the interest of time and go to the next slide.

So this is a snapshot of the capabilities we have built or are building at this time. The modules refer to different services as well as different libraries; the protocols refer to the three interoperability modes that I talked about in the beginning: data transfer, asset exchange, asset transfer. Looking at the modules first, we're building the DLT-agnostic relays as well as the DLT-specific drivers.
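The request-response flow just walked through, compressed into a hypothetical client-side sketch. It reuses the illustrative ProofCarryingView/verifyView shapes from the earlier sketch; the relay endpoint, payload format, and policy shape are all assumptions, not Weaver's actual SDK or wire protocol:

```typescript
// Hypothetical policy: what standard of proof the requester will accept.
interface VerificationPolicy {
  minEndorsements: number;       // e.g. "at least 2 valid peer signatures"
  trustedRootCertsPem: string[]; // roots obtained via prior identity exchange
                                 // (a fuller check would chain-validate
                                 // endorsement certs against these)
}

async function requestRemoteState(
  relayUrl: string,
  viewAddress: string, // address of the state on the remote network
  policy: VerificationPolicy,
): Promise<string> {
  // 1. Send the request (with the policy) to the local relay, which forwards
  //    it to the remote relay; there, the driver invokes system contracts,
  //    access control is checked by consensus on the peers, and
  //    endorsements are collected. (Node 18+ global fetch.)
  const response = await fetch(`${relayUrl}/request-state`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ viewAddress, policy }),
  });
  const view = await response.json(); // { state, endorsements }, as sketched earlier

  // 2. Independently verify the proof against the policy before trusting it.
  //    In a real deployment this check runs by consensus inside the
  //    requesting network's own smart contract or flow, not in a client.
  if (!verifyView(view, policy.minEndorsements)) {
    throw new Error("proof does not satisfy the verification policy");
  }
  return view.state;
}
```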
Then we're building the DLT-specific contracts, what you saw as the system contracts on the two sides before. These manage proofs, and also asset lock management, because we also run hash time lock contracts to effect asset exchanges. There are also associated SDKs for use by any Corda distributed application in a Corda network, or a Fabric client application in Fabric; we're also working on a Hyperledger Besu version of that. Then there is decentralized identity management, which I don't have time to go into right now, but if you go to our project RFCs, you'll see a list of specifications for that. Looking at the protocols: we have support built for data transfer for Fabric and Corda; Besu is not done yet. Asset transfer is a protocol that relies on data transfer, that is, on the ability to request data and validate the state using the proof. Asset exchange is a protocol built on the common hash time lock mechanism, which most of you may be aware of; at present, that's supported for Fabric and Corda, and it's in the works for Besu. The check marks mean that a particular module or feature has been completed, the orange dots refer to things that are going on now, and the gray dots refer to things that are not being actively worked on yet, but will be in the future. Okay, I think, Peter, back to you.

Thank you, Rama. Now we will talk about how we're trying to get the two projects together. Oh, wait, Rama, this is still your slide. You can go ahead and talk, Peter. All right.

So the common goal is to join forces between the two projects, because there's strength in numbers, and there's no point in doing the same thing multiple times if we can instead build some of those things once and make them better because we had more resources to develop them. That's the broad vision. Timeline-wise, I'm always very careful about giving estimates, because everything always takes much longer than we would like, but it would be great to have something tangible in the first half of 2022, next year. The idea is that we would merge some parts of the projects. We would take it step by step; it wouldn't be a big one-size-fits-all, one-off operation, but instead we would gradually ramp it up and make it better over time. An important bit is that we would welcome Weaver maintainers into Cactus as Cactus maintainers. The reason I'm saying that is important is that this is not some sort of takeover: it is meant to be legitimately a joining of forces, on good terms and equal terms, and everyone should have a voice in the governance of the project going forward. Then there are development efforts by Cactus maintainers, mostly regarding protocols and technical mechanisms that we need to adjust in Cactus to make this possible; I'll talk more about that later. The goal for the common framework is to cover all interoperability modes, or use cases, meaning that if you read any interoperability paper that summarizes the different modes, or if you just look back at the ones we talked about today, we would ideally be able to cover all of them, because the framework would be flexible enough, and the joint effort would have more resources to actually support all of them, because we're talking about a lot of work. The merged framework would have a common code base, but it would be divided into packages.
I have ideas about how we could leverage GitHub's code owners feature for this, which lets you mark each file or each directory within the source tree as being owned by certain maintainers. This way, we could divide up the work efficiently, in that we could have patches of the code that are maintained by person A and other patches that are maintained by person B.

As for the challenges: first and foremost, the two projects use different programming languages. Cactus is written in TypeScript; the Weaver relay is written in Rust. Drivers for the ledgers sometimes have to be written in whatever language the ledger itself mandates, but in this context, when I say Weaver, I mostly mean the Weaver relay for now, and that's written in Rust. And then there are also runtime incompatibilities in the APIs. For example, right now, even if the Weaver relay were written in TypeScript, it wouldn't adhere to the Cactus plugin API surface, in the sense that you could not load it into the API server as a plugin module.

So, the solutions we've come up with. One: use WebAssembly as a common, quote unquote, "bytecode". I say quote unquote because WebAssembly is not called a bytecode, as far as I know, but I imagine a lot of people are familiar with Java bytecode and the .NET common language runtime bytecode, and they know how to associate the word with the context: basically, you can compile down to a common language, or instruction set, and then have different programming languages actually work together if they both support it. Two: for the runtime incompatibility, we will make it possible to have coexisting implementations within the project through the plugin architecture, which means that just because there's a different, or slightly different, implementation of something, one that uses a different algorithm or a different trust model, it doesn't mean that it and the ideas already in the project cannot coexist. What this allows users, or consumers, of the framework to do, at the end, is pick and choose their tradeoffs, because in technology, and pretty much anywhere in life, any decision you make comes with tradeoffs. The best we can do is give you the option to choose which tradeoffs you want, so that you're not forced into a specific one.

A little more about WebAssembly, just in case you haven't heard of it or you're not sure what it is. It's a W3C standard, and it is a binary instruction format for a stack-based virtual machine. That doesn't matter much; the big news here is that it has shipped and it is supported by the four major browser engines, the big ones. If you have been in the industry for a while, then you know that this is a very, very special thing to achieve for any kind of standard or technology, because a lot of new things that come out in the web space are not like that at all, unfortunately.

A little more about how we will achieve this, something I've been looking into personally and made some really good headway with: compile Rust code down into WebAssembly with a tool called wasm-pack, which at the end of the day is an extension to Rust's built-in build tool, Cargo. It gives you a lot of really helpful project templates for setting up a Rust project that can pretty much compile down to WebAssembly out of the box.
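As a rough illustration of the direction Peter describes, here is a minimal sketch of consuming a wasm-pack build (target "nodejs") from TypeScript. The package name and the exported function are hypothetical, not actual Cactus or Weaver artifacts:

```typescript
// Hypothetical package generated by `wasm-pack build --target nodejs`;
// functions written in Rust surface here as ordinary module exports.
import { validate_proof } from "weaver-relay-wasm";

async function main(): Promise<void> {
  // To the TypeScript caller it is transparent that this function body
  // was originally written in Rust and compiled to WebAssembly.
  const ok: boolean = validate_proof(
    JSON.stringify({ state: "dummy-state", endorsements: [] }),
  );
  console.log(`proof valid: ${ok}`);
}

main().catch(console.error);
```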
The wasm-pack tooling also has useful components to deal with the lower-level mechanical differences between the JavaScript environment and the Rust language runtime, meaning it has language components that allow you to write an async Rust function that actually returns a Promise to the JavaScript caller. This way, there's a much, much closer cooperation between the two runtimes. The other thing it can do, which is great, is support serialization and deserialization into actual types. So if you have a class in JavaScript, or ECMAScript 6, however you want to call it, you can pass that into Rust and have it automatically converted into a struct in Rust, which is sort of the equivalent of a JavaScript class. Not really, but it's as close as it gets.

What this boils down to is that you can compile TypeScript down to JavaScript and load that as a plugin into the Cactus API server, which is a Node.js process, and you can compile Rust down to WebAssembly and also load that as a module into the Cactus API server. This way, for both the TypeScript code and the Rust code, it's basically transparent whether the method they're calling is actually implemented in TypeScript or Rust. For now, this has only been proven for Rust, but in the future, we can of course foresee it happening with other languages. I know Go also has a target for Wasm, and so does Kotlin. So the possibilities here are endless, and this is why I'm very excited about WebAssembly and why I'm mentioning it so much: it's the cornerstone of being able to do collaborations like this.

Standardization perspectives: I'll just skip this slide because we're running short on time. We have some reference materials here about standardization efforts, in case someone wants to do a lot more reading and research. There are really good initiatives out there, and a lot of people doing a lot of really good work in this space, especially within the IETF working group and the ISO, so please feel free to look these up as well. And if you want to get involved with Hyperledger at large, because there are a lot of other projects there, not just the two we introduced today, here are a few links. You're very much encouraged to go to Hyperledger's YouTube channel as well; there are very helpful, very short guides about how to get involved at different levels, depending on what you're interested in. And with that, I'll hand it off to the questions from the audience.

Thanks a lot, both of you. Looking at the questions right now: how fast is the transfer across ledgers?

How fast is the transfer across ledgers? Me first, then over to you, Rama. OK, so for Cactus, we don't have any published benchmarks yet, but we aim to have them published, and in a way that lets you reproduce them. But I can share the design principle around this, which is that we want to make sure that Cactus is never the bottleneck between two ledgers. So the end-to-end speed would always be the maximum performance achievable by those two ledgers, and Cactus would just not make a difference in that sense.

Got it. And then, could you comment on how Cactus compares to Rosetta? Rosetta, yes. They're different, in the sense that Rosetta is something that we plan on implementing in Cactus; we plan on supporting it.
Rosetta, as far as I know, is a specification for how to connect to different blockchains. So it's something that we would want to enable for you to use if you're using Cactus.

Got it. Any other questions right now? Also, Rama, did you want to answer the performance question regarding Weaver?

My response would be the same as yours. As you say, we want the Weaver relay to not be a bottleneck. And there is an effort ongoing; we are aware of a student who's trying to measure the performance of interoperability frameworks, so we will see how that goes. But we don't have any published benchmark numbers for Weaver at this point.

Got it. And then, regarding the Rosetta incorporation, when do you foresee that being completed?

It is subject to prioritization. It will probably not be in the 1.0 release, but it may make it into the 2.0, which I personally hope we'll be able to get out maybe three to six months after the 1.0. That's not a promise, that's just my estimate.

Got it. All right, any other questions? I see: "I want to do a benchmark myself. What are the recommendations?" In other words, how would you go about benchmarking the frameworks yourself?

OK. Yeah, I think at least for Weaver, we have documentation that helps you get started. You can spin up a couple of minimal test networks, two on Fabric and one on Corda. In the minimal instance, the Fabric networks have a single peer, which is not representative at all, but at least it brings up a network that allows you to complete an end-to-end protocol. You can run that network, and then you can write any sort of performance benchmarking code on top of it; you can try to use any of the available load generators to drive transactions. The instructions show you exactly how to run an interoperation query. This is, I'm talking about data sharing, which is what we mainly started off with. We also have instructions for doing cross-network asset exchanges, that is, using HTLCs (hash time lock contracts). So you can follow the instructions for each of them, and then you have to write your own test harness on top of that: you'll have to build a load generator, pump in load according to whatever traffic pattern you want, and write some additional code to collect statistics like throughput and latency and so on.

Got it, thank you. "Is there any sample code to get started with Cactus, like a Hello World program?" I just typed out the response.

Got it. And do you see some interoperability of asset exchange with public blockchains, using Weaver, Cactus, or Besu?

Yeah. We almost have asset exchange working where one network is Hyperledger Besu; we're doing some last, final refactoring on the code before we merge it into main. The Besu side of it is built on the ERC-20 library, so assuming you have a fungible asset that you would like to exchange on a Besu network in exchange for some other asset on another Besu network, or a Fabric or Corda network, you can do that. So watch out for that: the Besu support should be coming quite soon. We already have Fabric support for these kinds of asset exchanges, and Corda is mostly done; one version is already done, and we are trying to refine it.
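Since hash time lock contracts came up a few times, here is a minimal TypeScript sketch of the hashlock-plus-timelock check at the core of an HTLC-based exchange. This illustrates the general mechanism only, under assumed names; it is not Weaver's or Cactus's implementation:

```typescript
import { createHash } from "crypto";

// Core state of a hash time lock: the locked asset can be claimed by
// revealing the preimage of `hashLock` before `deadline`, or refunded
// to the original owner after the deadline passes.
interface HashTimeLock {
  hashLock: string; // SHA-256 hash of a secret preimage, hex-encoded
  deadline: number; // unix timestamp (ms) after which a refund is allowed
}

function canClaim(lock: HashTimeLock, preimage: string, now: number): boolean {
  const digest = createHash("sha256").update(preimage).digest("hex");
  return digest === lock.hashLock && now < lock.deadline;
}

function canRefund(lock: HashTimeLock, now: number): boolean {
  return now >= lock.deadline;
}

// Both networks lock their assets under the SAME hashLock. When the first
// party claims by revealing the preimage, the counterparty learns it and
// can claim on the other network; if nobody claims, both sides get refunds
// after their deadlines. That is what makes the exchange atomic without a
// central coordinator or shared clock.
```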
As I said, Besu with the ERC-20-based assets is coming very soon.

Got it. I know we are a little bit over time, so if there are no other questions, I would say feel free to reach out to Peter and Rama if there's anything else. Again, thank you both for volunteering to give this; very interesting. And thank you to the audience for your questions. We look forward to seeing you at the next meetup. Thanks, both of you.

Thank you for having me. Thanks, everyone.

All right. Thank you.