Good afternoon everyone. I hope you're having a good conference. It's really nice to be talking about interoperability here at Hyperledger Global Forum 2022. We had our first contact with Hyperledger Cactus a while ago, and after working on the project for some time we've seen how much it has evolved, which also motivates this work. In this presentation, Dina and I are going to talk about the formalization, systematization, and standardization of blockchain interoperability — a joint effort between several Hyperledger members, showing that an interdisciplinary approach is needed to solve this problem.

You're probably all familiar with interoperability. Essentially, it helps us bring liquidity to other networks, it helps us scale our blockchain solutions, and it helps us reduce risk, for example by migrating our applications to a more secure blockchain or by pegging our security to a stronger blockchain. However, we've seen that blockchain bridges, the most typical cross-chain use case, have been constantly hacked: about one and a half billion dollars in damages just this year, despite several academic and industry works on the topic. We would like to mitigate these attacks, and our hypothesis is that by formalizing blockchain interoperability, and then instantiating our solutions from that more general model, we may help mitigate some of the attack vectors. This is a summary of the bridge exploits from last year.

Our first step is trying to understand the interoperability landscape, and we've done some work in this area. We have a survey published at ACM Computing Surveys where we try to make sense of the heterogeneity of the field: basically, we aggregate interoperability solutions into different categories based on a set of criteria. One important insight is that interoperability is not solely asset transfers — bridges are not the only use case for interoperability. For example, Hyperledger Cactus and Weaver, now merged as Cacti, support more general types of interoperability, so we can have cross-chain use cases that are not only asset transfers. It's also interesting to observe that there are cross-chain APIs — APIs that allow you to connect to several blockchains and perform computation based on the results — for example Ubiquity's cross-chain API from Blockdaemon, and many others.

When we're reasoning about interoperability, it's important to understand what object you want to interoperate on. We call this the interoperation mode, and we divide it into data transfers, asset transfers, and asset exchanges. Asset exchanges are different from asset transfers, and I'll explain how a bit later. Then we want to measure whether a system can interoperate with other systems as it is. We call this the potentiality of a system, and we measure it with what we call the P levels. Then we have the C levels, for compatibility, which answer the question of whether two systems can interoperate with each other as they are. We'll explain this soon.

Data transfer is typically very simple: we have a piece of data on a source blockchain, an interoperability mechanism (or interoperability solution) in the middle, and we copy that data to the target blockchain via the IM. Typically we might also provide some proofs of validity, but not necessarily. Then we have asset transfers, where the idea is to move an asset from a source blockchain to a target blockchain.
We usually do this by locking the asset on the source blockchain and minting a representation of that asset on the target blockchain. Note that the idea is to keep the total circulation of that asset consistent across chains — that's why we need to lock and unlock. Otherwise the asset is not backed by anything on the source chain, which is problematic when bridges are hacked: for example, if the target blockchain is hacked and the funds are drained, or the other way around, if the funds are drained on the source blockchain, the minted assets on the target blockchain lose their backing and therefore their value. So we lock, then we have an entity in the middle — it can be more centralized or more decentralized — that attests to the validity of that transaction, and then the asset is minted on the target blockchain.

Then we have asset exchanges. Asset exchanges do not require a burn-and-mint or lock-and-mint mechanism; rather, the depositor locks some value or some asset in an escrow contract. That escrow contract can be redeemed by providing the preimage of a cryptographic hash, which is called the secret. In other words, if I have the secret, I can redeem the funds. Visually, we have Alice and Bob on chains A and B. Alice creates a secret, which is hidden by a cryptographic hash, and she locks the asset into a vault — a smart contract — such that if Bob provides the red key he can get the asset; that's the transaction where he gets the asset. Similarly to Alice, Bob deploys a smart contract on his chain such that if Alice provides the key she can redeem the funds. The point of interest here is that the key is the same: when Alice opens the vault and redeems the funds, Bob learns the key and can use it on the first vault to redeem the funds. So both parties get the funds, and both parties use atomic transactions on their own blockchain; you only need some synchronization happening off-chain for this to work.
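To make the exchange mechanism concrete, here is a minimal, in-memory TypeScript sketch of the hash-lock idea, assuming Node's built-in crypto module. It is not code from Cacti or from any chain, the class and variable names are purely illustrative, and a real HTLC would also carry a timeout so the depositor can reclaim unclaimed funds.

```typescript
// Illustrative sketch of a hash-lock escrow (the core of an HTLC-style asset exchange).
import { createHash } from "crypto";

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

class HashLockEscrow {
  private claimed = false;
  constructor(
    public readonly amount: number,
    private readonly hashlock: string,    // hash of the secret chosen by Alice
    private readonly beneficiary: string  // the party allowed to claim with the preimage
  ) {}

  // Releases the escrowed funds if the caller is the beneficiary and presents the preimage.
  claim(preimage: string, caller: string): boolean {
    if (this.claimed || caller !== this.beneficiary) return false;
    if (sha256(preimage) !== this.hashlock) return false;
    this.claimed = true;
    return true;
  }
}

// Alice picks a secret; both escrows are locked under the same hash.
const secret = "alice-random-secret";
const lock = sha256(secret);
const escrowOnChainA = new HashLockEscrow(10, lock, "Bob");   // Alice's asset, claimable by Bob
const escrowOnChainB = new HashLockEscrow(20, lock, "Alice"); // Bob's asset, claimable by Alice

// Alice claims on chain B and thereby reveals the secret...
console.log(escrowOnChainB.claim(secret, "Alice")); // true
// ...so Bob can reuse the revealed secret to claim on chain A.
console.log(escrowOnChainA.claim(secret, "Bob"));   // true
```

The design point is exactly the one described above: each claim is an ordinary atomic transaction on its own chain, and the only cross-chain coupling is the shared secret that the first redemption reveals.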
Right — now the P levels; I'll be really quick about this. The intuition is that the higher the P level, the stronger the interoperation support of the system. For example, level P1 is interoperability across smart contracts — if you saw any Weaver presentation, there's a very similar diagram. Level P3 is interoperation across different DLT networks of the same kind, with the same functionality, so level P3 could be across different Ethereum networks. Level P4 could be across different DLT protocols, such as Polkadot and Hyperledger Fabric, or Ethereum and Hyperledger Fabric.

Then the C levels. Level C1 is compatibility at the semantic layer, meaning there is a protocol that is understood by two different systems — for example, an asset transfer protocol instantiated on a set of blockchains. Level C2 adds synchronization at the organizational level, meaning there are agreements across organizations that enable the fulfillment of a common goal; for example, when bridging projects collaborate to use shared infrastructure across several blockchains, we have organizational-layer compatibility. And finally we have legal compatibility, which is conformance to jurisdictions, laws, regulations, and so on — and there is not a lot of work in that field, as far as we know.

Then, in our systematization deliverable, we look at the P levels, the C levels, and the interoperation mode: DT stands for data transfer, AT for asset transfer, and AE for asset exchange. If we want, for example, a P4, C1 asset transfer solution, we look into those columns and check which rows have a tick; the rows that do are the solutions that provide the requirements we need — interoperability solutions supporting those P levels, C levels, and so on. And now I would like to call Dina to present our efforts on formalization.

Thanks, Rafael. Up to now you've seen what we can call a systematization of the different solutions available for interoperability, in terms of both the techniques and the interoperation modes supported, across various dimensions. Now we'll think about what it would take to formalize — to come up with a formal definition for — an interoperability protocol. This is work in progress that Rafael and I are doing with Hart Montgomery from the Foundation and colleagues from the IBM India Research Lab, Shikhar and Rama, who is in the audience. The goal of this work is to come up with a cryptographic definition so that if someone proposes an interoperability protocol, we can check whether it's secure.

What do we mean by secure? Say you want to come up with an encryption scheme — something cryptographers defined a few decades ago. One common thing you do is encrypt zero using your scheme and encrypt one using your scheme, and then give one of the two to someone who is trying to break it — either the encryption of zero or the encryption of one. If they cannot distinguish between them, then your scheme is secure. That's one very simplified way of saying an encryption scheme is secure. Building on that seminal work from four decades ago, for any cryptographic primitive we always first try to come up with what a formal definition of security means for that primitive, and only then propose protocols for it, because when we propose protocols we want to be able to say whether they satisfy the properties we defined.
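To make the encryption analogy a little more precise, the standard indistinguishability notion being alluded to can be stated roughly as follows; the notation here (adversary \(\mathcal{A}\), key \(k\), security parameter \(\lambda\)) is the usual textbook convention, not notation from our draft definition:

\[
\Big|\,\Pr\big[\mathcal{A}(\mathrm{Enc}_k(0)) = 1\big] \;-\; \Pr\big[\mathcal{A}(\mathrm{Enc}_k(1)) = 1\big]\,\Big| \;\le\; \mathrm{negl}(\lambda).
\]

That is, no efficient adversary can tell an encryption of zero from an encryption of one except with negligible advantage. The interoperability definition we are after plays an analogous role: a precise statement against which proposed cross-ledger protocols can be checked.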
Similarly, for interoperability, what we aim to do is build on the formalization that already exists for individual blockchain ledgers. There has been a line of work trying to formalize the security of the Bitcoin ledger, then generalizing that to permissionless ledgers, to permissioned ledgers, and to any decentralized ledger technology, Corda-like systems included. So people have come up with formal definitions for individual blockchain ledgers, and the crux of this work is to build on top of that to formalize interoperability protocols.

So what does it take to formalize an interoperability protocol? It could be interoperation between two fundamentally different networks — different in how they let participants in, different in their consensus mechanisms, different in the signing protocols they use or in how data is stored in the ledger, and so on, and derivatives of those. And even inside a single protocol there are different parts involved: in the individual ledger we have consensus, the ledger itself, and, as I mentioned before, the membership service provider; and when communicating with other ledgers you also have contracts for cross-ledger identity management, for cross-ledger verification of data coming from the other ledger, for maintaining blocks, and other things.

So one of the main challenges of formalizing interoperability is that there are many moving parts, and we have to come up with a model that is aware of all of them and remains secure even then. This is where we use the notion of universal composability. I'm not going to explain what it means in detail; the motivation is that when a protocol has multiple pieces, when it is going to be used as part of a larger system together with other things, or when different instances of the protocol are used in conjunction, it's very natural to use universal composability. To give an analogy from cryptography: say you're using a signature scheme with some public parameters, and you try to use the same secret-key/public-key pair for an encryption scheme. Some signature schemes and encryption schemes are compatible in terms of their key parameters, so the mathematics of the keys works out, but it is not always secure to use them in conjunction. Likewise, if you use the same parameters for different protocols, or the randomization of the secrets is reused, it's not clear whether the result is secure. The literature has shown that universal composability is a good way of capturing whether such repeated use of parameters is done correctly, and whether a protocol remains secure when used in conjunction with other instances of the same protocol or with different protocols. Since we have a lot of moving pieces, we have tried to use it to come up with the formal definition for interoperability.

For interoperability specifically, as some of you will be aware, a ledger needs a couple of properties beyond what it would need if it were not supporting interoperability. It needs some level of synchronization — a kind of clock — with some guarantees with respect to that clock, so that interoperating with another network is feasible. And it needs to support locking, to enable atomicity across ledgers; locking is another fundamental property. These are some of the things we have tried to capture in our formal definition.

The only bit of math I will use on this slide is this: we capture the above by appropriately defining rules and the transitions of the state. We associate a set of rules with a particular state of the ledger, and then we define transition functions that go from a (state, rule set) pair to an (updated state, updated rule set) tuple; the transition function outputs one if the update from the previous tuple to the new tuple is valid, and zero otherwise. By appropriately crafting these states, rule sets, and transition functions, we arrive at the definition.
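Written out, the rule-and-transition formulation just described looks roughly like this; the symbols are shorthand for what is on the slide rather than the exact notation of the write-up — \(st\) is a ledger state, \(\mathcal{R}\) its associated rule set, and \(\Pi\) the validity predicate:

\[
\Pi\big((st, \mathcal{R}), (st', \mathcal{R}')\big) =
\begin{cases}
1 & \text{if updating } (st, \mathcal{R}) \text{ to } (st', \mathcal{R}') \text{ is valid},\\[2pt]
0 & \text{otherwise.}
\end{cases}
\]

Interoperability-specific requirements such as locking and clock guarantees can then, roughly speaking, be expressed as constraints on which rule sets and transitions are admissible.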
We model this as a two-layer universal composability model. The first layer captures the definition of the single-ledger functionalities: there is prior work called F-ledger, which formalizes a security definition for DLT technologies in general, and we build on top of that — F-ledger-plus — to add interop enablers, such as checking the rules/transition function I just described. That is the first layer of the UC model. The second layer is the system of multiple interoperable ledgers, which contains things like the global clock and the atomic commit functionality, and similar components that define how cross-ledger functionality looks. So this captures the formalization effort at a very high level — do talk to us after the session if you have more questions; we kept it high level because that seemed appropriate for a wider audience.

Once we write down the formal definition, there are two steps. First we have to show that the individual ledgers — Fabric, Corda, Besu, Ethereum — satisfy what F-ledger-plus asks for. Then we have to prove that Cacti is secure under this model: when we started two years ago we thought we would prove two use cases, one for Weaver and one for Cactus, but now that they are merged we will have to prove that the different kinds of functionalities each of them supports are secure under the model. Taking an excerpt from the Weaver deck, there are multiple functionalities: the data sharing and data transfer that Rafael mentioned, asset exchange, a version of identity management recently, and even publish/subscribe. So after we write down the formal definition, we will be proving that those protocols are secure under it. That's it for the formalization part.

Next is the standardization effort, a task force under the IETF — we are trying to formally become an IETF working group. We call it the Secure Asset Transfer Protocol, the SATP group, and I think a couple of people in the audience have been part of the working group. The scope of this working group is captured by this picture: there are gateways — single or multiple — that sit in front of each network, and the scope of the group is to define the communication between gateways, mainly to facilitate the secure asset transfer protocol. What do we have here? Mainly we try to standardize the API endpoint definitions in the gateways; the description of resource identifiers — how you craft the query or the request to be made to the other network, and how you define each asset in a network, which is what the resource identifier is for; the payload definitions for the different kinds of payloads communicated across the gateways during the cross-ledger protocol; and then the protocol itself for the secure asset transfer.
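To give a flavor of what such resource identifiers and payloads might look like, here is a hedged TypeScript sketch; every field and type name below is an illustrative assumption, not an identifier taken from the SATP drafts.

```typescript
// Illustrative (non-normative) shapes for a cross-network asset identifier
// and a transfer-request payload exchanged between gateways.
interface AssetResourceId {
  networkId: string; // which network the asset lives on, e.g. "fabric-net-1"
  ledgerId: string;  // channel / sub-ledger within that network, if applicable
  assetId: string;   // the asset's identifier on that ledger
}

interface TransferRequestPayload {
  sessionId: string;
  source: AssetResourceId;
  destination: AssetResourceId;
  assetProfileId: string;         // reference to the agreed asset profile
  amount: string;                 // kept as a string to avoid precision issues
  senderGatewaySignature: string; // lets the receiving gateway verify who sent the request
}
```

A request to the counterpart network would then be crafted against such an identifier, and each phase of the protocol would carry its own payload type along these lines.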
There is a draft charter, and the effort has gained a lot of traction over the last couple of months — there was a BoF session in Philadelphia a couple of months ago, and since then the group has gained traction and has been getting formally involved with the IETF. There are two or three main drafts, plus some associated drafts we're currently thinking about: first the asset transfer protocol itself, then the architecture, and then a use cases document to augment them; after that we will also have a views data sharing document, which will define views and view addresses for the data sharing protocol. Now I'll hand it over to Rafael to go through this in a bit more detail.

Thanks, Dina. We've been working on this protocol for a while — it was called ODAP before — for around two years, so there have been a couple of iterations in the meantime. What we figured out is that for gateways to communicate securely, they have to implement a three-phase protocol that is crash fault tolerant, to account for possible failures in gateways, which will eventually happen. The gateways coordinate asset exchanges and asset transfers, and they can also share data across chains. We also aim for cross-chain transactions to preserve ACID-like properties — atomicity, consistency, isolation, and durability, the transaction properties from database research. We have done some work here: we have a paper that formalizes ODAP, SATP's previous name, which you can check for details if you're interested.

This is a sketch of the three phases that ODAP has — well, plus a phase zero. First, an end user requests a cross-chain asset transfer, and then the gateways communicate to assert their identities and to assert that they can effectively transfer the asset: they verify the asset profile, which defines what an asset should look like; they verify that the gateways are run by valid virtual asset service providers; they verify identities; and so on. We need this kind of standardization because, to bring interoperability to the enterprise world, we need some sort of agreement on the semantic, organizational, and perhaps even legal layers; if we keep using a set of ad-hoc standards, it is much harder to communicate and to abide by existing regulations. After that, the gateways agree to proceed and they establish a secure channel — TLS and so on.
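As a rough illustration of those setup checks, here is a hedged TypeScript sketch of the kind of validation a gateway might perform before proceeding; the types, the `isTrustedVasp` lookup, and the field names are assumptions made for illustration, not APIs from the drafts or from Cacti.

```typescript
// Illustrative "phase 0" checks before two gateways agree to transfer an asset.
interface AssetProfile { id: string; issuer: string; assetType: string; }
interface GatewayIdentity { gatewayId: string; vaspCertificate: string; publicKey: string; }

function canInitiateTransfer(
  profile: AssetProfile,
  origin: GatewayIdentity,
  target: GatewayIdentity,
  isTrustedVasp: (cert: string) => boolean // e.g. a lookup against a registry (assumed)
): boolean {
  // Both gateways must be run by recognized virtual asset service providers,
  // and both must agree on the profile describing the asset being moved.
  return (
    isTrustedVasp(origin.vaspCertificate) &&
    isTrustedVasp(target.vaspCertificate) &&
    profile.id.length > 0
  );
}
```

Only after checks of this kind succeed would the gateways open the secure channel and move on to the lock and commit phases described next.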
And this is where things start to get interesting. Our protocol is based on two-phase commit. First there is a lock-asset, or lock-state, operation — notice that these locks might be locks on state, in line with the framework we're developing — so we need the ledgers to have some functionality to create and coordinate such locks. When we have a lock operation on either side, we generate a proof that should be verifiable by other gateways: the evidence is signed and preferably published in a decentralized manner, and the receipts are exchanged after the lock. This corresponds to the pre-commit phase of the two-phase commit protocol. Then we do the final commit: we issue a transaction against the origin blockchain and then a transaction against the target blockchain, doing the lock-and-mint, for example, or any arbitrary logic that we decide is part of the use case. Proofs are generated, exchanged, and persisted, and the gateways conclude the session. So that is a high-level overview of the protocol.

Then we have the architecture draft, which specifies the components and explains the gateway paradigm — you have here an idea of where gateways are located. We have a data sharing protocol, where we use the concept of views, designed initially by our colleagues at IBM, though we also have more recent work on views; we use a view as the proof that some computation happened on either blockchain. And finally we have the use cases draft, made by Rama and Thomas, which depicts several real-world use cases that can benefit from our gateway paradigm.

This is all ongoing work, and you're all invited to collaborate if you have an interest in this area. We have a mailing list and an open-source repo with the drafts; the meetings and everything are posted publicly, à la IETF. Please also get involved with our Hyperledger community — I don't know how it will be with the Cacti channels, but I suppose there will be a channel; they're to be combined, exactly. I think Thomas also maintains the web3.mit.edu webpage, where he collates all the resources from the standardization working group, so that's another place to look.

It's important to note that this is an interdisciplinary area where we need expertise, especially in the specific verticals where this will be applied. We think interoperability is complex: coming up with a model that is general enough yet powerful enough to be instantiated in practical implementations has not been an easy task so far, but we believe it is important, and it will help alleviate the attacks. We are concentrating some of these efforts at Hyperledger, which we think is the ideal place to bring these different skills together. We've seen that not all ledgers can, in principle, express the full power of interoperability, and we would like to invite you to collaborate with us on this. As I already mentioned, Hyperledger is a suitable venue to combine people with different skills. We appreciate your time here — thanks a lot, and we're happy to answer any questions. By the way, please go to the mic if you have a question.
Hello — I'm really interested in your P-levels and C-levels. I'm curious whether there's information about the different interoperability code bases that exist within Hyperledger, and Hyperledger Labs specifically — Cactus, Weaver, YUI, Perun, and FireFly — and whether we have any idea of what their P-levels and C-levels are.

Thanks, Tracy, for the question. I think it's an excellent question, and it would be very interesting to do that sort of analysis within Hyperledger. Let me get the slide. Weaver and Cactus aim to be P4: Weaver provides more of the protocol with which you can realize data transfers and asset transfers, while Cactus aims to provide the infrastructure — the connectors, the test ledgers, and so on — and they do realize P4 interoperability. If you have several Cactus nodes — Cacti nodes, in the future — hopefully you could realize level C2 and perhaps even level C3. Now, we haven't done a very comprehensive analysis of the other solutions, to the best of my knowledge, but it's interesting to notice that in some of those works Hyperledger Fabric is used, for example: they do channel-to-channel interoperability, or interoperability across Hyperledger Fabric networks. So the systems proposed in some of those papers actually enhance the levels of the solutions, although the solutions natively do not interoperate with each other, and that happens across several Hyperledger projects. Which also raises the question: should we consider Hyperledger Fabric to be level P4 because someone has developed a bridge, or should we consider the bridge technology itself as providing level P4? Probably the latter, unless Fabric has these interoperability capabilities by default — which happens in some blockchain projects like Cosmos and Polkadot, where you have some interoperability between instances of the platform, between parachains or between hubs, but you do not have native interoperability with other blockchains; you need bridges for that. And this is a design decision: Polkadot, for example, started working on these modules by default, while Hyperledger focused on single blockchains for enterprise applications. One caveat is also the end goal a project is aiming for versus its current implementation: for some of the projects the end goal is what Rafael mentioned, but we are still working towards it — we don't have all of it implemented yet, for Cactus, say. Thanks for your question.

You said that the blockchains participating in the transfer need certain basic capabilities. In terms of that, have you analyzed the existing blockchains, especially public blockchains, and the assets locked in them in terms of value — what kinds of assets would be available for this transfer given that base level of capabilities? Percentage-wise, of the total value locked in all blockchains, how much would really be available under this basic definition?

No — that's something that will be good to do once we complete the formalization. We haven't done it yet, but it's certainly something we have to do. Currently, whenever we think about something, we first sanity-check it with Fabric, Besu, and the other Hyperledger projects, and with Ethereum, but we haven't done a thorough check on the other public blockchains. We'll probably do that once we make some more progress on completing the formalization.

So what did you say that capability is? Locking — locking and some level of synchronization in the clocks. What I mean by some level of synchronization in the clocks is that we have to have some level of
predictability in how blocks are produced: it should not be too fast or too slow, and it should not be too variable either — it should not suddenly be very fast and then slow down, and so on. If a chain behaves that way, it's very hard to interoperate with it, and it's hard to define any kind of formal property for interoperability. That's one part. The second part is that it should have the ability to facilitate locks. Those are the two things I mentioned on the slide.

So when you say that, you mean the rate at which blocks are formed? Yes — most networks have that as their abstraction for time, so in that case it is the rate at which blocks are formed. We obviously need to account for that, because different blockchains have different block creation times, and they may or may not even have timestamps: in the case of Fabric, for example, we do not have timestamps, which can make it more difficult to have a decentralized view of time — there are actually some papers working on that, already published. So we need to account for these different finalization velocities in our interop instantiation, Cacti for example, to be able to synchronize time in a more absolute way. Well, clock synchronization is a tough problem, as you know — it's not really solved. I agree, so we're not trying to synchronize clocks. What we're saying is that the chain has to have some level of — it's not about synchronization rates — some level of predictability; maybe that's a better word I could have used. If there is a decent margin of predictability, then it's possible to define interoperability formally. What we are saying is that if the chains don't have it, then you cannot have formal guarantees for a protocol that uses those blockchains as the underlying participating networks for interop.

If I can ask another question: what about message-based interoperability, which has been brought up not just as a topic but even as a solution — there's a kind of standard at the World Economic Forum. Can you elaborate? A SWIFT-message-type system where you're sending an asset from one place to another — though obviously it's not a completely decentralized system, because you have to have intermediaries who have to be trusted; if they go down, then... I mean, the whole financial system in the world exists today and transfers trillions of dollars. I agree. Okay, so it's a working system — it's not some creation of the mind. SWIFT is a message-based system, and I think somebody in the supply chain world came up with this message-based interoperability as well. I mean, it's almost like a fungible token, because in the end you can spend it in different places — that's what drives, for example, trade finance; you can sell a bill of lading. So they have formalized something there, and I wonder whether you have taken a look at it.

That's a bit of the work we're doing with the SATP protocol: it's message-based, and it's inspired by such protocols and by problems in the supply chain world, where messages are just bit strings and they're interpreted differently by different networks. We have the gateway layer that does that translation — the business-level translation — and it has some accountability guarantees. That's why we envision the gateways as better placed being centralized — run at least by
a centralized authority that can be held accountable if a gateway misbehaves and tries to steal funds. Is that what you meant by message-based interop? I was just assuming it might be something like that — it's fine, okay, sounds good, we'll take it. Thanks.

The protocol we talked about — the one that involves gateways communicating with each other — you can make as decentralized or as centralized as you want. You can add centralized components to make it perform better or to add more trust, and there are companies trying to do that. What the SATP initiative is trying to do is propose a protocol that gives you a very general way to get two networks communicating messages with each other. You can adapt that for your purpose if you want — you can bring in trusted parties, middlemen, if you want; the protocol does not prevent you from doing that. Thanks a lot for the clarification, Rama.

I guess these are two different views, right? One is a formalization — a mathematical formalization of this — and the other is the people at the WEF just trying to figure out what should be in that message. Correct — and, basically, the different variants of the protocol. Yeah — one is a very formal, very precise definition, and the other... I think the standardization effort tries to create a platform that you can adapt to whichever parameters or variants of the protocol you want to support, while the formalization tries to say that if you do it in such-and-such ways, then it satisfies these properties formally. Yes — and the bridge from the formalization to the real world is practical implementations and use cases, and to make those more robust we work on standardization as well, so it's a full stack of interoperability. I think we're on time. Thank you everyone for attending, and see you around — come find us if you have any questions.