Okay, great. Okay, let me share my screen one more time. All right, welcome everybody to Atomic Cross-Ledger Transactions between Besu and Corda Ledgers using Hyperledger Cacti version 2. We would love to thank our presenters today, Dominic, Marlis, Peter and Peter, who are going to take us through this fantastic program. Just a little bit of housekeeping up front. This is a Linux Foundation Hyperledger Foundation event. It is held under the Linux Foundation Antitrust Policy. If you have any questions about these matters, please contact your company counsel, or if you're a member of the Linux Foundation, feel free to contact Andrew Updegrove of the firm Gesmer Updegrove LLP, which provides legal counsel to the Linux Foundation. This is also held under the Linux Foundation Community Guidelines, which can be summed up really simply: be excellent to each other. Behavior that isn't excellent will result in you being kicked out, but we want everybody to feel welcome here, and that's our goal. Thank you for joining us. So, the Hyperledger Foundation: we host open source enterprise distributed ledger projects, tool sets, frameworks, and their communities. There are numerous projects within Hyperledger that we're going to really quickly go over, but this ecosystem is why we exist. We are part of the Linux Foundation, and transparent open governance is just part of our DNA. We have a diverse ecosystem of enterprise-grade technologies and use cases. We have distributed ledger projects that are, in one case, like Indy, Aries, and AnonCreds, specific to the identity use case. In other cases there are things like Cacti, which are meant to be interoperable across different projects and chains, not just Hyperledger projects but also projects outside of Hyperledger, and they also fit within multiple use cases and industries. 
We have finance, CBDCs, healthcare, telecom, supply chain, climate action, education, and a lot of these are represented in our special interest groups within the Hyperledger community. When we talk about the projects that we have, we have graduated projects, incubating projects, and Hyperledger Labs. Generally, new project ideas start in Labs: let's see if we can build a community around this project and this code. Lab projects can graduate to incubating status, and I think a wonderful example of that is one of the projects we're going to talk about today, Hyperledger Cacti. Another project we're going to talk about today is the Harmonia lab, and so you'll see how Hyperledger Cacti started out: we went from a lab, to incubating, to graduated status. That's an example of how these projects grow and evolve. As you see here, we have full DLTs like Fabric, Sawtooth, Iroha, and Indy. We have tool sets like Cacti and Firefly, frameworks like Aries, and then we even have an Enterprise Ethereum client in Besu. That gives you a sense of the spread and the types of projects that we've got. But this is open source. We need you. We need contributors. We need maintainers. We need folks who believe in these projects, believe in these use cases, and want to see change. We are powered by the community. We are focused on the community. We do not exist without the community and the contributors and the maintainers who make up these Hyperledger projects. We sincerely thank them, but we would also encourage you, the attendees of this workshop: if there are things here that you're interested in, if there are things here that you want to influence, if this is work that you want to do either for your company or for yourself, please, by all means, we are an open community. Everybody is welcome here. We communicate via mailing lists. We communicate via Discord. We communicate via working group meetings. There are a couple of ways to get involved. 
You can sign up for a Linux Foundation account, though you don't need one if you just want to join a call. You can look at the GitHub. Our GitHub repos are all open. You are more than welcome to come in, get involved, and play with the code. In a lot of our projects we have a good first ticket tag. So if, say, you've got experience in Solidity and you want to check out Besu, there's probably a good first ticket you could check out there. And the maintainers and the contributors work really hard to make sure that we've got on-ramps for folks who are new. That's the only way these projects can grow and be healthy and viable. We have contribution guidelines for most projects. We have working docs on the wiki, and we have the communication channels that I just mentioned: email, Discord, and live meetings and events. That is it for my part of the presentation. I'm going to stop sharing and turn it over to our fantastic presenters and let them introduce themselves. I will be posting links in chat. If you have a question, please post it in chat. I'm going to be capturing the questions, and at different intervals throughout the workshop we'll be going through them. In some cases, when someone is presenting, another one of the presenters might answer a question in chat; that's completely encouraged and we'd love to see it. So Dominic, Marlis, Peter and Peter, it's your show. Thank you, Sean. Give me one second. Okay. I'm going to share my screen real quick, say a few words, and then we can carry on to the rest of it. Can you see the slides? Looking good, Peter. Okay. Welcome, everyone. I'm Peter. I work as a technology architect and I'm one of the maintainers of Hyperledger Cacti. We brought everyone here today to show off the progress we have, both in Cacti and in the Harmonia lab, and we're hoping that in the future there will be more collaboration between the two projects. 
And so what you will see first is some groundwork: what is the Harmonia lab, and what does it do? Then I will give a very brief introduction to how this can be made to work together with the Cacti framework components. The interesting bits there will be about how you can use the API server and the connector plugins of the framework to actually connect to the ledgers and perform the operations that you will have seen in the earlier parts of the workshop. So, really quickly, Cacti is a pluggable framework that's meant to be enterprise grade. It's sort of an SDK of SDKs. That's why you will see that we first build up from the content in the lab and then move on to the framework. The reason we created the framework originally is to address fragmentation. If you have multiple different ledgers, then the complexity just shoots up for your software project, especially if you have to use different languages to develop for those different ledgers; it can get very complicated. So what we're aiming to do with the framework is to make the development simpler, and I'm always very open to feedback about what we could do better. Another thing that we tried to lower is the risk of adoption, meaning that if you bet your project on a certain ledger and then that ledger doesn't turn out the way you wanted it to, you should be able to migrate away, and the framework should help you with that as well. So we are in the Hyperledger greenhouse. This is a slightly outdated slide because it still says Cactus, but the project has been renamed to Cacti, and this slide is a good reminder for me to say that, just to make sure people know. Then we have design principles. The big one, for me at least personally, is the plugin architecture. 
It means that the GitHub repository that we have actually consists, at this point, of more than 60 different packages, which are all their own entities or components in the sense that they all have their own public API surfaces that we publish individually. They all have their own release versions that we keep in lockstep, but nevertheless, if we need to, we can issue separate releases on a per-package basis as well. Then we want it to be secure by default, which means that we don't want to be the source of some data breach or security breach where it was very simple to break into a system just because the software used was insecure by default; that's usually a tradeoff that people make with their software just to keep it convenient. The other big thing is that it's toll-free, which means that it's an open source framework: you get the source code, and there's no mechanism built into it that would charge you in any way. Whatever costs the ledgers accrue is a different thing, but that's between you and the ledger; the framework itself does not have any such mechanism. And then low-impact deployment, which means that we're trying to make it as convenient as possible to work with the framework and not have to re-architect your existing ledger deployments, for example. I'll skip over a lot of the additional stuff because I don't want to be talking for too long. Yes, so the code is mostly written in TypeScript and Rust. We have a big focus on test automation; we have a huge CI pipeline to verify all the 60 packages that I was talking about. And for the monorepo itself we use a tool called Lerna to manage it, which means that those 60 packages, for example, if you want to build all of them, we can do that with a single command. Or if we want to issue a release, that's also just a single command to publish them. I'll skip over the plugin architecture. 
Yeah, actually, I'll just skip to the end, which is the interesting bit: we have a community. We have a Discord channel, like Sean mentioned. And we have daily pair programming calls. So if you find yourself with questions after this workshop, there's a pair programming call pretty much every weekday at 10am Pacific time that you can join. The link for that pair programming call is also on the wiki page at this link that I'm showing right now, and later on I will post these links in the chat as well. Okay, so this was just to kick us off and talk a little bit about Cacti. Now I will hand it over to Peter and Dominic. They will showcase what's in the Harmonia lab, and they will talk about the theory behind it a little bit as well. And then at the very end we'll come back to a little bit of a demo about how you can use Cacti to work with those as well. So that's it from me for now. Thank you. All right, thanks Peter. So what we're going to do now is talk a little bit, as Peter said, about Harmonia. Before we do that, I just want to do some introductions. I'm Peter Munnings, I'm with a company called Adhara, and we provide product solutions for banks and for FMIs in this distributed ledger space. The one that's probably getting the most press at the moment is Fnality; we are the blockchain partner to Fnality, which is a central bank backed cash settlement layer based on Ethereum, but connecting to multiple different other platforms for DvP or PvP, so cross-chain settlement. I'll talk a little bit more about that. But just to say that Harmonia has come from a very tightly scoped set of problems that financial institutions and financial market infrastructures are trying to solve, and we've got to this place. So that's my background; that's where we're coming from. I'm going to do an introduction to Harmonia, and I'm going to then hand over to Marlis. 
She's going to be showing some of the Adhara contributions that have come into Harmonia, which are largely based on the Enterprise Ethereum Alliance interop spec. And then we're going to hand over to Dom, Dominic Fox, who is going to talk through some of the differences and, I suppose, some of the discussions we've had over the years as to how we see interop happening between something like Ethereum, which has pretty different constructs to Corda. So he'll dive into that, and then he's going to show one of the other implementations that's in Harmonia. And then we will hand back to Peter. So before we jump into all of that, Dominic, do you want to do a quick intro? And then I will set up the slides while you're doing that. Yeah. So I'm Dominic Fox. I'm a principal software engineer at R3. R3 is chiefly the architect of the Corda distributed ledger platform. We used to call this a blockchain; it has some blockchain flavors to it, but it's also notable for being a private and permissioned network, which works in quite different ways and does some quite different things to a lot of the public blockchains. And with a few colleagues, Eduardo Irina, who's not able to be here today because he's unwell, and Richard Brown, who's the CTO at R3, we've been working on Harmonia in partnership with Adhara and Fnality for a few months now. So yeah, kind of excited to show our work from the R3 side, because we've been contributing code into the repository as well. Great. All right. So Project Harmonia, let's dive into it. So Harmonia is not one of these, which is a harmonium (thanks to Dominic for introducing us to a harmonium), and it's not a harmonica. It is an initiative driven initially by a group of banks to help create common patterns and standards for interoperability between digital asset platforms, in order to simplify the atomic settlement of delivery versus payment transactions. There's a lot in there, but that's where it came from. 
Peter mentioned the problem that banks specifically are facing: as these distributed platforms grow and as they need to do more and more settlement across different platforms, what they don't want to end up with is a plethora of different gateways, interfaces, data formats, interop mechanisms, et cetera. So Harmonia really is a place where banks can easily start to put in their requirements and the principles that they've defined. And let me just say up front, it's been a real pleasure working with the Hyperledger Foundation. They've made this process really, really easy, and it's a great, very well understood platform that banks understand how they can contribute into. One of the problems we were having when working with different banks is that they all had various parts of their systems that were proprietary, but they did want to somehow create these standard patterns. They all had some principles that they wanted to implement. They all had specific requirements. And as we looked at them, there were quite a lot of common requirements across the board. So we decided to form a lab within Hyperledger, mainly to be, initially, a dumping ground for all of these principles, requirements, and reference implementations. But as we started working with Hyperledger, we were quickly introduced to different projects in here, most notably the Cacti project. And because the aims of Cacti are quite similar, and because there's a reasonably mature interop product in there, one of the things that we are looking at doing is taking all of the requirements and the principles that the banks have started to put together and then looking at some reference implementations. So we started off with some basic reference implementations, which we're going to show you. And then, as Peter said, he will show you how you could do this using Cacti. 
So Harmonia was never intended to become a de facto product or a reference implementation by itself. It's meant to be a place where different products can say: this is how we would expect to implement what the banks require, based on our product. So we expect a number of different reference implementations to be in there, each with different pros and cons. Just a little bit of our journey to get here, to give you an idea of where we've come from. Some of the team at Adhara that have been working on this (some of the team, I'd say, not most of it) got together in 2017 on Project Ubin with the Monetary Authority of Singapore. Even back then, that was a project where they gave us a task to implement interbank settlement with netting across distributed ledgers. And there were three teams. There was an R3 team who built a solution on Corda. There was a team based out of ConsenSys and JP Morgan who built a solution on Quorum, which is an enterprise Ethereum client. And there was a Hyperledger Fabric team. Each of the teams was paired with different banks, so we had real-world bankers who were giving input into exactly how this worked. And the Monetary Authority of Singapore, the central bank of Singapore, hosted us and gave us a huge amount of input. It was a very, very intense couple of months in Singapore, building this all out. And you can go and have a look at the Ubin 2 report, which was an interesting experiment, I suppose, back in those days. But we managed to get quite far in terms of the work that we did. In 2018 in Canada, Project Jasper was kicked off. We weren't directly involved in that, but some of the R3 people were. And then in 2018 we started, along with Kuni (she's also on the call), a project called Project Khokha. Khokha is a Zulu word which means to pay. So it's a payments project, obviously, with the South African Reserve Bank. 
So back in 2018, we ran a blockchain network with seven commercial banks connected to it, as well as the central bank. And it was the first real test of, natively, a central bank digital currency for wholesale settlement. We did some actually fairly decent performance testing with the central bank to check that this technology was viable. Once again, you can go and read the Project Khokha 1 report from 2018. In 2019, Fnality International was founded. Fnality International, as I said, has been in the news a lot in the last couple of days: they've just done a second, or Series B, funding round and raised $95 million for their next currencies. They plan to go live with the production version of distributed ledger-based sterling, British pounds, for interbank settlement, and they're looking to do dollars and euros next year. So they are a company that is very, very well embedded into the process. They've gone through all the regulatory process, and they're about to go live with some of the initial currencies. Adhara has been the blockchain partner to Fnality for the last four and a half years or so, and we've helped establish and build out the blockchain foundation. In 2020, we started the interop work with Fnality, initially with payment versus payment, so an FX-type transaction: settlement across two different Fnality currencies. Obviously in those days it was reasonably theoretical, although many of the production-grade contracts were already in place at that point. In 2022, we did a collaboration between Fnality, Nivaura, and ourselves, along with NatWest and Santander, to do a cross-chain debt issuance. We actually used public Ethereum for the debt instrument that Nivaura issued, and then permissioned Ethereum for the cash layer, which Fnality were hosting. Also in 2022 (actually 2021 might have been the beginning of it), towards the end of 2022 we did a pilot PvP with Finteum. 
Finteum is owned by a couple of banks and they have an intraday FX swap product that can settle on Fnality. And so we had two Fnality currencies running at that stage in pilot mode with Finteum. So "production" is probably a bit of a strong word, it was not real money at that stage, but it was using production-grade software from both Finteum and Fnality. And then also towards the end of last year, we did a pilot with HQLAx. HQLAx is a high quality liquid asset exchange, partially owned by Deutsche Börse in Germany, and backed by securities sitting on a number of different triparty platforms, with Clearstream currently the primary one. It was a pilot PvP settlement; it was actually part of a repo trade, so a repo with cash for securities, and that was completed. That was probably the one where we spent the most time looking at how we do this atomic swap between two different chains: HQLAx running on Corda and Fnality obviously running on Ethereum. So that's a lot of history. As you can see, we've been doing this for many years, working primarily with central banks, commercial banks, and consortiums of banks, as well as securities platforms and FX trading venues. So we've had a really good insight into how the regulated world sees all of this and some of the issues they are facing. So, some of the principles that have come out of the banks and the FMIs. This group of banks got together and said there's a couple of things that we want to see. Firstly, we would like to make sure that whatever principles or patterns or standards come out, they must be anchored in production implementation. They're not interested in theoretical exercises. They want to know that there are platforms that are using these interop patterns, and that was the number one principle. They said, let's start. We're going to start somewhere. 
So they wanted to start with a minimum set of wholesale business flows, including repos (obviously the HQLAx influence), FX swaps (Finteum), and then equity or bond settlement. Those are the three areas that they wanted to start with. There are a number of platforms, both cash and securities, that cover those, and so it meant that we could find a number of different real-world implementations to start looking at how we would do this. Initially there was a minimum protocol scope: they said we want to start somewhere, so we started with enterprise Ethereum, but obviously the idea is to extend that as and when it makes sense. Also, one of the core principles was that we need to be aligned with relevant standards bodies. So some of our team, with some input from R3 as well, have been working in the Enterprise Ethereum Alliance interop working group. It's a working group that's specifically looking at interop between different blockchain ledgers, primarily focused on Ethereum, but Ethereum to anything else as well. Very interesting. In fact, next week (Kuni or Marlis can say more about that) we're doing a presentation with the EEA on a similar topic to this. They also wanted to maintain community incentive alignment. What that really means is that the banks want to make sure that whatever we propose is implementable by the banks, and that, with the banks as the sponsors of this, their incentives are aligned. Basically what that means, and I think Peter covered it quite nicely, is that they, for example, don't want to see any third-party tokens introduced in the process. They want to make sure that there's nothing that blocks any kind of innovation or blocks any implementation of standards. So they are very keen on making sure that anyone would be able to implement a solution based on the principles and the standards that get defined. And then obviously they want to see a number of different reference implementations in Harmonia, because that helps. 
We just find it's much easier when you can show people how it works rather than theorizing about how it works. And then one of the things that the banks also wanted is that the patterns be anchored around the wallets, or the asset holders, and not around bridges. In the first instance, they're not opposed to bridges, relays, and companies that specialize in transferring data between different chains, but they wanted the focus to be around the asset holders. So I just wanted to paint a picture of where Harmonia is coming from. It is very much from the regulated world, which is now wrestling or grappling with how to do what they used to do in the traditional centralized world in a decentralized manner. And so a lot of thought went into how we launch and run Harmonia. Some of the requirements that have come out of some of the initial real-world use cases are as follows, and you'll see many of them are not technical at all, although there are some technical trade-offs that need to be made based on them. Firstly, any two platforms that are going to interop together need to have a very clear and very well-defined rulebook. Both platforms need a rulebook as to how they are going to be doing their settlement, and they need a common rulebook in terms of how they're going to do settlement between themselves. Peter mentioned at the beginning how exponentially this gets more and more complicated. Technically, it also gets more and more complicated from a legal and a rulebook perspective. And so some of the work, which we don't really touch on in Harmonia, but some of the work that's going on at the moment, is around how to align those rulebooks and try to simplify the patterns around interoperability. There has to be a legal basis, so there has to be law around that. 
So, for example, Fnality had to have a policy change from the Bank of England: at the beginning of last year, the Bank of England updated their policy on distributed ledgers and said that if a financial market infrastructure wants to run a distributed ledger using sterling, UK pounds, as the settlement asset, here is the policy as to exactly how that would happen. It's called the omnibus accounts policy. But whether it's commercial bank money, central bank money, securities, whatever they are, there needs to be a legal basis for being able to run the operation, and law around what happens if something goes wrong. That has similar implications, because there is a legal fallback in pretty much all the systems that we've been working on: if something does go wrong, there is legal recourse in terms of getting that resolved, and the technology supports that. Participation: it's clear that participants need wallets on both systems, or at least access through an agency bank to wallets on both systems. So all participants need to be able to legally own the cash and the securities on both platforms. There needs to be a unique trade ID. This is quite an interesting one: for a number of reasons, legally, any settlement across multiple platforms has to be based on an agreed trade. Banks have to agree on a trade; part of that trade is a unique trade ID, and based on that trade ID, the settlement then happens on both chains. That's something that we've had to bake into the process; it's one of the obvious legal results of the work that was being done. The process is a two-phase commit. So there's a concept of earmarking: earmarking the securities, earmarking the funds, checking the earmarks are in place, and then going on to the commit phase, which is the settlement. We'll talk quite a lot about that in the demo, so I'm not going to spend too much time on it. And then there has to be settlement finality. 
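The earmark, check, and commit phases just described can be sketched as a small state machine. This is a minimal illustration only, not Harmonia code; all names here (Leg, SettlementCoordinator, the ledger labels) are assumptions made for the sketch.

```typescript
// Two-phase settlement over a shared trade ID: earmark (hard allocation),
// check, then commit; cancellation is deterministic, with no timeouts.
type LegState = "open" | "earmarked" | "settled" | "cancelled";

interface Leg {
  ledger: string; // e.g. "besu-gbp" or "corda-securities" (illustrative labels)
  holder: string; // wallet that owns the asset being earmarked
  state: LegState;
}

class SettlementCoordinator {
  constructor(public tradeId: string, private legs: Leg[]) {}

  // Phase 1: earmark on every ledger. The assets are hard-allocated and
  // cannot be used elsewhere until settled or cancelled.
  earmarkAll(): void {
    for (const leg of this.legs) {
      if (leg.state !== "open") throw new Error(`leg on ${leg.ledger} not open`);
      leg.state = "earmarked";
    }
  }

  // Check phase: settlement may only proceed once every earmark is in place.
  allEarmarked(): boolean {
    return this.legs.every((l) => l.state === "earmarked");
  }

  // Phase 2: commit. Either all legs settle or none do.
  commit(): void {
    if (!this.allEarmarked()) throw new Error("cannot commit: earmarks missing");
    for (const leg of this.legs) leg.state = "settled";
  }

  // Deterministic cancellation: releases every earmark, but never after settlement.
  cancel(): void {
    if (this.legs.some((l) => l.state === "settled")) {
      throw new Error("cannot cancel after settlement");
    }
    for (const leg of this.legs) leg.state = "cancelled";
  }
}
```

The point of the sketch is the ordering: no leg may settle until every leg is earmarked, and the only exits are all-settled or all-cancelled.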
That's a key element of regulated market infrastructures. One of the Principles for Financial Market Infrastructures is that there needs to be settlement finality. It has to be deterministic; it cannot be probabilistic. So, for example, timeouts are out: there needs to be a deterministic settlement process and a deterministic cancellation process. We explored at length, in the early days, things like hash time-locked contracts, but we just couldn't get them through the legal and regulatory framework. And then a huge amount of work has been done on liability. For the purposes of this conversation, it's important to understand that the financial market infrastructures and the banks have determined that each platform is responsible for providing a cryptographic proof that either the earmark has been placed or that settlement has occurred. And that cannot be outsourced. There's no provision in law at the moment for a bank or for a platform to be able to outsource the checking or the validating of that, because the platforms are, at the moment, responsible for providing a cryptographic proof. So that's quite important, and Dominic's going to spend a bit of time exploring what that really means, both from a Corda and from an Ethereum perspective; he's done a huge amount of work digging into that, which he's contributed to Harmonia. So far the contributors are R3, Adhara and Fnality. We're very keen and very open to having others contribute. We've had others who have contributed through us in some manner and are prepared to contribute themselves; certainly they've given us material that we can put in there. But at this stage we'd like to expand it. Harmonia is a couple of months old now, and I think we're at the point where we're looking to expand the pool of contributors. We're going to jump into some demos now, but I'm going to pause there, and I see there are some questions coming in. So let me stop sharing. 
Let me maybe answer some of these questions quickly. Jim Mason asks: is earmarking a hard or soft allocation of assets? It's a hard allocation of assets. In the traditional world, if you think of clearing and settlement, it's the clearing process. I see Kuni's answered that already. Great, thank you Kuni. So you can only free up those assets for use somewhere else by releasing the earmark; there's no reapplication or anything else. It's a hard allocation. Any other questions? Marlis, I'm going to hand over to you then to go through the demos. My name is Marlis. I'm an engineer at Adhara. I'm going to run you through a demo of the code that we've contributed to Harmonia. Can everyone see my screen? Okay, or is it a little bit small? Can you see the console? Okay, I'll zoom in. So I'm going to show two flows, a PvP flow and a DvP flow. The PvP one will be between two Ethereum ledgers: we're going to do an FX swap, swapping some dollars for some pounds, both sides being Ethereum chains. And then we're going to go over to the DvP, where there are securities residing on a Corda ledger, and we're going to swap them for some cash on the Ethereum ledger. Just a quick comment before Marlis goes on. I know we use these terms in our world a lot, but PvP is payment versus payment. It's payment of one currency for payment of another, so typical FX settlement. DvP is delivery versus payment: it's delivery of securities versus payment of cash. So that's where PvP and DvP come from. PvP is effectively two currencies, an FX-type deal; DvP is typically securities for cash. Just in case you were wondering. First, before I run the demo, I'm going to quickly show you, on the right-hand side, the infrastructure that is going to be running here. The first part is the two Ethereum ledgers: I'm going to start up two Besu nodes and a signer for us. Then, previously, I set up three Corda nodes. 
I deploy the CorDapp, I run the nodes, and I started up a server to talk to node A and a server to talk to node B. There is a Corda decoder, or proof generator, that's going to generate some Corda proofs for us. And then there's some setup necessary. The protocol that's in play here is given by the EEA: it's a cross-chain messaging protocol. And whoever is going to be trusted, which is the validators on an Ethereum ledger, needs to be onboarded onto the other chain, because the cryptographic proof that the other chain is going to get will contain signatures from these entities. So I'm just going to start the setup. There are four Ethereum contracts that will be deployed on both chains, and the EEA spec structures them into three layers. We have an application layer that has your business logic; this will be our cross-chain XvP contract. Then you have a function call layer, which is responsible for doing remote function calls through this messaging protocol. Then the messaging layer is responsible for verifying your cryptographic proofs. And then we have a watered-down asset token, with just minimal functionality, in Harmonia. With the token contract you can create holds; you can make a hold perpetual, meaning it won't expire; you can execute a hold; and you can cancel a hold. So those are simply your four contracts that reside on both chains. There are other things: for example, we set the validator list for Ethereum to Ethereum, so chain A's validator list will be onboarded onto chain B and vice versa. For Corda, we onboard the participants that are part of the consensus process: this will be a notary and, in this specific case, the two participants of the transaction, which will be Bank A and Bank B. There are some other authentication parameters and things that also need to be onboarded; I'm not going to go into that. 
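The hold lifecycle just described (create a hold, make it perpetual, execute it, cancel it) can be sketched in TypeScript. This is an illustration only: the actual Harmonia token is a Solidity contract, and the shapes and names here (Hold, AssetToken, the notary field) are assumptions made for the sketch.

```typescript
// A minimal hold-capable asset token, modeled in memory.
interface Hold {
  tradeId: string;
  from: string;
  to: string;
  amount: number;
  notary: string;     // the party/contract allowed to execute or cancel the hold
  perpetual: boolean; // a perpetual hold never expires
  expiry?: number;    // only meaningful for non-perpetual holds
}

class AssetToken {
  balances = new Map<string, number>();
  holds = new Map<string, Hold>();

  createHold(h: Hold): void {
    const bal = this.balances.get(h.from) ?? 0;
    if (bal < h.amount) throw new Error("insufficient balance for hold");
    this.balances.set(h.from, bal - h.amount); // hard allocation: funds are locked
    this.holds.set(h.tradeId, h);
  }

  makeHoldPerpetual(tradeId: string): void {
    const h = this.holds.get(tradeId);
    if (!h) throw new Error("no such hold");
    h.perpetual = true;
    delete h.expiry;
  }

  // Only the designated notary (here, the XvP contract) may execute the hold,
  // which moves the held amount to the recipient.
  executeHold(tradeId: string, caller: string): void {
    const h = this.holds.get(tradeId);
    if (!h) throw new Error("no such hold");
    if (caller !== h.notary) throw new Error("caller is not the hold notary");
    this.balances.set(h.to, (this.balances.get(h.to) ?? 0) + h.amount);
    this.holds.delete(tradeId);
  }

  // Cancellation returns the held amount to the original owner.
  cancelHold(tradeId: string, caller: string): void {
    const h = this.holds.get(tradeId);
    if (!h) throw new Error("no such hold");
    if (caller !== h.notary) throw new Error("caller is not the hold notary");
    this.balances.set(h.from, (this.balances.get(h.from) ?? 0) + h.amount);
    this.holds.delete(tradeId);
  }
}
```

The key design choice mirrored here is that creating a hold immediately locks the balance, so a held amount can never be double-spent while settlement is pending.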
Then I'm going to run the integration test in the Harmonia lab that I'm going to demo from. You're going to see lots of logs, but I'll step through them as I go through the flows. So the first thing is the setup: the trade was already agreed, there is going to be a settlement, and a unique ID was agreed on. Bank A and Bank B, on their respective ledgers, will go and place a hold, and they will mark our application layer contract, the PvP contract, as the notary of the hold, meaning that contract can execute the hold. On the other side you have the same: Bank B, or Bob, will go and place funds on hold for Alice, or Bank A, and mark the contract as the notary. So I'm just going to go up to where we're running these things. What happened here is we're going through an Ethereum interop service to place the hold. This is not in the flow diagrams because it is not necessary; nothing prevents Bank A from going straight to the smart contract and placing a hold. So here you see it goes through our service to create the hold, and you get your receipt for creating the hold. You make the hold perpetual, and you get a receipt for that. Then you place the other hold on the other PvP ledger: receipt, make it perpetual. And then you submit a settlement instruction. Again, this is not shown in the flows because it's for convenience, and there are extra checks that we put in. For example, before we do the smart contract call, we first check that both of these holds are in place. This is more of a fail-fast, so that we don't go and fail on chain. But again, we cannot rely on the interop service, because nothing prevents Bank A from starting the lead leg straight on the smart contract. So the settlement instruction gets submitted here, and these are just state changes inside the interop service.
Eventually it will call start lead leg in the contract, and it will get a receipt for it. So what does start lead leg do? We follow a leader/follower approach here, building on top of the cross-chain messaging protocol that lets these two ledgers talk to each other. Something happens on your lead ledger: we start a lead leg. It is followed up on your follow ledger, and it is finalized on your lead ledger again. So for the lead leg inside our smart contracts: blue here is our application layer, green is our function call layer, and pink is our messaging layer. Our application layer PvP contract will check whether there's been a cancellation against the trade ID (we will get to cancellations later, when this will make sense). It will check that everything is in place for the hold, and then it will emit an event. Now, this is at the core of the cross-chain messaging protocol: this is the event that we will prove happened on this ledger. And inside this event, we wrap up the function that we want to call on the other chain. In this case, we want to call request follow leg in the PvP contract on the other chain. The event gets emitted, and the interop service picks up this event and builds a cryptographic proof, a Merkle-Patricia proof, around it, proving that this event happened, sealed with the validator signatures. This is nothing new; this is what Ethereum does in its block headers and how it handles its storage anyway. And then it does a cross-chain call: it performs a call from the remote chain to the other chain, giving it a proof and this event that wraps an instruction. So maybe I'll just go in here. The lead leg started, and an event was emitted. We know that this event will only be emitted under specific conditions enforced in the smart contracts, and we will only accept it from authenticated contracts on our other ledger.
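To make the leader/follower messaging concrete, here is a hedged Python sketch of the idea (all names hypothetical): the lead-leg function checks for cancellations and the hold, then emits an event wrapping the remote call it wants executed, and a stand-in relay packages the event with a proof for the cross-chain call.

```python
# Illustrative sketch of start lead leg plus the interop relay; the
# hash stands in for a Merkle-Patricia proof sealed by validator signatures.
import hashlib
import json

def start_lead_leg(trade_id, cancellations, holds):
    # application-layer checks before anything crosses the chain boundary
    if trade_id in cancellations:
        raise RuntimeError("trade already cancelled")
    if trade_id not in holds:
        raise RuntimeError("hold not in place")
    # the emitted event wraps the function we want invoked on the follower
    return {"trade_id": trade_id,
            "remote_function": "requestFollowLeg",
            "args": [trade_id]}

def interop_relay(event, validator_sigs):
    # stand-in for building the cryptographic proof around the event
    payload = json.dumps(event, sort_keys=True).encode()
    proof = {"event_hash": hashlib.sha256(payload).hexdigest(),
             "signatures": validator_sigs}
    # the relay then invokes the remote chain's entry point with both
    return ("performCallFromRemoteChain", proof, event)
```

The key property mirrored here is that the follower never trusts the relay itself; it trusts only the proof and the fact that the event can only be emitted by the authenticated contract under the checked conditions.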
So this is the event that was emitted. We check some authentication details, we build a proof, and we do the perform call with quite a big proof. We get a receipt that it went through. So what does it mean when it was successful? On the other side, the cross-chain function call layer receives the proof and the event, and it decodes it. When we onboard, we onboard different decoding schemes and verification schemes in our different layers. For this specific chain, chain A, we know there is only one decoding scheme onboarded, and we use it. We pass the event on to the messaging contract, which has only one verification scheme for that chain. Once the proof is verified and it can go through, it calls the remote function using the data that was wrapped inside the event that we proved did happen. So request follow leg is in our application layer contract. It checks for cancellations, checks that the hold does exist and everything is in order, and then it executes the hold, transferring funds from Bank A to Bank B on ledger B. Then, to prove this again to our other ledger, it emits an event, holding inside it an instruction that it wants to call on the lead ledger again: complete lead leg. So maybe I'll just go to where the event gets emitted again. We did a perform call, everything was executed, and the event was emitted. The event is caught by the interop service, which creates a proof, now on the GBP chain, and performs a call across again, back to the lead ledger, where the same thing happens. You have a certain decoding scheme registered, you decode the event, and you verify the proof that came with it. If everything is okay, it executes the function that was wrapped inside this event. This will be complete lead leg, and complete lead leg will then execute the hold on the lead ledger. And this is the happy path for PvP.
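The dispatch just described can be sketched roughly like this (hypothetical registries, not the EEA contract ABI): decode with the scheme onboarded for the source chain, verify with that chain's verification scheme, and only then invoke the wrapped remote function.

```python
# Illustrative per-chain scheme registries for the function call and
# messaging layers; names are assumptions, not the spec's interfaces.
import json

decoders = {}    # source_chain_id -> decode function (decoding scheme)
verifiers = {}   # source_chain_id -> proof check (verification scheme)
functions = {}   # remote functions exposed by the application layer

def perform_call_from_remote_chain(source_chain, raw_event, proof):
    event = decoders[source_chain](raw_event)       # function call layer
    if not verifiers[source_chain](event, proof):   # messaging layer
        raise RuntimeError("proof verification failed")
    fn = functions[event["remote_function"]]        # e.g. requestFollowLeg
    return fn(*event["args"])
```

A Corda source chain would simply register a different decoder/verifier pair under its own chain ID, which is exactly how the DvP flow later reuses this path.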
Just to follow through on the test that you see here: the perform call from the remote chain, you've got your receipt. And in the tests here, if you go and run it for yourself in the Harmonia lab, you will see that the amounts and everything are checked again. So then we come to another scenario: cancellations. The rulebooks say that there cannot be a case where only one of these legs goes through. Either both of them go through, or both of them are cancelled. And this brings us to cancellations: you must be able, under certain circumstances, to go and cancel your hold. To avoid race conditions and make all of these cases work, we said in the rulebooks that you cannot go and cancel on your own ledger unless there's already a trade marked as cancelled. So if you want to cancel your own hold on chain A, you have to go through chain B. When would we want to cancel? In this case, suppose Bank A does everything: it places its hold, and Bank A could even start the lead leg, but Bob, Bank B, did not place a hold. The start lead leg will fail. This is not indicated here, but suppose that did happen, or it didn't even get that far: Alice realizes she has waited long enough for Bob to place his hold, so she wants to start a cancellation, and she does so on the follow ledger. This goes through our interop service again to start the cancellation, but nothing prevents you from going straight to the contracts. So in this next test, you create the hold only on one side. You get a response, make the hold perpetual, get your receipt, and you initiate a settlement instruction and say that it must be used for cancellation. Eventually it sends the start cancellation instruction to the smart contract. What does start cancellation do? It checks that the hold does not exist in the smart contract, and then it marks the trade as cancelled against the trade ID.
Then, to prove this, it emits an event, and wrapped inside it again is a function that we want to call on our home ledger, chain A: this will be perform cancellation. So the event is again picked up by the interop service, a proof is built around it, and it is used to perform a call from the remote chain. The function call layer decodes, the messaging contract verifies, and you eventually call your perform cancellation function on chain A. Perform cancellation then checks whether this is a direct cancellation or a cross-chain cancellation (cross-chain just means it comes with a proof; there are two options here), and then it marks the trade on chain A as cancelled too, checks whether the trade is cancellable, and then cancels the hold. The final event is just for our API service, or interop service, to know that the flow is now complete. Now for the other case: when is it possible to cancel a hold on my own ledger? Suppose that after all of this happened, Bank B goes and places its hold. You must be able to cancel this hold again without having to go through the other ledger, because that process already ran on each ledger, and we know the trade was already marked as cancelled here. So in that case we allow Bank B to directly cancel its hold, because the trade was already marked as cancelled; it cancels the hold that was placed afterwards and just emits an event so that our interop services know what to do. As you can follow through here: you start a cancellation, an event is emitted, a proof is built around it, we call perform call from the remote chain, and that's the end of the flow; amounts are checked inside the test, of course. I'm not going to go through the other direction here, because PvP cancellation from the other side is exactly the same, given it's two Ethereum ledgers.
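The cancellation rule, cancel directly only if the trade is already marked cancelled on your ledger, otherwise go via the other ledger first, can be sketched as a small state machine (illustrative names, not the Harmonia contract):

```python
# Toy model of the cancellation rule across two XvP contract instances.
class XvpContract:
    def __init__(self):
        self.holds = set()       # trade IDs with a live hold
        self.cancelled = set()   # trade IDs marked cancelled

    def start_cancellation(self, trade_id):
        # runs on the counterparty's ledger: only valid if no hold exists there
        if trade_id in self.holds:
            raise RuntimeError("hold exists; cannot start cancellation")
        self.cancelled.add(trade_id)   # event + proof then relayed cross-chain

    def perform_cancellation(self, trade_id):
        # runs on the home ledger once the cross-chain proof is verified
        self.cancelled.add(trade_id)
        self.holds.discard(trade_id)   # release the hold, if any

    def cancel_hold_direct(self, trade_id):
        # a hold placed after cancellation may be cancelled directly
        if trade_id not in self.cancelled:
            raise RuntimeError("go via the other ledger first")
        self.holds.discard(trade_id)
```

Routing the first cancellation through the other ledger is what removes the race: by the time either side releases funds, both contracts agree the trade is dead.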
That was the case where Bank A does not place its hold and Bank B wants to cancel: Bank B has to go to chain A, generate an event from which we can build a proof that this trade was actually marked as cancelled on chain A, and then Bank B is able to cancel its hold. And in the same scenario, if Alice were to come and place a hold afterwards, she must be able to directly cancel her hold, and she can, because the trade ID was already marked as cancelled in a previous flow. So those are your PvP flows. Next I'm going to go to the DvP ones, which is probably the more interesting part, with the Corda chain. Are there any questions so far? I think the team has been doing a fantastic job of answering questions while you've been presenting, Marlis. Oh really? Okay, awesome. But if you have a question for Marlis, please let us know in the chat. So Corda works a little bit differently. State is not updated in place like on Ethereum: if you want to update a state, you create a new state, consuming the previous one, and a state can only be consumed once. Our example here is driven by our use case with HQLAx. They work with digital collateral records (DCRs): a DCR is pretty much a basket of securities that is issued by an issuer and can be transferred to another owner. For HQLAx we had to fit the whole earmark flow into their model, so that we can earmark the basket of securities before we transfer it and it fits into our flows. For the flows here we assume (though I will show the creation in each test case, because I have to create them anyway) that there's already this DCR basket of securities created on the Corda ledger, and that there's already a trade created, which is a different state with a different Corda contract to verify it. This trade state was needed mainly for cancellations, and I will get to that later. On the right-hand side, for Ethereum, it will be the same principle as before.
Bank B will place a hold, placing, I think it's pounds in this example, on hold for Bank A. On the Corda side, a draft transaction will be created to take this issued DCR basket and earmark it for the other bank, Bank B. If you think in Corda terms, your input state will be this available basket, the transaction will consume it, and the output state will be an earmarked basket. Bob will, very importantly, validate this, because we don't have validating notaries, and he will sign it, and the notary will then attest that the inputs are unique. So at this point in time we have a transaction in both Bank A's and Bank B's vaults on Corda, and we don't have the concept of an event that we have on Ethereum. It's not directly translatable, so we're going to use the best thing we have, which is this signed transaction. We're going to lift it off the Corda ledger and wrap a proof around it, turning it into a Merkle proof. So we come to before we start the lead leg, which is not an Ethereum function here. We technically wanted the Corda side to build the Merkle proof out of the Corda transaction, but our use case with HQLAx demanded otherwise: we built it into our interop service, so that it creates a Corda proof from a raw signed Corda transaction, and this is why we needed that decoder, or proof generator, running separately.
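A minimal UTXO-style sketch of that consume-and-produce pattern, with the notary's uniqueness check, might look like this (hypothetical names, not the Harmonia CorDapp):

```python
# Toy Corda-style vault: updating a DCR means consuming the old state and
# producing a new one; the notary only attests that inputs are unconsumed.
class Vault:
    def __init__(self):
        self.states = {}      # state ref -> (status, owner)
        self.consumed = set() # refs the notary has seen spent

    def issue(self, ref, owner):
        self.states[ref] = ("AVAILABLE", owner)

    def transact(self, input_ref, new_ref, new_status, new_owner):
        if input_ref in self.consumed:
            raise RuntimeError("notary: double spend")
        self.consumed.add(input_ref)   # the notary's uniqueness attestation
        self.states[new_ref] = (new_status, new_owner)
```

The earmark step consumes the available basket and produces an earmarked one; the completion step consumes the earmarked basket (together with the XvP state, omitted here) and produces the transferred one.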
So from the transaction, it will submit a settlement instruction with the raw transaction, which will then build a proof and do the perform call from the remote chain. Where we previously did this with an event and an Ethereum Merkle-Patricia proof, now we're doing it with a Corda transaction and a Corda Merkle proof. The same process applies: there is a Corda decoding scheme and a Corda verification scheme registered on the Ethereum ledger, so it knows how to decode this and verify the Corda proof. It calls request follow leg as before, which executes the hold on the Ethereum ledger and emits an event wrapping within it an instruction to complete the lead leg on the Corda ledger. Now, this is exactly as if there were a smart contract function on the other ledger, which there is not for Corda, but ideally one would like to be driven by this, and the Corda flows should be equivalent to a complete lead leg. An Ethereum proof will now be built for this event, and it will use a callback. This is how our use case with HQLAx works: they would submit a raw transaction to us with a settlement instruction and give us a callback for the proof, because we had to build the proof for them, and also because the Ethereum nodes were not exposed to HQLAx. So via the callback instruction, a Corda service here will initiate the Corda flow to confirm the DCR transfer. This is essentially moving this earmarked DCR to a transferred DCR. Very important here: Alice will verify the Ethereum proof before anything else proceeds in the CorDapp. She creates a draft transaction that takes two inputs: it consumes the XvP (or DvP) state, which is in its created state, and the earmarked DCR, and it outputs the transferred DCR. Bank B will validate and sign, and the notary will attest that the inputs are unique. And that is the end of a happy flow with Corda as the lead ledger and Ethereum as the follow ledger. Then obviously we
need to deal again with cancellations: what happens if one party does not place a hold, for example? Oh, I have to go through the demo here, sorry, I should have skipped to it a little sooner. So what happens first: we create a hold on the Ethereum ledger and make it perpetual, and then we create the states on the lead ledger. Like I said before, because it's a test, for every case I'm going to create a basket and create the trade on the Corda ledger, and then, importantly, the thing we're going to prove is the earmarking of this DCR basket, and this is the Corda transaction. It then receives a settlement instruction with this raw transaction, calls the decoder service, or proof generator service, to build us a proof from this Corda transaction, and then performs the call from the remote chain with a rather large proof. We get a receipt back: it was verified successfully. Then an event gets emitted as before on the Ethereum chain, the interop service wraps it into a proof and sends it on via the callback to the Corda flow, which now outputs the state taken to transferred, and here the proof is for that state. That's the end of the happy path flow. We can again check balances, but that is all in the tests, if you want to run the Harmonia tests yourself. So then, to come to cancellations, which are different from Ethereum. I'm also going to run through just one, because it's the same on the other side. What happens is: Corda successfully earmarks its basket, but Bank B on Ethereum does not place its hold. Alice might first start the lead leg by submitting the settlement instruction, and it will fail on chain. So what does she do then to cancel the earmark on her basket of securities? She has to go through the Ethereum ledger. She starts the cancellation, which initiates the function on the smart contract that emits an event, from which we're going to prove that this trade is now marked
cancelled on the Ethereum ledger. This event will be used as proof: we wrap it in a Merkle-Patricia proof and pass it back to the Corda side via a callback, and then the cancellation flow kicks off on Corda. Again, Alice, or Bank A, has to verify the Ethereum proof, then create a draft transaction cancelling her earmark. What will the inputs be? You'll have the earmarked DCR, and then you'll have your XvP state that's not cancelled yet; you consume it in its created state, and you take your DCR state back to available. This is the equivalent of the Ethereum side cancelling a hold. And then, if Bob comes afterwards and places a hold on the Ethereum side, he can, as before, cancel it directly, because there was already a cancellation against that trade ID. Going back to the case with both earmarks in place: this is the one I just spoke through. We create a basket, we create a trade, we earmark the basket on the Corda side, and send the settlement instruction, or rather the cancellation instruction, to cancel the hold. It goes via the callback and starts the cancellation on the Ethereum ledger with the start cancellation function; you get a receipt, it emits an event, we wrap it in a proof, it is sent via the callback to the Corda side, and your basket is taken back to available status. Then you can check your balances again. The last case, then, is if the Ethereum side places a hold but not the Corda side. Bob, or Bank B, wants to cancel his trade, and he has to go through the other ledger. So he initiates a Corda flow to cancel the trade, and he uses that as proof to provide to the Ethereum ledger. What does cancellation of a trade entail? It's a flow that consumes the XvP state in its created state, and the output state is a cancelled XvP state. So this is again a Corda transaction that we pass on, raw, to the interop service, which
wraps it into a proof, the Corda proof. We perform the call from the remote chain to the Ethereum ledger, the Corda proof gets verified, and we call perform cancellation on the Ethereum side. The trade is then marked as cancelled, the hold is cancelled, and the final event is just for our interop service to know that the flow is now complete. The same happens on the other side: if Alice comes afterwards on Corda and places an earmark, she wants to be able to cancel it directly in this special case where there was already a cancellation. How does this work? You consume the DCR state, which should be earmarked, and the XvP state, which is in its cancelled state, and you take your DCR state back to available. And that is it for our demo; I don't think there are any other cases that would be useful. Okay, are there any questions, or did someone again do a very good job of answering them? We were trying to answer in the chat. Okay, great. Thanks very much, Marlis, for taking us through that. Just to say, if you didn't catch it: almost all of that code is available in Harmonia. There's just one piece that we will be contributing over the next couple of weeks, which is the signer, which obviously needs to sign the transactions, but everything else is there. All the documentation is there, so please feel free to jump in, have a look, and read through. We've tried to make it easy enough to follow but comprehensive enough to cover everything that's in there. There's a huge amount; it's many years of work, actually, that's gone into it. We'd love to get some feedback and some views on it, and improve it as we go. All right, I'm going to hand over to Dominic to carry on with his presentation. Okay, super. I'm going to share my screen in a moment. This will be a fairly short presentation, basically on the steps through a cross-chain swap from the Corda
side, using the code that Eduardo has mainly contributed, and I'll talk a little as I go about what it is I'm showing. So, share screen; if that doesn't work, I might have to drop out and come back in. Right, so hopefully everybody can see my IDE. What we have here is a single unit test, and what we're doing is running up a mock Corda network. That is to say, we haven't deployed a bunch of Corda nodes from the command line and sent RPC messages to them; instead, what we'll see is quite a lot of steps where a given Corda identity runs a flow with a given name, and that's how I'm going to drive the sequence of events here. What I'm going to show is a sequence of Corda flows being executed that cause various things to happen, and the sum of those things is the atomic swap, all of it driven from the Corda side by the Corda flow framework. For the preamble: we have four identities in this picture. There's Alice, who holds tokens in an ERC-20 contract on the EVM network and wants to use them to purchase an asset on the Corda network, and we have Bob, who holds the asset to be purchased. And there are two other characters sort of hanging around on the periphery of the scene, Charlie and Dave, who are acting as trusted relays, attesting to the finality of transactions on each network. The reason they're involved is that when Eduardo was first designing and implementing this code, the scope was effectively all possible EVM-based Ethereum platforms: not only those which have deterministic finality, where there's a known pool of validators whose signatures you can pull to see if something's been finalized, but also potentially things like Ethereum mainnet, where determining whether a block is finalized in a way you can check offline is a much more complicated business. Also, it wasn't entirely clear whether we had a way to check Corda signatures, which use a different signing
scheme to Ethereum's, on the Ethereum virtual machine. So on both sides, what we have are witnesses who will basically say: I have seen this Corda signature on this thing, and here is my EVM-readable signature to say that I have seen it. And on the basis that these witnesses are trusted on each side, that's how we establish finality. This isn't necessarily the mechanism that Adhara's implementation uses at all, because they have the ability to check and prove finality in different ways, but it is a kind of general-purpose fallback mechanism for when those approaches fail you. It does also illustrate the process whereby we send out for signatures, get them, and present them as evidence of finality, and I'll talk a little more in a moment about how that fits into the general framework we're using. The other identity of note is the notary, which is a standard component of a Corda network. It signs transactions on the Corda network, but it signs them using a non-EVM-native signature scheme. For those who aren't so familiar with Corda: it's basically a UTXO ledger, which means that every transaction potentially produces output states and consumes the output states produced by previous transactions. The notary's job is to observe, for any given transaction, whether the states it consumes have already been consumed by any other transaction, and forbid the transaction if they have. So it's really only there to prevent double spends. The transaction data is formatted as a Merkle tree, and everything other than the hashes of the input states is torn off, so the notary doesn't actually see the details of the transaction; it just sees what states are being consumed. The notaries here are non-validating as well, a noticeable difference from the Ethereum model, obviously, where your validators run the transaction code. Each of Alice, Bob, Charlie, and Dave has an identity on the Corda network, which we write as Alice at
Corda, Bob at Corda, and so on, and a wallet with a signing key on the EVM network, which we write as Alice at EVM, Bob at EVM, and so on. If I take a quick look at the testnet setup here, you can see that these are currently hard-coded into the test. We've got the Ethereum addresses for Alice, Bob, Charlie, and Dave, and we also have their private keys to sign with. And we've got some addresses: the address of the deployer; the protocol address, which is actually the address of a contract called swap vault; and the gold token and silver token deploy addresses, which are addresses of ERC-20-based tokens. Very basic things: they have total supply, balances, the ability to transfer, and, importantly, the ability to allocate a portion of your balance to another account to act on your behalf, which is something that's used here. You can also see we have a mock network set up and some mock nodes, so essentially we're running a Corda network in a box, which is what we're using to run through this sequence of actions. Okay, so given that: all of the actions in the swap are taken by running Corda flows. Some of these create distributed, finalized transactions on the Corda network; some of them use the signing key for the corresponding EVM wallet to propose transactions to the EVM network. So sometimes Corda will run a flow that basically says: using the signing key of Alice at EVM, sign a transaction that does this. Obviously, in a real production environment, you probably wouldn't have your Corda node holding the signing key for your Ethereum wallet; there would be some kind of audited proxy or relay through which you performed an action, if you don't want to combine those kinds of authority in one place. The other thing the flows do is obtain attestations from Charlie and Dave. So, given that Alice and Bob have already agreed to make the swap, the sequence of actions runs like this: first of all, Bob drafts a Corda
transaction. In Corda, every transaction is effectively a Java object with a bunch of different pieces of data attached to it. You can create one, serialize it, and obtain its hash without actually performing the transaction or notarising it, and you can do so knowing that if you later do sign and notarise the transaction, it will have the same hash. This enables us to draft a transaction, as Bob does, and send it over to another party to verify: does this do what you want, does this give you the benefit that you're seeking from the swap? So Bob drafts a Corda transaction which, if it's signed by him and notarised, will transfer ownership of the asset to Alice. Alice verifies that the draft transaction delivers the benefit she wants, and then commits tokens on the EVM network, and that's two steps. First, Alice at EVM uses the ERC-20 allowance function to allocate the tokens to an allowance held by the swap vault contract, so the swap vault contract can then act on those tokens. Secondly, Alice at EVM calls the swap vault contract, using its commit-with-token function, to transfer this allowance to the contract itself, simultaneously setting up a commit state which records the conditions under which the tokens can either be returned to Alice or transferred on to Bob. At this point the contract holds the balance and has the ability either to return it to Alice or forward it on to the intended recipient. Bob at Corda then goes ahead and signs and notarises the draft transaction, and then collects EVM-checkable signatures on the draft transaction hash from Charlie and Dave, attesting that they have observed and validated the notary signature over the transaction hash. If you have the ability to check notary signatures on the EVM side, you don't need this bit, and in fact in Adhara's implementation there is some code that does just that. But again, if
you're working with a platform that just can't read Corda notary signatures, you have to present it with some evidence from somebody that the transaction is finalised. The next step is that Bob transfers the tokens to himself using the transfer function on the swap vault, presenting the signatures. At this point the swap vault contract basically says: if you can show me signatures on this transaction hash, you can have these tokens. Importantly, when this happens, the transfer function records an event into the transaction receipt for that transaction, and we can show a proof that that event appears within the receipt storage for the transaction's block; that's how we construct a proof, readable on the Corda side, that the transfer has taken place. Now Alice needs proof that this transaction has been finalised, and again, where your EVM platform has deterministic finality, where you've got a set of validators whose signatures you can check, that's quite easy to do. But in circumstances where it isn't, essentially we say again to Charlie and Dave, who are on hand to attest to things: can you check that these blocks have been finalised, and sign to say that you've observed that they have? And then you basically trust Charlie and Dave's signatures in the matter. So Alice at Corda collects those signatures from Charlie and Dave on the block hash, basically saying: yes, we've observed that this is finalised on the EVM network. Finally, Alice at Corda runs the unlock asset flow, which uses those signatures together with the Merkle-Patricia inclusion proof to show that the transfer took place and that it was finalised, and that satisfies the Corda contract verification rules for unlocking ownership of the Corda asset itself.
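Both the notary's tear-off described earlier and the receipt inclusion proof here come down to Merkle proofs: verifying one revealed leaf against a signed root using only sibling hashes. A toy sketch of that idea (illustrative; not Corda's FilteredTransaction format or Ethereum's Merkle-Patricia trie, which both differ in detail):

```python
# Toy binary Merkle tree over a power-of-two number of leaves.
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [h(l) for l in leaves]
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def proof_for(leaves, idx):
    # collect sibling hashes from leaf to root for the leaf at position idx
    layer = [h(l) for l in leaves]
    path = []
    while len(layer) > 1:
        path.append(layer[idx ^ 1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return path

def verify(leaf, idx, path, root):
    # recompute the root from one revealed leaf plus sibling hashes only
    cur = h(leaf)
    for sib in path:
        cur = h(cur + sib) if idx % 2 == 0 else h(sib + cur)
        idx //= 2
    return cur == root
```

The verifier never sees the other leaves' contents, which is exactly why the notary can attest to input uniqueness without reading the transaction details.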
So let's step through this; I'm going to start running it in debug. The first thing we do in this test is create a random ID for an asset, and we run this generic asset flow, which literally just creates a Corda state representing the asset that Bob holds. Then we have two further preparation steps. We create an expectation for the event itself; in other words, we capture the properties of the event that we expect to observe. We expect that a given amount, in this case one token, is going to be transferred from Alice to Bob against the gold token, and also that Charlie and Dave will have been the signers involved in authorising that to happen. We can't actually build the event itself yet, because the event is going to include a reference to the draft transaction hash, and we don't have that, because we haven't created the draft transaction. There's an interesting kind of circularity here, where one thing depends on the other, but we can't let them form cyclical dependencies. So we're going to pass this swap vault event encoder into the draft transaction, but it obviously can't know yet what the draft transaction hash is, so it can't make reference to that data yet. For the draft transaction, Bob now runs a flow, the draft asset swap flow: here are the transaction hash and state index of the asset we want to transfer, which is the state emitted by the issuance we did earlier; we're going to transfer it to Alice; this is the notary we're going to notarise it with; here are the participants who will witness the notary signature and sign it with an EVM-readable signature; and we have signature thresholds saying that they both have to sign.
We could have a larger pool of witnesses and do it with a threshold, M of N, where we say five out of these six or five out of these seven need to sign. And we pass in the SwapVault event encoder, basically saying: if you can show an event that matches what this would emit when given the hash of this transaction, then you can have the asset. Okay, so we've reached this point, and we've done a print balances, so we're just looking at the balances held by Alice, Bob and the protocol contract before the commit. So the next step is that Alice commits her tokens to the protocol contract: Alice runs the commit token flow, and I'm just going to show you that fairly quickly. What it actually does when it's run is use the EVM interop service provided by Corda, and what we've got here, using Web3j, is a wrapper around the SwapVault contract and the ERC-20 contract. So the first thing we do is run an approval, basically saying that we approve this amount to the contract address, and then we commit the tokens on the SwapVault, which takes the money using that approval and creates the commitment that can then be moved back or forth depending on the conditions. And you can see that the commitment records the signers that we want to see, the signature thresholds, who would receive the money if everything is in order, and what the amount is. This takes a moment to run, so let's see if there are questions while that's going.
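The approve-then-commit sequence above follows standard ERC-20 allowance mechanics: the vault can only pull Alice's tokens after she has approved it as a spender. Here is a minimal in-memory model of that mechanic, purely illustrative; the class and method names are assumptions for the sketch, not the actual SwapVault or token contract API.

```typescript
// Illustrative in-memory model of ERC-20 approve/transferFrom semantics.
class Token {
  private balances = new Map<string, bigint>();
  private allowances = new Map<string, bigint>(); // key: `${owner}:${spender}`

  mint(owner: string, amount: bigint) {
    this.balances.set(owner, this.balanceOf(owner) + amount);
  }
  balanceOf(who: string): bigint {
    return this.balances.get(who) ?? 0n;
  }
  approve(owner: string, spender: string, amount: bigint) {
    this.allowances.set(`${owner}:${spender}`, amount);
  }
  // Roughly what the vault's commit does under the hood: pull the approved amount.
  transferFrom(spender: string, owner: string, to: string, amount: bigint) {
    const key = `${owner}:${spender}`;
    const allowed = this.allowances.get(key) ?? 0n;
    if (allowed < amount) throw new Error("insufficient allowance");
    if (this.balanceOf(owner) < amount) throw new Error("insufficient balance");
    this.allowances.set(key, allowed - amount);
    this.balances.set(owner, this.balanceOf(owner) - amount);
    this.balances.set(to, this.balanceOf(to) + amount);
  }
}

const token = new Token();
token.mint("alice", 1n);
token.approve("alice", "swapVault", 1n);                   // step 1: approve
token.transferFrom("swapVault", "alice", "swapVault", 1n); // step 2: commit pulls the funds
console.log(token.balanceOf("swapVault")); // 1n
```

Skipping the approval, or approving less than the committed amount, would make the commit revert, which is why the flow always issues the two calls in that order.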
On the question about the relays: the important thing here is that once Bob has taken the funds from the commitment made by Alice, Alice then has to be able to obtain the asset from Bob without Bob's further intervention, purely by showing a proof that this has happened. Otherwise you'd be in a situation where Bob has taken the funds, Alice says, okay, you now approve my having the asset, and Bob vanishes or does not cooperate, and we're in a stuck state. So what we want is the ability for Alice, in obtaining the asset, to act without further involvement from Bob, purely by presenting proof. And because the proof involves not only proof that the given transaction has certain properties but also proof of finality, we need Charlie and Dave, effectively, to provide that proof of finality. And that's true, as I say, only in the case where we don't have deterministic finality, where we can't just check some signatures over the block hash ourselves. Okay, so back here, let's run forward. Once the commitment has run, that's when Bob signs and notarises the draft transaction, now that we can see the money is in a committed state. By signing the draft transaction we create a situation where Alice can automatically obtain the asset if she can show that Bob has taken the money. Bob also grabs the signatures from Charlie and Dave attesting that the draft transaction was notarised, because Bob needs to show that the lock has been set up in that way in order to take the money. Once we reach this point, Bob runs claim commitment with the signatures, basically saying: I would now like the tokens that were committed; here are the signatures showing that I have notarised the transaction that places the Corda asset such that Alice can claim it if I've taken the cash.
Once that's gone through, the next step is for Alice, in turn, to collect signatures showing that the transfer has occurred, and then use them with the unlock asset flow to obtain the asset for herself. What would take the place of this in the case where we had deterministic finality is that Alice would simply obtain the signatures on the block hash itself from the validators on the EVM side and use those instead; so we're using these witnesses, as I say, as a substitute for that. Okay, so the unlock asset flow unlocks and finalises the transfer to the recipient by producing and presenting proofs that the EVM tokens were transferred to the expected recipient. You took the money, therefore I can have the asset, and at this point Bob can't stop this, provided I can show proof that it happened. Run forwards to the end and we have a couple of assertions to make sure things have ended up in the state we expected. We verify that the unlocked asset is now owned by Alice, and we can do that by taking the transaction that comes out of the unlock asset flow and showing that it has an output state, an ownable state, whose owner is Alice. We also show Bob querying his vault to see what assets he holds: he no longer holds that asset, which confirms the transfer of the asset on the Corda side. So once the run comes to the end, let me find the outputs and the different balances, just to show the movement of tokens between the counterparties as this goes on. There's a lot of setup output here, since it's creating this mock Corda network. The balances before commit: Alice has some vast number ending in one, Bob has some vast number ending in nine, and the protocol holds nothing. Balances after commit: that one has gone down to zero, Bob's nine is still at nine, but the protocol now holds one token. And then the balances after transfer: Alice is still down at zero, Bob's nine has gone up to ten, and the protocol no longer holds anything.
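The balance print-outs described above follow a simple invariant: the committed token sits briefly in the protocol (vault) contract between Alice's commit and Bob's claim. A minimal sketch of that journey (ignoring the large random balances and using unit amounts):

```typescript
// Illustrative walk-through of the demo's three balance snapshots.
type Balances = { alice: number; bob: number; vault: number };

// Commit: Alice's token moves into the protocol (vault) contract.
function commit(b: Balances, amount: number): Balances {
  return { ...b, alice: b.alice - amount, vault: b.vault + amount };
}
// Claim: the vault releases the token on to Bob.
function claim(b: Balances, amount: number): Balances {
  return { ...b, vault: b.vault - amount, bob: b.bob + amount };
}

let b: Balances = { alice: 1, bob: 9, vault: 0 }; // before commit
b = commit(b, 1); // after commit: { alice: 0, bob: 9, vault: 1 }
b = claim(b, 1);  // after transfer: { alice: 0, bob: 10, vault: 0 }
console.log(b);
```

The transient vault balance is exactly what makes the unhappy paths possible: while the token is held by the protocol, it can still be returned to Alice if Bob reverts instead of claiming.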
So effectively there's a brief moment where the protocol holds the balance, it holds the token, and then it gets transferred on to Bob in this case. There are also tests I'm not going to get into now, because I'm keeping this short, beyond the commit-claim swap test. This is all in the repository; you can see various other paths, including the unhappy paths, where for example Bob can revert Alice's commitment and, by presenting proof of that reversion, can get his asset back. So there are ways to back out of the locks on both sides, and that's all illustrated by the other tests, but I just wanted to walk through the main example. Okay, so that's it for my demo; I kept it mostly short, I hope. If anyone's got any questions, happy to take them now.

We do have a question in chat, thank you for that, Dominic: if both Alice and Bob have accounts and can initiate transaction flows on both EVM and Corda, why are the relays Charlie and Dave needed? Could Alice and Bob do the checks and gather the proofs themselves, with the relays just providing a convenience service so they don't have to handle that part?

Yeah, so calling them relays is actually a bit confusing in this context, because particularly in a bridging scenario a relay is something involved in getting information across from one network to another. In the Harmonia working paper we call them witnesses, which may be a better name. Their job is effectively to attest, in a form that one network can consume, that something has been finalised on the other. So underlying all of this is proof of action. Effectively what we want to do is construct conditions on each network that say: this can only be done if you can show proof that some reciprocating action has been carried out in the other network. And proof of action breaks down into two parts.
One part is what I call proof of outcome, in other words proof that a given transaction did the thing that you want; the other is proof of finality, which is basically evidence that that transaction is recognised by that network as valid and final. On the EVM side, the proof of outcome is done by looking at events within the event log, so the transaction receipt, and constructing your Merkle inclusion proofs right the way up to the block hash. On the Corda side we can't do that; it's very difficult to make a Corda transaction externally legible. So what we do is have the recipient of the benefit of the transaction validate the draft transaction, and they sign to say: yes, I agree this does what I want, and I will get the benefit I'm looking for out of it. But those are only the proof-of-outcome parts. For the proof of finality you need some way of checking finality on a remote network, and you need to be able to do so without being online in that network, without having to go away and ask it, because the smart contract verification obviously has to be deterministic and effectively offline; really you just want to be checking some signatures, or some other kind of cryptographically verifiable evidence, that something has happened. So one way or another we need verifiable evidence of finality. If you can read Corda signatures, then the Corda notary's signature on a transaction hash is proof that that transaction is final on the Corda network. If your EVM network is using a quorum from some fixed validator pool, you can again say, okay, these people signed it, therefore it's final. But if it's something like Ethereum mainnet, it's quite a lot more difficult to establish finality in a way that's offline-checkable.
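The "Merkle inclusion proofs right up to the block hash" idea can be sketched with a simplified binary Merkle tree. Note this is a simplification on purpose: Ethereum actually stores receipts in a Merkle Patricia trie over RLP-encoded data, but the verification shape, hashing a leaf step by step up to a trusted root, is the same idea.

```typescript
import { createHash } from "crypto";

// Simplified binary Merkle inclusion proof (illustrative; Ethereum's receipt
// trie is a Merkle Patricia trie, not a plain binary tree).
const sha256 = (data: string) => createHash("sha256").update(data).digest("hex");

type ProofStep = { hash: string; left: boolean }; // sibling hash + its side

function verifyInclusion(leaf: string, proof: ProofStep[], root: string): boolean {
  let h = sha256(leaf);
  for (const step of proof) {
    // Combine with the sibling in the correct order at each level.
    h = step.left ? sha256(step.hash + h) : sha256(h + step.hash);
  }
  return h === root;
}

// Tiny two-leaf tree: root = H(H(leafA) + H(leafB)).
const leafA = "receiptA";
const leafB = "receiptB";
const root = sha256(sha256(leafA) + sha256(leafB));
console.log(verifyInclusion(leafA, [{ hash: sha256(leafB), left: false }], root)); // true
```

The crucial property, as with the signature checks, is that verification needs only the leaf, the proof path, and a trusted root hash, so a Corda contract can run it deterministically and offline.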
So the witnesses on both sides in this example are basically there to say: in the event that you can't read Corda signatures, so you can't tell whether or not the signature on a draft transaction is valid, and in the event that you can't easily prove finality on the Ethereum side, you can take the signatures of these witnesses as evidence. They are assumed to be parties who are uninterested in the outcome of this particular transaction, who are just neutrally saying, yep, that definitely happened in my network. And that's how you complete the proof of action, by using them to provide the proof of finality. Hope that clears it up.

That's great, Dominic, thank you. And thank you to Jerry for asking that question. We've got a question from Jim: the Harmonia agents on each platform communicate with each other to coordinate the transfer process with proofs; what is the protocol the two agents use to communicate with each other?

So in the example I've just shown, which works quite differently from Adhara's, and I might call on Peter to answer from their side: in the Corda case, every identity in the Corda network has its own node within a permissioned network, and they send messages to each other via a peer-to-peer messaging layer. So the way they communicate within Corda, Bob talking to Alice, Alice talking to Bob, is via that peer-to-peer messaging within the Corda network. The way they talk cross-network, the way that Bob-at-Corda instructs Bob-at-EVM to do something, is through, in this case, a Corda service that just directly makes Ethereum RPC calls against an Ethereum client. And that's where the Web3j magic comes in. For example, in print balances we have a get balance flow, and what that does is obtain the EVM interop service, grab the ERC-20 provider for the given token address, and just call balanceOf.
So we are literally making an RPC call across to the other network to get information back, or to propose a transaction, or whatever. The commit token flow is the same when we actually run it: we've got the provider for the swap contract and the provider for the ERC-20 contract. The Corda flow here is basically saying: I am Bob at Corda, and as Bob at EVM I would like to approve this amount to this contract address, and then I would like to instruct this contract to create the commitment. And that's just plain, it's actually just JSON over HTTPS.

Awesome. Thank you. Can anyone from the Harmonia team help out with that one as well? All right, so Peter just said: we simply use an HTTP API call in the Adhara example. In the real world, as Dominic has mentioned, it's internal bank communication. So that's great, thank you both for that. Jim, I hope that helps. If you do have a question, please put it in chat.

Yeah, so just to clarify, that will use whatever infrastructure the banks have. It could be MQ infrastructure, it could be something hosted in their own environment, it could be an mTLS-type connection. We've seen a number of different protocols used for bank-to-bank communication, or component-to-component communication within banks, service-mesh-type things. That's typically how it gets done, but it's obviously locked down within the bank's infrastructure. Thanks. I will stop sharing my screen at this point and hand back to Peter.

Okay. Thank you, Dominic. Thank you, Peter. Thank you, Milais, for the demos and the great explanations. I will build on everything that has just been said, and I will show how you can work with smart contracts, such as the ones that were just demoed, within the confines of the framework. To get started with that, I've created a little list of steps that you can take to try and follow along at home.
Now, 99% of the time these steps don't work on other people's computers, because it's impossible to predict all the operating-system-level dependencies that could be missing, or incompatibilities with the operating system itself, or other kinds of issues, like not having enough memory in your machine. But nevertheless, I'll explain each step, I'll show it, and I'll demo it working as well, hopefully, if the demo gods are with us. I put a link in the chat that takes you to the readme file of the workshop demo; it's a high-level list of steps to execute, and if anyone has any questions, I'm here. I also wanted to highlight once again that we have daily pair-programming calls where you can drop by in the future if you have any questions or anything to clarify about all of this. So with all that said, let me share my screen. The first thing you do is clone both repositories. Now, if you look at the actual URLs, you'll see that they point at my personal forks of both of these repositories, not the upstream repositories, and the reason for that is that I've had to make some changes to make the demos work. And the first mistake I made is that step five should be executed only later: the first thing to do is to clone the repositories and then move on to step seven, and we'll get back to step five. I've already cloned all of these repositories and everything is set up on my side, so I'll get on with actually showing off the pieces. One thing that I needed to do on my fork was to alter the migration script for one of these contracts. Within the Harmonia lab there's this R3 atomic swap folder that has the EVM interop workflows, and as I was trying to deploy it, working with the main branch of the lab, the upstream repository, that did not have this migration script, so I had to add it. So if you check out my branch on my fork, this is an extra thing that you get.
This is what makes the contract deployment functional, and I wanted to highlight it because, in the spirit of collaboration, I'm planning on sending a pull request to the Harmonia lab; we can talk about that more later. All right, so I'll show you the build working in the harmonia subfolder. You have to change directories down into the corda directory, and then you run the Gradle build. Now, the tests right now are not passing, so you just do the build, and what happens is that you end up with this build folder that has a libs folder containing your jar file with the workflows. The same way, there will be a jar file in the contracts folder, and also in the common folder. One of the ways Cacti helps you is that we ship an all-in-one container image for the supported ledgers, in our case Besu and Corda, just for the scope of this workshop. And you can deploy these jar files to a running Corda ledger inside the container just by sending them through a REST API endpoint. Now I have to highlight that the REST API endpoint for deploying contracts is strictly for development only. You shouldn't use it in production, because certain shortcuts had to be taken in order to make it much more convenient to use, which also means that if you were deploying your contracts in production with that endpoint, it would not be considered safe. So after that, we can go over to the steps. I showed you how to build, and then I left some steps here on how to deploy contracts to the Besu ledger as well. This is mostly just an extract from what's in the readme of the Harmonia lab already, but there's one slight change: if you see here, the network is specified as cacti. What that does is, if you look at the hardhat.config.js file in the evm folder, you'll see that I've added this little snippet.
These are all accounts from the example accounts of the Besu genesis file as of version 1.52, I believe. And okay, no questions so far. So with this cacti network, this is what you get: the all-in-one ledger gets pulled up on this RPC endpoint, and when you specify on the command line that the network should be cacti, the tool picks up this information and makes sure that it gets deployed to the right ledger. Continuing down the steps, you also have to build a Docker image, but first you've got to switch over to the cacti repository. What this does is build a more up-to-date version of the connector. This is also a necessary step, only temporarily, because I had to add some additional features to the connector in order to support the demo itself. Specifically, the endpoint for querying the vault, the Corda vault, has been added. So now, if you issue an asset the same way that Dominic was just demoing, then with this new endpoint of the Cacti Corda connector you can query the state of that asset and take a look at the data. I'll also show that the Docker build works; I just have to paste it in here. On my machine this image has already been built, so all you see is the cache being accessed; on your machine it will take a few minutes, because it's not exactly fast. After that, you can do a series of steps again, which is mainly just configuring the cacti build: installing the dependencies, compiling the code, specifically the TypeScript code. Then there's this build-dev script that you also have to run, and right now that won't finish all the way, because there's a little issue with the webpack bundling of the JavaScript sources that I haven't had time to fix yet. But luckily that only happens at the very end of the process, and all the things that we need from the build are actually already there by the time the failure happens.
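The "little snippet" added to the Hardhat config is a custom network entry pointing at the all-in-one Besu ledger's RPC endpoint. Here is a sketch of what such an entry generally looks like; the URL, Solidity version, and the account key are placeholders, not the actual values from the demo's Besu genesis file.

```javascript
// hardhat.config.js (sketch) - the "cacti" network entry lets you run
// `npx hardhat ... --network cacti` against the all-in-one Besu container.
module.exports = {
  solidity: "0.8.19", // placeholder compiler version
  networks: {
    cacti: {
      url: "http://127.0.0.1:8545",          // assumed all-in-one Besu RPC endpoint
      accounts: ["0x<dev-account-private-key>"], // placeholder dev account key
    },
  },
};
```

Selecting `--network cacti` on the command line is what makes the deployment tooling target this RPC endpoint instead of Hardhat's default in-process network.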
So you can just run the command and not worry about it not finishing all the way through. At this point I'll start the ledgers. Well, actually, before I even do that, I'll show the Docker Compose file itself. We have the cacti API server; you can see here exactly which image is being used. We always use versions that are pinned down to a very specific hash or release tag, because that way you get reproducible builds for the most part, meaning that the probability of this working on somebody else's machine at another time is much higher than if we used, for example, just latest. Because with latest, every now and then it would give you a different version of the underlying image, and then we'd run the risk of introducing incompatibilities and bugs, and that would cause people headaches. You can see here that the environment variables are configured so that the authorization protocol is none. Again, this is just for demo purposes; don't configure it this way in production, of course. And then here's the plugins list of the API server, as a JSON document. I'm highlighting this because one of the cool features of the API server is that you can declare plugins to be installed at runtime, and those can then be exposed as REST endpoints if the plugin itself supports it, and all of our connectors do. Then there's this other image, the Corda connector JVM one. This is an actual JVM application, a Java application, that is responsible for interacting with the Corda ledger directly. The reason we need this is that in our demo we're using Corda 4, and Corda 4 does not give you a very easy way to communicate with it directly from Node.js.
So this is one of the value-adds of the Cacti framework: if you deploy this connector, you can use it to deploy contracts, invoke contract flows, get the network map, query the ledger state, and similar things, all through a REST API that this app exposes. So you don't have to run and develop your code in a JVM-specific language if you want to work with Corda 4. In Corda 5 this will be much easier, because I believe once Corda 5 is out, it will allow you to communicate with it directly over HTTP. But for Corda 4 you need a utility like this, unless you are already on a JVM-based project. Let me check for questions real quick.

Hey, Peter, we've got a question from Jim: for the architecture defined here in the demo, what security teams have signed off on the design so far? For example, the ability to dynamically install and load runtime plugins, etc.

There was a security audit for version 1.0 of Cacti, where we had this feature in the API server. I don't remember the security audit company's name, unfortunately, but we got a sort of grant, or I don't know what the right terminology for it is, but the foundation procured this security audit for us, and it was done in great detail, and they signed off on it. But that's specific to the runtime dynamic plugin install; it's not specific to the Corda contract deployment, which is not signed off at all and is not meant for production. And there's a spectrum there as well, in terms of the dynamic plugin install, meaning that if you want to, you can lock it down in terms of the code that's being executed. You can build your application in a way that the plugins you are going to load into the API server are already pre-installed, with the exact specific version of the source code of that plugin that you know is safe.
And then you can configure the API server so that it loads those local versions of the plugins without ever having to go over the internet and fetch something that may or may not be what you want. So there is a possibility for locking it down: you can pin the plugin code to a local version, and because of that, I would say you are able to achieve security with it. And even if you do a dynamic install anyway, fetching whatever plugin you chose directly from npmjs.com or the GitHub package registry, you still get the protections that are there: if you have a package-lock file or a yarn lock file, those come with hashes of the dependencies that you're fetching. So even if you go with the quote-unquote less secure version, you are still technically protected by some pretty strong crypto that makes sure that what you downloaded is what you wanted to download. But with all that said, if you want maximum security, you can just pin to local installations.

Yeah, and that was Jim's follow-up on the dynamic plugin feature: can we lock the allowed versions to load, so we don't accidentally add new/untested versions?

Yes, that's something you can do in your package.json itself. This is definitely my favorite question so far, just because it touches one of my pet peeves: every dependency in every package.json file in the Cacti project, or any project where I do code reviews, must be pinned to an exact version; auto-upgrades through the tilde and caret specifiers are not allowed. And for exactly this reason: if you allow auto-upgrades of any dependency, someone could compromise the publishing process of the dependency and push a new release that's advertised as just a little patch or bug fix, but what it actually does is load malware onto your computer as soon as it gets installed.
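The "exact versions only" policy described above can be enforced mechanically. Here is a small illustrative helper, not part of Cacti, that flags any package.json dependency specifier that permits auto-upgrades (caret, tilde, ranges, tags), as opposed to an exact semver pin:

```typescript
// Illustrative check for the "pin every dependency" policy.
// Accepts only exact semver like "4.18.2" or "1.2.3-beta.1".
function isPinned(spec: string): boolean {
  return /^\d+\.\d+\.\d+(-[0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?$/.test(spec);
}

// Example dependency map, as it would appear in a package.json "dependencies".
const deps: Record<string, string> = {
  express: "4.18.2",  // pinned: ok
  axios: "^1.4.0",    // caret: allows minor/patch auto-upgrades
  lodash: "~4.17.21", // tilde: allows patch auto-upgrades
};

for (const [name, spec] of Object.entries(deps)) {
  console.log(`${name}: ${isPinned(spec) ? "pinned" : "NOT pinned"}`);
}
```

A check like this could run in CI as a lint step, failing the build whenever a caret or tilde specifier sneaks into a package.json.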
And that's why. There's been, I think, multiple examples by now, at least one, where someone spent a considerable amount of time contributing to a high-profile open source project that was an npm dependency, and they eventually worked their way into the circle of maintainers of the project. Then the original author of the project wanted to do something else and handed over maintainership to that person, and the first thing they did once they had permission to publish new versions of the package was to publish a new version with malware. I cannot recall which specific dependency this was, but it has already happened. So supply chain attacks like that are very real and dangerous, and I always grab the opportunity to talk, and rant, about how you should always pin all your dependencies to exactly what you know to be working and safe. Of course, the downside is that it gets a little more annoying and tiresome to keep your dependencies up to date when vulnerabilities get discovered in the older versions, but I personally think it's very much worth it, because this gives you control, and it gives you that cozy feeling that the dependencies you have are exactly the ones you wanted, and there's no risk, at least of this one specific attack, happening to you. Okay, let me just stop there and read more questions.

Yes, there are ways to sign Docker containers. There's also this newer feature in npm called provenance, which is all about artifact signing in the same way. If you go on the npm registry, some packages have this little badge that says it's cryptographically verified, and that usually means they've been published with that provenance flag. And there's this other project, Sigstore, which is something we're actually looking into using within the Hyperledger Foundation, for the same reason: we want to sign our Cacti artifacts.
I mean, not just Cacti, but all Hyperledger Foundation project artifacts; I'm on the Technical Oversight Committee, and this has been one of the task forces that we have open. And Rajiv says that pinning also has the disadvantage of bug fixes not being included automatically, and mentions that it's tiresome. Yes, I agree 100%. I deal with having to bump dependencies manually pretty much every day in Cacti, because we have so many of them, and there's always some new vulnerability being discovered in one of the dependencies that we use. So instead of just letting it auto-upgrade, what I have to do, or we have to do as the maintainers and contributors, is go in, update the package manually, send the pull request, double-check with the CI that everything still works, and then merge it. That has overhead, but it is safer. Obviously everyone has their own trade-offs; I personally think that, with the way these attacks are growing in number, specifically supply chain attacks, it's worth my time to upgrade manually, but to each their own. Okay, that was a bit of a detour, but I'm liking the questions; these are very important questions, and I love that I got to talk about these things, even if it was slightly off-topic.

Oh, yes, so I've built the connector container image. You can see here how it gets configured: there are log levels, there's the Corda node's host, localhost, and the RPC port, and the not-so-secret credentials, which are fine because these are test credentials. Then you have the ledger container for Besu; you can see that it's a Besu all-in-one image. It's still called cactus just because we haven't yet had time to rename everything, but in the near future this will be called cacti as well, just like the project. And the same goes for Corda: this is a Corda 4.8 all-in-one test ledger that also features the sample application called obligation.
So if you go on GitHub to the corda/samples-kotlin repository, then into Advanced, and then obligation: what you see there, that contract, is the basis for our all-in-one ledger, meaning that that contract can be deployed to it as well. But that's neither here nor there, in the sense that we're now going to use the all-in-one Corda ledger to deploy the R3 Harmonia lab contracts that were demoed by Dominic just before I came on stage. So now that I've explained all the containers in the Compose file, we can go to the terminal and start it. Well, actually it was already running, but I'll just show it running again, because I have to delete it first; I have to say down first. Let me increase the font size. So what you see here is docker compose, and you tell it to execute this specific file; the command is in the readme as well. This is actually great: it lets you get a pristine ledger state. In the previous execution I was already testing the contract deployment and creating assets on the ledger through the REST API, so it's not in a pristine state anymore. But let's say I made a mistake and I just want a clean slate on the ledger, and I quickly want to erase it: then I just do this, and then I say up again, and it comes up. On my machine it takes a minute or two, but depending on how much RAM and how many CPU cores your computer has, it could take significantly longer. For example, if you look here at the memory usage, I'm already up to 22 gigs and it's growing, so unfortunately, if your machine only has 16, this is not very likely to work at all. These stack traces, the only reason they're happening is that the connector itself is polling the ledger to see when it's finally available, and when that fails it just says, oh, I can't connect, here's why, and then retries automatically, until the ledger says, yes, I'm available.
Then if I scroll down, it should show that the connector container has completed initialization. So now I have a REST API running that I can use to talk to the Corda ledger and also the Besu ledger, but right now we'll just focus on the Corda ledger, because I'm trying to keep it short. Okay, so this is ready, and back to the steps that we had. The last step is to build and pull up this little toy front-end package that I built specifically for this workshop. It's very, very simple; it's not something you could use for anything else but this demo. But the important bit is to demonstrate two things: one, that you can interact with the ledger through HTTP in general, and two, that packages such as the Cacti Corda API client that we publish can also be used in the browser, for example in an Angular application. So if you switch to that directory, and then, well, it's already running for me, but I'll just do it again: if you switch to that directory and say yarn run serve proxy, it will build the code and also set up an HTTP proxy. So the application itself is running on port 7000, but we also need to be able to hit the endpoints running on port 8080, which is the Corda connector, and on port 4000, which is the Besu connector. The way we do this is that the front end remaps the port based on the path of the HTTP request. All of the plugins that have REST API endpoints in the framework follow this pattern: their endpoints start with a base path that goes /api/v1/plugins, then @hyperledger, then the package name. So if you search for that expression, you will find a package.json file within the framework source code that has the name that is written here, and this way we can guarantee that there are no collisions between the REST API endpoints of different plugins.
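The endpoint-namespacing and port-remapping convention just described can be sketched as a tiny routing function. The package names below match Cacti's published connector packages as far as I know, and the port numbers are the ones used in this demo, but treat both as assumptions of the sketch rather than fixed facts.

```typescript
// Every plugin's REST endpoints live under a base path derived from its npm
// package name, so two connectors' "deploy contract" endpoints never collide.
function pluginBasePath(packageName: string): string {
  return `/api/v1/plugins/${packageName}`;
}

// A front-end dev proxy can then route on that prefix (ports per this demo).
function proxyTargetPort(path: string): number {
  if (path.startsWith(pluginBasePath("@hyperledger/cactus-plugin-ledger-connector-corda"))) {
    return 8080; // Corda connector
  }
  if (path.startsWith(pluginBasePath("@hyperledger/cactus-plugin-ledger-connector-besu"))) {
    return 4000; // Besu connector
  }
  return 7000; // everything else: the front-end app itself
}

console.log(proxyTargetPort(
  "/api/v1/plugins/@hyperledger/cactus-plugin-ledger-connector-corda/deploy-contract-jars",
)); // 8080
```

Because the npm package name is globally unique, using it as the URL namespace gives each plugin a collision-free slice of the API server's path space for free.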
For example, a deploy-contract endpoint is something that pretty much all of the connector plugins will have; the Corda connector has one and the Besu connector has one, but this way we do not have to worry about those conflicting. Okay, now that I have explained that as well, we can navigate to localhost:7000, which is right here for me. Again, this is a very, very simplistic demo, but it shows what you can do if you step through these buttons; just be aware that the second one will take a few minutes. If you start with "list flows," you get back an HTTP response, which the application helpfully prints here. The flows you see are deployed already, because of what I explained earlier about the Cacti Corda all-in-one ledger shipping with the obligation example from the Corda samples repository. Now I will hit "deploy contracts," and show the terminal at the same time. When I hit deploy contracts, it takes the base64 representation of the three jar files I was showing earlier, the ones that contain the Corda contracts, and sends them to the REST API. The API saves those to a temporary location on the file system, which is of course already not very secure; that is one reason the contract deploy endpoint should not be used in production. Barring that, it uploads the files, then SSHes into the Corda ledger and calls the commands necessary to (a) have the contracts deployed to the Corda nodes and (b) create the database schema modifications that the contract itself needs. If you look at the logs carefully, you will see that the Corda ledger connector service implementation runs a command that goes roughly like this: it executes the Corda jar file with the run-migration-scripts command.
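The deploy step just described, base64-encoding each CorDapp jar and sending the batch to the connector's REST API, can be sketched as a small request builder. The field names here (jarFiles, filename, contentBase64) are illustrative assumptions, not the connector's exact OpenAPI schema.

```typescript
// One entry per CorDapp jar to upload: the file's name plus its
// bytes encoded as base64 so they survive the JSON request body.
interface JarFile {
  filename: string;
  contentBase64: string;
}

// Assemble the deploy-contracts request body from raw jar bytes.
// The connector then writes these to a temporary location and
// SSHes into the Corda node to install them (demo-only; not a
// production-safe deployment path).
function buildDeployRequest(
  jars: Array<{ filename: string; bytes: Uint8Array }>,
): { jarFiles: JarFile[] } {
  return {
    jarFiles: jars.map((jar) => ({
      filename: jar.filename,
      // Buffer is the Node.js API; in the browser you would use btoa().
      contentBase64: Buffer.from(jar.bytes).toString("base64"),
    })),
  };
}
```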
What that command does is locate the manifests, that is, the schema migration XML files, and create the schemas based on them. Then if I scroll down to the bottom, at this point it just says that the jar files have been added to the classpath. This is also necessary so that when I hit "list flows" again, now that the deployed jar files are in place, it shows the additional flows that these new contracts bring. You can see here that we have the EVM interop workflows, including the swap flow, the same thing that Dominic was demoing earlier. And finally, or almost finally, if you hit "issue generic asset," it sends another request to the REST API, which creates an asset, the abstract representation of some asset, for lack of a better term, and returns the transaction hash that occurred in response, along with the flow ID. The demo application does not have support for this yet, but I have a Postman request here showing what you can do with a new endpoint called vault query. You send it to the connector, which in turn goes to the Corda ledger and says: give me the states for this specific asset (or state) table, because under the hood this is just a database table. It returns a states array, and you can see that the asset name was specified as "cacti asset one," which is exactly what the front end code does. I believe it is in api-service.ts. Yes. So if you look here, the Corda API client of the Cacti framework has this invokeContract method. You pass in a flow invocation definition: you tell it which class the flow belongs to, and then you specify the input parameters for that flow, and the connector will automatically convert this representation into the equivalent JVM objects at runtime.
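The vault-query interaction described above can be sketched as a request builder plus a helper that reads the states array in the response. The field names (contractStateType, the shape of each state record) are illustrative assumptions rather than the connector's exact OpenAPI schema.

```typescript
// Ask the connector for all states of one type; under the hood the
// Corda vault serves these from a database table.
interface VaultQueryRequest {
  contractStateType: string; // fully-qualified JVM class of the state
}

// Minimal assumed shape of one record in the returned states array.
interface VaultStateRecord {
  name: string; // the asset name, e.g. "cacti asset one"
}

function buildVaultQuery(stateClass: string): VaultQueryRequest {
  return { contractStateType: stateClass };
}

// Pull just the asset names out of the states array in the response.
function assetNames(states: VaultStateRecord[]): string[] {
  return states.map((s) => s.name);
}
```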
So what happens here is that you call the flow named issue generic asset and specify that the asset's name should be "cacti asset one," and ultimately this gets persisted into a database table through the inner workings of the Corda ledger. Now, to also showcase how quickly you can develop with this specific example: if you wanted this to be asset two, it would get automatically recompiled and the webpage would refresh. If I hit "issue generic asset" again, it gives me back another transaction hash, and if I execute the vault query request again, there is now "cacti asset one" but also "cacti asset two." So here you can see, end to end, how from the UI through the back end connector through the Corda ledger we actually reached the database, and you can see the transaction hashes here as well. Okay. If you are interested in the exact code that makes all this happen, it is in this api-service.ts file, whose path I will put in the Zoom chat just to be sure. And that might be a good spot for me to stop for a second and look at the chat. Okay, Dominic had to drop; thanks for joining, Dominic. I know you have left, but maybe you will see this on the recording: thank you very much for the help, and the same for Jim. No more questions. Okay, I think that covers most of the content I wanted to show, but let me go back to the README and double-check. Yes, basically that was it. This does not show the entire end-to-end flow that Peter and Dominic were demoing, because that would have taken way, way longer to explain. Instead I wanted to pick out a small part of it, focus on it, and zoom in to really show you the code that needs to be written and how you can make it fairly convenient for yourself to develop with. So if there are no more questions, I will just direct your attention to Sean's links once more.
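The flow invocation the front end performs can be sketched as follows. The shape mirrors what the talk describes, a flow class name plus positional parameters carrying JVM type hints that the connector converts into real JVM objects at runtime, but the exact field names, flow class, and flow name are assumptions for illustration only.

```typescript
// One positional flow argument: the JVM type the connector should
// instantiate, and the value to convert into it at runtime.
interface JvmTypedParam {
  jvmType: string;
  jvmValue: string;
}

// The flow invocation definition passed to the Corda connector:
// which flow class to start on the node, and its constructor args.
interface FlowInvocationRequest {
  flowFullClassName: string;
  params: JvmTypedParam[];
}

// Build the "issue generic asset" invocation with the given asset
// name (the flow class name below is hypothetical).
function buildIssueAssetInvocation(assetName: string): FlowInvocationRequest {
  return {
    flowFullClassName: "com.example.flows.IssueGenericAssetFlow",
    params: [{ jvmType: "java.lang.String", jvmValue: assetName }],
  };
}
```

Changing the argument from "cacti asset one" to "cacti asset two" is all it takes to issue a second asset, which is exactly the quick edit-recompile-refresh cycle shown in the demo.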
And I will also mention one last time the pair programming sessions that we have running every day. If you have a question for Peter or Marelyse, please put it in chat. The links I just put in chat will be in a thank-you note to folks who registered, but we will also include them on the wiki page for this workshop, along with any slide decks or descriptions you might need or want to share with a colleague. This is also being streamed on YouTube, and that video will be on the wiki as well as on the Hyperledger YouTube channel, under both the Cacti and the workshops playlists. We do have a question from Rajiv: is there a group email that I can use to send my questions later to today's presenters? What you could do, Rajiv, is use the Cacti mailing list; I am going to put the link back in chat right now. If you send a note to the Cacti mailing list, the Cacti team will definitely see it, and you can subscribe to the list there as well. Or you can come to our Discord; let me repost the Discord links one more time. There is the Hyperledger Discord, and there are channels for Harmonia, Cacti, and Besu, as well as for all the other Hyperledger projects, whether graduated, incubating, or Hyperledger labs. Thank you for the question, Rajiv. Any other questions for Peter or Marelyse? Going once. Going twice. Making sure I am not muted, but Peter just laughed at me, so I guess I am not muted. That was good. We can hear you, yes. Listen, everybody, I would like to thank everyone for joining us today, and a special thank you to our presenters, Peter, Dominic, Peter, and Marelyse, for putting on such a great workshop. As I mentioned before, we are going to post this to our YouTube channel.
It is also going to be on our wiki, at the wiki links we just shared, and I will put those links in the YouTube description as well in case folks want to catch up and follow along. As Peter mentioned, we have daily pair programming calls for the Cacti team, the Harmonia lab is active, and you should definitely check out our Discord. And if you have ideas for other workshops, or you want to see another workshop take this topic even further, let us know. Peter, Marelyse, any last words for the attendees? Thanks for having me, and thanks for staying and listening in. It was great to have you. Awesome. And thanks again to Peter and Dominic, who had to leave early; we really appreciate their time and all the effort they put into this workshop. And thank you to everyone who attended; we really appreciate you being here. We are a community, powered by the folks who attend, participate, contribute, and maintain, and we hope you will use these projects and get involved. With that, I am going to thank everyone and end this workshop. Have a great day, everybody. Thanks, guys. Thank you. Bye-bye. Thanks, y'all.