LinkedIn and YouTube. So welcome to everybody joining us from there as well. Hello to India, hello to the US, and hello to the UK. Good to see you, Matthew. Okay, so we're going to get started. For those of you joining a bit later, you can find the recording on YouTube and in our webinar library.

Once again, welcome everybody, and thank you for joining this Hyperledger In-depth webinar with Japan Securities Clearing Corporation, or JSCC, on its Besu production launch and next steps. My name is Tomas and I'm an ecosystem manager at the Hyperledger Foundation. Today I have the opportunity to take you through some housekeeping and to introduce the esteemed panelists who will present the content.

First, if you haven't joined any of our other live webinars: all are welcome in the Hyperledger community, and we strive to create a safe and welcoming environment for everybody. So please read our code of conduct, which you can find on our website and on our wiki, when interacting with other people in the community. All Hyperledger Foundation meetings are run under the Linux Foundation Antitrust Policy, as our meetings involve industry competitors; you can find the Antitrust Policy on our wiki and on our website as well. This session is being recorded, so you can always return to it if you missed some details. You can find it on YouTube as well as in our webinar library, along with the slides for download.

We encourage these webinars to be quite interactive; the more interaction we have, the better the experience for everybody. So please feel free to raise your hand to get unmuted and speak up. If you would prefer to type your questions, you can use the Q&A box, and our panelists will either answer them live or type the answer there, or you can use the chat as well. For those of you joining on YouTube and LinkedIn Live, feel free to use the chat there too, and we will relay those questions to the panelists at the end of the webinar.

Now, without further ado, it is my pleasure to introduce today's great panelists. First we have Kai Miyazato from JSCC, who is the head of IT Innovation. We have Narin Krishnan, a senior IT architect at IBM; Sakit Seti, an application specialist for blockchain at IBM; and Sami Dridi of Aurora Solutions, which is helping JSCC with its consulting services. Miyazato-san, over to you.

Thank you, Tomas. Hi everyone. We are JSCC, and we joined the Hyperledger Foundation last October, so we are a new member. We launched our first DLT production system in January last year. Today is the first opportunity for us to share this information, and this is our first contribution activity to the Hyperledger Foundation. Before the technical session, let me explain who JSCC is, why DLT, and how we use DLT, with just five slides as an introduction.

First, JSCC is the Japan Securities Clearing Corporation. JSCC is part of the JPX Group, which includes the Tokyo Stock Exchange, Osaka Exchange, and Tokyo Commodity Exchange. The role of JSCC is in the middle of the post-trade process; this is JSCC territory. JSCC receives equity trade data from the Tokyo Stock Exchange and listed derivative trade data from the Osaka Exchange.
Also, after the Lehman crisis, global regulation required concentrating risk into the CCP, so JSCC also receives OTC derivative trades and JGB, Japanese government bond, trades. JSCC is what is called a clearing house, or central counterparty, CCP. We are also categorized as a financial market infrastructure, an FMI, which should take care of market stability to avoid systemic risk. In Japan, the central bank (the Bank of Japan), the CCP (JSCC), and the CSD (JASDEC) are the three organizations categorized as FMIs. The last number is just for your reference: it is quite a large number of trades that JSCC receives every day. The point is that 70% of this volume comes from outside Japan, from global brokers and global buy-side investors. Therefore JSCC shouldn't be a Galapagos solution; we should carefully listen to the voice of the global market, and we should think about more harmonized cross-border and cross-market solutions. That is the background of JSCC.

Please move to the next slide, page three. We started DLT research in 2018, beginning with the data platform discussion as a market infrastructure, because in the market there is no good data platform in the middle. That is why there is so much inefficiency and manual operation, and DLT could be a good solution, a breakthrough for that current inefficiency. Since 2020, we have had deeper discussions about asset tokenization. Today we would like to explain the short-term vision for asset tokenization. We launched the production journey last year, but at the same time we are doing a lot of POCs for the long-term vision, the next three-to-five-year solutions. There we touch as wide a range as possible and try as many business use cases as we can. That is the long-term vision, but today we really focus on the short-term vision.

The next slide shows the first DLT production use case. We use DLT in the physical delivery process for commodity futures final settlement. This is not a core function; it is a small part of the business process, but it is a really good area for a first step. In phase one we launched physical delivery for rubber futures, and we are now discussing precious metals delivery, gold and silver. To do that we need a change to Japanese law, so there is now a change request with the government for phase two.

For the phase one concept we listed five points. Numbers one to three are mainly the DX, digital transformation, approach: the scope is really narrow, but there is an obvious benefit in operational efficiency. Also, under the COVID situation, people didn't want to hand paper documents over in person, so it is really good to reduce that kind of delivery risk. Numbers four and five are the DLT part: we start to use a DLT solution, but this is just the first step, and we are looking at phases two, three, and four for future expandability. Also, thanks to open source, we finished the implementation in three to four months, five months in total including design and user acceptance testing, and at low cost, also thanks to open source. Those are the five key concepts.

The next page is an overview of the phase one scope. The bottom part shows the situation before January last year: JSCC is in the middle, and JSCC received the warehouse receipt from the right-hand side.
A physical commodity is delivered to the storage company, the storage company issues the warehouse receipt and passes it to the client, clearing member A, and the clearing member brings the paper receipt to JSCC. On the other side, the cash comes from the buyer. We changed only the warehouse receipt part onto the DLT platform, using ERC-1155 on Hyperledger Besu (an illustrative sketch of this follows at the end of this section). It is a really simple use case: in phase one, all tokens are used within the JSCC business and network. But potentially, in phase three or phase four, the token could be used outside JSCC, so it becomes more of a tokenized approach. We selected a really good use case as a first step.

The next slide touches a little on the long-term vision. Numbers one, two, and three are the main CCP asset flows. Number three is DvP settlement for cash equities: the CCP and the CSD sit in the middle; the CCP receives the money from the buyer and the equity from the seller, compresses the obligations, and provides secure and safe settlement. That is DvP settlement. Once digital equity tokens and digital yen are in use in the world, we can transform the current setup into purely digital settlement. But that takes time, because every clearing member firm has to agree to use digital yen or digital equity; therefore I think number three, DvP settlement, could be the last piece. The middle part is cash settlement, for the derivatives market: the CCP receives money from the losers every day and sends it to the winners. That is the cash settlement scheme, but it also requires all clearing member firms to agree to use digital yen, and the total nets to plus-minus zero in the middle at the CCP. Number one, the collateral deposit, is much simpler. Collateral is held for the member default case: the CCP receives collateral from clearing member A to cover the case where clearing member A defaults, so the CCP keeps a huge amount of collateral in its accounts. The point is that the CCP sets the eligibility rules, not only one asset. We can provide options: digital yen, or equity, or US Treasuries. A clearing member firm can then choose, "I want to use normal yen" or "I want to use digital yen." That is possible. That is why JSCC foresees the collateral use case as perhaps the first step for an asset tokenization solution. Therefore, for the long-term vision, we are doing lots of POCs related to the collateral deposit.

So that is my part. I will pass to our tech team. Narin-san, could you please? Thank you.
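To make the warehouse-receipt flow above concrete: JSCC has not published its contract code, so the following is purely an illustrative sketch of how a receipt might be minted as an ERC-1155 token on a gas-free Besu network via web3.py. The RPC endpoint, contract address, ABI fragment, and mint function are all assumptions, not JSCC's actual implementation.

```python
# Illustrative only: sketches minting a warehouse receipt as an ERC-1155 token
# through a Besu RPC node. All names and addresses here are hypothetical.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://besu-rpc.internal:8545"))  # hypothetical RPC node

# Minimal ABI fragment for a hypothetical mint() on an ERC-1155 contract.
RECEIPT_ABI = [{
    "name": "mint", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "to", "type": "address"},
               {"name": "id", "type": "uint256"},
               {"name": "amount", "type": "uint256"},
               {"name": "data", "type": "bytes"}],
    "outputs": [],
}]
CONTRACT_ADDRESS = "0x..."  # elided; the real address is not public
receipt_token = w3.eth.contract(address=CONTRACT_ADDRESS, abi=RECEIPT_ABI)

def issue_warehouse_receipt(issuer, member_address, receipt_id, lots):
    """Mint `lots` units of warehouse-receipt token `receipt_id` to a clearing member."""
    tx = receipt_token.functions.mint(member_address, receipt_id, lots, b"").build_transaction({
        "from": issuer.address,
        "nonce": w3.eth.get_transaction_count(issuer.address),
        "gasPrice": 0,  # free-gas network: members pay no gas
        "chainId": w3.eth.chain_id,
    })
    signed = issuer.sign_transaction(tx)
    # newer web3.py versions name this attribute raw_transaction
    return w3.eth.send_raw_transaction(signed.rawTransaction)
```

Because everything stays inside JSCC's permissioned network in phase one, the receipt token id can simply map one-to-one to the paper receipt it replaces.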
Thank you, Miyazato-san. Thanks for setting up the business context. I will start with the backup and bulk processing part, and then my colleagues will deep-dive into the other technical aspects. I want to open this technical session by going over the non-functional requirements we had to meet. The non-functional requirements are very broad, so I want to focus on backup and some of the challenges we had there. One requirement we had right from the beginning is zero data loss during the failure of any availability zone. As you know, you have multiple availability zones, and if any one availability zone goes down, there should be zero data loss and zero impact to the application.

Also, backup creation itself should be automated and seamless. A backup is made so that it can be used in a future situation, and if restoring involves multiple manual steps, as you can imagine, it delays the effort to bring the system back up. So we wanted it fully automated and seamless right from the beginning. We also wanted to make sure the backup and recovery design would still work five to ten years down the line in terms of the growth of the ledger size; we didn't want to box ourselves into one mechanism and then find it inadequate later on. Another important point we started out with is that the backed-up data should not be kept somewhere that exposes it to other parties. It should be as secure as the blockchain, if not more secure, and accessible to no one outside the blockchain.

With regards to bulk processing, there are instances where we have to run batch processing: not an end user submitting transactions, but a batch that pushes more data in a single shot. For that, we wanted at least up to 50 tokens in a single transaction to be handled. We did try for more, but we settled on 50 and did all our testing against that. The logs, especially the backup logs, we wanted to keep for a minimum of two years for proper auditability. And the recovery time has to be within two hours, not just for an availability zone failure but beyond that: a system failure, for example, or even a region failure, we want to recover within two hours. This is one of the things we are looking at going forward as well, to take it multi-region; right now we are in a single region, which I'm going to talk about.

Now I want to go into the details of what we have achieved, and I think the best way to do that is to explain what we have right now. This is the overall architecture diagram, which Sami-san will go into in detail later. We have nine Besu nodes: six of them validators, two RPC nodes, and one backup node, spread across three availability zones in the Tokyo region. I'm going to focus on the backup node at the bottom. We created the backup node as a read-only node in Besu by disabling the RPC endpoints; the aim of the backup node is simply to hold a read-only copy that can be backed up. During the backup process, we stop Besu on that particular backup node. The rest of the nodes stay active, but on that backup node we stop Besu, then back up everything under the data directory: tar it, compress it, and push it into the S3 backup bucket. S3 itself is replicated cross-region, so the backup is available in multiple regions as well. Once we confirm the backup file has been pushed securely, we bring the backup node back up, and when it rejoins the network it quickly catches up and syncs the data.
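A minimal sketch of that backup loop, assuming systemd manages Besu on the backup node and boto3 handles the S3 upload. The unit name, data directory, and bucket name are hypothetical stand-ins for JSCC's actual configuration.

```python
# Sketch of the automated backup flow: stop Besu on the backup node only,
# tar the data directory, push to S3, then restart so the node re-syncs.
import datetime
import subprocess
import tarfile
import boto3

BESU_UNIT = "besu.service"        # hypothetical systemd unit on the backup node
DATA_DIR = "/var/lib/besu/data"   # hypothetical Besu data directory
BUCKET = "jscc-besu-backup"       # hypothetical S3 bucket (replicated cross-region)

def run_backup():
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S")
    archive = f"/tmp/besu-backup-{stamp}.tar.gz"
    # 1. Stop Besu on the backup node; validators and RPC nodes stay up.
    subprocess.run(["systemctl", "stop", BESU_UNIT], check=True)
    try:
        # 2. Tar and compress the data directory for a clean, consistent copy.
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(DATA_DIR, arcname="data")
        # 3. Push the archive to S3; the bucket enforces encryption at rest.
        boto3.client("s3").upload_file(archive, BUCKET, f"daily/{stamp}.tar.gz")
    finally:
        # 4. Restart Besu; the node rejoins the network and catches up on blocks.
        subprocess.run(["systemctl", "start", BESU_UNIT], check=True)
```

Run daily (or every 12 hours), this never touches the availability of the rest of the network, which is the property described above.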
This, as you see, is very critical in our design. Going back to the challenges: in taking a systematic approach to this backup, we also had to account for the growth of the data. Blockchain data is an immutable ledger, so it keeps growing. We chose Forest mode in Hyperledger Besu, which allows us to do pruning later if we have to; so even though we are sized for 10x growth of the data, if we need to, we can use pruning later on. As I just mentioned, the read-only node has to be down at backup time, because you don't want the ledger to be updated while the backup is happening. We did test backing up while live and it seemed to be okay, but we wanted a clean backup; that is a design decision we made. We do a daily backup, but nothing stops us from backing up every 12 hours or even more frequently, because it doesn't affect the availability of Besu: it only takes the backup node down briefly while the backup is taken.

On bulk processing: whenever we pushed a large TPS, we hit nonce challenges. As you know, every account in Besu has a nonce; in our application layer the next nonce is cached in Redis. We have a load balancer that splits requests across multiple availability zones, and when a large volume comes from the same account ID at the same time, which we don't think really happens in normal processing but can in high-volume bulk processing, two different requests from the same ID can be sent to different zones and handled at the same time. If that happens within the time Redis takes to replicate, both get the same nonce; one goes through and the second fails because the nonce is already used. So we retry for some time, clean up the transaction pool, and resubmit through the API in the layer above the transaction (see the sketch below). We were able to automate this process and handle those error conditions.

With regards to backup retention: as mentioned before, we push the backup files to S3, keep seven daily backups, and rotate after seven days.
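A sketch of the nonce-conflict retry just described, assuming web3.py. In the real system the resubmission happens in the orchestration layer with the Redis-backed nonce cache, so the error matching and back-off here are simplified assumptions, not JSCC's actual code.

```python
# Detect-and-retry for racing nonces: if two requests for the same account got
# the same nonce, re-read the pending nonce and resubmit.
import time
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://besu-rpc.internal:8545"))  # hypothetical endpoint

def send_with_nonce_retry(account, build_tx, retries=3):
    """build_tx(nonce) -> unsigned tx dict; retried when a nonce race is detected."""
    for attempt in range(retries):
        # Ask the node itself for the pending nonce rather than trusting the cache.
        nonce = w3.eth.get_transaction_count(account.address, "pending")
        signed = account.sign_transaction(build_tx(nonce))
        try:
            return w3.eth.send_raw_transaction(signed.rawTransaction)
        except ValueError as err:  # newer web3.py raises Web3RPCError instead
            # Besu rejects a reused nonce ("nonce too low" / known transaction).
            if "nonce" not in str(err).lower():
                raise
            # Give Redis replication time to settle before the next attempt.
            time.sleep(0.5 * (attempt + 1))
    raise RuntimeError("could not place transaction after nonce retries")
```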
Okay, let me hand it over to Sami-san for the next couple of slides.

Hi, thank you. I'm going to present one of the challenges we had to face when creating the production system, which is disaster recovery. Let me first explain why it is particularly important for us. The main thing is that Japan is in a very high-risk zone: we get hit frequently by earthquakes, tsunamis, and typhoons. So it is a legitimate concern for JSCC to make sure the system will not break if an incident happens. On top of that, JSCC is a core financial institution in Japan with a lot of responsibility toward other financial institutions, so our policy requirements call for a highly available and highly resilient system.

I'm going to share some of the requirements that impacted our design. For example, as I mentioned before, in case of an incident we have to recover the system in less than two hours. The whole system has to be hosted in Japan, and the backups, the data, and the system cannot all be in the same location: we have to split them across multiple locations in case one location goes down because of an earthquake or another incident. On top of that, we also had some blockchain-specific requirements. One of them is that we don't want gas fees for our customers, and we also want to approve all the accounts. Because of that, we decided to use QBFT as the consensus protocol (a genesis sketch follows at the end of this section). One requirement that comes with QBFT is that we need two-thirds of the validators running at all times. I didn't mention it earlier, but the whole network is operated by JSCC, so JSCC has to ensure that all the validator nodes are running correctly.

So now I'm going to show the solution; can you go to the next slide? The solution we came up with runs in the AWS cloud across three availability zones, splitting our nodes between the three zones. Here you can see there are nine nodes: six of them are validators, two are RPC nodes, and one is the backup node. If we split the six validator nodes between the three zones and one zone goes down, we still have four validators, which still allows the network to continue. The RPC nodes and backup node are separate because we decided to split responsibilities and give each node a single responsibility, so we have RPC nodes and a backup node that do only that. Can you go back? We also have our relational database and caching database in two zones, so if one zone goes down, say AZ1, we still have the data and the cache in AZ2. We can then, in less than two hours, start up our orchestration system in that zone and redirect all our traffic there. This allows us to create this disaster-proof system.
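For reference, here is what a QBFT, free-gas genesis along these lines might look like, written as a Python dict mirroring the shape of a Besu genesis.json. The values are illustrative assumptions, not JSCC's actual chain configuration.

```python
# Hedged extract of a QBFT free-gas Besu genesis; in practice this lives in
# genesis.json. Values below are illustrative only.
genesis = {
    "config": {
        "chainId": 1337,                # illustrative private chain id
        "qbft": {
            "blockperiodseconds": 10,   # matches the 10-second block time mentioned later
            "epochlength": 30000,
            "requesttimeoutseconds": 20,
        },
        "zeroBaseFee": True,            # no base fee, so members pay no gas
    },
    "gasLimit": "0x1fffffffffffff",
    "difficulty": "0x1",
    # extraData carries the RLP-encoded initial validator list (elided here)
    "extraData": "0x...",
    "alloc": {},
}
# On the node side, a free-gas network also runs Besu with --min-gas-price=0.
```

With six validators, QBFT's quorum is four, which is exactly why losing one availability zone (two validators) still lets the chain progress, as Sami describes above.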
And I think I'm done; maybe Sakit can continue.

Hey, thanks, Sami, for the context on the current architecture and our disaster recovery system. I'll take over for the future initiatives: what we are working on currently as part of the short-term initiatives, and some of the long-term initiatives JSCC would like to pursue over the next three to five years, including collaboration with other organizations as part of a global outreach, opening up to the whole world for better usage and broader adoption. On the right you can see the typical life cycle of any complex system: starting from single-AZ testing in a dev setup, moving to a multi-AZ environment in production, which in our case is where we are now, at the multi-AZ, single-region stage. The complexity can rise up to a multi-cloud system that is fault-tolerant in multiple scenarios and highly resilient to the disasters Sami was talking about a few minutes back. In the past year since going to production, we have had multiple learnings and challenges, along with the multiple POCs we have done, including multi-region ones.

In the next few minutes I would like to share those learnings and challenges from the past year with the community, so that everyone can help everyone in any way possible. The main question would be: why multi-region? When you're talking about multi-region, you're running resources in two regions. Like Sami mentioned, Japan is one of the areas most prone to natural disasters; just recently, in January, there was a major earthquake that cut off power and connectivity in some parts of Japan. So even though a multi-AZ system is fault-tolerant to the extent of one AZ going down, if a whole region goes down, system availability and business continuity would be completely halted. And in JSCC's core requirements, the system needs to be brought back and available within two hours of any kind of issue. Keeping all those considerations in mind, multi-region helps us on a few points. The main one, from our perspective, is higher resilience, a disaster-tolerant system. You also get better performance: if you connect from different parts of the world to a closer region, you obviously get better latency and connectivity. And when it comes to financial institutions, the government has strict regulations around data storage and data recovery, so to comply with government standards and data compliance we need to store the data in a robust way and spread it across regions for better usability and recovery in case of a region failure or other issues.
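One standard way to spread the backup data across regions, in line with what Sakit describes, is S3 cross-region replication. A hedged boto3 sketch follows; the bucket names, prefix, and IAM role ARN are hypothetical, and both buckets need versioning enabled for replication to work.

```python
# Sketch: replicate daily backup objects from a Tokyo bucket to an Osaka bucket.
import boto3

s3 = boto3.client("s3", region_name="ap-northeast-1")  # Tokyo

s3.put_bucket_replication(
    Bucket="jscc-besu-backup",  # hypothetical source bucket in Tokyo
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication",  # hypothetical role
        "Rules": [{
            "ID": "replicate-daily-backups",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": "daily/"},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::jscc-besu-backup-osaka"},
        }],
    },
)
```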
That leads to one more question: what is the difference between multi-region and multi-cloud? Because multi-cloud can do the same; actually, multi-cloud can do some things better. Consider future business partnerships: say I want to partner with a separate organization and they want to join my network, but their whole system is built on a different cloud provider. For their ease, they could connect to our nodes or our system running in that same cloud, which makes integration with our system easier and faster. So multi-cloud provides higher flexibility and advantages over multi-region. And, as Miyazato-san also mentioned, in the future we would like to work with multiple other companies; for example, DTCC is creating its own testnet and has been very active on that. If in the future these tokens need not be confined to our own system, but should be transferable to other networks and usable by other companies, then keeping all of that in consideration, multi-cloud would be the end solution.

But with multi-cloud, other challenges also arise. When you spread Besu nodes across clouds, latency becomes a huge issue: blocks need to sync across all the nodes periodically, and all the validators have to stay in sync at all times to keep reaching consensus. That is one of the main concerns, I would say. A secure connection also needs to be established between the two clouds; you can manage connections securely within a single cloud like AWS or GCP, where you have VPC peering and the like, but cross-cloud that becomes an issue too. And obviously, when you spread across multiple clouds, you are sometimes left without some cloud-native services, which hinders easy integration, so you need to modify your architecture to fit the services provided by the new cloud. There are ups and downs in both cases. We plan to cross that bridge when we come to it; in our current explorations and current setup we are going multi-region, and we have started the multi-region exploration, with design and development in parallel.

So, the next slide. This is the high-level architecture of our current multi-region setup, because we haven't reached multi-cloud yet. In this architecture you can see we have split our blockchain; I wouldn't say split, it is more of a hot-and-cold approach, where the majority of our validators and the orchestration nodes all run in AWS in the Tokyo region, and there is one read-only node running in Osaka. Why run a separate node in Osaka? Because we currently take only a daily backup, so if the whole region went down, even though S3 data can be replicated across multiple regions, we could lose up to a whole day's worth of data. But with one more standby node running in Osaka, always in sync, you have the latest ledger data in Osaka at all times. You minimize data loss to the smallest possible window: in the worst scenario, the read-only node would be behind by a block or so, which isn't much, so you are effectively not losing any data. And imagine disaster strikes Tokyo and the whole region is down: you can bring up the same blockchain network afresh in Osaka in a very robust manner, because you have the latest backup in S3 and Besu just has to sync to the latest block present on the read-only standby node. So the recovery is also very efficient and very fast in this scenario (sketched below). Again, this is a very high-level diagram from the blockchain perspective; other services, like RDS and ElastiCache, provide their own native multi-region capabilities and solve those difficulties in a much more efficient and robust manner, because they are managed services. So that was a short introduction to our current system and what we are working on. That's it from my end. Thank you.
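The recovery path just described can be sketched as follows, again with boto3 and systemd and with hypothetical bucket and path names: restore the latest snapshot from S3 onto a fresh node, then let Besu sync the remaining blocks from the Osaka standby.

```python
# Sketch: failover restore from the most recent S3 snapshot.
import subprocess
import tarfile
import boto3

BUCKET = "jscc-besu-backup"   # hypothetical backup bucket
DATA_ROOT = "/var/lib/besu"   # hypothetical data root on the replacement node

def restore_latest_snapshot():
    s3 = boto3.client("s3")
    # Snapshot keys are timestamped, so the lexicographically largest is newest.
    objects = s3.list_objects_v2(Bucket=BUCKET, Prefix="daily/")["Contents"]
    latest = max(objects, key=lambda o: o["Key"])["Key"]
    s3.download_file(BUCKET, latest, "/tmp/restore.tar.gz")
    # Unpack the archived "data" directory into the node's data root.
    with tarfile.open("/tmp/restore.tar.gz", "r:gz") as tar:
        tar.extractall(DATA_ROOT)
    # Start Besu; it catches up from the latest block on the read-only standby.
    subprocess.run(["systemctl", "start", "besu.service"], check=True)
```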
Thank you, Sakit. Tomas? Narin-san, this is the end of the presentation, right? Yes, correct. Thank you. So just let me quickly summarize. After 2018 we did a lot of POCs, but after five years we thought that maybe a production launch would be the best POC. That is why we launched phase one as a really small, limited solution. Since then we have been working on three things in parallel. One is to keep enhancing the infrastructure for the long-term vision, because we are not sure how future interoperability will work, which networks, how to connect, so at the moment we touch as many possible solutions as we can. The second is to expand the business use cases as widely as possible. And the third is a lot of legal discussion to enable the next steps. Today we really focused on the infrastructure design and how we are enhancing it at the moment. So thank you; this is the end of the presentation.

That's great, thank you so much. Oops, my camera is not on. Okay. Thank you so much to all our panelists. We do have some questions already in the Q&A box, and Sakit, thank you for answering some of them; I see that you already handled those. Would any of the panelists like to take any of these questions live?

Oh yeah, okay. I see many questions on how we are handling gas. Like I mentioned in one of the written answers: ours is a purely private network, and only JSCC and the companies affiliated with the JSCC application can access it, so we have configured a gas-free network. We don't have to worry about gas; we are not handling any gas at the moment, so it's pretty straightforward from that perspective. Thank you.

I will take the question from Jerry Kikkinson about encrypting ledger data in storage. Again, as I mentioned, this is very important: you don't want to take the backup and put it out in the open for others to get at. One good thing is that S3 itself has built-in encryption, so that's one layer, and we also restrict access to the S3 bucket, so we use the native AWS features for privacy (a sketch follows below). We could go one level further by making the archive itself private, but the point is that since the ledger is signed with private keys, even if someone snoops on the data, it is not going to be very useful to them. So we have multiple layers: encryption at the AWS level, access control on the buckets, and on top of that it is raw ledger data that cannot be used for anything without the actual keys.

I would like to take the question on multi-cloud, because I see one there. In the past few months we ran a successful multi-cloud POC. The main challenge we faced during the multi-cloud testing was establishing a secure connection between the Besu nodes, because we don't want a malicious user submitting transactions directly to a Besu node without going through our application stack. Since we are now talking between clouds, the native facilities each cloud provides, through VPCs or transit gateways, are not easily integrable: you have to integrate GCP with AWS services. It can be challenging at first, but once you establish that secure connection between the Besu nodes, the rest is pretty much handled by Besu itself, because Besu has the capability to connect to its own blockchain peers and take it from there. So it's all about establishing that secure connection between the Besu nodes. Thank you.
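Narin's answer on protecting backups at rest maps onto two standard S3 controls: default server-side encryption and a public-access block. A boto3 sketch with a hypothetical bucket name; the real setup may additionally use bucket policies and KMS key scoping.

```python
# Sketch: harden the backup bucket with default encryption and no public access.
import boto3

s3 = boto3.client("s3")
BUCKET = "jscc-besu-backup"  # hypothetical

# Server-side encryption by default for every object written to the bucket.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)
# Block any form of public access, so backups are reachable only from inside.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```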
Thank you. I just want to pick up Arindam's point about Hyperledger; thanks for sharing the link, which I think explains free gas versus regular gas. And the last one, the anonymous question about on-chain privacy: that is very important, because in this case privacy is paramount. You cannot compromise on that, especially in the financial sector. So we do have a protected network; we are not out there in the open. Even the final network, down the road, I don't expect to be a fully public network like Ethereum; it will be some sort of protected network. That's number one. Number two: within that, we make sure the data is available only to the right participants, and certain critical data is not stored as part of the ledger at all. So right now we have it in a protected private network, but privacy is definitely something we will monitor closely as we onboard more participants and make it more open.

Thanks, thank you, Narin. There are some questions about transactions: how many transactions we have, and performance. Maybe to explain a bit more how it works right now for the business use case: most of the activity happens in certain periods, at the end of the month or so, when all those contracts have to be resolved. They do all the transactions at that time, and we designed it so it can go up to a few thousand over some days. So we don't have a lot of transactions per second, and we don't have that many users yet, so there is not really any performance problem. And even when we do bulk transactions, since we are using ERC-1155, we can send multiple tokens in one transaction (a sketch follows after this exchange).

Yeah, thank you. And just to add to that: we have a 10-second block time. We have tried multiple values, but that is what fits the requirements we have right now.

We just answered that one, right? Yes. I have also seen a question on how performance looks under high-volume traffic. We have still been having challenges managing high TPS, because when a very large number of transactions enters the blockchain, or not even the blockchain so much as the orchestration, handling the nonce becomes very critical. To resolve the nonce issue, we have tapered the transaction rate a bit, trading some performance for a more robust, less risk-prone solution. But we are looking into a long-term solution that would resolve the high-TPS issue for higher traffic and larger transaction volumes, as mentioned. Thank you.
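Sami's point about ERC-1155 batching corresponds to the standard safeBatchTransferFrom call, which moves several token ids in one transaction. A hedged web3.py sketch; the contract address, ABI fragment, and accounts are hypothetical.

```python
# Sketch: one on-chain transaction transferring many warehouse-receipt token ids.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://besu-rpc.internal:8545"))  # hypothetical endpoint

BATCH_ABI = [{
    "name": "safeBatchTransferFrom", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "from", "type": "address"},
               {"name": "to", "type": "address"},
               {"name": "ids", "type": "uint256[]"},
               {"name": "amounts", "type": "uint256[]"},
               {"name": "data", "type": "bytes"}],
    "outputs": [],
}]
token = w3.eth.contract(address="0x...", abi=BATCH_ABI)  # address elided

def transfer_receipts(sender, receiver, receipt_ids, amounts):
    """Move many token ids to `receiver` in a single transaction."""
    tx = token.functions.safeBatchTransferFrom(
        sender.address, receiver, receipt_ids, amounts, b""
    ).build_transaction({
        "from": sender.address,
        "nonce": w3.eth.get_transaction_count(sender.address),
        "gasPrice": 0,  # free-gas network
        "chainId": w3.eth.chain_id,
    })
    signed = sender.sign_transaction(tx)
    return w3.eth.send_raw_transaction(signed.rawTransaction)
```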
Thank you, that was great. I don't think we have any open questions right now, and I don't see any questions on YouTube or LinkedIn either. Would anybody else like to ask a question? We can get you unmuted and you can ask our panelists. Nothing for now. Okay. I wanted to ask what would be a good way to get in touch with you, for our viewers who may be watching this later on YouTube or LinkedIn as a recording. Gregor? Gregor, may I ask a question? Oh, thank you, thank you for joining. Sorry, Tomas, could you please repeat that? Sure, of course. I was just wondering what would be a good way to reach you if some of the viewers have questions for you, maybe about Besu or about the organization. Is there a good way to reach you? Yeah, I'm not sure how best to connect. Sorry, this is our first time on a webinar; please advise us on that afterward. Sure, of course. We can add some contacts to the presentation, which will be in our webinar library, and people can reach you there as well.

We did have one question come in in the meantime, if you don't mind taking that one too. Nathaniel was asking: does it have multi-channel communication? Sakit, do you want to answer that? You're on mute. Yes, sorry. Currently, since we are running in a single region and JSCC is the main party, the controlling party in most scenarios, we don't have a multi-channel setup. In the future, when we are spread across multiple clouds and working with multiple clients, with their own compliance and their own nodes, we could set up a channel specific to a user based on the requirements. But our current requirements don't actually call for multiple channels, so we have configured one here. Great, thank you.

Okay, so if we don't have any other questions right now, I think we can wrap it up. If some of you have questions later on that didn't get answered this time, please feel free to reach out to us at Hyperledger or at JSCC; we will share some contacts later on as well. First of all, thank you so much to our great panelists: you managed to cover a lot of information, and I enjoyed listening to the presentation. And thank you very much to our audience as well; thank you for joining us and for your great questions. As mentioned, the recording will be available later in our webinar library as well as on our YouTube channel. I would also like to invite you to visit our events page to see all of the upcoming events; we have a webinar with Cripsy next week, and we also announce all of the live events where we'll be present. I would also like to invite you to visit our Meetup page to learn about the upcoming meetups with the Hyperledger Foundation; meetups, workshops, and events from the regional chapters are announced there as well. We also produce a lot of content as a community: we issued an updated CBDC book, and our supply chain and trade finance group published a supply chain and trade finance book at the end of last year. You can find all of that on the research page, and we will be producing more of that content, so make sure to visit us there as well. Here is also the link to our Discord; you can use the QR code or follow this link. There is a lot of live interaction there around the Hyperledger projects: our panelists were covering Hyperledger Besu today, and there is a dedicated channel for Besu, as well as Fabric, FireFly, and so on. You're welcome to join the Discord and ask people questions there. And for those of you considering corporate membership, you're welcome to contact us at membership@hyperledger.org. That concludes our presentation for today. Thank you, everybody, for watching, and thank you again to our great panelists for all the valuable information you provided. For those of you watching the recording later, please remember to also follow us on YouTube and to follow the organizations of our panelists as well. Thank you so much, and hope to see you again soon. Thank you all. Good luck. Good day. Thank you.