Good afternoon everyone, and welcome to Hyperledger Global Forum. I know it's soon after lunch and we're getting into a lot of learning, but I'm sure you will enjoy this session on the scalability aspects of blockchain technology. When people talk about scalability, they often jump straight into how to measure performance. But one of the things people often ignore is understanding what really affects scalability. Can we achieve high throughput, and by doing what? Can I increase the number of transactions I'm processing? Where exactly is the bottleneck? It's fine to measure on one side, but it's really important to understand the real implications: what is actually causing the issue that doesn't allow us to go further. So I'm sure you'll enjoy the session today. I'm presenting with my colleague Deepika, and we are from Walmart Global Tech India. Briefly, the agenda covers three aspects. We will of course define scalability. We are definitely not going to just showcase throughput numbers for a particular blockchain; that's not this session. We are instead going to understand, from an architecture and design point of view, what blockchain as a technology can improve, and at the same time what application developers can do to make the most of the features a blockchain protocol itself provides. Towards the end, we'll look at some of the common patterns that are emerging, which you may already be familiar with from other settings, applied to blockchain. Now I'd like to invite Deepika to take over from here. Thanks Arun. Hi Dublin! Like we say back in India, namaste. That's in Kannada territory for us; we're both from Karnataka.
Just thought I'd bring a little bit of culture into this presentation. So, to do some context setting: we all know how blockchain works. Transactions come in, eventually they get written into a block, and that block gets committed onto the blockchain after it passes the test of consensus. That usually happens through leader election, through a scheme where a subset of nodes is selected to validate the block, or through one where all participating nodes in the network need to validate it. Whatever the mechanism, this is basically how blockchain works, and we're all aware of that. Now, when we say blockchain, there's still this big question being asked today, given all the noise created by the many token-based projects coming into our ecosystem. I found a meme that captures the sentiment: blockchain — not sure if it's a great idea or shiny object syndrome. Around 2021 there was this whole hype cycle. Everyone was like, oh, blockchain, blockchain, let's mine, let's earn a lot of money. But slowly we're gravitating towards a mindset where we look beyond crypto and try to see how we can utilize certain aspects of blockchain to solve real-world use cases, without force-fitting it into our solutions. Blockchain has been gaining mass attention because of two main aspects: decentralization and democracy. That being said, there are real concerns regarding mass adoption. I attended the keynote earlier, and like the speakers said, we still have a long way to go before we reach a stage where people ask: how do you achieve this without blockchain? There's no other way to file your taxes without blockchain — why are we not using blockchain here?
So to reach a state where blockchain becomes the norm, there is still quite some way to go, and that is mainly because of one big bottleneck: scalability, which ties directly to transaction speed. Scalability is linked to how many transactions we're able to process in a given time frame. In this presentation we'll look at what design patterns can help, and we'll review the architectures of, like Arun said, existing Hyperledger projects to see where those issues come in. If we look at a typical transaction scale: in fintech today, mainstream payment networks process around 24,000 transactions per second. Blockchain started off at around 10 transactions per second, and though solutions are coming in to increase that number — the tiny rectangle you see over there — we still, like I said, have a long way to go to reach the 24,000 mark. To make this relatable: we all hate traffic. We come from different parts of the world, but we have this problem in common — nobody wants to sit in traffic. Traffic exists because a road, once built, has a fixed bandwidth; it allows only a certain number of vehicles per second. So if there's some event — I believe there was a concert recently in Dublin city centre, which caused a lot of problems for us — the amount of traffic increases and the flow of vehicles becomes a problem. That is essentially what is happening in blockchain today: a lot of people trying to execute their transactions on the chain, resulting in lower execution speeds. So how do we fix that?
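To give a feel for where a number like "10 transactions per second" comes from, here is a back-of-the-envelope sketch. The figures are rough, Bitcoin-like approximations chosen purely for illustration, not exact parameters of any particular chain:

```python
# Rough arithmetic behind headline throughput numbers (all figures
# approximate and for illustration only).

def tps(block_size_bytes, avg_tx_bytes, block_interval_s):
    # how many transactions fit in one block, divided by how often
    # the network produces a block
    txs_per_block = block_size_bytes // avg_tx_bytes
    return txs_per_block / block_interval_s

# Bitcoin-like parameters: ~1 MB blocks, ~250-byte transactions,
# ~10-minute block interval
print(tps(1_000_000, 250, 600))  # -> ~6.7 transactions per second
```

Against the 24,000 TPS that mainstream payment rails handle, this makes the size of the gap concrete: it is set by block size and block interval, not just raw hardware.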
The three main pillars we want to talk about for blockchain are scalability, decentralization, and security. If we try to achieve scalable decentralization, one way is to create island networks which execute these transactions. These do meet scaling requirements, because each network executes its transactions individually; the results are later compiled together and handed back to the main blockchain. But this results in compromised security, because these island networks, each executing different parts of a transaction, may be interlinked — there is a dependency factor. If any one of them suffers a bad block addition or something like that, there is a ripple effect that affects the overall result of the transaction. If instead we try to achieve secure scalability, we can use a trusted execution environment, where we delegate the transaction onto a trusted system that is not part of the main blockchain. This also meets scaling requirements, because only a few selected hardware nodes execute the transactions — not all nodes have to participate. But it leads to increased centralization, and by the very concept of blockchain, the more the decentralization, the greater the security: the more nodes that need to validate, the lower the chance of a block being committed wrongly onto the network. Finally, if we try to achieve decentralized security — that is something we are all very familiar with; a lot of the traditional blockchain projects we know aim for exactly this.
This obviously leads to longer consensus times, because it is highly decentralized, which means there is a communication overhead: each node needs to communicate with the others to achieve consensus. That yields increased security, like I just mentioned, but we know the solution is not scalable. This leads to a problem popularly known as the blockchain trilemma, a term coined by Vitalik Buterin. The trilemma states that at any given point in time you can achieve only two of the three corners of the triangle: decentralization, security, scalability. Traditional blockchains, as we said, cannot achieve scalability. If we look at projects that achieve scalability and security, many claim high TPS throughput, but they are not necessarily decentralized; they get there through mechanisms that promote centralization, where only a few selected nodes with specific hardware requirements or compute capabilities are able to execute. That increased centralization compromises security. There are also multi-chain networks — we spoke about island networks, and that ties into this concept. So I think we've understood the problem statement here; I hope so. Now I'll hand over to Arun to talk about the factors influencing scale. Sure, thanks Deepika. Now that we understand the problem statement — scalability really is a concern we need to deal with — let's define, in terms of blockchain technology, what factors influence it. We can visualize them as hardware-based factors and software-based factors.
Quickly talking through the common hardware-based factors: let's visualize blockchain technology as a system following a protocol that does multiple things. One of those is running smart contracts. To run a smart contract, it needs access to a state database, so there are state database reads, and there is a processing requirement to execute the contract. At the same time, verification needs to happen in terms of cryptographic proofs — both block verification and signature verification of submitters. Beyond these, another factor is network bandwidth. We often talk about blockchain at the application level and aren't really concerned with the deployment infrastructure — how the nodes are syncing information with each other. When we dissect blockchain into these different responsibilities, we can ask: how can I increase my in-memory storage so that more data is accessible at runtime, and I don't end up making read or write requests that need a longer cycle to commit to the state database? How can I increase my network bandwidth so that I can gossip, send, and receive transactions with other nodes in parallel? We won't go deep into each of them, but it's good to be aware of these terms, so that when you go for your network deployment, these are the factors you consider in building your solution. Now, quickly, some of the software-based factors that influence scalability. The first thing people talk about in blockchain is, of course, the consensus algorithm.
There is little choice, as of today, beyond which protocol we pick and which consensus algorithms that protocol supports. For instance, if you choose Fabric, there are endorsements that happen between the nodes, but at the end, when the ordering service runs, you're all familiar that out of the box it offers the Raft consensus protocol. There are integrations others have done that take a different approach, but as you see, most blockchains have limitations on how you can configure this. Broadly, there are two approaches. Take a consensus that speeds up execution: one challenge it brings is block creation — how do I deterministically say I can create this block in the next five seconds? If I need that kind of determinism, I need a consensus that supports it. PBFT, for instance, gives me that finality, but network bandwidth consumption increases: every node has to talk to every other node across three phases — pre-prepare, prepare, and commit — if you dissect how PBFT actually works. Similarly, if you choose Raft, nodes exchange messages for the leader-election mechanism, and the leader then replicates the log to every other node. And then there are forking algorithms that provide increased decentralization. If you're familiar with the initial version of Sawtooth, which supported PoET as a consensus, that was a forking style of algorithm: every node optimistically creates a block, thinking it is the leader in the network, and keeps creating blocks. The moment a node realizes it was not supposed to create a particular block, it tries to discard it.
It respects the other node that actually created the block. So we are creating forks and finalizing on one fork. You can imagine multiple chains being created and only one of them being finalized; there's a high chance you're discarding the rest. So there is, of course, increased decentralization — you can run any number of nodes — and eventual consistency is what you gain out of this. Consensus algorithms also go hand in hand with how we achieve fault tolerance in our blockchain networks. When we talk about fault tolerance, ask: do I need my blockchain just to safeguard whatever is committed into a block? Or do I need it to safeguard even the transactions that are being executed, validating them before they get written? That's the thought process that goes into it. If your application design involves all trusted parties, and all you need is record keeping of what has happened in the past, with trust established through other means in your enterprise, it probably makes sense to go with simple crash fault tolerance — one that does not provide Byzantine fault tolerance. That's fine: what matters most there is that once a block is created, it should not be possible for somebody to break it from that point. When we go Byzantine, it's really important to know whether somebody is being malicious in the network. Can I stop them from being malicious? Can I identify that scenario, and if I identify it, how do I stop it from occurring further? These are the kinds of decisions that go into designing Byzantine fault tolerant systems.
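The bandwidth point about PBFT can be made concrete with a toy message count. This is a simplified sketch of the all-to-all traffic in the three PBFT phases, not a consensus implementation; the function name is made up for illustration:

```python
# Rough message-count sketch for PBFT-style consensus (illustrative only).
# After the leader's pre-prepare, every replica broadcasts prepare and
# commit messages to every other replica, so traffic grows quadratically
# with the number of nodes n.

def pbft_message_count(n: int) -> int:
    pre_prepare = n - 1          # leader -> all other replicas
    prepare = (n - 1) * (n - 1)  # each non-leader replica -> all others
    commit = n * (n - 1)         # every replica -> all others
    return pre_prepare + prepare + commit

for n in (4, 8, 16, 32):
    print(n, pbft_message_count(n))
```

Doubling the node count roughly quadruples the messages per block, which is exactly why decentralization and scalability pull against each other in BFT-style consensus.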
These choices do matter: depending on whether you want to prioritize decentralization or scale, you can choose between them. One thing that stood out to me, in terms of software factors that definitely affect performance, is the flow in which a block is created. You might be wondering how exactly a blockchain creates a block, so let's dissect it in very simple terms. In a typical blockchain, you send a transaction and somebody needs to execute it — in our case a validator node, or whichever blockchain node is receiving transactions. When they execute it, they read some state and generate output state. They commit that output state into a structure — in programming terms, an object is created, and that object is a block. They put the result of execution into the block structure and link it to the previously created block; that way, if somebody wants to be malicious and break the blockchain, they have to break not just the current block but all the previous ones too. Now, within this same process, it's possible to follow two different orders — two different flows. Fabric, if you're familiar with it, supports execute-first: we send transactions to peer nodes, they execute the transaction and generate the output, and then we send it to the ordering service for ordering. By the time it goes to ordering, you already know the transaction results; after ordering, every other node just validates. Ordering is nothing but creating the block structure with all the transactions.
They just validate that those transactions were really executed as their results claim — that's the validation that goes in. In other blockchains, such as Besu or Sawtooth, it works differently: before you create the block structure, you pre-validate the transactions. You execute them, invalidate and drop the invalid ones, put the valid ones into the block structure, and then send that block to everyone else in the network. When they receive it, it's the receiver's responsibility first to verify that all the transactions are valid, and then to re-execute all of them and figure out whether what was sent matches the result claimed when the block was created. So there is an extra step: every node executes the transactions; every node replicates what one particular node did. This is another factor that influences scalability — going down into the system-details level, you will find the difference. Quickly, a couple more factors: let's talk about the smart contract itself. In very simple terms, a smart contract is a program, a piece of code, that instructs a blockchain node — a validator, or a peer in the case of Fabric — to perform some activities. It needs to be triggered from a client: the client sends a transaction, and based on that, the validator or peer node asks, "here is a request coming in for this particular smart contract — can you tell me what to do now?" The smart contract says, "yes, as per the code we have agreed on, I want to read the data stored at this address, or read a key from the database referenced by key1 or key2."
The validator reads it back, and the smart contract then says, "as per the rules, I processed your input data; here is the output for you." The validator or peer then takes it to the next step of creating the block structure. Within this process, there is a possibility to parallelize and make effective use of the blockchain protocol. How? If you can design your smart contract so that the read and write operations you perform land on different nodes or different keys — if you can logically separate them through your solution engineering — it's definitely possible to utilize the parallel transaction capability that some underlying blockchain protocols provide. We'll talk shortly about one such protocol available within Hyperledger. I also wanted to cover this: outside Hyperledger there are other blockchain protocols available, and within Hyperledger there are protocols that support both kinds of state model. This ties back to what I was discussing on the previous slide about keys and parallel execution — it's another way of visualizing it. At the end of the day, we are all dealing with some state. In the UTXO model, what typically happens is that I have an initial state created by a special transaction, and that initial state is mine. How I spend it, or how I move to the next state, is up to me. I don't need everybody's consensus to make that decision. Of course I need to run a smart contract, but that contract can be between exactly the parties I deal with. I don't need the entire network to agree with me.
If I'm dealing with Deepika, for instance, I just need to ask her: do you agree to this move? If yes, we both make that state transition, because it concerns only the two of us. That allows us to perform multiple transactions in parallel, by initializing them as separate logical entities through our smart contract design. The other approach is, of course, the shared state model, which allows us to simplify some business processes and define our smart contracts in much clearer terms. The challenge there is how we logically organize the state database and how we utilize those key-value pairs in our decision making. So we'll quickly review a couple of protocols available within Hyperledger; I'll invite Deepika again to cover them. Thanks, Arun. He's the one who spoke, but I need some water — hope that's OK. Right. First we'll look at Fabric. For those of you new to Fabric, the client first initiates a transaction request and sends it to a peer belonging to a participating organization. Within the peer, certain processes happen — let's look at the zoomed-in view. We have a peer node; the transaction request sent to the peer initiates a query to the state DB. This could be a read or a write, and here is where the first potential bottleneck could be: some latency, because whenever we do a DB query, read or write, it depends on the architecture of the DB itself. Once we have the required information from the state DB, it is sent to the chaincode server. The chaincode service itself could have inherent latencies in processing the code with the supplied information.
When this information comes back from the peers of the participating organizations, the Fabric client then sends it to the orderer node. Let's look at what happens in the orderer. The orderer is not just a single node: the ordering service consists of multiple orderer nodes, and these nodes need to communicate with each other, which results in a communication overhead. Once that is done, the ordering service sends the block to all the organizations, and validation happens again. So in what we just saw, the Fabric client is the one responsible for driving the transaction between the peers and then on to the orderer. But in an architecture like Besu, the validators — the nodes on the chain — are the ones responsible for doing all the validation among themselves and inter-communicating. Initially the client sends a request to any one of the validators; say it reaches validator 2. It then becomes validator 2's responsibility to communicate that request to validators 1 and 3. In blockchains that follow the Besu-style architecture, one requirement is that transactions be available to every validator in the network. Second, say validator 2 is communicating with 1 and 3, but some network delay or lag causes validator 1 to fall out of sync with the block state on 2 and 3. In that case a block sync needs to happen, which again takes time. Block validation refers to how every single validator in the network needs to validate every block that comes in. And consensus is again a communication overhead, where the validators need to communicate with each other.
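The two flows we've walked through — Fabric's execute-order-validate versus the execute-then-everyone-re-executes style of Besu or Sawtooth — can be contrasted in a toy sketch. The state model here (a dict of counters, with a `(value, version)` pair standing in for Fabric's read-set versioning) is entirely made up for illustration:

```python
# Toy contrast of the two block-creation flows discussed above.
# A "transaction" is just (key, delta); names are illustrative.

def execute_order_validate(state, txs):
    # Phase 1 (endorse): peers simulate each tx against current state,
    # recording which version of the key was read.
    endorsed = []
    for key, delta in txs:
        value, version = state.get(key, (0, 0))
        endorsed.append((key, version, value + delta))
    # Phase 2 (order): the ordering service just sequences the results.
    # Phase 3 (validate/commit): peers check the read version is still
    # current (an MVCC-style check) instead of re-running the chaincode.
    for key, read_version, new_value in endorsed:
        if state.get(key, (0, 0))[1] == read_version:
            state[key] = (new_value, read_version + 1)
        # else: the tx is marked invalid and skipped
    return state

def order_execute(state, txs):
    # Every node executes every tx in block order; no version conflicts,
    # but the execution cost is paid by all nodes in the network.
    for key, delta in txs:
        state[key] = state.get(key, 0) + delta
    return state

print(execute_order_validate({}, [("a", 5), ("a", 3)]))  # -> {'a': (5, 1)}
print(order_execute({}, [("a", 5), ("a", 3)]))           # -> {'a': 8}
```

The demo also shows the key-contention point from earlier: two transactions endorsed against the same key collide in the execute-order-validate flow, so the second is invalidated, whereas the order-then-execute flow serializes them at the cost of re-execution everywhere.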
So far, I think we have understood the various factors that influence scalability, and we've looked at architectures that currently exist in Hyperledger projects — Fabric and Besu — and where latencies can be introduced. Now we'll jump into the main topic: design patterns for scaling blockchain. The first pattern is single-chain sharding. Sharding is a concept dear to all engineers. It started with database sharding: instead of storing all of the data in a single database, we partition it horizontally into logical partitions called shards, which are then stored on separate physical machines. In database sharding, we split the data itself. In single-chain sharding, what we do instead is shard the nodes. Say we have 10,000 validators in the network and 100 blocks that require validation. We partition the nodes into groups of 100 nodes each, and each group is called a committee. We assign the first block to be validated by the first committee, so the first block is validated by 100 nodes; the second block is validated by the next 100 nodes, and so on — you can think of it as a kind of random sampling. Now, after committee one validates block one, they publish their signatures onto the network, attesting that they have in fact validated it. All the other nodes, instead of validating 100 full blocks containing the entire transaction history of the blockchain, just need to verify the 10,000 signatures that have come in from the validators. That is a significantly lighter-weight job than validating the blocks themselves. I've written some big-O notation over there.
Here's how single-chain sharding actually helps with scaling. Say each validator's computational capacity increases by 2x. Then each committee can validate blocks two times the size, and instead of, say, 100 committees of 100 nodes each, the network can also sustain 200 committees, because verifying signatures got cheaper too. Putting both together, the sharded chain's capacity has increased not by two but by four. That is essentially the point: if the computational capacity of a single node is O(C), the sharded chain's capacity becomes quadratic, O(C²). There are limitations — quadratic sharding isn't unlimited growth; there is a threshold — but we won't get into that in the interest of time. That begs the question: instead of having 100 committees, why can't we have 100 separate chains and do a data partitioning, where shard one goes to the first chain, shard two to the second, and so on? The problem is security. The data is split into shards across multiple chains, but now that each chain is smaller, if an attacker manages to corrupt the first chain, the result affects all the shards that depend on it. The attacker only needs one chain to go bad for the overall data to go bad. That's where the security problem comes in, and it is why we spoke about single-chain sharding. That was on-chain. Now let's look at off-chain transaction processing. I already mentioned trusted execution environments, and there's also the concept of state channels, which Arun will cover with the example of an existing Hyperledger Labs project. Then there's the concept of a side chain, where we peg one chain to the main chain.
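Coming back to the sharding arithmetic for a moment, the O(C) to O(C²) claim is just this back-of-the-envelope calculation. The function and units are made up for illustration; capacity is an abstract "work per unit time":

```python
# Back-of-the-envelope for quadratic sharding capacity (illustrative).
# A node with capacity C can (a) validate blocks of size ~C inside its
# committee and (b) verify ~C committee signatures, so the chain can run
# ~C committees producing blocks of size ~C: total throughput ~C * C.

def sharded_capacity(node_capacity):
    committees = node_capacity   # limited by signature-verification work
    block_size = node_capacity   # limited by in-committee validation work
    return committees * block_size

print(sharded_capacity(100))                          # -> 10000
print(sharded_capacity(200) / sharded_capacity(100))  # -> 4.0
```

Doubling per-node capacity quadruples total throughput, which is the quadratic (but, as noted, not unbounded) scaling of single-chain sharding.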
The side-chain peg is a two-way peg: we move assets from the main chain to the side chain. Say there's some transaction that needs to execute, but the main chain has latency or delays. So we lock the assets on the main chain — the bridge is what performs the locking and transfer — and on the side chain we mint an equivalent amount of assets, using whatever asset-generation mechanism the side chain has. We then spend those assets on the side chain, and when we want to come back, the side-chain assets are burned and the exact amount is unlocked on the main chain. That way the work — the assets that needed to be minted and spent — has been done and validated on the side chain, and the main chain's latency has not affected the overall process. Over to Arun for optimistic rollups. Sure. In the interest of time, I'm going to make this more interesting. So far we have been seeing scaling techniques that add some capability alongside the blockchain. I'm going to continue in the same vein, but in this technique, instead of adding an external capability, let's imagine: what if clients can be more intelligent? What if the party sending transactions to the blockchain can do more than they currently do? That's where the rollup techniques come into the picture. What do rollups do? They process some transactions outside the blockchain. In an optimistic rollup, transactions come in, we batch them together, and then put them onto the blockchain — that's the one-sentence definition. What exactly happens is that we have an operator who monitors for incoming transactions.
The operator batches them and says: I take responsibility for committing these transactions onto the main blockchain, whichever protocol you choose. And there is a responsibility on other people in the network to challenge this operator, because we are now making the operator more powerful — the party sending transactions to the actual blockchain carries more power than before, and the trust establishment within the blockchain layer is now delegated, to some extent, to the client sending the transactions. That's why the challenge mechanism needs to exist. If you don't agree with a posted result, you have a challenge period where you can go and say: I don't agree with your result; let's reprocess this and see who is correct. Whoever is correct wins, and the batch either gets accepted or rejected. Similarly — and this is a concept almost everybody speaks about nowadays — let me try to break down, in very simple terms, the other rollup technique: the zero-knowledge rollup. You again do the work outside; the client becomes more powerful and then commits the information onto the actual blockchain. But this time, instead of directly batching and adding, they add some more intelligence. The operator, or whoever is adding the transactions, holds additional responsibility: they cannot be blind, and there is very little chance they can add something arbitrary onto the blockchain. How is that done? You probably all know what zero-knowledge technology is — it has existed for a long time now, and it has those three properties of soundness, completeness, and zero-knowledge.
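Before getting into the zero-knowledge variant, the optimistic rollup's happy path plus a fraud challenge can be sketched in a few lines. All names here are made up, and on-chain posting, bonds, and the challenge window are reduced to plain function calls:

```python
# Sketch of an optimistic rollup (illustrative only). The operator posts
# a batched result without any proof; during the challenge period, any
# watcher can re-execute the batch and have a fraudulent batch rejected.

def apply_tx(state, tx):
    key, delta = tx
    state = dict(state)  # work on a copy, leaving the input untouched
    state[key] = state.get(key, 0) + delta
    return state

def operator_commit(pre_state, txs, claimed_state):
    # the operator optimistically posts (txs, claimed result) on-chain
    return {"pre": pre_state, "txs": txs, "claimed": claimed_state}

def challenge(batch):
    # a challenger re-executes the whole batch from the pre-state;
    # a mismatch is a fraud proof and the batch gets rolled back
    state = batch["pre"]
    for tx in batch["txs"]:
        state = apply_tx(state, tx)
    return state == batch["claimed"]  # False -> batch rejected

honest = operator_commit({}, [("a", 1), ("a", 2)], {"a": 3})
fraud = operator_commit({}, [("a", 1)], {"a": 100})
print(challenge(honest), challenge(fraud))  # -> True False
```

The design trade-off is visible even in the toy: the main chain does no execution in the happy path, but finality has to wait out the challenge window in case someone calls `challenge`.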
To break those terms down simply: suppose I hold an Indian passport. At immigration, all I should need to convey is, "I hold a valid Indian passport, and I'm coming into your country." If the officer trusts that claim, they let me in; if they don't, they stop me. How can we achieve that trust using zero knowledge? The Indian government issued me the passport, so that is my proof. I take that proof and put it on the blockchain, and from that proof no information can be derived other than the fact that I hold a passport. That is zero-knowledge: there is proof of ownership, there is proof that it was issued by the correct entity, and there is no leakage of information beyond what I am actually claiming. The same technique is applied in the rollup: when the operator commits to the blockchain, they add a validity proof along with the batch. You may have heard of zk-SNARKs; the idea is that the proof is added just once, it can be verified much faster than re-executing the transactions, and it reveals no information other than what is being claimed. Let's also quickly talk about the current state within Hyperledger. We have spoken about a lot of concepts; are these available, or are they just concepts? Let me break that down with two examples. In one of the previous slides, I spoke about how the design of the smart contract matters, so let's take an example from how we write transaction processors in Sawtooth, which is a state-address-based project.
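The passport analogy corresponds to a classic minimal zero-knowledge construction: a non-interactive Schnorr proof of knowledge of a discrete logarithm, made non-interactive with the Fiat-Shamir heuristic. This is a toy, not what production zk-SNARKs use, and the tiny group (p = 23, q = 11, g = 4) is for illustration only; real systems use large groups or elliptic curves and prove far richer statements.

```python
# Toy non-interactive Schnorr proof: prove knowledge of x with y = g^x mod p
# without revealing x. Exhibits soundness, completeness, and zero-knowledge.
import hashlib
import random

p, q, g = 23, 11, 4          # g has prime order q in the group Z_p*

def H(*vals):
    # Fiat-Shamir: hash the transcript to derive the challenge.
    data = "|".join(map(str, vals)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    y = pow(g, x, p)         # the public value (the "passport" anyone can see)
    r = random.randrange(q)  # fresh randomness for every proof
    t = pow(g, r, p)         # commitment
    c = H(g, y, t)           # challenge, computed instead of asked
    s = (r + c * x) % q      # response; leaks nothing about x on its own
    return y, (t, s)

def verify(y, proof):
    t, s = proof
    c = H(g, y, t)
    # Completeness/soundness check: g^s == t * y^c (mod p),
    # since g^(r + c*x) = g^r * (g^x)^c.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, proof = prove(x=7)        # only the prover ever sees x
assert verify(y, proof)
```

A zk-rollup applies the same shape of idea at scale: one short proof, verified once on-chain, stands in for re-executing the whole batch.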
If you remember how to write a transaction processor, your smart contract, you basically have to derive the state addresses where your data will be stored. You submit your transaction based on those addresses, a transaction processor executes it, and the validator will try to parallelize execution if there is no conflict among the addresses. What does that allow? A Sawtooth state address is a 70-character hex string (35 bytes), and the first few characters represent a namespace. That way you are delegating right away: if a transaction comes from another namespace, the two can be executed in parallel, because the smart contract makes a state transition from an input state to an output state, and with disjoint address sets my state stays consistent even if two transactions execute in parallel. There are no conflicts, the block has a high chance of being accepted, and parallelization comes into the picture. This is achieved by choosing two parameters in the client transaction that you sign: the inputs and the outputs, that is, what you are reading and what you are writing. If there are no conflicts across your namespaces and you design that prefix carefully, you can parallelize your transactions, and there is room to do further design analysis based on the use case. I'll quickly cover state channels as well. There is a project in Hyperledger called Perun which uses the state channel concept. I have three minutes left, so I'm going to go rapidly. What is a state channel? Essentially, say there is an agreement between me and Arun that I need to pay him $1 every day for one week.
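Before continuing with that example, the Sawtooth addressing scheme just described can be sketched as below. The address layout (a 6-hex-character namespace prefix from the SHA-512 of the family name, plus 64 hex characters from hashing the key) follows the Sawtooth convention; the `can_parallelize` helper and the "votechain" family name are hypothetical, added to illustrate how declared inputs/outputs drive the validator's conflict check.

```python
# Sawtooth-style state addressing: 70 hex characters (35 bytes), with the
# first 6 characters acting as a transaction-family namespace prefix.
import hashlib

def namespace_prefix(family_name):
    # First 6 hex chars (3 bytes) of the SHA-512 of the family name.
    return hashlib.sha512(family_name.encode()).hexdigest()[:6]

def make_address(family_name, key):
    # Prefix + 64 hex chars of the hashed key = 70 hex chars total.
    return (namespace_prefix(family_name)
            + hashlib.sha512(key.encode()).hexdigest()[-64:])

def can_parallelize(tx_a, tx_b):
    """tx_* are (inputs, outputs) address sets, as declared by the client.
    Two transactions conflict if either one writes an address the other
    reads or writes; otherwise the validator may schedule them in parallel."""
    in_a, out_a = tx_a
    in_b, out_b = tx_b
    return not (out_a & (in_b | out_b) or out_b & (in_a | out_a))

addr1 = make_address("intkey", "counter-a")
addr2 = make_address("votechain", "ballot-1")   # different namespace
tx1 = ({addr1}, {addr1})
tx2 = ({addr2}, {addr2})
assert len(addr1) == 70
assert can_parallelize(tx1, tx2)   # disjoint address sets -> parallel
```

Choosing the namespace and key layout up front is exactly the design work that decides how much parallelism the validator can extract. Back to the state channel example.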
Now, instead of logging every single $1 payment as a transaction on the blockchain, I open a channel; that is the first transaction executed on-chain, initiating the channel. Then I just send him a signed message off-chain that says, "I owe you $1 today." Tomorrow I send another, and so on. At the end of the week I owe Arun $7 and he wants to cash out, so he sends a second on-chain transaction, closing the channel. When he does that, the smart contract on the blockchain settles the final result and transfers the $7 to him. There is also the concept of a hashed timelock: if Arun does not initiate the close transaction before the deadline, all of the signed messages exchanged on the state channel are invalidated. That is essentially the concept of state channels. I'll skip the conclusion in the interest of time; we're open to questions. In this session we wanted to cover the aspects that influence scalability, so we did not show execution results for any particular blockchain, but these techniques do improve performance. For instance, the state channel concept just described helps when only two parties are interacting: they can exchange as many transactions as they want, much like a streaming process, as long as every transaction is signed both ways, meaning both parties agree to each one. They don't need to wait for the blockchain to commit anything, because they have opened a channel that is active for a period, and within that period they can execute as many transactions as they want.
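The open/pay/close lifecycle above can be sketched as follows. As a simplifying assumption, HMAC stands in for real digital signatures (a shared-secret MAC, not a public-key scheme), and the class and key names are illustrative; a real channel contract would also implement the timeout/challenge logic mentioned above.

```python
# Toy payment channel: open on-chain, exchange signed IOUs off-chain,
# settle once on-chain with the final cumulative IOU.
import hashlib
import hmac

def sign(key, msg):
    # Stand-in for a digital signature over the channel state.
    return hmac.new(key, msg.encode(), hashlib.sha256).hexdigest()

class ChannelContract:
    """The on-chain part: it only ever sees two transactions, open and close."""
    def __init__(self, payer_key, payee_key, deposit):
        self.payer_key, self.payee_key = payer_key, payee_key
        self.deposit = deposit   # funds locked when the channel opens

    def close(self, final_iou, payer_sig, payee_sig):
        # Both parties must have signed the latest cumulative IOU.
        assert hmac.compare_digest(sign(self.payer_key, final_iou), payer_sig)
        assert hmac.compare_digest(sign(self.payee_key, final_iou), payee_sig)
        owed = int(final_iou.split(":")[1])
        assert owed <= self.deposit
        return owed              # paid out to the payee on settlement

payer_key, payee_key = b"deepika-secret", b"arun-secret"
channel = ChannelContract(payer_key, payee_key, deposit=10)  # open: on-chain tx #1

# Off-chain: one cumulative IOU per day; only the latest one matters.
ious = [f"iou:{day}" for day in range(1, 8)]                 # $1 ... $7
final = ious[-1]
payout = channel.close(final,
                       sign(payer_key, final),
                       sign(payee_key, final))               # close: on-chain tx #2
assert payout == 7
```

Note that the IOUs are cumulative, so seven off-chain messages collapse into a single on-chain settlement, which is where the per-transaction validation cost disappears.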
They just need to agree on everything, and at the end of it, the final result, with the signatures for each transaction, is verified on the blockchain. This of course improves performance, because we remove the per-transaction validate phase; in the order-execute-validate flow that I spoke about initially, validation now happens just once per state channel period. Another cool factor is that this helps us abstract away the blockchain itself: because I'm not tying myself to a particular blockchain, I can use any protocol of my choice; a state channel is independent of which blockchain it runs on. There is a lot that happens within the Hyperledger community. I know it's the end of our schedule, but please do join the calls. There is a scalability working group, and there used to be an architecture working group; they run weekly or monthly meetings where you can learn more and meet the people building these solutions. Thank you; if there are any questions, we are happy to answer, otherwise we are good. Yeah, on side chains: out of the box there is no such implementation. There is the Fabric Smart Client; it is not exactly the side chain concept, but it allows you to build intelligent transactions that work with Fabric's channel concept in a different way, where you can define workflows that have dependencies across channels. It's not a straightforward implementation of the side chain concept we discussed, but it implicitly gives you that capability: channels can depend on one another, and if there is a dependency event, that solves a similar kind of problem statement. It wasn't designed exactly that way, but out of the box it provides that kind of solution. Any other questions? Oh, okay.
When we design a smart contract, people generally start thinking in terms of the business workflow. They say, "per my transaction I have to follow this flow: person A needs to do tasks A, B, C, person B needs to do tasks X, Y, Z, and only when both have finished is the workflow complete." But it's very important to also ask: can I make effective use of the blockchain I'm running on? One example I gave was how we create the namespaces within the state database, and whether we can keep something in memory so that the access pattern becomes smoother and faster. That is the elegance I was talking about in the conclusion: spend time not just on the solution side, but also on the scaling factors we just discussed. If we bring those factors into our design, the process becomes much smoother, and it helps us extract every capability that a particular blockchain protocol has to offer. Okay, we are around here, and thank you all. Thank you very much.