So, on Ethereum there is a limit on contract size, and that's what this talk is about. What will be covered? Here's a brief overview. I will talk about the rationale behind the limit and why it was put in place. Then I will discuss a few alternative proposals for overcoming the problem by eliminating the need for a limit. Then I will give some tips for optimizing and reducing the size of your smart contracts. And finally, I will talk about libraries and proxies that you can use to deploy contracts of effectively unlimited size.

So, is there a limit? Yes, there is. EIP-170 introduced a limit of 24 kilobytes on smart contract size. You can't deploy a contract larger than 24 kilobytes on Ethereum. Even though it only costs about 6 million gas to deploy a contract of 24 kilobytes, and the block gas limit is now about 10 million, you can't deploy any bigger contracts.

Why was the limit put in place? Whenever someone calls a contract, the node has to load the contract bytecode from disk, and it obviously costs more resources to load a bigger contract than a smaller one. But the cost of loading a contract is fixed in Ethereum right now: the cost of calling a contract using CALL, DELEGATECALL, STATICCALL, etc. is fixed. So it became a DoS vector, where someone would be able to deploy large contracts while still paying a low amount of gas. It was therefore decided to do the simple fix of limiting the contract size. It wasn't a pretty fix, but it worked.

One thing to note is that even if something runs out of gas on Ethereum, the node still has to prove that the code was executed and ran out of gas. In this scenario, if a smart contract's code were greater than 24 kilobytes, i.e. if there weren't any limit, the code would still be fetched from disk, and then we might run out of gas and have to generate an out-of-gas proof.
But for generating the proof, you still need to load the whole contract. So even if you don't have enough gas, the node always has to load the whole contract from disk, which costs it real resources.

Back when this limit was created, about two or three years ago, the apps on Ethereum weren't that complex. We mainly had ICOs and the like, and they didn't really care about the limit; they never reached the 24-kilobyte code size. But nowadays we have many more complex apps, for example the MakerDAO system and the Polymath ecosystem. These are a lot more complex and require bigger contracts. So it's finally time that we explore other solutions that overcome the problem while still allowing us to deploy larger contracts.

One of the solutions is pagination. In pagination, we divide the contract into chunks of 24 kilobytes and load whichever chunk is needed. Whenever you call a smart contract, the first chunk is always loaded, and that chunk then loads further chunks as needed. Any additional chunk that gets loaded costs additional gas, so it's not a DoS vector anymore. But the problem with this solution is that we'd first have to change how smart contract code is stored in the account trie. Right now the contract code is stored as a single blob linked to the account, but we'd have to store it as something like an array of code chunks. That requires decent changes in the protocol itself. The other big problem is that it would require huge changes in programming languages like Solidity and Vyper, because once you introduce chunks, you have to write your code such that a function call loads as few chunks as possible. If I'm doing a transfer in an ERC-20 token, I don't want to load 10 different chunks to pull code from different places; I want the whole function to execute within a single chunk.
So we'd have to write code accordingly, and high-level languages would have to adapt. It's very hard to implement, but it's a good solution.

The other solution is basically making libraries and proxies cheaper. I like this solution very much, and I'd say it should have been implemented quite some time ago. Even if the other solutions are not implemented, this one should be, and it can also be combined with the other solutions to give us further advantages. What it says is that we divide the call cost into two different costs: a fetch cost and an execution cost. Right now, loading the smart contract and executing it happen in a single call and cost the same amount every time. So if you do a batch transfer and call the token contract multiple times, you are charged the whole amount multiple times, but the node actually does less work, because it loads the contract only once and then just executes it. This proposal says that you pay separately for loading the smart contract from disk and for executing it. If this is implemented, batch transfers will become cheaper, because you'll only pay for loading the smart contract once, and after that you'll pay only the execution cost.

It will also make delegate proxies and libraries easier. Right now, if you call a public function of a library, it loads the whole library contract, and you pay gas for it. So if you're using, say, 10 library functions in your smart contract and you call all 10 of them, you pay the loading cost for the library 10 times, even though the library is actually loaded only once. If this proposal is implemented, you'll pay the loading cost only once, which will make libraries much more viable. Right now, if you use libraries, the gas cost increases by about 20%, so they're not really that useful.
But after implementing this change, that difference will shrink, so I hope people will actually start using libraries and delegate proxies. There's one slight disadvantage, though. This proposal encourages the use of more, and bigger, smart contracts, and I'd say that's a good thing for Ethereum, but it comes with the disadvantage that light clients will have to do more work. It's not a problem for full nodes, because they already have all the code in their database, but light clients have to fetch all the smart contract code from other clients every time they execute something. So imagine I'm now using 10 different smart contracts in my call: the light client will have to fetch 10 different smart contracts from the nodes. It's more work for them, but that's the trade-off.

The other simple solution is just charging more gas for more work. This says that if your smart contract is greater than 24 kilobytes, you pay more gas for all the actions you do: if you do a call, you pay more gas; if you do a delegate call, you pay more gas. But the way it works is that the first 24 kilobytes are free. So if your smart contract is below 24 kilobytes, you won't pay any extra gas; the gas you are paying right now remains as it is. So it's completely backward compatible, but it allows deploying larger smart contracts. If you deploy a smart contract that is, say, 30 kilobytes in size, you pay the 24-kilobyte base cost plus an additional cost for the extra 6 kilobytes you used. That feels natural, like it should be: you pay for what you get. This also comes with the same disadvantage as the last solution, that it's more work for light clients. If there are more contracts and more code, more data has to be transferred over the network, and that's more work for light clients. But again, that's the trade-off.
So, those were the alternative proposals, but what can you do now? The limit is in place, and let's say you have to create a bigger contract. What can you do? First of all, you can optimize your code to reduce its size. I'll talk about a few techniques.

One thing to note is that the EVM only works with 256-bit words. So even if you create an 8-bit variable, say a uint8, and add it to another uint8 variable, the Solidity compiler will convert those uint8s to uint256 in the background, add them as uint256, and then convert the result back to uint8. It should be obvious that this uses more contract code, and it also uses more gas at runtime. So if you don't need the smaller variables, you can just always use the full-size ones.

Now, the main reason you would want to use smaller variables rather than bigger ones is that smaller variables can be packed together. As I mentioned, everything is 256 bits, so every storage slot is 256 bits, and for every storage slot you have to pay 20,000 gas. If you create 32 separate 256-bit variables, you'll have to pay 32 × 20,000 gas for that storage. But if you create smaller variables, say uint64, then 4 uint64s can be packed into one 256-bit slot: 64 + 64 + 64 + 64 = 256. So for 4 uint64 variables, you only pay the storage cost for one slot, i.e. only 20,000 gas. That's where you save gas. So if you can pack your variables, go with this approach and use smaller variables; but if you cannot pack them, just use the larger ones.

Okay, this one might not be obvious, but function modifiers are inefficient when it comes to code size. Function modifiers in Solidity are inlined. Let's say I have a contract with a function modifier called onlyOwner that allows only the owner to call a function, and I have 20 functions that only the owner can call.
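Before going on to modifiers, here's a quick hypothetical sketch of the packing point (the contract and variable names are just illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Packed {
    // Declared contiguously, these four 64-bit variables share a
    // single 256-bit storage slot (64 + 64 + 64 + 64 = 256), so
    // together they cost one storage slot instead of four.
    uint64 public createdAt;
    uint64 public updatedAt;
    uint64 public count;
    uint64 public flags;

    // A full-width variable always occupies its own slot.
    uint256 public balance;
}
```

Note that the declaration order matters: the compiler packs adjacent state variables, so interleaving small and full-width variables can waste slots.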
If I use this onlyOwner modifier, what the Solidity compiler does is copy-paste the code inside the onlyOwner modifier into all 20 of those functions. You can see there's code repetition there: the code is inlined, copy-pasted into all 20 functions. This is not a big issue when your modifier is very small, just one line. But if your modifier has five or six lines and does some calculations as well, this code repetition can become very painful. Internal functions, on the other hand, are not inlined in Solidity or Vyper. A call to an internal function is a separate jump, so there's no code repetition when calling internal functions. So if your function modifier is big and you're using it in multiple places, I'd suggest you use an internal function and call it instead of using a function modifier. That way you can save a lot of code size. But do keep in mind that calling an internal function is slightly more expensive at runtime than using a function modifier; you'll be spending, say, about 50 gas more. It doesn't really matter in practice, but it's something to keep in mind.

Another thing you can use is libraries. As I mentioned, libraries are an awesome thing; the only reason holding them back right now is that they are expensive to use. If you use libraries, you have to make an external call, and it costs about 3,000 gas just to make that call, whereas an internal function call costs only about 50 gas. So there's a huge discrepancy there. But if the change I talked about earlier, where the fetch cost and execution cost are split into two parts, gets implemented, then using libraries will become very cheap and they'll be a very good tool. You can use them even now if you're desperate.
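A hypothetical sketch of the modifier-versus-internal-function trade-off (names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Owned {
    address public owner = msg.sender;

    // The modifier body is inlined: it is copied into the bytecode
    // of every function that uses it.
    modifier onlyOwner() {
        require(msg.sender == owner, "not owner");
        _;
    }

    // An internal function exists once in the bytecode; each caller
    // jumps to it, at a small extra runtime cost.
    function _requireOwner() internal view {
        require(msg.sender == owner, "not owner");
    }

    function pause() external onlyOwner {
        // the modifier's check code is duplicated here by the compiler
    }

    function unpause() external {
        _requireOwner(); // no duplication, just a jump
    }
}
```

With 20 such functions, the internal-function version stores the check once, while the modifier version stores it 20 times.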
One thing to note about libraries is that only the public functions of a library are actually called externally. If you define an internal function in a library, that code is also inlined into your main contract, and you won't save anything. So just as function modifiers are inlined, internal library functions are also inlined. If you want to reduce your contract's code size, use public library functions. For libraries, delegatecall is used under the hood, so the context of the calling contract is passed along. This slide is just about how libraries work, so I'll skip forward.

Now, some miscellaneous tips. Some of these are obvious. For example, avoid initializing variables with default values. I've seen a lot of people who come from backgrounds like C or C++, so they have a tendency to initialize all their variables to zero. But in Solidity and the EVM, any uninitialized variable already has the value zero. There's no difference between an uninitialized variable and a variable explicitly set to its default value: zero, false for a boolean, or whatever the default for that type is. So if you just want a variable with value zero, just declare it; you don't need to set it to zero separately and waste precious gas doing that.

Use short reason strings in require statements. This might be obvious, but I've seen people write a whole novel in a require statement: you should be doing this, this, this, and this for it to succeed. That actually costs gas when deploying the smart contract, because the reason strings are stored on the blockchain itself, so you pay gas for them. Always try to keep your reason strings within one word, that is, 32 bytes (256 bits).

Avoid repetitive checks. This is quite common, and it was present even in OpenZeppelin's earlier contracts.
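The default-value and reason-string tips above can be sketched as follows (hypothetical names):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Tips {
    // Wasteful: storage is zero by default, so an explicit
    // "= 0" initializer only adds code and deployment gas.
    // uint256 public total = 0;

    // Better: an uninitialized variable is already zero.
    uint256 public total;

    function deposit() external payable {
        // Keep the reason string within 32 bytes: the string is
        // stored in the deployed bytecode and costs gas.
        require(msg.value > 0, "zero value");
        total += msg.value;
    }
}
```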
So, when you use SafeMath, you're already checking for underflow and overflow; you don't need to check separately. Say you're writing a transfer function and setting balance = balance - amount. If you write balance = balance.sub(amount), you're already checking that the amount is not greater than the balance. You don't need an additional check that says require(balance >= amount); that's an unnecessary check, and you can just remove it, because SafeMath already does it.

Make proper use of the Solidity optimizer. Many of you already use the optimizer, I guess almost all of you do, but there's one parameter you can configure, called runs. The default value is 200. runs is a number you pass to the compiler, and the compiler treats it as the expected number of times the smart contract will be called. If you set runs to one, the compiler optimizes the code so that it is very small: it's cheaper to deploy, but a bit more expensive to call. If you set the number very high, the code size will be a bit bigger, so deployment is more expensive, but for end users, calling that smart contract will be a lot cheaper. If you're deploying a token contract or something similar, I'd suggest setting this number high so transfers are cheaper. But if you're deploying something like a vesting contract that you only call once, you can set it low so that deployment costs are low.

Now, bypassing the limit. Basically, you can use libraries and delegate calls to bypass the limit. What a delegate call does is keep the storage of the current contract but use the code of another contract.
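Going back to the repetitive-checks tip, here's a sketch assuming OpenZeppelin's SafeMath (the pre-0.8 pattern the talk refers to; the import path and names are illustrative):

```solidity
pragma solidity ^0.5.0;

import "openzeppelin-solidity/contracts/math/SafeMath.sol";

contract Token {
    using SafeMath for uint256;

    mapping(address => uint256) public balanceOf;

    function transfer(address to, uint256 amount) external {
        // Redundant: SafeMath's sub() already reverts on underflow,
        // i.e. whenever amount > balanceOf[msg.sender].
        // require(balanceOf[msg.sender] >= amount);

        balanceOf[msg.sender] = balanceOf[msg.sender].sub(amount);
        balanceOf[to] = balanceOf[to].add(amount);
    }
}
```

(In Solidity 0.8 and later, arithmetic reverts on overflow and underflow by default, so the explicit check is equally redundant there.)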
So you can make a delegate call in your own contract to another contract, using the code of that other contract. In this case, let's say you have a token contract that uses delegate call: all the balances of that token contract are used, but the code comes from the other contract. You keep the data, you keep the context, but you run different code. So you can divide your smart contract into multiple contracts, where a single master contract holds the storage and the other contracts hold the code.

There are other techniques, like EIP-1538, which is also based on delegate calls. In EIP-1538, the author has made the first contract a master contract that is an index of all the other contracts. You always call the master contract, and the master contract decides which function to call. It's a bit like pagination, but done on top of the current architecture, so it's a bit more expensive, but it works. There's a master contract; you call it, let's say you call the transfer function; the master contract sees which of the subcontracts has the transfer function and then calls that one.

And those were my slides. Thank you for coming out. If you have any questions, feel free to ask.

Are there good libraries for doing this?

I don't think there are any libraries, but variables are always packed by default. In storage, if you define smaller variables, the Solidity compiler will always pack them together; you don't need to do anything. There are some rules you need to follow for packing, though. For example, you cannot pack variables defined in memory, so variables defined within a function will not be packed. Variables defined outside functions will always be packed. So there are a bunch of rules around this.
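The master-contract dispatch idea described in the talk could be sketched roughly like this; it is a simplified illustration, not the EIP-1538 reference implementation, and all names are hypothetical:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Master {
    // Maps each function selector to the subcontract holding its code.
    // (Registration of implementations is omitted for brevity.)
    mapping(bytes4 => address) public implementations;

    fallback() external payable {
        address impl = implementations[msg.sig];
        require(impl != address(0), "no implementation");

        // delegatecall runs the subcontract's code against this
        // contract's storage, msg.sender, and msg.value.
        (bool ok, bytes memory ret) = impl.delegatecall(msg.data);

        // Bubble up the return data or the revert reason verbatim.
        assembly {
            switch ok
            case 0 { revert(add(ret, 32), mload(ret)) }
            default { return(add(ret, 32), mload(ret)) }
        }
    }
}
```

Each subcontract stays under the 24-kilobyte limit, while the master contract holds all the storage, so the system as a whole can exceed the limit.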
You can look at the Solidity documentation, and you can also look at my blog; I did a post about it.

I have a question about something like mappings. If they grow really big, the size of the mapping, is that a problem? Is there a restriction there?

So, on the EVM you can have 2^256 storage slots, and in reality you can never use 2^256 storage slots. So that storage growth is not an issue.

Okay, I guess that's it. Thank you.