Good morning everyone. Today we have Kai Michaelis, founder of Nikkei Systems, a consulting firm for platform security and cryptography. He's also co-founder of the Open Source Firmware Foundation and Immune. He earned a master's degree in computer security in 2018 from Ruhr University Bochum and has previously worked on GnuPG and Sequoia PGP. So welcome, Kai, for the talk on the past, present, and future of scaling Web3. Thank you everyone. So yes, this first talk will be a little historical overview of what has been done in roughly the last 10 years to scale blockchain, to scale Web3, to 8 billion users. All right, I'm Kai. I'm mostly into computer security, and for me at least, the most interesting areas of computer security are the ones where security is an enabler. You don't want security to be just an add-on; you want security to enable new products, new services. And I think blockchain is one of these areas where security can still be an enabler. So if you search for "high gas fees" or something like that on the internet, you find some pretty funny pictures. This is a Uniswap swap where we swap roughly $35 in ETH and the gas fee is $24. So roughly 70% of the whole value of the transaction is just burned away in gas fees. Here's another example; this is like a $150 swap. These examples are of course cherry-picked, but the capacity of a blockchain is limited, and when there's high demand for block space, transaction fees spike up and stay elevated for quite some time. This is not just an annoyance; it has a real impact on the type of applications you can realize on Web3. An example here is FriendTech, which was super popular a few weeks ago.
The basic idea is that this is an on-chain Telegram, but in order to join a channel, you have to buy a ticket into it, and the ticket is an on-chain object. So you have to do an on-chain transaction to join, which means that in order to use FriendTech, you have to have a wallet and you have to fund it with ETH. If you want to use FriendTech as somebody who's not a crypto native, that is quite complicated, right? FriendTech is on Base, so it's well integrated with Coinbase, but you still have to explain to people: okay, you have to get a Coinbase account, buy some ETH, move the ETH from your Coinbase account into your FriendTech wallet, and then you can use it. It's not really very easy. Good. Another example would be Farcaster, which is an on-chain Twitter. The basic idea is that your account, and all the important information associated with it, is an on-chain object, so you are the only person that controls this data, and it's open: you can move it to other networks, other implementations, which is pretty cool. But that means that if you want to open an account — for example on Warpcast, the client for iOS — then before you see anything, before you see your timeline or can understand how the product works and who's on there, you are presented with a screen that tells you: hey, please pay 5 bucks to cover the gas fee to create your account. And I think Warpcast is pretty much the best implementation of that, because it's an app and you can use Apple Pay; it's just one button press away and then you have your gas fee paid. But it's still a high-friction way to get people onto the product, right?
Compare this with what we have in Web 2, where you first give your product away for free and get people hooked, and then you have the upsell — which makes it way easier than starting off with a paywall. Okay. And then look at the price: 5 bucks for a year. If you look at what you can buy on AWS in terms of servers, the cheapest VM you can get there is roughly $20 a year, and I would say one of these VMs can support more than four users for a year. Obviously, I understand that doing this on-chain has additional benefits; it's not just some server running somewhere, and I can understand that this costs a bit more — but maybe it doesn't have to cost orders of magnitude more. Okay. So what can we do to make this cheaper? First we have to look at how these blockchains work and why this has to be so expensive. This is the Web3 track, so I assume everybody has a rough understanding of how this blockchain stuff works; I will just give a short overview, in this case for Bitcoin. We have a bunch of nodes, which are just computers, and they're connected over some kind of network like the internet. If I want to do something on this network — for example, send money from A to B — I create a new transaction, I sign that transaction, I send it to one of those nodes, and the node broadcasts this transaction over the network. Each of these nodes receives the transaction and verifies that the transaction by itself makes sense. And then, after all the nodes have received this transaction, the mining process starts. The mining process is effectively just everybody racing to mine the next block, and it randomly selects one winner, who then produces the next block, which includes my transaction and all the other transactions that have been collected in the meantime.
Then we broadcast the block, everybody receives the block, everybody verifies the block, and if they're happy with it, the cycle repeats. If there's a network interruption — blockchains are on the availability side of the CAP theorem — the system continues working, but we have a split world. So we have, in this case, two worlds, one with two nodes and one with three nodes, and the mining process will select a winner in both worlds. Once the interruption is resolved, they have to merge their state — not into some superset; one of these states will win. In the case of Bitcoin it will probably be the one from the three nodes, because more computational energy has been put into that state than into the other one. Okay, so a few observations here. Every node in the Bitcoin network has to verify every transaction, so there's a lot of superfluous verification and computation going on. Also, all the state has to be distributed across the whole network. There are optimizations that allow me, after I've verified something, to prune the state, but there have to be at least some nodes in the network that keep the whole state, so that when a new node joins the network, it can retrieve the whole state and verify it itself. The network also has to proceed in lockstep, so there's not much parallelization going on: everybody collects these transactions, they have to be broadcast, then everybody races to mine a new block, then the block has to be distributed, and then the whole cycle repeats. We can't really overlap these steps in any way. There's no real parallelization going on, which means that adding nodes to the network doesn't really increase the capacity of the network — it does to some extent, but it trails off pretty fast.
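The fork resolution just described — when the partition heals, the history with the most accumulated work wins — can be sketched roughly like this. This is a toy model, not real Bitcoin code; real nodes derive the expected work from each block's difficulty target rather than storing it as a field:

```python
# Toy fork-choice rule: after a network split heals, nodes see
# competing chains, and the one with the most cumulative work wins.

def cumulative_work(chain):
    """Sum the work claimed by each block in a candidate chain."""
    return sum(block["work"] for block in chain)

def choose_chain(candidates):
    """Pick the candidate chain with the most total work."""
    return max(candidates, key=cumulative_work)

# Two competing histories after a partition (the "work" field is a
# stand-in for the work implied by each block's difficulty target):
chain_a = [{"height": 1, "work": 10}, {"height": 2, "work": 10}]
chain_b = [{"height": 1, "work": 10}, {"height": 2, "work": 10},
           {"height": 3, "work": 10}]

winner = choose_chain([chain_a, chain_b])
assert winner is chain_b  # the heavier history wins
```

This is why the three-node partition wins in the example: it simply accumulated more work while the network was split.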
It also means that the speed of the network, the capacity of the network, is pretty much limited by the smallest node that is a member of this network. I cannot slow down the network by becoming a node and then just doing less work than I have to — I'll just be left behind — but there's no additional benefit from adding larger nodes if there are still smaller nodes in the network. And then there's this mining process. Ironically, of all the inefficiencies I talked about, this is what people concentrate on: this mining work, which is effectively just busy work in order to randomly select one of these block producers, without much synchronization going on. The first solution for this mining process was to just stop doing it. The idea came up around 2011, called proof of stake, where we say: instead of having this mining process, we have a different way of synchronizing, and in order to prevent people from just generating invalid blocks, everybody has to put up some kind of stake, some kind of fidelity bond, that is taken by the network if they are caught doing something that is against the protocol rules. This was later formalized in a few blockchain implementations, for example Algorand or Cardano, where they even have security proofs for the system. But if you look at the system, it pretty much works the same as the proof-of-work version of Bitcoin. We have a node that generates the transaction, and the transaction is broadcast across the network. But then, instead of mining something, the algorithm in the proof-of-stake system has selected one of the block proposers in advance. The block proposer knows that he is the block proposer, so he will generate a block and broadcast it over the network, and then everybody can verify that, okay, this block is valid and this block was created by the designated block proposer.
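The in-advance proposer selection can be sketched as stake-weighted sampling from a shared seed. This is a toy model: real protocols like Algorand use a verifiable random function so the choice is unpredictable in advance yet publicly checkable afterwards; `random.Random` merely stands in for that here:

```python
# Toy stake-weighted proposer selection: instead of racing on hashes,
# the protocol picks the next block proposer with probability
# proportional to stake, from a seed every node shares.
import random

def select_proposer(stakes, seed):
    """Pick one validator, weighted by stake, from a shared seed."""
    rng = random.Random(seed)
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"alice": 60, "bob": 30, "carol": 10}

# Every node derives the same proposer from the same shared seed,
# so no busy work is needed to agree on who produces the block:
assert select_proposer(stakes, seed=42) == select_proposer(stakes, seed=42)
```

The key point is that the selection is deterministic given the seed, so all nodes agree on the proposer without any mining race.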
So there's no mining going on, but everything else is still the same, which means that proof of stake can only increase the capacity of a blockchain in a very limited way. The core problems — that we have to distribute the state across the whole network and that everybody has to verify all the transactions — we still have those. There is some improvement on this in so-called asynchronous BFT algorithms; implementations would be, for example, Fantom and Hedera Hashgraph, where a limited amount of parallelization is going on. With these blocks and this mining process, one of the main goals is to order all the transactions into a global ordering. With these asynchronous BFTs, I, as a node, already collect transactions and put them into a local ordering, and I distribute this local ordering. Then there are some clever graph algorithms that combine these local orderings into a global ordering that is the same across all the nodes. But this is a very limited amount of asynchrony; you still have blocks, and you still have the network pretty much running in lockstep. Okay, so in order to make the system really more efficient, we have to essentially use the blockchain less and build stuff on top of the blockchain — which is collectively known as layer-two solutions. The first layer-two solution starts with a very simple question: if a tree falls in the forest and nobody's around to hear it, does it make a sound? Everyone can think about this for a second. The blockchain interpretation of this would be: if two people exchange money, why does the whole world need to know? If you look at the physical world, when two people exchange cash, they don't have to broadcast this fact to everybody else, right? Maybe we can learn something from that in the blockchain world.
The first real implementation of this was the Lightning Network for Bitcoin, where we limit the number of transactions we put onto the blockchain. So we have here Alice with the fancy glasses and Bob in the cat suit, and the Bitcoin network over there, and they've decided they want to exchange money using the Lightning Network. What they do is create something called a funding transaction, where they both put up money — for example, one Bitcoin each — into some kind of shared pot. This funding transaction has to be broadcast to the network, so we do pay gas fees for that. But what they then can do is exchange the pot of money between them. Both of them put in one Bitcoin, and then one party can pay the other party from this pot. For each of these payments, we create a commitment transaction that allows both parties to receive their share of the money out of this pot — but we don't broadcast it to the Bitcoin network; we just keep it to ourselves. Only when we want to close the channel do we broadcast the last commitment transaction to the blockchain and tell it: okay, this is how the state is after our last commitment transaction; this is the last known state of the world. This way, we can have exchanges here that don't have to be broadcast to the blockchain. There can be a billion different commitment transactions, but we only broadcast the last one. So we only have two fees to pay: once to set up this channel, as they call it, and once to close it. I would like to briefly tell you how this works, because I think it's pretty neat. The basic idea is that you have this funding transaction that we start off with, and then we have these commitment transactions that we create, and these spend from the funding transaction. So everybody puts one Bitcoin in, and we create commitment transactions — there are two of them.
One of them is Alice's commitment transaction, which she can send to the blockchain, and one is for Bob. Alice's transaction just says: Bob gets his money back immediately, but Alice has to wait some amount of time, measured in blocks produced. And then there's the mirror transaction for Bob — the transaction Bob can send to the blockchain — where Bob has to wait for his money, but Alice gets her money immediately. In order to spend from this pot, you just create a new pair of commitment transactions. Let's say we pay 0.2 Bitcoin from Alice to Bob; then we create a new pair of these transactions, in this case one where Alice has 0.8 and Bob gets 1.2. Again, there are pairs of them, and each of those can close the channel, but depending on who sends it, that party has to wait and the other party gets their money immediately. We also create a revocation transaction for the previous commitment transaction — I will explain this in a bit. So in the best case, if everybody is correctly executing the protocol, we have this funding transaction that is committed to the blockchain, where both put in one Bitcoin, and then later we close the channel with the last commitment transaction and the updated balances. And it doesn't really matter whether Alice or Bob sends the closing transaction. But now, of course, Alice could try to defraud Bob by sending the old commitment transaction, where she still has one Bitcoin instead of 0.8. This is what the revocation transaction is for. Bob can monitor the blockchain and see: hey, wait a second, Alice sent this revoked commitment transaction to the blockchain. And then he can use the revocation transaction to take Alice's money.
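Stripped of signatures, timelocks, and revocation keys, the commitment scheme boils down to simple bookkeeping: only the funding and closing transactions touch the chain, and every intermediate balance update is local. A toy sketch (balances in satoshis to keep the arithmetic exact):

```python
# Toy model of a Lightning-style payment channel: intermediate
# updates are free and local; only open and close hit the chain.

class Channel:
    def __init__(self, alice_sats, bob_sats):
        # Funding transaction: one on-chain fee to open the channel.
        self.balances = {"alice": alice_sats, "bob": bob_sats}
        self.revoked = []      # superseded commitment states
        self.onchain_txs = 1   # the funding transaction

    def pay(self, frm, to, amount):
        """Create a new commitment off-chain; revoke the old one."""
        assert self.balances[frm] >= amount
        self.revoked.append(dict(self.balances))
        self.balances[frm] -= amount
        self.balances[to] += amount

    def close(self):
        """Broadcast only the latest commitment: one more on-chain fee."""
        self.onchain_txs += 1
        return self.balances

# Both fund the channel with 1 BTC (100 million satoshis each):
ch = Channel(alice_sats=100_000_000, bob_sats=100_000_000)
ch.pay("alice", "bob", 20_000_000)   # 0.2 BTC, off-chain, no fee
ch.pay("bob", "alice", 5_000_000)    # off-chain, no fee
assert ch.close() == {"alice": 85_000_000, "bob": 115_000_000}
assert ch.onchain_txs == 2           # fees paid only to open and close
```

The `revoked` list is what the punishment mechanism protects: if either party broadcasts one of those stale states, the other can claim the whole pot.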
Because Alice still has to wait a few blocks before she gets the money from her commitment transaction, Bob can use the revocation transaction to take the money before that happens, effectively punishing Alice for violating the protocol. So, in conclusion, the basic idea is: first, we limit the number of transactions we send to the blockchain. Instead, we have a local history of what has happened, which we can later use to update the blockchain — or, in case there's some kind of fraud, to prove that we are the party that followed the protocol correctly and the other party violated it. This kind of mirrors the clearing and settlement system we have in TradFi. When you pay for something with a credit card, it first goes through clearing, which takes a few seconds. But for the transaction to settle — for the money to actually go from your bank account to the vendor's bank account — takes days to weeks, and during this time the transaction just exists as a database entry in the clearing house. So that's all cool, right? We have solved the problem. The problem is that it is unclear how to extend this to something that is not a payment. The real benefit of blockchain is that you can have general computation on it — for example, Ethereum as a smart contract platform. How to do this kind of system with a general program instead of just a payment was an open question for a long time. That's where the next idea comes in: first as pegged sidechains, later refined into plasma chains, where the idea is that you just have multiple blockchains with different tradeoffs. In this case, we have Ethereum, which is our slow and expensive but very decentralized and secure chain, called the root chain in this context. And then we have Polygon here as an example — there are other implementations, but Polygon is the canonical example of this.
That is a smaller chain with less decentralization and lower gas fees, but presumably also lower security. The idea is that the plasma chain receives the state from the root chain, so the plasma chain always knows the latest block in the root chain. Then the plasma chain runs its own consensus algorithm, which is tuned towards speed and low fees. For example, you could just have lower decentralization — fewer nodes with higher capacity — or you could have something like proof of authority, where we say all the participants in this smaller blockchain are known and we trust them, so we can lower the verification requirements for transactions, and not everybody can join that network. It doesn't really matter what it is; it just has to be faster than what we do on the root chain. Then we commit the blocks we create on this plasma chain back to the root chain as some kind of commitment: okay, this is a new block, we put it into the root chain, and then this block is settled. There's no verification done here, really; it's just put in there and we say, okay, that's the latest state. So, the basic idea: we have a high-security, high-decentralization, high-fee system here, the root chain, and a lower-security, lower-decentralization, but higher-throughput and lower-fee chain on the other side. You can use the plasma chain to pay for your coffee, and you can use the root chain to pay for your house — you have these different trade-offs and can decide which one is the better protocol to use. The whole reason we connect these chains together is so we can move assets between them. So Bob can deposit some kind of asset into a smart contract on the root chain, and this state update is sent to the plasma chain.
The plasma chain sees that somebody has deposited something, and then the plasma chain can credit you that asset, or a mirrored version of it, on the plasma chain. You can then use that, and the plasma chain can later burn it — send it to some address for which no private key is known. Then a proof can be submitted to the root chain that, yes, this has been burned on the plasma chain, so you can retrieve it from the smart contract on the root chain. Okay, so this allows us to have different blockchains with different trade-offs, but it doesn't solve the core problem. We still have these blockchains; we still do redundant computation; we still distribute the whole state across the whole network. The core inefficiencies are not really solved. The next idea was to actually do things off the chain, using something called a rollup. There are two types of rollups, the optimistic ones and the zero-knowledge ones. Because Arbitrum, one of our sponsors here, is an optimistic rollup and they paid for my breakfast, we're going to start with them. In the optimistic rollup version, we again have a blockchain that we assume is safe and secure, and then we have off-chain components. An off-chain component here means it's just some server running somewhere, run by some guy we don't really trust. We know it's there and we can verify what it does, but we don't have to trust it. And there are two of them: there's the sequencer, sometimes also called the batcher, and then there's the state transition function, or proposer. The basic idea is that in order to use the rollup, you send your transactions to the sequencer, and the sequencer batches them up and commits them as one batch into the underlying blockchain. And that's already the first way you save on transaction fees: instead of everybody submitting their own transaction to the blockchain and paying the fee, you do it once, in one batch.
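The fee saving from batching is easy to see with back-of-the-envelope numbers. The figures below are made up for illustration (real gas accounting is more involved); the point is only that the fixed per-transaction overhead gets shared:

```python
# Toy fee model: a sequencer batches many users' transactions into
# one on-chain transaction, so the fixed base fee is paid once.

BASE_FEE = 21000     # fixed per-transaction overhead in gas (illustrative)
PER_TX_DATA = 500    # gas for one user's calldata inside a batch (illustrative)

def solo_cost(n_txs):
    """Everyone submits directly: n separate base fees."""
    return n_txs * (BASE_FEE + PER_TX_DATA)

def batched_cost(n_txs):
    """One batch carries everyone's data: a single base fee."""
    return BASE_FEE + n_txs * PER_TX_DATA

n = 100
assert solo_cost(n) == 2_150_000
assert batched_cost(n) == 71_000   # per-user cost drops 21500 -> 710 gas
```

With these toy numbers, the per-user cost falls by a factor of about 30 — which is exactly the lever rollups pull.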
Then we have the state transition function that reads these committed transactions plus the previous state and computes the next state. And again, the state is committed to the underlying blockchain. Of course, we don't want to have to trust these systems. So if the sequencer, for example, is down, or it just refuses to accept your transaction, everybody has the option to submit their transaction directly to the smart contract on the blockchain. This obviously is more expensive, because you have to pay the gas fees in this case, but you cannot be censored by the sequencer. Okay, but what about the state transition function? This is way more important. What if this thing does something wrong and, for example, includes my transaction but doesn't execute it, or executes it wrongly? If you look at the state transition function, it's pretty simple. We have the list of transactions that goes in as an input, we have the previous state that goes in, and then we have the function in the middle, which is known — everybody knows the state transition function, it's part of the system — and it's deterministic. So if I have the feeling that the off-chain version of the state transition function did something wrong, I can challenge it. I have to put up a bond first, to make the system not abusable, but then I can say: wait a second, this last state that you produced, I think this is wrong; this should be the state — and put up my alternative interpretation of the world. And then what we do is effectively run the state transition function on-chain. The whole thing that has been done off-chain for efficiency reasons, we do it once on-chain, for this particular state transition. This is a gross simplification of what we actually do.
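Conceptually, the challenge works because the transition function is deterministic and known to everyone: the chain can re-run it once and compare results. A toy sketch, with hypothetical names and simple balance transfers standing in for general execution:

```python
# Toy dispute resolution for an optimistic rollup: re-run the known,
# deterministic state transition function and compare to the claim.

def transition(state, txs):
    """Deterministic state transition: apply balance transfers."""
    new_state = dict(state)
    for frm, to, amount in txs:
        new_state[frm] -= amount
        new_state[to] = new_state.get(to, 0) + amount
    return new_state

def resolve_challenge(prev_state, txs, claimed_state):
    """On-chain arbiter: recompute once, accept only a matching claim."""
    return transition(prev_state, txs) == claimed_state

prev = {"alice": 10, "bob": 0}
txs = [("alice", "bob", 3)]
honest = {"alice": 7, "bob": 3}
fraudulent = {"alice": 7, "bob": 300}
assert resolve_challenge(prev, txs, honest)
assert not resolve_challenge(prev, txs, fraudulent)
```

Determinism is what makes the one-shot on-chain re-execution a fair arbiter: same inputs, same function, same result for everyone.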
In reality, there's a game that runs before that, where I and the off-chain state transition function figure out exactly which instruction we disagree on, and we only run this one instruction on the blockchain. But this is just an optimization; from a conceptual point of view, you can imagine that we run this transition function on-chain once, because the assumption is that running it on-chain is something nobody can manipulate. And once we've decided who was right, we can continue with the off-chain computation. So these are the optimistic rollups: we assume the off-chain components are truthful, that they do everything right, but we leave the door open to challenge them. The other implementation would be what you might call a pessimistic rollup, where we eagerly verify each of these new states. I'm only going to explain this very briefly, because yesterday we had the talk from Leo, where they explained ZK in great detail — so I guess you guys are all experts now. The basic idea is that we still have a sequencer, which is also off-chain, but instead of this block proposer, we have the prover. The prover gets the transaction set from the sequencer and does the transition, computes the next state, but it also computes a cryptographic proof. This proof can be verified to make sure that the prover has followed the state transition protocol correctly. The smart contract can verify that, and if the proof is correct, it accepts the next state and everything's fine. Okay, so why don't we always do that? Isn't that way easier than this on-chain fraud-proving with the state transition function? Why do we have optimistic rollups at all? Well, collecting enough transactions for the next state takes a sequencer maybe a minute. Creating this cryptographic proof takes an hour or two. It's very computationally intensive.
We can mitigate this a bit by just using more provers and parallelizing the operation, but it's still a very expensive operation, and this is why, at least for the ZK protocols we have now, we are just on the edge of something that's actually practical. There's a lot of research going into these things, but they seem to be the best way forward here. Okay, last one: data availability. If you look back at the rollups, they use the underlying chain for two things. We use it for execution — doing this fraud-proof thing or the ZK proof verification — and for bulk data storage, for the list of transactions: we dump them in there, not really to store them, but to have a commitment to the transactions and a place to keep them where they cannot be deleted later, because if you want to do this fraud-proof thing, you have to be able to retrieve the previous data. And Ethereum wasn't really built to be bulk data storage. I will skip this to save a bit of time, but the basic idea is that there have been some improvements — around mid-March this year, a new transaction type (the blob-carrying transactions from EIP-4844) made it a bit cheaper to store this data. Instead of packing the data into the transaction, we just attach it, and only have some commitments in the transaction, which makes the whole thing roughly one order of magnitude cheaper. Well, at least it did until a few days ago. This is the graph of the fees we have to pay for this new transaction type. It stayed at effectively zero for a long time, but then people figured out, hey, we can use this for other stuff, and suddenly we're on an upward slope for the blob fees too. So this doesn't seem to be a long-term solution. What is probably more of a long-term solution is something called a data availability layer, where we have a dedicated blockchain that is only there for data storage.
Instead of having a smart contract platform in there and being able to execute whatever program you want, these kinds of blockchains only receive messages. Messages have a namespace — this is color-coded here — and they have some uninterpreted bulk data attached to them that the blockchain doesn't really care about. What we can later do is use that as data storage for these rollups. So in this optimistic rollup, for example, the sequencer would just dump the transaction list onto this data availability layer, and only the important things, like the next state, would be put onto the Ethereum blockchain, saving a lot of transaction fees and making the system a lot cheaper. All right, in conclusion. We started off improving on the consensus algorithm itself, going from proof of work to proof of stake, trying to make the proof-of-stake mechanism as efficient as possible. Later we went to more chains, with different trade-offs for different chains, trying to make the system a bit more efficient that way. And then we moved on to off-chain components — this batching and delayed consensus mechanism that you have with rollups. For the future, it seems that we're going towards the ZK solutions, because they solve two big problems. First, we can generate a proof that some off-chain component followed the protocol correctly, and do only the verification on-chain. And second, because of the zero-knowledge properties, you can have inputs into your smart contracts that you don't have to reveal — you can even process private data on the blockchain: even though everybody can see the state, not everyone can see all your inputs. And that's it. Thank you very much. 10 seconds for questions. Any questions for Kai? Yes. Okay, thank you very much, Kai, for the session. We'll move to the next session. Yes.
Thank you so much. Today's speaker is Mohit Bhatt, who is the lead blockchain engineer at Singularity. Yesterday we met at the community meet. Mohit Bhatt is a distinguished full-stack blockchain engineer with a rich background in the blockchain sector, currently the lead blockchain engineer at Singularity, Singapore. His extensive involvement in open source is marked by significant contributions as a project admin and mentor at the C2SA organization for the Google Summer of Code, along with participation in the Summer of Bitcoin, the GitHub externship, and the Ethereum Fellowship. As a certified Ethereum developer, Mohit has a commendable track record of success in over 30 hackathons and awards, showing his expertise and innovative approach in the field. His passion for blockchain technology extends to heavy contributions to both Web2 and Web3 open-source projects, highlighting his commitment to the advancement and development of blockchain technologies. Welcome, Mohit, for the talk. Yeah, it's yours. Thank you. Hello, everyone. Welcome to the session on decoding open source in Web3: leveraging open-source solutions for Web3 challenges. I am Mohit, and I am a project admin and mentor at Google Summer of Code for the C2SA organization, and I'm also the lead blockchain engineer at Singularity. In the past, I have been part of GSoC, the Google Summer of Code, as a student — four, five years back — and now I'm an admin. I'm also part of Summer of Bitcoin, which is for Bitcoin fellowships and open-source programs, and of course the GitHub externships as well. So let's get started with the session. The first question that comes to everyone's mind is: why open source in Web3? I know there are a lot of students here, and they ask me this question: why should we do open source? Some people say we should do open source in Web2, some say we should do open source in Web3.
But the main question is: what are the benefits? Some people do this out of passion, but most of you want a concrete reason to do it. So the first thing is: Web3 is inherently open source. What do I mean by this? What exactly is Web3? Web3 is basically built on blockchain: a decentralized database, a ledger, which is controlled by a set of nodes, computers. In layman's terms, Web3 just means there are a lot of computers around the world storing data, and these computers are running a client software which is written by a set of people — random people around the world. Who are they? They are the open-source developers. You just have to run a client software to connect to the network, and these clients are all maintained by open-source developers. You can go to the Bitcoin GitHub repo, you can go to the Ethereum GitHub repo — they are all maintained by open-source developers. There is no particular organization controlling that source code. Now, talking about dApps — I want to take this first because it's very important to know, when you are contributing to some application as a developer, what the pieces of that technology are. In Web2, we had storage, a back end, and a front end. The storage was MongoDB or Firebase, where you store the data. Then there was the back end, written in Node, Python, and so on. And then there was the front end. These are the applications you might be building right now as a web developer, as I have. But in Web3, the new thing is the blockchain. The blockchain is the storage layer where you store the data. And then come the smart contracts. Smart contracts act as the back end of the whole application that you build.
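To make the "smart contracts are the back end" point concrete, here is a toy Python model of what a minimal token contract's storage and logic look like. Real contracts would be written in Solidity or Vyper and deployed on-chain; this sketch just mirrors the shape — persistent storage plus methods that validate and mutate it:

```python
# Toy model of a minimal token smart contract: the balances mapping
# plays the role of on-chain storage, and transfer() is the
# "back end" logic the contract exposes.

class ToyToken:
    def __init__(self, supply, owner):
        self.balances = {owner: supply}   # the "storage layer"

    def transfer(self, sender, to, amount):
        """Back-end logic: validate, then update the stored state."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

token = ToyToken(supply=1000, owner="alice")
token.transfer("alice", "bob", 250)
assert token.balances == {"alice": 750, "bob": 250}
```

In a real dApp, the front end would call `transfer` through a node provider rather than in-process, but the division of labor is the same: the contract holds the data and enforces the rules.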
So the blockchain replaces the storage layer, and the back end is replaced by smart contracts, which are written in Solidity, Vyper, and Rust. These are the main languages you can contribute in. If you want to contribute as a smart contract developer, or as an auditor on the security side, you need to learn Solidity, Vyper, and Rust. If you want to work on the protocol itself — on the blockchain's own source code — those clients are written in C++, Python, Java, and, again, mostly Rust. Then comes the node provider. A node provider is simply the way you connect to the blockchain so you can build your application. And then there is the front end, which you build as usual. If you are a front-end developer, you are already most of the way to being a blockchain developer: you just need to learn the blockchain-specific parts and you are ready to contribute in this ecosystem. Now, the second reason for open source in Web3: building for the community. Personally, when I am building something and I find there is no tool for a particular task, I build a library for it. That is what I mean by building for the community. If you have seen how Web3 works and found that something is missing — there is no tool for a task, or there is a scalability problem or some other issue — you can build a tool for it, turn it into a project, and publish it to the community. Put the repo on GitHub, share it on Twitter and LinkedIn, and it can have a huge impact.
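The stack just described — a front end talking to smart contracts through a node provider — can be sketched with a plain JSON-RPC call, which is the interface every Ethereum node provider exposes. This is a minimal illustrative sketch; the endpoint URL is a placeholder, not a real provider.

```javascript
// Minimal sketch: a front end reaches the chain through a node
// provider's JSON-RPC endpoint. The URL below is a placeholder;
// any provider (your own node, a hosted one) speaks this protocol.
let nextId = 0;

// Build a standard JSON-RPC 2.0 request body for an Ethereum method.
function buildRpcRequest(method, params = []) {
  return { jsonrpc: "2.0", id: ++nextId, method, params };
}

// Send it to a node provider (network call; sketch only).
async function rpcCall(endpoint, method, params = []) {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRpcRequest(method, params)),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result;
}

// Example (not executed here): latest block number, returned as hex.
// rpcCall("https://example-node-provider/rpc", "eth_blockNumber")
//   .then((hex) => console.log(parseInt(hex, 16)));
```

Everything above the front end — wallets, SDKs, dApp frameworks — is ultimately built on calls like this, which is why the node provider sits between your application and the chain.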
I have personally built libraries, based on the IPFS network, that are now used by a lot of people. How did that happen? I simply noticed there was no library for a particular task, I built it, and now the community knows my work. Another reason is building a portfolio. Many students ask me: we have learned Web3 — what next? How do we find projects? I don't get ideas. My answer is: go into open-source development. I will talk about platforms where you can contribute — you can contribute to our organization as well, which is an open-source organization. Contribute to these projects and you can mention all of it on your resume, and in Web3 being an open-source developer carries great weight. A lot of hiring happens on the basis of open-source contributions: if you have already contributed to a company's code base, they know you understand it and they will often hire you directly. So it is a great way to build a portfolio, and it helps a lot if you are looking for a job. Then come networking, grants, funding, and airdrops. This is another huge benefit — in Web3, maybe the best part of open-source development. If you have built a useful tool, you can get a grant for it. There is the Ethereum Foundation here, and many other foundations as well. If you have built something genuinely good for the community as an open-source public good, you can take a grant and keep the project going for a long time. And there are funding opportunities on top of that.
If your project is very good, you can get funding and support in every direction; in Web3 there is a lot of support for open-source development. Sometimes you get airdrops as well. What are airdrops? If you have contributed to a good code base, you receive some of that blockchain's tokens. Recently, for example, a lot of GitHub contributors in the Starknet ecosystem received an airdrop worth roughly $10,000 to $20,000 just for their GitHub contributions. That is another exciting aspect of Web3. And of course open source is challenging and exciting in itself: you will learn a great deal — how to write better code, how to work on serious code bases — so it is a great way to grow your knowledge. Those were the benefits for students and developers. Now, for organizations: as I said, Web3 is about 90 to 95% open source by nature, but if your company is not yet open source, here is why you should consider it. The first benefit is community, collaboration, and innovation. Normally you might have five or six developers working on your product, but if you open it up to the whole world, you build a community at the same time: contributors explain the product to others, they give word-of-mouth marketing for your product, and it allows other organizations to work directly with your ecosystem. You get community support, customer support, and ecosystem support very fast.
The next point is transparency and trust. In Web3 it is very important that your product has a trust factor in the market. It is not like Web2, where you can publish a closed-source product and be done. If your code is not open to the public, it is very difficult to generate that trust. I, for one, will not use a token or a website that doesn't show me exactly what it is doing behind the scenes. You have to make your code base, and especially your contracts, open source, because that is how Web3 users come to trust you: the whole code base is in the open. It also helps with auditing and review. You know there are a lot of hacks in Web3. If your code is open and there are many contributors around the world, they will report bugs to you — it feeds naturally into bug bounties, because people find the issues in a smart contract or a website and contribute fixes. And of course there are cost savings and flexibility. Open source reduces your costs: you don't have to hire a big team, which matters a lot in Web3 — we don't want to burn money, we want to save it and build something better for the future. And you gain flexibility, because everyone can contribute to your code base. The next part is how to start contributing to Web3 open source. This is specifically for students and developers. The first thing I suggest: make sure you have a good grasp of the technology and the programming language you want to work with.
Many people, right after a session like this, will go and start contributing to some README — that is not the good way. First learn what blockchain actually is; it will not take more than one or two months. Learn Solidity and the basics of the Bitcoin and Ethereum networks — you don't necessarily need Rust at the start. There are plenty of resources on the internet. Once you have done that, go to the platforms and start contributing. I will tell you about platforms that list open issues where you get paid in tokens for solving them — those platforms are the best part. Beyond those platforms, there are the GitHub repos of Bitcoin, Ethereum, Polygon, or any other blockchain. Go there, understand the project, and find the issues — you will find "good first issue" labels on these repositories. Then write the code, get it reviewed, and test it. If it is a big issue you are solving, try to do a security check and audit. And once your work is merged, share it in public. Sharing in public is very important: people will know you contributed something good, it helps you as a developer, and it helps the organization too, because it becomes known that contributors around the world are working on it. Now, the next part is my own SCoRe Lab organization, which is part of Google Summer of Code. We have been an open-source organization under Google Summer of Code for a long time. We now have 100+ contributors around the world and 50+ projects across various domains — AI, Web3, cloud, web dev — so if you are a web developer, you can contribute there too.
We also have 20+ research publications so far. And as I said, we are part of Google Summer of Code — this year's dates have already passed, but you can contribute for next year, and it is a great way to start your open-source journey. Some of our projects: there is Scanit, which is based on Kubernetes; LabelLab, an image analysis and classification platform; and OpenMF, a mobile forensics tool. If any of these projects interests you, you can contribute to it. We have a lot of projects — we essentially build tools for the community and for developers. It is not very hard to contribute: some projects are big, but most are quite beginner-friendly, and you could start contributing tomorrow. If you are already a web developer, you can contribute to survey six, which has front-end components in React, Next.js, and similar stacks. In Web3 and blockchain, we have the NFT Toolbox project and the Web3Stash library. The NFT Toolbox project has been running for the last two years, and Web3Stash started in a hackathon and is now an open-source project. You can also propose your own open-source projects to our organization — we would love to help you grow your project with us — and there are more projects coming over the next year or two. A little about the NFT Toolbox project. It is an npm package for seamless integration of NFT-related functionality into Web2 products. Many Web2 organizations — airlines, for example — want to issue coupons as NFTs, or leverage Web3 functionality and issue tokens or NFTs for marketing, but they have no core tool for Web3.
So we built this library where you don't have to learn anything about NFTs. You just instantiate the library, and the whole engine works behind the scenes. You can generate images automatically, feed in a batch of images, and it will upload them to file storage systems like IPFS and Arweave, and then create the NFTs on whichever blockchain you want. We are adding support for Solana, NEAR, and other chains, and we also provide batch-minting support with efficient algorithms. It is an easy-to-use npm library you can leverage. The next part is the Web3Stash project — my personal favorite. Web3Stash is about the storage layer. You know there are service providers like IPFS pinning services, nft.storage, and web3.storage that let you upload data to decentralized storage networks. The problem with all of them is that each has its own documentation and a totally different syntax for the same operation. To solve that, we made a common library: you just provide the service name — "pinata" if you want to use Pinata, "nft.storage" if you want nft.storage — and in the config you pass the API key and token. That's it: a single line is all you need to change to switch between providers, and then you call the same functions on every provider — service.uploadJson, service.uploadImage, service.uploadVideo.
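The unified-adapter idea behind a library like this can be sketched in a few lines. Note this is a toy illustration, not Web3Stash's actual API: the adapter classes, method names, and returned values here are invented for the example, and the real adapters would make HTTP calls to the providers.

```javascript
// Toy sketch of a unified storage-adapter layer.
// The service names, methods, and return values are illustrative only.
class PinataAdapter {
  constructor({ apiKey, apiSecret }) { this.apiKey = apiKey; this.apiSecret = apiSecret; }
  async uploadJson(obj) {
    // A real adapter would POST to Pinata's pinning API here.
    return { service: "pinata", cid: "bafy...pinata" };
  }
}

class NftStorageAdapter {
  constructor({ token }) { this.token = token; }
  async uploadJson(obj) {
    // A real adapter would call the nft.storage HTTP API here.
    return { service: "nft.storage", cid: "bafy...nftstorage" };
  }
}

const ADAPTERS = { pinata: PinataAdapter, "nft.storage": NftStorageAdapter };

// Single entry point: switching providers is a one-line change.
function createStorage(serviceName, config) {
  const Adapter = ADAPTERS[serviceName];
  if (!Adapter) throw new Error(`unknown service: ${serviceName}`);
  return new Adapter(config);
}

// const service = createStorage("pinata", { apiKey: "...", apiSecret: "..." });
// await service.uploadJson({ name: "my NFT metadata" });
```

The design point is that every provider-specific quirk lives inside one adapter class, so application code only ever sees the common interface.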
We have also added support for chunked uploads of big videos and photos, which these service providers don't yet offer, so you get that functionality if you want to upload directly from AWS or elsewhere. And you can contribute to this project — it is written in JavaScript. You do need some knowledge of blockchain providers and decentralized storage networks, but it is approachable for beginners as well. Now, about the NodeCloud project. This one is not Web3, but I want to mention it because I started my open-source journey with it. NodeCloud is a library for cloud services. You have probably used AWS, Google Cloud, and the others. We built a library — somewhat like Terraform — through which you can control all the services in your cloud, and it is common across all the cloud providers. So if you want to migrate data from Google Cloud to AWS, you can do that easily with this library. If you want to use Azure Cosmos DB but run your application on AWS EC2, you can do that with NodeCloud too. And the best part of the project is the class generator module. What do I mean by that? In these cloud SDKs there are a lot of services, each with a lot of classes and functions, and rewriting all of them manually in your own library would be very difficult. So we made a class generator module — a bit of a magic module. You write a plugin consisting of a transformer, a parser, and an extractor.
Once that plugin is written, the module goes over the cloud SDK, automatically finds the type definition files in the project, parses them into an abstract syntax tree, and extracts the data. Once the extraction is done, it automatically generates classes for all the services. So you don't have to write all those classes manually in your SDK: you just create the transformer/parser plugin, and whenever new services are added to the cloud SDK, it parses the new files and converts them into the JavaScript classes that the NodeCloud library needs. That is the unique thing we developed in our organization; it is used by a lot of people now, including other open-source organizations, as an important building block. If you are interested in contributing through Google Summer of Code and want to join the SCoRe Lab channel, you can scan this — it is a Slack channel where you will learn everything, and all the material is there. And when you are contributing to Web3 open source, it is very important to build secure dApps, so let me go through this quickly. Whether you contribute to SCoRe Lab or any open-source organization, use well-tested open-source libraries like OpenZeppelin — don't just copy from ChatGPT or random code on the internet, because that is how a lot of hacks happen. Use open-source tools like Foundry and Hardhat to battle-test your code, and use proper linting and security plugins.
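To make the earlier class-generator pipeline (find type definitions, parse, extract, emit classes) concrete, here is a deliberately tiny sketch. NodeCloud's real module parses TypeScript declaration files into a proper AST; this version uses a simplified one-line-per-method format and a regex, purely to illustrate the extract-then-generate flow.

```javascript
// Toy class generator: turns a simplified "type definition" text
// into a JS class with one wrapper method per declared function.

// 1) Extract: pull (service, method) pairs out of the definition text.
function extractMethods(defText) {
  const methods = [];
  for (const line of defText.split("\n")) {
    const m = line.trim().match(/^(\w+)\.(\w+)\(\)$/); // e.g. "ec2.start()"
    if (m) methods.push({ service: m[1], method: m[2] });
  }
  return methods;
}

// 2) Generate: build a class whose methods delegate to the raw SDK.
function generateServiceClass(serviceName, defText, sdk) {
  const cls = class {};
  Object.defineProperty(cls, "name", { value: serviceName });
  for (const { service, method } of extractMethods(defText)) {
    if (service !== serviceName) continue;
    cls.prototype[method] = (...args) => sdk[service][method](...args);
  }
  return cls;
}

// Usage: a fake SDK stands in for a real cloud SDK.
const fakeSdk = { ec2: { start: () => "started", stop: () => "stopped" } };
const defs = "ec2.start()\nec2.stop()";
const Ec2 = generateServiceClass("ec2", defs, fakeSdk);
// new Ec2().start() → "started"
```

The payoff is the one described in the talk: when the SDK gains new services, rerunning the generator picks them up without anyone hand-writing wrapper classes.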
There are a lot of VS Code extensions available, and always write scripts and tests for the features you build. Don't use the Remix IDE for big projects, because it is hard to test there, and keep learning, since Web3 advances very fast. As time is running out, let me quickly go through the platforms where you can start contributing. One is Superteam Earn, for Solana projects: a lot of bounties are listed there that you can work on and get paid for. Then there is BuidlGuidl — there were several workshops on it yesterday. There is the OnlyDust platform, for Starknet: if you are a Cairo or Rust person, you can contribute through OnlyDust. Then there is Code4rena, for those interested in security: if you want to audit smart contracts, you can use Code4rena. There is learnweb3.io — if somebody wants to start learning Web3, that is the best place. Then we have Dework for DAOs and decentralized work, and Bountycaster, which is new and lists bounties. And if you want to know about the fellowships and open-source programs currently running, here are some of them — you can take a picture of this slide. They include Google Summer of Code, Summer of Bitcoin, the MLH Web3 Fellowship, and other protocol-level fellowships. As a student you can contribute to these, and as a professional developer you can take part in all these open-source programs too. I would just conclude by saying: do try open-source contribution at least once. It is a great way to start building, whether you are a student or a professional, and you will learn a lot.
Even if you don't become an open-source developer for life, it still gives you a lot of learning: how to write better code, how to work in a team, how to find issues and solve them — things that are very important for being a good developer in the market. So that's it, thank you. If you have any questions, I'm happy to answer, and you can scan this if you want to connect with me. Thank you. Thank you, Mohit — that was a wonderful talk with all those insights about Web3. As this is an international conference, we have a translator as well. Sorry, I have a cough, so I have to wear a mask. Thank you very much, Mohit. And now I welcome our next speaker, Andres Berlin — I hope I pronounce it perfectly. Andres Berlin is the founder of Deepwork and a creator of Web3-native organizational design at Deepwork. He has been involved in the design of over 200 different products, services, and organizations, ranging from open-source teams and startups to enterprises and Fortune 500 companies. Today he stands at the helm of Deepwork as its visionary CEO, driving the company forward through innovative processes aimed at saving time and designing products for a transparent economy. His work focuses on creating infrastructure grounded in cybernetic principles and complex-systems design, helping teams craft, design, and ship next-generation products while supporting creatives in leading healthier and smarter working lives. There is much more to say about Andres Berlin, so I'll go to his academic profile.
Andres's academic background is equally impressive, with studies in computer science and psychology at the Technical University of Darmstadt, complemented by a focus on time-based media at Hochschule Mainz University of Applied Sciences. This unique combination of disciplines underpins his holistic approach to design, technology, and human interaction, making him a multifaceted leader in the tech and design industries. His talk is on building sustainable infrastructure with real-world impact. So I welcome you, Andres, for the talk. Today I want to share with you an evidence-based approach to dramatically increase the impact of the projects you're working on, and also to make money with the projects you're passionate about. Is everything okay here? Okay, cool. Over the last six years we've been working with a wide range of startups on their brands, products, organizations, and sometimes even physical spaces. We follow very diligent market research approaches to ensure high-quality deliverables, and we also give teams go-to-market strategies and recommendations. And we noticed three big opportunities for improvement in most teams. The first is internal cohesion, which means staying focused by being aligned on goals and values and documenting processes. The second is market and user research, which means increasing adoption by consistently interacting with your users and stakeholders and eventually developing a business model. The third is business and treasury management, which means allocating resources and funds in line with your goals and with a shifting macroeconomic environment — what's happening in the economy around your project. Let me first outline the current economic pressure; then I'll go into the core issues of these three areas and give you tangible recommendations for how to increase the impact of your projects in the long run.
Since 2021, investors as well as founders have been under pretty high pressure. Investors and VCs were able to pay back only a seventh of their investments, and now their limited partners are asking for their money back. There is almost 300 billion dollars of undeployed capital; some VCs are trying to return it to their limited partners while others are focusing on their existing teams. Sorry, this isn't working ideally. At the same time, the social media landscape is heating up again and attracting a lot of speculators, traders, and less experienced contributors and developers, which increases the noise. Historically — at least in the last bull market — that made it almost impossible to raise funds if you wanted to stay aligned with your values, and the projects that did were at risk of burnout. But luckily everybody has been building in public, so we can learn from the wild world of the Web3 space and apply those insights to the projects you're building. The approach for most teams looks about like this: the protocol layer is the underlying technical foundation. Most teams start by doing technical research, designing incentive mechanisms, and raising funds in order to hire a team; then they build out applications and forcefully try to drive user adoption in order to ostensibly satisfy their investors. Keep this diagram in mind, because most well-established infrastructure and protocols — HTTP, SSL, or pretty much any product you're using on your phone — followed a quite different approach. To make this more relatable and tangible, I'm going to use the example of a toaster. Imagine your product or protocol: the technical infrastructure is the toaster, on top of it you have different features — it can toast different types of bread — and obviously the consumers are the users.
Over the last six years, my team of researchers and consultants has juxtaposed the internal goals of teams with the feedback they received from the users who actually use their products. The first thing we noticed was misalignment on goals and values and a lack of clarity around processes and organizational design — very few teams had actually mapped out what the organization looks like; almost nobody creates an org chart. More successful teams are very granular and specific about their goals, while those whose future is uncertain usually keep their goals vague, reaching for "global adoption" or being "the de facto leader" in a space. In the case of the toaster, think of it as getting together with a group of friends and deciding only that you want to build the best toaster in the world, and then everybody starts chipping in. As a consequence of this misalignment and these vague goals, we noticed a proliferation of internal conflict, blame culture, and finger-pointing, which led to significant delays in development timelines. Many founders frequently restructured by letting go of completely skilled and capable team members after months of misaligned work. A friend of mine who is a lawyer once told me it is actually quite common for managers to fabricate false evidence just to let go of team members — which obviously leads to uncomfortable internal conflict and, in many cases, to expensive lawsuits.
Secondly, almost 50% of users don't actually trust the products they're interacting with — especially relevant for the Web3 space, where people interact with money. That stems from the lack of clear user research processes: either these teams did not follow any user research process, or the one they had was insufficient — asking the wrong questions, or developing without rapid prototyping. As a consequence, public launches failed to get adoption and entire protocols had to be rebuilt from scratch; and because teams user-tested too late, they ran out of money to pay for the mistakes and the changes to the system. Again, in the case of the toaster: imagine you build this toaster, you make it super shiny, it handles many different kinds of bread, it makes cool robot sounds — and after two years of refining the device you notice that the bread-buying market has stopped eating bread or switched to a keto diet, and they actually hate the stupid robot sounds. And lastly, most teams did not have any business management processes and made no intelligent allocation of their funds. Last year we were developing a treasury policy for our own organization, and we were curious how other founders and teams would approach it. Even though many founders mentioned a background in TradFi, their approach to allocating resources was rather inattentive. This is probably the wildest quote I've ever heard — it will make more sense if you know Web3: one team kept part of their money, millions of dollars, in a bank account, devaluing over time, and part in crypto, which is very volatile; payroll was managed by a freelancer; and their token had 3x'd and they weren't sure why, so they were trying to figure it out.
Again, imagine building that device, adding a feature for frying steak that makes sick flames when you turn it on — and then, after building a prototype for two years and burning through all of your money, you realize you can't actually pay for the electricity. There's also the example of a physical co-living space in Portugal that raised millions of dollars to build a high-end concept. They started construction, redesigned their token about four times, and invited a lot of people to be there physically and participate in the construction in exchange for the token. When I spoke to the participants, everybody told me that almost the entire team was burnt out, the token was worthless, and the space didn't even have a water supply. Aside from all the money, time, and attention that was completely wasted, think about what that means for the reputation of the founders and the people involved in the project. So most of these teams spent years building expensive Web3-native infrastructure just to see that nobody was using or trusting their products, and they burned through nearly all of their money — in most cases millions of dollars — completely senselessly, with no result. Now that you more or less understand the current situation of the Web3 space, let me break down what you need to know in order to make confident decisions that lead to self-sustainability and profitability in the long term. Clear alignment on a mission statement and values is probably the most succinct piece of information you need in order to capture the goal. You get the mission statement by aligning on what it is that you're building, how your team is building it, and why. Values shine through in branding, marketing material, and communication style; they connect the audience with your project over the long term and
make them interested in being involved with it. In addition to mission and values, clear documentation of processes in an org chart makes transparent to everybody what work is being done and how it relates to the mission, so people who work on the project feel connected to the mission regardless of their position. You get there by mapping out all the tasks first, then grouping and categorizing them into teams, and then assigning the tasks to the people who enjoy working on them the most — and getting buy-in from the team on the accuracy of the chart. There are of course different management structures and questions of who is responsible for what, but getting buy-in from everybody on the accuracy of the chart excites them to work together in the long term and also creates a foundation for conflict resolution. Just remember that people wear different hats and have roles that shift over time, so it's very important to keep the organizational chart and the processes updated regularly. With regard to user research, there are really just three questions you need to answer and keep track of: who are your users and stakeholders, how do they describe their problem, and how do they describe your solution to their problem? The users and stakeholders are initially assumptions; the users are in most cases people the team can approach directly, and the solution is something your team can prototype and then present to users and stakeholders for feedback. To gather feedback, try not to ask people what they need or want; instead, stick roughly to these guidelines. Ask questions about their past experience — what do they remember, what worked for them and what hasn't. Make questions open-ended, because yes/no questions awkwardly end the conversation. And don't give away any clues, because you want to understand what happens
in the minds of your potential users and you can start broad just by outlining everything and then going more specific about what you really want to know about their experience and obviously avoid confirmation bias so try not to make people excited about your thing don't talk about your project but rather understand their reality and so increasing the level of detail about these questions will give you a very high granularity around the problem space that you're facing and also show you probably unlimited opportunities for your solution design and so the gaining clarity over these three areas relies on constant interaction with the users and stakeholders and understanding how they articulate their experience and then these insights you can take and use in order to strategize around which features to to allocate to allocate your resources next to and prototype and develop and so if you can articulate that back to your audience and that's how you increase interest and drive adoption to your project with regard to the treasury management I just recommend starting with the treasury policy so if you have a bank accounts or wallets just map out what wallets there are who controls which wallets how are they being used and how do you juxtapose that to the economic environment because everything changes in your environment and if you have a certain runway or a certain amount of money then you can intelligently allocate those to our specific resources to generate interest and actually extends your runway significantly the business model is going to develop by paying attention to these two areas so your internal cohesion and in in terms of mission and goals and values and how that relates to your market research and understanding of your users and then optimizing for the highest value density so what is the highest value offering or service that you can provide to the highest paying customers so constant feedback will show you how people value it so if they stop valuing your 
project you can still readjust your strategy and also how close your team is in achieving its mission and usually every single conversation that you have with people will give you very high quality information around a meaningful trajectory of your development so in case of a toaster again like if you frequently interact with with the users you will notice that people stop buying bread early enough and you can build in features for a frying steak instead and so only once you have a business model established you can start mapping out the value flows quantifying those in terms of real money or numbers and then start adjusting incentive mechanisms and once your payments have to cross geographical borders and your internal database is insufficient for the scale and adoption that you need to facilitate only then it makes sense to introduce blockchain and token technologies to actually facilitate the necessary growth so aligning your team on a mission and goals and values keeping track of your users and getting feedback from them as well as paying attention to how you allocate your resources you can do that at any stage so you don't even need a product or any funding at that stage but it will always show you the opportunity for creating high impact and then developing a business model as a consequence so developing this kind of infrastructure or or projects comes as a consequence of frequently interacting with users on the top and creating value locally first and then keeping the quality of their experience high or increasing it while slowly automating out your team's effort reducing your team's cost and then breaking it down to a minimal cost for maintenance which is then what the protocol kind of runs on so if you can keep track of these three areas so aligning your team on a mission and being clear about the organizational structure doing market and user research to understand all your stakeholders and users and keeping track of your treasury and resource allocation 
then you basically have everything you need to know in order to develop long-term sustainable products that become profitable over time that's it i wish you a wonderful day and if you have any questions i'll be around here probably have a nice day thank you thank you very much yes yeah you can do it now again we have a translator who will be doing continuous work thank you very much now we'll call upon the next speaker so is the next speaker here vince and clow yes yeah right hi yes while connecting the laptop i'll just go through the brief bio data of vince and clow so vince and is a tech lead at liquid x studio web three devs community organizer chun in vince and clow is a seasonal tech lead and a web three architect hailing from singapore recognized for his expertise in bridging the gap between strategy and execution within the startup ecosystem fluent in multiple languages vince's career is marked by significant contributions to the development of technology teams companies and visions he currently offers his services as a tech lead at pdr lab leveraging his rich background with notable entitled such as any mocha brands nine gag yahoo and various startup leadership role vince's professional journey is distinguished by his adeptness in technical architecture design web three solution architecture and acting as an interim cto for engineering team coaching and hiring his portfolio showcases a profound ability to manage vendor relations and partnerships alongside evaluating product roadmaps from an engineering standpoint his commitment to testing automation recruitment and cloud cost optimization further illustrates his comprehensive skill set a pivotal figure in the tech community vince has led development initiative resulting in significant achievements such as mocha vs nft project which boosted 5.5 million dollars in sales within its first 24 hours and achieved a market cap exceeding 200 million dollars his advisory role across 300 plus subsidiaries and joint 
ventures have been instrumental in enhancing software architecture smart contract security and tech delivery best expertise an active dev community organizer and lead of facebook dev c hong kong vince has orchestrated over 30 developer communities events across hong kong taiwan and singapore his open invitation for coffee reflects his enthusiasm for startup tech and language exchange marketing him as a pie hotel connector and innovator in tech landscape i welcome you vince for the talk yes you can continue thank you all of you thank you hey chaka about uh hi everyone i'm vince and um today i'm going to talk about translations about languages yeah so it's uh like human languages so if you look for like was python javascript then you're in the warm room yeah we are talking about uh how webfee can help cow sourcing translation for open source yeah if you don't understand then you don't you don't so yeah um so most of them and most of you uh i think you speak Vietnamese yeah that's great yeah so just want to share a little bit of my own uh genny yeah because i think that gives some context to the talk yeah thanks for the introduction just now and i would like to highlight uh my first drop in a startup is actually in a localization company yeah so i help the uh different kinds like fmb uh to translate into different languages and then eventually i started my own company a language technology company where i also help mozilla to do some technical content translations and work on some education technology products like language learning products yeah so um after that like in reasons years i started my career in webfee and i learned a lot about the new concepts the opportunities and it's like i i finally realized there are some new opportunities unlocked in the localization industry by webfee and that's why today i'm here to share my ideas yeah so yeah myself just want to add like i like learning languages uh korean japanese uh hopefully a little bit of Vietnamese yeah so 
That's why I really want to devote my time to localizing different open source projects. Today's talk starts with a question, and a university is the place for questions, so instead of me just sharing everything, why don't we raise questions and think together? I would break it down into three questions. First, why should we care about localization at all? Second, since no talk this year is complete without talking about AI, is AI going to solve everything in localization for us? And finally, what can we learn from Web3?

The first question. I'm really grateful that the Ethereum team gave a talk yesterday sharing their experience localizing their own website. The goal is super simple: if we expect the next billions of users of Ethereum, or of any Web3 ecosystem, they are coming from everywhere in the world. Cryptocurrency has no barrier, the technology itself has no barrier, but much of the time there is a language barrier in between, whether in the community or in the actual educational content. I'm not sure about you, but many communities I know find it quite hard to follow what is going on, because most of the materials are in English. I myself spent a ton of time learning English; when I was younger, I always spent more time struggling with the language than actually picking up the material, so I would say it's a pain point for me personally. And from ethereum.org we actually see in the data that once they started the translation program, they saw a rise in page views. As we said, most of the audience doesn't use English that much, so once they have their own language version, they are much more engaged in learning about Ethereum. So I think that's a really positive example showing the importance of localization.

Okay, if localization is important, why isn't everyone translating as much as possible? Why do we come across videos online that are only in English? To share my experience from working in the localization industry, let's simplify a lot. Imagine you have a website similar to ethereum.org and you want to translate it. You are the website developer, so the first thing you have to do is extract the strings, the source text in the original language, English, and supply them to the manager, meaning a project manager or localization manager, who eventually passes them on to the translators. But then you need a mechanism to guarantee quality. At the end of the day, if you are the developer and you don't speak Japanese, and someone says "I have translated it into Japanese for you, just trust me," that doesn't usually work. You need a reviewer to further review the translator's work and make sure everything is correct, just like quality assurance in a software team. This is a highly simplified flow, but we all know that software changes super fast, which is why we need Git or another version control system to manage it: you develop, you launch, you fix bugs, you deploy, and you iterate again and again. For obvious reasons, whenever you iterate on your software, most of the time the strings change too. The copywriting changes, you add a new page, a string becomes shorter or longer, whatever. So the ideal localization workflow generally has to pair up with your software development workflow: not just CI/CD for the software, but CI/CD for the translations as well.

Let's zoom in a little. Say I have a website and I want to kickstart the localization process. Let's go with a standard localization API from Mozilla. If you are only doing English, then when you build the website you just hard-code the English string and you're done. But even for English you need to differentiate "message" and "messages"; we all know English plurals usually add an "s" at the end. So the library lets you format the plural, and at the end of the day you supply translations in a format that says: if there is one, use "item"; if more than one, "items"; and if there are none, "no items." You have at least three variants, just for English, to handle these cases. So you already see some complexity here, just in the first step of localization. Then what about German? I'm not sure anyone here speaks German, but I studied in Germany, so I can say German has three genders, masculine, feminine, and neuter, and beyond that you have to think about the grammatical cases. Just to say "the," you have to consider all these cases; I spent six months of my life memorizing them. If you try to put that back into the workflow, it is far more complicated than the English example. And if German isn't enough, look at Russian: there, just the plural forms alone involve whole tables of grammatical cases. I don't speak Russian, so I won't comment further; I can only tell you it's complicated.

The root cause of why this is hard is that, as we mentioned, there is a lot of complexity in software development, since we keep iterating the software and need to ship safely and fast, and at the same time you have to handle the complexity of human languages. When you mix the two together, localization becomes a really complex problem. That's my take, and I can tell you there is a lot more complexity. For example, as I said, who reviews your reviewers? If you don't speak Japanese, no matter which translation agency you hire, you can only trust their resume or their reputation; you can never actually check their work. If we're talking about a website, besides the text, do you also need to translate the images or the videos? If we're translating into Spanish, is it the Spanish of Mexico or the Spanish of Spain? And if you're translating a video, how do you play the video back as a whole while the translator works, to make the UX as smooth as possible? There are many, many problems like that.

So as you can see, localization is a complicated process. There are quite a few commercial platforms, which I won't name, and they sometimes provide support for open source projects, but that is not a permissionless thing: you have to talk to them and ask whether they can share their platform with you for free because you are doing an open source project. We are grateful that people have started open source projects in this space too, such as Weblate, a Git-based version-controlled translation system, and, a bit more recently, Mozilla's own platform, called Pontoon. You can check these out if you want a proper workflow or platform to manage your translations.

But this is still not sufficient, because you need to think about who is going to sponsor this kind of translation. If you have a website today, you don't suddenly go from one language to 70 languages, because every translation has to be paid for. So much of the time people go for crowdsourced translation, because it becomes a win-win situation. When we ask you to contribute to, say, the Vietnamese version of a website, it's not only about the translation; we also hope that during the process you get to read the materials and share your work with others. So it's not just contribution but engagement. By crowdsourcing your translation, you can grow a more international, diverse community that contributes to, works on, and uses your open source software. It sounds great, right? It's kind of free, and it's very good for engagement. But again, why don't people do it more often? What's the catch? There are many challenges. The first is that with crowdsourced translation you have no guarantee of when you can deliver, which by definition is a bit like open source itself. And look at the past attempts at crowdsourced translation platforms by big tech, like Twitter and YouTube. They started with crowdsourcing: if you used YouTube around six years ago, you could submit your own translation for any video and it would go live. But eventually they all shut it down, because many people tried to use it as a tool to scam and abuse. They would slip their own advertisements into the video, or just play around and dump gibberish, rubbish content, onto it to collect some reward. So crowdsourcing is not that reliable in that sense, even though it does give you a lot of engagement. It's a bit like this: 80 percent of the people are good, but 20 percent are bad, and so the system as a whole didn't work too well. Let's pause there for a moment; I will come back to this. What I've explained so far is the complexity of localization, and that crowdsourcing sometimes doesn't work well.

On to the second question: we have to talk about AI. We all see that ChatGPT and similar systems are very good at translation. You can ask a question in most languages and get a proper answer, and they can do the translation work for you. So how does that fit into practice today? I would say machine translation is actually nothing new; the first machine translation conference was in 1956, I think, in London. Around the time we got computers, we started talking about machine translation, and the question of machines doing the language work hasn't changed much since. From what we see, the innovation in AI and LLMs is real: they have a lot of use cases in NLP, translation, and transcription tasks. If you look at the Y Combinator startups in Silicon Valley last winter, someone is doing lip sync: you submit a video, maybe in English, and the actors' lips and mouths are modified so they appear to be speaking the target language. So it's not just about translation but also about video generation and modification. That's the state of the art, and I would say the future has already arrived; it's just not evenly distributed. These AI technologies are so powerful, yet if you go back to any open source project today, or to the current platforms in the industry, they are sticking with what they had 10 or even 15 years ago: a quite complicated workflow, without getting the benefit of all this AI.

Let's look at one example, the Whisper model from OpenAI, from about two years ago. It was trained on a lot of voice data, mostly English, plus some translation data, and what it does is speech recognition: you give it a voice clip and it outputs a transcript. This is the state of the art, but even the transcription is far from perfect, especially once you move beyond English or Spanish. If you take Vietnamese, for example, then out of 100 words a noticeable number will simply be incorrect, and that's transcription alone, before any translation. So how many years do we have to wait before an AI model is so proficient that we can remove humans from the workflow? I think we are still a bit far from there. The good news is that this underlines the value of supervised data sets. The point I'm trying to make is that we have AI, but AI alone is not sufficient; we also need humans in the loop. And if you followed the news last month, we also know that models like Google's Gemini carry a lot of biases. So my question is: can we just trust AI from big tech, and who should decide what the best translation is? I would say it should be the community, it should be humans, not AI and not big tech. Again, humans have to be in the loop. And since AI is about data and computation, can we own our own AI? Partly yes: we already have Hugging Face and open source models like Mistral AI's.
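The "community decides the best translation" idea can be made concrete with a small sketch. This is purely illustrative and not the speaker's actual product: the function name, the vote format, and the optional per-voter weights (which could, hypothetically, come from attested language proficiency) are all my own assumptions.

```python
from collections import defaultdict

def best_translation(candidates, votes, weights=None):
    """Pick the community-preferred translation for one source string.

    candidates: {candidate_id: translated_text}
    votes:      list of (voter_id, candidate_id, +1 or -1) tuples
    weights:    optional {voter_id: weight}, e.g. derived from an
                attested-proficiency score (hypothetical scheme)
    """
    weights = weights or {}
    score = defaultdict(float)
    for voter, cand, direction in votes:
        # Each up/down vote contributes the voter's weight (default 1.0).
        score[cand] += direction * weights.get(voter, 1.0)
    # Highest weighted score wins; ties broken by candidate id for determinism.
    winner = max(candidates, key=lambda c: (score[c], c))
    return winner, dict(score)

candidates = {"a": "Xin chào thế giới", "b": "Chào thế giới"}
votes = [("v1", "a", +1), ("v2", "a", +1), ("v3", "b", +1), ("v1", "b", -1)]
winner, scores = best_translation(candidates, votes)
print(winner, scores)  # a {'a': 2.0, 'b': 0.0}
```

Weighting voters by attested proficiency is one plausible way to connect the voting to the social-graph attestations described later; a plain unweighted tally is the degenerate case where every voter counts equally.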
but then it's not sufficient because we do have concern on the data for example if you're trying to translate for louse you will see in the nlp committee the data sets that are available for you to change your model is actually less than one percent compared to anguish yeah so depending which country you are from which language you are speaking but then it's actually very hard to develop ai model for production use on your own yeah all right then let me jump ahead time is running out so what we can learn from worthy so let me get back to the ai pond yeah so i believe like uh what i just mentioned is mostly like centralized ai from big tags but actually in worthy they're already like decentralized computing and storage and filecon is a very good example then uh my personal favorite you can use for kaurau this framework to one decentralized ai models on decentralized nukes and also um i really believe in the visas like ai belongs on chain so if one day ai can buy ai can pay ai can like trade i can vote ai is autonomous then how you govern them how you control them and the only way i can think of is put their parameters onto the blockchain onto a smart contract such that we can govern them and there are a lot of things going on like how you participate voting in a DAO or like how you uh incentivize people and then we have different ways of funding open source project thanks to the previous speaker then we all know like there's a lot of like incentives airdrop whenever you contribute in the open source and i would say like i will highlight this one so there is a term called virtual active funding it's actually okay once you have your job done you have your project then um it is easier for people to agree on like what is useful and and to acknowledge who builds it okay you have already done this work and why don't we the community we right now we uh uh support you we give you uh the incentive we airdrop your money for example yeah so it's a trend in the web fee which uh 
i want to kind of like uh uh answer my own question on how web fee can help localization because we mentioned about cow sourcing we mentioned about ai and actually with this kind of like web fee technologies you can have the incentives you can organize these incentives okay for example the room here why now we can go buy ridam this ai model and run it on the decentralized cow we just have to cow fund it and we just have a desire okay what to translate and that's what i'm building yeah that's my idea and that's the problem i'm trying to solve so i'm working on this project this open source project uh called a ephiom localization service which does uh just that and we are working with the ephiom attestation service in the ferro shift team yeah so the too long uh don't wait it's like the short version is if you are running any open source project then go with us and then we will set up the ai for you we will set up the cow source translation workflow for you to help you to reward uh the contributors and i will say at the end of the day um um how you determine the translation uh is is actually a very uh social problem for example who can attest me speak uh ridam this or not right like then the easiest way is you just talk to me in ridam this so it's actually a very social thing whether uh you are able to like uh have that language proficiency and for us to attest whether a translation is good or not we have we can build this social graph and then we can uh have the cow have the like people to decide okay this translation is better this translation is like less good and then we put the incentives we reward people accordingly yeah so that's kind of the idea and then we are already building this like a product prototype that you can work up or work down just like stack overflow uh when we have this like a translated uh subtitles being online so finally yeah just to wrap up yeah because time is running out so we are also calling for contributions if you are keen on 
becoming a technical video translator i truly appreciate you can join us we have like set like 100 videos to translate and we we are planning to like distribute like 1000 of usd dollars to any contributors and as i said right we want to set up these incentives and then we uh would like to give that to the uh international community such that we can co-own our own language we can build our own ai pipeline and then we can bring uh ecosystem like ethereum to the gopro audience so uh this for us on twitter and then also join our telegram and that's kind of uh my talk so thank you and uh let me know if you have any question thanks so much yeah thank you vince for the talk and yes uh now i call the translator to translate into vietnamese yes thank you uh so thank you very much vince so manish is here yeah by the time manish connects the laptop i'll just go through about manish manish kumar bernal is a community lead of capex developer and community builder and has hosted many technical workshops sessions conferences and hackathons from participating all mentoring and organizing he has hands-on experience too also manish is a quick active in open source community he's a github campus expert and lfx mentee now i welcome you manish for the presentation all right good morning everyone uh i hope you had a great time here and like today we are having a topic on decentralized storage devolution from ipfs to filecoin uh i guess like most of you have already attended yesterday's build station so you have an basic idea about blockchain uh can you raise a hand like how many awful already know about blockchain and uh web three or anything related to that have basic idea about it can you have a reason okay a few awesome all right let me quickly get started myself manish kumar bernal i am coming right from india and and i was really excited about this conference and this talk as well let's start all right so this is our today's agenda uh we'll cover introduction to content address 
ipfs uh about filecoin and then ipfs with filecoin and then we'll also share some resources where you can refer later on and learn from it next uh here is an introduction about content addressing what is content addressing in in the general web uh web uh like web 2 what happened like we uh we have to put a location address to like basically it works on ip addresses right where we can get the geographical location right but in case of web 3 we are we don't have to do that like we don't need geolocation for it so we are uh so basically it's distributed system so how it does like uh basically it's a web 2 uh where you can find like uh if there is a domain url you can find the beagle dot jpg suppose uh it's a dog image right it's supposed to be a dog image but uh anyone can tamper it very easily if uh you just go to the domain and you will find a cat image right it is possible right it's completely possible but uh like so basically in case of web 3 what happened like uh we basically uh believe like in content addressing we put images which have a unique hash which cannot be tampered if you try tamper uh it will give you another unique address i will show you how it actually works so basically in uh decentralized system uh we actually follows uh is library isb and number how it works like instead of going to the exact location we follow a call like algorithmic hash code let me show you an example so it is a self-describing hash code how it works so it it works with the algorithm you are using the length and the values so it will give you an cryptographic hash okay so if you uh suppose uh i'm giving an example suppose if there is an image of a dog okay and you have uploaded it to a ipfs it will give you an uh code like this okay a hash code like this if someone try to uh like tamper it or try to crop it or make some changes on it it will generate another unique id or unique content address so basically it will help uh like if in sensitive cases like suppose if there is 
an uh sensitive proof or like media file for maybe news agencies so if someone tried to tamper it they can't if they are trying to tamper the whole id address will be changed so once it changed like you will get to know that it's tampered and if you're using the same content address you will get the original image or the original proof whatever you have uploaded so that's the idea so if you have any uh content address you can like simply go to cid inspector and check the what exactly the content ad address is holding like basically you can recover the metadata next is ipfs so ipfs like in the today's era like we see like data is everything and like most of us like have our data uh hosted on like big giants like microsoft google and uh companies like that but in decentralized world like uh things are quite distributed and it needs to be distributed because here users like power all the data okay if users are holding it like then it's like more secured in uh more secured because it's distributed system no one knows like where it is uh storing and if it's centralized like we mostly see like whenever uh something happened wrong and it's like went down like the whole server went down right so for that like we have come up with ipfs what is ipfs uh first is like ipfs stands for interplanetary file system file system is very simple uh similar to like folders and files we actually have in our drive or in our local system but here interplanetary stands like basically meant for like even if you are in mars you have you can access your files okay from earth suppose if there is a file you want to fetch from earth to mars and uh like it will take around an hour to you but uh there isn't like there's a thing like if it's a distributed system anyone in mars already fetched that file and you can easily access it in within seconds okay so that is the idea and yeah so this is how our actual files look like uh if we have any file on our local system it will visible like this right 
You have a path — index.html or whatever it is — and you have to know the location where it's stored. Similarly, on Web2 you have to know the domain (the domain maps to an IP address) and you have to know the path. With IPFS, you don't have to know the actual location where the file is hosted; instead, you get a content address which gives you the exact image or media file, but it won't disclose or leak the actual geographical location, because the system is distributed in nature — no one can fetch your exact location. That's pretty much IPFS.

Now, actually using IPFS is a bit of a task. There are basically two ways: either you host your own IPFS node — but then, again, that's centralized — or you get someone else to host for you. But why would someone host your media files or your videos? Either the content has to be very popular — maybe it's Elon Musk's data that you're hosting — or nobody will do it, right? So those are the options: run your own node, which is effectively a centralized system, or pay someone to run a node for you — but that's centralized too, if you're just paying one party to host it. There are companies like Pinata, Temporal, and Infura that offer pinning services: they have a distributed network, you give them your data, and they host it for you. But here's the twist: there's still no reliability guarantee. How will you verify that the data is not getting tampered with — that your data is safe?

To solve this problem we have a decentralized system called Filecoin, which helps us protect our sensitive data from being lost or tampered with. So, yeah — this is how Filecoin storage is designed. It's compatible with IPFS, of course, and it's Web3, so it's decentralized, and it's verifiable via cryptographic proofs — we'll see that in the later slides as well. They have a very massive decentralized network: as of now, I guess, around 180 million TB of data is already stored there, which is one percent of the cloud storage we have right now.

So this is the anatomy, or architecture, of Filecoin — how it actually works. Consider yourself a client whose data is referenced via a smart contract or an app. You make a storage deal, coordinating with a storage provider, and once the deal is made you get a Proof of Replication: a proof that your data has been replicated onto the storage provider's servers. Then, over time — say you've made a storage deal for six months, one year, or two years — every three months or every six months they give you a proof that your data is still there. That makes things reliable: your data is protected by them. And if any storage provider tries to tamper with it or do something wrong, there's a concept called staking: every storage provider has to stake some FIL tokens, and if they try to do any wrong stuff, they get slashed. Towards the end of the storage deal, you can either retrieve your data or extend the deal by paying more. So that's the whole idea: there are two proofs — Proof of Replication and Proof of Spacetime — which give you the assurance that your data is safe.
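The incentive loop just described — periodic proofs keep the deal healthy, missed proofs slash the provider's stake — can be illustrated with a toy model. To be clear, this is not the real Filecoin protocol (which seals data into sectors and uses Proof-of-Replication and Proof-of-Spacetime); it only sketches the economics, with a plain hash standing in for the real proofs:

```python
# Toy sketch of a storage-deal lifecycle: NOT the actual Filecoin protocol,
# just the incentive structure described in the talk.
import hashlib

class StorageProvider:
    def __init__(self, stake: float):
        self.stake = stake            # FIL-like collateral, slashed on failure
        self.stored: dict[str, bytes] = {}

    def accept_deal(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()
        self.stored[cid] = data
        return cid

    def prove(self, cid: str) -> bool:
        # Stand-in for a spacetime proof: show the data still matches its CID.
        data = self.stored.get(cid)
        return data is not None and hashlib.sha256(data).hexdigest() == cid

def run_deal(provider: StorageProvider, cid: str,
             epochs: int, slash_per_miss: float) -> float:
    for _ in range(epochs):           # e.g. one proof every few months
        if not provider.prove(cid):   # lost or tampered data
            provider.stake -= slash_per_miss
    return provider.stake

honest = StorageProvider(stake=100.0)
cid = honest.accept_deal(b"client data")
run_deal(honest, cid, epochs=4, slash_per_miss=25.0)   # stake stays at 100.0

cheater = StorageProvider(stake=100.0)
cid2 = cheater.accept_deal(b"client data")
cheater.stored[cid2] = b"tampered"                      # provider corrupts data
run_deal(cheater, cid2, epochs=4, slash_per_miss=25.0)  # stake slashed to 0.0
```

The honest provider keeps its full stake; the tampering provider fails every proof epoch and is slashed down to nothing, which is the economic guarantee the speaker is pointing at.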
That's pretty much it. Alright — so IPFS and Filecoin complement each other best: IPFS is fast and flexible for retrieval and makes things distributed, while Filecoin makes things reliable and makes sure you have all the proofs that your sensitive data is in the right hands. Awesome.

These are some tools of Filecoin — sorry, of Web3 — that you can actually build on. You must have heard about Ethereum yesterday; then you have simple layer tools and development environments, and for storage you can use IPFS and Filecoin. There are plenty of other tools we use for development as well. Next: Web3 all the way down. Basically, if you have a hackathon project or you're just getting started, there are simple tools hosted on top of Filecoin. If you have very little storage — suppose you just want to host a one-GB file — you don't have to go to Filecoin or IPFS directly, because that would be a long process. Instead you can choose tools like web3.storage, nft.storage, or Lighthouse. These are very easy to access and use: you simply go and upload your file, like you would on, say, Vercel or somewhere if you're a Web2 dev. That's pretty much it. Here are some resources — take a picture so you can refer to them later. Yeah, that's all; if you have any questions you can ask me now. Alright — am I on time? Awesome. Thank you very much. This is the Discord link if you want to join, and Filecoin is mostly active on Slack. So yeah, that's it — thank you very much!

Thank you, Manish Kumar, for the talk. Now it's time for the translation into Vietnamese... Yeah, thank you again, Manish Kumar, speakers, and everyone. It's time for a tea and coffee break, and we'll come back by 11. The tea and coffee is
sponsored by Penpot. Okay, thank you very much, all of you — we'll be back by 11.

Welcome back, all of you. Now we have a talk on "Empowering Web3 Innovations: Bridging Communities with Devfolio" by the speaker Denver D'Souza, who is the CEO of Devfolio. Denver is a dynamic and visionary leader in the tech industry, serving as Chief Executive Officer at Devfolio since January 2022. With robust experience spanning over six years in various roles within the company, Denver has demonstrated a profound ability to lead, innovate, and inspire. Prior to his current role, he made significant contributions as Chief of Staff for seven months, where he was instrumental in steering strategic initiatives and operational excellence. His tenure at Devfolio is marked by dedication to fostering a vibrant community of tech enthusiasts and developers through world-class hackathons. Denver holds certifications in supply chain fundamentals and supply chain analytics from edX, underscoring his commitment to continuous learning and professional development. His multifaceted character is a testament to his leadership in the tech community, his innovative mindset, and his unwavering commitment to making a positive impact on the technology landscape. So, I welcome you to the talk.

Thanks for the introduction, Guru. Alright — first of all, thanks a lot for your patience; I was unfortunately running late, and thanks for waiting up for me. I hope what you get out of this talk is worth it. So, first up: in true Web3 tradition, we usually start by saying "gm" — and "gm" is nothing but "good morning". Why? Because it's nice to say good morning to your friends, right? For this talk I'd like to keep it a bit informal, because there's not much point if it's just a one-way conversation. I'm going to be covering some ground, talking about our work in Web3 and how
we are thinking about helping developers put their reputation on-chain. What I would like is to leave some time at the end for questions — I'm going to run through the material a bit fast — and I'd like to give the three best questions some swag from Devfolio: I have this t-shirt with me, and I have this diary with me, and I can offer them to three people who ask me good questions. Everyone good with that? Okay — at the end, alright.

We've been — in fact, I've been — in the hackathon space since around 2015. We were doing hackathons back when we were in university, just like this. Then we formally built this platform called Devfolio in 2018, because there were already so many hackathons we were running and we didn't think the other hackathon platforms were good enough for us — so we just built our own platform, and then we grew. I'll talk about that a bit. So, yeah, this is Devfolio. Devfolio's tagline is "redefining economic opportunities for builders" — I'm going to get to each one of these words and what they mean, and hopefully it's clear by the end of this presentation. We are now 600k builders strong, we have 50,000 projects built, and we have dispersed 4.5 million USD in bounties, through hackathons, fellowships, and grant programs. What else? While we have worked across industries including fintech, AI, and much more, we have done a fair amount of work in Web3. And why is that? Back when we started in 2018, I don't think anybody was doing much in this space. We see ourselves as the middlemen helping inspire the next generation in forefront technologies and helping you get in early. That's why we started with this one
hackathon in 2018 called ETHIndia. Now, that first ETHIndia was not a very big hackathon — just 150 people or so — but it led to a lot of things: a lot of the early Indian Web3 projects were built at that hackathon, many people hired team members there, and it really grew over the years. We did ETHIndia in 2018 and 2019; from 2020 onwards we ran ETHIndia online because of COVID, and continued that right up till 2022, when we did another in-person edition of the ETHIndia hackathon. That hackathon was the biggest Ethereum hackathon in the world. By "biggest", what do I mean? If I want to be exact, we had about 1,700 hackers and 300 or so other folks over three days in Bangalore, and we had 460 projects submitted. So by number of projects submitted we were the biggest hackathon in the world in the Ethereum space — and we stayed the biggest until last year, when we again became the biggest, with 480 projects submitted. That's the overall metric we go by. Unlike other events, ETHIndia is a hackathon first and everything else later, and the whole blockchain week pops up around it.

So what has the impact of ETHIndia been? This was a project built at ETHIndia 2018 called Instadapp, which is now a protocol with a total value locked of, I think, around five billion USD right now — that's sort of the market cap of that project. Now, it would be unfair for us to take all the credit here, but we are glad that we were able to at least provide a platform for them to come and build this. There were also many other builders that I'll talk about. Alright — so how do we support other hackathons? We built a platform for ourselves first, and then we went around
and provided it to other hackathons: ETHDenver last year, ETHBarcelona this year, ETHKL, one called BUIDL Vietnam, ETHSeoul, ETHMunich, and many more. So Ethereum hackathons around the world also use Devfolio as a platform for managing their hackathon applications, judging, and more.

What else do we do? We also run fellowships — a mentored program where, over eight weeks, we pair you with industry leaders and they teach you what it means to be at the forefront of Web3. Unlike other educational courses, you actually get paid to learn: you get a thousand dollars from our end — we pay you so that you can learn. And why? Because, why not, if it's possible. We always think that if you're taking a bet on learning a new technology, you should be fairly compensated, right? The idea is to be additive, not extractive. We also do grant programs and more. Here I'd like to talk about some of the fellows over the years. One is an anon — he no longer goes by his real name — who started off as a fellow in our fellowships and is now working at a top Web3 protocol. Another also started off as one of our fellows and is now the CEO of Stackr Labs; they're doing some good work in the space. And we have more like that. Like I said, we also do grant programs: once you're done with hackathons and fellowships, if you want a grant to continue building in the space, we offer grants of up to five thousand dollars in equity-free funding. Through our grant program we have supported 31 projects so far, with $61,500 in grants and more, and we are currently in phase two of our grants program, with $130k in grants this time.

Alright — coming to something interesting. I mean, I think all that we've been doing so
far is certainly interesting, but in a sense we were just setting the base for what was supposed to come next. So what do we think about the future of work? I don't know if you've heard this buzzword being thrown around, but think about it: as devs and builders and designers, where do you go to showcase your work? You go to LinkedIn, you go to Fiverr, and more, right? But all of these platforms have their downsides — I'm not going to go much into that. So we are building an integrated system where you have immutable on-chain cred, stronger connections, and an ownership network.

Alright, before I take it ahead, I'd like to do a quick check in the room: how many of you participate in hackathons? Okay — so I think this is a rare audience I'm getting to speak to. How many of you have heard of hackathons? Anyone in the back? Alright. For those of you who do not know what hackathons are: they're just very simple contests, in a way. You usually have 24 or 36 hours to come and build something cool, and at the end there are prizes if you build something cool. That's the very simple version of what hackathons are.

What I want to talk about in the context of Web3 is this: you have your regular hackathon judging process at the end of a hackathon, but what if hackers could also vote for each other, as a sort of community-choice prize? That is enabled by quadratic voting — there's a mathematical formula that describes quadratic funding. What happens is that at the end of the hackathon you get 100 votes, and you can vote for other projects. We take those votes, put them on the blockchain — making them immutable — and then work backwards, plugging them into the quadratic funding formula, to figure out how the prize pool should be dispersed. And don't worry if I'm not
making too much sense — if you go and look it up, there are more than enough blogs on our website. We're also working on on-chain creds. What that means is that rather than having your credentials locked in somewhere, we provide something called Soulbound Tokens, which are like proofs of your achievements on-chain: if you win a hackathon you get one type of SBT (Soulbound Token), and if you just participate you get another type.

This is also something interesting that we recently launched: proof of backing — people vouching for and tipping each other on-chain. Let's say somebody big in Web3 — I don't know who would be a good example right now — say Sam Altman is going through some hackathon projects and he really likes one. He can tip the builders and leave a message for them saying "good job" or "I think that's really cool", and that message actually goes on-chain — it goes on Arbitrum, and we're also live on Base and Optimism now. So that's on-chain proof that Sam Altman likes your project, and we have started seeing people use that in their pitch decks and more, which is something good for us to see. Eventually, our goal is to build your one profile, your Devfolio ID, as a single source of truth that helps you get access to economic opportunities. That's the idea here. And if any of you are looking to host hackathons, or perhaps just want to talk more about building your community, please reach out to us at hello at Devfolio, or follow us on Twitter at @devfolio.

So, that's about it for my talk. Let me check how much time we have for questions — where is my clock? Okay, I think we have five minutes. Now, I'm not sure how much sense I made with a lot of this, but let's open it up to questions.
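The quadratic-funding split mentioned in the talk can be sketched in a few lines. This is a simplified illustration of the standard formula — each project's matched share is proportional to the square of the sum of the square roots of its individual votes — and not Devfolio's actual on-chain implementation:

```python
import math

def quadratic_match(votes_per_project: dict, prize_pool: float) -> dict:
    """Split a prize pool with the quadratic funding formula:
    a project's weight is (sum over voters of sqrt(votes_i))**2,
    so many small backers outweigh one large backer."""
    weights = {
        project: sum(math.sqrt(v) for v in votes) ** 2
        for project, votes in votes_per_project.items()
    }
    total = sum(weights.values())
    return {p: prize_pool * w / total for p, w in weights.items()}

# 100 votes from one whale vs. 100 votes spread across 25 hackers:
votes = {
    "project_a": [100],     # one voter, all-in: weight (sqrt(100))^2 = 100
    "project_b": [4] * 25,  # broad support:    weight (25*sqrt(4))^2 = 2500
}
payout = quadratic_match(votes, prize_pool=1000.0)
assert payout["project_b"] > payout["project_a"]
```

The example shows why this is a community-choice mechanism: the same 100 votes are worth far more when they come from many independent hackers than from a single one.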
The top three questions, like I said, get one of these and one of these. So — yeah?

[Audience member] Okay, thank you very much — very comprehensive and very clear about hackathon projects in India; we admire the fast growth and success of hackathons in India. Over the last three years we have heard that India has very high growth rates in startups and innovation, and hackathons are one of the most important instruments for that. In Vietnam, so far we only know hackathons a little, and we need some support from India so we can bring a wave of hackathons into Vietnam. So my questions: first, what conditions should we prepare so we can receive this big wave of hackathons moving from India to Vietnam? And second, for young students who need to quickly build knowledge and skills to follow this hackathon wave — would you have a chance to provide some scholarships for students? Vietnam and India are very close friends; I think you should increase the fellowships and scholarships for Vietnamese students. I will take your answer — thank you.

[Denver] Sure, thanks for that. Like I said, we are more than happy to help anybody who is trying to host hackathons in Vietnam. To be honest, we started in India, but we're not just an Indian hackathon platform anymore — hackathons on Devfolio happen globally: in the US, in Europe, even in Southeast Asia, in Korea, and in Singapore. We have actually done one hackathon in Vietnam, so I'm happy to collaborate on that. And I think the one advantage is that Vietnam might be similar to India in this sense: there's a new, rising paradigm — there's no reason why the future of tech should only be built in SF. It can be built from anywhere in the world now, and I think Southeast Asia is poised to lead this tech revolution, and
hackathons are a good way to get young devs excited. So I'm happy to support any such initiative — we are doing a global, online, Southeast Asia-focused hackathon in July; happy to talk about that too.

[Audience member] Thank you. I have one question: I'm a newbie, so I want to know what I need to prepare to join a hackathon.

[Denver] I think just have a name, have an email ID, and be willing to learn — that's about it. But ideally, if you're participating in a hackathon, set a goal for yourself: "this is what I want to learn by the time I finish the hackathon." Because, to be honest, at least your initial hackathons should not be about winning — hackathons are opportunities to learn and grow, and as long as you learn something new, you win. So, whatever position you're at: if you absolutely do not know how to code, your goal should be to build a basic website by the end of the hackathon, and in the lead-up you start learning HTML, CSS, maybe some JavaScript, some backend stuff. If you already know how to code a bit, you can set a different goal — maybe working with APIs, maybe building a simple app, whatever you like. Do you like music? You like music, okay. So think of a simple project: imagine you could build a project that suggests a song based on the weather outside. If you think about it, how would you do that? You need two data sources: you need the weather — you need to know what the weather is like outside — and you need music, the actual music recommendation. You can get the weather from a weather API — a lot of news outlets publish weather data, so there are a lot of weather APIs. If you don't know what an API is, it's an application
programming interface — you can Google it. So you can get weather data, you can get music data from Spotify, and then you combine them, and you can make a simple app that changes or suggests music according to the weather outside. Keep simple goals like that, and then, as you go through more hackathons, you take up more and more ambitious projects. [Audience member] Okay, thank you for sharing. [Denver] Yeah, that's about it. Alright — do we have time for one more question, or no? Oh — yeah.

Thank you, Denver D'Souza, for the wonderful talk and for giving insights about hackathons to all the students present here — even the professor from economics is very much interested in hackathons, so thank you. Unfortunately the gifts have not all gone; I think I'll take one for myself. Now it's time for the next speaker: Nirbheek Chauhan, Principal Multimedia Consultant at Centricular Ltd. He's a GStreamer maintainer, a GNOME Foundation member, a maintainer of the Meson build system, and a former Gentoo developer. I sincerely apologize for the delay. Thank you — it's all your time.

Hello, I'm Nirbheek. I work for Centricular. I've been a FOSS developer for 18 years now, since 2006. I've contributed to GNOME and Gentoo, to the Meson build system, and for the past decade to GStreamer, the free and open-source multimedia framework. Today I'm here to talk to you about GStreamer — to give you some insight into how the project is run and how we've become a financially sustainable and healthy free-software community project. GStreamer is a graph-based framework; it can be used, and often is used, everywhere there is audio or video. This slide is a graph of how to play a file back: you have a file source, you demux it into video and audio, and you play them out. It's used everywhere you find audio or video: TVs, smart speakers, doorbells, security cameras, airplanes, satellites, phones, watches, drones, broadcasting
equipment, radio stations, drilling equipment, desktop applications, mobile apps — it's ubiquitous. All these companies are known to use GStreamer, and there are probably more that I haven't heard of; I stopped adding logos when I got tired last night. So yeah, it's ubiquitous — used everywhere, but you've probably never heard of it, which is a good thing, because middleware should be invisible: if you're pointing at it, something's wrong.

GStreamer is free software, and it was created the same year that the movie The Matrix was released — a pretty good movie. The year was 1999, when the operating-system world was beginning to consolidate into GNU/Linux, Solaris, Windows, and the BSDs. The world continued to evolve through the 2000s and 2010s, during which the general adoption of free and open-source software, centered around Linux, picked up the pace dramatically. It led to the world today, where open source has fundamentally won and is now the de facto standard for software all across the world. In fact, open source won so hard that every single software company in the world, including behemoths like Microsoft and Oracle, has wholeheartedly embraced it.

Now, I just said "open source", but earlier I described GStreamer as free software. Can anyone here tell me the difference between free software and open source? Well, one difference is licensing: free software is GPL or copyleft-licensed, whereas open source tends toward more permissive licensing. Another difference is community: free-software projects almost always have a strong community around them that has grown organically, whereas open-source software is often just dumped by a company into a GitHub repository with no community around it. Another difference is ownership: there is often a single entity, usually a company, that owns
an open-source project, whereas with a free-software project there will be a community around it — multiple stakeholders, often multiple companies, that manage the project. All of the above are true to some degree, but in my mind the biggest difference is ideology. Free software is an ideology that wants to maximize your freedom to control the software that you use. In a nutshell, the idea is that if you have access to the source code and the ability to modify it, you control the software. The most visceral example of the importance of this is people who have medical devices implanted into their bodies, like pacemakers. Karen Sandler, the former executive director of the GNOME Foundation, has such a device inside of her, and she has talked extensively about the issues around her lack of control over this device — and hence over her own body. In the glorious victory of open source over proprietary software, this freedom seems to have been forgotten. Such is the nature of life: times change, the world's needs change, users change — the context changes.

Now, GStreamer as a project has undertaken a lot of necessary changes over time. It started out as a framework just for playing audio and video files on the GNOME desktop, but it is now a completely different beast, capable of handling every multimedia need the world has today. However, we have not forgotten our roots; we continue to work with the GNOME project, and I am a GNOME Foundation member myself. At the same time, the evolution of the project over the last 15 years has shown that our model is fully sustainable — and I want to be clear about the meaning of the word "sustainable" here. By sustainable I mean that the project has grown over time, accrued new members, and embraced new technologies and new sectors — such as the use of browsers for app development, machine learning and
AI, autonomous vehicles, remote-controlled vehicles, ultra-low-latency devices, and so on — all in a way that funds the maintainers and keeps maintenance in the control of the project. So what is this model that has led to sustainability and, most importantly, financial freedom? Isn't funding the development and maintenance of open-source projects a challenging topic? Yes, it is — but I think it's a question most people haven't considered. Just last week, a chilling supply-chain attack was attempted on the FOSS ecosystem as a whole: a sleeper-agent-style backdoor attack that tried to add a backdoor to every Linux machine in the world. Almost sounds like something from a spy movie, right? How many of you have heard of this thing called the xz Utils attack? It was widely reported, and I won't go into the details, but it fundamentally relied on the pervasive funding problem that the FOSS ecosystem has. The maintainer of a critical free-software project called xz Utils — basically a compression utility — did not have enough time to spend on the project, and was under a lot of pressure to continue maintaining it. Then, out of nowhere, somebody showed up and said, "I'll be happy to help as a maintainer." That person spent the next two or three years building up trust within the project, and at the end of that, once they had built enough trust with the former maintainer and got the keys to the project, so to speak — once they had enough permissions — they tried to insert a backdoor into a release. If that had succeeded, the majority of Linux systems out there would have had a backdoor in them, accessible by this person. The current best guess is that the attack was funded by some national intelligence agency, some state actor, but we don't know for sure. They were caught because of the obsessive nature of
FOSS contributors, which makes it really hard to ship a nefarious change and have it deployed across the world. In my view, this incident was both a shining example of the passion and quality of the people who constitute the FOSS ecosystem, and a critical failure of ours in failing to fund the people who make up the ecosystem we all rely on today. This is not an unknown problem, but it is a relatively new one — one that the movement did not foresee. We wanted the FOSS movement to succeed; we wanted the whole world to run on free software, and it does now. We succeeded, but we couldn't foresee some of the consequences. Our movement is no longer a hobby — it has become too big for that — and the only sustainable way to continue is for maintainers to work full time on it, and hence be funded.

But taking a step back: how can you have too much success? Isn't this strange? The systems of our world are built such that if there's incredible demand for something and you supply it, you're rewarded for it — and the people have spoken: they do want your software. So why do we have a funding problem in FOSS? Is it because we give away software at no cost? Some people do think that, and on the surface it seems true: after all, if you give something away for free, how can you make money off it, right? But consider that almost every single website on the internet that you visit is free, and they're still making money. So it's not that you can't make money if you give your product away for free — it's just that you can't make money selling the product in that case. So what can you sell? Again, the way to answer that question is by asking what people want. Let's say I give you all the parts, all the components, of a car, right now, for free. Can you build the
car yourself? No, of course not. What if I also give you the specifications and the design for each and every component — will you be able to build the car now? No, you still won't, of course not. You need somebody who understands all the components, who is an expert, and who can help you build it. Did you know that patents are designed to be a complete description of a system, in such a way that you can build it from scratch? They're not meant to be just an all-encompassing description that lets you prevent other people from building what you have built — that's a lesser-known fact. Even lesser known is that, despite having the patent, you often cannot rebuild the system from scratch, because there's a lot of tribal knowledge and experience that exists only in the heads of the people who work on these things. So you need expertise to build anything, and I think this point is enormously underestimated — because the people who are the experts, for whom something is obvious, do not understand how much value it has for people for whom it's not obvious. You see this all the time: people always undervalue their own expertise. In other contexts, though, companies will always need the help of experts to put together the components that make up the products they want to sell — and there's a name for this kind of business: a software services consultancy. This is exactly what Centricular does, the company that I work for — and Igalia, and Collabora, and Asymptotic, and many others who are stakeholders in the GStreamer community; those are just the four biggest consultancies here. The critical thing you need is some way to bridge the gap between the business end — which talks to users and gets paid by users to ship software to them — and the FOSS community, which has the technology that the businesses and the users need. I work for Centricular, and the company consists entirely of GStreamer
open-source maintainers. We've all been working on GStreamer for years — decades, between us — and the entire purpose of our company is to fund the development of GStreamer through a consultancy, so that we can do the work that no one wants to pay for, which is maintaining the project. We were founded in 2013, so we've been around for over a decade now; we've worked with over 300 companies in the past 11 years, and the team is tiny — just 13 people as of last month, and for most of our life we've only had about seven or eight people. My experience over the past decade has convinced me that the way to fund yourself is to provide something that people will want to pay you money for — not something they pay for out of the goodness of their heart, as charity, but because their business depends on it. That allows you to be fully in control of your future: if the users want something different, they'll get in touch with you, so you'll have your finger on the pulse of the market and of what people want, which means you'll stay relevant for as long as you want. But this solution needs an additional special sauce for success, and that ingredient is community ownership: the project must be owned by the community and not by a single entity. Note that Centricular is not the only consultancy that works on GStreamer — the project is extremely multi-stakeholder — and I think this is key to the health of a project. Because if you have a single source, a single owner of the project, inevitably somebody will take discussions private, the project will become divorced from the community, they'll add a CLA or dual-license it. Most recently, Redis did this: they made all of their code merely source-available, so now people are forking it instead. That's really bad for the community; it
It immediately kills the whole community, and you don't want that. So you have to have multiple stakeholders, because in that case, if any one company did this, the rest would continue on, and that one company would be alone and would not succeed. It's basically a check against a bad actor in the community.

But that's not the only reason why having a company-based structure is important for your open source project. Another reason is replacement rate. People are going to leave the project — you will have attrition: people move on, people pass away, people get busy — and you need new contributors coming into the project continuously. Open source projects have tried many things here — Google Summer of Code, outreach programs — and they have had limited success. The reason is that this relies on the labor of unpaid open source maintainers, who are already overworked, so it's quite a difficult model for success. But as a company you have more options: you are already paying your developers and maintainers, so it's very easy to hire somebody as an intern and have them come into the community. You can mentor them — they're young people — teach them about the project, have them come to care about it, and when the time comes they'll take over the mantle.

I sincerely believe this model can work in more places than it is being used right now, and I think the reason it's not being tried is that, as software developers, as technical people, as engineers, we are fundamentally blind to business thinking. But sometimes there's a person — maybe one in ten — who does have a mind for business, and my message is to those people: if you think you can think outside the box of the developer, please, please try this model. It won't work with every project, and I can tell you some necessary conditions for its success, but I think it's a model that can be replicated.

To summarize: FOSS funding is a big problem in our community — it's the underpinning of our ecosystems, and we have to find a way to fund it. My proposal is that you self-fund, and if your project is one that can be self-funded, you should try it. You can't sell your software — and you shouldn't sell your software — so sell your expertise. People do need it: you have spent years on the project, you know it better than anybody else, and companies do want that. As a little piece of evidence: there are so many projects I know of where companies want to find a way to fix issues, because they're using the project, but they don't know who to get in touch with. So they sit around, think "OK, I guess I can try doing it myself" without having the expertise themselves, and eventually, after a few years of trying to find somebody to work on it, they just move on to something else.

You have to protect your company via multiple stakeholders: if you start a company, try to encourage other people to start companies within the community too — and otherwise, if you are the only company, try not to go the way of Redis or companies like that. The community must own the project: the trademarks and the assets must be owned by a foundation or some other legal entity that is independent of your company. Finally, you must grow the project — you have to recruit the youth — because as times change, over a period of, say, eight or nine years, you will not necessarily stay in touch with what everybody is using, what people need, and what people want, and getting new blood is important for perspective. The necessary conditions you need to apply this model are these three:
First, the project must be a component, or consist of components, that companies can use to build products. This applies to a lot more projects than you might think — in fact, a lot of talks at this conference, today, yesterday and the day before, have been about projects that fit this bill very well. Secondly, it should ideally be copyleft, because a lot of the time the barrier that developers inside companies face is that the companies don't want to share their work; but if they're already legally required to share the work, they're more likely to approach an external entity for help with the project. Finally, the project must already be well known and of good quality. This is not a recipe for making your project successful, or for starting a project — but if your project is already well known, then this is a model you can apply.

So yeah, that's the summary, those are the requirements. Please go ahead and try it if you think your project fits this bill and you're well aware of it. That's what I'm here to tell you. Thank you.

Thank you, Nirbheek, for the wonderful talk and for giving us more insights. If you have any questions, you can freely ask them. No questions? Sure — thank you. Yes, go ahead.

Thank you — and as it's an international conference, now we have translation in Vietnamese. So basically you explained that having an open governance model, with several stakeholders and companies, is critical for open source. But as an open source user, what kind of guarantee can I have that the project is really run under an open governance model? How can we enforce such a thing?

The way you, as a user, enforce it is through the threat of going somewhere else. The good thing about open governance and an open community is that you can see it happening: they usually talk on Discourse, or Matrix, or IRC, or Discord, or something like that. And if you see that this is not happening, then you can reduce your alignment with the project. Fundamentally, all alliances and all friendships are about alignment: if your incentives are aligned with the company or with the project, then you should remain with them, but always keep in mind that if they stop aligning — because the community is starting to die out, or the company is trying to take advantage of the project — then you should say: OK, my incentives no longer align with yours, I'm going to take my work elsewhere, I'm going to fork it, or something like that. You have that power as a user, and that is a threat you should definitely hold over companies, because companies should be afraid of you. OK, thank you very much. Thank you. So, bye.

Yes, I can introduce myself — nice to meet you, everybody; I'll do the introduction. So yeah, thank you very much, Nirbheek, and now we have a talk on the open standard JMAP, the new-generation email protocol, by Benoît Tellier, general manager of Linagora Vietnam. Benoît has been part of the Apache James community since 2015 — his motto is a modular email system that scales using modern protocols — a contributor since 2016 and a project committee member since 2017; he became the chairman of the Apache James project in 2019 and a member of the Apache Software Foundation in 2020. Benoît has participated in several IETF meetings, collaborating with other project members to propose two RFCs to the JMAP working group. Additionally, Benoît has worked with the Linagora group for the past 10 years, where he currently serves as product owner for the Twake Mail solution and as open source program officer. He's now back in France, based in Lyon, after spending the last eight years in Hanoi as the general manager of Linagora Vietnam. I welcome you, Benoît, for the presentation. Yes, you're most welcome, and yeah, the stage is open for you.
Thank you. OK, hello everybody, let's get started — welcome to this talk about email protocols. As a first thing: deep in my heart, I want to be building something open, something that everybody can use, and that doesn't enslave you to a specific technological vendor. That has been Linagora's vision since 2000 — so for over 24 years — and we believe that on that basis we can build together ethical software solutions that benefit us all.

Who among you has an email address? Who among you sent an email today? No? Not sure? It's still early — you'll send an email today. OK. Long live email: email is here to stay. My mother uses it, my grandmother uses it — maybe not everybody's grandma, but everybody uses it. Where is your email address? Gmail? OK. Me, I've got a personal email address at Gmail, a company email address on linagora.com; GStreamer, you have your own email addresses — and even though we are on different systems, I can still send you an email. That's very different from WhatsApp, Signal, all these crazy messaging applications that cannot communicate together. And I can answer an email when I want to — I don't need to act upon it right now. Email is here to stay, that's what I believe, and our mission at Linagora is to give people the communication tools to continue doing email in 2024, 2025, and so on.

To do that in an open fashion, we need to be using open standards. Who can give me the name of an email protocol in the room? IMAP? SMTP? OK, yes, here we go. So, email 101 — I'm sorry, I'm not sure the whole room is very technical, so we'll need to go through this. Bob sends an email to Alice. Bob has his email server; Alice has her email server. Bob uses the Simple Mail Transfer Protocol to send the email to his server. But Alice is not there — where is Alice's email server? We do a DNS lookup, we find the server, we use SMTP to transfer the mail, and then Alice reads it — not with POP, but with IMAP. Great. What's wrong with that? Spam? Huh — not my problem!
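The Bob-to-Alice email-101 flow above — compose a MIME message, hand it to your own server over SMTP, let that server look up and relay to Alice's server — can be sketched with Python's standard library. This is a minimal illustration: the addresses and the server hostname are made up, and the actual network call is left commented out.

```python
from email.message import EmailMessage

# Bob composes a message; his mail client builds a MIME object.
msg = EmailMessage()
msg["From"] = "bob@example.com"    # hypothetical addresses
msg["To"] = "alice@example.org"
msg["Subject"] = "Hello from Bob"
msg.set_content("Hi Alice, long live email!")

# Bob's client would hand this to his own server over SMTP; that
# server then does the DNS lookup for example.org and relays the
# message, and Alice finally fetches it — classically with IMAP.
# import smtplib
# with smtplib.SMTP("smtp.example.com", 587) as s:  # made-up host
#     s.send_message(msg)

print(msg["Subject"])
```

The point of the sketch is that the sending side (SMTP) is uniform and federated; it's the reading side (IMAP) that the rest of the talk takes issue with.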
There are great tools like Rspamd to actually fight spam: you plug Rspamd in just before receiving things, and problem solved — don't pollute my talk, OK.

When was IMAP created? The 80s-ish, yeah — the last revision is from the 90s, but IMAP is a very old protocol. So today the picture actually looks like this: if we are using webmails, we need a custom HTTP proxy layer to translate IMAP into whatever web stuff we actually need. And IMAP is an old protocol: it's complex, it's line-based, the parsing is awful, it's connected, it's very chatty, it's per-mailbox — I always need to select, then select again to switch mailboxes, and so on. I could go on a long, long time listing very technical reasons why IMAP is no longer adapted to today's needs. Maybe it was in the 80s, but since then we have learned a lot about crafting efficient protocols — and that's the point: email usage did change. I have, in my pocket, a phone; a laptop; a tablet; a desktop at home — just me, I use four devices — and everybody is getting an email address. I expect to see my mail the same way on my phone and on my laptop, in real time; I expect all these kinds of things, and that's very expensive to do well with IMAP. So the big tech giants did their own thing on their side — the MAPI protocol, the Gmail API, the list is long.

That's about the time we started thinking about our own email solution. We looked at what was being done in open source, and we thought: OK, we are not just redesigning yet another webmail proxy — that's not for us; we're more ambitious than that. We want to be part of a global initiative to design email protocols 2.0, the new protocol for reading your emails. And that's how we started contributing, within the IETF, to an initiative called JMAP, the JSON Meta Application Protocol.

So basically, JMAP is IMAP 2.0. It's based on common technologies like HTTP and JSON — everybody knows those. It has real time in the core spec; good re-synchronization primitives in the core spec; and something that allows you to combine requests together, a bit like GraphQL, in the core specification. All of that gives you a super-efficient, stateless, easy-to-cache protocol built on common tools — you can use all the HTTP tooling in the world on top of it — and it's easy to integrate. JMAP was in the design stage for four years, and based on the exchanges and the consensus built within the Internet Engineering Task Force, we came to a final RFC in 2019.

So, I have a little demo — I'm sorry, today I won't be doing a live demo; I'll walk through it at the end of the slide deck to show you how JMAP works. The first thing with JMAP is getting the session object. The session object tells you which JMAP extensions are supported and lets you discover all the server's settings and limitations — no more guessing, like with IMAP and SMTP — and once you have it, you have your JMAP endpoints. The very next thing you do is get a list of emails, because we need to display the left panel. What you see here is Twake Mail — the JMAP email client we've been building — and on the right panel you see the JMAP query: we have an Email/query call that gives us a list of identifiers. True, unique, immutable, per-object identifiers — something we actually lack in IMAP; it sounds stupid to say, but we are lacking it. Then we chain this, with a back-reference, to an Email/get call, and this gives us, for the list of identifiers returned by the Email/query, all the metadata we need to display our webmail's list of emails — in one call.

Once we have that, it is easy to re-synchronize: we have another method called /changes. You see there's a common pattern: you have generic method names and you have entities — here, Email is the entity and /changes is the generic method name — so it works the same for mailboxes, for identities, for email submission.
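The chained query-then-get pattern just described can be sketched as a single JMAP request body. This is a minimal sketch: the account id, sort criteria and property list are made-up examples, and a real client would POST this JSON to the apiUrl advertised in the session object.

```python
import json

# One JMAP request batching Email/query with Email/get through a
# back-reference ("#ids"): call "1" consumes the ids produced by
# call "0", so one round trip yields the metadata for the list view.
request = {
    "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    "methodCalls": [
        ["Email/query",
         {"accountId": "u1",                      # hypothetical account
          "sort": [{"property": "receivedAt", "isAscending": False}],
          "limit": 20},
         "0"],
        ["Email/get",
         {"accountId": "u1",
          # back-reference: resolve /ids from the Email/query result
          "#ids": {"resultOf": "0", "name": "Email/query", "path": "/ids"},
          "properties": ["subject", "from", "receivedAt", "preview"]},
         "1"],
    ],
}
body = json.dumps(request)  # this is what actually goes over HTTP
```

The same shape works for re-synchronization: replace Email/query with Email/changes (passing the last known state) and pipe its changed ids into Email/get the same way.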
All the JMAP entities work the same. With /changes, I give it a state, and I can collect everything that changed — here there are no changes — and I can pipe it into an Email/get method to directly fetch all the metadata of the items that changed and update my view, in one call, one API call.

Then I click on the email and — whoa — it opens; I can read it. Whatever this guy wrote, it's not security, OK, whatever. But what you should notice is that I passed in more properties, and I get the email server doing the parsing for me and returning a nice JSON, including the HTML body. On the front-end side I don't need to do any MIME parsing — no MIME parsing at all — so I can have a team that doesn't do email building my front end. As a project manager, I love it. And then you've got out-of-band download and upload for your big binary files.

And this is how sending an email looks. On the left you've got the Twake Mail composer — a rich-text composer; we spent quite a bunch of time on that — and on the right you see a call that creates the email with Email/set, piped into EmailSubmission/set, so that we actually end up sending the email. I won't be demoing the real-time stuff, but basically it's simple server-sent events, things like that.

So, being part of a new protocol is an awesome adventure. We review the standards; we implement them, among the first to do so; and sometimes we encounter functional needs that don't yet have a standard — then we share our extensions, and some of them eventually become standards of their own. I can give you the example of read receipts, and we also did the same for quota. So we are really involved in the IETF community.

Last but not least, open standards are important: we need a diversified network of companies contributing to our software. So basically, on the server part, we contribute to Apache James, the email server of the Apache Software Foundation.
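The create-then-send chaining described above — Email/set piped into EmailSubmission/set — looks roughly like this as a request body. Again a hedged sketch: the account id, mailbox id and identity id are hypothetical, and a real client would POST this to the server's apiUrl.

```python
# Creating a draft with Email/set and sending it in the same request
# with EmailSubmission/set. The "#draft" creation-id reference lets
# the submission point at the email created one step earlier, so
# compose-and-send is a single round trip.
send_request = {
    "using": ["urn:ietf:params:jmap:core",
              "urn:ietf:params:jmap:mail",
              "urn:ietf:params:jmap:submission"],
    "methodCalls": [
        ["Email/set",
         {"accountId": "u1",                      # hypothetical ids
          "create": {"draft": {
              "mailboxIds": {"mb-drafts": True},
              "from": [{"email": "bob@example.com"}],
              "to": [{"email": "alice@example.org"}],
              "subject": "Hello over JMAP",
              "bodyValues": {"body": {"value": "Hi Alice!"}},
              "textBody": [{"partId": "body", "type": "text/plain"}]}}},
         "0"],
        ["EmailSubmission/set",
         {"accountId": "u1",
          "create": {"sub": {
              "identityId": "identity-1",
              "emailId": "#draft"}}},  # reference to the draft above
         "1"],
    ],
}
```

Note that the client never builds a MIME document here — the server assembles the message from the structured body parts, which is exactly the "no MIME parsing on the front end" point from the demo.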
We reuse that project to build one of our products. Apache James brings storage primitives, modularity, and email standards, and on top of that core project we develop a product called Twake Mail, which adds our custom collaborative features and extensions — we have proofs of concept with encrypted PGP data at rest, that kind of stuff — plus integrations with our ecosystem. And it cuts both ways: we also don't need to push all of those things into the core project, because we have Twake Mail — and that's how we can link our two talks together.

On the back-end side, with James, we take the approach that, in the end, an email server is an online service like anything else today. So we can rely on a central database that scales thanks to NoSQL — Cassandra as the metadata database, OpenSearch for distributed search, S3 for storing big data, RabbitMQ for messaging — and then we just manage the email server like any massively scaling service we have today: no more sharding and the like. So we also reworked the core architecture of an email server and integrated it with modern cloud technologies like Kubernetes, Prometheus, and so on. And on top of that we developed — I already showed it to you, you've been seeing screenshots — a mobile application based on Flutter. So it's cross-platform: not only Android and iOS, but also the web application, the webmail. Built with Flutter, cross-platform — and you can use the Twake Mail front end with any JMAP server; you just need to find a JMAP server, and hopefully Linagora is not the only service provider. As it's mostly on-premise email server deployments today, we've for instance been deploying it for 10,000 lawyers in France, just last month.

Beyond us, there are other people who are open source and part of the overall JMAP community — some of them should be pretty well known to people in the audience, some of them are big telecom providers, some of them are a bit less well known. So, I don't know if there are a lot of email people in the room, but I hope I convinced you that JMAP is the future and that you need to integrate JMAP into your email products. Thank you — thanks for your attention. Do you have questions?

Thank you very much. Does anyone have a question? Hi — so, I've heard that Fastmail is also working on their own protocol for email as well, something called Courier. Is that a competitor? Are they collaborating with you? What is that?

Fastmail — yeah, I would say Fastmail is a partner. The email market is big; we don't target the same personas. Linagora historically is French and mostly focused on the French market, with the sovereignty angle — we speak about data sovereignty, and our customers need their data hosted at least in Europe. But we do actually see the Fastmail people quite a lot — for example, I saw Ricardo at FOSDEM earlier this year. We share the same fight, which is wider JMAP adoption. We have some disagreements: I'm very glad they contribute JMAP into Cyrus, but Cyrus is only a mail delivery agent, and Fastmail did not open-source their code for mail submission, which I'm quite fed up with. But apart from that, I would say: overall, a partner — we share the same fight.

Thank you. I have one more question. You mentioned that the protocol leaves more work for the server and less for the client, and in one part of the demo screenshots you showed something indicating that the server does more of the parsing and that you don't have to guess. Could you explain that a little more?

Yes, of course. Email is a complex topic: everybody comes along with their own little spec, making it full of complicated edge cases and corners, and so, if I'm parsing the emails myself, writing an email application without help is a very complex topic — that's number one.
Number two: depending on what you are doing, you might need to download the full email blob to act upon it as a client — which is OK if you've got an offline IMAP client like Thunderbird, but not OK if you are doing a webmail or something like that. So basically JMAP simplifies this a lot by saying it's the server's responsibility. And to be fairly honest, it doesn't take that much compute on the server side: if I look at the server CPU, my workload is roughly one third protocol work, getting data in and out to the client; one third accessing the database layer; and maybe one third in the middle doing the advanced JMAP stuff. It's a cost we can pay, and in exchange you can have lighter applications on the client side. You can also take people who don't know what an email is and ask them to develop an email application: they just need to understand how the protocol works; they don't need to care about multipart, inline, alternative, and all the complex stuff.

Makes sense, thank you. About my email client: currently at my company we use IMAP, and whenever I'm on a slow connection — or even just normally in India, because the server is in the UK — if I do too many actions it just takes half an hour to finish the job. So I'm really happy to see a project, a new protocol, that can batch and do operations in a single API call instead of multiple API calls for multiple actions. I'm really looking forward to a future where we can get rid of IMAP. Thank you.

Thank you very much. And we have one more question. Hi — my question is whether you found at least one big ISP or mailbox provider that wants JMAP, because that's the problem we have. I work for Open-Xchange — we make Dovecot, which is the leading IMAP server — and we've had JMAP support on our list for six or seven years, but we never find anyone who wants it. The free software community doesn't want it — when you ask which features they want, they ask for other things — and the big commercial customers, the very big ISPs, say: no, we're fine with IMAP, we don't want to rewrite our clients, so we don't need JMAP. I mean, we want to do it — so, did you find some demand in the market? Because that's the problem we have: the chicken-and-egg problem.

I think Fastmail is the good answer to that. You need some kind of vertical integration, I would say — owning at the same time the server side and the client side — to actually be able to push JMAP, and I think that's not necessarily the case for the big ISPs. I must confess I don't know Open-Xchange that much; I guess you have your own custom alternative to JMAP for the webmail, more or less?

We generally do everything with IMAP — we have the webmail, and maybe a few extensions for the commercial customers. By the way, at the IETF there are also new extensions for IMAP, so even the IETF is developing alongside JMAP — I mean, I go to the IETF, there's a meeting on JMAP for half an hour, and then the same people reconvene for the meeting on IMAP version 4 — it's like they don't know which one they want to push.

Yeah, but I think the answer, for the people running Open-Xchange, would be: if you are able to push JMAP across the stack, you will be reducing the cloud bill for your customers, so I think that's where it should start. But they have something that is already profitable enough, already running, stable... That's a discussion we had at FOSDEM — with Dave, maybe, I don't remember the name. But basically, yeah, maybe the next big moving thing in the JMAP ecosystem is the new protocol modularity making it into Thunderbird — it's not impossible that we get a big email client starting to make the JMAP move, and with that, maybe the dominoes will start falling, hopefully. What I can say is: developing a new solution, I was able to do it on my side with just JMAP; I leveraged the knowledge and expertise of other people within the IETF, and clearly we would not have been able to come up with such a great protocol by ourselves, just at Linagora.

Thank you very much — such a great interaction, Benoît, and yes, thank you. We'll end the session. All of you: we have a lunch break now, until 1:30 p.m., so I request all of you to enjoy the lunch; we'll be back by 1:30 p.m. We have very great sessions and talks coming up, regarding OpenWISP, a secure print experience, smart home ecosystems using open source, the open source and open data opportunity, and many other things — including an ask-me-anything with Roga. So I request all of you to come back by 1:30, after lunch. Thank you, all of you. Yes, thank you.