Let me check the time. So it's the hour. Maybe first a word on me: my name is Ben. I started in this line of work a couple of years ago. I started on Ethereum in 2016, and then two years later, in 2017, we started this company, OST. We saw the blockchain hype happening, and I was building on Ethereum, and I realized that if we couldn't move the infrastructure forward, a lot of these projects would potentially hit a winter, and I did not care to start or end my new career with a winter. So I really tried to address the question: how do we make Ethereum 1x work and get to actual use cases? Since then we've worked on what we now call Mosaic, and this is a short presentation on Mosaic 1, which would be the first full dApp with open validators on Ethereum 1x, but I'll get into that throughout the talk. Let me see how this works. The outline is very simple: I want to say a few words about Ethereum 1x, and then I want to say something about Mosaic. The numbers are a little bit outdated, but they haven't significantly changed. In April last year I checked Etherscan, and we do roughly 100,000 one-gas operations, so additions, per second on Ethereum right now. I'm comparing apples with oranges here, so all the critical spirits, just bear with me on this slide; you can be very critical later. For every operation we execute in the EVM, we also have to execute Ethash, and that ran at some 150 terahashes per second. If you say roughly 6,000 operations per hash, you come to something like 10^17 operations, which isn't bad, because that's world-computer performance scale, but we're wasting a lot of it: there are 12 orders of magnitude between the number of operations we execute in the EVM and the number of cycles we spend to get proof of work.
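The back-of-the-envelope estimate above can be written out explicitly. All figures are the speaker's rough numbers; in particular the 6,000 operations per hash is an assumption, not a measured constant:

```python
# Back-of-the-envelope efficiency estimate from the talk (all figures approximate).
evm_ops_per_sec = 1e5    # ~100,000 one-gas (ADD-like) EVM operations per second
hashrate = 150e12        # ~150 TH/s of Ethash at the time
ops_per_hash = 6e3       # rough assumption: ~6,000 primitive operations per hash

# Total work spent on proof of work vs. useful work done in the EVM.
pow_ops_per_sec = hashrate * ops_per_hash      # ~9e17 operations per second
efficiency = evm_ops_per_sec / pow_ops_per_sec # useful / total

print(f"PoW ops/s: {pow_ops_per_sec:.1e}")
print(f"efficiency: {efficiency:.1e}")
```

The ratio comes out around 10^-13, which is the "approximately zero efficiency" the next part of the talk builds on.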
And so if you take a very standard thermodynamic argument and divide the useful part by all the work done, which is still 10^17, you get an efficiency of approximately zero. So the question is: why do we care to run a machine at near-zero efficiency? Apparently we spend millions of dollars a day to keep it running, because we mint ETH and it has a stable value, so we pay off the electricity bills. Why do we pay this cost? Why do we run one of the most inefficient machines I can think of (there's Bitcoin as well) and keep it running? My argument is that it uniquely builds a collective state. It solves a problem that we weren't able to solve before, and I think that's why it's worth trying to improve its efficiency: it allows us to connect otherwise unconnected, isolated intelligences, whether that's people, companies, or edge devices. But right now we're in a very, very early age of this machine, of course. So then the question I ask is: who is addressing scalability for the existing ecosystem? There are a lot of projects building on this, and maybe this is my cynical slide, but Polkadot and Cosmos, great projects, are building a new ecosystem. Ethereum 2.0, yes, but we've just stabilized phase zero, and it will take a while before we get to feature completion of what we have today. And the layer-two solutions always scale one dApp at a time, not Ethereum 1x itself. They don't even aim to extend the interface, and a lot of them restrict what's possible back to UTXO-style transfers, to just token transfers. That's why Mosaic is a dApp to scale Ethereum 1x itself. And so my moonshot challenge, one we're posing to ourselves but happy to invite anyone to: can we make Ethereum 1x 100,000 times more powerful, more efficient? It will still be near-zero efficiency, I acknowledge that, but it's already a lot more power than we'd get otherwise.
So that's the first part, my short introduction of why I think this is worth trying. Then, to introduce Mosaic itself: it's often said, and it's partially a good analogy but I think not a very helpful one, that blockchain is where the internet was in the 1990s. It wasn't very useful yet, and so on; everyone goes with this analogy. I think it's not very helpful because it doesn't tell me how to build a better blockchain. Whereas if you think about the 1990s concretely: we had an Intel Pentium processor and it ran at 60 megahertz. Six years later we had 10x that, a Pentium III which ran at 600 megahertz. I bring up 1999 because it's also the introduction of the first GPU. The GPU was a card you plugged into your existing computer to make it do more computations, because it took another seven years before the first desktop dual-core processor came around. That wasn't an easy task; it was also a different task. By 2006, NVIDIA was at the GeForce 8 series. This is just historical fact; I went over to Wikipedia earlier today. And so my claim, my position, is that Ethereum 1x right now is sort of the Pentium, and that's being generous, right? We can't do that much on Ethereum. It's a single core; it's really almost an embedded device, with very similar constraints to solve for. And of course we're not in 2006 or the 1990s, so our version of a multi-core processor will be one that runs with a thousand shards. We won't go to two cores; we'll go to a thousand shards. But that's a hard problem. And the question is: can we not build a GPU in the meantime, one that gives us a hundred or a thousand additional cores that we add onto Ethereum 1x, so that we already get a performance boost?
Actually, I want to make an additional point, because we have a bit of time, though I don't want to make it too long or we'll be here till 11. I think this analogy also really nicely illustrates how the design philosophy is different. dApps for Ethereum 2.0 don't have to care about which shard they're executing on, which is somewhat similar to a multi-core processor. Whereas if you want to code for a GPU, you have to explicitly write your program to be able to use the video card's capabilities. And that is somewhat similar to what I'll explain here: if you want your dApp to use Mosaic, you'll have to write additional contracts. But it might be worth it for some use cases. So my claim is: if Ethereum is the world computer's CPU, then Mosaic is a GPU you connect to accelerate your computations. So what is this blockchain? A blockchain has two parts: a supply side and a demand side, because it needs to be in economic equilibrium. The whole point is that no one is doing this out of generosity; running a validator is a for-profit activity. If it's not profitable, no one's going to keep running the chain. On the supply side, Mosaic is a dApp, a set of contracts on Ethereum 1x with an open set of validators, and you need to stake ETH and OST to finalize meta-blockchains and get rewards from transaction fees in OST. On the demand side, any dApp developer can now deploy EVM contracts to one or more of these meta-blockchains, or cores; I'll get into that. And you can pass messages between the chains, so you can send both ERC-20 tokens and other message data from Ethereum 1x to the core and back. Within each core, you can now comfortably run at 300 transactions per second. Plus, and this will be the point, you don't have to pay for proof of work. And this is also where, in 2015, well, for me in 2016, this idea really started.
In 2015, there was a huge Bitcoin reorg, a hard fork, and it was caused by what was later described by Jason Teutsch as, the name slips me, but I'll just explain it. Sorry? Yes, the verifier's dilemma; it's always good to have backup there. Namely, you need to be inefficient to make proof of work work, because proof of work only works if the amount of time you spend verifying the actual transactions is negligible compared to the time spent finding the next nonce. And that's what happened in 2015: a huge part of the mining pools effectively said, you know what, if we skip these 200-300 milliseconds of verifying the transactions, we're 200-300 milliseconds ahead of everyone else in finding nonces. For a while they were successful, but eventually they started including bad transactions, and later someone noticed, and there had to be this huge reorg of Bitcoin. And so Teutsch, then still in academia I think and now at TrueBit, analyzed this and said there's this verifier's dilemma: proof of work only works if it's inefficient, because the useful amount of computation needs to be negligible. But if we can build a PoS system, a proof-of-stake system, on top of the proof of work that we're already expending, then we can use the same value that's already secured by proof of work to secure a vastly greater amount of work, and we can reward that work with just transaction fees, not with a fee to produce new blocks and find the nonce, because that, we know, is inefficient. So I briefly want to go over how the contracts work. These boxes are representations of contracts on Ethereum. A validator joins a specific core, one of them, by depositing both ETH and OST on Ethereum. And initially, and this somewhat connects to the previous talk, you need to earn reputation throughout your life as a validator; the stake itself already helps.
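On the 2015 incident: the incentive to skip verification can be put in rough numbers. This is my own illustrative arithmetic, not figures from the talk:

```python
# Rough, illustrative arithmetic for the verification-skipping incentive
# described above (numbers are ballpark, not measurements).
block_interval = 600.0  # Bitcoin's average block time in seconds
verify_time = 0.25      # ~250 ms spent verifying a new block's transactions

# A miner that skips verification starts hashing on the next block earlier,
# which behaves like a small relative hashrate advantage on every block.
advantage = verify_time / block_interval
print(f"effective hashrate advantage: {advantage:.4%}")
```

The per-block edge is tiny, but it is free and it compounds across every block, which is why large pools took it despite the risk of building on invalid blocks.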
Well, I don't want to get into this now, but we can take questions if you want. The reputation in our case is earned throughout this process, and it defines how large your reward is relative to the others. But every vote counts for one: if you have a lot of money, you need to create a lot of validators. We'll get to why it's still Byzantine fault tolerant and secure. [Audience:] How is this different from Plasma? [Ben:] Plasma is based on fraud proofs, right? You can only do certain state transitions within Plasma, and if any invalid transition happens on the Plasma chain, you need to report a fraud proof on Ethereum, and then people can exit, and so on. Here it's a traditional consensus engine. I call this a meta-block and a meta-blockchain because the idea is really re-implementing a consensus engine at layer two, so the finalization of a block is a Byzantine vote of the validators on Ethereum. [Audience:] Okay, but as far as I know, you can also put a PoS or PoA or whatever algorithm you want on Plasma, I think. [Ben:] I would love to have this conversation, but I'm not aware of any; well, I'm aware of some people working towards full EVM capability on a Plasma chain, like LeapDAO and Solivium, and I'm very good friends with Yuan, but I'm not aware of them having a full EVM interface. [Audience:] I'm working with Leap, so let's talk later. [Ben:] Okay, yes, I'd love to talk. [Audience:] I've been researching Casper, so Ethereum 2.0, FFG. How different is the amount of money deposited between Ethereum and Mosaic 1.0?
So the reason I like the GPU analogy is that, if you think of your old desktop, it's a card you plug into it. Same here: it's a set of contracts we deploy, and then we build out a validator pool and a demand side for it, but anyone can take the same contracts and deploy them with their own rules; you can deploy multiple instances of this on Ethereum mainnet, and no one's stopping you. Right, so the rules for how much is staked depend on the implementation. Specifically in our case, we're working on an algorithm where some percentage of the reward you earn you can take out, but some percentage, decided by governance, is added to the stake, so that the stake keeps increasing and we can grow with demand, because otherwise... [Audience:] So it's closer to PoS, right, than PoW? [Ben:] I'll go through more details and maybe then it will become clear, because we actually also rely on Casper FFG. In 2016 I built Ethereum plus Tendermint, and now I'm using Casper FFG, which took some inspiration from that, to build this layer-two consensus engine. I'll talk about Casper in a second. Thank you. Okay, great, next slide. So when you join on Ethereum as a validator, there's a concept of a meta-block opening, which I won't go into in detail because I want to paint the big picture, but the vote for the next meta-block has a specific meaning. The validator then needs to join on the auxiliary system, and on this auxiliary system he needs to play Casper FFG. Why do we want that? Because we don't want to expend new proof of work or rely on other algorithms; we just want a way to finalize a chain history on top of any block-proposal mechanism. In more detail, we actually play this twice.
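The reward-compounding rule just described can be sketched in a few lines. This is only an illustration of the mechanism; the function name, the split, and the governance parameter value are all hypothetical:

```python
def apply_reward(stake: float, reward: float, compound_fraction: float):
    """Split a validator reward per the mechanism described in the talk:
    part is withdrawable now, the rest is added to the stake so total
    security can grow with demand. `compound_fraction` stands in for the
    governance-set percentage (illustrative value below)."""
    locked = reward * compound_fraction
    withdrawable = reward - locked
    return stake + locked, withdrawable

# Example: a 100 OST reward with 40% compounding (hypothetical parameter).
stake, payout = apply_reward(stake=10_000.0, reward=100.0, compound_fraction=0.4)
print(stake, payout)
```

The design intent is that a core's total stake ratchets upward as long as it earns fees, so security tracks usage.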
You need to play it once to observe the state of Ethereum 1x and agree on that, and once to finalize your own chain, so that information can also be transferred from Ethereum 1x onto the auxiliary system. But now, do people want me to explain Casper FFG a bit? Yes? So Casper FFG works by finalizing a history after it's been published. With Tendermint, every block is only committed if it has a supermajority over two rounds of votes from the validators. Casper FFG, the original gadget, was instead designed to layer economic finality over proof of work. So we don't really care anymore about which blocks are produced; we still want some noise reduction, of course. But once the blocks exist, validators cast specific vote messages in which you identify a source checkpoint and a target checkpoint. And there are specific rules; the construction is such that you can prove certain properties of these checkpoints. Specifically, what you want is that there cannot be two contradicting finalized histories, each with more than a two-thirds majority. And that is proven to be impossible unless validators sign contradicting vote messages. If those contradicting vote messages exist, they're very easy to verify, so we can slash you on Ethereum 1x immediately, as soon as you try to finalize contradicting histories on the auxiliary system, because the vote messages themselves are very clean, and we know that if they exist, you're accountable for them as a validator. So if you see a supermajority of vote messages finalizing a certain history, you know that this history is finalized. It's a little more complicated than that, but that's the basics; I hope I got it right.
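The "contradicting vote messages are very easy to verify" point can be made concrete. Casper FFG has two slashing conditions, a double vote and a surround vote; the sketch below checks a pair of votes against them, identifying checkpoints by height only and omitting signatures and hashes of the source for brevity:

```python
from collections import namedtuple

# A Casper FFG vote names a source checkpoint and a target checkpoint.
# Heights and a target hash are enough to illustrate the slashing rules.
Vote = namedtuple("Vote", ["source_height", "target_height", "target_hash"])

def is_slashable(v1: Vote, v2: Vote) -> bool:
    """True if one validator signing both votes violates a slashing condition
    (sketch; assumes v1 and v2 are distinct votes by the same validator)."""
    # Double vote: two different targets at the same target height.
    if v1.target_height == v2.target_height and v1.target_hash != v2.target_hash:
        return True
    # Surround vote: one vote's source..target span strictly surrounds the other's.
    if v1.source_height < v2.source_height and v2.target_height < v1.target_height:
        return True
    if v2.source_height < v1.source_height and v1.target_height < v2.target_height:
        return True
    return False
```

Because the check needs only the two signed messages, a contract on Ethereum 1x can verify the evidence cheaply and slash immediately, without replaying any auxiliary-chain history.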
What is really nice now is that we have a very good noise-reduction mechanism, because on the auxiliary system the validators are playing Casper FFG to finalize a history. It takes a very specific form to bring back to Ethereum mainnet and say: this is our proposal of what has happened, and we propose it as a valid finalization of the auxiliary system. It's a very condensed format, so we can present it to the core contract. And if it's indeed valid, if it satisfies all the constraints of Casper FFG, then we know that at least these validators of the core have a supermajority for it, and if no one got slashed, there wasn't a contradicting history. But that's not enough, because we don't want to weaken our security. Imagine that you don't have one core but a thousand; you now need to divide your validators over those thousand cores, and then a thousand times less stake represents any finalization. So we want to make sure that the total stake in this chip (I think of it as a chip these days) is securing any of the cores. I've been neglecting this side of the room, I realize; sorry. And that's why we introduce a committee, though in some sense you can think of it as a traditional blockchain system, right? Some leader needs to propose a block, and that happens on the auxiliary system with the finalization of the side chain: any finalized segment on the auxiliary system is a valid proposal. You can pre-commit it on the core contract with those validators to make it a valid proposal within the system. And now we want to make sure that we commit it. How we commit it is by, in the near future, selecting a random committee from all validators, who then need to do a couple of things. And it's important that they're randomly selected from all validators, because we want scalability.
If we asked all million validators to validate the transition of one core, it wouldn't work. If we relied just on the core's validators, security would be weakened. So we randomly select from all validators to form this committee, bound to a committee contract, to evaluate the block proposal. And to do that successfully, the committee members must be able to get the previous state of the core that was committed. That previous commit might be a day or a week old, so they need to be able to get a snapshot of that state database. Then they need to be able to get all the transactions that make up the proposed state transition in this meta-block. And, importantly, they need to prove that they recomputed the full state transition, because otherwise they would just be lazy: they'd wait for someone to show up with the result and vote with whomever did the work. So in the committee contract they also need to prove that they did the work: get the old state, recompute the state transition, and show a proof of that work. And then it gets very interesting, because the data-availability problem is a really tough problem; it is subjective. Whether or not I can get the data is a subjective question: I might just have a lagging internet connection, or the data might be withheld from me. So it's very hard to quantify. But here we make an approximation: we say that this newly selected committee, in the future, has to be able to get the old data and recompute it. That's an approximation of the question "were they able to get the data?", which is the data-availability question. As a bit of context, the reason the data-availability problem is really hard is that Ethereum is, as we said, a Pentium 1 processor, so it can't do a lot.
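The random-selection step can be sketched as deterministic sampling over the full validator set. This is purely illustrative: the sorting-by-hash approach and the seed are my own stand-ins, and the real contract would have to derive the seed from on-chain data that no single core can predict or bias:

```python
import hashlib

def select_committee(validators: list[str], seed: bytes, size: int) -> list[str]:
    """Deterministically sample a committee from ALL validators by ranking
    each one on a hash of (seed, validator id). Sketch only: a production
    scheme needs an unbiasable on-chain randomness source for `seed`."""
    def score(v: str) -> bytes:
        return hashlib.sha256(seed + v.encode()).digest()
    return sorted(validators, key=score)[:size]

all_validators = [f"validator-{i}" for i in range(1000)]
committee = select_committee(all_validators, seed=b"meta-block-42", size=10)
```

The key property is that membership depends on the whole validator set and a seed fixed after the proposal, so a corrupt core majority cannot arrange to review its own proposal.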
And if we asked it to re-evaluate all the data, we wouldn't get scalability. But if there were a corrupt majority in the core, they're obviously not going to present the bad data that they want to get committed. So you do need to address some approximation of data availability. And as I said, with a committee formation that's randomly selected, we've re-engaged the whole stake that is present in the contract. We can support up to 10,000 or 100,000 validators; roughly, we aim to have about 100 per core. But of course this depends on demand: if no one's using it, there are no transaction fees paying for validators, and it's not going to grow. One last point, then I'll get back to the higher-level picture and to questions, if there are any. Of course, this wouldn't work if I had to wait for finalization on Ethereum before finalizing my auxiliary system, before knowing whether or not a transaction went through. So it was very important, and a design principle from the start, that the two systems are asynchronously connected, so that there is no time constraint between the two. Whether this process of committing to Ethereum takes a day or a week, and happens once a day or once a week, doesn't matter, because the auxiliary system keeps finalizing its own history at normal Casper FFG speed. And it knows that eventually, if needed, it will be held responsible by a committee: at any point those votes can be brought to the committee contract, and the validators will be held responsible. And then, as a last point: we've established an extension of Ethereum, and now we want to be able to transfer information between the two. So we have a concept of message boxes.
There is an outbox and an inbox on both sides. If you have an outbox on Ethereum 1x, you can put a message into it, and then every time information or consensus flows either way, you can prove all the messages that are in the outbox and copy them into the inbox on the other side. That can be used for locking up tokens and re-minting them on the other side, and for redeeming and unstaking the same way, or for any stateful information you pass through this message box; you can write your own, for that matter. So how do you build on top of this? Right now we have Mosaic 0; we didn't call it that at the time, but now that we call the next version Mosaic 1, we call what we have so far Mosaic 0. You can either work against the EVM interface, and then you definitely need to talk to us because it's still a bit rough, and all the code is on GitHub under OpenST. Or, like I said, we wanted to make sure there were real users, so we've also polished this further for Web 2 applications. If you go to platform.ost.com, it uses all the underlying architecture, and you just have to click through web pages to use it. It uses a contract framework that we call OpenST, it has a non-custodial wallet for iOS and Android, and we have built a relayer service, so meta-transactions to contract wallets that you can implement in your own app, to make transactions happen on Mosaic 0 right now. And anything that's on Mosaic 0 will automatically move to Mosaic 1 when it launches. Mosaic 1 is what we're actively working on. You'll notice this part is a bit bare minimum; my slides are black and white, and that's somewhat deliberate, because as a company we want to build the demand side. If you go to platform.ost, it's all polished and documented, et cetera, and we think that's important to get Web 2 applications on board. But that's the demand side of the blockchain; we explicitly don't want to own the supply side.
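The outbox/inbox pattern just described can be sketched as follows. All class and method names are illustrative, and the proof step is abstracted: in the real system a message would be proven with a Merkle proof against a finalized state root, not by handing over the raw outbox:

```python
import hashlib

class MessageBox:
    """Sketch of the message-box pattern: declare a message in the outbox on
    one chain; once consensus carries the state across, prove and confirm the
    same message in the inbox on the other chain."""
    def __init__(self):
        self.outbox: set[bytes] = set()
        self.inbox: set[bytes] = set()

    @staticmethod
    def message_hash(payload: bytes) -> bytes:
        return hashlib.sha256(payload).digest()

    def declare(self, payload: bytes) -> bytes:
        h = self.message_hash(payload)
        self.outbox.add(h)
        return h

    def confirm(self, h: bytes, proven_outbox: set) -> None:
        # Stand-in for verifying a Merkle proof against a finalized state root.
        if h not in proven_outbox:
            raise ValueError("message not proven in remote outbox")
        self.inbox.add(h)

# Example: lock-and-mint flows both ways through the same primitive.
origin, auxiliary = MessageBox(), MessageBox()
h = origin.declare(b"mint 100 tokens for 0xabc")
auxiliary.confirm(h, proven_outbox=origin.outbox)
```

Because only hashes cross over, the same primitive carries token transfers and arbitrary stateful messages alike.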
So, the validator network. We want it to be an open validator set; we don't want to have our hands on it. Obviously we'll run validators in it, but the whole point of doing this outside the company is to never have owned this piece of code, and that's a big part of why I keep presenting here. We're aiming to have a first version of that by the end of this year, and today I created a new Discord channel; if you want to discuss, you can join there. So for example, what have we already built? We have one application that is on Ethereum mainnet with Mosaic 0, and they're now rolling it out to beta users: beta users of the application with real Ethereum tokens. They had an internal point system that they called Hornet points, because the app is called Hornet; it's a gay dating app, by the way. So people using this app have absolutely no interest in Ethereum or cryptocurrencies; they're there to find guys. But they have this internal community: if you translate, report bugs, or moderate, you earn points, and that's where we started. We transformed those points, internal to the app, into LGBT tokens. Right now it's a very limited use case, but it helps us stay within legal constraints and also test the technology, because the app itself has some 30 million downloads. We have 12,000 beta users and we're slowly rolling it out: in a week, well, a week and a half now, we're at 700, and we'll keep pushing that up to a few thousand. But this was the whole idea for us: let's make sure that Ethereum is used for real use cases. That means that people who don't know about Ethereum can use it, and that means it also needs to be really, really easy. So a lot of our company focus goes into making the UX really easy. For example, we developed a non-custodial smart-contract wallet that lives in your phone.
And you can recover access: if you lose all your devices, you can recover access to the contract in a non-custodial way with just a six-digit PIN, and it takes 12 hours, because there's a delay function on the contract. But that's a different talk. It was very important to us that people who don't know about Ethereum don't have to write down 12 words in order to use this, because then there would never be adoption of the technology. And with that little bit of self-promotion, that's the last slide. Thanks. Yeah, that's it.