Okay, so I will try to explain what TrueBit is all about, why we need it, and what it can be used for. To start, I would like to talk about scalability. This is a chart of Bitcoin transactions per day, starting in January 2009 and running up to now. It's a linear scale and you can already see that it's growing, not so much in the last months, but still. And here is a similar chart for Ethereum, also transactions per day, starting in July or August 2015 until now, and it's also growing. So the issue is that blockchains currently do not scale, and this is one of the main debates in the Bitcoin community. But what does it mean to scale? To put it simply, something scales if it performs equally well as it grows. For Bitcoin, there is empirical evidence that this is not the case: it gained more and more popularity over time, and now it's harder and harder to get transactions accepted, and they also get more expensive. And of course, this problem exists in exactly the same way for Ethereum, perhaps even worse since transactions can be more complicated, but Ethereum has not hit this limit yet. And what is the reason that blockchains don't scale? The simple answer is that every full node has to process, and verify, every single transaction. This does not work well because the block time, the time between two blocks, is fixed, so if the system grows, which means it has more and more transactions, then every node has to process more and more transactions in the same time interval. Scalable database systems, in contrast, have the property that if usage increases, you can just add more servers and that balances it out. So by adding more nodes, you can cope with higher demand.
And that is not possible with blockchains, because of this property that every full node has to process every single block. And of course we have this property because of trust: if we distributed transactions across just some nodes, then those nodes might be able to cheat, because not all other nodes verify the transactions. So as I already said, scalability is exactly the same problem in Ethereum. The difference between Ethereum and Bitcoin is that we were very well aware of it from the very start. That does not mean that we knew the solution from the start, or that we had some way to already prepare for scalability, but we at least knew that we would have to do at least one hard fork to cope with the scalability problem. And there are at least three proposals for how to scale the Ethereum blockchain. The first one is the Casper research program. Casper is not just about proof of stake but also about scaling the blockchain. The idea in Casper is a concept called sharding, more or less the same concept that is also used in scalable distributed databases: you send transactions to only certain nodes for verification. Casper makes this secure again by rotating these verifiers at regular intervals. Then there is Raiden, the Ethereum analogue of Bitcoin's Lightning Network. There, the idea is that you scale by moving transactions off-chain: you don't process them on the blockchain but in a different network, and at the end you group multiple transactions into a single transaction and put just that one on the blockchain, in very simple words. So you fit many "transactions", in quotes, into a single actual transaction on the blockchain. That's how it scales. And TrueBit is the third way. There, the idea is to scale computation.
I will explain a little bit later what that actually means, and the way to scale it is interactive verification, which we will also go into in detail. A nice thing to note is that only Casper requires an actual fork, because the other two approaches can be implemented directly on the smart contract mechanism. Okay. Why do we need to scale computation? Ethereum has smart contracts, which are computations running on the blockchain, but they are limited in complexity and resource usage, because of the fact that every full node has to process every transaction. With TrueBit you can get smart contracts which essentially have no gas limit, and which can be written in any programming language. The reason for that is: if you don't have a gas limit, you can just take a program written in Python, put the Python interpreter on the blockchain, and run the Python program through it. Ethereum smart contracts could be driven by neural networks, so we could have artificial intelligence on the blockchain. And you could even have file system access in smart contracts: a smart contract could access a terabytes-big file and read a single chunk from it, or even compute a big sum over all the entries in this gigantic file. Of course, these smart contracts will not run directly on the blockchain, since we know that doesn't scale, but the trust guarantee will be the same as if they did. So this was a bit abstract; here are some more specific, practical examples. With TrueBit you can link multiple blockchains. There is this Dogecoin-Ethereum bridge project, which tries to create a bridge not only from Dogecoin to Ethereum, but also from Ethereum to Dogecoin. The idea is that you can take Dogecoin and move it to the Ethereum blockchain, where it becomes an independent token.
You can move it around as a token there and then also move it back, destroying the token and releasing or generating new Doge on Dogecoin. For that, you basically need a light client for each network running inside the other network. And for the light client for Dogecoin running on Ethereum, well, as I said, if you don't have a gas limit, you can implement anything, including verifying the full Dogecoin blockchain on Ethereum. Another project is Golem, a project where you pay other people to do your computations; their white paper mentions TrueBit as the way to verify that those computations were done correctly. And another example is Livepeer. They want to create a video streaming platform where people are paid to encode videos, and they want to use TrueBit to verify that the encoding was done correctly. Okay, another very nice property of TrueBit I want to mention is that it has so-called unanimous consensus. Perhaps I should first explain the general framework. TrueBit works like this: you have a computation task, a program to run. Someone runs the program off-chain and puts the result on the chain. Then anyone can rerun the same computation and check that it was done correctly, if they want. This is similar to a blockchain, where you have multiple miners who process all transactions and verify each other. On a blockchain, if you have a disagreement, you get a fork, and if you can convince over 50% of the hash power that your version is the correct one, then you basically convince the whole network. Not exactly like that, but roughly. TrueBit is different, because you have to convince every single person. As long as there is a single honest person who checks the computation, posts on the blockchain that he or she disagrees, and is actually right, the wrong result will not go through.
Someone asked whether a challenge is only needed when there is a dispute, rather than for every computation: "I don't submit a challenge for every computation, only if I ran the computation myself, think the result is wrong, and want to dispute it, but if everything runs fine and everybody agrees, nothing extra happens?" Yes, exactly, only in the case of actual cheating. Another question, about how the verification is actually done: we'll get to that. So, some of you were here for an earlier talk on the same topic. This first line was already in that talk, but the second line is new. Some months ago we said: a single honest verifier suffices, so that nobody can cheat. But it turned out that this single honest verifier is not always there. So we added an economic incentive mechanism to ensure that this single honest verifier actually is always there. There are other projects which do similar things, like Golem and iExec, but the difference is that they focus mainly on performing or outsourcing the computations, not on the fact that they are done correctly. TrueBit, on the other hand, focuses mainly on correctness, and not on reducing costs or anything like that. So TrueBit is really about scaling what can be done inside a single transaction. Someone asked: if you don't have a gas limit, how do you deal with the halting problem? You're right, so to be precise: these are Ethereum smart contracts, but with a very much higher gas limit. TrueBit still has a concept of gas, but it's much more relaxed. Okay, so this was the non-technical part. I hope it's all still understandable. So how do we do that?
Yeah, as we already said, the main reason we cannot scale computation is that everyone has to compute everything. So the solution, obviously, is to build a system where not everyone has to compute everything. I have a computational task and post it on the blockchain. Two or three people take it on, perform the computation, and post the result on the blockchain. And the main point here is not that we just take the majority answer. The main point is that if we have any disagreement, then these people go to court, and court here means the blockchain. We have a smart contract judge who finds out, without error, who gave the wrong answer. And again, the simple solution would be for the smart contract to just rerun the full computation, but that, again, doesn't scale. So checking who was in error has to be magnitudes faster than actually running the computation. Okay, and how do we do that? We use a concept called the verification game, and the key idea here is binary search. We'll see how that works in a minute. A computation in a simple computation model always consists of some kind of steps. Say we have a computation which starts at step one and ends at step one million, where every single step is quite small and easy to follow. And we have a proposer with an input and a challenger with the same input, but at the end they come up with two different outputs. That's their disagreement. What they do is rerun the whole computation while taking a snapshot of memory at every single step, computing a Merkle tree of that snapshot, and storing the Merkle root. They do not submit all the Merkle roots to the chain right away; they just store them. The Merkle root at the beginning will be the same for both, because they start from the same fixed setting, and it will be different at the end, because their memory content, their output, is different.
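As a rough illustration of the snapshot step, here is a minimal Merkle root computation over fixed-size memory chunks. This is a sketch, not the actual TrueBit implementation: SHA-256 stands in for whatever hash the real system uses, and the chunking scheme is a made-up example.

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in hash function; the real system's choice is an implementation detail.
    return hashlib.sha256(data).digest()

def merkle_root(memory: list) -> bytes:
    """Compute the Merkle root of a memory snapshot split into byte chunks."""
    level = [h(chunk) for chunk in memory]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Identical snapshots yield identical roots; one differing chunk changes the root.
a = [b"chunk0", b"chunk1", b"chunk2", b"chunk3"]
b = [b"chunk0", b"chunk1", b"chunkX", b"chunk3"]
assert merkle_root(a) == merkle_root(list(a))
assert merkle_root(a) != merkle_root(b)
```

The point of using a Merkle root rather than the full snapshot is that a 32-byte commitment to an arbitrarily large memory state can be posted and compared on-chain cheaply.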
And then the smart contract, the judge, comes and says: take the step in the middle here, and asks both parties for the Merkle root of their state at that point in time. Suppose they give the same answer. That means that somewhere in the second half there has to be at least one single step where they go from agreement to disagreement: both parties were in agreement here, but they are in disagreement at the end, so somewhere along the line they have to move from agreement to disagreement. And that is exactly the point we try to find using binary search. The smart contract asks for the middle position of the remaining interval; if the answers there agree, the search continues in the later half, and if they differ, in the earlier half. This goes on and on, and because the size of this time interval always halves, at some point we reach a situation with one step where both parties agree and the next step where they disagree. The good thing about that is that the smart contract can then start from the agreed state at that step, recompute just that single step, and check which result is correct. A single step is easy to compute: we do have some computational power on the blockchain, it's just very limited, and a single step easily fits within it. Okay, so this takes about 20 rounds, which is quite long, I would say. I mean, it's still tiny compared to one million steps; that's what I meant by checking having to be magnitudes faster on-chain than running the full computation. But still, the good news is that these 20 rounds can be reduced further. And the important property is that the cheater is caught with certainty; there's no way to get around that. Because of that, if you try to cheat, you already know you will lose that game if someone watches, so why would you want to play the game? So why would you want to cheat?
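A toy sketch of that bisection, assuming each party has already stored one Merkle root per step. In the real protocol the judge queries the parties over multiple on-chain rounds; here the judge's logic is condensed into one function, and plain integers stand in for the Merkle roots.

```python
def verification_game(roots_p, roots_c):
    """
    Binary-search for the step where proposer and challenger first diverge.
    roots_p / roots_c: one state commitment per step, from each party.
    Returns (i, rounds): the parties agree at step i and disagree at i + 1,
    so the judge only needs to re-execute step i + 1 on-chain.
    """
    lo, hi = 0, len(roots_p) - 1
    assert roots_p[lo] == roots_c[lo] and roots_p[hi] != roots_c[hi]
    rounds = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if roots_p[mid] == roots_c[mid]:
            lo = mid   # still in agreement: divergence is later
        else:
            hi = mid   # already disagree: divergence is earlier
        rounds += 1
    return lo, rounds

# Toy trace of a million steps: the challenger diverges from step 700_000 on.
n = 1_000_000
p = list(range(n))
c = [x if x < 700_000 else -x for x in range(n)]
step, rounds = verification_game(p, c)
# step == 699_999, and rounds is about log2(1_000_000), i.e. ~20.
```

This is where the "20 rounds" figure in the talk comes from: log2 of one million is just under 20, and each round needs only a constant amount of on-chain work.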
So as long as you know that someone watches, you will probably not cheat, which means that this verification game is actually never played. But it has to be there, in code and correct, for this whole mechanism to work. Yeah, so if there's someone who watches, then the verification game will not be played, but still everything works out. And now the problem: suppose everything works perfectly, nobody cheats, and there is one honest verifier who verifies everything. The way the game works is, of course, all deposit-based. All participants put a deposit on-chain, and if they are found guilty of cheating, their deposit is destroyed and the people who discovered that fact get a reward. So the verifier will get a reward if they find an error. But if everything works fine, then they never find an error, so they never get a reward. And this is actually a very fundamental problem, because as a verifier, how do you prove to the system that you checked the computation? At the end you can say, oh yes, seven, that's also the result I got, but you knew the result beforehand, so you could just have copied it. So over time, the verifiers will lose interest, because they are not paid in any way but have to do exactly the same work as the solvers. They will stop looking, and as soon as the solvers notice that nobody is looking anymore, they can cheat, and the whole system breaks down. So we need a way to pay verifiers for their work. And the solution to that is something called forced errors. The way it works is that you basically inject an error into this whole process, which can then be detected by the verifiers, and then they can be paid for that. Okay, perhaps I should have explained the whole setting a bit more, sorry about that.
So if you want the TrueBit system to solve a task for you, you have to pay a fee. This fee is in part paid out to the person who posts the solution to the task, and in part it is saved in an account called the jackpot. If such a forced error occurs and a verifier finds the error, then the verifier is paid from that jackpot. And of course it's a bit different for the solver: the solver cannot be punished, because the solver had to inject this forced error. Whether a forced error is required is a pseudo-random but deterministic process, and the solver cannot choose to include or not include it. Okay, is it clear how that works? Someone asked what the pseudo-randomness is based on. True randomness doesn't exist on a blockchain; that's why we need pseudo-randomness, where multiple factors go in, in a process that can be verified by a smart contract. Another question: who injects the error, the proposer or the solver? The solver. So there's a proposer who gives the task and a solver who solves the task and posts the solution to the blockchain. And how does the solver know that he has to include an error? It's some deterministic function taking into account, I think, the block hash and perhaps also the hash of the solution. So the idea is that the solver knows when this just happened. Okay, let's perhaps skip to the next slide, where we'll find another problem. So the solver knows that this happened; the solver is forced to inject a forced error, this condition is also verifiable by the smart contract on-chain, and if the solver does not inject the error, then he or she will be punished. And the problem is now: everyone who detects this error gets a reward.
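A hedged sketch of what such a forced-error condition could look like. The talk only says it is a deterministic, contract-verifiable function of things like the block hash and the solution hash; the exact inputs, hash function, and rate below are assumptions for illustration.

```python
import hashlib

# Illustrative rate: force an error on roughly 1 in 100 tasks (an assumption;
# the talk mentions "every 100 tasks" as the order of magnitude considered).
FORCED_ERROR_RATE = 100

def forced_error_required(block_hash: bytes, solution_hash: bytes) -> bool:
    """Deterministic, verifiable condition deciding whether this task
    must contain a forced error. Both the solver and the smart contract
    can evaluate it; a verifier cannot, without first recomputing the
    correct solution to obtain solution_hash."""
    digest = hashlib.sha256(block_hash + solution_hash).digest()
    return int.from_bytes(digest, "big") % FORCED_ERROR_RATE == 0
```

The key design property is the last point in the docstring: because the condition depends on the hash of the correct solution, an outside observer cannot tell which tasks carry a forced error without re-executing them, which is exactly the work the mechanism is meant to incentivize.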
And now the solver can of course notify verifiers of the fact that a forced error just happened. In the ideal world, the solver injects the error but does not tell anyone about it, because we want the verifiers to actually re-execute everything. So, okay: these rewards for finding forced errors are of course not really about the forced errors themselves. We pay the verifiers rewards at forced errors because that's where we can pay them, but the actual idea is that we want the verifiers to verify all the other tasks too. They have to verify everything, because such a forced error might happen in any single task, and you only know that it happened after the fact, basically. So as a verifier, you have to acquire the information that a forced error happened, and the way we want you to acquire that information is by re-executing the task. But there's a second way to acquire this information, and that's by asking the solver, right? Because the solver knows that the error happened, since they had to inject it. And of course, on a smart contract blockchain, all sorts of weird bribery constructions are possible, so it's quite trivial to add a mechanism where verifiers pay solvers to give them this information. Someone asked to confirm: the verifier is only paid out when there is an error, but the verification work itself cannot be proven? Yes, exactly: as a verifier, there's no way to prove that you re-ran the computation, so the verifier is only paid out when there is an error, which happens at random, more or less regular intervals. The forced error is the way for the verifier to be paid out.
Yes, exactly, you can compare it to mining, where you get a payout not with every block, but at certain times. Someone remarked that this might shift the verifiers' attention toward hunting for the injected errors rather than doing useful verification: people would be more motivated to check for those, and it might decrease their motivation to check everything else. Sure, but as a verifier, since you are only paid for errors, you would want to verify only the tasks with forced errors, and there's no way to find out which those are in advance. There are only two ways to find that out: one is re-running the computation, and the other is getting the information from the solver. Someone suggested instead randomly paying people just for re-running computations. Well, we don't care who actually runs the verification, as long as someone does. It's like a search dog: you have to plant fake finds to keep it interested. And the point is that finding the forced errors requires exactly the same work as honest verification: you have to run everything. So the question remains: how do you prevent the solver from telling the verifiers about a forced error? Someone suggested having the system choose the verifiers, so you would know how much independent validation each computation received. Well, on a blockchain, you never know who anyone is; there is no reputation system attached to the solvers, so that's difficult. And also, we want this to be tight.
Tight also against arbitrary bribery smart contracts; that's why just sampling is not the best solution, I think. Someone asked: randomly changing the solvers? No, not the solvers, the verifiers. Something I perhaps didn't say clearly is that anyone can be a verifier, and you can also step in from outside and challenge a computation at any time. That's a different model from one where you are required to verify the computation; if you require that, then you have to punish people when they do not verify, which is a tricky thing to do. Okay, but another question: does every computation take just one or two transactions? So, this is meant for large computations, yes, and the whole process takes multiple transactions, yes. Okay, so it's not scaling the number of transactions, but the amount of computation inside a transaction? Right: it takes a constant number of transactions in the good case, where we don't have to run the verification game, but the length of the computation can grow arbitrarily. Exactly, so TrueBit does not let you do more transactions; it scales the amount of computation per transaction. Though you could, of course, bundle many transactions together into one big computation. Okay, now to how you prevent the solver from sharing the information about the forced error with the verifiers. First of all, we don't prevent the solver from also challenging their own computation in case of a forced error. That's a good thing.
The solver just gets another reward, and we can factor that in with the regular rewards, so that's fine. But the idea is that the reward decreases dramatically the more challenges there are. These numbers can actually be improved, but this is how it is written in the white paper, so I will explain it like that. If there is only one challenge, which is usually just the solver challenging him- or herself, the challenger gets a reward of 100; that's just some fixed number. If there are two challenges, the two challengers get 25 each, so 50 in total. So if the solver shares that information with someone, the solver's reward decreases from 100 to 25, and that's not something you would like to do. And if you are a verifier who acquired that information by re-running the computation yourself, you also don't want to share it, because if only you and the solver found it out, your reward decreases from 25 to 8.3. So that's the way to solve this information-sharing problem. Someone asked: this is all bounded by what the computation is worth, right? If the computation moves a lot of money? Yes, of course: this all runs on a blockchain, and even the blockchain has an upper limit on the value you can safely attach to what you do. If you put something on a blockchain which is worth more than the whole blockchain, an attacker can just attack the blockchain and somehow get that thing. Then two more questions. The first one is connected, but it's not a problem for this solution, it's a different attack: can I, as a verifier, simply wait until someone challenges, and then only run that particular computation, because I can be quite sure there's a problem in it? That would always pay off without spending much effort. Yes, very good question.
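The reward schedule above can be written down compactly. This reproduces the 100 / 25 / 8.3 figures from the talk; the white paper's exact formula may differ, and 100 is just the illustrative single-challenger payout.

```python
def challenger_reward(jackpot: float, k: int) -> float:
    """
    Reward per challenger when k challengers report the same forced error.
    The total payout halves with each additional challenger and is split
    evenly: one challenger earns J, two earn J/4 each, three earn J/12 each.
    """
    total = jackpot / 2 ** (k - 1)
    return total / k

assert challenger_reward(100, 1) == 100
assert challenger_reward(100, 2) == 25
assert round(challenger_reward(100, 3), 1) == 8.3
```

Because each extra challenger more than halves everyone's individual payout, sharing the forced-error information is strictly unprofitable for whoever already has it, which is the deterrent the talk describes.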
That's especially relevant for the solver, because the solver immediately knows there's a forced error, so you could just watch whether the solver challenges or not. Because of that, you have to randomly make something called fake challenges. There has to be some commit-reveal mechanism where you first post something to the blockchain which looks like a challenge, but in the end turns out not to be a challenge. Okay, so that's part of it. But there's another problem, the questioner continued. Say you have a forced error every 10 computations or whatever, and each computation requires quite a lot of work for the solver. What he would really like to do is just post fake results; he earns a lot of money because he doesn't need a computing cluster behind him. So what he could still do is say: okay, I just share the forced-error information anyway. For those forced errors I don't get any money, because I share the information and everyone can challenge, and so on. But I save all the money for the other tasks where I don't have to run my computing cluster, because since I always share the information about forced errors, no one will ever look at my other computations: they know that if I didn't share anything, there won't be an error. So no one will challenge any of my other computations. So there could be a mismatch between the money you can earn on the forced errors and the money you can save by not having to run the other computations. Well, this jackpot payout is the only reward the verifiers get for verifying all the tasks since the last forced error (and if you do it every 10 tasks, we were more thinking of something like every 100 tasks), and the amount of work they put in is the same amount of work that the solver puts in.
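The fake challenges mentioned above rest on a standard commit-reveal pattern, sketched here. This is an illustration, not TrueBit's actual contract code; the encoding of the committed flag and the hash function are assumptions.

```python
import hashlib
import secrets

def commit(is_real: bool, nonce: bytes) -> bytes:
    """On-chain commitment to a challenge. Real and fake challenges
    produce indistinguishable commitments until the reveal."""
    return hashlib.sha256(bytes([is_real]) + nonce).digest()

def reveal_ok(commitment: bytes, is_real: bool, nonce: bytes) -> bool:
    """Later reveal: the contract checks the opening matches the commitment."""
    return commit(is_real, nonce) == commitment

nonce = secrets.token_bytes(32)
c_fake = commit(False, nonce)
# Observers cannot tell c_fake from a real challenge before the reveal...
assert reveal_ok(c_fake, False, nonce)
# ...and the committer cannot later claim it was something else.
assert not reveal_ok(c_fake, True, nonce)
```

The random nonce is what prevents observers from simply hashing both possible flags and comparing: without it, a one-bit commitment would be trivially invertible.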
So the amount that is paid out here has to be extremely large, comparable to the sum of all the computation fees paid for the tasks since the last forced error. Okay. Question: you're talking about running arbitrary code, right? Like running some Eiffel software or some crazy thing. But at the same time you gave a couple of examples where on-chain computation is needed, for injecting the error as well as for recomputing the single step between the right result and the wrong result. Wouldn't that require the on-chain computation to run Eiffel code or Python code or something? So, we're planning to implement this single-step verification for an architecture called Lanai, which was developed by Google for some networking processor. This architecture is quite simple, and Google wrote an LLVM backend for it, so you can compile C, C++ and Rust code to it. And I'm not sure what Eiffel is written in; Eiffel is a language from the 90s, I just think it's very relevant, the questioner joked. But yes: anything that compiles via LLVM works, and even if you have an interpreter written in C or C++, then you can also run programs through that. Question: how about having access to chain data, like the block number, block hash and so on? Yeah, I mean, a task is not part of a block, so that doesn't really make a lot of sense. Also, this whole scheme only works for pure functions, so you have to include the full state as input and output. You can work on Swarm files, as I said, but that requires a proof that the file is actually available, which is possible in Swarm, at least in the final version.
And that's why you can basically put the hash of the Swarm file as an input and then read the file during the computation. Blockchain state cannot be accessed during the computation; if you want to use state, it has to be part of the input. In the extreme, you could take the whole blockchain as input and run a single block on top of it. Question: are there always multiple solvers for every computation, or is one solver sufficient? That's something we still have to work out in experiments; it depends on the reliability, but perhaps one or two. Is waiting for a competing solver and re-submitting the same result a possible strategy to cheat the system? Well, the current model works with only one solver: you post a task, a solver commits to the task, runs it and posts the solution, and then there are verifiers. Question: you said there are two solutions and then a binary search comparing the two solutions; how is this done when there's only one solver and one solution? There's a solver and a verifier, which are distinct roles; the verifier takes the part of the challenger in the game. Question from the back of the room: when did you release the white paper explaining this? Two weeks ago, I guess, the technical white paper at least. Next question: maybe I should just read the white paper and then ask you this, but another attack I came up with: in principle, you want a situation where for every solver there are always verifiers watching, right? That's what we want.
So in the end, every time there's a forced error, there won't be only one challenge, because the solver will challenge and one, two, three, four, five, six verifiers will challenge. And at some point you reach an equilibrium, because if there are too many verifiers, they won't earn anything, and if their number decreases, their earnings increase, until there's some kind of equilibrium. Okay, say the equilibrium is the solver plus three verifiers, just for argument's sake. Wouldn't it then make sense for the solver to just create four accounts and always submit his challenge plus three fake ones, so everyone looking at that sees, ah, there are already enough verifiers, it doesn't make sense for me to join in? And then we have the same problem as before. Yeah, that's called the scaring-off-verifiers attack, and there is a solution to it. The idea is that there are always multiple parallel tasks, and verifiers choose randomly between these tasks. It's a bit involved, so I won't go deeper here. Question: this is really cool, what is the overall roadmap? We don't have a full roadmap yet, but we're actually planning to implement this, and we're looking for both funding and developers to do that. The exact specifics are not clear yet. Yeah, to wrap up: I explained how TrueBit scales trusted computation. The only requirement is a working trusted computation mechanism with limited capacity, i.e. a blockchain, and it can be scaled up to more or less unlimited capacity, with unanimous consensus. The technical white paper has a complicated link, which can also be found on the TrueBit website. Okay, thank you. One last question: in terms of scalability, is this an either-or scenario, or can TrueBit run simultaneously with the other proposed solutions? It can run simultaneously; it doesn't require a hard fork.
It's just another smart contract on Ethereum. Okay. So the whole system does not require Ethereum specifically, but just some smart contract blockchain system, some trusted execution environment.