Hello, everyone. I'm George Falford, and today I would like to talk about some potential applications of off-chain computation in the light client ecosystem. Maybe you have already heard about off-chain computation and interactive validation. These are really promising and exciting new concepts, but the basic idea is simple: if there is a complex but deterministic calculation that would be too expensive to process directly on the blockchain, it is still possible to have multiple parties evaluate the same function, check the results against each other, and only start an interactive validation process, one that can prove one of the parties wrong, if there is a disagreement. This validation process can be realized inside a contract, so eventually the results of such a calculation can be canonized on the blockchain.

This technology can be used to process large amounts of external data, which could be located anywhere, for example in Swarm, but here I would like to present some use cases where the input data is a blockchain itself: either the same chain where the validation happens or a different one. These use cases are mostly related to event filtering. In the current single-blockchain scenario, event filtering is quite simple: we have contract log events and bloom filters. Even so, doing a full history search with good performance already required some clever performance optimizations, which I will talk about shortly. Later I would also like to talk about the future challenges of event filtering with light clients: with sharding and state channel technologies like Plasma or Polkadot, we will eventually end up with a massive hierarchy of chains and massive amounts of chain data, and we will definitely need some sophisticated filtering methods to make sense of all of it.

First, let me quickly describe the current filtering system in the Go Ethereum client. You probably know bloom filters; they are really simple data structures, just a 2048-bit vector. For every log address and topic, three quasi-random bits are set, and if someone is looking for the same events later, they can check whether those three bits are set and only fetch the block receipts if they are. This is already a good performance improvement compared to checking all the block receipts in the entire history, but checking the filters still means reading the entire header chain, which is already more than two gigabytes. That was somewhat slow even on a full node, and with the light client it is even worse, because downloading and keeping the entire header chain locally is something that some devices just cannot or do not want to do. That is also why we implemented checkpoint syncing: it lets us avoid downloading all the headers, but then we do not have all the bloom filters either.
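To make the bloom mechanics concrete, here is a minimal Go sketch of the header bloom filter as the yellow paper defines it: the three bit positions come from the low eleven bits of the first three byte pairs of the Keccak-256 hash of the value. The `Bloom` type and helper names are illustrative, and the bit ordering inside the 256-byte array is simplified compared to the exact header encoding.

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3" // Keccak-256, the hash Ethereum uses
)

// Bloom is a 2048-bit filter stored as 256 bytes, like the logsBloom
// field of a block header.
type Bloom [256]byte

// bitIndices derives the three quasi-random bit positions for a value:
// per the yellow paper, the low eleven bits of the first three byte
// pairs of Keccak-256(value).
func bitIndices(value []byte) [3]uint {
	h := sha3.NewLegacyKeccak256()
	h.Write(value)
	d := h.Sum(nil)
	var idx [3]uint
	for i := 0; i < 3; i++ {
		idx[i] = (uint(d[2*i])<<8 | uint(d[2*i+1])) & 2047
	}
	return idx
}

// Add sets the three bits for a log address or topic.
func (b *Bloom) Add(value []byte) {
	for _, i := range bitIndices(value) {
		b[i/8] |= 1 << (i % 8)
	}
}

// Test reports whether all three bits are set: false proves the value is
// absent from the block, true only means the receipts are worth fetching,
// since unrelated values can collide on the same bits.
func (b *Bloom) Test(value []byte) bool {
	for _, i := range bitIndices(value) {
		if b[i/8]&(1<<(i%8)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	var b Bloom
	b.Add([]byte("some log topic"))
	fmt.Println(b.Test([]byte("some log topic"))) // true
	fmt.Println(b.Test([]byte("another topic")))  // false with high probability
}
```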
So we needed a cleverer data structure. What we did is take fixed-length sections of consecutive blocks and stack their bloom filters under each other, so imagine a bitmap. When doing a simple search, we are interested in three vertical columns of this bitmap, but because the interesting bits are not tightly packed together, reading those columns still means reading all the bloom filters. If we rotate the bitmap by 90 degrees, the columns become horizontal rows, and we only have to read the three rows that interest us out of the 2048. This optimization alone yields a two to three orders of magnitude performance improvement in log searching, and it also works nicely with light clients. By the way, this requires the second version of the LES protocol, which has just recently been merged into the Go Ethereum code base, and it works very well: it can filter the entire log history in a few seconds.

There was another problem we had to solve to make this work, namely that these data structures are not part of the consensus, so light clients cannot directly validate them, even though light servers can generate and serve them. To solve this, we created a special trie, the bloom filter trie, and organized all the bit vectors into it, so that a light client only needs to know the root hash of the bloom filter trie and can use Merkle proofs to validate everything else.

Of course, the question remains how a light client can trust the bloom filter trie root hash. Currently we do checkpoint syncing with hard-coded trusted checkpoints, which is only a temporary solution, and right now the bloom filter trie root is also hard-coded into these checkpoints. Soon we would like to get rid of hard-coded checkpoints and use trustless validation of checkpoints instead, and to do that we need to somehow validate the bloom filter trie on chain. This is where off-chain computation comes into the picture: all the light servers know the input data, which is the header chain, and this is a deterministic calculation they perform anyway. Servers can send the new root hashes to a judge contract and only do validation if necessary; if a root hash remains unchallenged on the chain for long enough, clients can trust it. Of course, we also need security deposits and other incentives to make the system work, but that is also part of the plan.
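Here is a rough Go sketch of the section rotation described above, reusing the `Bloom` type and `bitIndices` helper from the previous sketch. The section length is an assumption for illustration; the real value is a protocol parameter.

```go
// sectionSize is an assumed fixed section length for this sketch; the
// real value is a tunable protocol parameter (go-ethereum groups a few
// thousand blocks per section).
const sectionSize = 4096

// rotateSection performs the 90-degree rotation: the input is one bloom
// filter per block (len(blooms) is expected to equal sectionSize), the
// output is one row per bloom bit index, where row i packs bit i of
// every block's filter into sectionSize/8 bytes.
func rotateSection(blooms []Bloom) (rows [2048][]byte) {
	for bit := range rows {
		rows[bit] = make([]byte, sectionSize/8)
	}
	for blockNr, bloom := range blooms {
		for bit := 0; bit < 2048; bit++ {
			if bloom[bit/8]&(1<<(uint(bit)%8)) != 0 {
				rows[bit][blockNr/8] |= 1 << (uint(blockNr) % 8)
			}
		}
	}
	return rows
}

// matchSection ANDs the three rows selected by one address or topic;
// every set bit in the result marks a block in the section whose
// receipts should be fetched and checked. A query thus touches only
// three rows instead of every block's full filter.
func matchSection(rows [2048][]byte, value []byte) []byte {
	idx := bitIndices(value)
	out := make([]byte, sectionSize/8)
	for i := range out {
		out[i] = rows[idx[0]][i] & rows[idx[1]][i] & rows[idx[2]][i]
	}
	return out
}
```

For a full-history search, a light client would fetch only the three relevant rows per section, validated by Merkle proofs against the bloom filter trie, and run the matching step locally.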
Now let me also talk about what I imagine the future challenges of massive chain hierarchies and scaling will mean for light clients, and what we can do about it. Here is an example situation: a user wants to observe a subset of the chain hierarchy, which could still be quite a large subset. Our imaginary user uses a decentralized marketplace that has multiple state channels for listing different market offers, say one for listing crypto-versus-fiat money trade offers and another for listing second-hand cars, and a local news service that has a state channel for local ads. The user wants to observe these chains and get notified about interesting events, and the filtering criteria for interesting events might be even more complex than what can be expressed with our current simple log address and topic system.

So it would be great if clients could somehow get some help with filtering. Also, some state channel technologies use such low redundancy that their security model is based on the assumption that interested parties actually validate the chain, which is again something light clients cannot do, so they would also need some assurance to be able to trust the validity of these chains. I think it is possible for them to hire light servers to do all of this for them, and it would be even better, and still possible, to build a filtering and observing hierarchy for each client who is running complex applications. This hierarchy could run on multiple light servers and deliver just the interesting results from the entire world computer to our clients, who can then run with very low resource requirements.

I would like to show how I think this is possible, so let me define two very simple primitives that could achieve it. One of them is called the chain filter. A chain filter is a deterministic set of operations performed on an input blockchain, specified preferably in a virtual machine that is suitable both for just-in-time compilation and for interactive validation. A chain filter can have its own state but not its own consensus mechanism, because chain filter blocks are deterministic functions of previous chain filter blocks and new input blocks; whatever consensus mechanism the input chain uses, the chain filter just follows it. Chain filters can be used to realize user-specific filtering criteria, so they are of course also use cases of off-chain computation, but unlike the bloom filter trie, they are user-specific.

The other primitive I would like to show is called the observer chain. An observer chain belongs to a single node, a single light server, and it is validated by a single signature. What an observer does is process multiple observed chains and create observer blocks that contain the new latest heads of these chains. The observer chain is also backed by a security deposit at a judge contract, and the observer has to defend the validity and availability of these chains, or at least their latest sections, on request. The observer chain can follow public chains, private chains, state channels, chain filters, and even other observer chains.
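As a minimal sketch of how these two primitives might be expressed in Go; everything here is hypothetical, since the talk defines the concepts but no concrete API:

```go
// Hypothetical shapes for the two primitives; every name is illustrative.

// A ChainFilter is a deterministic set of operations over an input chain:
// the next filter state depends only on the previous state and the new
// input block, so the filter has state but no consensus of its own and
// simply follows whatever consensus the input chain uses.
type ChainFilter interface {
	// Apply folds one new input block into the filter state and returns
	// any events that matched the user's filtering criteria.
	Apply(prevState []byte, inputBlock []byte) (nextState []byte, matches [][]byte)
}

// An ObserverBlock is produced by a single light server and validated by
// a single signature. It commits to the latest heads of everything the
// observer follows: public chains, private chains, state channels, chain
// filters, or other observer chains. The signature is backed by a
// security deposit at a judge contract, so the observer must defend the
// validity and availability of the observed chains on request.
type ObserverBlock struct {
	Number     uint64
	ParentHash [32]byte
	Heads      map[string][32]byte // observed chain or filter ID -> latest head hash
	Signature  []byte
}
```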
Now let's see how we can realize a filtering and observing hierarchy with these primitives. Here is an example scenario with a public chain and two state channels, where the user has defined its own chain filters, called myEvents, for all three of them. These chains and chain filters are processed by hired servers, servers one, two, and three in this case, who give certificates about the validity of the input chains and also about the results of the user's chain filters. In some cases it would be enough to hire just a few servers and collect the results, but it is possible that the hierarchy is so huge and we want to observe so many chains that we need additional layers of servers. In this example, servers four and five observe the observer chains of servers one, two, and three and run their own chain filter, called collectEvents, that filters and collects all the interesting events for our client. The client can then very conveniently contact just the last layer of helping servers and receive only the interesting results. Of course, we need some redundancy to make sure this system works correctly: if we have multiple paths leading to every interesting chain or chain filter, the client can always detect when it receives different results from different directions. It can then investigate the observer chains and chain filters of its hired servers, or raise an alarm and notify other clients about a suspected fraud, and eventually, if there really does seem to be a fraud, start an interactive validation process to try to prove it.

All of this is of course quite far from the original idea of the light client, but I believe it still fits well with the philosophy the original light protocol is based on. The idea is that in this massive world computer ecosystem there are nodes with very different capabilities, and small entities will always require help from bigger entities. What we can do to avoid the concentration of power is create standardized protocols that make the bigger helper entities interchangeable, creating a liquid market of services: whenever someone wants to stop providing services or raises their prices too high, other servers can just take over, and no one can stop them from doing so. That way we can ensure continuous service at reasonable prices. We can also still provide security, not with regular Merkle proofs as in the classical light client, but by increasing the cost of an attempted fraud with security deposits and by reducing the risk of a successful fraud through multiple paths that detect any attempted fraud right away.

Of course, there are still some details that are not worked out, and right now our main development priority is still making the classical light client reliable and usable, but when setting the direction of new development it is always important to keep the future challenges and the long-term goals in mind. I think the long-term goal is a massively scalable, high-performance world computer that even small embedded and mobile devices can safely access. And I believe that, in addition to the massive chain hierarchy that provides a global consensus, the client-specific observing and filtering hierarchies belonging to the unique perspectives of individual applications will be equally important in this ecosystem. I hope you found my own unique perspective on the future of Ethereum interesting, and thank you for your attention.