Hello, everyone. I hope you're all excited to hear the latest news about the light client. The good news is that it's already in the public beta testing stage and is being prepared for the official release. If you would like to try it, I will show the necessary links at the end of my presentation.

If you haven't tried it yet, the first thing you will notice is that it syncs very fast. This is mainly because it doesn't have to process state transitions; it just downloads and checks headers, and processing headers is a lot faster than processing entire blocks. Our current implementation can process five to ten thousand headers per second on a good desktop computer.

To improve syncing even further, it is possible to start syncing from a trusted checkpoint, represented by the root hash of a Merkle trie containing all previous block hashes; this structure is called a canonical hash trie. It also allows access to headers that weren't downloaded during the initial sync, which can be useful for searching for old logs or accessing old transactions. Currently there is such a checkpoint hard-coded into the client, but in the future, if we can make these checkpoints, or some equivalent information, part of the consensus, then it will be possible to obtain them from the servers in a safe and trustless way.

In addition to fast syncing, another important feature of the light client is its generally low resource requirements. Since everything can be fetched on demand, the database basically acts like a cache and can be kept really small. Memory requirements are also significantly lower than with a Geth full node, mainly because we don't have to process entire states, and this aspect of the implementation can be improved even further. And it still provides an IPC interface that is compatible with the existing full node interface. It's not perfect yet, but it can already work with Mist.
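To make the header-only sync idea concrete, here is a minimal sketch of the cheap check a light client performs on a downloaded batch of headers. It is illustrative only: the `Header` struct, the toy SHA-256 hash, and the function names are my own simplifications, and a real client also verifies the proof-of-work seal and difficulty, which this sketch omits.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Header is a simplified stand-in for an Ethereum block header;
// the real one carries many more fields (state root, difficulty, seal...).
type Header struct {
	Number     uint64
	ParentHash [32]byte
}

// Hash is a toy header hash; the real client uses Keccak-256 over the
// RLP encoding of the full header.
func (h *Header) Hash() [32]byte {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], h.Number)
	return sha256.Sum256(append(buf[:], h.ParentHash[:]...))
}

// VerifyChain checks that a batch of downloaded headers forms a valid
// chain on top of a trusted head: numbers increase by one and every
// ParentHash matches the hash of the previous header. This is the cheap
// check that lets a light client sync headers only, without executing
// any state transitions.
func VerifyChain(trustedHead *Header, batch []*Header) error {
	prev := trustedHead
	for _, h := range batch {
		if h.Number != prev.Number+1 {
			return fmt.Errorf("header %d: non-contiguous number", h.Number)
		}
		if h.ParentHash != prev.Hash() {
			return fmt.Errorf("header %d: parent hash mismatch", h.Number)
		}
		prev = h
	}
	return nil
}

func main() {
	genesis := &Header{Number: 0}
	h1 := &Header{Number: 1, ParentHash: genesis.Hash()}
	h2 := &Header{Number: 2, ParentHash: h1.Hash()}
	fmt.Println(VerifyChain(genesis, []*Header{h1, h2})) // <nil>
}
```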
Hopefully tomorrow you will also see a nice Mist demo using the light client, by Alex. Several people have already successfully tested it on smaller devices too: this picture is from Martin Brooks, syncing on an Intel Edison, and this screenshot is from a Raspberry Pi running Mist on the light client, courtesy of John Garry, who by the way also donated a light server to help the public testing and, like other community members, provided a lot of useful feedback during testing.

In addition to basic protocol functionality, another important question is whether all of this can work at a large scale with good performance. The basic client strategy is simple: it always tries to have a few active server connections, selected randomly from a suitable peer set, and whenever one of them seems slow or unresponsive, it drops it and looks for another one. Servers can take care of themselves by limiting the bandwidth spent on clients, and dropping them if necessary, so they limit the time and resources spent on serving clients.

For limiting client bandwidth, though, we needed a smarter mechanism than simply delaying request replies, because that would ruin the user experience of the client, which depends heavily on quick server responses. This is why we created client-side flow control, a simple feedback mechanism that can tell clients when they can send their next requests, so that the clients can better distribute their requests among their few server connections.
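The flow-control feedback loop can be sketched roughly like this: the server announces a buffer limit and a recharge rate, every request type has a cost, and the client only sends when its local buffer estimate covers the cost. This is a minimal illustration of the idea; the struct and method names are my own, not the actual LES wire-protocol fields.

```go
package main

import (
	"fmt"
	"time"
)

// FlowControl is a client-side sketch of the mechanism described above.
type FlowControl struct {
	bufferLimit  uint64 // maximum buffer value announced by the server
	rechargeRate uint64 // buffer units recharged per second
	buffer       uint64 // current client-side estimate
	lastUpdate   time.Time
}

func New(limit, rate uint64) *FlowControl {
	return &FlowControl{bufferLimit: limit, rechargeRate: rate,
		buffer: limit, lastUpdate: time.Now()}
}

// recharge credits the buffer for the time elapsed since the last update,
// capped at the announced limit.
func (fc *FlowControl) recharge(now time.Time) {
	gained := uint64(now.Sub(fc.lastUpdate).Seconds() * float64(fc.rechargeRate))
	fc.buffer += gained
	if fc.buffer > fc.bufferLimit {
		fc.buffer = fc.bufferLimit
	}
	fc.lastUpdate = now
}

// CanSend reports whether a request of the given cost may be sent now;
// sending too early would risk immediate disconnection by the server.
func (fc *FlowControl) CanSend(cost uint64) bool {
	fc.recharge(time.Now())
	return fc.buffer >= cost
}

// Sent deducts the cost after a request goes out.
func (fc *FlowControl) Sent(cost uint64) { fc.buffer -= cost }

func main() {
	fc := New(1000, 100)
	fmt.Println(fc.CanSend(300)) // true: the buffer starts full
	fc.Sent(300)
	fc.Sent(700)
	fmt.Println(fc.CanSend(300)) // false: buffer exhausted, wait to recharge
}
```

Because the client throttles itself this way, the server never has to queue requests and can answer each one immediately.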
If they send a request too early, they get immediately disconnected, so they shouldn't do that. But these rules have the advantage that requests never get queued up on the server side and can therefore be answered immediately. This mechanism can ensure a good distribution of server load throughout the entire network, but we also need some market forces to incentivize the running of good servers.

In theory, micropayment is the ideal way to incentivize high-quality service and responsible use of resources. On the other hand, we should also take into consideration that requiring payment for all LES requests would seriously hinder the adoption of the protocol and also limit its usefulness; you couldn't even sync up to make your first payment using a light client. So another important question is whether it is possible to create an ecosystem where both free and paid services have their place and purpose.

Fortunately, I believe the answer is yes. With a service like this, demand changes very rapidly, while the available server capacity changes relatively slowly. So if you want to provide high-quality service, you have to have a lot of resource capacity, and you usually get a low utilization ratio. This means, of course, that the servers can sell their remaining capacity at a lower priority and a lower price. So basically our model is that clients are buying priority from the servers, and on the lowest possible priority level, if the servers still have some free capacity, they can basically give it away for free. Of course, nothing ensures that they actually do this, but we can create a service model in which they have an actual incentive to do so: free service is a good indicator of the reserve capacity that is necessary for providing a high-quality service that is actually worth paying for. So in our model, free service can act as an advertisement, and also as a protection against scams for clients.
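The capacity model above can be illustrated with a small sketch: paid clients receive the share they bought (highest price first), and whatever capacity is left over is given away to free clients instead of sitting idle. This is my own illustrative allocator, not the actual les server code, and all names in it are hypothetical.

```go
package main

import (
	"fmt"
	"sort"
)

// Client represents a connected light client: paid clients have bought a
// capacity share at some price; free clients have Bought == 0.
type Client struct {
	ID     string
	Bought uint64 // capacity units/s purchased (0 for free clients)
	Price  uint64 // price per unit, used to rank paid clients
}

// Allocate splits a server's total serving capacity: paid clients are
// served first (highest price first), and the remaining reserve capacity
// is divided evenly among free clients.
func Allocate(total uint64, clients []Client) map[string]uint64 {
	alloc := make(map[string]uint64)
	sort.Slice(clients, func(i, j int) bool { return clients[i].Price > clients[j].Price })

	var free []string
	for _, c := range clients {
		if c.Bought == 0 {
			free = append(free, c.ID)
			continue
		}
		grant := c.Bought
		if grant > total {
			grant = total
		}
		alloc[c.ID] = grant
		total -= grant
	}
	// Capacity nobody paid for is given away to free clients.
	if len(free) > 0 {
		share := total / uint64(len(free))
		for _, id := range free {
			alloc[id] = share
		}
	}
	return alloc
}

func main() {
	clients := []Client{
		{ID: "paid-1", Bought: 600, Price: 5},
		{ID: "free-1"},
		{ID: "free-2"},
	}
	// paid-1 gets its 600; the idle 400 is split 200/200 between the
	// free clients, which doubles as an advertisement of reserve capacity.
	fmt.Println(Allocate(1000, clients))
}
```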
It can protect them from paying for a service and then getting no service in return. So the basic client strategy should be that even if you are willing to pay for services, when you find a new server through discovery, you first evaluate it for free, collecting some statistics about its availability and average delays, and then, if the statistics are acceptable, you can start paying for it.

I wanted to talk about the new peer discovery protocol we're working on; unfortunately, there's no time for many details. Its new feature is an advertisement mechanism where nodes can advertise their capabilities: they can pick multiple category identifiers, or so-called topics, and advertise themselves under these categories, and of course they can also look for nodes who advertise themselves under certain topics. One such topic will, of course, be "light server".

And finally, I would quickly like to talk about one of my future development plans, which could greatly enhance the performance and flexibility of the light protocol by allowing clients to run complex operations on the server side. In theory, the existing requests can provide any information a client needs, but if they want to evaluate something more complex, like a contract accessor function that accesses a thousand state entries, that would also mean a thousand consecutive LES requests, which would take a very long time. Evaluating complex data structures on the server side could usually be orders of magnitude faster. And I don't only want to evaluate contract functions.
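The topic advertisement feature can be pictured with a toy in-memory registry: nodes register under topic identifiers and can be looked up by topic. The real mechanism works over the distributed discovery protocol rather than a central map, and every name below is hypothetical.

```go
package main

import "fmt"

// TopicRegistry is a toy model of the advertisement feature described
// above: nodes advertise themselves under topic identifiers, and other
// nodes look up advertisers of a given topic.
type TopicRegistry struct {
	byTopic map[string][]string // topic -> advertising node IDs
}

func NewRegistry() *TopicRegistry {
	return &TopicRegistry{byTopic: make(map[string][]string)}
}

// Advertise registers a node under one or more topics.
func (r *TopicRegistry) Advertise(nodeID string, topics ...string) {
	for _, t := range topics {
		r.byTopic[t] = append(r.byTopic[t], nodeID)
	}
}

// Lookup returns the nodes currently advertising a given topic.
func (r *TopicRegistry) Lookup(topic string) []string {
	return r.byTopic[topic]
}

func main() {
	reg := NewRegistry()
	reg.Advertise("node-a", "light-server", "full-node")
	reg.Advertise("node-b", "light-server")
	fmt.Println(reg.Lookup("light-server")) // [node-a node-b]
}
```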
I would like to create a universal virtual machine that can access anything from the blockchain, including block headers, transactions, receipts, logs, everything, and allow clients to run any code in such a virtual machine on the server side, so that basically they can ask any question about the blockchain that a full node can possibly answer.

Of course, if we are running code on the server side, we have to make sure that the clients can somehow know that they are getting the correct answer. There are two possible approaches to achieve this, and I would like to make both options available for clients to choose according to their priorities.

One of these approaches is that when the server runs the virtual machine code, it collects all the data it accesses, creates Merkle proofs for all of it, and returns these proofs to the client, so that with one request and one reply the client can re-run the entire function and have all the data available.

The other approach might be useful when processing larger amounts of data. It's a more generalized off-chain computing approach, very much like what Christoph was talking about yesterday. Basically, it's about a server signing statements saying: "I guarantee that running this function with this blockchain as an input returns in this many clock cycles with these results." The clients can then ask multiple randomly selected servers to answer the same question, to evaluate the same function. Hopefully all of them return the same result, and then the client can believe it. In the unlikely case when they return different results, of course, at least one of them is false, and the client can post these signed statements to a judge contract, which will then request the intermediate states of the VM execution from the signing parties until it finds the one single instruction that has been executed differently, and punish the one which has been lying by taking away a
security deposit.

Both of these approaches have their advantages and disadvantages, but whichever one the clients choose, this remote virtual machine execution will basically be the ultimate flexible LES request, which can minimize any gap between the capabilities of full and light clients. I believe this will bring us closer to realizing the original vision we had with Ethereum.

Thank you for your attention, and as I promised, here are some links. There's a Gitter channel, gitter.im/ethereum/light-client; this is the main forum where you can follow the developments and ask questions, and whatever news I have, I always post it there. There's also a wiki page with instructions for trying the current beta version. So please stay tuned for much development in the near future. A lot of documentation is also coming soon, because another piece of good news is that now both Parity and the C++ client want to implement the light protocol, so of course we have to improve the specifications, because so far I have concentrated mostly on code.

Thank you.
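As a closing note, the bisection that the judge contract would perform can be made concrete with a small sketch: given two committed traces of intermediate VM state hashes that start equal and end different, a binary search finds the single first instruction executed differently in O(log n) queries. This is purely illustrative; traces are plain strings standing in for state hashes, and the function is not part of any actual contract.

```go
package main

import "fmt"

// FirstDivergence returns the index of the first step at which two
// committed execution traces disagree, using bisection. It assumes both
// traces have the same length, agree at step 0 (the shared input state),
// and disagree at the last step (the disputed result).
func FirstDivergence(a, b []string) int {
	lo, hi := 0, len(a)-1 // invariant: a[lo] == b[lo] and a[hi] != b[hi]
	for hi-lo > 1 {
		mid := (lo + hi) / 2
		if a[mid] == b[mid] {
			lo = mid // still in agreement: divergence is later
		} else {
			hi = mid // already diverged: divergence is at or before mid
		}
	}
	return hi // the one instruction the judge must re-execute
}

func main() {
	honest := []string{"s0", "s1", "s2", "s3", "s4"}
	lying := []string{"s0", "s1", "x2", "x3", "x4"}
	fmt.Println(FirstDivergence(honest, lying)) // 2
}
```

Once the single divergent step is found, only that one instruction needs to be re-executed on-chain to decide who loses their security deposit.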