Our first presentation is titled "Beyond Blockchain: New Directions in Distributed Crypto" by Victor Grishchenko from the Krasovskii Institute of Mathematics and Mechanics. Victor, I'll let you take the screen over from here and start sharing.

I'm not presenting any paper, so in part I'm presenting the project I'm working on, and in part I'm using my paranormal powers to predict the future for the next 10 years. One may ask why I want to predict the future. Well, about 10 years ago, in 2011, I read the paper by Satoshi Nakamoto. I was working on peer-to-peer cryptographic systems at the time; before that I had worked at a cryptocurrency startup. So I analyzed the paper, put my opinion in writing, and posted it, and it made the rounds: Hacker News, TechCrunch, and so on. Every year since then I have become increasingly proud of it, because most of the points I made become even more evidently true year after year. I even mentioned that this technology is definitely not green: from a simple calculation it is obvious that if it gets used at any significant scale, this kind of energy consumption at this kind of scale is just criminal from any green point of view. At the time I had an exchange with the top Bitcoin developer of the day. He asked how to fix these things, and basically I had no idea. That kept bothering me for about 10 years. Now I have a lot of ideas, and I may say the situation is slightly better.

On this graph we can see the scale of the catastrophe: more or less the last 10 years of the hash rate. We all know that Bitcoin consumes electricity like a country; the discussion is only about which country: Denmark, the Netherlands, Poland, Argentina, there are many opinions. Nobody knows for sure, but it consumes a lot. At the same time, it makes about seven transactions per second. Let me put that into perspective.
This is one of the earliest electromechanical telegraphs that was actually popular: the Baudot system. It did about 30 words per minute per operator, four operators per machine, so roughly two words per second. So it was slightly slower than Bitcoin, definitely. But consider this machine, from more or less the same time frame: the Maxim gun, at about 10 bullets per second. It was noticeably faster than Bitcoin. I must especially note that the mechanics of the Maxim gun were modeled after a steam engine; the engineering paradigm of that era was largely copied from steam engines. So basically, so far, Bitcoin is slower than steam-engine-level engineering. We may say we are big fans of steampunk.

Then some people say that we will introduce more sharding and that will increase performance. Well, here is your sharding. This design is from the 1940s, so it is relatively modern, but it was already obsolete at the time. In case your system has fewer than four shards, the Maxim gun is most likely still faster than yours.

Irony aside, let's think about what the problem is. My understanding of the problem is this: a blockchain is basically a linked list. If we take a regular linked list and replace pointers with hashes, we have a blockchain, and the other way around. And a linked list is a data structure which is not actually recommended for practical use because of its numerous shortcomings. For example, to reach an arbitrary point in a linked list, we need on the order of N iterations. Cryptographically speaking, when dealing with a blockchain we have exactly the same problem: to verify some piece of history, we need to iterate the entire chain back to that point. About the same time, 10 years ago, at Delft University, that was joint work with Arno Bakker.
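To make the linked-list analogy concrete, here is a minimal sketch (not from the talk itself; the block layout and the use of SHA-256 are my own assumptions for illustration). The point is that checking any one block forces a walk over the entire prefix of the chain:

```python
import hashlib

def make_block(prev_hash, payload):
    """A block is a linked-list node whose 'pointer' is a hash of the predecessor."""
    body = prev_hash + payload.encode()
    return {"prev": prev_hash, "payload": payload,
            "hash": hashlib.sha256(body).digest()}

def build_chain(payloads):
    chain, prev = [], b"\x00" * 32  # genesis pointer
    for p in payloads:
        block = make_block(prev, p)
        chain.append(block)
        prev = block["hash"]
    return chain

def verify_upto(chain, i):
    """To trust block i we must re-hash the whole prefix: O(N) steps."""
    prev, steps = b"\x00" * 32, 0
    for block in chain[: i + 1]:
        expected = hashlib.sha256(prev + block["payload"].encode()).digest()
        assert expected == block["hash"], "chain is broken"
        prev = block["hash"]
        steps += 1
    return steps
```

Verifying the hundredth block costs a hundred hash checks; with a tree structure, as discussed next, it would cost about seven.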
We proposed a live Merkle tree. If you consider Git or a typical blockchain, we have Merkle blocks connected by a linked list of references. The idea was to use one unified live Merkle tree which doesn't have those two parts: it is just a single Merkle tree which grows to the right, and the hashes in the tree get filled out gradually. This kind of construction was originally envisioned for live video; since then it was adopted by the DAT protocol, which you may know about, and it was partially borrowed in the design of BitTorrent version two. The good part about this design is that it is extremely simple, and you need only a logarithmic number of steps to reach any cell, like a binary search tree as a data structure. So I'm really surprised when I look at all those blockchain designs and see that same old linked list everywhere. It introduces a lot of limitations, it really restricts the design space, and I think it is an obsolete design.

Another problem to consider is the accumulation of cryptographic sediment. We have the history of the blockchain, transactions from people who are long dead, and all the cryptographic primitives, hashes, signatures and everything, accumulate indefinitely, because those primitives authenticate their part of the record and, through it, any later record. Basically, to keep everything clean and nice, you actually have to keep that sediment forever; it is part of the record. The idea of Panatono script is to arrange primitives in a way that they are generally not part of the record: there is a one-way arrow from the data to the primitives, so we don't have to keep the primitives. We may discard all of them, because they are either derivable from the data or superseded by newer primitives. As an example, we have two DAGs, one a superset of the other: this one is earlier, this one is later, and both are signed with the same key.
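Here is a rough sketch of why a Merkle tree needs only logarithmically many hashes to check one cell, in contrast to the linear walk of a hash chain. This is a generic binary Merkle tree, not the exact live-tree construction from the talk:

```python
import hashlib

def h(x):
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:              # pad odd levels by duplicating the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hash at each level: O(log N) hashes in total."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # sibling, and our side (0=left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_proof(leaf, proof, root):
    """Recompute the path from the leaf to the root using only the siblings."""
    acc = h(leaf)
    for sibling, we_are_right in proof:
        acc = h(sibling + acc) if we_are_right else h(acc + sibling)
    return acc == root
```

For 16 cells the proof is 4 hashes; for a million cells it is about 20, which is the logarithmic access the linked-list design cannot offer.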
Well, under most circumstances we can discard the earlier signature, because it is the same key, and all the same data was signed by the same key later. So the idea of Panatono script is to make the primitives discardable in the natural course of events.

Another thing to consider is the opposition of trusted and trustless computation. It looks like a contradiction, but it is mostly illusory. When we send some coins to somebody over Bitcoin, we probably expect to get something back, so there is some trust relationship there already, and overall, all that economic activity tends to happen in networks. By the way, this is a screenshot from a wallet application, some earlier version. So it naturally happens in networks, and all blockchain designs seem to ignore that completely. I was very happy to hear the talk yesterday from the Facebook-related group; that was very nice. If we speak about the human social graph of the planet, there is the small-world rule: we can reach any human being on the planet in six handshakes or less. If we speak about computer networks, that number is probably three or four, depending on what you count, but it is possible to arrange really compact networks which are still based on some sort of trust relationships. So there is a big opportunity here, in Ripple-like systems, maybe.

Then, actually, the main topic of the talk: optimizing the linear log. This slide I had to borrow from the next speaker; thank you very much, by the way. All the architectures with linear logs have clear scalability limits, and big Internet companies faced them at the end of the '90s. Basically, there are two ways to optimize. One is to make an industrial-strength deployment of that linear log, like Kafka, as a separate system. But that is hardly applicable to our case, because it implies no trust boundaries.
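The superseded-signature rule mentioned a moment ago can be sketched in a few lines. HMAC stands in here for a real signature scheme, and the record layout is my own assumption, not the talk's actual design; the point is only the compaction rule:

```python
import hmac
import hashlib

def sign(key, records):
    # Stand-in "signature": HMAC over a canonical encoding of a record set.
    return hmac.new(key, b"\x00".join(sorted(records)), hashlib.sha256).digest()

def compact(signed_sets):
    """Drop every signature whose record set is covered by a later signature
    over a superset of the same records. All sets are assumed signed by the
    same key, so the later signature authenticates the earlier data too, and
    the earlier primitive becomes cryptographic sediment we can discard."""
    kept = []
    for i, (records, sig) in enumerate(signed_sets):
        covered = any(set(records) <= set(later)
                      for later, _ in signed_sets[i + 1:])
        if not covered:
            kept.append((records, sig))
    return kept
```

Nothing is lost by the compaction: the discarded signature could always be re-checked against the surviving superset signature.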
And the other way was to give up total order, give up the linear log, and somehow live with eventual consistency: Amazon Dynamo, Cassandra. Cassandra is massively used at that level; all the data from iPhones and everything probably flows straight into Cassandra, and it has a really impressive scale of deployment.

So, one key problem to talk about. Suppose this graph depicts our blockchain: we have the genesis block on the left and the entire history of the chain. Suppose we are accepting some coin as a payment. Then this red dot is us accepting the coin, and the white dot is the point where the coin was minted; you can see the yellow rhombus between them. If you arrange things right, then all the data relevant to the transaction resides in that rhombus, from the point where the coin was minted to the point where it is going to be accepted. It is not that the entire rhombus is relevant, but all the relevant data is inside it. So the question is: why, when I accept a coin, do I have to consider the full history of every coin in the universe? That is a question I don't see discussed much.

Then suppose we switch to some sort of causal partitioning. It sounds cryptic, causal partitioning, giving up total order, Minkowski geometry, but in fact we use causal partitioning every time we use Git branches. It is not that noticeable, because typically those branches share 99% of their content, but it is still causal partitioning: there is one branch, there is another branch, and at some point we merge them, or maybe not. From this point on I'm mostly talking about ongoing projects: Replicated Object Notation (RON), RON-based storage, and so on. Assuming we want to limit the relevant data to that yellow rhombus on the picture, we basically have to live with a database that naturally works with causal partitioning.
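The rhombus can be phrased as a small graph computation: the intersection of the causal past of the acceptance event with the causal future of the minting event. A sketch under a toy DAG encoding of my own choosing (a child-to-parents map), not the talk's actual data model:

```python
from collections import defaultdict

def causal_past(parents, node):
    """All events reachable backwards from a node, the node included."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(parents.get(n, []))
    return seen

def rhombus(parents, mint, accept):
    """Events both in the past of 'accept' and in the future of 'mint':
    everything a verifier of this coin actually needs to look at."""
    children = defaultdict(list)
    for n, ps in parents.items():
        for p in ps:
            children[p].append(n)
    future_of_mint = causal_past(children, mint)  # same walk, edges reversed
    return causal_past(parents, accept) & future_of_mint
```

Everything outside that intersection is causally irrelevant to the payment, which is exactly the argument for not replaying the full history of every coin.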
So it is like Git, but even more so, so to speak. Or more radical: the data may have some shared root in the past, but it naturally lives in different branches, and unless there is a real necessity to synchronize those branches, they stay independent. This kind of approach is more radical than Git.

Another relevant model is a classic from 1978: Lamport's model of events in a distributed system, which is highly relevant in this context, and obviously the happened-before relation. For most pairs of events in our blockchain, we are probably not really interested in their relative order, because these are just different pieces of data. More or less, a Replicated Object Notation based storage looks like this. It may have different branches; those branches may not merge at all. They are mergeable, but it may never be necessary to merge them. In the case of Git, eventually everything gets merged. Actually, not always: the Linux kernel, for example, has parallel branches, the -next tree and the maintenance trees, so long-living branches, and more or less this is the situation. In terms of chains: for each writer, for each entity, for each key, we have a separate chain, and then the transitive closures of those chains form branches, and so on and so forth; branches may also merge.

This approach to databases is based on CRDT theory, conflict-free replicated data types. It is a mathematical framework, about 15 years old, for dealing with mergeable data structures. Without mergeable data structures you cannot have branches, because it makes no sense to have branches unless you can merge them at any moment. And in this particular case, if all the data is CRDT, you can merge branches at any moment. So you may keep your data in branches and merge them if that is necessary. Currently there is one public demo of the system; there are various iterations of the code on GitHub.
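A minimal illustration of the "mergeable at any moment" property, using a last-writer-wins map as the CRDT. The (counter, writer) timestamps are my assumed tie-breaker for illustration, not RON's actual scheme:

```python
def merge(branch_a, branch_b):
    """Last-writer-wins map: each entry carries a value and a (counter, writer)
    stamp. Because the merge is commutative, associative, and idempotent, any
    two branches combine deterministically at any moment, in any order. That is
    the CRDT property that makes long-living branches safe."""
    merged = dict(branch_a)
    for key, (value, stamp) in branch_b.items():
        if key not in merged or stamp > merged[key][1]:
            merged[key] = (value, stamp)
    return merged
```

Two replicas can diverge for as long as they like; whichever replica applies the merge, and in whatever order, the result is the same state.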
This is the RON wiki, which is available. It is based on RON: basically, all the text lives in RON-based storage, where every letter is an operation. On the left you can see the RON itself: these are patches which reference each other, which patch relates to which part of the document, and so on and so forth. On the right is the resulting wiki page. This is the kind of storage I'm talking about, just a little demo.

So, thank you very much. I sincerely hope that this proof-of-work madness will be a thing of the past, and I have really big hopes for all the directions of research I mentioned. Thank you very much.