Thank you very much for the introduction. Who of you roughly knows what ZoKrates is? Just so we can do a quick intro and spend a bit more time on the later slides of the presentation. OK. This used to work. So ZoKrates started as my research project at TU Berlin, but by now we're happy to develop it as part of an Ethereum Foundation team. It's three people at the moment, but we're looking to grow, so feel free to get in touch.

Let's start with introduction and vision before we dive into more detail. The plan of today's talk is as follows. First, we provide a bit of introduction and background for those who are not that familiar with the tooling, but we'll keep it short. Then we'll give you an update on what's been happening in the last year, so basically a dev update: which new features we have and where we're going. Then we look at some use cases, so we report back from actual applications that improved their privacy properties by leveraging ZoKrates-based zk-SNARKs. And as a last step, we'll give you a bit of an outlook on where we think we should be going in the next year.

So ZoKrates is basically two things: it's a high-level language for programming zk-SNARKs, and it's a bunch of tooling around all of that. With the ZoKrates language, you can write programs in an imperative style, as most programmers are used to. The tooling then lets you compile that program into an abstraction that is provable through zk-SNARKs. So you can use ZoKrates' tooling to compile your program, execute your program, and generate a zk-SNARK proof that shows that this execution happened correctly. It also generates Solidity smart contracts that you can deploy to the blockchain, so you can easily verify zk-SNARKs on Ethereum without having to bother with writing Solidity verifiers for the specific zk-SNARK mechanism you choose in the end.
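As a rough illustration, here is a minimal sketch of what a ZoKrates program looks like: proving knowledge of a square root of a public value without revealing it. The syntax follows the ZoKrates release current around the time of this talk; details may differ in later versions.

```zokrates
// Prove knowledge of a square root `a` of a public value `b`.
// `a` is a private input and is never revealed; `b` is public.
def main(private field a, field b) -> (field):
  field result = if a * a == b then 1 else 0 fi
  return result
```

The surrounding workflow is then driven by the CLI: `zokrates compile` turns the program into a constraint system, `zokrates setup` runs the setup phase, `zokrates compute-witness` executes the program on concrete inputs, `zokrates generate-proof` produces the zk-SNARK proof, and `zokrates export-verifier` emits the Solidity verifier contract mentioned above.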
So what are the main goals behind ZoKrates? Basically two things. The first is enhancing privacy. With Ethereum, we have this platform that's very general: we can do general-purpose computations, and it's quite powerful. But for real applications to take off, we require a bit more privacy, I think, because there is awareness that all the data put on the blockchain can be mined and used by anyone, and that's a show-stopper for some applications. We'll see that later when we look at the use cases. For Bitcoin, we've seen an improvement through Zcash, which managed to build a fully anonymous payment scheme on top of zk-SNARKs. But of course you cannot simply port such a scheme to Ethereum, because Ethereum is too generic for that. So our approach is to provide easy-to-use and convenient tooling that you can use for your specific application in the Ethereum context, to leverage zk-SNARKs conveniently and improve your privacy properties.

The second thing is scalability. We're in the world on the left right now, where we redundantly process all transactions that are submitted to the blockchain on every node. In the future, and we already do it in some cases, we would like to move to the world on the right, where we do not execute complex things on the blockchain, but rather off the blockchain, while maintaining privacy, because we do not need to expose all the information we use in this off-chain processing. We then send the result of that computation, plus a proof that attests to its correctness, back to the blockchain and only check that. As long as the verification step on-chain is cheaper than the native execution on-chain, we will be able to scale better, because one block can fit more computational complexity that way. So those are the two main drivers behind why this is meaningful and why you should have a look.
The language itself that we built is an imperative domain-specific language, which means that it's tailor-made to compile efficiently into the abstractions zk-SNARKs use. Sure, you could use a generic programming language like C or Rust, and there are also compilers for those, but many of the data types and syntactic constructs they offer are not efficiently translatable into the abstractions that zk-SNARKs can prove. That's why we have this DSL: to be sure that what you write is at least reasonably efficient. It has Python-inspired syntax; we borrow the best pieces from whatever language fits, but it's mostly similar to Python.

One of the things that makes SNARK programming hard, one of its scary parts, is non-determinism: you call out to an outside component, get some result, and then validate that result as part of your SNARK. As you may feel, this is quite complex, and it's something people do not want to take care of. So ZoKrates hides away all this SNARK-specific programming complexity, and it hides non-determinism without any loss of expressivity or power in what you can do. That's also a contribution here.

If you want to check it out, thanks to the folks in the back and work by Thibaut, there is now a Remix integration of the tooling. It's live on Remix, not the very latest version, but you can go play around with it to get a feeling, and you can go through all the steps: writing the program, compiling, and executing. There's also going to be a workshop tomorrow from Remix that goes into more detail on that. And now I'll hand over to Thibaut for a dev update on what happened last year.

Right, so the first thing that we focused on over the past months is adding more expressivity to the ZoKrates types.
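To make the point about hidden non-determinism concrete, consider field division, a standard example: the prover supplies the inverse of the divisor as a witness, and the circuit only checks it. In ZoKrates this is invisible to the programmer. A sketch, assuming the syntax of the time:

```zokrates
// Division reads like ordinary arithmetic. Under the hood, the
// compiler introduces a witness for the inverse of `b` and adds a
// constraint of the form b * inv == 1; the programmer never writes
// or sees that non-deterministic step.
def main(field a, field b) -> (field):
  return a / b
```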
For those of you who have actually used ZoKrates before: we had very basic types, essentially field elements, which you can think of as integers, and booleans, but we didn't have complex types, so it was quite verbose to write programs. What we added is arrays of multiple dimensions, with helpers to create them with a given value, as well as the spread operator that you might know from JavaScript and other languages, and slicing to get some part of an array, all at the syntactic level. In the same spirit, we added structures, which enable you to create composite types to represent, for example, a point on an elliptic curve. That can be really useful in the context of SNARKs, for instance if you're trying to verify a signature inside of a SNARK. For structures, we also have constructors, mutation, and member access, as you would expect from many other languages. All of that is now in the latest version of ZoKrates, which we released very recently.

These new, more complex types bring us even closer to a higher-level language, which is what ZoKrates aims to be, and they enable developers to write code that's even clearer and cleaner. However, we don't want to sacrifice performance in terms of the size of the circuit that gets generated. So in ZoKrates you can write the clearest code, and ZoKrates will apply a number of simple yet powerful optimization steps to your program in order to reduce the size of the circuit as much as possible. These optimizations are the ones you would find in many programming languages, among them constant propagation, which basically evaluates as much of your program as possible at compile time. If you have constants in your program, for example, all of that is going to be pre-computed for you.
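A small sketch of the new types described above, assuming the syntax of the then-current release (exact details may differ between versions):

```zokrates
// A composite type, e.g. a point for in-circuit curve arithmetic.
struct Point {
  field x
  field y
}

def main() -> (field):
  // fixed-size array, a spread into a larger array, and a slice
  field[3] a = [1, 2, 3]
  field[4] b = [...a, 4]
  field[2] c = b[1..3]
  // constructor and member access on the struct
  Point p = Point { x: c[0], y: c[1] }
  return p.x + p.y
```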
We also remove all of the linear constraints: zk-SNARKs are based on quadratic constraints, so all the linear constraints are substituted away, leaving only quadratic ones, and there are a few other optimizations we have worked on, with quite good results.

On a different end of the compiler pipeline, we changed from the hand-written parser that we were using in the first version of ZoKrates to a parser generator. This is quite internal, but basically the idea is that we used to have a parser that we wrote by hand to go from source code to the AST, and now we have a formal specification of our grammar that generates a parser automatically. That actually gave us better performance in some cases, which would signal that our hand-written parser was not that efficient, and, interestingly, it also made it much, much simpler for us to implement the new types I just described, structures and arrays. So we would totally encourage anyone working on a programming language in the early steps of the process to use parser generators. We used a framework called pest in Rust, and we've been pretty happy with it.

Then, over the course of the past year, as we went from having simple examples to get people started on SNARKs to having more complex applications, we realized that there are a lot of building blocks people end up needing over and over again. That mostly includes hash functions, because the first thing people want to do is compute a hash inside of a SNARK, as well as embedded curves, so that you can have elliptic curve cryptography inside of the SNARK, which is very powerful, especially for scalability schemes. What we did first was use other existing libraries to import those building blocks, but we changed our approach and ended up rewriting all of those building blocks in ZoKrates itself, to see how it would perform and also to use our own project.
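Going back to the optimizations mentioned above, a toy illustration: the constant arithmetic below is folded away at compile time by constant propagation, and the linear intermediate assignments are substituted away by linear-constraint elimination, leaving a much smaller circuit than a naive translation would. (Illustrative only.)

```zokrates
def main(field x) -> (field):
  // constant propagation: 2 + 3 and then a * 4 are evaluated
  // entirely at compile time
  field a = 2 + 3
  field b = a * 4
  // a linear assignment like y = x + 20 is substituted away,
  // since only quadratic constraints remain in the circuit
  field y = x + b
  return y * y
```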
So the result of that is that ZoKrates now ships with a standard library, which you can import directly from your source code, that has those utilities: some cryptography, some utilities to do packing and unpacking if you want to go from bits to a number, et cetera.

We also worked on a library called PyCrypto. The intuition is that, for example, the standard library now has elliptic curve cryptography, and with elliptic curve cryptography you want to be able to work with signatures. However, inside of the SNARK you're only ever going to verify a signature, not generate one, because you can generate the signature before hitting the SNARK itself. PyCrypto is our current approach to this missing component: everything you can do before you hit the SNARK. Here, there's an example of how you would sign a message with EdDSA; based on that piece of Python code, you can take its output and feed it into the SNARK, and the SNARK will verify that the signature you created is actually valid.

Finally, we worked on integrating with more backends. We used to integrate only with libsnark, which is a C++ implementation of preprocessing SNARKs. We added support for bellman, a Rust-based proving framework that's now being used by Zcash, and we've been designing and developing an integration with something called zkInterface.
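For example, a hash inside a SNARK is one import away; the import path below is the standard-library path used in the ZoKrates documentation of the time:

```zokrates
import "hashes/sha256/512bitPacked" as sha256packed

// Prove knowledge of a 512-bit preimage, supplied as four packed
// private field elements, of a SHA-256 digest returned as two
// packed field elements.
def main(private field a, private field b, private field c, private field d) -> (field[2]):
  field[2] h = sha256packed([a, b, c, d])
  return h
```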
Maybe some of you have heard of it: it comes from the ZKProof initiative and is basically a standard for backends of zero-knowledge proof systems. What this means is that, using ZoKrates, it will soon be possible to integrate with all the new schemes that came up recently, so maybe some of you saw all those new schemes that keep popping up, and we think it would be really, really valuable if you could just write your program in ZoKrates and have it interoperate with any of those backends, because they all rely on the same abstraction. So now we're going to go through a few applications that people have been building with ZoKrates, and I'll hand over to Jacob.

Okay, now that we've heard a bit more on the background and internal changes, we want to look at what has actually been done, or is currently being done, with the tooling we provide. The first project is EY Nightfall. Nightfall is a project by Ernst & Young, or EY, and it's basically a privacy-preserving implementation of the ERC-20 and ERC-721 token standards on Ethereum, so that you can have these tokens and use them with complete privacy on Ethereum. We will not go into much detail about it today, because on the main stage tomorrow there's going to be a talk about the project by the EY folks themselves. What's interesting to note, though, is that it was not us going to them and telling them to please use the tooling and that we'd help them out; they picked it up independently and actually built it, and we didn't really know that they had until they released it. For us, that was cool to see as a validation that the tooling is actually useful for people and that projects are being built with it.

Something else is a joint project we did with Centrifuge. Centrifuge is a service for financial documents: you have financial documents that two people, or a bunch of people, agree on off-chain.
It's an off-chain protocol, but these financial documents, for example invoices, get anchored on the blockchain, which provides a single reference point everybody can point to. That's the Centrifuge model. Now, having claims means that if there's an unpaid invoice and I have the right to retrieve that payment, that claim is actually valuable, and it would be nice to unlock it on the blockchain, to bring it to Ethereum as a token, to tokenize it. So for that, Centrifuge wants to mint NFTs that tokenize the claim to retrieve a payment from a counterparty in the future. And of course, what's important is the credit rating of that counterparty, because it gives you an estimate of the expected value that you will be able to retrieve after you purchase the token. They built this, but the problem is that to mint the NFT in the correct way, you need to expose the value of the invoice, which can be an issue sometimes, and you also need to expose the counterparty's identity, because otherwise you won't know whether they have a good credit rating, and then you're not going to buy the token. So it's a nice idea, but it's not really going to work out, because this information is quite sensitive.

With the ZoKrates-based approach, we were able to replace these checks with zk-SNARKs, and now the only information that's exposed when minting an NFT is the token amount, which is guaranteed to be smaller than the invoice amount (which also allows you to split an invoice into several NFTs), and the fact that the counterparty's credit rating is sufficient. So you only learn the key fact that the counterparty whose claim you're buying is creditworthy, but you do not learn who exactly it is, and that's the key to the approach.
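As a hypothetical sketch of those checks (the names, the public/private split, and the comparison details are illustrative, not Centrifuge's actual code), the minting conditions could look roughly like this in ZoKrates:

```zokrates
// Public: the token amount being minted and a minimum rating threshold.
// Private: the invoice amount and the counterparty's actual rating.
def main(field tokenAmount, field minRating, private field invoiceAmount, private field rating) -> (field):
  // the public token amount must not exceed the private invoice
  // amount, which is what allows splitting an invoice across NFTs
  field amountOk = if tokenAmount <= invoiceAmount then 1 else 0 fi
  amountOk == 1
  // the counterparty is creditworthy, without revealing who it is
  field ratingOk = if rating >= minRating then 1 else 0 fi
  ratingOk == 1
  return 1
```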
How it works is that we basically replace a traditional NFT registry that does these checks on the blockchain with a ZoKrates program. That ZoKrates program does all these checks as part of a zk-SNARK; after it has done that, you generate the zk-SNARK proof and submit it to an on-chain token registry, and the on-chain token registry then checks whether the proof is correct or not. If successful, the token is minted, and only the information I mentioned before, plus some root hashes to make sure the right data was used, is exposed in the process. So that's quite nice.

The core part of the implementation (there are also some utility functions and so on) is 80 lines of ZoKrates DSL code, and you can find it on GitHub. The verification cost was 900K gas for the prototypical implementation, which was not highly optimized. For example, we could have used Pedersen commitments instead of SHA, and details like that would also bring down the cost of proving. After the Istanbul hard fork, the verification cost will probably be around 300K gas, which is not so bad. Proving time in that unoptimized version was two and a half minutes, so totally practical, and on a consumer laptop, not some super heavy cloud machine.

Okay, I'm also going to quickly go through another project that I personally work on at TU Berlin, also built with ZoKrates. The problem is that currently the energy system, at least in Germany and most other countries, works like this: households are in a network, and there's an electric utility, and some households produce more energy than they consume, because they have solar panels on their roof, for example.
The other households consume more than they produce, but the problem is that everything goes through the electric utility, and of course they make a profit from it: they take a margin, they give you little money for the energy they buy, and they resell it at a higher price. Then there are tax effects and other effects why it could be beneficial not to do it that way.

So what we propose is to handle as much as possible internally in the network. We do not do it with a market design where you bid and ask, which is super expensive and has many problems; rather, we look back in time after everybody has produced or consumed their energy, and then we basically compute a matching between the energy produced by households and the energy consumed by households, and minimize the amount of energy that ever goes through the electric utility. That takes away the utility's margin and makes it more profitable for households to have solar panels installed, because a big issue is that in many cases it's not profitable, which is bad for the move to renewable energy. One minute? One minute. Okay.

So that's the idea, and the core problem is that we do not trust the utility to run this allocation algorithm, this matching. We want this to be on a blockchain, because then it's trustless, but that would mean households publish their energy consumption data so the blockchain can compute the matching. Publishing that data is really bad, because then everybody can see how much energy is used or produced by anybody, and this is highly sensitive data. So we cannot do that. That's why we introduced a ZoKrates-based zk-SNARK that runs the matching algorithm, and the blockchain only verifies that the matching has been computed in line with some specific requirements: it's fair, the amounts that go in and the amounts that come out are equal, and that kind of thing. Okay, a sneak peek into the future in one minute.
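One of the requirements the on-chain verifier could check, sketched hypothetically (the array size and names are made up for illustration), is that the total energy matched on the production side equals the total on the consumption side, without revealing any individual household's numbers:

```zokrates
// Private inputs: per-household matched amounts on each side.
def main(private field[4] matchedProduction, private field[4] matchedConsumption) -> (field):
  field producedSum = 0
  field consumedSum = 0
  for field i in 0..4 do
    producedSum = producedSum + matchedProduction[i]
    consumedSum = consumedSum + matchedConsumption[i]
  endfor
  // the matching balances: energy in equals energy out
  producedSum == consumedSum
  return 1
```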
So we've been moving very fast, we've been breaking things, we've been changing specs, and implementation has been progressing rapidly, and now I think we need to slow that down a little bit. We need to come to a version that is stable, so users can actually rely on some version and start building with it without us breaking it again and again through disruptive changes. It would also allow us to do further optimizations to the code base and, crucially for people who want to use it in production, to get an audit and potentially some formal verification of the optimization steps we do, and things like that. So that's our plan for the next year: to slow down a bit with the breaking changes and move to a stable release. If you're interested in Rust development, programming languages, and state-of-the-art cryptography, we're looking to grow the team, so please get in touch. Thank you very much for your attention and for coming.