So, our next panel is on DSLs and IRs for ZK circuits. These are all implementers and authors of some very popular ZK DSLs and IRs. I've prepared a few high-level questions that I'm going to read out. The first one is an exercise to locate each other on the language landscape and see how we relate to each other. The second is to figure out to what extent we can share infrastructure, or whether we should even try — is it impossible after a certain point? Here I also want to note a lot of prior work from the compiler and SMT verifier communities: can we reuse some of their work? And lastly I have a high-level question about what programming in ZK should look like. What's your ideal DSL? Is it very different from normal programming? Keep these in your head. I want to save maybe 25 minutes for audience questions at the end. So let's start with a round of introductions. I have pre-prepared some visuals for you in case you need them. The first one is PIL, Jordi. Can you give us an introduction to what PIL is?

Well, PIL is mainly a language. PIL means polynomial identity language. When you define a circuit that's based on polynomials — Plonk is one example of a circuit based on polynomials, but you can do much more complex things based on polynomials — PIL allows you to just write those identities and then, out of those identities, build the proof automatically, the same way you do in a normal zero-knowledge language: what you can do in Circom, you can do in PIL. Here you need to do something extra, because you may need to write the witness somehow, but it's a kind of abstraction where you can just focus on writing the identities that you want to fulfill. That's mainly what PIL is.

And Jordi is also the author of Circom, which is another very popular language. So how does PIL relate to Circom?

Well, PIL is for polynomials and Circom is more for signals. In Circom we are not working with polynomials at all: you just have signals and you build constraints around those signals, not constraints around polynomials. The idea of Circom is that it's a low-level language, because you can write every circuit in Circom — Circom does not need any special library or special plug-in; all of circomlib is written in Circom itself. But at the same time it's a high-level language, because it allows you to make connections: you have blocks, you connect small blocks into big blocks, and big blocks into bigger blocks. In this sense you can go all the way down and all the way up in Circom. That's why it's so flexible, and it connects very well with snarkjs and other proving systems. It has been there for years, it has already had a full rewrite, and we have now introduced new extensions to the language — mainly anonymous templates — and we are also adapting to Plonk, because Circom was born in the R1CS era and it wasn't obvious then what works well in Plonk. So we now also have custom gates, we're extending the language, and there's a full community working on it. Thanks, Jordi.
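A minimal sketch of the distinction Jordi draws — plain TypeScript rather than Circom or PIL syntax, with the BN254 scalar field assumed purely for illustration: Circom-style code constrains individual signals, while PIL-style code states one identity that must hold over whole columns of an execution trace.

```typescript
// BN254 scalar field modulus (the prime Circom uses), assumed for illustration.
const P =
  21888242871839275222246405745257275088548364400416034343698204186575808495617n;

// Circom-style: a constraint over individual signals, e.g. c === a * b.
function signalConstraintHolds(a: bigint, b: bigint, c: bigint): boolean {
  return (((a * b - c) % P) + P) % P === 0n;
}

// PIL-style: the same relation stated once as a polynomial identity that must
// hold on every row of the trace columns A, B, C (i.e. A * B - C = 0).
function identityHolds(A: bigint[], B: bigint[], C: bigint[]): boolean {
  return A.every((_, i) => (((A[i] * B[i] - C[i]) % P) + P) % P === 0n);
}
```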
So yeah, to me it's interesting, this distinction between a language for polynomials and a language for signals. I'm going to skip to Noir, because as I understand it, Noir can target Plonk and R1CS and AIR and even more backends. So Kev, what's Noir about?

Hi, can you hear me? Yeah, so Noir is a lot more high level. It compiles down to something called ACIR. Essentially you never tell the proving system how to do something, you just tell it what you want to do. For example, if you want to do a SHA-256, you don't tell it to use custom gates — the proving system just decides the best way to do it. In this way we're fully backend agnostic. For example, you'll never see custom gates inside of Noir, or any syntax like that, or lookup tables. You might see maps, for example.

Cool. I'll just go around — I have questions, but I'll do the intros first. This is the picture I found for Cairo. I really apologize; I'm sure better pictures exist.

Yeah, I feel like this is the best they've ever produced, 100%. I guess Cairo differs to some extent — I won't be able to go into all the differences — but Cairo is just a Turing-complete language. You write Cairo the same way you write Solidity or any smart contract language you can think of. From the perspective of Cairo, you write a program, the program is compiled to Cairo bytecode, which is run in the Cairo VM and proven along with all the other programs that are proven within the same proof. So in that sense there's not much difference: you don't have lookups, you just have the program. Sometimes, for some operations, we do have this notion of builtins, but builtins are just something you call as part of a function.

Cool. And the best for last: Kimchi and SnarkyJS, Brandon.

Hello, thank you. So this is SnarkyJS with a "y" — different from snarkjs. That's had a few people confused this week. Yeah, the name might evolve, I don't know. So Kimchi is the proof system that SnarkyJS is built on top of right now. The way it's built, we can plug into other proof systems; we just haven't yet. Kimchi is the proof system in Mina — well, in the testnet of Mina that will be hard forked pending community voting. Kimchi is a Plonk-type proof system with 15 wires. It has no trusted setup, it's universal, and we support infinite, efficient recursion, which is important for Mina — this is a succinct blockchain. But the cool thing is you can also use this proof system to build custom ZKPs, which are our smart contracts on Mina, and you can build circuits outside of Mina as well. There's not much in this picture I want to explain, but the one really important thing about SnarkyJS you can see in the top left there, the code snippet: it is TypeScript, and it is an extremely high-level language. There are a lot of reasons why we designed it the way we did; I'll just point out a couple. One is that we get all the tooling, infrastructure, and ecosystem that comes with TypeScript: NPM is great, VS Code is great, all the integrations already work and are good. And we think that when people are writing zero-knowledge proofs, or writing any programs, the model people are used to for thinking about computations is: you have a function, it takes some input, and you produce some output.
So one of the things that SnarkyJS does is it lets you write your computations and your circuits at the same time, and the outputs of those functions get fed in as public inputs automatically for you. So in the end, your code actually looks like normal code.

Cool, thank you. I'm going to start with an opinionated question: what do you think of intermediate representations? CirC is a project that's inspired very much by LLVM — everyone's turning to look at the screen — and the idea behind it is to define a common standard IR to which many different front ends compile and from which many different back ends can be targeted. How well do you think your project would fit into a model like that?

Yeah, so SnarkyJS is, like I said, super high level, and right now it's directly connected to Kimchi, but we could and probably should connect to some intermediate representation so that we can share the work of connecting to custom backends.

In the case of Circom, it actually exports to two different languages, wasm and C. Internally it has an intermediate representation already, so exporting that to other languages like Rust or Go is on the roadmap, and exporting to some other intermediate representation, if it looks good enough, would definitely be possible. I'm not sure about the specification — I have not read it yet — but, for example, there are things like how you parallelize witness computation. In Circom, because you have different components, you can say that a component can run in parallel, so the witness computation can run in parallel. This is very important especially when you are doing big circuits. So I'm not sure; maybe somebody with experience of it would be better placed to answer, but sometimes there are details there that are not that easy.

Yeah. In the context of Cairo, it won't really work, because Cairo as of now doesn't have an intermediate representation: it's just a very high-level language, like C, and the current version directly generates the bytecode. That said, we have Cairo 1 coming in a couple of months, and Cairo 1 does have something called Sierra — Sierra literally means Safe Intermediate Representation — and the purpose of Sierra is to make the language deterministic. Cairo is not deterministic, but Sierra will make it deterministic, because when you make a language for an L1 or an L2, for a blockchain itself, you want determinism. You want determinism for a very specific reason: you want to be able to prove failed transactions. Without determinism, you're screwed for that. For now we don't have it, and it's a DoS vector that we control right now by whitelisting contracts, but tomorrow, when we go for permissionlessness and then for decentralization, you obviously can't do that. So we are working on this safe intermediate representation, which will be deterministic, and this intermediate step will be able to compile to multiple backends — either to Cairo, or to something optimized for x86 for an execution layer, potentially something like that. In that case, though, we'd probably lose much of what you were gaining and what you wanted.
So I don't know if it's practical to compile to it — I don't know about CirC, though it's a great name for a Stark, by the way. I don't know the specs, so I don't know if you could do that.

Yeah. So we did once have an effort to integrate ACIR with CirC. We dropped it just due to priorities. From what I remember, it was fairly easy as long as we extended CirC to have these black-box functions, just because the CirC IR can't optimize inside black-box functions. So it's possible, we just haven't picked it back up.

Yeah. I think Noir is probably the newest, shiniest language — it just dropped — and I have lots of questions about ACIR. ACIR is Noir's intermediate representation, and it can compile both to R1CS and to Plonk. So how does this IR capture both of these very different arithmetizations?

Essentially, you have these arithmetic gates, which are just linear expressions that aren't bounded. For R1CS, that's quite simple to compile down to. For Plonk, because you usually have width 4 or some fixed width, we have something in ACIR which basically chops the gates into the right width for Plonk. For the black-box functions, the proving system just takes in the witness indices and fills in the constraints for them. So what it's doing is quite simple; it's not doing anything complicated.

Yeah, that makes sense. Brandon, did you have thoughts on the IR thing?

Yeah, well, I think I spoke about it a bit at the beginning; I don't have anything else to add.

Cool. So what I'm hearing is that by compiling to an IR you could stand to lose some very specific performance optimizations, unless the IR allows you to black-box things easily. Cool. I did have a question for Louis as well: what do you mean when you say Cairo is not deterministic and that your IR is deterministic?

So, Cairo today — proofs and ZKPs, and languages for them, are in general non-deterministic: you can create hints, and that breaks determinism. So Cairo today is non-deterministic. To be more specific, we actually have two flavors of Cairo. There's what I like to call pure Cairo, which is the one where you just directly prove a program within SHARP — SHARP being the shared prover infrastructure that we provide. That's pure Cairo, and it's what StarkEx is built on, and even StarkNet itself is built with it. And then you have what we call StarkNet Cairo, which does have a notion of state, a notion of syscalls, and plenty of things that you get from the OS itself. So you have Cairo for StarkNet and you have pure Cairo. The thing is that the Cairo for StarkNet cannot be non-deterministic, and that is for a very pure incentive reason: if you don't have determinism, you can make transactions that are not provable, meaning I'm the only one who knows that this transaction is valid, I'm the only one who knows the hints, and therefore the rest of the world cannot prove it. For instance, to give an example people like to use: I want to do a forced transaction from L1, the kind of thing optimistic rollups talk a lot about. In this context, without determinism, I can't — it's a DoS vector, because I don't know whether the transaction that's being pushed to me is provable by me. I don't know that. Another thing that is important is failed transactions.
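Stepping back for a moment to the ACIR discussion above: a rough sketch of the gate-splitting Kev described — hypothetical types and helper names, not the real ACIR data structures — showing how one unbounded linear gate can be chopped into width-4 rows by carrying a partial sum between rows.

```typescript
// A term of an ACIR-style arithmetic gate: coefficient times a witness index.
type Term = { coeff: bigint; witness: number };

// Split the gate `sum(terms) = 0` into rows that each touch at most 4 witnesses,
// chaining rows together through a fresh "partial sum" witness.
function chopToWidth4(terms: Term[], freshWitness: () => number): Term[][] {
  const rows: Term[][] = [];
  const rest = [...terms];
  let carry: Term | null = null;
  while ((carry ? 1 : 0) + rest.length > 4) {
    // One slot is reserved for the partial-sum output wire of this row.
    const take = rest.splice(0, carry ? 2 : 3);
    const partial: Term = { coeff: -1n, witness: freshWitness() };
    rows.push(carry ? [carry, ...take, partial] : [...take, partial]);
    carry = { coeff: 1n, witness: partial.witness };
  }
  rows.push(carry ? [carry, ...rest] : rest); // final row closes the sum to zero
  return rows;
}
```

Each row asserts that its own terms sum to zero, so the chain of partial sums reproduces the original wide constraint; for an R1CS backend the unbounded linear combination can be kept as-is.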
So, failed transactions: a transaction can fail legitimately — the price of this pool on the swap moved and now my transaction failed — or it can be malicious: I'm spamming the network with transactions hoping to get what I want, and if I don't, I don't pay anything. So I can spam the network, everyone has to run everything, and that creates a DoS vector. So for StarkNet we have to make Cairo deterministic — I mean, the version of Cairo for StarkNet will be deterministic — for that purpose. And to make it deterministic, we need to prove that the program was compiled into a deterministic version of Cairo through a safe intermediate representation called Sierra. Sierra compiles down to Cairo itself, and Sierra itself will be proven within StarkNet: when I deploy a contract, I will be proving to StarkNet that this contract was compiled using Sierra. This is what I mean by Cairo becoming deterministic in this context.

I see, thank you. Cool. So my next question is number three: when you were writing your languages, what features did you prioritize? For example, Noir is very Rust-like — they even have a version of cargo called Nargo. SnarkyJS is TypeScript. Cairo is also pretty high level.

Yeah, it's mixed right now — a mix of C and Python to some extent — but the new version will be very Rust-like, and it's coming in two or three months.

And, for example, gnark was just written directly in Go, Halo 2 is written directly in Rust, but PIL and Circom are closer to the metal.

I think Halo 2 is also close to the metal — to the circuit, I mean.

So, what are your thoughts on the trade-offs between higher-level and lower-level representations, and how did you decide on the designs that you did? Maybe you can start.

So there's actually an interesting story from the StarkEx experience. StarkEx was and is the first product of StarkWare — basically a centralized but non-custodial backend for dApps that need scalability. Customers are dYdX, DeversiFi, Immutable, and so on. The first version was written very low-level, literally in polynomials, and this thing does transfers and swaps. Then we started to work on Cairo in 2019, and it became production-ready in May 2020; the first case we actually used it for was the Reddit bake-off in June 2020. The funny story is that the performance of the system massively improved when we started using Cairo, because it was much higher level and we had a lot more flexibility in the way we could reason about the system and the way we could write it. Not only was there a gain in velocity, there was also a gain in actual raw performance. And when you do need low-level performance, we have these builtins — a builtin is the Cairo equivalent of a specific chip on your CPU, like one that does 64-bit modular multiplication. So if you really need optimization at the very low level for a specific operation — right now we did it for Keccak, we did it for ECDSA signatures, range checks and other things — we write those as builtins, and for the rest we just write in Cairo. So there wasn't really any trade-off, except that it's much simpler and nicer to write.
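Going back to the hints-and-determinism point from a moment ago, here is a minimal sketch of what a hint is — in plain TypeScript over a toy prime field, not Cairo syntax: the prover computes a value out-of-band and the circuit only checks it, so anyone who lacks the hint logic may be unable to re-derive the witness and therefore unable to prove.

```typescript
// Toy prime field, purely illustrative (2^61 - 1 is a Mersenne prime).
const P = 2n ** 61n - 1n;

// The "hint": compute a modular inverse out-of-band with the extended Euclidean
// algorithm. This code never appears in the constraints; only the prover runs it.
// Assumes x is nonzero mod P.
function hintInverse(x: bigint): bigint {
  let [a, b, u, v] = [((x % P) + P) % P, P, 1n, 0n];
  while (a !== 0n) {
    const q = b / a;
    [a, b, u, v] = [b - q * a, a, v - q * u, u];
  }
  return ((v % P) + P) % P;
}

// What the circuit actually enforces: x * inv == 1 (mod P). The constraint is
// easy to check, but without the hint you may not know how to produce `inv`.
function constrainInverse(x: bigint, inv: bigint): boolean {
  return (x * inv) % P === 1n;
}
```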
Yeah, it's interesting that a higher-level language actually improved your performance. I guess you're pushing a lot of the manual optimization to the compiler — the stuff computers are better at — and the commonly used, so-called precompiles you do hand-write, right? Let's see.

When I started to write Circom, the main purpose — as I said before — is that Circom is, in the end, a low-level language. The idea is that if you're writing Circom, you should understand what you are doing; you should understand what's behind the circuit. At the upper level, of course, you can do things in a high-level way because you can connect things. But the idea — and this is something very specific to Circom — is that there should not be any hidden constraint anywhere. All the constraints are there; you can check the code. Maybe you need to go inside circomlib or some place in the library, but you will find all the constraints there, and you can track them and see them. So you have full control of what you are doing. This, of course, is dangerous, I'm not denying that: you need to understand what you're doing, and I see people writing Circom who don't know what they are writing — that's not going to work, okay? We understand this is an issue, but this is what Circom is for: Circom is for people who understand what's happening underneath.

That said — and this links a bit with what he said — I think that in the future nobody will have to write circuits, because circuits are a very specific thing, and you will be able to write things in Cairo, for example, or in some other high-level language, or even in Solidity. With the zkEVM, in the end, you write in a language and the zero-knowledge circuit is built underneath. There are other projects, like the RISC-V ones, that are also building on top of that. What the zkVM brings to the world is that you can have any processor as a base — RISC-V, an EVM, a Cairo-like machine, or some specific virtual machine — and then you build on top of that processor. But the circuit is the processor; the circuit is like the hardware, okay? So maybe we can talk about hardware languages and software languages. Circom is a hardware language, and software languages should be normal software languages: C, Solidity, Python, or any other language that builds on top of that. I don't see a space — but this is just my opinion — for a specific high-level ZK language, because I think a regular high-level language should be enough to run on top of that.

Can I just add one thing to what you said about RISC-V? I tend to agree with the entire statement about high-level languages. There is one thing, though: I don't necessarily think that things like Python or Rust will enable you to write ZKPs tomorrow, because there is still a very different computing paradigm behind it, and those differences may still require something different — accepting that you're working on something different.

Yeah, probably, but at this point we are moving fast there. So I think it's just a matter of time and of how difficult it is.
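A toy illustration of Jordi's "the circuit is the processor" point — hypothetical TypeScript, not any real zkVM: the fixed circuit constrains one generic CPU step applied at every cycle, while the program itself is just data, so a proof of execution is the claim that state[i+1] = step(state[i], program) for every step i.

```typescript
// Minimal "the circuit is the hardware" sketch: the circuit does not encode
// your program; it encodes one generic CPU step, applied once per cycle.
type State = { pc: number; regs: number[] };
type Instr =
  | { op: 'add'; a: number; b: number; dst: number }
  | { op: 'jmp'; target: number };

// The transition function that the fixed circuit constrains at every cycle.
function step(s: State, program: Instr[]): State {
  const ins = program[s.pc];
  if (ins.op === 'add') {
    const regs = [...s.regs];
    regs[ins.dst] = regs[ins.a] + regs[ins.b];
    return { pc: s.pc + 1, regs };
  }
  return { pc: ins.target, regs: s.regs }; // 'jmp'
}

// A proof of execution asserts: state[i+1] == step(state[i], program) for all i.
// The "hardware" (the step circuit) is fixed; the "software" (the program) is data.
```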
You know, right now, if you want to prove a very, very complex language, that's one thing — and if the language is complex, it's also going to be difficult to write a compiler for it. But I think the way to go is high-level languages. I agree with you there; we'll build on top of your work.

Yeah. Cool. So I and my team have a very, very strong opinion about this. For background: the Mina protocol — we started building it in 2017, and for those who don't know, Mina basically is just an enormous ZKP circuit that powers everything. There's a recursive — linearly recursive — proof for compressing the blockchain that follows the rules of blockchains; there's consensus logic; there are verifiable random functions where fractional numbers are approximated using crazy calculus tricks; basically enormous, enormous programs and enormous circuits. And we realized very early on, in 2017, that we could not write the circuits manually. I agree that in some ways it's like programming directly with hardware, but I think you can also think of it metaphorically like writing assembly language: these days, when you're trying to write a really complex program, you don't use assembly, you use a higher-level language. So we built the first version of Snarky, which is an embedded OCaml DSL, in 2017, and we still use it five or six years later, and it's great, and we've learned a lot. We've taken those ideas and brought them to SnarkyJS, which is sort of bringing that into TypeScript instead of OCaml. We thought about whether we could build a high-level language that compiles into something and then implement a bunch of optimizations, but we thought the landscape of proof systems would evolve quickly — it has evolved quickly, and it continues to evolve quickly. So what makes sense right now — similar to what we were just talking about — is that you can't today efficiently use a regular language directly: you can't write a Rust program and have it run efficiently, for some definition of efficiency. But you can build a little library or framework embedded in a language that already exists. That's what we did with OCaml, and that's what we did with TypeScript. You can expose the lowest level — there are people who are experts at optimizing constraints for the proof system — and then you can build abstractions over that, and with types build structures and computations. This has worked for building the enormous system we have in Mina, and we see it starting to work as well for building really complicated applications with crazy recursion and all this stuff in SnarkyJS.

Yeah. I think I disagree with everybody.

That's okay, that's why you're here.

So, I think that using Rust, at least for our use case, would be a bit clunky for encoding semantics for privacy — private state and public state. It's possible, but I think for the user it would just be a bit clunkier. So maybe if you're not targeting the privacy use case, then high-level languages like Rust and Go might work.
As for TypeScript and languages like that right now: when we were designing Noir, it seemed like we needed to restrict the user from doing very specific things that, in TypeScript, they could just work around. One example I always bring up is if statements: you can't just write "if something, else something", because in TypeScript, or high-level languages generally, that works differently from a circuit-based if.

So in Cairo we actually do that at runtime — we don't evaluate both sides of an if statement, we only evaluate the path we take.

Of course I have a counter-opinion to that. I think the costs of using a custom language, in my opinion — in my team's opinion — severely outweigh the benefits. To take the if example: yes, you can't use the built-in TypeScript if within a circuit. But if you are computing a value that you're then using in a circuit, you can use if statements. And when you actually are inside the circuit, there are tools in the TypeScript ecosystem that can help us warn or error to the user — we can use ESLint to raise an error if you're using an if statement in a place you shouldn't be. Of course that won't be perfect, but we've found it actually works really well.

So actually I'm agreeing with both of you, and I'm going to explain what I mean by that. I agree that you want a specific language that helps you think about the computing model you're in, and at the same time you also want languages that are familiar to the people who are using them. One thing first, which I didn't say before: Cairo actually means CPU AIR, so roughly Cairo can evaluate any program — I don't know if that was clear to everyone here. Notably, what that means is there are two things you get that you don't usually get in a circuit. One is recursion — very easy recursion, you don't have to evaluate the whole circuit. The second thing you get for free is that with an if statement you only evaluate the path you're taking: when you have many, many ifs, you don't pay for all the branches, only for the one you're using. One consequence of this — one thing that we were able to do, or that other teams have built — is that people have started to write compilers from other high-level languages to Cairo itself. We have one right now, a compiler called Skyro, made by a team out of Switzerland, which compiles Idris — a functional language — to Cairo. For the story, if you want to use it, you'll be the first one to ever use it, so have fun. I've actually never found anyone who wants to use a functional language, so maybe there is one in this room. But it is a functional language, so go for it — Idris, I love Idris. Seriously, if you want to have fun, go for it; it's crazy. So what I'm trying to say is that I don't know if it's the ultimate path, but there is a middle ground.

Can I? Please, go ahead. Yeah. So, in SnarkyJS today, connected to Kimchi — this is powered by Kimchi, of course — we can also handle recursion, infinite recursion. And in SnarkyJS we've built an interface for it that's essentially the same as writing a recursive function: you just put your function in a JavaScript object.
That's the only difference. Whatever arguments you have, if you type them as proofs, then the system will know you are trying to do something recursive, and it'll do all the complicated shit for you. And in your code you can just say — if p is your proof — p.verify, and that sets up all the constraints for your circuit. Then on the if-statement branching thing: today, in any proof system I'm aware of, including the current version of Kimchi, you have to pay for both branches, and you can kind of approximate branching by multiplying by zero or one. We are actually working on an extension to the proof system that will allow you to do these kinds of arbitrary jumps in an efficient way. That, though — I'm not the one working on that.

How is that not a CPU? Like, basically running an arbitrary program? How is that not a VM, basically — that's my question. At that point you get essentially the power of a VM with the performance of a direct circuit.

Yeah, so it's kind of the holy grail. The intuition behind it is: there are lookup tables and custom gates and all these things in Plonk, and you can extend that to allow for arbitrary RAM — random access memory — and then you can do another fancy thing to get arbitrary jumps efficiently. For the details I'd have to connect you with a cryptographer on my team, but I'm told this is the path we're going towards.

I've seen that in your repo — the dynamic lookup tables. Right, yeah.

Jordi has been silent for a while.

No, no — in the end, it comes down to: what are you writing, hardware or software? If you're writing hardware, you need a hardware language. If you're writing software, my opinion is to just use a standard software language. Why? Because the problem is that you need the piece of hardware — the processor — that the software can compile to. But that is getting there, so this is the way to go.

Cool. I want to save some time for the audience to ask questions; we have about 15 minutes. Any questions from the audience?

We actually touched on hardware a lot during this conversation. It'd be interesting to hear from the different panelists what you actually see the future of ZK hardware to be and what direction it's going.

Not the kind of hardware we were talking about — we were talking about polynomial hardware, arithmetic hardware — but yeah, we can talk about real physical hardware as well. We're making a joke, but yes, on to hardware acceleration.

So I'm going to take a hot take here: StarkWare, at this stage, does not believe that we need it, as simple as that. The reason is that the software itself has been improving so fast, and the architecture of how you do things has been improving so fast, that at this stage we don't know what the future will look like. What I mean by this is that hardware development takes roughly a year and a half before getting to production, and a year and a half ago things were barely working: StarkWare's prover was running on-premises, recursion wasn't working.
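A small sketch of the two branching models discussed a moment ago — illustrative TypeScript, not any particular library's API: in a flat Plonk/R1CS-style circuit both branches are paid for and the result is selected with a 0/1 factor, whereas in a VM-style system like Cairo only the taken path appears in the trace.

```typescript
// Flat-circuit branching: both branches are computed; a boolean selector picks
// the result, so the constraints for both branches are always paid for.
// Constraint form: out = cond * thenVal + (1 - cond) * elseVal, with cond in {0, 1}.
function selectorIf(cond: bigint, thenVal: bigint, elseVal: bigint, p: bigint): bigint {
  return (((cond * thenVal + (1n - cond) * elseVal) % p) + p) % p;
}

// VM-style branching (the Cairo model described above): the execution trace only
// records the instructions actually executed, so the untaken branch costs nothing.
```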
So, for instance, one thing we are working on right now — we already do recursive proofs of our own proofs in production today — but one architecture we're not doing at the moment is trees of proofs. Right now, the only thing we do is what we call jobs. StarkNet, StarkEx, all of the StarkWare infrastructure uses SHARP, so all the customers, StarkNet included, are sharing the same proof, and everyone pays only the marginal cost of the proof. For instance, DeversiFi or Immutable right now have something like 70 TPS in their environment and use a significant number of steps within this proof — they do one job, and other proofs go in other jobs. But we haven't gone all the way into that tree architecture yet. So for hardware, things change so fast that we can't really say "this is the way to go". That said, I have a good friend, Omer Shlomovits, and the company Ingonyama, who are already working on this, making groundbreaking improvements in hardware for modular multiplication, or even MSM, which I have no idea about. He's not here today, but I'd let him answer that another day.

I have a slightly different opinion. When you use SnarkyJS for writing applications on Mina, or just in general, we encourage the proof computation to actually happen on the user's machine. This way you get privacy. And there are ways you can do part of the computation on the machine to get the privacy and do the rest on a server, all these things. But when you have a proof system with a lot of cool features in it, like Kimchi, it's slower than a proof system with far fewer features, and hardware acceleration would make it faster, so I think it would be cool. We're encouraging people with SnarkyJS to write circuits with recursion — with linear recursion or tree recursion, which has been working internally on Mina's mainnet for a year and a half. And now, on our decentralized testnet, you can do it with custom circuits and settle on chain and all that stuff, and it works. It's a little slow right now, and we're working on optimizing it. But I'm imagining a world — I think a cool world would be one in which the Ledger we all have, or something like that with a wallet on it, also has a SNARK accelerator companion chip or something. I don't think that's coming anytime soon, but something like that could be cool. And obviously, in the shorter term, taking advantage of GPUs, FPGAs, and silicon at any level would be helpful, because it makes these proof systems with a lot more features faster, so you can do kinds of applications you couldn't do if they were slower.

I would distinguish — this is actually very different — when we talk about zero-knowledge proofs, we can distinguish between small proofs and big proofs. Big proofs are for zkEVMs, for rollups, for when you want to do aggregation and this high-computation stuff. Small proofs are more related to something like identity, or a game, or something that you want to run especially on the client.
I think the client side makes a lot of sense for hardware, because of where you're building it: you make a big investment, but then it spreads to many mobile phones, to many places. So this makes a lot of sense for these privacy things, for things like gaming, for identity — just proving things about your data. I think hardware makes a lot of sense in that direction. For the hardware for the big things — and I'm talking about one specific project, the zkEVM, but it's representative — I'll tell you why we are interested in hardware. The idea is that you want competition in proving: there is a market for generating proofs. Here you can have a problem, which is that some provider has a lot of power and monopolizes the system. In the case of a zkEVM this is not a big problem in the sense that you can only build a proof of something that's already deterministic, so you cannot do crazy things — but you can degrade the network. The problem is that somebody with huge proving power, who is able to generate a lot of proofs, maybe just stops doing so, and then the network degrades. So for us, the way to avoid that is to have the best prover available, even in hardware — and when I say hardware, at least FPGAs or whatever, even ASICs if required. The idea is to have the best prover available open source, available to everybody, so that there is no monopolistic prover that can take over. That's why it's so important for us that the prover is open source, that the prover is available to everybody, and not only that, that the technology to build the prover is available to the community. This is the guarantee we have that the network will not be degraded.

Well, on top of this, in the context of StarkNet, the way we're going to approach this proving market is by having a kind of proof-of-stake mechanism where we have a leader, who would be the one proving, and another one in cascade — the second one comes in if there is degradation. The other thing I want to say is that people have been focusing a lot on acceleration because SNARKs are very slow, and they're slow because of the operations — they're very sensitive to the elliptic curve and so on. And at least Jordi and I are working more on STARKs, which are much faster and much easier to run on regular servers. So that may be a difference in our take on the machines.

I'm going to answer the question I wanted to hear instead of the one you actually asked, because I don't really know much about hardware acceleration. In terms of programming for hardware versus programming for software: I think proving systems are changing too much for us, at least, to focus on being so close to the metal, just because the metal might change next year or the year after. I will say that privacy is a different space: if you have privacy, you need to think about how to accelerate the prover on the client, because we usually use wasm, whereas if you don't have privacy, it's really FPGAs and GPUs — how do you accelerate this massive beefy machine?

I guess, just on that discussion about markets for decentralized provers —
— I just have to say this, because too many people have been surprised by it at this conference; sorry if it's a little off topic. Mina has decentralized sequencers, decentralized provers, a marketplace for producing the proofs that are needed by the system, and this has been in production for a while. We encourage more people to look at it when they're thinking about their marketplaces and everything.

Cool, great answers. Thanks, guys.

I have a question, maybe continuing on the hardware thing. A lot of people's compilation target now is actually Solidity — a lot of people are moving onto Ethereum, and this is Devcon. So the most artificial constraint that I'm seeing is probably the fact that we have one precompiled curve to work with: we can only work with BN128. I'm wondering what your intuition is — why don't we have more curves to work with, in your opinion? It seems like it would release a lot of bottlenecks.

OK, I can make two comments. The first one is on the pure application level and why people should care about what you're describing. I think one of the biggest original sins Ethereum committed when it was originally built — in retrospect, right, we couldn't know — is to have had EOAs. Having EOAs was a mistake. But you're right: having new curves — like being able to use the P-256 that's on your phone, or even being able to do RSA verification, just to throw out random ideas — would unlock a lot of use cases. Just to give you an example, there is a company working on gaming that is using that kind of account abstraction to verify P-256, using the secure enclave of your phone to sign for gaming. That's awesome. But that's actually not, for me, the biggest blocker for verifying on the EVM. The main problem is that the computing paradigm of the EVM is very different from what you have in a ZKP: bit and Boolean logic is cheap on the EVM and expensive in a ZKP, right? So that's the starting point. And while a lot of teams are looking into proving the EVM — Jordi can talk more about that than me — we didn't make that choice, because in our opinion we were optimizing for performance of the proving system; and now that we have that, can we make it backward compatible? So we have, in our case, Warp, which is tooling that enables you to transpile from Solidity to Cairo. The practicality of it I can't really judge — no one has really used it in production — but they recently announced that they managed to transpile Uniswap V3, which is a significant thing. And there is an existing effort right now to rewrite the EVM in Cairo. I don't know, once again, if it's a PoC or if it's for fun; maybe it will be practical, but until I see the numbers for proving it, I won't believe it.

Going to your question — why not other curves? I think maybe you should ask the EF people, but what's clear is that BLS12-381, at least, is a must, and it will be introduced sooner or later in Ethereum, among other things because of signatures — all the staking stuff is done with BLS. And also, at the beginning the BN curve was thought to be safe, but then something was discovered, and it's less safe than we thought.
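A back-of-the-envelope illustration of the point above about the mismatch in computing paradigms — the counts are rough, illustrative estimates, not measurements from any particular proof system: a 256-bit XOR is a single cheap EVM opcode, but inside a field-arithmetic circuit both operands must first be decomposed into bits, every bit constrained to be boolean, and the XOR expressed bit by bit.

```typescript
// Rough constraint count for one 256-bit XOR inside a field-arithmetic circuit.
function xorConstraintEstimate(bits: number = 256): number {
  const booleanity = 2 * bits; // b * (b - 1) = 0 for every bit of both inputs
  const decomposition = 2;     // each input must equal the weighted sum of its bits
  const xorPerBit = bits;      // out_i = a_i + b_i - 2 * a_i * b_i
  return booleanity + decomposition + xorPerBit; // ~770 constraints vs. one EVM opcode
}
```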
So there are some doubts about the BN curve, and I think that sooner or later Ethereum will have BLS. There have been some efforts — I know they stalled at some point. And here, for example, we are implementing the full BN pairing right now: we don't have to compile anything, we just take the code and put it there. Doing that has been an effort, and what are the really difficult pieces to add? We were able to build the EVM, but there is one piece that's complex — we'll do it, but it's really difficult — and that is the precompiled contract for the BN pairing. So complicating the layer one's EVM is not a good thing. Of course there were some mistakes when the EVM was created: wasm was not even an option at that point, just so people realize, and ZK — nobody knew about it, it didn't exist. But what I've been hearing, and what I see, is that the EVM should tend towards simplification, not towards getting more complex. And I think the people working on Ethereum share this feeling: the EVM is probably too complex and we need to simplify — the gas model, some of the opcodes. If you look at the proposals that are coming, they are more about simplifying the EVM than extending it. That said, I think that BLS is the most needed.

Yeah, so similar to StarkWare, we started without EVM compatibility in mind for Mina, SnarkyJS, and our proof system. But differently from StarkWare, we didn't start with performance in mind, because we're not trying to build a ZK-rollup scalability solution. We were like: okay, there are all these cool things you can do with ZK — privacy and recursion — and we want to build a proof system that lets people take advantage of all of them. So in the same vein as what you're talking about, there are all these cool things you can now do in our proof system and on Mina, and we're working on an Ethereum bridge so that the Mina state can be fully verified from within Ethereum. The idea is that we want to bring all these great features to Ethereum developers — and I guess people are writing frontends in TypeScript anyway, so hopefully it's not too bad. Rather than trying to put all these things inside the EVM, it's: okay, let's use efficient elliptic curves and all these things to do the interesting privacy and recursion stuff, and then wrap it up in a nice present that can be verified quickly on the EVM.

I guess my question is because standards evolve, and these precompiles are about as close to the metal as it gets in terms of zero-knowledge standards. Sometimes the constraints from the standards come from things that are not necessarily very efficient, and eventually, downstream, you don't understand how you got to something that is using a certain aspect. A lot of the downstream decisions are being taken because of that, right? You have Baby Jubjub because it fits the prime field, and it could be more efficient. I'm asking more from the philosophical angle.

At some point you need to live with it — that's the point. You can wish — we could wish, and we did many times — but the reality is that you won't get it. We may all be convinced here that this is the future, but maybe all the people outside are not. And so it's a consensus.
We're only willing to agree on what everyone thinks is good. So I tend to agree with Jordi: I'm actually much more in favor of simplifying the network than trying to bring in more things just to make our lives simpler right now. And yeah, that's basically it.

Yeah — look, in the zkEVM circuit we are using a different prime field, the Goldilocks prime field. We are building a STARK, we are building recursion of a STARK, we are doing a lot of crazy cryptography in there, and only at the last moment do we build a Groth16 or a Plonk proof. But that is the last piece; it's like the adapter, if you want, to the network. So maybe from the EVM you just need one thing: hash functions. How many hash functions does the EVM require? Well, we have Keccak, we have SHA-256, and recently Ethereum added Blake. Is that necessary? Maybe SHA-256 alone would have been more than enough from the beginning. You could argue that the precompiles could have made things cheaper in the first place, or ask why it is so much more efficient to have this or that precompile — there are many things you could discuss. But why is this possible right now, and why has this changed from three or four years ago? It's because we have these validity proofs: we can compute things off chain and validate them on chain. So we just need to care about how we validate these things on chain, and BN128 — or, if you want, BLS — is more than enough. So probably, if Ethereum had to be done now, it would have fewer cryptographic primitives and more basic things, because we can accelerate the rest off chain and validate it with a validity proof on chain.

Just to conclude on this: if you want to have much more impact and make one change — it would be very ballsy to change Keccak — Poseidon might be more helpful.

No, because: Poseidon in which prime field? That's the thing, you know. And I've been thinking — all of us have been thinking about this — why Poseidon is not in the EVM, or why we didn't get MiMC at some point in the EVM. But as things evolve, we see that this is less important, because these computational proofs are getting much faster and much better, so we need it less and can sacrifice a little bit of optimization. Right now, in the zkEVM circuit, we can validate 500 Keccaks in one proof; we can validate something like 100 or 200 ECDSAs, with the Ethereum curve, in there; we can do pairings inside there. It's not optimal, but it's doable, and it gets more efficient all the time. And once we have this, we require less help from the L1. That's why I think this tendency to simplify is a trend.

I also feel like proto-danksharding is moving in that direction: they introduced that precompile for KZG point evaluation, so you can verify KZG proofs, but beyond that it's all blobs — it's all delegated to the rollups.

Well, we are over time, so there's one last question. Okay, ask it, ask it.

Yeah, I just wanted to say that I see a lot of sense in breaking this EVM ice and creating DSLs, not only for opening new areas of innovation. And my question is basically to Mina: does it make sense for you also?
For example, to collaborate with Cairo, maybe, since it's a better model for ZK than imperative languages — and basically to work on better tooling for verification.

I think we already are, right? I mean, we have someone at O(1) Labs, which is the team that incubated Mina, looking into getting Cairo to work with Kimchi.

Yeah, and there is even someone in the room who took her work and made it work with Winterfell. You can look at the guy in the very nice blue shirt over there — Max, shout out. Max has a practical Cairo verifier and prover in Winterfell today. To be honest, this actually goes back to the question of simplifying stuff: we now have L2s that can do things, so let's use their advantages to do what we couldn't do before. For example, there is a team, maybe in the room over there — Snapshot X — working right now to do L1 voting using the L2 to make it cheap. That's something you couldn't do before. That's the kind of thing we can do with L2s now that we couldn't do on L1, and we don't have to change the L1 for it. So: simplify, please.

Cool, thank you to all the panelists.