But I see these layer twos not just as a scalability solution, but as a way to extend what Ethereum can be, could be, or perhaps should be. So I'm very delighted to have with me three amazing people who are running what, in my biased opinion, are perhaps the most unique and coolest layer twos in the space. I would love for the three of you to introduce yourselves and talk briefly about what you're building before we get into the cool parts of your systems.

Yeah, hey everyone. My name is Joe. I'm one of the co-founders of Aztec, and I do product. Aztec is the privacy layer for Ethereum. We currently have Aztec Connect live, which is like a VPN for all of DeFi, and we're working on a fully private VM with private smart contracts.

Hi, my name is Louis. I'm the ecosystem lead at StarkWare; I've been at StarkWare for over three and a half years. StarkWare is building scalability solutions using zero-knowledge proofs, more specifically STARKs. Our first product, which you may have used, is called StarkEx. It's the one powering Sorare, Immutable, dYdX, DeversiFi, and plenty of others that are going live soon. And for the last year and a half, almost, we've been working on StarkNet, which is a general-purpose layer two on Ethereum, enabling scaling using a Turing-complete language called Cairo.

Hello. My name is Nick. I'm the CEO of Fuel Labs. To describe Fuel in a simple way, you can think of it like a layer two. We launched the first optimistic rollup on Ethereum. It was a fully trustless optimistic rollup, no multisig or anything. What we've built is a new kind of transaction processing system. It can take the form of something like a validium, or something like a layer two. We use UTXOs for this, a different paradigm from typical Ethereum systems. We have our own language called Sway, which is used to target this system, and we have our own tooling.
We have basically a revised version of everything Ethereum offers. And, yeah, we're here to help Ethereum scale, to incorporate all of the lessons we've learned from production Ethereum over the years, and to provide Ethereum a new and different pathway to scale that isn't the EVM toolchains and EVM processing systems.

Very cool. All right. So we have a good mix of people here: two companies building with zero knowledge, two companies using UTXOs. But all of you have such unique branding and products. So let's take a very first-principles approach. Before I ask why you chose non-EVM, can I ask: why should blockchains not just use normal VMs, like, I don't know, x86 or ARM? Why is it so necessary to build something new?

I think I'll take a stab at that one. So x86 and these other instruction sets are really designed for different computing architectures. When you're designing x86 or ARM, you're designing them to target a hardware system, and that comes with its own criteria, its own restrictions and physics. Whereas with a blockchain virtual machine, you're designing for a different kind of physics. You're pricing every operation, you're designing for adversarial scenarios. And because of this, you end up with odd design constraints that really put you in a corner, and you have to really understand those physics to design a good virtual machine that will be great both for processing and for security. So it really is kind of an art, designing these systems, and we've learned a lot since the inception of Bitcoin in 2008 all the way through to now. Basically, those instruction sets are for different purposes; virtual machines for blockchains are categorically a different thing.
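The "pricing every operation" point can be made concrete with a toy sketch. This is purely illustrative — the opcodes and gas prices here are hypothetical, not any real VM's schedule — but it shows the core idea: every instruction debits a hard gas budget, so adversarial or runaway programs halt deterministically, which is a constraint x86 and ARM were never designed around.

```python
GAS_COSTS = {"PUSH": 2, "ADD": 3, "MUL": 5, "STORE": 100}  # hypothetical prices

class OutOfGas(Exception):
    pass

def execute(program, gas_limit):
    """Run a list of (opcode, *operands) tuples under a hard gas budget."""
    stack, storage, gas = [], {}, gas_limit
    for op, *args in program:
        gas -= GAS_COSTS[op]  # meter *before* doing the work
        if gas < 0:
            raise OutOfGas(op)
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        elif op == "STORE":  # state writes are priced far higher than arithmetic
            storage[args[0]] = stack.pop()
    return storage, gas

prog = [("PUSH", 6), ("PUSH", 7), ("MUL",), ("STORE", "x")]
state, gas_left = execute(prog, 200)  # succeeds: state == {"x": 42}, gas_left == 91
```

With a budget of 50, the same program runs out of gas at the expensive `STORE` and aborts — the adversarial case a hardware ISA simply has no concept of.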
Now, you can do it, but you lose some of the nice things, whereas if you design it from scratch you can basically design a better world. So, yeah.

I was just going to add: I think compatibility is one of the main reasons we have to limit the feature set on blockchain VMs. You have multiple clients all trying to get to the same state update, and that problem doesn't exist in the world of Intel chips. So it's a much harder problem, and that usually means we get a limited feature set, like we have in the EVM.

I don't really have much to add. The technical aspect was covered very well. Blockchain VMs as an art is really a good way to put it.

So what are your expectations from a blockchain VM, then? What are the things we've learned from the EVM? What are the things you wanted but couldn't get, that made you build your own?

I think a starting point is probably that the more complex the VM is, the harder it is to scale. We're seeing this with people trying to prove EVM computation: you need giant data centers to actually construct that proof. So a simple, maybe tailored, VM feature set is probably going to result in more scalability. That's maybe the first point.

And what are your expectations from a blockchain VM — things you couldn't get from the EVM?

Yeah, so the thing is that what we expect from a blockchain VM changed significantly between the origins of Bitcoin and Ethereum and what we know now. One of the main differences is that we now realize more than ever that we need scalability, and we now have tech that gives us scalability while preserving the core principles of blockchain, which are, you know, trustlessness and verifiability.
And before — I'm going to toot my own horn obviously and talk about ZK here — before the arrival of ZK as a practical technique for very fast verification, we just went for the simplest thing, which was the EVM. Now that we're looking at this new tech and this new knowledge we're gaining — you know, what Fuel is doing, what Solana is doing, what other chains are doing — we're bringing more top-notch computer science to it, which provides more of the feature set we're looking for to build good products, good dApps, on top of blockchains.

Very well said. Yeah, I guess over time our expectations have changed quite a lot.

Yeah, I would say so. Coming from, like, 2015-2016 blockchain to now — I've been using Ethereum almost since it started, so for me it's been a long road — I would say the expectations have completely changed. But just as well, the community didn't fully understand all the design constraints when they were putting it together, and there were decisions made over time, particularly with the kinds of architectures Ethereum chose, that ended up being really costly for compute and for all the kinds of design potential you want. And backwards compatibility was something that wasn't really part of the picture, and I think it ended up locking us into a design that wasn't really informed by what could happen if we kept it, and so we're just sort of stuck with it. So the expectations are that you design a safe VM, a virtual machine that can provide all the behaviors we like about Ethereum, but that also opens up a lot of new kinds of designs we currently don't see and would like to see.
And then on top of that, designing for a lot of different scenarios: if you could have done it a different way, factoring in all the research we now have, you could basically create a new reality for blockchain that I think is much stronger than what we have now. So, yeah.

Yeah, I was going to say, one example is probably the curve that we all sign over. We've all got a seed phrase, and it hasn't really changed in a while. People have tried doing smart contract wallets, but if you control the VM — everyone in this room's got an iPhone or an Android with a TPM in it — you can actually build that into the VM and help get adoption. So recreating all of the EVM, I think, would be a mistake; focusing the feature set on adoption would be a good thing.

And, you know, specifically about the change we're expecting now from the VM, there are a couple of examples which are so significant, so clear, that the goalposts have moved. Aragon, when they launched, was this massive OS for DAOs, very well filled out, to the point that it was way too expensive. Back then, no one cared. When they started, using a whole block for yourself was like, who cares? No one's here. And now you see all those people on Twitter gas-golfing to optimize, to a point which is just ridiculous, right? One thing we've observed, specifically in the context of StarkNet and the new language we have, is that when you provide a new feature set and new capacity to the chain, you get a burst of creativity — new things that didn't exist before. Joe here was talking about the curve we're using, and the fact that we have EOAs. EOAs were a mistake, in retrospect. And the problem with EOAs is that they put us in a local equilibrium.
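The account-abstraction idea Joe and Louis are describing — making signature validation programmable per account instead of hard-wiring one curve into the protocol, as Ethereum EOAs do with secp256k1 — can be sketched roughly like this. All names are illustrative, and HMAC stands in for "any verification scheme": a real wallet would plug in secp256k1, the Stark curve, or a phone's secure-enclave key here.

```python
import hashlib
import hmac

def make_hmac_validator(key: bytes):
    """Build one possible validation rule; any scheme could be substituted."""
    def validate(tx: bytes, signature: bytes) -> bool:
        expected = hmac.new(key, tx, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)
    return validate

class Account:
    """An account that carries its own validation logic, not a fixed curve."""
    def __init__(self, validate):
        self.validate = validate  # programmable, per-account

    def execute(self, tx: bytes, signature: bytes) -> str:
        if not self.validate(tx, signature):
            raise PermissionError("invalid signature")
        return f"executed: {tx.decode()}"

key = b"enclave-key"
acct = Account(make_hmac_validator(key))
sig = hmac.new(key, b"transfer 5", hashlib.sha256).digest()
acct.execute(b"transfer 5", sig)  # accepted; a wrong signature raises PermissionError
```

The design point is that the *account*, not the chain, decides what counts as a valid authorization — which is what lets one wallet accept an iPhone enclave key, a Stark-curve key, or a session key without a protocol change.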
There is no way smart contract wallets will take off on Ethereum today, because it's always cheaper to use EOAs; it's always cheaper to use my private keys. So there's no incentive for dApps to build their applications to work well with WalletConnect, or just to work well with smart contract wallets. Some even just ban them because they're smart contracts. In the context of StarkNet, because we don't have EOAs — we only have smart contracts, and therefore only smart contract wallets — we have people using the native curve of the network, called the Stark curve, and now we also have people building on the trusted enclave of your iPhone. So using the same wallet, you can use your iPhone, you can use any curve you want, you can use your browser that has a full curve in it. Once you unlock that limitation, new things start getting created basically right away.

Yeah, I agree with you. I think asking engineers to gas-golf instead of thinking about innovative things to do is a big waste of time, and it has probably prevented a lot of innovation that could have happened in this space.

Yeah, just to speak to that too. I've done my share of gas golfing — you can check my GitHub; it's sort of an art for me. But it was more of a therapeutic thing than a good use of my time. Mind you, that was before I worked on Fuel. The thing is, you can gas-golf all you want, but it's never enough. Because when enough people use the system, it just gets congested, and then you're back where you started. And you keep asking the same questions: how could we do this? How could OpenSea run this way? And the thing is, you lock yourself in so much just trying to support this thing.
Which, by the way — there's some controversy around it — but the EVM was designed a little quickly and put together a little fast. At DevCon 1 there were some conversations about that particular thing, if you want some spice. But anyway, the reality is, I've been sitting here looking at this machine for years, years and years. And it's held a lot of different kinds of designs back because it can't move forward — it really is very, very difficult to move forward. So the fact that these teams are bold enough to still be part of Ethereum but try to do it differently, just for the sake of getting to global adoption, I think is a sign of how good the Ethereum community is. We're not afraid to challenge what people have made a culture of the system. And I think that's a very beautiful thing. It's not all just Vitalik and Gavin's design that gets to run the show; we can try other things. And with layer twos, we now can. So it's great.

Yeah, exactly. Layer twos should be seen as a way to extend everything we can't do on Ethereum, not just as a cheap-transaction machine.

Sorry, on that — everyone keeps saying it's about cheap. No, it's not about cheap. Really not about cheap. I can give you a few examples of things that are real, that you could only dream of anywhere else than Ethereum. For instance, smart contract wallets specifically are an important one, because crypto will not get global adoption if everyone has to keep a key. If the EOA mechanism — having a private key — remains, we're basically going to go back to the financial system, where we have five global custodians. And that's not what we want.
So on the smart contract wallet side, here's one thing we're going to unlock. For instance, Argent — who are around here, actually — are working extensively on StarkNet, and they're building a plugin system, meaning you can actually install an app in your wallet. So, for instance, every time you spend, 10% goes to savings. Or every time you want to play a game, you don't have to sign every transaction: you can open a session key that lasts six hours and is only authorized for a set of operations. So really, cheap transactions are an afterthought; they're just a requirement. And honestly, I don't think it's going to last long — L2s themselves will not be cheap. I have a theory that you can't have a cheap, successful economic layer, because there is no reason in the universe that my ticketing app, where I can trade the NFT of my ticket, should be impacted by the fact that the price of ETH dropped by 20% and all of a sudden my app doesn't work anymore. That just won't work.

That's insanely refreshing to hear. Maybe I want to pivot over here to what enables all of your architectures to actually do these things. I think everyone here would love to know more about the architectural decisions you've made that enable these new paradigms.

Okay, so — we all get what you're saying: we can all shill a little bit, just a little bit for each project. All right, I'll try to keep my shilling contained, just some highlights. So, first of all, the Fuel VM is highly inspired by the EVM. It tries to incorporate all the lessons we've learned with the EVM over the years; it doesn't leave that behind. We're not arrogant enough to rebuild everything from nothing. So that's the first thing.
We've basically taken all the great EIPs, all the great research the Ethereum community and other blockchain communities have done, and put it into a virtual machine. Now, we've made some very interesting decisions, and they all impact the kinds of things you can build, the kinds of experiences you can create, and the scale you can achieve with this particular system. Some highlights: it's UTXO-based, that's the first thing. Second, you get smart contracts just like Ethereum; there's no loss of any behavior for a developer. Third, scripts. In Ethereum you have to go through a smart contract to make multiple calls. It's ridiculous; it never should have been that way. So we have scripts. And then we have account abstraction via what we call predicates. This allows you to send to the hash of a script, and essentially, if the script returns true, then you can spend the output. That gives you all kinds of possibilities. For example, we can support signing with a Solana key over a UTXO that is USDC from Ethereum. That's pretty nuts, you know, and that can happen in, like, an output. Some other cool stuff: we've also redesigned all the processing within the virtual machine, such that when you make a smart contract call with Fuel — the engineers will know this; I'm sorry if you're not an engineer, just bear with me — you don't need to re-serialize the data between smart contracts. It's all in one chunk of memory, but the memory is segmented per call frame. What that does is it allows you to say: I'll write 5,000 things to memory here, I'll call this contract over there, and that other contract can just reference any one of those items.
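The predicate mechanism Nick describes — funds locked to the hash of a script, spendable by whoever supplies a matching script that evaluates to true — can be sketched in a few lines. This is a toy illustration, not Fuel's actual implementation; hashing a Python code object stands in for hashing real predicate bytecode.

```python
import hashlib
import marshal

def predicate_address(fn) -> str:
    """The 'address' funds are locked to: a hash of the predicate's code."""
    return hashlib.sha256(marshal.dumps(fn.__code__)).hexdigest()

def try_spend(locked_to: str, predicate, witness) -> bool:
    """Spending succeeds only if the supplied script hashes to the lock
    AND the script evaluates to true on the provided witness."""
    return predicate_address(predicate) == locked_to and bool(predicate(witness))

def my_predicate(witness):
    # In a real system this might check "witness is a valid Solana signature"
    return witness == 42

lock = predicate_address(my_predicate)

try_spend(lock, my_predicate, 42)  # correct script + witness: spendable
try_spend(lock, my_predicate, 7)   # right script, predicate returns false
```

Supplying a *different* script also fails, even if it returns true, because its code hash doesn't match the lock — that's what makes "send to the hash of a script" safe.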
So you can imagine trading engines and things like that would love this, because you can write so much to memory. You can really lean on memory, and on compute, which we have a lot more of, rather than storage. We give you far more options that are not storage-oriented. These are all part of the processing model. Lastly, for the last bit of shill: because it's a UTXO model, you get all the nice things of Ethereum, but you also get full, complete parallel processing. So you basically get the benefits of some of these newer parallel-processing ecosystems, but because the Fuel VM is designed to be fraud-provable, it can also be a rollup or layer two on Ethereum directly. And on top of that, it has trust-minimized light clients, so you're not leaving behind the nice security properties we have with Ethereum. That's my shill on the VM. Yeah, you go.

Okay, ours is also a UTXO model — big fans of UTXOs. They're very difficult, but they enable some important features which the account model lacks, and in our case that's privacy. It's very hard to do privacy in an account-based model, because every time you update an account, you leak which account you're updating. In a UTXO model we can create and destroy UTXOs, and they all look random, so it was one of the key design choices we had to make to get privacy. Then, going deeper into the VM, we also have account abstraction built into Aztec: we have this concept of a viewing key and a spending key. When things are encrypted, the set of people who may need to see the data is different from the set who can spend it, so being able to control that has been super beneficial for our architecture. Today the VM only supports circuits, or programs, that we've written, but we're expanding it with a concept called Aztec 3 — Mike from Aztec is actually talking about it tomorrow on one of the other stages. The main improvement there is that every program is actually a client-side
generated ZK-SNARK. We've built a language called Noir which enables developers to write these programs. Instead of sending a program to be executed on a node, users will actually compute the SNARK in the browser — proving, in Noir, the correct running of the program — and then send that packaged-up kernel circuit to a rollup provider. That means you get really cool features like code privacy, confidentiality, and anonymity. So we're excited about that.

So in the context of StarkNet and StarkWare, we also created our own VM, but focusing only on scaling, similarly to Fuel. I just want to make a distinction, in terms of scaling, between the Fuel approach — which is basically parallelization, enabling the execution layer to do more — and the ZK approach, which is basically requiring less from the verifier. Those are completely orthogonal, so you can get both, essentially. The reason I'm making that distinction is that, in some way, the way StarkNet scales is by saying: you know what, validators can have stronger machines than the rest of the world. Today, when you look at Ethereum or any blockchain like Bitcoin, the scaling is limited by the weakest machine in the network. When we compare TPS or whatever — the throughput of Bitcoin versus the throughput of Ethereum versus the throughput of Solana — we're comparing apples, oranges, and Ferraris. And the reason is very simple. Bitcoin targets a Raspberry Pi, which is roughly the cheapest computer you can find. Ethereum says: okay, this is a bit too constrained for real-world applications; maybe we can target, say, a $2,500 machine — for instance, I believe it synced completely on my laptop, my 2021 M1 MacBook, a month ago. And Solana is like: you know what, a $2,000 machine is still very constrained — companies pay that for a flight for their employees to Bogotá — so maybe we can require a $2,500 machine per month. That's a practical server cost for an entity, for a
corporation. So we are really not comparing the same thing. When you look at the scaling limitation of all those blockchains, it's the weakest point: what is the minimum machine the network needs to have? And when you use a regular, traditional execution layer, without using cryptography to scale it, you don't really change this symmetry: the validator who makes money still requires roughly the same machine as the guy verifying in his garage. Regardless of whether this miner or validator spends millions of dollars at stake, or millions of dollars on machines, he will run on roughly the same laptop you have at home. ZK breaks that parallel. All of a sudden, it doesn't matter what kind of machine the validators have: I can verify it on my phone in a millisecond. They can have a data center; I can still verify it on my phone. So StarkWare created its own VM, called Cairo, because the problem we had with existing VMs is that they are optimized for different computing paradigms than ZKPs. The best way to explain it: you talked before about x86. Your regular VM is optimized for your CPUs and your transistors, and transistors know one thing very well, which is Boolean logic, bits and bytes. The thing about a ZKP environment is that you're working in an arithmetic setting, where the base element you work with is what we call a field — basically a big uint — and the cheap operations you get are multiplication, addition, division, and subtraction, while Boolean logic is expensive. So you turn the model on its head. There are other differences I could expand on, like non-determinism, but roughly speaking, to make a chain that is cryptographically verifiable using ZKPs — validity proofs, whatever you want to call them — you prefer a VM that is optimized for that computing paradigm. And this dimension, as I
said, is orthogonal to the execution layer. So StarkNet right now basically takes roughly the same structure and execution model as Ethereum, with a few distinctions: we're trying to do optimistic parallelization, we're looking into different data structures while keeping Ethereum's data structures for now, and as new state-of-the-art work comes along, we look into what others are doing, because it's cool. What matters is the separation: StarkNet is focusing on the separation between the validators and the rest of the world, and then afterwards we'll optimize the execution layer to provide the throughput people expect.

I'll just say one last word, because node requirements came up. Just one comment: we are in Latin America, and this is a new place for Ethereum to be — I've been through a bunch of these events; there were a bunch in Europe, and so on. With Fuel, and the way we interpret node requirements, we want people here — not just people in Switzerland — to be able to afford to run a node in this global peer-to-peer network. Not only for their own research, for building, for interoperating with the network: we want them to be able to afford it for the global security of the system itself. So with Fuel we've been designing the best system we can possibly think of, but we're also making sure of the node requirements, which end up being really important, because they really dictate how much throughput, how much processing you can put through the system. I think this is a very key factor. For us, we would like someone in Colombia to be able to actually verify this, and not have to pay an enormous sum — like someone's whole year's wages, or whatever it might be — just to run a node.

I mean, to that point, the genius of having rollups like the ones you all are building is that not everyone has to run
their own node, so that really adds up.

Yeah, and I think it gets worse for privacy, because there's a censorship component. If you restrict your nodes to AWS, Google Cloud, and Microsoft, quite quickly you don't have a decentralized privacy network — you have FAANG. So for our node requirements, we do the same as Fuel, and we have to think about how we get it working on a laptop, how we get rollups actually being built in a peer-to-peer network. Actually, some of the other chains are doing a really good job here — Mina's got a model which is kind of like a federated prover — so I think there's a lot to learn from some of the other chains as well.

And by the way, you said that not everyone has to run nodes — I 100% agree with that. You should be able to run a node on your phone. That's the target; that's what we should achieve. Maybe it's not practical, and the fact that it's not practical is irrelevant. The fact that it's a goal is what matters; that's what drives us, that's what pulls us toward one point. Ideally, in the future, Infura would be made redundant — to some extent, of course — or at least, on your phone, you should be able to verify the network locally.

That makes sense. So, we spoke about all of the UTXO stuff, and I know you mentioned how Fuel's parallelization and the verification work you're doing are orthogonal. We over here are obviously all Cardano maxis, and we were all obviously quite sad when Cardano couldn't make it work with UTXOs. How are you guys making possible what Cardano couldn't? And I guess, for Louis at StarkWare: when UTXOs?

Yeah, I can go first; I think it's the elephant in the room. Currently, an Aztec client has to sync every single UTXO, test it, see if it's your UTXO, try to decrypt it. That doesn't scale. At the moment we use brute force, and we have some very advanced multi-threaded WASM pushing the browser right to the edge of what
it can do, and that gets us the throughputs we can think about today. To get to world adoption, we actually have to look at the network layer of privacy. So we're moving to using something like Nym, so you can request UTXOs from a more centralized data store without revealing who you are. And if the UTXO is completely random, when you request it through a network privacy layer, you get the same anonymity as a full sync. So there are ways to do it, but it requires the whole privacy stack, in our case, and some pretty advanced web-browser computation.

So I'll address that one pretty simply. Basically, this is one of many things Charles has done to damage the reputation of something completely innocuous — just for some spice for the panel. Yeah, just a little spice. The main thing is that Cardano's model, at least the way we interpret it — and this is sort of how we think about it too — implied a certain kind of determinism across the system that was basically blocking, or bottlenecking, how they were using UTXOs. For example, take something like Uniswap — and again, this is just my read, so let's fight on Twitter about it or something. You can't really have it, because in their model you had to sort of sign off on the changed state. But what if you don't know the changed state? What if there's a bunch of people in front of you and behind you in the mempool who are all manipulating that one thing? Then, when you produce the resulting UTXO, you don't know what it is. In their model they couldn't do that. So it caused this issue where, if you used Uniswap, there would be one transaction per block for that one app, because you didn't know what the state was, so you had to use it one after the other. Obviously this is horrible. Imagine one TX or whatever per block for
Uniswap on Ethereum — we probably wouldn't even be able to do something like that. So basically, the reality is that that was a design decision of the whole system, and it doesn't necessarily relate to UTXOs per se. UTXOs are just a way to notate and define the transaction model, which is something you can do in various ways. With Fuel, if you have a smart contract as an input, you can have, say, a Uniswap-like system: that's one input, and we basically have an output. So it is noted that the state changes, but, like Ethereum, there is this kind of reasonable malleability of what could happen under the constraints of the system. The output is notating that there's a change, but under certain constraints and certain conditions — similar to what we expect with Ethereum and Uniswap: when you use Uniswap, you don't always know what the state is going to be or what it's going to change to, but in Ethereum we're willing to accept that reality under certain constraints. I use the word determinism here — maybe the academics won't like that — but it is sort of how I would interpret the situation. With Fuel, we don't have any of those problems at all. You can build a Uniswap-like system or whatever, you can use the UTXO transaction model, we get all the parallelism benefits, and there are no downsides. It was publicized as an issue, but in our case we have different designs that don't feature this issue at all.

In the context of Fuel, is there any problem with composability? Or is it because of scripts — how do you manage calling multiple states at the same time?

So basically, Fuel just has normal smart contracts. You can have a smart contract that calls many smart contracts; if it does, then you're notating those various other smart contracts as UTXOs as well — they're inputs to the system, noting potential changes. So it's very simple in its design. We've inherited a lot of
the work that was done in the research on state access lists for Ethereum, so really this is just a reinterpretation of that research, but in a cleaned-up model, in a UTXO setting. So we again get all these nice benefits from UTXOs, but we don't lose the user experience or behavioral elements of what we get with Ethereum. It's a really nice model in that sense. And then scripts just allow you to make a transaction that can, say, call multiple contracts. For approve-and-transferFrom, you have to make two transactions in Ethereum from the originating sender, which is ridiculous. Why are we doing that? It makes no sense at all, and it never should have happened. The main reason why is the design of Ethereum itself, funneling everything through single accounts, which restricts you. In the UTXO model, with scripts, you can just have a script that calls approve and then transferFrom in the same transaction, and that's it — you don't need to deal with that anymore. So it's really not crazy; it's actually pretty simple. You can read all of our work and research on this — it's all public and available — and you can try the testnet; on our testnet we do that already.

So, anyway, the question was how StarkWare looks at this question of state management, and basically the question we're asking here is: how do you parallelize? How do you parallelize stuff?
So StarkNet at the moment does not do parallelization, because as of now we were focusing on making the whole thing work. We had a new VM and a new language, so we focused on making things work with the simplest model, which is the one that was used and battle-tested by Ethereum. Now that we have all the features, we are focusing on actual execution scaling — and I want to separate verification scaling from execution scaling. One of the things we are going to have relatively soon, which will dramatically improve the throughput of the system, is optimistic execution. It's not an ultimate solution — it has, let's say, theoretical negatives, adversarial reactions that could impact the network — but we're planning to solve those one after the other; right now we're focusing on bringing scale. We are also looking into deterministic parallelization, in the line of what UTXOs enable: we're looking at various data structures and models that exist in the space. We look at Solana, we look at Aptos, we look at Sui — all the chains that are actively building for parallelization. We already have enough of a mindset change with the new language, so we focus on keeping it simple in that sense, but we are looking at the new ways of doing it and taking the best ideas that exist out there. So you should expect more on that front in the coming months, but there is nothing to announce because it's still in the research phase at the moment.

Alpha drop? No — but StarkWare does have parallelization, at the verification level; it requires proofs. Let me put it this way: we are focusing on scaling the execution and the VM itself. Recursion enables you to parallelize the verification, but you have to know that today on StarkNet verification is not an issue whatsoever — verification and proving are not the issue right now. Our problem is the sequencer; our problem is that our execution layer is pretty bad at the moment, and we are working very actively to improve it over the coming months. Recursive proofs do enable you to scale the proving, and they also enable you to scale the execution by going into fractal scaling, which is basically the ability, for instance, to have a StarkNet on a StarkNet. Say we're talking about a ticketing solution: it doesn't have to live in the same environment as Uniswap. Using an L3 you can have the ticketing app with the same requirements and the same trustlessness you would expect on the regular L2, but in an environment where the fees are more stable, because there's no strong economic incentive like there is around a DEX and so on. So to answer your question, all those parallelization topics are very high up in our roadmap; they are still in research, and you should expect things coming to production in the coming months, but we have nothing to show today.

That makes sense. So, speaking from the perspective of, say, a dapp developer — not a protocol developer — who is trying to use one of your systems: throughout the panel all of you have dropped some really, really interesting paradigms. Nick, you spoke about having these scripts; StarkWare about having no EOAs whatsoever, only contract wallets; and you used the term programmable privacy. I'd love for everyone to talk about what these new features are, but thinking about it from a developer or dapp-developer perspective.

Yeah, from a developer perspective, at Fuel we look at it mainly from a few different points of view — my own, which again reflects many, many years of trying to build apps on Ethereum and struggling with a lot of key things in the system that make the developer experience horrible; secondly, the end result being a sort of odd, disjointed experience between what the wallet can do and what the application can do; and as well, the kinds of applications you can design. So with Fuel we open up a lot of the compute: you have far more available to you, far more memory to use, far more general compute to build anything you want. You have more options for user experience with things like account abstraction and native meta transactions. In Fuel you can have one party that builds just a piece of a transaction, and another party that just tags on the fee at the end and sends it — and that's it. You don't need this situation where you have to wire through five contracts to get some form of account abstraction. The thing is, there are always going to be hurdles with a new execution environment, a new development environment, something where you're bridging liquidity — you're bringing your USDC from Ethereum into Fuel. But the result is that once you're there and actually using it and seeing what's possible, I think developers can open their minds a little bit to where Ethereum should go, and where it will be soon once we go to mainnet.

Native meta transactions are pretty cool — I know quite a few companies would kill for something like that. But also, when you said memory — in Solidity we have string memory; do you mean even more, like RAM?

I mean literal RAM: just having a lot more access to RAM and memory, being able to do a lot more with it, and across many other contracts. Having that kind of access is enormous; it gives you so much more flexibility, because with the EVM and Solidity paradigms you're so constrained by so many factors that the kinds of designs you can do are extremely limited. From my personal perspective, I've walked around Devcon a lot and heard a lot of new designs for a lot of new DeFi, and to be honest it's still okay, but it's not what is possible. There is so much more — it's good, but there's a long way to go.

Yeah, I think it's maybe healthy to also tell developers that not every kind of VM is going to be suitable for them. We focus on privacy, so if you work for Reddit and Reddit has a points system that's public, that's not really a good fit — a high-throughput VM that focuses on public data may be a better fit. Some of our competitors in the ZK-EVM space pitch everything as possible all the time, and I think that wastes a lot of dev time: you try to build something and realize a while later that it's not possible. For Aztec, what we really care about is applications that have private state. That could be things like ZK games, or consumer finance, which just doesn't exist in DeFi today — we have over-collateralized lending, and you can't really have consumer finance unless you're willing to have a public passport, salary, and address on chain. So privacy is what we care about. You're not going to build an AMM on Aztec, because by definition it's the ratio of two public pools. Being really transparent about what you can and can't do is something we all need to do to help attract the right developers.

So, like, instead of storing variables in the contract itself, you'd store them in your address, kind of thing?

Yeah, so each user basically controls their UTXOs, which are encrypted, and you can feed those into an Aztec transaction and prove something about them. In an Aztec program, if you prove that your salary is above X, you may be entitled to a loan or not entitled to a loan. Having that data able to be fed in as an encrypted input to a program is really powerful.

In our case — you mentioned how some zero-knowledge VMs are now realizing some things they ran into. Can you expand a bit more, just for all of us?

Let me think a second... It's hard to do privacy in some applications, is all I'm going to say, and then I'll see what Louis has to say about that.

I mean, 100% — and StarkNet is not built for privacy whatsoever, but that's not in its nature either. What's true is that you can build privacy protocols on top of it. I was literally suggesting, in a random idea yesterday to Joe here: why not Aztec and Noir as an L3? There's no reason not to — you could have that kind of composability, just not the same purpose. So you ask what kind of things you can build on StarkNet. I always say that as a dev the only things you care about are three things: you get cheap computation, cheap calldata, and account abstraction — and a three-and-a-half, which is the long-term vision of scaling, meaning that even if you're priced out of this layer itself you can go a layer up, which would be cheaper. When you get those three things, you start having new kinds of projects. As I mentioned before, account abstraction gives you the ability to sign with your phone, without having a private key per se — your phone is your private key. When you combine that with cheap computation, for instance, you start to have people building a physics engine on the blockchain: we have a company called Topology on StarkNet that is building a physics engine, so you can prove collisions, you can prove games that existed in the 80s or 90s — Worms, you know — directly on chain. You can implement things like an infinite map using Perlin noise or other advanced map-generation algorithms, and we have people making that happen today in the ecosystem. When you get cheap calldata and cheap computation, you get things that were not possible on Ethereum before, like practical storage proofs. If you're not familiar with storage proofs, they enable you to prove the state of Ethereum in the past to Ethereum in the present. Why is that useful? For voting, for instance: when you vote, you basically vote using the token balance you had at the proposal's creation. A company called Herodotus is building this, and they already have a Snapshot trial to create L1 voting through L2 using their tech. What else do I find very exciting? We have a lot of on-chain gaming, because on-chain gaming was completely priced out of Ethereum — the proof is that the only existing on-chain game on Ethereum today is basically, as far as I know, Dark Forest. So we have seen a massive boom of projects that were priced out, that couldn't build on Ethereum, building on us. Even flagships like Loot are literally building their own universe on StarkNet right now.

Thank you so much. I know we're running out of time. All I'll say, to all the developers: because we work in crypto, we kind of owe it to ourselves to just ape in broadly — I said that as a meme, but what I really mean is that we are here to experiment and try these new languages like Sway and Cairo and Noir. So I hope you all actually go out and read up on what these cool things are doing. I know we're short on time, but we are happy to take one or two questions if anyone has them.

Hello — a very, very interesting talk. I have a question: why did each of you create your own different language instead of using something more standardized, like, I don't know, C++?

I'll just say for our case: none of the languages other than Circom are built for private state, and we found that Circom was too low-level. We're trying to target Web2 developers or Solidity developers, and they shouldn't have to know how to be a cryptographer. So our language abstracts the cryptography part, so you can just write an application — nothing else worked out of the box. So — is it working?
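Joe's earlier "prove your salary is above X" example can be sketched in miniature. To be clear, this is NOT real zero-knowledge machinery — a real Aztec-style system verifies a succinct proof without ever learning the value — but a plain hash commitment shows the shape of the pattern: the private value stays off chain, only a commitment is public, and eligibility is decided by a predicate over the committed value:

```python
# Toy illustration (NOT real ZK): commit to a private salary, then
# check a threshold predicate against the commitment's opening.
# In a real private-state system, the verifier would check a proof
# instead of seeing (value, blinding) directly.
import hashlib

def commit(value: int, blinding: bytes) -> str:
    """Simple hash commitment to an integer value."""
    return hashlib.sha256(blinding + value.to_bytes(8, "big")).hexdigest()

# User side: the salary never leaves the user's device.
salary, blinding = 85_000, b"random-secret-nonce"
public_commitment = commit(salary, blinding)

def eligible_for_loan(commitment: str, threshold: int,
                      value: int, blinding: bytes) -> bool:
    """Stand-in verifier: opening must match AND satisfy the predicate."""
    return commit(value, blinding) == commitment and value >= threshold

print(eligible_for_loan(public_commitment, 50_000, salary, blinding))  # True
```

The piece this sketch cannot show is exactly what makes the real thing hard: replacing the direct opening check with a proof that reveals nothing beyond "value >= threshold".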
Yeah. Circom is not Turing-complete, simple as that. If you want to do an if-statement — which you kind of need: StarkNet is an OS, the real name of StarkNet is StarkNet OS, an operating system, and every syscall in StarkNet is going to have an if-statement — the program is going to be quite big. Another thing Circom doesn't allow you to do is prove multiple programs within the same proof, which is something that Cairo, through SHARP, enables you to do. So we needed to create our own language for those purposes. And finally, Circom essentially targets SNARKs, not STARKs — and again, it's not Turing-complete and can't prove multiple programs in the same proof.

Yeah, and in our case, we really, really, really didn't want to create a new language. We looked at a lot of options. The main reasons are that some of the existing programming languages, like C or Rust for example, are really not designed to target a blockchain, and there are a lot of ramifications around targeting a system that you want to catch at different stages of the compiler — so that was one strong motivation for us to have our own language. Secondly, the compiler and the tooling have to work really hand in hand: in a blockchain environment you want the developer to have extreme control over every aspect of the system so they can really simulate everything, so the developer experience was another. And lastly, we wanted to create a Rust-ish language that really targets blockchains, not just the FuelVM. The way our compiler is designed — and this is unlike others in the space — is to be very modular, so if we want to target a backend like Cairo, or the Miden VM, or a different language, the language ecosystem will be set up to target many different kinds of blockchains, including the EVM. One we're working on right now is targeting the EVM, and that's with Foundry, so you'll be able to use Sway and maximize its value and its design without having to rebuild languages again. Now, mind you, I think some of the ZK teams have different criteria and different stages of the compiler that they really have to factor into their design — we can't speak for all ZK VMs or anything — but it's a good harness for a blockchain language that's Rust-ish that people will like.

Yeah, any other questions?

I'm not sure how relevant the question is to your environments, because I haven't looked too deeply into them, but if you build something like a Uniswap on them — which, like, excludes Aztec already — would MEV be a problem on those networks as well, because the sequencers could sort transactions differently? Or is it somehow solvable?

We have right now on StarkNet, top of mind, I think five different Uniswap V2s, so the answer is no, it's not a problem. That was a joke — but no, it's not. StarkNet currently uses the exact same data structures and model as Ethereum.

It doesn't really exclude Aztec: we have swaps from Aztec Connect to Uniswap — probably the most liquid Uniswap, because it's on mainnet, so liquidity doesn't get fragmented. We just have to kind of acknowledge that Uniswap is a public application, so we can't make it private, but we can make the users private. Users come to Aztec and in this case bridge out to L1 in a batch to get scaling and privacy. So it's possible; it just requires a different kind of paradigm from normal scaling.

For us, we don't make any claims about MEV, so you could probably extract it in a decentralized sequencer setting. But I will say that we take a slightly different approach, in the sense that we actually want to give the node as many abilities as possible to extract as much MEV as possible — or to give teams that either try to fight MEV or try to take advantage of it the best tools they can have. We don't really have a solution for it ourselves, but we know there are incredible teams either taking advantage of it or trying to reduce it or solve it. Either way, doing MEV work on Fuel should be very interesting.

For internal Aztec transactions, if they're fully private you can't see what's happening, so our approach is usually to try and push it to L1, where you can use something like Flashbots to solve it in the way we currently know how. But if there were public components to an Aztec L2 application, then there would be MEV on the network. I think trying to push it to L1 is a good strategy.

Thank you so much, everyone.