I'm not sure, can you? Yeah, you can hear me, right? And I hope this works as well. OK, it does. Yes, thanks Franzi for the intro. I also want to make clear that a lot of this is my opinion; it may sound like the opinion of the team. I have been working on Solidity since 2015, and in the past year I've been kind of taking a break, during which I wrote a lot of code in Solidity itself. So some of this is a culmination of my experience using the language, and I hope some of the things I'm mentioning may come to fruition. OK, so 0.8.17 is the latest release. It's the 102nd release; we have already had more than 100 releases. That is just insane. And we probably have more than a thousand nightlies, which is just crazy. Basically, we had eight breaking releases, a bunch of feature releases, and sometimes we have bug fix releases as well. In some years we had releases every two weeks; lately it's more like monthly, and I hope this is going to continue. Maybe we'll get back to bi-weekly releases, but at least monthly ones. I'm saying this just to show how active the development of Solidity is. I'm not going to go through all of these. You can find this really nice chart on the soliditylang.org website, and it shows you all the interesting milestones we have accomplished since the beginning. However, it doesn't show anything past 0.6, because we made it two years ago. But as you can see, there is a lot going on. And given that we have over 100 releases, you may ask the question: when do we actually get to 1.0? You are not alone in asking that question; we get it super frequently. Last year there was this big debate, in this particular issue I'm showing here. The issue is much longer; the description is maybe ten times as long. And the author of the issue provides a bunch of different reasons why he thinks Solidity should be at 1.0.
I summarized three discussion points we have been debating on this issue. The author had the opinion that the current scheme doesn't actually allow the separation of breaking and non-breaking releases; in his view, every single Solidity release is a breaking release. He also argued that the language is widely used, and has been for a couple of years, so it should be at 1.0, because that signals it is ready to be used. In fact, when 0.1 was released, we never signaled that it was ready for usage. But that's the nature of blockchain: people just start to use it, and it's there, and it's always going to be there. The team, however, had some concerns about this 1.0 idea. Basically, we have the impression that 1.0 implies a long-term maintenance commitment to that version: we would have to keep adding new non-breaking features and all kinds of changes, and if we still want to do breaking changes, that becomes 2.0, and then suddenly we have to maintain two versions. Or we stop making breaking releases entirely. That is the reason we wanted to avoid going to 1.0. I'm not sure how many of you actually know how the versioning system works. Can I get a show of hands: are you a developer, first of all? So, many of you. Are you using Solidity? Are you using anything but Solidity? Not many? Any other language besides Solidity? Nice. So I think you should be aware of how this versioning system works. We were under the impression that it is semantic versioning. It turned out it is not, because under semantic versioning anything below major version 1, so major 0, may break at any time. However, we did assume that the three numbers in the version are major, minor, patch, and we treated a minor bump as a breaking release and a patch bump as a non-breaking release. So it's not actually in conformance with semantic versioning, but it kind of works for us.
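To make that scheme concrete, here is a minimal sketch of what it means for a contract author (the version numbers are purely illustrative):

```solidity
// Under Solidity's pre-1.0 scheme, bumping the middle number is breaking
// (0.7.6 -> 0.8.0), while bumping the last number is not (0.8.16 -> 0.8.17).
// A range pragma therefore pins a contract to one breaking series:
pragma solidity >=0.8.0 <0.9.0; // any non-breaking 0.8.x release is fine

contract Pinned {
    // accepted by 0.8.0 through 0.8.17 and later 0.8.x, rejected by 0.9.x
}
```

This is also why the common shorthand `^0.8.0` (which expands to the range above) is safe here, even though semantic versioning itself promises nothing about 0.x compatibility.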
Here's just an example of how this would look. And now, once we know what 1.0 could mean, maybe we should talk about how we could get there. Here's an example contract with the latest release. This is actually the Solidity socks contract, which is on mainnet. You can use it to mint socks, both the left-hand and the right-hand one; if you're lucky, you can mint both. I think the socks are gone at this point, but you could have gotten a physical version of the sock at the Solidity stand, I think, on the floor above. I'm not sure. Now, this is 0.8.17. This says 1.0. Do you see any change? Yeah. I actually think that as a user you're not going to see too many changes at 1.0. You may see some changes. OK, so that was the unchanged version. Is this skipping one? OK, this is the unchanged version, and this one has a few changes. They're not important. Yeah. Where is it? Like this stuff, this stuff. None of this is agreed on, and there may be tiny changes, but I don't think there will be big ones. So: no major differences if you're a user, but giant differences if you are writing a library. I listed a few things you would be able to expect for libraries: we should have operators and literals for user-defined types, a standard library, generics, richer enum data types. I just gave you a few examples now, but Daniel from the Solidity team gave an extremely good talk on this, which you should watch; I'm going to share the QR code for the video as well. Here is an example of user-defined operators. I cannot say that I actually like the syntax, but the team kind of agreed on it. Without this, you have to use the usual function chaining; once you have user-defined operators, you can just use them fairly easily. So I think the team agreed that this is going to be a focused project, and I believe it is needed for 1.0.
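For those who can't see the slide, the agreed-on proposal has roughly this shape (a sketch; the exact details could still change before it ships):

```solidity
// A user-defined value type wrapping a built-in type.
type Fixed is int128;

// A free, pure function implementing the operation on the wrapped values.
function add(Fixed a, Fixed b) pure returns (Fixed) {
    return Fixed.wrap(Fixed.unwrap(a) + Fixed.unwrap(b));
}

// Bind the free function to the + operator for this type, file-wide.
using {add as +} for Fixed global;

function demo(Fixed a, Fixed b) pure returns (Fixed) {
    // Without the operator binding you'd write the usual chaining: add(a, b).
    return a + b;
}
```

The point is that the operator is ordinary Solidity code; the `using ... global` directive only wires it up to the syntax.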
Another major thing is the standard library, which we have been talking about for years. The main goal of the standard library is to move the majority of the compiler code out into Solidity itself. But it is also a really good exercise, because doing it shows us what kind of language features are missing. So here's one example. This one actually doesn't require anything major; the only thing it needs is this standard-library pragma, and what that does is disable the built-in functions. Without the pragma, this function definition would fail, because a sha256 already exists implicitly in the language; with it, the built-in is gone, so you can define it yourself. This is the simplest example in the standard library, and it works. Doing anything more complicated is going to require a lot of language changes. Here's one example of the language changes we would need: generics. This is an extremely old example; in fact, I copied it from my talk from two years ago, from the Solidity Summit. We were already discussing the same things back then. So I'm not going to go any further. If you want to take a photo of the QR code, this is the talk from Daniel. It is a full 30-minute talk, and it does a really good job of explaining the reasoning behind these features and how they are going to work. Again, I'm taking a tiny break here. These were the changes the team wants and agrees on. But I want to ask all of you to look at the repository. These are all the issues tagged "language design", and we have 237 of them open. Those are the features people want, and I think many of them are actually obsolete at this point. But I ask you to look at the repository, look at all of these issues, or a tiny subset of them, find something you like, and leave a comment if you want to have it, or leave a comment if you think it is a bad idea.
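Going back to the standard-library sha256 example: it was roughly of the following shape. This is a hypothetical sketch; the pragma name and mechanism are not final, and the body shown (calling the SHA-256 precompile) is just one plausible way to implement it in plain Solidity:

```solidity
// Hypothetical pragma: disables the compiler's built-in definitions,
// so names like sha256 become free for user code to define.
pragma experimental stdlib;

function sha256(bytes memory data) view returns (bytes32 result) {
    assembly {
        // SHA-256 is the precompile at address 0x02; write the 32-byte
        // digest to scratch memory and load it into the return value.
        let ok := staticcall(gas(), 0x02, add(data, 0x20), mload(data), 0, 0x20)
        if iszero(ok) { revert(0, 0) }
        result := mload(0)
    }
}
```

Today this definition would clash with the built-in; the whole point of the pragma is to make the built-in step aside.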
Besides the features the team wants, I think it would be nice to know what you all actually want. So I think that's what 1.0 is going to be like: not many visible changes on the user side, if you're just writing a token contract, an NFT contract, et cetera. But if you're writing libraries, it's going to be entirely different. Now, 2.0, that is a crazy topic. We have been talking about it on and off for a couple of months at this point. And actually, 2.0 is two different things, and that has been super confusing even for the team, because some people meant one thing and some the other. And I want to highlight it again: nothing is decided here. A lot of this is just my idea of what I want to have, but I hope some of it will happen. So first I'm going to talk about this compiler rewrite, which sounds crazy, right? How many of you actually know how the compiler works, or a compiler in general? Not many of you. Basically, a compiler has multiple stages. We take the source code and process it into some internal representation, and we run a lot of different analyses on that representation. Once we are happy with the code, that it is sound, that it works, we generate the next stage. In the first version of the compiler, this generation went directly to EVM bytecode. Currently we still have that, but we also have a second pipeline, where instead of generating EVM code directly, we generate Yul, which is our intermediate language. You may be familiar with Yul, because it is basically inline assembly. And then we take the Yul and generate EVM bytecode from it. So that is the pipeline. And then these are the different libraries we have in the compiler itself. This one is just really tiny helpers. I'm going to move to the other side. So these are just helpers used by the others. The assembler is kind of separate; it's standalone.
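A quick aside on Yul, since it may be new to some: it is essentially the same language you already write inside `assembly { ... }` blocks, plus an object layout for deployment. A minimal standalone Yul object (illustrative, not taken from the compiler) looks like this:

```yul
object "Adder" {
    code {
        // Constructor: copy the runtime code into memory and return it.
        datacopy(0, dataoffset("runtime"), datasize("runtime"))
        return(0, datasize("runtime"))
    }
    object "runtime" {
        code {
            // Runtime: load two calldata words and return their sum.
            mstore(0, add(calldataload(0), calldataload(32)))
            return(0, 32)
        }
    }
}
```

You can see the IR the pipeline produces for your own contracts with solc's `--ir` output.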
It takes in a data structure, the representation of the assembly source you want to assemble, and it generates EVM bytecode, but it also has optimization steps. Then we have this separate helper for language utilities, but it's really just helpers for the parsers, because we have two different parsers: the Solidity parser and the Yul parser. So langutil is used by both of these. libyul has a parser, a code generator, and another kind of optimizer. And libsolidity is the big one. It's a monster. It contains the Solidity parser, does all the analysis I mentioned, and implements the two different pipelines for code generation. Then we have two smaller libraries. The SMT library is just for the SMT subsystem, but I think that is also split up, so some of it is here and some of it is in the libsolidity directory. And lastly, the smallest one is libsolc, which is a tiny binding. That is the only library with a C API; everything else is C++. So it is basically a wrapper between C and C++, and that is what gets compiled via Emscripten into JavaScript. If you happen to use any kind of web app that compiles Solidity, this is what is being used. I listed a few issues, and maybe some benefits as well. One of the bigger problems we are running into is that these libraries, while physically separated into different directories, may not be well separated conceptually, or even interface-wise. One example I can give, which is kind of a hack, is not a logical issue but more of a source-code issue: we have a dependency loop between libsolutil, libevmasm, and the frontend. They are interdependent on each other, which is just bad if you want to separate these things nicely. The major components do have some kind of clear boundary, but everything is in C++. The clearest external boundary exists in two places. The assembler has a JSON import and export feature; the export feature has been there forever.
The import feature is still not merged, but it kind of works. And then Solidity itself has this JSON AST import/export. I think some tools like Scribble may be using it; basically, you can skip the parser. No, Scribble uses a TypeScript parser. But yeah, we wanted people to use it; some people tried, but they ended up not using it in the end. The main issue I see here is that all of this is C++, which is hard to integrate with other languages. It is hard to integrate with JavaScript: we need this Emscripten layer, and we need the C wrapper. It is hard to integrate with Rust: you need a C layer as well. Although Rust does have a C++ binding generator, it is not really stable. If you want to integrate this into Go, it's the same story. All of these languages only work with C; they don't work with C++. So I have a few ideas on how we could resolve this. First of all, we want to improve the separation of these libraries. We want clean interfaces between them, and maybe to reduce the number of C++ features those interfaces use, to make them more compatible with C. Then, in parallel, because that can happen on the existing code base, we could start working on components in Rust. And in fact, we already have components in Rust. We have solc-rust, which is used by Fe, actually. I think at some point Foundry used it too, but they were annoyed by the size of the binary and the compilation pipeline, and said they're not using it anymore. But we do have a bunch of Yul libraries in Rust, which are also used by Fe, and some people on the team are now working on some crazy features on top of them. The first useful step could be rewriting the assembler. The reasoning is that the assembler is one of the oldest components, and we have basically never changed it. The parser itself has been mostly rewritten, or at least significantly improved.
I think the type system is also kind of old, but it has had more maintenance. The assembler, though, is the oldest component that has never been significantly improved, so it would be a really good task to rewrite it. And it may not even need to be rewritten by us; we could use an existing assembler. In fact, there is ETK, the EVM Toolkit, I don't remember exactly what the acronym stands for, but it is basically an assembler toolkit for the EVM, written in Rust. We could just use that. In any case, once we have some of these components, we could think about creating a compiler skeleton: basically just the driver that drives the compilation process. This doesn't mean we would need to rewrite the compiler. All we would need to do is provide Rust bindings to some of those components, and then have this driver use them. Once this works, we could swap out individual parts, for example the assembler, because they are just components. And once that is working, the biggest change would be an actual major rewrite of the frontend in Rust. By the frontend, I mean mostly the parser, the analysis, the type system, et cetera, and, well, the code generation as well. That is an insane project. And why would you want to do that? Well, you likely don't want to do it for 1.0; you want to do it for something else. So what is the reasoning behind all of this? The main reason is that we, or at least I, want to turn Solidity, all this code, into a usable compiler framework. I want to make sure that all the optimization steps we have, all of these features, are not just there for Solidity; they can be used by other languages.
Imagine if we had had such a compiler framework early on, akin to LLVM but for the EVM: how much faster could Fe have come to fruition, or any of these other languages, if they could just use these components? I listed a bunch of projects that are already working on Rust compiler components in the EVM space. Of course, there is Fe. Then there are the two other Solidity compilers slash parsers. Solang is a full-featured compiler from Solidity to WebAssembly targets; well, actually LLVM targets: they started with WebAssembly, but they also support BPF, so Solana, and a bunch of other targets. One thing they don't support is the EVM. And in fact, Solang, I believe, is used by Foundry for parsing Solidity, because it's in Rust; so they're not using Solidity's own parser. Slang is a project by Hardhat. They're trying to write a compiler as well, but the motivation is slightly different: they want a parser that is flexible and supports every single version of Solidity, because they don't want to swap out the compiler mid-analysis. And that's the problem with the current compiler: it only supports a single version. So why would we want to do all this? Because I want to attract more people to Solidity compiler development. It seems like C++ is not a language people like or are interested in; it has been kind of hard to attract people to write C++ code. Rust, on the other hand, is extremely thriving; every other project is in Rust. I've been using Rust for a long time, so I would be happy for this to happen. And if we do a re-architecting like this, we get an opportunity to actually improve the architecture of the compiler, and maybe improve the language itself. So that is the next thing I'm going to talk about: what kind of language could we have here? You may be surprised or not, but the fact is, we have been talking about these Rust-inspired changes to the language since 2019.
That was way before some of these other Rust-inspired EVM languages came about. Oh, there's actually another slide. I think the main reason those discussions started is that there are a few issues in Solidity itself, or at least a few issues I think should be addressed. One of them is that storage and the implementation are not really separated. What we have right now is the contract: it can have storage defined anywhere, it can have functions defined anywhere, and it can inherit other contracts, which also define storage and functions anywhere. So you never know where storage is; it could be touched by any of these imported libraries. It would be nice to make this clearer. The other thing would be to make clear when state changes or state accesses can happen. We do have some of this in the language today: we have payable, we have view and pure functions. But that's about it. It would be nice to have more clarity, even within functions, about when a state change is taking place. Functions can be quite long. Good codebases have started to create small helpers and limit the scope of state changes to those helpers. But still, no matter what you do, people can write really bad code, and in Solidity especially, people can write giant pieces of code. It would be nice to have a clear view of where state access or state modification takes place. And lastly, if we take such a big step, we could even consider removing inheritance, or looking at a different way of composing source code. In fact, if we have clearer control over storage, maybe we also get clearer control over the storage layout, which has been a really annoying question. I know many of you have opened issues: how can we set slot numbers, et cetera? Some older contracts have these padding storage items. It's insane. Now finally, what could it look like? So here's an example.
I'm not sure if any of you recognize this. It looks like Rust, but more specifically? No, it's not Sway. It's Fe. This is actually Fe. And they have started to take basically all the steps we were discussing, which means having a context, which is a clear separation of state access. OK, this is what's on the website. But here is another one, which is nice: not having special contracts, but rather tying it into state access. And I think they're also moving from having one contract containing all these things to having separate pieces. Now, here's a real example of what we discussed under "Rust-like Solidity". You have a separation: this part could be a contract or could be a struct, and people are leaning towards struct. So this would be really just the data, and that would be the implementation. It's not too different from what Fe is trying to do now. And this is the socks contract I had as an example before. Yeah, I mean, that's it, really. I only have 10 seconds left. Thank you. I guess we can take one question if somebody has one. Please raise your hand so that the mic can find you. "Hey, Alex. Just a simple question. You were talking a lot about Rust, but have you given a thought to Carbon? That's the new language that is meant to succeed C++." Which language? "Carbon. It's interoperable with C++, so you could reuse everything that's written so far." Oh yeah, the new C++ one, right? That's experimental, yeah. I'm not sure what benefit that would bring, but it's worth a thought. I don't think we had a look, but we heard about it. "Unfortunately, time's up. But thank you so much, Alex."