Okay, so I will talk about Solidity's roadmap for 2020. Having said that, the Solidity project is not a project with fixed roadmaps, and this is also one reason why we're having this summit here: to talk about future features and agree on whether we want to have them and how. There is one thing we very much focus on, and that is completely set: the re-implementation of the code generator using our intermediate language Yul. We are currently at roughly 50% implementation of the full language features, and we'll talk more about that later. A second important thing is the SMT checker, our formal verification tool. We would also like to get almost full coverage of the whole language by the end of the year, but for both these topics we'll have to see how far we get. And then, maybe more interesting for you, we plan to have at least one, maybe two breaking releases with new breaking features. Some features can, of course, also be introduced in non-breaking releases. I would just like to highlight some of the breaking features that I myself find interesting. One is a feature that makes the copying semantics more explicit and also makes at least reference types immutable by default. So if you want to have a memory array where you can change values, then you have to specifically say so at the point of declaration. There will be a discussion session on that topic on day two at 7 p.m. CST. Another topic is safe math by default: arithmetic overflow checking at runtime, introduced automatically by the compiler. We did not introduce that for a long time for two reasons. The first is that we think the existing optimizer has a hard time dealing with it, and the second is that it can introduce new bugs. But we see that everyone is just using the SafeMath library, and because of that, we would like to have a discussion on whether we should introduce it by default or not.
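To make the safe-math idea concrete, here is a minimal sketch of the check the compiler would insert automatically; the contract and function names are invented for illustration, and the manual `require` mirrors what SafeMath-style libraries do today:

```solidity
pragma solidity ^0.6.0;

contract Counter {
    uint8 public value;

    // Current default semantics: 255 + 1 silently wraps around to 0.
    function incrementUnchecked() public {
        value = value + 1;
    }

    // Roughly what "safe math by default" would generate for every
    // addition: a runtime check that reverts instead of wrapping.
    function incrementChecked() public {
        uint8 result = value + 1;
        require(result >= value, "overflow");
        value = result;
    }
}
```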
And that discussion will be today at 3.40 p.m. Then another interesting feature is calldata variables. We already have the calldata location specifier for function parameters of external functions, but there is no really compelling reason why calldata cannot be used for other variables as well, so local variables and parameters of internal functions. I think this could yield nice performance improvements, because when you have a memory variable and only use it for calldata content, you always need a copy to memory, which can be unnecessary. In addition, it guarantees that you cannot modify the content of such variables. Then another interesting topic is the language server. During the last few months, we noticed that people have a hard time debugging Solidity code and, in general, that the development process could be better. Because of that, we thought about implementing a language server as part of the compiler. The Language Server Protocol is an initiative started by Microsoft to standardize the interface between compilers, debuggers and IDEs. As soon as you have a language server implemented for a language, any IDE that also has language server support can work with that language. So this is a nice generic way to provide access to compiler features. And the cool thing about that is that features like go-to-definition are probably rather easy to implement for us. While several IDEs already might have that feature, the interesting thing about implementing it in the compiler itself is that when you use the go-to-definition feature in the IDE, it uses exactly the same code that does the identifier resolution during code generation. So you are 100% sure that it goes to exactly the same definition that is referenced in the final code. That session is tomorrow at 4.30 p.m.
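Coming back to the calldata-variables idea from above, here is a sketch of the copy it would avoid. The contract and function names are invented; the comment about an internal calldata parameter describes the proposed feature, not syntax that works today:

```solidity
pragma solidity ^0.6.0;

contract CalldataExample {
    // Already possible today: a calldata parameter on an external function.
    function sum(uint256[] calldata values) external pure returns (uint256) {
        // This call forces a copy from calldata into memory,
        // because the helper's parameter lives in memory.
        return total(values);
    }

    // With calldata variables, this internal helper could declare its
    // parameter as `uint256[] calldata` and read from calldata directly,
    // making the copy above unnecessary.
    function total(uint256[] memory values) internal pure returns (uint256 t) {
        for (uint256 i = 0; i < values.length; i++) {
            t += values[i];
        }
    }
}
```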
And then, in the same direction, we want to improve the output of the compiler that can be used by debuggers or, more generally, I don't know what you would call it, code inspection tools. This is an initiative that was started by the Truffle team last year, or maybe even two years ago, and some more teams have joined in the meantime. The discussion about that will be today at 8.50 p.m. Two things that we always do in parallel, regardless of specific features, are improving the new Yul-based optimizer and improving the WebAssembly output, and these two are somewhat interconnected. Talking about Yul, let's go a bit more into detail about Yul, because there are still some misconceptions around about it. Yul is a simplistic intermediate language we have been using for quite a long time, at least for parts of the compiler. The idea behind Yul is that it should be human-readable and not only machine-readable, and we hope that our Yul optimizer generates code that is still readable by humans even after optimization. There will be two sessions around Yul. One of them is about Yul+, an extension of Yul developed by Nick Dodson; that is today at 5.10 p.m. Right after that, at 5.40, there is a discussion group about new features for Yul. Our hope with Yul is that it allows people to understand much more of what is happening behind the scenes in the compiler, that it allows you to actually inspect the generated, optimized Yul code, which is then very, very close to EVM bytecode, and that this Yul code can be fully audited, so that you do not have to rely on a definition of the Solidity language or the absence of bugs in the compiler. Let me just quickly check the time. Okay. And the nice thing about Yul is that it has very few features, but it is a structured language.
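To give an idea of what Yul looks like, here is a small hand-written snippet, an illustration rather than actual compiler output:

```yul
{
    // Structured control flow and user-defined functions,
    // but otherwise very few language features.
    function power(base, exponent) -> result {
        result := 1
        for { let i := 0 } lt(i, exponent) { i := add(i, 1) } {
            result := mul(result, base)
        }
    }
    // Write 2^10 to memory and return it from the call.
    mstore(0, power(2, 10))
    return(0, 32)
}
```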
So it has for loops, user-defined functions and so on. And it is extensible and typed, which allows it to be used for different purposes. For example, we are thinking about adding memory allocation features for the optimizer that allow lifetime tracking of memory objects, out-of-bounds access checks and so on. All this will hopefully be covered in the discussion later today. As far as the Yul code generation in the Solidity compiler is concerned, we are pretty far along. Earlier I said 50%, but 50% of the features does not mean 50% of the smart contracts out there. So please try it out and see whether it already works for your smart contracts; you can try it with solc --ir. This shows you the originally generated intermediate code for the smart contract, but this is usually not something you would like to look at, because we are writing the code generator in a highly modular way, which means we have many, many different functions for each tiny piece of functionality that constantly call each other, and the optimizer will inline all that; most of these functions in the end just do nothing. So what you would like to look at is the output of solc --optimize --ir-optimized. This is a bit weird.
So if you just use --ir-optimized, it will show you the intermediate code after optimization, but if "optimization" means no optimization, so if you did not switch on the optimizer, then it will be the same as before optimization. So please always use these two flags together. Having said that, the Yul output from Solidity is still experimental: we might change any of these flags in the future, we might introduce different built-in functions, and so on. Yul itself, however, is not experimental anymore, so if you want to take your Yul code and use it as compiler input, that is pretty safe. I never say it's foolproof, because no software ever is, but it should be pretty safe, and it is being used out there already. Then one last thing about the SMT checker: there is a session on day two at 6 p.m. about the more or less formal aspects of Solidity and a formal specification language. I'm not the expert on the SMT checker in the Solidity compiler, that would be Leo, but he told me that it can already do quite a lot. It does function abstraction, which is really nice: when you call a function internally, it does not always inline it, but instead tries to infer properties of the function and just uses those properties. I think invariants are not yet implemented, but this is something on the roadmap and also one of the main topics of that session on day two.
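To recap the solc invocations mentioned earlier (the flag names may still change while the Yul pipeline is experimental, and `Contract.sol` is a placeholder for your own source file):

```
# Unoptimized Yul IR: verbose, with many tiny helper functions.
solc --ir Contract.sol

# Optimized Yul IR: always combine both flags, because --ir-optimized
# without --optimize prints code identical to the unoptimized output.
solc --optimize --ir-optimized Contract.sol
```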