Hello, everybody. My name is Armin. I have been doing lots and lots of Python for many years at this point. Most of you might know me from the Flask framework, which is probably the most popular project I made. And as of some time ago I'm working on Sentry, on a commercial sort of level; it's also a project that came out of the Python community, and it's an open source project at its core. If you want to find me on the Internet, this is where you can do that. The slides of the talk will be at the URL on the bottom and hopefully also on the website of the conference. So this talk is about raising a little bit of awareness of how Python actually works for us as a community, and maybe how we can evolve it to some degree. Part of what made me do this talk, and the two similar ones I gave before, was that as you program more and more, you get involved with more than the language you started out with. Python is sort of my home, but I also use Rust, JavaScript, Ruby and many other things. When you start using something else, very often you are amazed by some of the things it does better than your home language. But as you use it more and more, you also realize that it has problems too. So I want to bring some of the experiences from other environments into the Python community, so that maybe we can do a better job at evolving the language. I think the biggest question is: what is Python, actually? Obviously it's the language that everybody uses, but if you look into it a little bit more, it turns out that Python is whatever CPython is doing. I think that's a very important concept, because CPython is the standard Python interpreter. There is a language reference which tells you how the language is supposed to work, and it is part of the documentation of the Python language, but in practice a lot of how we all program Python depends on very specific behavior of the CPython interpreter. I will give some examples of this and why it is relevant. But it's important to know that, unlike JavaScript for instance, we do not have a language standard. A lot of the code that we use happens to work, more or less, because it works on CPython. Many of us will have experienced that when we take our CPython code and run it on other runtimes like PyPy, not everything works exactly the same, and the path taken for a long time has been to just make PyPy and the other implementations more like CPython — but we never had a standard. There are two parts where this comes up. One of them is the general language behavior: what happens when you add two numbers together. The second part is the standard library. What exactly is the standard library? That's also a little bit unclear, but more or less everything that you import that doesn't come from pip is the standard library, and it also comes with CPython. That usually also means the standard library becomes part of, quote unquote, the language specification. So this is my favorite example of Python code. What does it do? It looks simple: you have two values, a and b, and you add them together. But what happens? To give you a comparison, what happens in JavaScript is that a and b are converted into value representations, which are effectively numbers, and then added together. And if you go to the standard for JavaScript, ECMAScript, there is an explanation of how this construct works.
But this is not JavaScript, this is Python. What we all learn, I think, is that a + b is more or less equivalent to calling the special dunder method, a.__add__(b). But is this really correct? That's what you might read in a tutorial. Some tutorials are a little more correct and will tell you that it's actually type(a).__add__(a, b): you get the class of a, you look up the dunder method there, and you pass self explicitly as the first argument — which is the explanation given for why you can't override the add operator on an instance. But is that really correct? It turns out there's also __class__, which may or may not be equivalent to type(), and it turns out none of these are actually equivalent. The thing is, they are not necessarily correct or incorrect — they are all wrong as explanations of the language, because none of this actually happens. If you write a + b, the interpreter compiles it to bytecode that loads the two values and runs the internal binary-add operation, and if you go all the way down the rabbit hole, you will eventually find that there is one interpretation in the interpreter of what binary add means, and it tries one of two things: if the object is a number, it will try to add; if it's a sequence, it will try to concatenate. This is something that nobody really ever looks at, because most of the time it's irrelevant. So it turns out that in Python, objects internally have very little to do with these dunder methods. For the vast majority of operations we do, the interpreter has a struct for each type, on the type there are methods, and these methods live in slots, and depending on how those slots are set up, the operations do different things. The reason this matters is that it was a design decision made a long time ago, and everybody else has been forced to copy it — PyPy, for instance. The fact that an add operation can do two different things on the type level has very profound consequences. In particular: what happens if you subclass something that already has a certain slot setup in the interpreter, and you create a subclass of it in Python? If you add an __add__ method to a class, it registers into one specific slot — going back to this, there are two slots internally on a type for doing addition: one for adding as a number, one for concatenating as a sequence — and if you add your own __add__ method, it always becomes a number addition. That is mostly irrelevant, because you can obviously still do concatenation inside that method; it's just stashed away in the number-addition slot. But what if you subclass, for instance, a list, where addition is defined as sequence concatenation? For quite some time there was a bug — I don't know if it was only in CPython, or if it started in CPython and was also replicated in PyPy — where, if you subclassed a list and added your own __add__ method, sometimes concatenating lists would use your own method and sometimes it would do whatever was there originally. Nowadays, if the interpreter finds an __add__, it also puts a proxy into the sequence-concatenation slot and the other ones. So there's a lot of complexity in the language as a result of this, but the summary is: there is no plus operator.
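To make that concrete, here is a small sketch you can run yourself: it shows the bytecode the interpreter actually emits for a + b, and what happens when you hang your own __add__ onto a list subclass. The class name is made up for illustration, and the opcode name varies between CPython versions — on 3.11 and later you will see BINARY_OP instead of BINARY_ADD.

```python
import dis

# The interpreter does not call __add__ directly; it emits a single
# binary-add opcode and resolves the behavior through the type's slots.
dis.dis(lambda a, b: a + b)
# On older CPython this shows: LOAD_FAST a, LOAD_FAST b, BINARY_ADD, RETURN_VALUE.

class ShoutyList(list):
    # A user-defined __add__ wins for the + operator, even though list's own
    # addition lives in the sequence-concatenation slot internally.
    def __add__(self, other):
        print("my __add__ was called")
        return ShoutyList(list(self) + list(other))

combined = ShoutyList([1, 2]) + [3]   # prints "my __add__ was called"
print(combined)                        # [1, 2, 3]
```

Different spellings, same outcome — and nowhere in there is an actual plus operator.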
And the reason why there's no plus operator is that when the language was created originally, there was no single unified protocol for this. There are two internal functions, one called PyNumber_Add and the other called PySequence_Concat, and they correspond to those slots: PyNumber_Add adds numbers through the number protocol and PySequence_Concat concatenates sequences through the sequence protocol. But if you actually look at the functions themselves, they're different, yet each will also do what the other one does: PyNumber_Add will attempt to add numbers first and then concatenate sequences, and PySequence_Concat will first attempt to concatenate sequences and then fall back to adding numbers. This doesn't make a lot of sense anymore, but it still defines some of the behavior we get in the language. So why does this matter — does it even matter? There are different ways to look at it, but I think it does matter, because it limits what we can actually do with the language in the future. There is CPython, but there's also PyPy and Jython, and right now — I don't know how active Jython is at this point — at least PyPy attempts to replicate every single quirk in the language, in an attempt to be as compatible as possible with already existing code. I think it's cool that the PyPy people are doing this, but at the same time it makes PyPy a lot more like CPython without us necessarily gaining anything from it. So why are they replicating all the quirks instead of cleaning up and making it nicer? Because everybody wants high compatibility. And I think this is the part where we as a community also demand compatibility: if our code doesn't run on PyPy, we're not willing to give PyPy a chance, for instance. But if you look into what this means for the future, it actually prevents more innovative language changes and features. And if you look very far ahead — what will Python look like in 30 years? Will it just be the same, or will computers look so vastly different that we have to change the language? So here is a small proposal: maybe as a community we can make the Python we use more like the Python we actually teach people. Maybe we can eventually arrive at a Python where, if you add two values together, it does nothing else but call the special __add__ method. This striving for consistency — sorry, compatibility — with the stuff we had before is, I think, one of the strongest mantras in the Python community. And it's a very common story: the same way PyPy attempts to be as compatible with CPython as possible, we as a community are building our ecosystem in very similar ways. We very strongly value compatibility, at the risk of freezing Python in place. And this shows very well with packaging. I don't know how many of you have ever written a setup.py file, but this whole idea of a package being built through a Python script comes from distutils. Distutils was eventually added to Python, and it set up this idea that you import a function from distutils, you call it — it's called the setup function — and when you run your setup.py file, it executes this function, it does some magic, and you eventually end up with a tarball. This we still do; it's just that now we use setuptools. And if anyone has ever looked at setuptools: it's an elaborate monkey patch on top of distutils.
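For anyone who hasn't seen one, this is roughly what that model looks like — a minimal, hypothetical setup.py in the distutils style (the package name is made up; nowadays you would import the same function from setuptools):

```python
# setup.py -- a hypothetical minimal example of the distutils-era model:
# package metadata and build behavior live inside an executable script.
from distutils.core import setup   # today: from setuptools import setup

setup(
    name="example-package",
    version="0.1.0",
    packages=["example_package"],
)
```

Running python setup.py sdist executes this script and produces the tarball; everything setuptools added since then is layered on top of exactly this mechanism.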
And the original goal of setuptools was to implement something called Python eggs. Most communities in Python — with some exceptions, I think some people still use eggs — have stopped using eggs. But we didn't completely stop: we still use some parts of the egg infrastructure in the things we're doing nowadays. And the monkey patching doesn't stop at setuptools. One of the things setuptools added on top of distutils at the time was the idea that you could run python setup.py develop, in which case it would build binary extensions and put them into different paths than where they would normally go, so that you can develop a Python package locally without having to install it all the time. This idea was later picked up by pip: pip added pip install --editable. And the way this is implemented, scarily enough, is that at runtime it temporarily monkey-patches setuptools to get its logic in place. And then wheel also monkey-patches setuptools, to build wheels instead of eggs. I think even the person who wrote the bdist_wheel command says it's effectively unmaintained at this point; they're just doing small little things. And I saw at least one fork where someone actually monkey-patches wheel to get their own stuff in place. We're doing it too, because we are distributing binary extension modules with a library called snaek, which lets us build Rust modules for Python — and it's a monkey patch for bdist_wheel. So everybody is doing this. CFFI, a very common module, is also implemented as a monkey patch to setuptools. And in our case, snaek is not just a monkey patch to setuptools, it's a monkey patch to CFFI, which monkey-patches wheel. And if you try to do runtime introspection on the classes — if you add two CFFI modules, you run the monkey patch twice, so you end up with two subclasses internally of the default build_ext command from setuptools, which extends the one from distutils; except instead of extending the one from distutils, it replaces it with its own. It's really quite maddening that it's this way. We could, at some point, realize that this is what we're doing and maybe reconsider. This is, I think, similar to some degree to the GIL: the community has been attempting to replace or get rid of the GIL for a really long time, but part of the reason we can't do it is just backwards compatibility. It's not so much that it's hard — I mean, it is hard, but it's only hard if your constraint is to be compatible with everything you've done so far. Getting rid of the global interpreter lock would probably mean getting rid of reference counting, and that would break everything. The thing is, we're not really good at breaking this compatibility. Our only attempt at doing it, I think, was Python 3, and it went... interestingly. It was very radical in some ways, but totally not radical enough in others. It changed the language so much that everybody had to go through the pain of upgrading, but then we ended up with more or less the same thing we had before, just with slightly different Unicode — and I'll talk about that Unicode a little bit later. But I think it would be interesting to see if we can learn from this a little bit and maybe attempt another incompatible Python version, but a different kind of one.
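Stepping back to that monkey-patch chain for a second: the extension mechanism all of those layers share looks roughly like this — subclass a distutils/setuptools command and swap it in via cmdclass. A hypothetical minimal example, not any particular project's code:

```python
# The pattern the whole stack is built on: override a build command
# and register the replacement through cmdclass.
from setuptools import setup
from setuptools.command.build_ext import build_ext as _build_ext

class build_ext(_build_ext):
    def run(self):
        # ...do something custom before the normal build...
        _build_ext.run(self)
        # ...and something custom after it...

setup(
    name="example-package",
    version="0.1.0",
    cmdclass={"build_ext": build_ext},
)
```

Every project in that chain does some version of this, and when two of them do it at once you get the stacked, replaced subclasses described above.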
If you actually look at what the future of scripting languages is — scripting languages in the sense of what Python, JavaScript and some others are — I think they're not going to go away, but they will definitely look different. You can already see that async programming has become a first-class citizen in JavaScript, in Python and in many others. But what people really care about is the ecosystem. And in addition to the ecosystem, people actually care about standards more than they did in the past. It was perfectly okay for Python not to be standardized, but it would not have been okay for JavaScript not to be standardized. The fact that there is a very strong JavaScript standard now is what enabled a lot of the stuff in that community. JavaScript before Node and before modern browsers came along was very, very different, and standardizing it was a necessary step that community had to take — one that Python was never forced to take. So I think if we still want to be relevant in 30 years, we probably have to evolve a little bit. Here are some of the things we did really well; this is why I don't think Python is going anywhere. The CPython interpreter code is really readable, and I think that's something that gets a lot of people interested in the language, because you can actually figure out what's happening under the hood. I know from a lot of people I've talked with that this is one of the reasons they got interested: they could very easily go down to a lower level and figure out what the hell is actually happening. It also means you're never really surprised when you run Python code in production; if there's some really bizarre behavior, it's straightforward to figure out what's going on — especially because we don't really have JIT compilation, which makes it even easier. It's super easy to compile a new Python interpreter, so actually modifying the language itself is, I think, easier than in any other language I've ever used. And this is what makes the community stronger, because a lot of people are actually interested in getting their own stuff into the language. This is vastly different from, for instance, getting a change into Node.js, where there are internal politics that make it hard — the interpreter is sometimes part of a Google project and sometimes it isn't. The fact that everything in Python is one compact package that you can modify as you wish gets a lot of people interested and also makes them feel like this is a stable platform: even if commercial support went away, they could still take ownership of the whole thing. The fact that we had — still have — C extension modules meant that we could go into communities that other languages had a hard time reaching, especially scientific computing. The Python developers themselves could never have figured out all the things the language would be used for, but because other communities could come in and adapt it to their use, it became really strong and powerful. And this is also what made the web community very happy with Python, because you could embed it into environments like web servers and so on.
And because we are doing such a terrible job at package management and never got to multi-version dependencies, we actually have much more stable and flatter dependency hierarchies than a lot of other communities do. I don't know how familiar you are with JavaScript, but one of these incidents was that someone unpublished a package called left-pad, which padded a string with some spaces on the left. You would think that if someone deletes such a simple operation from the package index it could never have an impact. But because, somehow, everybody was depending on left-padding a string with spaces, deployments failed; people couldn't get new code up because their build servers tried to install this very boring package from the Internet. As a result, I looked a little bit into what else the JavaScript community is doing, and it's really absurd in some ways. There's a package called isarray, which checks if an object is an array. It's a one-liner. But because everybody has one or two dependencies that use it, it's almost impossible to have a large JavaScript project and not also depend on this isarray package. And while the code itself is a one-liner, there is something like half a kilobyte of license file that comes with it, a JSON document describing what it does, documentation — so you actually download something like 10 kilobytes of data. And if you look at the JavaScript community as a whole, the downloads of isarray add up to something in excess of a terabyte a month. We're not doing that, and I think that's good, because it makes everything a lot more predictable. When we push out a security update in a library, it's very likely the entire application will see the security update, whereas with JavaScript you might have to update the intermediate dependencies as well. It's a very common problem there that dependency pins are so hard that a dependency of a dependency never gets a security update just because it was pinned too tightly. Runtime introspection, I think, is probably Python's best feature. The fact that I can look at what a program is doing — there are so many nice extensions to Python where you can connect to a process, see what it's doing, look at the threads. Sentry, the company I work for: the entire origin of that project was that you could crash in Python and look at all the local variables you had in the stack trace. That's really powerful, and I would never want to see it go away. It's very painful to look at JavaScript in comparison, where there's basically nothing you can do: with a regular expression you can parse the stack trace, and that's the extent of runtime introspection. But here are some of the things we could probably do to make our language more future-proof. And I feel like there's really only one thing we should care about, which is making the language core easier and simpler, instead of just making easier and easier libraries. People in the Python community love simplicity; they love using libraries that look simple to use. But a lot of those libraries that look simple on the outside do really crazy things internally.
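To show what that kind of introspection looks like in practice, here is a small sketch of the pattern a crash reporter can build on: walk the traceback of an exception and read the local variables out of each frame. This is a simplified illustration with made-up function names, not Sentry's actual code.

```python
import sys

def report(exc_type, exc_value, tb):
    # Walk every frame in the traceback and dump its local variables --
    # the kind of runtime introspection a crash reporter is built on.
    while tb is not None:
        frame = tb.tb_frame
        print(f"{frame.f_code.co_filename}:{tb.tb_lineno} in {frame.f_code.co_name}")
        for name, value in frame.f_locals.items():
            print(f"    {name} = {value!r}")
        tb = tb.tb_next

def divide(a, b):
    return a / b

try:
    divide(1, 0)
except ZeroDivisionError:
    report(*sys.exc_info())
```

But back to the libraries that look simple while doing crazy things internally.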
For instance, the very popular requests library in Python used to vendor packages — I think it still does to some degree — but the way it did it involved monkey-patching modules and a bunch of other things, and that sort of stuff breaks, and you don't really see it until it breaks. The reason it does that, and many other libraries do too, is that it looks like a way to tame the beast. We could instead have invested this time in figuring out: why is everybody doing this, and can we just make a simpler solution for the problem? So I want to bring some ideas from other communities into Python, and maybe as a community as a whole we can figure out whether we can adopt them. The two languages I want to use as a reference here are JavaScript, which is eating the world, and Rust, out of personal interest and also because it's one of the most recent languages to appear, and as such had the highest chance of learning from everybody else's mistakes — and it did learn from everybody else's mistakes. JavaScript mostly learned from its own mistakes, which is great, but it definitely also picks things up from other languages. So, my favorite topic: packaging and modules. I've used a lot of JavaScript packaging now and I would never use it as a reference point to learn from, but there is something it has done really well. One of those things is that the entire description of a package is in a file called package.json, which means it's a static file: you can generate it if you want, but you can also load it at runtime and figure out what the package is. The Rust community has a similar thing with a file called Cargo.toml, which looks like an INI file but isn't quite, and it also defines everything that is relevant in terms of metadata and installation behavior of the library. And we don't have that, because we execute code to install a package; we do generate some metadata, but it's generally not available, it's very slow to load, and the Python community never really was interested in package metadata. But package metadata, I think, is the most important thing. The fact that you can access your metadata at runtime gives Rust and JavaScript a lot of possibilities to make much nicer decisions than we can. For instance, a package can figure out its own version — that's a simple thing. But it can also figure out its own dependencies, which means that in Rust and in JavaScript, when you require a dependency from a package, the import code (or in the case of Rust, the linking) can figure out what your own dependencies are, to give you the appropriate version of a library. This makes it possible, for instance, for one package in JavaScript to have its own left-pad function while another package has its own incompatible left-pad function, and they still work, because each sees its own little local reference. The fact that you can have multiple versions of a library has its ups and downs — I'm now leaning towards it probably having more downs than ups, given some negative experiences — but it's not like those communities are not learning from this.
And in particular in Rust, for instance — and I think in the JavaScript community as well — there is now talk about maybe finding a way to split dependencies in half, where half are private dependencies that are only internal to a library and some are public, so that, for instance, if you have a framework like Flask and you have an extension to Flask, it's guaranteed that the Flask the extension sees is always the same Flask your user code sees. So these communities are learning, and we can also start incorporating some of what they're doing. We are actually moving towards that, I think. Very few people still run setup.py install. I think we're at the point where we could get rid of setup.py; at least we have the infrastructure in place to build Python wheels without using distutils or setuptools at all. A wheel, once it has been created, is largely just a zip file, and we can use different tools to generate one. Because pip is already a separate tool, it could be extended to support Python packages which have nothing to do with setuptools or distutils. But we are still far away from multi-version dependencies. We would need metadata access, and there is no good API for that. And it's not just that we don't have a good way to access metadata: we also have an import system which doesn't support multi-versioning, for various reasons. But I think it would be very realistic to move towards a completely new packaging ecosystem with less work than we currently collectively spend on trying to make what we have work. You just need to look at all the packages we have, all the issue trackers — there is so much pain and suffering hidden there, and that doesn't even show the individual suffering of someone trying to make setuptools work in new and exciting ways. I think I wasted about a month of my life doing nothing else but trying to make Rust work with setuptools, and I'm still unhappy. So maybe we could channel this a little bit and build a different packaging infrastructure, and I feel like the packaging community in Python is already going this way. The trick would be to actually make a language standard — because nobody actually wants to standardize the current language. I think everybody who has worked with Python long enough knows they would try to simplify things, so there's no point in standardizing exactly what we have now. JavaScript just standardized what they had, but that language had a lot less stuff in it than Python, and when they figured out that some of the things they had were impossible to make fast, they actually changed them — they took away some features of the language. I don't think we as a community want to move in that direction. But maybe there is something that will get us there. For instance, there's MicroPython, which has clearly had its own experiences with the complexity of the language, and there is a page which lists the differences between MicroPython and CPython. Maybe if we get some more slightly-incompatible Python versions, we will actually find a common subset that makes more sense than what we currently assume the subset is, which is the entirety of CPython. I now feel like maybe PyPy would have been a little bit more successful by not trying to be CPython, but by being more bold, by doing more exciting things, so that people actually have a good reason for switching.
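Coming back to metadata access for a moment: this is roughly what the Python side of it looks like today — a small sketch using pkg_resources (the slow path mentioned above) and, on newer interpreters, importlib.metadata; the package name here is just an example.

```python
# A package asking about an installed distribution at runtime.
# pkg_resources works but is famously slow to import; importlib.metadata
# (Python 3.8+) is the newer standard-library route.
import pkg_resources

dist = pkg_resources.get_distribution("requests")    # example package name
print(dist.version)                                   # its installed version
print([str(req) for req in dist.requires()])          # its declared dependencies

try:
    from importlib import metadata                    # Python 3.8+
    print(metadata.version("requests"))
    print(metadata.requires("requests"))
except ImportError:
    pass
```

It works, but it's bolted on after the fact rather than being a first-class part of the import system.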
And I think the biggest problem that other Python implementations still have at this point is this idea we have as a community: if you go to the documentation and ask how to build an extension module, it says use these tools to build a CPython extension module. And nobody ever told people that this is the wrong default. The documentation should really say that unless you really, really know what you're doing, don't do it. Please don't do it. There are so many downsides to building a Python extension module against the CPython API that you will suffer for it for a long time, and there are much better ways to do it, like CFFI, where you build an independent library, consume it from Python, and get away from the idea of passing Python objects between your new world and the old one. But nobody in the community ever said it's a bad idea to build CPython extension modules, so everybody still tries to do it. Maybe we should just put it into the documentation that there are alternatives to building CPython extension modules — because once we get away from this, we can actually liberate ourselves and use more interesting Python interpreters. Next, my favorite topic: Unicode. I think we did it completely wrong, and the more I use other languages, the more I'm convinced that we got Unicode completely wrong. Look at what Rust is doing: they use UTF-8 everywhere, and everything gets easier. Where they can't use UTF-8, because it's not possible, they use a thing called WTF-8 — the Wobbly Transformation Format — which allows them to be compatible with UTF-16, or UCS-2 I guess, in places where they have to interface with a world that is not completely Unicode aware, on Windows in particular. WTF-8 also came about because of JavaScript, which for similar reasons as Windows decided that two bytes per Unicode character is all anyone will ever need, so they had to find new and innovative ways to deal with that problem. And they just embraced UTF-8 everywhere. We should too, but it's very hard, and the reason it's hard for us is that we use strings differently than other communities do. The benefit of using UTF-8 everywhere is that there is very little guessing about encodings. Did you know that if you open a file on Python 3 in text mode for reading or writing, it's not UTF-8 by default? It guesses the encoding. It's UTF-8 on most of the computers you have ever used, but if I do it on my server, it's ASCII, because it guesses the encoding from the environment and falls back to ASCII — unless that was changed recently, but it used to fall back to ASCII if it couldn't figure it out. Rust, for instance, decided that instead of trying to shoehorn more stuff into the Unicode string type, they would build a separate string type to interface with the operating system. If you use the Unicode APIs on Python 3 to interface with the file system, you will get Unicode strings back — unless a file name can't be decoded because it's invalid, in which case you will also get a Unicode string back, but one that contains characters which are invalid Unicode. So if you pass that string along for long enough, eventually it will break in the same way it broke in Python 2, just with a much more confusing error message about containing surrogates.
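Both of those behaviors are easy to see for yourself — a small sketch; the exact results depend on your locale and filesystem encoding, which is exactly the point, and the file name bytes are a made-up example:

```python
import locale
import os

# open() without an explicit encoding does not use UTF-8; it asks the locale.
# On a server with a misconfigured or empty locale this can come back as ASCII.
print(locale.getpreferredencoding(False))

# A file name containing a byte that is not valid UTF-8 (hypothetical example).
# os.fsdecode() smuggles the bad byte through as a lone surrogate...
name = os.fsdecode(b"caf\xe9.txt")
print(ascii(name))            # 'caf\udce9.txt' on a UTF-8 filesystem encoding

# ...which blows up later, far away from where the name came from.
name.encode("utf-8")          # UnicodeEncodeError: surrogates not allowed
```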
And I remember having this conversation at one point, five or six years ago: the reason we didn't want to use UTF-8 was that lots of people actually benefit from living only in the basic plane, which means only two bytes per character, because for Japanese text, for instance, that can be a more efficient representation than UTF-8. This also sparked the idea that in Python 3, a string will try to stay at one byte per character for as long as it can, then upgrade to two bytes until it no longer can, and only when you have characters outside the basic plane does it go to four bytes per character. And it turns out that, as of at least two or three years ago, this optimization no longer makes sense for a lot of applications, because people use emoji — and emoji are way past the basic plane. So you are now in this really absurd situation where, if you render a template in Jinja2, it starts out with HTML, which fits into ASCII, so it's one byte per character; you stream a little further, you hit your first non-ASCII character, and it re-encodes everything into two bytes; and then you hit the first emoji because someone left a funny comment, and it does it all over again with four bytes per character. The world has evolved to a point where Unicode, in practice, needs more than two bytes per character. So could we move to this idea of having UTF-8 everywhere? We could, very easily — we just have to give up the idea that we can access a character in constant time, and that we can slice strings. But we love slicing strings in Python, so I think that's a little bit in the way. I'm now fully convinced that the idea of being able to access a character in constant time fundamentally doesn't make sense and isn't useful, and also that we don't need string slicing. But we would have to start moving away from doing these things before we could start embracing UTF-8 as an internal encoding, and we are very far from that. I already talked about extension modules — I would love to get rid of them as much as possible, use more CFFI, and, as a result, use less libpython. If you've tried to build a C extension in Python and you want to distribute it to other people using Linux, there's a thing called manylinux1. It's a Docker image containing a very, very, very old version of CentOS — I think it's CentOS 5, probably eight or nine years old. The reason you build on a very old Linux is that the result is then forward compatible with more modern Linuxes. It's very painful, because you can't do modern SSL on that Docker container, but if you build a C extension on a very old Linux, it runs on new Linuxes. So in theory, all we would have to do is make one extension build for OS X, one for Windows, and two for Linux — one 32-bit, one 64-bit — and we would be done. But because everybody links against libpython, you actually have to build one for Python 2.7 with two-byte Unicode, one for Python 2.7 with four-byte Unicode, multiply that by Linux 32-bit, Linux 64-bit, OS X and Windows, and then do the same thing for Python 3.3, 3.4, 3.5 and 3.6. If you're lucky and you can use the stable ABI, eventually you don't have to do the Unicode variants anymore and can assume just one. But to do a release of one binary extension module that people don't have to compile themselves, you probably end up with something like 21 to 24 different tarballs or wheels.
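Going back to the string representation for a second: you can watch that one/two/four byte widening happen yourself — a rough illustration, with sizes that vary a little by Python version and platform:

```python
import sys

# CPython's flexible string representation: a string is stored with 1, 2 or 4
# bytes per character, depending on the widest character it contains.
print(sys.getsizeof("a" * 1000))        # ASCII              -> ~1 byte per character
print(sys.getsizeof("\u03c9" * 1000))   # BMP, above U+00FF  -> ~2 bytes per character
print(sys.getsizeof("\U0001f40d" * 1000))  # emoji            -> ~4 bytes per character

# One stray emoji at the end is enough to widen the whole string:
print(sys.getsizeof("a" * 1000 + "\U0001f40d"))
```

One funny comment with an emoji in it widens the whole rendered page to four bytes per character.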
And those twenty-odd builds per release are excessive — and the only reason is libpython. If you build a CFFI module, you can get away with four. This is a benefit that was never really understood by the community, so now you know: use CFFI if you can. Moving towards CFFI instead of extension modules is a realistic change. It's impossible for some libraries — NumPy, for instance, would not be able to move to CFFI as far as I understand, nor would anything that passes Python objects around. But if you want to make your JSON parsing fast, or if you have a utility library written in some other language and you want to use its functions from Python without passing entire Python objects around, it's very possible, and I think it's much easier to use as well. But because everybody was pushed towards regular extension modules, people don't even consider that CFFI might be the better solution. The last part is something we should steal from elsewhere. I put Babel on the list. Babel is a library for JavaScript where you take JavaScript code, do stuff with it, and generate other JavaScript code. This turned out to have a really profound impact on the JavaScript community, because you can use more modern language features on an older version of JavaScript. And because this was accepted as a legitimate path of software development, there's a concept called source maps, so you can still figure out where an error was in the original, untranspiled code. This made it possible to target newer versions of JavaScript on very old runtimes. Maybe something like this would also be an option in the Python community, to use things like async functions more prominently on older versions of Python — maybe there could be a thing where you transpile newer Python code so it runs on older Pythons, who knows. And then, obviously, TypeScript and Flow are very popular extensions to JavaScript for getting static typing in. I think we're moving this way with typing in Python 3, but we never really embraced it as much as the JavaScript community did. Also, it's very common now in other communities to just run a program that formats your code into the one true style, and there are no arguments about it. Go doesn't even let you compile code unless it follows the naming conventions. We would never be able to go there, because the standard library already has 20 different naming conventions, but maybe for our own code we could start to embrace the idea that there is this one tool you run — maybe there would be a "flake8, fix my code style". There are some attempts in Python to do this, but one thing we learned is that if you use one tool to format your source code according to some standard, your linter will complain about the output of that tool being different, because the linter was written by different people than the formatter. Stuff like that. It's not great, but I think this is probably one of the more realistic trends for moving the language into a new era where we can agree on standards. So, what can you personally do? Abuse the language less. Don't do stuff like this: there is so much code that grabs a random frame and its local variables — there was a SOAP library for a very long time that just modified scopes through getframe hacks. Try not to subclass built-ins anymore; there's only suffering there. Stop writing non-CFFI extensions if you can. And stop being clever with imports and modules.
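To make the CFFI recommendation concrete, here is a minimal sketch of the ABI-level mode — the shared library name and the add function are hypothetical, but the pattern is the point: describe the C interface, open the library, call it, with no libpython involved in building the library itself.

```python
from cffi import FFI

ffi = FFI()
# Declare the C-level interface of a hypothetical library built separately
# (in C, Rust, or anything else that can expose a C ABI).
ffi.cdef("""
    int add(int a, int b);
""")

# Load the compiled shared library; only plain C types cross the boundary,
# no Python objects and no libpython linkage in the library itself.
lib = ffi.dlopen("./libexample.so")
print(lib.add(1, 2))
```

Because the library never touches Python objects, one build of it can serve every Python version and even alternative interpreters.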
And if we stop being clever with imports and modules, maybe we can build a really cool import system that lets us do multi-version dependencies. One of the biggest mistakes ever made is that pickle addresses types by their internal dotted name. And, for instance, if you want to import a module at runtime: the __import__ function used to be so awkward to use that everybody would call __import__ with whatever they wanted to import, ignore the return value, and then assume that what they imported was in sys.modules — so they would do __import__ of foo.bar, ignore the return value, and then return sys.modules of foo.bar. This obviously won't work with multi-version dependencies, because sys.modules would have to have different keys. That was just an API design mistake that got copy-pasted all over the world, and everybody is still doing it. But awareness is the first step: if we know not to do these things anymore, maybe we can evolve the language. And with that, if there is still some time left, I will take questions. If someone has a microphone — how does it work? I will repeat the question. So the question is: if you cut away the hacks, would Python become less nice to use? Because a lot of the ecosystem actually depends on these hacks, like gevent and other libraries. The answer is probably yes if you take it away completely, and you should never take away people's ability to experiment with this. But it doesn't mean that everything has to stay a hack forever. I think it's fine to hack around temporarily to make setuptools do something nice; but instead of keeping that hack forever, we could step back and say: there is a legitimate need for this, maybe we can do it properly. All right. Thank you.