There's no set format for this; it's just supposed to be a panel discussion. I don't have any pre-arranged questions because they only asked me on Sunday if I would host it, so we're just going to take questions from the audience. Is there a session runner for this? Do we have somebody officially representing the room? No, I guess not. OK, would someone run the microphone around and take questions from people? Thank you, Hynek — sorry, I'll get your name correct eventually.

OK, so we actually have four people on stage, and there are a couple of other people in the room. We'll just ask people to introduce themselves very quickly.

Hi, I'm Victor Stinner. I'm a Python core developer, working for Red Hat on OpenStack.

I'm Larry Hastings. I'm the release manager for Python 3.4 and 3.5, and I'm working on removing the GIL — that's called the Gilectomy; I just gave a talk about it.

Hi, I'm Christian Heimes. I also work for Red Hat, on identity management and security engineering. For Python, in the last couple of years, I've mostly worked on security: the ssl module, hashlib, and some PEPs.

Hi, I'm Yury Selivanov. I worked on async/await support, and I maintain asyncio.

So if you guys run out of questions, then we all have to leave. So keep asking questions, and we can stay for the whole hour, and you can enjoy the air conditioning. OK, I hope somebody has a question, because otherwise this is going to be really boring. A hand has been raised — thank you.

Hello, and thank you for being here and sharing your knowledge and experience. I've used sys.settrace in the past to build — Hold it closer to your mouth, please. — Sure. I've used sys.settrace to build debuggers and other tools controlling the execution of Python code, and it lets me do that on a line-by-line basis from the source code.
At some point I wanted to experiment with doing something similar on an opcode-by-opcode basis. Would it be simple to add a sys.settrace equivalent, let's call it, to play around at the opcode level?

You want a debugger that executes instruction by instruction? — Something like that, yes. — OK, currently there is no such thing, but there is an open issue from Stefan to at least display the executed bytecode. If you want something more, you may have to modify CPython. What I usually do is use GDB for that: you put a breakpoint and use a regular debugger.

Yes, I discussed exactly that with Stefan yesterday. GDB is great, but it's too far from the Python source code. My idea would be to sit somewhere in between, where Python bytecode is fairly easy to match against Python source code, and go from there.

With GDB you also have Python bindings — there's a Python plug-in for GDB — where you can get more information from the actual Python objects, which might help. Also, Brett Cannon and Dino Viehland from Microsoft are working on a new feature to make a pluggable system for plugging JITs into CPython, and the same hook could be used to analyze code at the bytecode level; that might get you there. I don't know the PEP number — has the PEP been published? Do you know? — I think the PEP has been published and it has a number, but I don't remember what it is. Brett's very good about writing PEPs, so I'm sure it's out there. I can look it up if you're curious. Just search for the author, Cannon, C-A-N-N-O-N, in the PEP index, and you'll find it. — OK, that's a great pointer. Thank you.

Why does asyncio not support UDP in the IOCP event loop? — Ladies and gentlemen, this is the Victor Stinner panel. — UDP works on Linux, but for Windows, we need someone to write the code.
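Going back to the sys.settrace discussion: a minimal sketch of line-level tracing looks like the following. The function and variable names here are illustrative, not from the panel; note that per-opcode events were only added later (Python 3.7+, via `frame.f_trace_opcodes`), which is exactly the gap the questioner describes.

```python
import sys

events = []

def tracer(frame, event, arg):
    # sys.settrace installs a global trace function; it receives a 'call'
    # event for each new frame, and per-line 'line' events after that.
    if event == "line":
        events.append((frame.f_code.co_name, frame.f_lineno))
    return tracer  # returning the tracer keeps tracing inside this frame

def demo():
    total = 0
    for i in range(3):
        total += i
    return total

sys.settrace(tracer)
result = demo()
sys.settrace(None)  # always uninstall the tracer when done

print(result)           # 3
print(len(events) > 0)  # True: we saw per-line events inside demo()
```

This is the line-level granularity the questioner already has; an opcode-level equivalent would, as the panel says, require either a newer interpreter or modifying CPython.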
OK, it's been on my to-do list for a year, but I'm not really passionate about Windows.

My question is in a similar space: it's about Windows and scientific users. Just a quick review of the current state: if you want to compile Fortran extensions, it's no problem on Unix, Mac, and Linux, but on Windows it's a problem, because the new compiler for Python 3.5 is Visual Studio 2015, and there's no MinGW yet that supports this model, so you cannot use MinGW to compile extensions. You could use Visual Studio Express, but the problem is there's no Fortran compiler with it, so if you use gfortran, you need GCC or MinGW to link it. So right now, if I want to release my package, which has Fortran extensions, on Windows, I can only deploy it for Python 3.4, which is not really sustainable long term.

So you're talking about compiling Fortran as a C extension for CPython, and this works under 3.4, but under 3.5 you have difficulties on Windows? — Yeah, it doesn't work. The thing is, I use f2py, which helps me generate a C extension from Fortran source code, which is very nice. You compile the Fortran, and for that you need the gfortran compiler, and gfortran needs to work together with GCC, which is MinGW on Windows — but a MinGW that supports the new runtime isn't released yet. You moved pretty fast to the new Microsoft compilers, but I cannot use a Microsoft compiler when I need to compile Fortran, and I don't think they work together.

Okay, not to be flippant, but I didn't know that you could compile C extensions from Fortran on any platform. I'm baffled that any of this is possible at all. Does anybody on the panel know about Fortran extensions? — Yeah. — It's more just that you're aware of it.
It's something I've been doing for more than 17 years, so Fortran extensions have been there for a long time, and a big part of the SciPy stack is actually written in Fortran — BLAS and LAPACK and that kind of thing. A lot of those numerical routines are Fortran, wrapped.

So what does NumPy do to distribute their extensions on Windows, then? — They use the real Microsoft compiler, or the Intel commercial compiler. You can do that, but you need to have a commercial compiler, and that's the problem. — So the pain point is: with open source software developed with free tools, you can't do 3.5 Fortran extensions on Windows. — Exactly.

Hey, you might be able to get a free license from Microsoft. Microsoft has been donating, for at least five or eight years now, free versions of the commercial tools for open source. — A Fortran compiler? — Everything. I've got an MSDN Ultimate license from Microsoft, renewed every year, that has basically everything Microsoft ever offered for development: Windows licenses, compilers, Visual Studio. If you're doing open source software, you might be able to get that too. I can refer you to the guy who's handling the open source licenses for us later on; just contact me. It's not for a commercial product, it's really just for working on open source. — It is open source. — If it's really open source, you might get a license from Microsoft for free.

But I don't think Microsoft has a Fortran compiler. — They stopped their Fortran compiler more than 10 years ago. Who has one — is it Intel? — Intel has a Fortran compiler, and the Portland Group has one. I never used them, but Intel's is probably the fastest commercial Fortran compiler out there. And Intel actually needs Visual Studio — they need the Visual Studio linker to link.
I used Intel before, but you need to renew it: every year you have to pay 500 bucks to keep that license. — Okay, well, I would say this is more of a question for the NumPy guys anyway. I've literally never heard of this, but the NumPy guys should have some sort of answer for you, because if this is very common in the NumPy world, hopefully they have an answer. — Yeah, I think they use a commercial compiler for this, and for them that's fine. But if you want other people to compile things and you want open source, it's difficult with a commercial one. — I understand, but what I'm saying is this is the wrong panel to ask: we don't have an answer for you. — Okay, I just wanted to make you aware that when you move very quickly to those new Microsoft compilers, the ecosystem might not be able to follow that quickly. That's maybe something you might have considered.

Right, so part of the pain point here is that we updated the compiler on Windows for 3.5. There was a pain point around 2.7: 2.7 was released in 2010, and it was stuck on a particular version of the Microsoft compiler, which was reasonably current at the time but isn't supported anymore. Microsoft made a specific release of that compiler for open source, just for us, so that people could compile C extensions. Because in the past, what would happen is that CPython would be released depending on a particular version of Microsoft's compiler tools, and when those were no longer supported, it would be very hard to compile extensions for that version of CPython. The version we shipped with 3.5 is the first version of Microsoft's compiler where they claim they will actually be backwards compatible with future compiler releases. So you'll be able to compile an extension for Python 3.5 with future versions of the Microsoft C compiler toolkit.
So it was actually kind of an exciting feature that we updated the compiler version for 3.5. Also, in general, I think it's best practice to be on the current version of the compiler anyway. So yes, I gather it's a pain point — I had no idea about this — but that would also be a good question for Steve Dower, who unfortunately is not here. Steve Dower is really the single person driving Windows development as a core developer these days; he's the first name that comes up every time. — Okay, thank you. — Sure.

Is Python the language getting less accessible for beginners with recent releases? — I worry about that, yes. So this is interesting. I used to have a podcast, which I haven't touched in a couple of years, called Radio Free Python, and I interviewed Raymond Hettinger. Raymond does a lot of training in Python: he'll go to some institution where they want to use Python, train people who've never used Python before, and get them up to speed. And he said there was a big pain point for him between Python 2 and Python 3. In Python 2, by the end of the day, he could have people opening files, parsing them, processing them, doing all sorts of things. Whereas in Python 3, he had to teach them about Unicode first, because most people hadn't encountered that — they had to understand it to the point where you're encoding, turning bytes into strings, and all that. What he said was that with Python 3, Unicode is now day-zero knowledge: in order to program in Python 3, you really have to understand Unicode, whereas before it was kind of optional. Now, I view that as a good thing, but it is also a heavier conceptual load for the starting Python programmer.

And we're adding more and more syntax. I mean, it's not like we're the D language, by any stretch, but there are increasingly more syntaxes that may be unfamiliar to you.
And you say, well, what does that do? And then you have to go and read some documentation, because you've never seen that construction before. I remember I was once positively baffled by seeing for-else. I'd never seen it before, and I thought, what does this even do? I figured it out, and then I asked, when did they add this? I went back and looked, and it was in the first version of CPython. It's very clever, but I'd never seen it before. And every time you have one of these syntactic constructions that people have never seen before, that's where things get a little funny.

What I would say is that in general, the language is not changing very much from version to version. We had very little new syntax: I think in 3.5 we added the @ sign for the matrix multiply operator, which literally isn't even used in CPython — none of the built-in types implement it. And I think there was maybe one other syntactic change, and I can't even put my finger on it. — Async/await. — That happened right at the end, and added a whole bunch of stuff, and I don't even understand it right now. So yes, I worry about it. I think that in general the language is already so complete, so old and feature-rich, that it's kind of hard to find new features you'd want to add anyway.

And type hints? — Type hints. Yes, what about type hints? There is a lot of complexity there for beginners. — Well, my dodge on that is that the annotation syntax has been there since 3.0. It's only now that we're starting to use it, but it's actually been there for a while.

Yeah, I just want to clarify something about Unicode. Raymond, I think, teaches mostly in North America, and in North America it's mostly English. For English-speaking users, maybe there is some conceptual overhead with Unicode: it's a new concept for them; before, they just worked with the Latin-1 encoding. But for the rest of the world, it's actually a good change.
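The for-else construct that baffled Larry works like this: the `else` block runs only when the loop completes without hitting `break`. A minimal sketch (the function name is illustrative):

```python
def contains(needle, haystack):
    for item in haystack:
        if item == needle:
            break
    else:
        # This branch runs only when the loop finished without `break`,
        # i.e. the needle was never found.
        return False
    return True

print(contains(2, [1, 2, 3]))  # True
print(contains(9, [1, 2, 3]))  # False
```

The common use is exactly this search pattern, where `else` replaces a separate "found" flag variable.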
It actually simplifies a lot of stuff. You don't see a lot of Unicode errors. You don't have to follow project-specific guidelines — like in Django, where you had to handle Unicode in some specific way in Python 2 in order to avoid Unicode errors. So it does simplify a lot of things in Python. Maybe it is a little bit harder to teach people about it, but it's an important concept to understand, and you'll spend less time debugging your Python code later.

And about new syntax: a lot of new syntax is optional. If you don't need async/await, you just don't use it. As a beginner, you don't need to know about async/await at all; you just write for loops, print hello world, stuff like that. It's not a big issue. As for type hints, Guido himself said that this is an experimental, provisional feature. We won't be annotating the standard library yet; we'll see how it goes. Does it require some additional syntax? Maybe it doesn't; we'll see. But again, it's kind of an advanced feature for large projects, for large software developers, and it simplifies development a lot.

Right, I guess the risk is that beginners might encounter this stuff on Stack Overflow. They've not been taught it, and it's like, is this even Python? — Right, and at the point that we add type hinting to the standard library — the standard library is supposed to be a place to go and read good Python source code. So if you tell beginners, go and read the standard library, then they're going to have to understand what type hinting is, or at the very least know how to ignore it properly. But — somebody piped up and said type hinting is still a bad idea — I'm on the fence about it. What Guido says is that Google and Facebook both had their own independent projects to add static type hinting to Python, and it became very obvious this was something that large coding houses needed. You should have some experience with that.
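The "day zero" Unicode knowledge discussed above comes down to one rule: Python 3 strictly separates text (`str`) from encoded bytes (`bytes`), where Python 2 blurred the two. A small sketch:

```python
text = "Grüße"               # str: a sequence of Unicode code points
data = text.encode("utf-8")  # bytes: one particular encoded representation
print(data)                  # b'Gr\xc3\xbc\xc3\x9fe'
print(data.decode("utf-8"))  # Grüße

# Mixing the two, which Python 2 often silently tolerated, is now an error:
mixing_fails = False
try:
    "a" + b"b"
except TypeError as exc:
    mixing_fails = True
    print("TypeError:", exc)
```

This is the change Yury describes: for non-Latin-1 languages it turns mysterious runtime UnicodeErrors into an explicit encode/decode step at the program's boundaries.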
Are you familiar with a large coding house that might take advantage of type hinting if it existed? — Yes. — Yeah, so it's not that there isn't a need for it. The guy who's coding something up in his garage, the only developer on his project, doesn't need type hints; he doesn't need the help. But there are definitely institutions where it's going to be very helpful, so I really can't say no to type hinting, and I hope that the rest of you will at least learn to love it the way its developers love it. Okay, moving on, hopefully. — Thank you.

Well, I don't see any hands raised. Do we all get to go home early? Oh, there's a hand up. Thank you for running the microphone around, by the way, Hynek. — You're very welcome.

So, question: how is CPython core development funded, and are you happy with the situation? — CPython core development is essentially not funded. There are a lot of people who contribute their spare time to it. There are a handful of people who are paid to work on it full-time — very rare. The one name that comes immediately to mind is Donald Stufft, who doesn't work on CPython itself, but he's the guy keeping PyPI running, and he's also doing a lot of work on packaging. Are you guys aware of anyone who's paid full-time to work on CPython? Oh really, on CPython? I thought you were working on OpenStack. — I'm not working on CPython core development full-time. Red Hat gives me time to work on CPython, so it's part of my time, but it's not my full-time job. But to answer your question about whether we are happy with the situation: I'm not happy with it, because many huge companies use Python, and I expect such companies have some money to fund the PSF, and I would be very happy to see more developers paid to do this.
So, about that: every year at PyCon US we have the Python Language Summit, where the implementers of CPython and other Python implementations, and people from core projects, come together, exchange stories, and exchange ideas for the next year. And one of the ideas we had, even after the Language Summit, is that we should again do what we did before my time — before I joined CPython core development in 2008 — and have more events a year, a kind of meeting or sprint, where we get together in one location and just hack on stuff: not just implementing code, but exchanging ideas, at a design level as well as an implementation level. We're currently looking into that. Actually, we are having a small sprint in September, held in California. It's invite-only, very small, and sponsored, so hotels are being paid for and so on.

But in general, most people doing Python core development are donating their own time to it. Guido very famously got to pick his own projects when he worked at Google, and I think that's kind of true at Dropbox as well, but he prefers to spend about 50% of his time doing Python development and 50% doing real projects for the company he's working for; he says it helps him stay grounded. Most people don't have that luxury — most people are not Guido van Rossum. And yeah, it is sort of surprising that so few people are paying for Python core development to progress when so many people are interested in it. It's the story of the Little Red Hen. But I don't know how to change the situation, particularly. The PSF has a pretty decent-sized war chest — it's got a couple of million bucks, I guess — and it hasn't been spending that money much on core development specifically. It's been spending it running PyCon and sponsoring sprints and meetups, but not core development, for the most part.
So there is kind of an idea to try and nudge the PSF into putting a little more money into core development. Yeah, I think that's all we've got. Oh, another hand.

Could you share with us the future of Python in terms of features — where are we going, what are the ideas you share between you? — Well, we're removing the GIL. Beyond that, I really have no idea. The general answer to this question is: the future of CPython is whatever people add to CPython. And that's not, again, being flippant. Maybe long ago there was a master plan for CPython in Guido's head — I want to add this, and this, and that — but it's been 25 years; Guido has added all the stuff he wanted to add. So there are people with ideas about how to enhance CPython, and we have this whole PEP process. If you want to see the immediate future of CPython, the near-term future, maybe a year or two out, I would say: read the PEPs that are open. Beyond that, nobody has any idea. I mean, individual developers have ideas of things they want to add to CPython. I have ideas for things I want to do — not necessarily visible to the user, internal implementation details: things like the Gilectomy, and other things as well, not really user-visible. But fundamentally, Python is changed by the core developers, and the core developers — the ones who propose the changes and make the changes — are the ones steering it. So we could individually answer what we're interested in working on and what we're hoping to do in the future, but there's no grand master plan.

Yeah, as Larry said, most of the new features are proposed by core developers, but actually any one of you can suggest new features. First of all, you should search the python-ideas mailing list archives to see if it was suggested before.
Maybe it was proposed before; then you should read and see why the idea was rejected — usually there is a very good reason for it. If it was never proposed, then you can tell us about your idea on python-ideas, and if core developers find it worthwhile, it can actually be implemented. Maybe you will need to champion a PEP, a Python Enhancement Proposal, but that's a pretty standard thing to do: there are guidelines on how to do it, and lots of PEPs to read through. So I'd say, if you have an exciting idea, just go and propose it on python-ideas.

Now, as Larry said, there is no global agenda for what direction Python should move in. Each core developer has their own plan. I can tell you about mine. What I actually want to do in Python 3.6, if I have time, is add asynchronous generators — to make Python even harder to learn. That's a very exciting new feature, I think, but there are lots and lots of technical details about how they will work and how they might not work, so we still have to figure that out. But I hope to have a PEP for that maybe in a month.

Another thing that we are focusing on right now for 3.6 is performance. For instance, a huge patch was merged about a month ago that optimizes how opcodes are encoded and processed by the ceval loop, which boosts performance anywhere from zero to 15%. So it's quite important. I also have a couple of patches that touch the ceval loop and opcode processing, and they can boost performance by, again, zero to maybe 18 or 20%. But those patches are kind of huge, and I have to spend a lot of time making sure they are correct and that they don't impact performance in a negative way. And this is what Victor is actually working on right now.
He is redesigning the Python benchmark suite: adding more benchmarks and making sure the benchmark launcher collects more stats, and does it in the correct way, to ensure that new changes in CPython don't harm performance. So yeah, a lot of things are happening. If you want to read about them, you can subscribe to python-dev and just read the emails, and then you will have a pretty good idea of what's actually going on in Python.

For me, the future of CPython is to make sure that nothing changes, because I like the language: I don't want to see new keywords and new things. For me, the language is perfect. But if you say Python — in fact, Python is much wider than just the interpreter. For me, Python is a wide community. For example, take the Django project: Django is not part of CPython, but it's a very important project for Python, and I hope that many new projects will pop up in a few years and will help people migrate to Python and make amazing new stuff. Today, pip is working very well — much, much better than a few years ago. So we don't have to add new stuff to CPython; I prefer to experiment with new things outside Python and publish them on PyPI.

Just as a quick note, there's an immediate change in Python 3.6 — oh no, you've already seen it. It's called f-strings. Was there a lightning talk about it last night? Yeah, okay. So you guys saw f-strings. I think everybody's reaction is about the same: for the first 10 minutes you're like, ugh, that's terrible, and then you're like, actually, that's kind of cool. So hopefully you will enjoy that.

Yeah, a more general comment about adding new features to Python: these days, we're all about agile processes and fast release cycles, but in fact CPython development is more like developing huge enterprise software. We have very long release cycles.
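The benchmark-suite work described above is about a dedicated, statistically careful harness; for quick informal measurements, the stdlib `timeit` module illustrates the same basic concern (repeat runs, take the best, to reduce noise). This is just a sketch, not how the official suite works:

```python
import timeit

# Compare two ways of building a list. repeat=5 gives five independent
# timings; min() is the conventional noise-resistant summary statistic.
comp = timeit.repeat("[x * 2 for x in range(1000)]", number=1000, repeat=5)
loop = timeit.repeat(
    "r = []\nfor x in range(1000): r.append(x * 2)",
    number=1000,
    repeat=5,
)
print(f"comprehension best: {min(comp):.4f}s  append-loop best: {min(loop):.4f}s")
```

A real regression check, like the one Victor was building, additionally pins the environment and compares distributions across interpreter builds rather than single numbers.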
So we release a new version of Python every year and a half. We usually maintain a Python version for at least five or six years, even longer, and we need at least one version of Python to deprecate a feature. So if you get a new feature into Python now, expect to maintain it for at least the next six, eight, maybe ten years. We welcome anybody to join CPython core development — we need you; we need even more people; we have a crazy number of patches and bugs on the bug tracker. But if you join, a bit of warning: it takes long to get a feature into Python, because we don't want feature creep. We want to add as few features as required to get something working.

A very good example was Python 2.6, when we added the json module. Before that, simplejson was a standalone package, and another developer and I spent a very, very long time getting the json module working properly and fixing very bad regressions, because the original author contributed the package and then mostly went off to work on his own version. The simplejson package still exists, and it's not the same as the json module in the standard library. And we just wonder why we have to maintain a package that gets, kind of, thrown into the standard library and then abandoned.

Speaking of the speed of development, by the way, one change happening in CPython core development right now is that we are planning a switch from Mercurial to Git. Specifically, it's not so much that we're moving to Git: it's that we're moving to GitHub, and that requires us to move to Git. There's a lot about the CPython core development process that is kind of slow, backwards, and ancient — processes that were designed back in the CVS and Subversion days. So there's an expectation that things will pick up a little bit once we switch to Git. It's going to make our workflow a lot smoother, and it'll be easy to do things like pull requests.
I don't expect that's going to speed up the pace of change in the language much, but hopefully it's going to make merging bug fixes a lot quicker. Okay, I think we've answered that question in about as many ways as anybody should. Do we have any other questions, please? Oh, there's a hand up over there. Wrong direction, Hynek — at about nine o'clock.

I was actually wondering about static type checking. There is mypy, which is under development, and I know it's not really in the core, but is there any plan to include that kind of feature in Python itself? — Okay, so as I understand it, no: there are no immediate plans to add a static type checker to CPython. The plan is to have a standard for how you express types in Python — that's the typing module, which is shipping in 3.6 — but the static type checkers are going to be written independently. That's mypy, and there's another one; I don't remember what it's called. — pytype? — pytype, okay. So there are competing, independent type checkers. mypy is the one Guido is working on, but I think he also contributes to pytype now and then too.

And the idea — again, the original idea with static type information was: we will add the syntax to the language, then let people experiment with it, and let a thousand flowers bloom. And almost nothing happened with it. So Guido said, okay, we're going to define one — we're going to pick one, and that's going to be the official standard — and now we're going to let people write their own static type checkers, so that people can enforce whatever level they're comfortable with, and we can experiment and see what the best approach is. So right now there are no plans to ship a static type checker with Python, but there are plans to have a standard for how you express the type information in Python.
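The standard being discussed (the typing module and PEP 484) defines only annotation syntax; external checkers like mypy or pytype analyze it offline, while CPython itself just stores the annotations as inert metadata. A small sketch, with hypothetical function names:

```python
from typing import Optional

def greet(name: str, excited: bool = False) -> str:
    suffix = "!" if excited else "."
    return f"Hello, {name}{suffix}"

def find_user(uid: int) -> Optional[str]:
    # Hypothetical lookup; Optional[str] means "str or None".
    return "alice" if uid == 1 else None

# At runtime the annotations are just data on the function object;
# nothing is enforced unless a checker inspects the code.
print(greet.__annotations__)
print(greet("EuroPython", excited=True))
```

Running `mypy` over a file like this would flag, say, `greet(42)` as an error, but the interpreter itself never checks the hints.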
And then eventually — my guess is that within a version or two, the standard library is going to have type information, although Guido has said no, that we're going to be a lot lazier about that. But certainly new modules added to the standard library would probably have type information; 3.7 is my guess.

Yeah, and then the next question would be: optimizations based on that type checking? — No, Guido has said in so many words that he doesn't expect the static type information to be used for optimization. Yeah, I'm just not seeing it. We have, of course, an expert in optimizing Python here, and Armin has said in so many words that you don't need static type information — in fact, static type information in its current form would be useless to PyPy, because it is not nearly complete enough. They need so much more information than the static type information defined in Python can express, so you might as well not even bother. There are certainly no plans for adding optimizations based on static type information in CPython — no plans, and I'm not aware of anybody who's interested in working on it.

Even Stefan Behnel on the Cython project is not using this kind of type information to generate bindings to C code. Actually, when you asked the question, I saw a lot of smiling faces over there — maybe you want to come up and talk briefly about your opinion, your experience with optimizations? He's shaking his head. — If you want to come up, stand up; if you don't, keep sitting. It's fine. — I already said what he was going to say. — Okay, fair enough.

Just to elaborate on what Larry said: in a JIT compiler, you don't have to know only that it's an integer. You have to know if it's always positive. You have to know the maximum value, to use the most efficient C type depending on the range of the value.
And that's just one example, but in a JIT compiler you need very precise type information, much more than just a high-level type.

Two hands down here. So, while I am walking the longest possible distance — yes? — I would like to abuse my power as bearer of the microphone. I've kind of promised Yury to come back to core development in Portland and help him out, but I think I owe it. And I'd be interested: can either of you give me a timeline, or the current state, of the infrastructure work that Brett Cannon — who sadly is not here — is currently doing?

I can give you a quick update, yeah. So again, the plan is to get onto GitHub. Currently the PEP repository has been converted to GitHub, and more recently the devinabox repository has been converted — I don't know what devinabox is. The PEP repository is just static text files, so that was pretty easy. But there's absolutely work happening. There's a certain amount of workflow redoing and retooling that's going to happen around CPython, and CPython itself is going to be the last thing that changes over to GitHub — there are a lot of other repositories that are going to change first. And there are a lot of arguments about how the workflow should work, how we're going to use named branches, yada yada yada. So absolutely, it's happening. There is a mailing list for it, of course — specific to it. It's probably one of those things where if you post to the wrong list, everyone will yell at you and tell you to go post to this other list instead, but there is a core-workflow mailing list, and it is very active. I dimly remember there was something about really wanting to get CPython onto GitHub by the end of the year. I'm not saying that's absolutely what they said — I may be completely misremembering, but maybe. — Did they say which year? — I'm sorry? — Did they say which year? — No, they didn't say which year, of course not. That's probably smarter.
Thank you. You mentioned f-strings and going through a little mental process of sort of initially being appalled and then kind of gradual acceptance, sort of maybe a five stages. But... The stages of grief. Yeah, exactly. So I wonder if you could take us through that, because I think I haven't got past the appalled stage but I'm absolutely open to move. I mean, I want to move on to the next stage. So I wonder if you could take me through the thinking there. Well, simply that it's just going to save some time. How often do you want to insert a local variable formatted into the middle of another string? And the answer is, pretty often. Wouldn't it be nice if the language could just sort of pave the way for that, so that there would be a special syntax where that would happen for you automatically? And the answer is, well, yes, I could get kind of behind that. So, fundamentally, I think f-strings are just going to save me some typing. Right now I have to do something horrible, horrible, like .format_map, parenthesis, locals, parenthesis. Star-star locals will work. Star-star locals into format, or format_map and just locals. Yeah, yeah, yeah. But it's okay, but you also said, ooh, at first I was a little bit appalled. So I was a bit appalled because I thought, wow, I could put sys.exit in there and that can't be good news. I'm not sure. So it's for local variables. I'm not sure that it pulls out modules. It's uncertain to me. Do you guys remember? Like it supports module variables but not built-ins or something like that? I think you can actually use any expression in an f-string. So that's why it actually will save a lot of space and a lot of code to type. And also a lot of modern languages have this feature, and users are kind of requesting it and are kind of expecting Python to have this feature as well. So I think it's a well-planned and well-discussed way of adding something like that to Python.
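The boilerplate Larry describes, and the f-string that replaces it, can be sketched like this (a minimal example with invented variable names, assuming Python 3.6+):

```python
name = "world"
count = 3

# The noisy pre-3.6 way: pull everything out of locals().
old_style = "Hello {name}, you have {count} messages".format_map(locals())

# The f-string way: each expression is written inline in the literal.
new_style = f"Hello {name}, you have {count} messages"

assert old_style == new_style
```

As Yury notes, any expression is allowed inside the braces, not just local variable names.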
To your reaction about sys.exit, you realize that you're the one writing the string. So if you write f, quote mark, curly brace, sys.exit, open parenthesis, close parenthesis, close curly brace, close quote, and you run your program and it exits and you're shocked, I don't know what's wrong with you. If you're worried about this string coming in from users, realize that this is a special string inside of Python; it's a static string inside of CPython. You would have to go through some contortions in order to get a string from the user that could possibly be tainted with these curly-brace sys.exit and have it automatically run. So I wouldn't worry about that. Okay, Andre was next. He's been holding up his hand for a while. I'm wondering, how hard would it be to change the CPython interpreter in order to run several CPython interpreters in the same process? Ah, multiple interpreters simultaneously. So I have two answers for you. The first is that Eric Snow looked into doing this for a while and he has stopped. So he found it too difficult or something like that. I think fundamentally, someone was just talking to me about it, was that Yuri? Okay, maybe you could talk about it for a minute. But my other... you could talk about Eric's work. My answer is that I don't think it's actually that hard. So inside of CPython, there are two separate structures. There's already a PyThreadState, which represents all the state that's specific to a thread. And there's another one called PyInterpreterState, which is all the state that's relevant to a Python interpreter. There are a couple of other random global variables and static variables and things sprinkled through the source code, and we'll just take a pass at cleaning those up. And then there is a single global variable that stores the current interpreter state, the global interpreter state.
And we just need to tell people to stop using that and instead use a macro that pulls it out of thread-local storage or something. Other languages just have a context variable that you're forced to pass into every function you call, and we wouldn't get away with adding one of those. So we just have to hide the reference to the current running interpreter inside of the thread state or the thread-local storage or something. But I don't think it's gonna be that hard. So as part of my gilectomy work, I'm thinking that I may actually, like as sort of a stretch goal, hope to try and get multiple interpreters running. I think that multiple threads is a lot more exciting, but I think multiple interpreters will solve problems for some people as well. And so I'm kind of hoping that we could do it. Yeah, so as Larry said, to modify CPython to actually... right now you can actually have multiple interpreters running, and for instance, there is an extension for Apache. It's called mod_wsgi. And mod_wsgi currently uses the multiple interpreters feature. There is some ugly code in there. It breaks all the time, but it actually does it. And a lot of people use that in production. So it is possible, but it's not recommended. And they can kind of get away with it because the only extensions they use, they kind of control; they know what they're doing. So that's why it's possible. To fix CPython completely, to make it a feature, there are two issues. First is to fix all the C extensions that we have out there, which is a very hard thing to do. And the second issue is that if you just enable multiple interpreters, you probably won't gain much. You can just use subprocesses, multiprocessing, for that, because to actually use it efficiently, you need an efficient way of exchanging data between multiple interpreters. And this is really hard. This is an unsolved problem.
It probably could be easier to exchange, let's say, bytes objects between sub-interpreters, but if you want to exchange complex data structures, it's a very hard problem to solve. The other thing I'd say, by the way, is that there's a guy, James, I forgot his last name... James Powell, is it? The PyGotham guy, you remember? Don't you guys know who I'm talking about? Yeah, so he has this cute thing he can do. There are a couple of flags you can specify to dlopen that'll cause it to load a shared library in a completely private way where the symbols aren't shared. And you just have to pull symbols out of it using the handle that it gives you. And so you can actually load multiple instances of the Python shared library, and they'll each have their own GIL and they'll each have their own interpreter, and you can run multiple ones simultaneously. I think it only works on Linux and one or two other platforms, because they're not standard flags to dlopen. But it does work, after a fashion. He's, like, demonstrated it in lightning talks and said, look at this crazy thing I can do. Thank you. Okay. So just going back to format strings, everyone's favorite topic. So to me, the reason why I can't get past that disgusted phase is that format strings sort of seem a little bit incongruent with the rest of Python, and not to be too prescriptivist, but like the whole Zen of Python thing: it's adding a little bit more implicitly than maybe should exist, like combining what is in the format string with what the local variable is called. Do you think that's much of a big deal, it doing things a little bit more magically rather than passing in .format? Explicit is better than implicit, yeah. So I have two answers for you. The first is that Raymond Hettinger has a sort of a mantra that he tells people that I very much agree with: Python is Guido's language, he just lets us use it.
And Guido likes format strings, and therefore they're going in the language, and Guido's letting us use the language he calls Python, and we should get to use the format strings too. The other thing I would say is that if you think format strings are gonna be too magical, take a look at how super works, where you don't have to pass in the object anymore. There's a lot of really silly stuff going on there, and I would say don't read it on a full stomach. So there's already a certain amount of magical stuff happening under the covers in Python, and at the end of the day, I think format strings are gonna make people more productive, because it's one of these things where now you're not gonna have all this boilerplate where you're saying dot format, parenthesis, locals, parenthesis, dah, dah, dah. Which, A, is ugly, and B, is stuff that people are gonna get tired of looking at. There's gonna be less code there, and it's gonna be very clear what's going on, and the implementation details may be a little yucky, but that's what Python is. Python is the language where we take the hit, we do all the hard work and all the really nasty work under the covers, and we give you this wonderful language that's very pleasant to use, and you have to write less code and you can get your problem solved more quickly, and you have less code to read and it's easier to read and understand. So again, at the end of the day, I really think that format strings are a win. I'm looking forward to 3.6. Let's talk about format strings. This is the format strings panel. So you mentioned that I don't like super? Sure. Maybe that's a problem. You're a little late. So more generally, as I said, I'm opposed to any change in Python, and when a new PEP comes in, I try to fight to avoid any change, but Guido has a superpower to accept anything.
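The super() magic Larry alludes to is real: since Python 3, the compiler smuggles the current class into the method through a hidden cell, so no arguments are needed at the call site. A quick demonstration:

```python
class Base:
    def greet(self):
        return "base"

class Child(Base):
    def greet(self):
        # Zero-argument super(): the compiler provides __class__ and self
        # behind the scenes, even though nothing is passed explicitly.
        return "child->" + super().greet()

assert Child().greet() == "child->base"
```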
So at the end, I have to accept that it's in, and I started to use f-strings and also the new unpacking generalizations, which is also a tiny change, but if you accumulate all these new features of Python 3 and you use them without having to think too far, in fact, it's really much more efficient than before, because the code looks more obvious. The code looks simple. And another answer from Guido, when I was strongly opposed to any kind of change because, for example, you can call exit in the middle of an f-string or you can import a module or do strange things: the Python language must not restrict the user. In fact, it's a language. You are free to use it as you want. You can write very crappy code, but it's up to you, and if you would like to validate, to check the quality of the code, you have linters like Pylint, Pychecker, Pyflakes and things like that, which help you to detect stinky code. Yeah, I just want to add quickly. When this feature was discussed, the f-strings, when they were discussed on the python-dev mailing list, a lot of people were asking, like, why are we adding a fourth way of formatting strings? And I liked what Guido said. He said that 10 years down the road, nobody will use the other three methods. Everybody will be using f-strings because they are convenient. So when you think about new features in Python, when we add them, just think about the big picture. What will happen in 10 years? Are we done? Does anybody else have questions about f-strings, as long as we're on the subject? It's a nice group therapy. We should have sampled the stages of grief before and after. I have one question about f-strings. Shouldn't it be possible to make them faster than the format call? I think they are a little faster. Eric Smith was the guy implementing them, and he was like, I am gonna keep going on this. I'm gonna keep working on this until this is the fastest way that you can do string interpolation.
And so it's like, literally, I think that there's like bytecode... there's special, I don't know if it's bytecode support for it or if he's using existing bytecodes, but it was like, no, this is faster than everything else. Yeah, that's what I assumed. I'm sure there have been a lot of grief progressions right now. I think Paul is next. This is not about format strings. This is about bad ideas. Christian's point about simplejson made me think of this as a process kind of question. Think back two years, 10 years, whatever time scale you want. Think of a bad decision. Something that everyone on the core team thinks in hindsight was the wrong feature, change, design, whatever. And give a little post-mortem on what went wrong. Was there a process-related flaw in decision making that led to that? Is there anything you can learn from past mistakes? Oh, I got one. And I had a little discussion about this with Guido at PyCon, actually. In Python 3.0, when you index into a byte string, you get an integer back. And I really think that it should give you back another byte string, kind of like indexing into strings gives you strings; indexing into byte strings should give you byte strings. And what happened there was the original idea for how byte strings were gonna work in CPython was one way. And then over the course of about 18 months it kind of changed and kept changing and kept changing and sort of cycled around to where byte strings really kind of behaved like strings again. And nobody realized that when you index into them you should get byte strings back again, that that would really be convenient. And then Python 3.0 shipped and we really couldn't change it anymore. I suggested to Guido that, so okay. At PyCon, twice, Guido walked up to me and he said, you know Larry, if you get this gilectomy thing to work, maybe we'll merge it into CPython and we'll call that CPython 4.0.
And I said, well, if we call it Python 4.0, then maybe we can make breaking changes. I've got a breaking change. And he said, what is it? Everyone's got their one thing. And I said, indexing into byte strings should give you back a byte string. He said, oh, that's pretty good. Maybe we could do that in Python 3 and it would be like a from future import, and over a couple of versions, and da da da da da. I'm still kinda... it makes me a little anxious where he was talking about changing something like that in the three series. But who knows, maybe we could change it. But anyway, fundamentally, it's hard to say that there's a process change here. I worry too much about, like, when something bad happens, people say, oh, we need to install a new process and prevent that from ever happening again. I think it's better to stay lightweight and just sort of handle problems as they come up. So I wouldn't try and add a new process around preventing this sort of thing. And fundamentally again, this is sort of a language design thing, and the way the language design works is that you get Guido to say yes or no, and there's nothing I would wanna change about that process. The development of Python is very, very open. Anyone is free to join the mailing lists. We have the python-ideas mailing list to discuss new ideas. And there is the python-dev mailing list to discuss more concrete ideas which are more mature. And in my opinion, there are too many discussions, because too many people give their opinion, and sometimes it's really difficult to read all the messages, like thousands and thousands of messages. So from my point of view, we have enough people to check that a feature will work in any case, and we will catch most issues very early in the design of new features. And if I would like to find one mistake in Python from the last years, for me it would be the migration from Python 2 to Python 3. Python 3, it's a great language, it's very nice.
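The bytes-indexing wart Larry was just describing is easy to demonstrate: indexing a bytes object yields the integer byte value, while slicing is the workaround that gives bytes back.

```python
data = b"abc"

# Indexing a str gives a str of length one...
assert "abc"[0] == "a"

# ...but indexing bytes gives the integer byte value, not bytes.
assert data[0] == 97      # ord("a")
assert data[0] != b"a"

# Slicing with a length-1 slice is how you get bytes back.
assert data[0:1] == b"a"
```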
It's just the migration which was not really well prepared. If I had to do that once again, I would help people to migrate more slowly, step by step. Yeah, about the migration from 2 to 3. I think our time machine was totally broken and we got into the wrong universe. So I joined the Python core development team right about when we were working on Py3k, the development name of Python 3.0, and on 2.6, and we all had this grand idea that people would write Python 2 code. We had one tool that was able to migrate the code to Python 3, and it didn't even occur to us that there's a possibility to write code that works on both versions of Python. So in retrospect, at one point I even had the idea it should be a bit easier for the migration, and during the development of Python 2.6 I added the b prefix and the alias that str equals bytes for Python 2.6. But for me it was just the idea to give the 2to3 program an indicator: yeah, that's really a byte string, and everything prefixed with u is a Unicode string, and if you find something that's not prefixed with either b or u, warn the user that the user has to make a decision about that kind of string. Yeah, so later on we were made aware that it's actually possible to write code that works on both versions, and later on we re-added features like the u prefix to Python 3 just to make it easier to write polyglot code. But yeah, that's one of the things, the biggest mistake we of the core development team made: we totally didn't expect the way that the migration was going to work. I think all of our mistakes were around the conversion to 3.0, so... There you go. I'd be hard pressed... I would not say that 3.0 was a mistake, but it's certainly been a tough process to get everybody up to three. More hands? Oh come on, doesn't somebody else wanna complain about f-strings? Okay, we've got some more hands. Would it be possible for Python to detect circular imports? I think Python already handles circular imports to a certain extent.
If A imports B and B imports C and C imports A, then something happens. It notices... I don't remember what it does. Sometimes it even works. So the import system first creates, like, a module object, and then fills the module object with attributes during the import. And okay, I'm not even sure if that's still true with the new import system Brett Cannon wrote, but unless you actually access any attributes on the circular way, you can still import them and then later on access them. So that mostly works, but if you happen to use circularly imported attributes in the globals of the module, then it breaks. So if you carefully craft your code in a way that first imports all your modules, and then you actually execute code later on, after you have done all the imports, then you're safe. That's one of the issues I tried to solve a long while ago: people did some funky things, like during the import of the module they spawned a new thread, and the thread executed new code, and the code had, like, embedded import statements inside, or used functions like the string formatting method on datetime, which did an internal import, and that could cause deadlocks. So don't mix threading and imports, and if you do circular imports, try to defer any kind of code execution until after you have fully imported your program. That's good practice. Yeah, so the two rules are: don't use recursive imports, and don't use recursive imports. It's always possible to actually restructure your modules in a way that they don't require this. Python, sometimes it works, sometimes it doesn't. When it doesn't, you will see an ImportError which is kind of hard to decipher sometimes. Brett Cannon, he is, like, the lead developer behind importlib; he knows about this issue. This is not an easy issue to solve, to give you a nice error message telling you precisely what's going on, what kind of cycle was detected.
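Christian's advice to defer code execution until imports are complete can be sketched with two hypothetical modules (written to a temp directory here just so the example is self-contained): module a imports b at the top level, and b only imports a inside a function, so by the time the function runs, a is fully initialized and the cycle is harmless.

```python
import os
import sys
import tempfile
import textwrap

tmp = tempfile.mkdtemp()

# Hypothetical module a: imports b at the top of the file.
with open(os.path.join(tmp, "a.py"), "w") as f:
    f.write(textwrap.dedent("""
        import b          # a -> b at import time
        VALUE = 42
    """))

# Hypothetical module b: defers its import of a into the function body.
with open(os.path.join(tmp, "b.py"), "w") as f:
    f.write(textwrap.dedent("""
        def get_value():
            # Deferred import: by the time this runs, module a is fully
            # initialized, so the a -> b -> a cycle causes no trouble.
            import a
            return a.VALUE
    """))

sys.path.insert(0, tmp)
import a
import b

assert b.get_value() == 42
```

Had b accessed `a.VALUE` at module top level instead, the import would fail, because a is only partially initialized at that point.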
So he knows about this issue; maybe he will come up with an idea how to fix it, or if you have an idea about that, you can approach him, I guess. I'd say also it falls a little bit under the consenting adults rule. In Python, we have a couple of guidelines that aren't necessarily in the Zen of Python. One of them is called consenting adults, which is just the idea that Python programmers are adults and we shouldn't chide them too much. If they try to do something that's a little naughty, let them go ahead and do the naughty thing. So yeah, if you're gonna have circular imports and it's important to you and you can get it to work, knock yourself out. And if it doesn't work, then it's kind of on you. So that's the sort of consenting adults rule applied here, okay? Moving on. Do you want to ask a question in the meanwhile? There's maybe nothing profound here, but I'm curious about how the gilectomy will affect threading and multiprocessing. Is that just gonna be a really straightforward transition, that all of those interfaces will just work as expected, so that when you spawn new threads... Just to interrupt you a little bit, people are flooding in because we're out of time. So this is gonna be the last question. But your question is, is the gilectomy going to affect the C APIs or is it going to affect Python code? Yeah, I was thinking more about the higher-level threading and multiprocessing interfaces. So the answer there is that Python today supports multi-threaded code. And on IronPython and Jython, you can write multi-threaded code that actually runs on multiple cores simultaneously. You can do that today. And it's Python and it's supported by the standard library. So no, none of those interfaces have to change. It's just that now, instead of running on a single core... if you write a program and run it on Jython or IronPython, it runs on multiple cores; CPython today runs on a single core. In the future it might run on multiple cores.
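The kind of already-required thread-safe code Larry means looks the same before and after any gilectomy; a lock protects the shared counter regardless of how many cores the threads actually run on:

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        # The lock is already required for correctness on Jython/IronPython
        # today, and would stay required if CPython ran threads on many cores.
        with lock:
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 40_000
```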
So you already have to write thread-safe code and use locks and those sorts of things. And in the future it'll just run on multi-core. So no, the interfaces aren't gonna change. Okay, all right, we're out of time. We gotta get off the stage. Thank you for asking questions and making us look like we had something to do. This is the end of the panel. Bye.