Okay, so I'm Armin Rigo and I'm here to let Romain speak.

Hi. So, this is the biggest crowd I've ever spoken in front of, so sorry. So yes, you can find me on the internet. I've done PyPy work and also a bit of Cython work. I tried to make Cython and PyPy work together; the approach was wrong, but it was interesting to see, because now we know it's wrong. And I've worked on Python 3 and NumPy support, mostly.

So yes, last year we didn't give a PyPy talk, and then people asked us if PyPy was dead. So no, it's fine, don't worry. Well, we would have made ten years of EuroPython talks in a row, but we broke the streak.

So yes, what is PyPy? PyPy is built on top of the RPython toolchain, which is a subset of Python in which you can write dynamic languages. The main advantage over C or C++ is that you get the JIT for free, basically, and also a good garbage collector. On top of RPython, the main interpreter we've built is PyPy, which is the fastest Python interpreter around.

So yes, over the last two years we've made a lot of small improvements: things like ARM support, CFFI, greenlet and gevent support, an incremental garbage collector, so if you're interested in games or low latency, it's better than before. Fast JSON as well, so if you're doing web stuff, this can be useful. And yes, NumPy, a faster JIT, and stuff like that. We also have uWSGI support, thanks to the C API for embedding. This is not the same API as the CPython C API, but you can use it to embed PyPy.

So yes, RPython is a framework, so we also have multiple languages on top of it: a Ruby implementation we've built called Topaz, and Hippy, which is a PHP VM.

So yes, to do stuff faster, we need money. I mean, we managed to do a lot of stuff with not a lot of money, I think. For Python 3 support, we've used about 50% of the donations and done about 80% of the work, probably. And as well, for STM, we made a second call for donations to get a more production-ready STM.
So hopefully, if you have too much money... and well, if you've donated before, then thank you. So yes, we also do commercial support for PyPy, so if you're a big company and you're afraid to use something you don't know how to hack on, then, well, you can hire us. And also, if you have performance issues: if you're open source, we'll usually help you for free, but if you're closed source, then, well, sorry.

So yes, if you have Python code, I mean, we're very compatible. Aside from implementation-specific stuff, it just works. And C code, well, we've worked a lot on being able to communicate with C. So we have cpyext, which is the compatibility layer for the C extension API. Well, the thing is, the C extension API is hard to support if you're not CPython, basically. So we've also built CFFI, which is about as fast as the C API on CPython, and a lot faster on PyPy. And we also have the embedding API, as I said, and we can also talk with C++. psycopg2cffi is like the best database driver on PyPy, for example. And we've also built CFFI-based lxml and Pygame. And we're also slowly but steadily improving NumPy support.

So yes, PyPy, I mean, it's just fast. And this is a lot of benchmarks that we have that represent real-world usage. Some of those benchmarks were contributed by Unladen Swallow, so we didn't write them all ourselves. We did not write the benchmarks to show how fast we are; we use them to help us get faster.

And so ARM: thanks to the Raspberry Pi Foundation, we have production-level ARM support. It's faster. The speed difference between CPython and PyPy on ARM is bigger than on x86, because the ARM CPU is not as smart, and PyPy produces very nice code for ARM, and, well, CPython doesn't. And it's in the standard distribution shipped with the Raspberry Pi, so that's quite cool.

NumPy support is in progress. We pass this many tests, but it doesn't mean much.
But it shows that, well, we've done stuff. And well, it's hit and miss right now, so just try it and tell us what you need, and we'll work on that before working on something else. And we don't have SciPy support yet, but we have an idea on how to make it work, so hopefully this will pan out. And no, we won't rewrite SciPy from scratch.

Py3k: so we released 3.2 support not so long ago, and we've started the 3.3 branch. If you want to get started on PyPy, this is the moment to get started, because once we've caught up with CPython, there won't be any entry-level tasks left. So you can find us at the sprint, and py3k is a good way of getting started on PyPy. There are also a few missing optimizations on Python 3, but we are working on bringing them back.

So yes, CFFI is a way of interfacing with C. It works at the ABI level as well as the API level, so unlike ctypes, it's more type-safe and it doesn't suck as much. Unlike the CPython C API, it runs on PyPy, which is good, and it's super fast on PyPy. I mean, it's almost as fast as just calling C from C.

And STM: well, the GIL is kind of a big problem, but the advantage of the GIL is that it hides a lot of concurrency problems. Software transactional memory allows us to keep the same GIL semantics without, well, without the GIL, so you can run on multiple threads. And it's also a mechanism for sane concurrency: threads and locks are horrible, so hopefully this will be better. And yes, we have released a PyPy-STM with a JIT, so you can find it on the PyPy blog. It was released not so long ago, so you can try that. And now we're talking about having a production-ready PyPy-STM, maybe. And if you're more interested in removing the GIL, then you can see our talks tomorrow morning.

So yes, you can find us on IRC and on the PyPy blog, and well, if you have any questions, then you can ask on IRC or right now.
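[As a quick illustration of the ABI-level mode just mentioned, here is a minimal CFFI sketch, not from the talk: it calls `atoi` from the C standard library, which is just an example function.]

```python
from cffi import FFI

ffi = FFI()
# ABI-level mode: declare the C signature, then open the library.
ffi.cdef("int atoi(const char *s);")
libc = ffi.dlopen(None)  # None = the C standard library (POSIX)

assert libc.atoi(b"42") == 42
```

[On PyPy, the JIT compiles such calls down to nearly direct C calls, which is where the "almost as fast as calling C from C" claim comes from.]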
How smart is PyPy compared to Swift? So the question was about Swift, and how much Python supposedly sucks compared to Swift, according to Apple. Well, if you look at Alex Gaynor's Twitter, he wrote the same algorithm Apple used for their benchmark, and he got very good performance on PyPy, so I think it was just marketing stuff, basically. Yeah, of course.

Coming back to mobile: my experience with Python was that, core for core, ARM would be about ten times slower than x86 with my CPython code, which I put down to very small or no cache, which makes sense because cache takes a lot of power. How big is this penalty for PyPy? Is it less of an impact, more of an impact, or about the same? So the question was about ARM, and CPython compared to PyPy. I think the main performance difference is in branch prediction, and PyPy does better because it's a tracing JIT, so it generates just one linear trace. Do you think it's more due to code or to data? Code, I would say.

And finally, a quick one: what do you think of PyPy possibly becoming, in an ideal world, the de facto programming environment for Android? So, PyPy on Android: I don't know. I mean, Google owns the platform, so ask Google. Yes.

Why don't you skip 3.3 support and go straight to 3.4 support? So, why aren't we going straight from 3.2 to 3.4? Well, 3.3 is a subset of 3.4, so if we do 3.3, then we get part of 3.4. I don't think there are any language changes in 3.4, it's just the library. Well, then we will get 3.4 for free.

Okay, so SciPy support. We want to embed CPython, basically. And the way we've written PyPy, the storage is done in the same way, so you can just pass stuff around. I mean, you can pass stuff around between CPython instances, so you can do the same with PyPy, and you can use that.
And SciPy, well, it's tons of C, which PyPy won't make any faster, so there's no point in rewriting that. Yes.

So, creating .exe files, right? Stand-alone .exe files? We don't really have anything planned for this, but you can use the embedding API. You will probably have to write some wrapper to turn it into one binary, but there's the embedding API, so I think it would potentially be doable.

How well can you currently constrain the resources used by the JIT, like its maximum memory usage? Can you basically say, I only want 50 or 80 megabytes used by the JIT, and it will actually keep to it? So the question was about resource management in the JIT. You can limit the size of your traces, that sort of stuff, so you can, yes, restrict memory. You can as well, I think. No, you can't. Yes.

The NumPy problems and bugs that are still left: is it a few really hard problems that need to be fixed, or a bunch of little problems, or a bunch of hard problems? What's the nature of it? So yes, what's not done in NumPy: basically, it's a lot of very small problems, and, yes, more testing, and things like interfacing with C libraries, for doing FFT, that sort of stuff. So we're working on that, and it's also a good way to get started if you're interested. For example, the bridge between C libraries and PyPy can be done in pure Python; you don't have to learn RPython, so it's a great way to get started, I think.

Between standard CPython and PyPy: can you write a program that runs okay in one implementation of Python, and it will run in the other without any change? So the question was about differences between PyPy and CPython. The garbage collector is different, so your objects aren't guaranteed to be collected as soon as they go out of scope.
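[A small sketch of that difference, not from the talk; the `Resource` class is just an illustration standing in for anything holding an OS resource:]

```python
class Resource:
    """Stands in for anything holding an OS resource, e.g. an open file."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __del__(self):
        # On CPython, reference counting usually runs this as soon as the
        # last reference dies; on PyPy it only runs at the next GC cycle.
        self.close()

# Fragile pattern: relying on __del__ timing.
r = Resource()
del r          # when the finalizer runs is implementation-defined

# Portable pattern: close deterministically, e.g. try/finally
# (or a context manager such as `with open(...)` for real files).
r = Resource()
try:
    pass       # ... use the resource ...
finally:
    r.close()
assert r.closed
```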
So if you rely on your destructors to free resources, it can cause problems, for example with file descriptors: if you open a lot of files in a loop and you don't close them, then you will run out of file descriptors. That's one of the main differences, I would say. Yes.

Is the version number related to PyPy? The number on my chart? No, it's related to Python.

A question about the garbage collector: what about weakrefs? So the question is about weakrefs on PyPy: any differences? No, no differences. Well, he wrote the garbage collector, so he knows best.

PyPy used to have quite a large memory footprint when it was running, and it has been incrementally improved. Where does it stand now for a significant, real-world system? So, the question is about memory consumption. It depends on your system; I would say it depends on your amount of code relative to your amount of data. If you have little code but a lot of data, then the difference won't be that big. If you have a lot of code that you run very often and you don't have that much data, then it will take more memory. And we've also fixed a memory leak involving file descriptors, basically, so it can be better as well.

Is there a threshold that you can set on the garbage collector? No, I was just repeating your question. Are there heuristics in the system for how frequently it is run? I don't think so. Well, we're not sure. But there are various thresholds that you can set, like running the garbage collector more often, that sort of thing, so you can do that. And then, I mean, if you use up the memory in your program and you run out of memory, there's nothing we can do; we can only free the objects that... Is it then a fixed percentage of overhead that is kept as garbage?
No, I think it depends on how big your heap is. Yes?

Is the code generated by the JIT shared? So yes, it's shared between threads, but not between processes.

What goes into __pycache__ on PyPy? Well, CFFI-based extensions. No, you can't cache traces; well, you can't write them to disk. Traces have memory addresses hard-coded in them, so if you reload them, your entire memory space is different and you can't reuse them. Yes?

Some people make a strong argument for type annotations in Python as a way to increase performance. Is PyPy going to make that irrelevant, or is PyPy going to use them in fancy ways? What's the plan? So, type annotations and PyPy: PyPy doesn't care. Yes, it will make no difference. I mean, you can use them for checking your code or whatever, but PyPy is built for dynamic languages. By making the language less dynamic, you don't gain that much: either you turn your language completely static, or, if you keep it in between, I don't think you can get much performance out of annotations. Yes?

What about RPython as a general-purpose language? Well, we don't recommend that; you shouldn't use RPython as a regular language. If you're writing VMs, I think it's a great language, but if you're writing general-purpose applications, I think it's horrible. You can do it, but it's your own problem.

So, the question was about implementing statically typed languages on top of RPython. I don't think you would get much out of it; I think you should use other tools. I mean, LLVM is just about code generation, you still have to write tons of stuff to target LLVM, but I don't know. At some point, PyPy is also just a VM; a lot of languages target LLVM, for example. Yes.
So, the question was about other languages targeting PyPy, and the other question was about statically typed languages. I mean, I don't think you can get as much performance as you could elsewhere, but I know that Hy, the Pythonic Lisp that compiles to Python bytecode, runs pretty well on PyPy. So, a dynamic language, yes.

Back on the subject of RAM: a while ago there was a blog post talking about a more efficient representation of homogeneous types in lists, rather than storing boxed objects. Can you say a bit about what that buys and for which data types? Okay. So, the question is about storing homogeneous types in lists and how that affects memory usage and performance. We have a thing called list strategies, and dict strategies, and set strategies. Yes. It works on basic types: basically integers, strings, floats. For example, in a list you just store unboxed ints, so you get just an array of integers like you would get in C, and you can do optimizations based on that. I mean, if you pass an array of integers to a loop and the loop is jitted, then the JIT specializes on the fact that the list contains only integers, so you don't need to spend time unboxing each integer, for example. And if you put a value back into the list, then, well, you skip the boxing and unboxing, so it's also good for performance.

So, what happens if you make it a non-homogeneous list? It turns back into a regular list of objects, and then, well, if you pass that list to the code, it will compile a different path.

In the Python 3 version of PyPy, are you still able to unbox integers, given that integers are now arbitrary-size? So, Python 3 integers: how are they optimized? That's one of the optimizations we removed, but we want to reintroduce it. I don't know if it has been done already.
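[A sketch of the list-strategy behaviour just described, not from the talk. This is plain Python: the semantics are identical on CPython, only PyPy's internal storage and compiled code differ.]

```python
# On PyPy, a list holding only ints is stored with the "int list
# strategy": an unboxed array of machine integers, as in C.
nums = list(range(1000))

# A jitted loop over it specializes on "all elements are ints",
# so no per-element unboxing is needed.
total = sum(nums)
assert total == 499500

# Appending a non-int silently switches the list back to the
# generic object strategy; behaviour is unchanged, but code that
# touches the list now takes a different compiled path.
nums.append("not an int")
assert nums[-1] == "not an int"
```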
Yes, it may have been done already. So, yes.

I remember that the initial implementation of the software transactional memory was a pure software implementation, and newer Intel processors are introducing hardware support to accelerate it. So, the question is about STM versus HTM, mostly. Right now we have a pure STM. It would be possible to have a hardware-assisted STM, but right now, I don't think the CPUs are ready: the Haswell CPUs have HTM support, but the limits are too restrictive for us to use it.

So, can we use the PyPy JIT to generate a binary? No, I don't think so. Well, the PyPy JIT generates linear traces, and it uses the types it sees at runtime to generate the traces. So, if you wanted to compile statically, you would need to compile for every possible type in your program, which is, I don't know, something similar to what HipHop did for PHP, and they ended up with a 10-gigabyte binary, I think. So, we don't want that. Yes.

So, C++ support. I think the project has moved from using GCC to using Clang, which is much nicer to have; well, it's easier to write plugins. So, now it's using Clang for that. Yes.

I didn't hear your question. Yes. We're going to embed, yes, CPython inside PyPy; at least that's the approach we're working on. Could you potentially move objects between them? Yes, I mean, it uses CFFI, so you can do what you want, and it uses the CPython C API, so basically, you can embed whatever you want. And, well, I think it would be possible, maybe; I'd have to try. I would like to try to embed Python 3 inside Python 2 or something. That could be fun.

What's the biggest installation of PyPy in production? I don't have the authorization to say it publicly, so if you see me outside, then I'll tell you, if you don't repeat it to anyone. So, yes, I can't say it here, but it seems that someone is using it for something complicated and interesting.

So, another round of applause.