All right, come in, everyone. Lightning talks start at five o'clock. The best place to sit for the lightning talks is right up at the front, especially if you're a speaker. If you're a speaker in the lightning talks, I definitely need you up at the front. If you're not a speaker, there's still room at the front, or the lovely second row. Okay. Remember how squirrels pack nuts? Fill up the rows from the middle first.

First person on my list is Carst Vaartjes. Are you here, Carst? Good. You want to come and start setting up? Great. Number two will be Facundo Batista, or something to that effect. Facundo, are you here? Very good. All right. I'm just going to go back over here and check number three now. Some people would check several at once. Paul Hallett. Paul, are you here? Yes. I can keep doing this, actually. Raphael Schulze. Yeah, okay, you're pretty close to the front. I'm going to allow it. All right, you don't have to sit in the front row. That's great. Thanks, Raphael. Ben Foxall. Ben Foxall. Ben Foxall. All right. Someone's going to get bumped off the list. That's great. We've got loads of lightning talks for you today.

Come closer to the front, everyone. If you've just walked in, there's loads of room up at the front. When you're closer to the front, you can see better. The person giving the lightning talk feels like you're more engaged and more curious about what they have to say. It makes them happy. A happy speaker delivers a better talk, and a better talk informs you more and makes you happier, in a virtuous circle of knowledge transmission, passion, community, and fun. Come closer to the front, basically, is what I'm saying.

Yeah, that person is right. If you close the display settings app, that little warning will go away. Oh, we can leave it. I hope; I don't know. Lightning talks are a succession of talks interspersed with people heckling from the audience about how to fix your display settings.
Come on, lots of room near the front, everyone. The front is the best place to be for the lightning talks anyway. You get to see more. You get to learn more. You get to feel more. The whole thing is more engaging. The day ends on a high note. You go home happy, and the whole conference turns into way better value for money, which we can't possibly argue with. Come up to the front. Come up to the front. Lots of room at the front. Also a bit of room here in the middle, in a little enclave behind the cameraman. That's probably because people don't want to sit behind the camera. Oh, okay, I'm looking at a camera; I want to look at a talk. Come on in. We're going to start in about one minute, so there's not a lot of time now. If you're going to come to the front, it's right up here. Come on, walk, walk, walk, walk, walk, walk. Walk fast and safely towards the front. Now, we don't start until exactly the time. No, it's not my warning. My watch is the only watch that matters. You can start heckling me when the official lightning talks have started, but it's not fair to do that before. Get out, you. Hi. Okay. We're about to start. So if you're now walking in, I need you to start walking in much more quietly. So tiptoe to your seat from now on, as we're about to start. Same lightning talk rules as Harold: one hand for little finger claps, two hands for final claps. You have up to five minutes. You don't have to use the whole five minutes. Carst, take it away.

Thanks. Everyone, can you hear me? Okay. I'm Carst Vaartjes. I'm here to raise awareness for an open source project that we developed. It's called bquery. I want to use these minutes to explain what it does and why it's really nice, and I think more people should be aware of it and possibly use it. I co-founded my company three years ago, and when we started, we had some problems. We actually had clients from the start.
They were retailers with billions of records of data, but we had hardly any money, and we still needed to be able to respond to that. So we had this technical issue that we had to address, and that's how bquery, in the end, came to life, through various other parts in between. What does it do? It runs everything that you see here in the back. I actually have something like 20 million records here in the back, and it runs through them. It aggregates on the back end, basically, and makes it possible for us to be very efficient in terms of aggregation and reporting. So that's basically what bquery does. As you saw, the front end is HTML; it's what our clients use. The entire back end is basically Python. It's part of a larger stack, parts of which we'll also open-source later on, but for now I want to start with bquery.

Why did we make it? There are several solutions out there, like, of course, Hadoop. Spark did not exist three years ago yet, but there are other solutions. But especially for a very small startup, they are very hard to administrate, they can be very expensive, and they use a lot of resources. At the same time, there were other technological developments happening in the background. I think the lowest row is the most important one, which you see with MongoDB and WiredTiger: the move towards compressed on-disk storage. That was also happening in the Python community, because of the two men here who made bcolz. Unfortunately, they're not here at the moment, but they also helped us a lot; really great guys. What bcolz is, basically, is compressed data containers. It takes data and puts it compressed on disk, which means the data is sent zipped to your CPU, which unzips it there. The idea is that modern CPUs are so quick that you actually overcome the memory bandwidth issue and are nearly as quick as you would be in memory. And that's actually true. The only problem with bcolz is that it does not aggregate anything.
It's basically a bare-bones framework for compressed data and for reading and writing that compressed data. So that's why we made bquery. bquery basically sits on top of bcolz. These are our slides on SlideShare, so you can read this later on. But basically, what it provides is an aggregation framework, and it's rather fast. We have comparisons with pandas, for instance. Compared to pandas in memory, it's something like 1.5 to 2.5 times slower; that is, of course, comparing on-disk aggregations against what pandas does in memory. The sources are down there. There are some examples with a New York taxi data set, where you can see it being spread over a Hadoop cluster of eight machines with 30 gigabytes per machine, eight processors, et cetera. And, I hope it scales well: what that cluster does in 0.5 seconds with eight machines, my own quad-core machine is now doing in 1.8 seconds. And what you cannot see at the moment: it uses less than one gigabyte of memory and runs this fast on my own laptop. So that's basically what I thought might be interesting for more people. And it's downloadable. Only, if you have pip 8, it won't work at the moment, for some strange reason. If you have pip 7, you can still install it; otherwise, install it from the source on GitHub directly. We're still working on it, but you're of course free to look at it and also join the project if you want to. Yeah, so that's basically my short introduction of bquery and what it does. Any questions? Thanks very much.

Next is Facundo. Good. How's everyone enjoying their conference so far? It's cheap, but it works. Are you ready to go, Facundo? Without further ado, Facundo. Let's give him a big hand.

Thank you. So, raise your hand if you use virtualenvs. Right, a lot of people. Virtualenvs are awesome, right? I mean, you have a lot of benefits.
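The kind of out-of-core groupby aggregation that bquery performs can be illustrated with a minimal standard-library sketch. This is not bquery's actual API (which operates on compressed bcolz ctables on disk); the column names and data here are made up purely for illustration.

```python
from collections import defaultdict

# Hypothetical records: (store, revenue) pairs, standing in for the
# billions of retail rows bquery aggregates from compressed
# on-disk containers. Names and values are illustrative only.
records = [
    ("amsterdam", 10.0),
    ("berlin", 5.0),
    ("amsterdam", 2.5),
    ("berlin", 7.5),
]

def groupby_sum(rows):
    """Group rows by key and sum the values: the core operation
    bquery runs chunk by chunk over compressed data."""
    totals = defaultdict(float)
    for key, value in rows:
        totals[key] += value
    return dict(totals)

print(groupby_sum(records))  # {'amsterdam': 12.5, 'berlin': 12.5}
```

The point of bquery is doing exactly this without ever decompressing the full data set into memory at once, which is why it stays under a gigabyte of RAM in the demo.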
You can install whatever you want without dirtying your main installation, and you can reproduce environments from one computer to another, et cetera, et cetera, et cetera. I don't need to sell you virtualenvs. What is the problem with virtualenvs? We need to manage them manually. That's not really a problem when you're working on a big project, because you enter the virtualenv and you're there for eight hours. But what do you do for scripts? You have 35 scripts on your computer. Do you install the dependencies of all 35 scripts in one virtualenv? Or do you have 35 virtualenvs, one for each script? And you have to remember which virtualenv goes with which script, remember to enter the virtualenv before executing the script, get out of there, enter again, et cetera, et cetera. What do you do in that case? It's a problem. It's a mess.

So, fades is here to help us. With fades, it's very simple. You only indicate the dependencies; that is all you care about. You care about your script and its dependencies. You only need to declare the dependencies, execute it, and nothing else. How do you execute it? Well, very simply, as any other script: calling it with fades, or putting fades in the shebang, or even as a Python module, if you want; you can call it and execute your script with it. And you only need to specify the dependencies. How do you specify the dependencies? Well, it's very easy. For example, if you want to run a script in a virtualenv that has requests, you just call fades -d requests, and fades will execute your script in a virtualenv that has only requests installed. If you call this on a clean machine, it will create a virtualenv, install requests by calling pip, et cetera, and execute the script there. The second time you do it, it will be super fast, because it already has a virtualenv with requests installed. If you want to execute another script with only requests, it already has the virtualenv.
You don't need to care about anything. If you call fades -d with your dependency and don't specify a script, it opens an interactive interpreter for you. This is the quickest and easiest way to try a new library. Did you try this library? No, I didn't. Oh: fades -d library. And you have an interpreter that lives inside a virtualenv with that dependency installed, and you just try it. You don't need to do anything else. If you like to use IPython, you just tell it to use IPython. If you want a specific Python version, you just tell it to use that specific Python version. If you have several dependencies, you just pass -d several times. If you need one of the dependencies at a specific version, or greater than or less than whatever, you just specify it. If you have a requirements.txt, you tell it with -r.

So, those were the simple ones. Let's level up. The simplest way to work with a script is this one: you have your script, you put fades in the shebang, and how do you specify the dependencies? See that magic comment, fades, on the import? Fades will execute this script in a virtualenv that has only requests installed, because you told it through a comment. You can write the comment in several ways. You can put it in the docstring, you can put it in the fades shebang line, et cetera. There are several ways.

For example, another complicated task is to start a Django project with a version of Django that you don't have installed on your system. How do you do that? Well, with fades it's easy, because you call fades, telling it that the dependency is Django 1.8, and with -x you execute something inside the virtualenv. So you are executing the django-admin of the version from the virtualenv, and this way you can start the Django project with that specific version. And also, for example, if you have specific pip requirements because you have a proxy or whatever, you can also tell fades how pip should work.
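The invocations described in the talk can be summarized as a short command listing. This is a sketch based on the options as described above (-d, -r, -x); requests and Django are used only as example dependencies, and the script names are made up.

```shell
# Run a script in a managed virtualenv that has requests installed
fades -d requests myscript.py

# No script given: open an interactive interpreter inside that virtualenv,
# the quickest way to try out a new library
fades -d requests

# Several dependencies, a pinned version, or a requirements file
fades -d requests -d "django>=1.8" myscript.py
fades -r requirements.txt myscript.py

# Execute a command from inside the virtualenv, e.g. start a Django
# project with a Django version you don't have installed system-wide
fades -d "django==1.8" -x django-admin startproject mysite
```

The first run creates the virtualenv and installs the dependencies with pip; later runs with the same dependency set reuse the cached virtualenv, which is why they are fast.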
So, keep calm and use fades. You can install it very easily. It's already in Debian, it's already in Ubuntu, it's already in Arch; you install it, you use it, and no more manual virtualenv handling. Thank you very much.

All right, come on in. There's still a few people coming in, everyone, so make some space at the end of the rows. If you spot there's space in the middle of the row, everyone move two seats towards the spaces. Make it so that people can actually come in; make yourselves helpful there. There you go. That's one space liberated there. That's fantastic. There you go. There's loads of people sitting down at the back of the room. They could easily be given a bit more... Oh, my God, the power! Everyone is doing what I say. There you go. All right, next is Paul Hallett. Paul, you're here somewhere. There you go. Yep, that's good. And then after Paul is Mr. Schulze, Raphael. Yep. And then after that, we're going to have Ben Foxall. Ben, did you arrive? Good. All right, come a little bit closer, closer forwards. Are you ready to go, Paul? I am. In that case, give him a big hand.

Hi, everyone. So, my name is Paul Hallett. Does anyone here use Django? Yep. Does anyone know that the Django Software Foundation has a Code of Conduct Committee? A few people. Okay, so I'm a member of the Django Software Foundation Code of Conduct Committee. I'll call it just the Code of Conduct Committee from now on. And I have an announcement to share from Django and the Code of Conduct Committee today. But for those of you who didn't put your hand up, let me tell you a little bit about what we do. Pretty simply, we are responsible for making sure the Django community provides a harassment-free experience for everyone involved. We have members that are global and representative of you, the community. So we've got people from different backgrounds and different experiences. And as I said, we have something quite big to announce today, and that's our Code of Conduct documentation.
This isn't the Django Code of Conduct itself. I'm sure you're all aware, if you've ever been to a Django conference, that we do have a Code of Conduct. This is documentation on how we actually deal with the process of receiving issues and making sure that the Django community is friendly and safe and inviting. Before I get into why we decided to open-source this, I want to tell you a little bit about what's involved. The first section is about membership: how we elect members, and how members can maintain their status. We are trying to promote a non-burnout style of membership, so people opt in and say, I'd love to be part of this for six months, and after that they have no obligation to stay. We also have the most important part, which is how we handle reports. We take each report as seriously as the next one and the previous one, and we have processes for exactly how we go through this. So people who feel uncertain about making reports can see the exact processes we go through, to make sure that we handle those issues seriously and justly and fairly. We also document how we keep records of reports. There's no point in us running this committee if we don't keep a record of those reports, because that record is what lets us collaborate with Django organizers and other organizers of Python events. And that includes some processes around anonymization as well. You may not realize this; not many people put their hand up to say they knew we had a committee, so very few people even realize that we actually work with the organizers of every single official Django conference to make sure that reports are shared between them. They share the attendee list with us, and if there are any reports, we're able to share them with other communities. And finally, you can also find our transparency statistics there. These are the actual figures for the number of reports we've received over the past two years.
As you can see, we've gone from 11 two years ago to seven so far this year. So hopefully we'll be able to bring that down and ultimately make ourselves redundant and make the community friendly and inviting. Why did we do this? There are three reasons. The first one is obviously to hold ourselves accountable to you, to make sure that we're providing this safe environment for people, and also to let you understand how we make these decisions. The second reason, which is closest to my heart, is to help other tech communities who have not been able to adopt a code of conduct, to show them that you can do this safely and fairly, that we don't witch-hunt; this is actually done through a very democratic process. And finally, to get feedback from you: to understand if we're doing a good enough job, to make the process better understood, and generally to get your feedback on how we can run this better. So a big shout-out to these people. The reason my name is here isn't because I'm pretentious; actually, it's because Ola Sitarska wrote these slides, and she's giving this exact same lightning talk right now in Philadelphia, in the USA, at DjangoCon US. So we're launching this right this second. It's there; go and have a look. Give us your feedback, and I look forward to hearing it. Thank you.

All right, who wants to hear a story about a squirrel? I thought, you know, we could just do jokes, but I haven't actually got any new ones since last year. So instead I've got a funny story about a squirrel that I'm going to try to read in an entertaining manner. And it starts off, and we're not going to have it for long: I never dreamed slowly cruising on a motorcycle through a residential neighborhood could be so incredibly dangerous. Little did I suspect, dot, dot, dot. That's your intro. Let's move on to Star Wars Word Clouds.

Hi everyone, I'm Raphael. This is a fun project that I recently worked on. Why?
Just because I like Star Wars and I like to play around with data, so I thought it was a good idea. I also think that these very small projects are a neat way to explore basic data science concepts and the capabilities of Python libraries. So in particular, with this I just want to encourage maybe newcomers to Python or to data science to do something similar. In particular, I wanted to test this library, the word_cloud library. I don't know if you know it; it's from Andreas Müller. If you don't, then check it out. The first step, of course, was to get some data for this. I went to this web page, which is basically a database that contains movie scripts, and this is a partial screenshot of what it looks like. What we need to do here is extract the text that is actually being spoken by the characters, because we want to create word clouds for Star Wars characters. With a bit of introspection of the HTML and the help of Beautiful Soup, we can just parse the HTML. This is example code for Episode V: we basically get the HTML using requests, parse it using Beautiful Soup, and then we iterate over it, extract the lines spoken by each character, and end up with a dictionary where the keys are the character names and the values are the strings of the text they spoke. I'm not going to go into details here. This is an example output for Darth Vader from Episode IV. As you see, this is raw text. So the next step is to go ahead and clean this, by doing things like removing punctuation, removing stop words, and lowercasing it. You do this for each character across all episodes, and then you end up with something like this, which is a nice string of just words. Again, the example is Darth Vader.
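The cleaning step described above can be sketched with the standard library alone. The dialogue line and the stop-word set here are tiny illustrative stand-ins, not the actual script data or the stop-word list used in the talk; the resulting frequencies are what gets fed to the word cloud.

```python
import string
from collections import Counter

# A toy stand-in for one character's raw dialogue.
raw = "No. I am your father! Search your feelings, you know it to be true."

# A small illustrative stop-word set; real pipelines use a fuller list.
STOP_WORDS = {"i", "am", "your", "you", "it", "to", "be", "no"}

def clean(text):
    """Lowercase, strip punctuation, and drop stop words."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return [w for w in text.split() if w not in STOP_WORDS]

words = clean(raw)
freqs = Counter(words)   # word -> count, ready for a word cloud
print(freqs.most_common(5))
```

A frequency mapping like `freqs` can then be handed to word_cloud's `WordCloud.generate_from_frequencies`, which is the documented way to build a cloud from precomputed counts rather than raw text.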
There are bits and pieces that you still need to do, like merging dictionary entries that are really the same character, like Luke and Luke's voice, or C-3PO and 3PO, stuff like that. This is the list of the top characters by word count, and then we're ready to create our word clouds. For that, it's as easy as instantiating this WordCloud class with basically a list of words and their frequencies. So let's play a game. Who's this? Come on. That's Luke, of course. Who's this? That's easy. 3PO, exactly. What about that one? Someone said Yoda, right? That's Yoda. Well, now an easy one, easy one. This one, right? Han Solo. Okay, last one. Who's this? Of course, our all-time favorite character. Exactly. So what we can also do is pass image masks to this library and create more beautiful stuff, and we come up with something like this. You recognize this. Here's Yoda. This is Padmé. That is Obi-Wan. And of course, Darth Vader. Of course, this is just the beginning. There's a lot more you can do with this data. For instance, use TF-IDF instead of just word frequencies. Apply machine learning; try to classify, maybe, dark and bright side, whatever. Do some network analysis of the interconnectivity of the different characters in the movies. I have the notebook on GitHub. If you want to check the notebook, it's here. Star it, fork it, clone it, run it, whatever you like. And thanks a lot for your attention.

All right, Ben Foxall next. And then Christian Stefanescu. Christian, are you here? All right, be ready to run up. After Christian, we'll have Jonathan Slenders. Jonathan, are you here? Very good. I was on Bryce Street, a very nice neighborhood with perfect lawns and slow traffic. As I passed an oncoming car, a brown furry missile shot out from under it and tumbled to a stop immediately in front of me. It was a squirrel, and it must have been trying to run across the road when it encountered the car.
I really was not going very fast, and there was no time to brake or avoid it. It was that close. I hate to run over animals, and I really hate it on a motorcycle. But a squirrel should pose no danger to me. I barely had time to brace for the impact. Animal lovers, never fear. Squirrels, I discovered, can take care of themselves. Are you ready? Yeah. Give him a big hand.

Thanks for that intro. That was perfect. Cool. So I built a little app over the last couple of days, and I thought I'd show it to you here. I actually made the last commit seven minutes ago, so I've not seen if it works, but hopefully it will. This site is a kind of multi-device site. So what I want you to do is get out your phones and stuff and visit this URL, which is eupy16.herokuapp.com. And we can see this device count going up. This is a tiny little Flask app that's using our service, which I won't talk about just now. So we're getting 21-ish devices, going up and down; it's kind of weird. And what you should see is this kind of grayed-out logo. This tool that I've hacked together is basically a kind of logo designer, I think. Each of our devices is connected to this Flask app, and through our infrastructure I can send out messages. So basically, let's choose the first color of this logo. Let's choose green. Woo! Cool, so that kind of works, which is good; I wasn't totally sure about that. And we can choose these other colors as well, so maybe green and blue, and you should see that updating on your phones and laptops. I put some buttons down the bottom which have emoji in them. The first emoji chooses a random color scheme for those two things, so you should see a different color scheme from your neighbors'. And I can press this a few times and you can get some new ideas for your logos, right? This speaker button: what I'd like you to do is put up the volumes on your devices. Cool.
You'll get more of that. Okay, so this is using the Web Audio API to synthesize a note that's randomly chosen on your device. Right, okay, slightly linked to the last talk: we're going to make this a bit more real. We're going to use these two things, choosing colors and sounds, to select a winner, okay? And the thing you're going to win is this BB-8. Whoo! So what's going to happen is everyone's phones are going to change colors, and they're going to play notes, and it's going to get a bit more chaotic. And then eventually one person will have the proper logo colors; everyone else will have gray, and that person will have won. So if you hold up your phones and turn them to the center of the room, that would make it kind of cool, right? So is everyone ready for this? Cool. So let's start this, then. Is this going to finish before the heat death of the universe? Have you checked? It'll be like 40 seconds. Okay, okay. Cool, so it's kind of in time. What is happening? And we're going to have a session tomorrow at lunchtime, so if you're interested, we'll show how we built this, or whatever, over lunch. Thank you very much. Thank you.

Christian? The organizers there: did you want to say something, Fabio? Did you want to say something? I saw you creeping up, no? Okay. Oh, you have a talk. That's even better. There you go. Are you ready to go, Christian? Yeah. Then take it away.

First of all, sorry, I'm super nervous. I usually don't speak in front of more than, you know, three people. My name is on the slides, and this is Nameko. Nameko is a Japanese word; I looked it up. It's basically a mushroom which you use to make miso soup. So I'm sorry, I pretty much gave it all away: you probably know I'm going to talk about the microservice framework. Yeah, so a bit of background: we had a big change in our company, in which we decided to go for, you know, the fancy stuff you build today, an API and some microservices.
And then we were looking around for a fitting framework to build our microservices, and we found Nameko. And although I'm still quite nervous, I hope I can convey some of the excitement I had when I found Nameko. So, Nameko can do RPC-based or event-based communication; I'm going into these patterns in more depth soon, if you don't know what that means. It uses eventlet under the hood for async worker handling. It uses dependency injection, for so-called dependency providers: these are all sorts of things you can plug into your Nameko services, like logging; think of it like some sort of middleware. And it uses extensions, for instance to define the protocols for transporting messages.

So let's look at RPC calls, and this is the main part we're using Nameko for. Okay, so I already said microservices. In the grand scheme of things, there's a method somewhere out there, on the internet probably, in some sort of service, which I want to call, and I'd like my code to be nice and readable. And this is actually doable with Nameko. So this could be the code you could be writing tomorrow. Under the hood, in the middle layer, Nameko does this: it serializes your call to JSON, sends it over RabbitMQ, using its AMQP protocol implementation, calls the service, then creates a new queue which will hold the result, and returns it. So that's it for RPC.

Now let's look at the code, and this is the part I like the most, I think. This is just a class, and this is just a method, and I think the only thing that's kind of unusual, besides the import of some fungus, is the decorator, @rpc, which transforms this plain Python method into something that actually works remotely. I was thinking about the parallel to Java, but I quickly dropped it: Java developers have POJOs, so-called plain old Java objects, but I don't think that works for Python objects.
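The service class being described looks roughly like this. This is a sketch following Nameko's documented service style; the service and method names are illustrative, and actually running it requires a RabbitMQ broker, so treat it as a shape rather than a ready deployment.

```python
from nameko.rpc import rpc

class GreetingService:
    # The name other services and the nameko shell use to address this service.
    name = "greeting_service"

    @rpc
    def hello(self, name):
        # A plain Python method; @rpc is what exposes it remotely
        # over RabbitMQ. Nothing here mentions the transport, which
        # is also what makes it easy to unit test.
        return "Hello, {}!".format(name)
```

With a broker available, the tooling from the talk works like `nameko run mymodule` to bring the service up, and from `nameko shell` a call such as `n.rpc.greeting_service.hello("world")` goes over AMQP and returns the result.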
So anyway, it's a simple Python class, and if you're like me and you like testing, you'll probably like this as well. Look at how nicely the method doesn't contain anything about the transport. Okay, now for the RPC part, I also like the tooling a lot. In the upper left corner we have a helper I can call from the command line which will run my service; it will bring it up, and I can start calling it from, and that's the part on the bottom right, a shell. This is basically a Python shell which is configured to talk to said service. So interaction is quite fast and you can test stuff, and you can also use this in production to talk to your services in production.

Now, I said events are also possible. So for the scenario where I have an event handler somewhere in my service, I can write this kind of code: I instantiate an event dispatcher and send it a hello, and I can also pass a payload, and then, yeah, sorry, there's no response, of course; I'm just sending the event. Let's look at the code again; again, the import is missing, but the decorator is the only thing that distinguishes it from a simple Python class. There are three types of event handlers; I'm going to put them all there so you can see them. Singleton is maybe the easiest one: you want exactly one delivery. Broadcast means you reach all, and a service pool is actually just one out of a cluster of services. It can do HTTP, but I wouldn't. Thank you very much.

That was Christian. I've got Jonathan coming on stage, fantastic. After that it will be Mathias Rav. Inches before the impact, the squirrel flipped to his feet. He was standing on his hind legs and facing my oncoming Victory Cross Country Tour with steadfast resolve in his beady little eyes. His mouth opened, and at the last possible moment he screamed and leapt. I'm pretty sure the scream was squirrel for banzai, or maybe die, you gravy-sucking heathen scum. The leap was nothing short of spectacular.
He shot straight up, flew over my windshield, and impacted me squarely in the chest. Instantly he set upon me. If I did not know better, I would have thought he'd brought 20 of his little buddies along for the attack. Snarling, hissing, and tearing at my clothes, he was a frenzy of activity, a frenzy of activity very much like Jonathan, who will give us a short lightning talk about prompt_toolkit.

Thank you. So this is a very short presentation about prompt_toolkit, which is a library for building command line applications with a very strong focus on usability. For that, we go back to the normal Python shell, just to have a short demo. So let's start a Python shell and do some Python coding. For instance, we can type an if True: and then print hello world, just as a demonstration, right? Everyone has done this. Now, the problem is, suppose we have to execute this again. What do we have to do? We have to press the up arrow three times, like this. We have to press the up arrow again three times. There we go. Once more, and then we can execute it. That's a bit annoying. Even more annoying: if we're at this point and we have to insert a line, we cannot insert a line right below the if True:, right? The only thing we can do at this point is press Ctrl-C, like this, to interrupt it, fetch the lines again from the last three, and while fetching them, insert lines. So that's really annoying.

Now, coming back to prompt_toolkit: that's a library I've been working on for the last three years, and about a year and a half ago I released a tool called ptpython. That's a Python shell built on top of prompt_toolkit. Now let's do the same thing here. We type an if True:, we print hello, like this, we print world. And you see, as I type, I have syntax highlighting, right? And also I have very nice code completion. Now, if we have to execute this again, the only thing I have to do is press the up arrow once, like this. There we go. Even more, we have multi-line editing.
That means we can navigate in two directions; we can move the arrows up and insert lines in between here, and we can just execute the whole block in one keystroke. So prompt_toolkit is a library that implements those kinds of things for people who want to build interactive applications at the command line. prompt_toolkit has most of the readline functionality; readline is the library that's used by the native Python shell and by many command line applications. Most of the readline functionality, like Vi key bindings, Emacs key bindings, reverse incremental search, all those kinds of things that you expect, is implemented in prompt_toolkit. So we can search back in history and execute from the history.

Coming back to my presentation: we have seen this. So lately I got in touch with the IPython core developers, and we collaborated a lot, and this resulted in IPython 5. Maybe you have seen it; it was released a few weeks ago. IPython 5 has a front end built on top of prompt_toolkit. That means that the functionality you have just seen in ptpython, like the syntax highlighting and the multi-line editing, is all present in the latest version of IPython 5, with Windows support and Mac support. Further, the only thing I haven't said yet is that we support bracketed paste. That means that if you're pasting a chunk of Python code, it's recognized as being a paste, and it won't be executed yet; it's inserted as one blob of Python code in a multi-line buffer, so you can edit it before you execute it. It also means that the indentation will be kept as it was. And further, in the last year many tools were created using prompt_toolkit. So this is the list. The list keeps growing; many people are creating tools on top of prompt_toolkit. This is one of the latest ones, http-prompt, which is a combination of HTTPie and prompt_toolkit, a nice tool to do HTTP GET and POST requests.
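As a small taste of what building on the library looks like, here is a minimal sketch using prompt_toolkit's `prompt` function with multi-line input enabled. It is interactive, so it only makes sense run in a terminal; the prompt text and behavior notes are a sketch of the basic API, not a tour of everything ptpython uses.

```python
from prompt_toolkit import prompt

# Ask for a multi-line block of input. In multiline mode, Enter
# inserts a newline and Escape followed by Enter accepts the input,
# so a whole block can be edited before it is submitted.
code = prompt(">>> ", multiline=True)

print("You typed:")
print(code)
```

Features like syntax highlighting, completion, and custom key bindings are layered on top of this same `prompt` call through additional arguments; that is the route ptpython and the IPython 5 front end take.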
These are a few others. There we see a few database clients, like pgcli and mycli, and aws-shell to do interaction with Amazon. We even have two full-screen applications. Pyvim, for instance, is an implementation of vi in Python. It's more of a proof of concept, but if you want to play with it, it's fully functional. Not all the functionality of real vi, but it is usable. And we have pymux, which is a clone of tmux in pure Python as well. That one is very usable. I use it all day as a replacement for tmux, and every time I miss some functionality, I add it to pymux. So that's it. If you want to find me, you can find me on GitHub, on Twitter, or here at the conference. As a reminder: you can pip install ptpython, pip install ipython. Thank you. Next is Mathias Rav. After Mathias, we've got Juan Luis Cano. Snarling, hissing and tearing at my clothes, he was a frenzy of activity. As I was dressed only in a light T-shirt, summer riding gloves and jeans, this was a bit of a cause for concern. The furry little tornado was doing quite some damage. Picture a large man on a huge sunset-red touring bike, dressed in jeans, T-shirt and leather gloves, pottering along at maybe 25 miles an hour down a quiet residential street, and in the fight of his life with a squirrel. And losing. Mathias, you ready? Yeah. Hello everyone. I'm going to tell you about a new feature of Python 3.6, which you may have heard of, maybe not. It's mostly for those of you who haven't heard of it, so a quick show of hands: how many are reading the Python 3.6 release notes draft? Okay. That's nice. So maybe half of you will learn something new now. It's called literal string interpolation, and it's PEP 498. Suppose you have a simple hello-world function. You can use string interpolation to insert arguments into the stuff you're going to print. So it says hello world, hello Harry. You should probably know this.
And the thing about string interpolation is that if you have an application where you're doing logging, in the real world maybe some of your log lines are really long and have several things to interpolate. Basically, we would like to be able to do the same thing as we can do in the shell and in Perl and in PHP: that is, insert variables inside the string literal and have them be interpolated where they are. My second example here uses .format, which looks like what we want. But again, the problem is that I have to specify the variables to interpolate next to the string literal and not inside it. So the solution is the literal string interpolation feature of Python 3.6. You basically put an f in front of the string literal, so it's called an f-string, and you can put any kind of Python expression that you want inside it, and it will just be evaluated at run time. In this example, the name greeting will be taken from one of the arguments, the target will be taken from the arguments, and we call the title string method to capitalize the word. So I started playing around with this a bit, and I've made a small program that will automatically apply this transformation, so you can upgrade your code to use Python 3.6 as soon as it comes out, if that's what you fancy.
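The progression Mathias describes, from %-interpolation and .format to PEP 498 f-strings, looks roughly like this (the variable names are mine, not from his slides):

```python
greeting, target = "hello", "world"

# old styles: the values sit outside the string literal
old = "%s, %s!" % (greeting, target.title())
fmt = "{}, {}!".format(greeting, target.title())

# PEP 498 (Python 3.6+): any expression may appear inside the braces,
# evaluated at run time
new = f"{greeting}, {target.title()}!"
assert old == fmt == new == "hello, World!"

# format specs work just like str.format -- handy for long log lines
iteration, cost = 7, 0.1234567
assert f"iteration {iteration:3d}: cost={cost:.3f}" == "iteration   7: cost=0.123"
```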
So I took a real-world example from one of my machine learning hand-ins. I have some gradient descent algorithm I've implemented, and I'm writing out how long into the method we are, which iteration it is, what the current cost is, and blah blah blah. Basically, I have a program that can automatically turn this into the f-string below, where you can see how we do number formatting just like the old format method of strings. Currently I have just integrated this with Vim, because that's my editor of choice; you can blame me if you want. It works like this: you select the text you want to transform in your editor and press the equals sign, and it does the transformation. And there you go: it has turned the print statement into something that uses f-strings, and we can tweak it from here. That's all. If you're interested, the code is on GitHub (I'm Mortal on GitHub), and you can go check it out. It's a small project using the ast library that does a simple parse of your Python input and looks for the percent-formatting expressions. That's all, thank you. Juan Luis! After Juan Luis is Andrea, Andrea Crotti. Good. After Andrea is David MacIver. I grabbed for him with my left hand, and after a few misses I finally managed to snag his tail. With all of my strength I flung the evil rodent off to the left of the bike, almost running into the right curb as I recoiled from the throw. That should have done it. The matter should have ended right there. It really should have. The squirrel could have sailed into one of the pristinely kept yards and gone on about his business, and I could have headed home; no one would have been the wiser. But this was no ordinary squirrel. This was not even an angry ordinary squirrel. This was an evil mutant attack squirrel of death. Twisted evil. Somehow he caught my gloved finger with one of his little paws, and with the force of my throw, with a resounding thump and an amazing impact, he landed squarely on my back and resumed his rather antisocial
and extremely distracting activities. He also managed to take my left glove with him. The situation was not improved. Not improved at all. A big hand! Thank you very much. This is going to be quick, because I'm nervous as hell; not because I don't usually speak in front of hundreds of people, but because I don't usually prepare slides in five minutes. My name is Juan Luis Cano. I'm the chair of the Python Spain non-profit, and we organize the PyCon in Spain. But before getting into detail, let's add some context here. Okay, this is the country that you are in right now. If we zoom to the southeast of the country, you can see this white thing over here, which is the only human-made construction that is visible from space. This is one of the biggest greenhouse cultivation areas in the world. If we zoom in even more, we can see this Fort Bravo scenery that was used a lot in the 70s and 80s to film some spaghetti westerns, and if you zoom a little more, you can see Clint Eastwood over there when he was young. Well, we are celebrating PyCon ES this year in Almería, which is the city in the southeast that I was telling you about before. This is going to be our fourth edition, and we are aiming at 400 attendees. And what are you going to get if you come to PyCon ES? Well, you are going to get to see a very beautiful city with a lot of Arabic heritage, a lot of very beautiful monuments, and also one of the most beautiful beaches in Spain. And we are even celebrating it in October instead of November, so you can relax a bit and go to the beach, because it's still going to be very good weather. This is the other thing you are going to get, because the food there is very, very good. Maybe not as sophisticated as here in the Basque Country, but I can assure you about the quantity. Yes, this is called the tapas thing here in the south. And you also get to know the wonderful Spanish community. The Spanish community is so friendly, and we've been celebrating this PyCon ES, as I told you, for four years already, and
it's always amazing to get to know these amazing people. This is a picture of the third edition, which was last year in Valencia, again with 400 people or so, and this is the group picture at the end of the second edition in Zaragoza. The call for papers is still open, so Clint Eastwood wants you to come. Thank you very much. Alright, thank you. Andrea, you're next, then David MacIver, and then Anselm Ling. Now, Anselm, are you here? Anselm? Yeah, okay, very good. How's our timing? We've got ten minutes. Quick announcement: if you haven't picked up your ticket yet for the social dinner and you want to go, you have to pick it up before 6:15, so the lightning talks are not going to run over. We'll finish at 6; you've then got 15 minutes to pick up your ticket from the registration desk, after which it's closed, and then you can't go to the social event. Now, the thing that might be booing would be this guy on his motorbike. His attacks were continuing, and now I could not reach him. I was startled, to say the least. The combination of the force of the throw, only having one hand, the throttle hand, on the handlebars, and my jerking back unfortunately put a healthy twist through my right hand and into the throttle. Now, a healthy twist on the throttle of a Victory Cross Country Tour can have only one result. Torque. The Cross Country Tour is made for torque, and she is very, very good at it. The engine roared and the front wheel left the pavement. The squirrel screamed in anger. Andrea!
Hi everyone, I'm Andrea Crotti, and I work for a company in London called iwoca. One of the projects I worked on in the last few months is a migration of a big Django codebase from MySQL to Postgres, so I just want to share with you what we encountered, what the problems were, and try to help you not make the same mistakes. So, why? First, well, I think this image is amazing, so that's enough information. I don't take credit for it, but I think it's great. The reality is that originally the choice of MySQL was kind of a coincidence: they had to go live, and the only thing installed on the CTO's machine was MySQL, so they decided to go for MySQL. After a few years we already wanted to switch to Postgres, but then we found an actual, real problem: a query which we need to do all the time, everywhere, which involves a self-join and some other ugly stuff. With MySQL it has to be done more or less in raw SQL, or with some crazy hack with the ORM. However, Postgres has this nice thing called DISTINCT ON, which MySQL doesn't have, which allows you to write this query as a very simple, non-nested query, and it can also be written in the ORM in just one line. So that was kind of the final reason to do the actual switch. So how do you actually do that? Well, all you have to do is change the settings; the data will be moved automatically by Django, and that's it. Well, obviously that didn't work. Just a few numbers to understand how big the project is: 190,000 lines of Python code, more than 100 apps, 383 tables to migrate, and at the moment we have around 3,000 tests, which run quite fast, but they are a lot of tests. The plan (we are in progress, actually; it's not finished, but almost there) is to first adapt all the code, then actually do the data migration itself, and then we're done. The data migration actually was not as easy as I thought, because I found this project called pgloader, which
is great: it's fast, it's smart, and everything. The only problem is that you don't really get any live replication of the data, so you kind of have to do the switch in one go, which is kind of scary and not a very nice thing. That means we have to try it many times first to be sure it works, and only when we are sure everything is fine do we do it in one go. pgloader, however, is really a nice tool, and we also had great communication with the author, which helped us a lot; we even managed to convince the company to sponsor him to do some features we needed, so that's also nice. Another problem we had was that we have a few very big tables which we actually don't need to port at all, and we will move them somewhere else completely. To handle that situation, we just drop the foreign keys, change the queries, and add a database router to keep all these things where they are now; we will handle them later. So that's another option you can use. For the changes in the code: first of all, get everything running on your CI, like Jenkins or whatever; be sure that the new code doesn't break on Postgres anymore; look for all the places where there are raw queries that you want to fix, and test them; and then you may have to do a lot of manual testing and check things. One thing I had to do, for example, since we had a lot of APIs which were not tested at all, was write an IPython notebook to run three parallel Django instances connected to three different servers with different configurations, then with this notebook connect to all of them and compare the results, doing some approximation on the floating-point numbers we get back. So that was a lot of work, but that's mostly because there were no tests for this thing. These are just the tips of things you probably should do. Really use migrations for everything; luckily we do that, and we don't have anything in the database schema which is not in
a migration, and that helps a lot. Test all your code and all your queries, because that will make life a lot easier when you have to do anything like this. Never rely on the implicit ordering of databases; that's a really bad thing, because MySQL always orders on ID anyway so everything seems fine, but Postgres doesn't do that. Then try to make your Django apps independent from each other, so they don't import models all over the place in a tangled mess, and split your monolith as soon as possible, because that will really help with everything. Yeah, conclusions: this is the sentence I stole from this morning, and that's it. Thank you. Andrea! Our final talk today will be by David MacIver. David MacIver, that's the next one. Anybody who did sign up and didn't quite make the slot today, please come and sign up again tomorrow. If you're keen, you'll get there super early, and then we'll know that you desperately wanted to do your lightning talk. That's about that. Now, where were we? We had a man. There. We had a man screaming. The squirrel screamed in anger. The Victory Cross Country Tour screamed in ecstasy. I screamed in, well, I just plain screamed. Now picture a man on a huge sunset-red touring bike, dressed in jeans and a slightly squirrel-torn T-shirt, wearing only one leather glove, roaring at maybe 50 miles an hour and rapidly accelerating down a quiet residential street. Hi there. I'm not a squirrel, sorry. I'm David MacIver. I just want to tell you quickly about the testing library that I write, called Hypothesis. I'm afraid you've already missed the training course, that was yesterday morning, but hopefully I can intrigue you enough to check it out anyway. The basic idea of Hypothesis is that it makes your tests smarter and less work to write; mostly less work to write, by adding a source of generated data to them, so that it can try probing your test code with a whole bunch of different values that you inevitably forgot, because we all write our tests when tired. This is what it looks
like. It's a simple decorator library; it works with any testing framework you like. It's not a test runner. I use py.test; it works with unittest, it works with nose, whatever. And this is an example from when we were testing some Mercurial code. Mercurial represents a bunch of stuff internally as utf8b, which is, for some reason, a scheme for taking arbitrary binary data and turning it into valid Unicode, and that works as well as you'd expect. I don't know how widely known this one is. I want to claim that this is a great bug, because Mercurial has been in production and widely used for about 10 years, but I suspect the reality is that this is a relatively small corner. This is fixed now, by the way, and Hypothesis found a bunch more bugs in the same code once they fixed this one. It's also found bugs in the Python standard library. This is from the new statistics module in Python 3, one of the many nice things you get when you finally get around to switching to Python 3. And it even works now; it didn't then. To be fair, this is a horrible thing to do to any statistics module. In this case it's calculating the mean, and it doesn't work well with very large numbers, even though it works with most of the things we're going to want to use. This is a test using numpy. This one's not actually a bug; it's just floating-point numbers being awful again. So if you're doing scientific Python or anything with numpy, it will slot right into your testing; I'm sure you're all testing your scientific code. It works with Django too. It works really well with Django: it will automatically take a look at your models and just go, oh, you need these types, I know how to generate those, here, let me generate them all for you from scratch. In this example we're overriding a bunch of things, but you don't have to do that; the out-of-the-box generation is pretty good. This one won't make sense without the full example, but basically, in this case it tries to add the same user to a project twice, and the code didn't expect that. This is one I
don't have time to go into properly, but one of the really cool Hypothesis features is that you can give it a complete model of your API and tell it what should work, and it generates random programs against your API and eventually finds one that breaks. This one's a bit experimental and slightly harder to use than the rest, but it's pretty cool when it works for you. That's more or less all I'm going to say. I hope you check it out. The website is hypothesis.works, and it's got a whole bunch of introductory articles, but it really is quite straightforward to get started, and we've got a good community, and if I'm not around, they're always happy to answer questions as well. Thanks again. Thank you very much, everyone. That is us finished, actually properly on time. Thanks to each and every one of you for coming to the lightning talks, for proposing lightning talks, for being speakers, for paying the registration fees or for being here for free, for going out in the evening, for having dinner, for being lunch. See you all tomorrow. Thanks for coming. Good night. And I'll post the link to that story, which I was about 20% of the way through; I think that's maybe better read in your own time.
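David's core idea, probing a property with lots of generated inputs until one fails, can be sketched with nothing but the standard library. This toy is not Hypothesis's actual @given API and does no shrinking; all names here are illustrative:

```python
import random

def mean(xs):
    # naive mean: for very large floats the sum overflows to inf,
    # the same flavour of failure mentioned for statistics.mean
    return sum(xs) / len(xs)

def falsify(prop, make_case, tries=300, seed=0):
    """Probe `prop` with generated inputs; return the first failing
    input, or None if every generated case passes."""
    rng = random.Random(seed)
    for _ in range(tries):
        case = make_case(rng)
        if not prop(case):
            return case
    return None

# property: the mean of a list should lie between its min and max
def mean_in_range(xs):
    return min(xs) <= mean(xs) <= max(xs)

# generator: short lists of floats, some of them near the float limit
def float_list(rng):
    return [rng.choice([1.0, -1.0, 1.7e308]) * rng.random()
            for _ in range(rng.randint(1, 5))]

counterexample = falsify(mean_in_range, float_list)
# with these generators, lists of huge values overflow the sum to inf,
# so a counterexample turns up; real Hypothesis would also shrink it
```

The design point is that the test states a property over all inputs rather than a handful of hand-picked cases, and the framework supplies the cases you would have forgotten.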