All right, let's start. Hi, everyone. I'm Sebastian. I write Python code for a living, and I also teach people how to write Python code. You can find me on Twitter, so if you have questions or comments, that's probably the easiest way to get in touch. I also sometimes blog at this URL. I have a few posts about IPython, and I'm planning to write a few more as a follow-up to this talk, so if you're interested, go check it out. Two technical remarks before we start. First, there are a lot of features I want to talk about, and when I speak publicly, I tend to get nervous, and when I get nervous, I tend to speak fast. So if you miss something and want to come back to it, or maybe you're sitting far away and cannot see it, here's the link to the slides. I will also display this link at the end of my talk. Also, keep in mind that I'm using version 7.4 of IPython and 3.7 of Python. So if you try to reproduce some of these things and they don't work, just make sure to update IPython and Python to those versions. So why am I giving a talk about IPython? Well, I've been using IPython for over six years, and I thought that everyone else in the community was doing the same, which apparently is not true. Some people don't know about IPython; some people use just a small subset of its features. IPython is much more than syntax highlighting and tab completion, so I decided to gather the most interesting features and show you how they can be used to boost your productivity. We'll start with the basics and then move to more advanced stuff later. So what is IPython exactly? For those of you who have never heard of it: IPython is the father of the Jupyter Notebook, and the Jupyter Notebook is the father of Project Jupyter. IPython was initially created as 259 lines of code by Fernando Perez in 2001.
And this code was just executed at Python startup, and all it did at that time was display a numbered prompt, store the input of each command in a global variable, and import some libraries for mathematical operations and plotting. So it's been around for over 18 years. Initially it was just an interactive prompt for Python. Later it was turned into IPython notebooks to make data analysis easier. Then Project Jupyter was born. The idea behind it was to decouple the notebook part from the engine part, so people could use the notebooks with different programming languages. Today, Project Jupyter is probably the best-known form of IPython. But this talk is not about Jupyter. If you're interested in learning more about what Jupyter can do, there is a great three-hour talk given at PyCon US in 2017 by core developers and long-term users of IPython, so you can check it out. And even though I won't talk about Jupyter or notebooks today, most of the stuff I will mention also works with Jupyter. So, IPython is a REPL. For those of you not familiar with this term, it stands for read-eval-print loop: a type of shell that reads a command, evaluates it, prints the result, and waits for the next command. IPython is basically a Python REPL on steroids. Like, a massive dose of steroids. It has syntax highlighting. It has tab completion, and not only for keywords, modules, methods, and variables, but also for files in your current directory and even for Unicode characters. It has smart indentation: when you start writing a function or a loop and you press Enter, it will automatically indent the next line. You can search the history, either with the up and down arrows, or by typing part of a command and then using the arrows to match it, or by pressing Ctrl+R, typing some text, and then pressing the arrows to switch between the results. But that's just the tip of the iceberg.
So IPython also has extensions, magic functions, shell commands, events, hooks, macros; it's fully configurable; you can swap kernels; you can use it for debugging; and many, many other things. What I really love about IPython is how easily you can access the documentation of basically any object you can think of. Classes, variables, functions, modules, you name it. All you have to do is append or prepend a question mark to the name of the object. And if you want to see the whole source code of an object, you use two question marks instead. Also a nice trick: if you're not sure about the name of the function you want to call, you can use stars as wildcards to see the functions matching a certain string. So here I want to run a function from the os module, and I vaguely remember that it has something to do with "dir", so I'm just listing all the functions containing "dir" in their name. IPython stores the input and output of each command that you run in the current session. It will also store the input of previous sessions, and if you enable it in the settings, it will store the output as well. If you want to access the cached input for a given cell, there are many ways to do it. IPython creates a new global variable for each input command, or you can use the _ih or In lists to access the previous commands. Just keep in mind that those two lists are indexed from one, not from zero. The same goes for output caching: you can access the output of each cell through one of the global variables, or through one of the two dictionaries that store them (_oh and Out). You might be wondering: why do I care about input and output caching? Well, did you ever run a command that returns a value, just to realize later that you actually want to do something with that value? I did, many, many times. And if it's a fast command, then no problem, you can rerun it.
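To make the introspection and caching parts concrete, here is a short session sketch; the exact wildcard matches and timings will of course differ on your machine:

```text
In [1]: os.path?          # docstring and signature
In [2]: os.path.join??    # full source code
In [3]: os.*dir*?         # wildcard search in the os module
os.chdir
os.curdir
os.listdir
...
In [4]: 2 + 2
Out[4]: 4
In [5]: In[4]             # cached input (indexed from 1)
Out[5]: '2 + 2'
In [6]: Out[4] + _4       # cached output, accessed two different ways
Out[6]: 8
```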
But if it's a long-running command, or maybe you just can't rerun it because you had an authentication token that has since expired, then you have a problem. Unless you're using IPython. Since everything is cached, you can just go back and retrieve the value from the cache. On the other hand, if you don't want to cache the output of a given command, you can put a semicolon at the end of the line. IPython won't print the result, and it also won't store the result in the cache. One of the coolest features of IPython is magic functions. Magic functions are a bunch of helpers whose names start with one or two percent signs. Why the percent sign? Well, to distinguish them from standard Python functions, as they behave slightly differently. For example, they don't require parentheses when you're passing arguments. Just keep in mind that in Python, dunder methods, so methods whose names start and end with two underscores, are also called magic functions or magic methods. But those functions are something completely different from IPython magic functions. There are two types of magic functions: line magics and cell magics. Line magic functions are similar to shell commands; they don't require parentheses when you're passing arguments. And if a function starts with two percent signs, then it's a cell magic. Cell magics can accept multiple lines of input. You pass arguments right after the function name, then you press Enter and type the input code that the magic function will run on. To let the cell magic function know that you are done typing the input and it should run now, you press Enter twice. As of version 7.4 of IPython, there were 124 magic functions. Now I'm going to discuss all of them one by one. No, I'm just kidding. This is not a lecture at the university.
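The difference between the two kinds of magics can be sketched like this; the timing numbers are placeholders:

```text
In [1]: %time sum(range(10**6))     # line magic: arguments on the same line,
CPU times: ...                      # no parentheses needed
Wall time: ...
Out[1]: 499999500000

In [2]: %%timeit                    # cell magic: arguments first, then the
   ...: total = 0                   # body on the following lines; finish by
   ...: for i in range(1000):       # pressing Enter twice
   ...:     total += i
   ...:
...
```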
So I have some magic functions that I'm using quite often, but it's still too many to discuss, especially since the documentation of those functions is pretty good. So if there are some functions here that you don't recognize, I suggest you take a look; maybe you will find them useful. I will just quickly show you a few interesting ones. As I said before, IPython keeps track of the commands that you run, and the history function can be used to print those commands back. It can be run with no parameters, in which case it will print the whole history of the current session, or with a number specifying which line of the history you want to print. I'm actually showing you history because it's one of a few functions in IPython that can accept a range of lines as a parameter. And the range parameter is quite interesting, so let's take a closer look at how it works. There are a few ways you can specify a range of lines in IPython. The simplest one is to use a dash between two numbers. You can also mix ranges and single lines; so in the first example, I'm selecting lines two, three, five, seven, eight, and nine. It's also fine if the ranges are overlapping or duplicated. And if you want to reference lines from previous sessions, you can specify the session number, then a slash, and then the line number or a range. That's great, but usually you don't remember how many IPython sessions you had before. Like here, I had 457 sessions when I was preparing this slide. So IPython accepts a different notation: you can use a tilde prefix to say "I want to print history from that many sessions before the current one". So in the third example, I'm printing line number seven from two sessions ago. Also, you can provide just the session number and skip the range parameter; that way IPython will print the whole session. And finally, you can provide a range across multiple sessions.
So in the last example, I'm printing the history from the first line eight sessions ago until the fifth line six sessions ago. Even though writing multiple lines of code in IPython is easier than in the default Python REPL, because you have smart indentation, you can make it even easier with the edit magic command. It will open a temporary file in your favorite editor where you can type the code, and after you save and close that file, IPython will execute it. And by "favorite editor" I mean the one defined in the EDITOR or VISUAL environment variables. So if you don't set it up, you will probably end up with the greatest text editor of all time, or none of them. Each time you run the edit command, IPython opens a new file, so if you want to go back and edit the same file as last time, you have to pass the -p parameter. And to save yourself typing "edit" every time, you can just press F2; it's a shortcut. What's really cool about the edit command is that it can accept an argument, and depending on what this argument is, edit will behave differently. If it's a filename, IPython will open that file. If it's a range of the input history, IPython will open a new file and copy those lines from the history into it. If it's a variable, IPython will open a new file and copy the content of that variable into it. If it's an object, but not a variable, for example a function name, IPython will try to figure out in which file you defined this function and open that file exactly at the line where the function definition starts, which is super cool. You can use it, for example, to monkey-patch functions. And finally, if you recorded a macro, you can pass the name of the macro to the edit command to edit it. Next, the run magic. It will run a Python script and load all its data into the current namespace.
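The range notations just described look like this in practice; the session number 457 matches the one from the slides:

```text
In [10]: %history 2-3 5 7-9    # lines 2, 3, 5, 7, 8, 9 of this session
In [11]: %history 457/7        # line 7 of session number 457
In [12]: %history ~2/7         # line 7, two sessions ago
In [13]: %history ~8/1-~6/5    # line 1 eight sessions ago up to
                               # line 5 six sessions ago
In [14]: %edit 2-3             # the same ranges work with %edit
```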
It seems pretty straightforward, but I find it very useful when I'm writing a module, or just a bunch of functions in a file, and I want to test them. If there is a bug in my module, I can't just re-import it; doing "from my_module import my_function" again won't pick up the change. I would have to import the reload function from the importlib module and use that to reload my module, which is a bit of typing, it's not 100% reliable, and to be honest, I usually forget the name of the importlib library. So instead of importing my modules, I usually rerun them. I can run a module as many times as I want, and each time I do, IPython updates the current namespace with the latest code from my module. Also, as a bonus, there is an IPython extension called autoreload. If you enable it, IPython will always reload a module before running a function from that module. There are many other magic functions that you can use: to rerun some commands from the past, or edit them and then rerun them; to save commands as a macro, save them to a file, or to a pastebin so you can share them with someone; to save macros, variables, and aliases, because once you close an IPython session they are gone, so you can store them in a database and retrieve them in another session; or to just print a list of the variables or functions that you have created, nicely formatted. So far, all the magic functions that I mentioned were line magics. As for cell magics, there is a whole collection of functions you can use to run a piece of code written in a different programming language. One of the most interesting cases, at least until the end of this year, is when you want to quickly test a piece of Python 2 code. You can type %%python2, then write the code, press Enter twice, and IPython will execute it with no problem. It works with other languages like Bash or Ruby or JavaScript out of the box.
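For comparison, here is the manual importlib.reload dance that %run (or the autoreload extension) saves you from; the module name "mymodule" and its contents are made up for this demo:

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # keep the demo deterministic (no stale .pyc)

# Create a throwaway module on disk (the name "mymodule" is invented here)
tmpdir = tempfile.mkdtemp()
sys.path.insert(0, tmpdir)
module_file = pathlib.Path(tmpdir) / "mymodule.py"
module_file.write_text("def my_function():\n    return 'old'\n")

import mymodule
print(mymodule.my_function())  # prints: old

# Simulate editing the file, then reload instead of re-importing
module_file.write_text("def my_function():\n    return 'new version'\n")
importlib.reload(mymodule)
print(mymodule.my_function())  # prints: new version
```

With %run you skip all of this: IPython re-executes the file and refreshes the namespace every time.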
And also notice how in the last example, I don't know if you can see it, but IPython is actually correctly highlighting the Ruby syntax. So what if those 124 magic functions are not enough? Well, you can very easily create your own magic function. All you have to do is write a function and decorate it with either the register_line_magic or register_cell_magic decorator. Let's see an example. Here I'm creating a magic function that will reverse any string that I pass. First, we write a function that takes an argument and returns the reversed version. Each line magic function should accept at least one parameter: the string that will be passed to it when we call it. Next, we import the register_line_magic function and use it to decorate the function we just created. I'm passing a parameter to the decorator that will be used as the name of the magic function. If I don't do this, my new magic function will get the same name as the function that I'm decorating, so in this case it would be lmagic, and I want to change the name to something more descriptive. Finally, after I run this code in IPython, my new magic function is ready to use. It will reverse anything I pass. And since all arguments to magic functions are passed as strings, I don't really have to worry about checking the types to see if I can reverse it or not. Creating cell magic functions is pretty similar, and you can even create a function that will work both as a cell and a line magic. If you want to learn more, the IPython documentation has some pretty simple examples, and I also wrote a very short step-by-step guide on how to create a cell magic that will run the mypy type checker on a block of code. So creating magic functions is easy, but to be able to run our magic function, we had to copy and paste our code into IPython.
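A minimal sketch of the steps above; this only works when pasted into a running IPython session, and the name "reverse" is a choice, not a requirement:

```python
from IPython.core.magic import register_line_magic

@register_line_magic("reverse")
def lmagic(line):
    """Reverse whatever string is passed to the magic."""
    return line[::-1]

# Then, in the same IPython session:
# In [2]: %reverse hello world
# Out[2]: 'dlrow olleh'
```

Without the "reverse" argument to the decorator, the magic would be called %lmagic, after the decorated function.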
If you want to run our magic function often, then each time you start a new session, you'll have to paste this code into IPython, which sounds terribly inconvenient. So we might want to turn our magic function into an extension. Extensions in IPython are an easy way to make your magic functions reusable and to share them with the world. And they are not limited to magic functions. You can, for example, write some code that modifies any part of IPython, from custom key bindings to custom colors to modifications of the configuration, and very easily turn that into an extension. To create an extension, you need to create a file that contains a load_ipython_extension function. This is the function that will be executed when you load the extension. You can optionally add unload_ipython_extension if you want your extension to be unloadable. And then you need to save this file in the .ipython/extensions directory. Okay, that's a pretty vague explanation. Let's see an example. Let's say we want to turn our magic function into an extension. That was the code of our magic function. All we have to do is take this code and put it inside load_ipython_extension. Keep in mind that this function should always accept one parameter: the ipython object. So even though we are not using it in our example, we have to accept this parameter; otherwise IPython will complain. And you have to save the function in a file called reverser.py inside the extensions directory of IPython. Now, if we start IPython and load our extension, the reverse magic function will be available in our session. All the %load_ext magic does is find a file with a matching name and call the load_ipython_extension function from that file. So you probably noticed this deprecation warning and might be thinking: why am I showing you something that is deprecated? Well, it's not really deprecated.
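Put together, a reconstruction of that reverser.py extension file could look roughly like this:

```python
# Saved as ~/.ipython/extensions/reverser.py
from IPython.core.magic import register_line_magic

def load_ipython_extension(ipython):
    # IPython passes in the running shell; we must accept the
    # parameter even though this simple extension doesn't use it.
    @register_line_magic("reverse")
    def reverse(line):
        return line[::-1]

# Inside IPython:
#   In [1]: %load_ext reverser
#   In [2]: %reverse hello
#   Out[2]: 'olleh'
```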
It's just a subtle way of IPython telling you: hey, I see you have created an extension, how about you share it with others and publish it on PyPI? Now, I don't think there is any point in publishing such a silly extension on PyPI. But then again, I don't think there is a need for a package that left-pads text, yet some languages think differently. So of course we're gonna publish it. You can find the package here. You can install it with pip, and voila, now you can reverse strings with a magic function in IPython. This package contains just the absolute minimum of code that you need to publish an IPython extension on PyPI. So if you want to publish your own, you can go check it out. So that was how we can write and publish our extensions. If, on the other hand, you want to see the extensions that other people have created, there are two places. The first and biggest place is the extensions index. It's a wiki page in the IPython GitHub repository that contains a huge list of extensions. Just keep in mind that some extensions there can be quite old and you might have problems installing them. But if you see an extension that you really like and you cannot install it, just copy the code and paste it into IPython, and that should work. The second place to find extensions is PyPI. The IPython developers actually recommend putting your extensions there and tagging them with the IPython tag. But not everyone tags their extensions properly, so simply searching for "ipython" or "ipython magic" on PyPI can return some more results. So what kind of extensions are people creating? Well, for example, there is ipython-sql, which lets you interact with SQL databases from IPython. There is ipython-cypher, which lets you interact with Neo4j, or django-orm-magic, which lets you define Django models on the fly. The popularity of those extensions is not very high. Many of them are below version 1.0 or were abandoned a long time ago, but sometimes you can actually find something useful.
So what else can you do with IPython? You can, for example, run shell commands. Any command starting with an exclamation mark is treated as a shell command, and some of the most common ones, like cd or ls, work even without the exclamation mark. You can create aliases. Aliases in IPython are basically the same thing as aliases in Linux: they let you call a system command under a different name, and in IPython they can also accept positional parameters. Speaking of aliases, there is actually a cool and probably not very well-known magic function called %rehashx. It loads all the executables from your PATH into the IPython session, which basically means that you can now call any shell command right from IPython, which makes for a pretty cool little curiosity. Like here, I'm starting a Node REPL inside the IPython REPL. I wanted to go deeper down and start more REPLs, but I failed. IPython has four different settings for how verbose the exceptions should be, and you can switch between them with the %xmode magic function. You can select the lowest amount of information, a bit more verbose, Context, which is the default one, and the most verbose, which will also show you the local and global variables for each frame in your stack trace. And starting from version 7 of IPython, you can execute asynchronous code by using await wherever you want. If you try to put await in the top-level scope of the standard Python REPL, you will get a syntax error; however, IPython has implemented some hacks to make it work. So if you're playing with some asynchronous code and you want to quickly await an asynchronous function, this is a great way to do it. Just keep in mind that this is not actually valid Python code, so don't do this in production. And there is a demo mode in IPython. To use it, you have to create a Python file with some simple markup in the comments, and then you need to load that file into a Demo object. This is how it works in practice.
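A few of these in one session sketch; the alias name and directory are arbitrary examples:

```text
In [1]: !echo hello              # any shell command with "!"
hello
In [2]: %alias show_files ls -la %s
In [3]: show_files /tmp          # %s is filled with the positional argument
...
In [4]: %rehashx                 # every executable on $PATH becomes callable
In [5]: %xmode Verbose           # most verbose traceback mode
Exception reporting mode: Verbose
In [6]: import asyncio
In [7]: await asyncio.sleep(1)   # top-level await: IPython-only, not valid
                                 # in a plain Python module
```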
Each time you call the mydemo object, IPython will execute the next block of code from the demo in the current namespace. So you will have access to all the variables and functions that were created in that block of code, and you can play with them before executing the next block. Demo mode is pretty similar to what you can do with Jupyter notebooks, and to be honest, for a presentation I would stick to Jupyter notebooks so people can actually see what code you are executing. But if you live in a terminal and you want to impress your colleagues with a pretty cool coding demo at your next presentation, this is a great tool. So, IPython comes with a lot of good defaults. In fact, I never actually felt the need to modify the configuration file. But if you want to change something, it's very easy to do. The default configuration lives in the ipython_config.py file, and this is where it's located for the current user. Well, actually, when you first install IPython, the file is not there. You first have to run the `ipython profile create` command, which will generate the file with default values. And if you look inside that file, you will see a huge number of options that you can change. For example, you can execute some lines of code at startup, execute some files, load extensions, change the color scheme, change the exception mode, select a different editor to use with the edit command, stuff like that. If you look at what else is inside the profile_default folder of IPython, you will see a bunch of directories. Most of them are internal to IPython, so there is nothing interesting for us. But there is one that is particularly interesting. It's called startup, and it lets you start a startup... no, it contains a README file that explains the purpose of this directory. Basically, any file with a .py or .ipy extension that you put in that directory will be executed when IPython starts. So we can use this folder to define some helper methods or maybe magic functions.
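For instance, a hypothetical startup file might look like this; the filename and the jload helper are my inventions, not something IPython ships with:

```python
# Saved as ~/.ipython/profile_default/startup/00-helpers.py
# Everything in this file runs automatically when IPython starts,
# so jload() is available in every session without importing anything.
import json
from pathlib import Path

def jload(path):
    """Tiny helper: read and parse a JSON file."""
    return json.loads(Path(path).read_text())
```

The leading "00-" is just a convention: startup files are executed in lexical order.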
Remember when we wrote our magic function and we had to create an extension to be able to use it between sessions? Well, an easier solution would be to just create a file in the startup directory and put the code of our magic function there. Just keep in mind that whatever you put in that folder gets executed every time IPython starts. So if you put a bunch of slow functions there, it's going to make your IPython startup very slow. In that case, it's better to create a separate profile for those slow functions. Profiles are like accounts on your computer. Each profile is a separate directory in the IPython folder, so each has its own configuration and startup files. You can create a new profile by running the `ipython profile create foo` command. Then you can start IPython with that profile by running `ipython --profile=foo`. And if you don't specify which profile you want to use, IPython will use the default one. So for example, I once had a profile just for debugging and profiling my code. Exceptions were set to be as verbose as possible, and I was loading a few extensions for profiling. But I was not debugging or profiling my code all the time, so instead of putting all those things into the default configuration, I had a separate profile for that. So we talked about magic functions and extensions before, and I told you that a lot of extensions define magic functions that you can use, but that's not the only thing you can do with extensions. Another thing you can do is register callbacks for IPython events. IPython defines a set of events, like "before running the code", "after running the code", or "after IPython starts", and you can very easily plug in a custom function that will be executed on those events. In general, to be able to add a callback to an event, you need three things. First, you need to create a callback function. Check the documentation to see what parameters each callback will get.
Then you need to define the load_ipython_extension function and register the callback inside it, pretty similar to what we did with the magic functions. And finally, as with all extensions, you need to load it to make it work. So let's see how it works in practice. Let's say we want to make a function that will print the variables after the execution of each cell. This is all the code that we need for it. First, we create a class that will store our callback function. I'm using a class to store the reference to the ipython object that I will use inside my callback function. Then I'm defining the callback function. The result parameter will be passed in from the event, so even though I'm not actually using it in my function, I still have to put it in the function signature. Inside my callback function, I'm calling the magic function %who to print the variables. Since it has to be valid Python code, I can't just write %who, as that would give me a syntax error; the run_line_magic function is the way to call IPython magic functions from valid Python code. And finally, I'm registering the callback inside the load_ipython_extension function. Now I'm saving the file in my extensions directory as var_printer.py. If I load it in an IPython session, it will automatically start working and printing the variables after each cell. Speaking of events, there is also something quite similar called hooks. IPython has a set of default hooks that are executed in certain situations, for example when you're opening the editor with the edit magic command, when you're shutting down IPython, or when you're copying something from the clipboard. The main difference between events and hooks is how they are intended to work. You can have a bunch of callback functions that are independent from each other, and all of them will be called when an event happens. Hooks, on the other hand, will call only one function.
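A reconstruction of that var_printer.py extension might look like this; the class name is my choice, and post_run_cell is the event IPython fires after each cell:

```python
class VarPrinter:
    """Holds a reference to the shell so the callback can run magics."""

    def __init__(self, ipython):
        self.shell = ipython

    def post_run_cell(self, result):
        # "result" is supplied by the event; we don't use it here,
        # but it must be part of the signature.
        # Calling %who from valid Python code:
        self.shell.run_line_magic("who", "")

def load_ipython_extension(ipython):
    printer = VarPrinter(ipython)
    ipython.events.register("post_run_cell", printer.post_run_cell)
```

After `%load_ext var_printer`, every executed cell is followed by a %who listing of the current variables.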
So if you have multiple functions attached to the same hook, IPython will call the first one, and if it's successful, it will stop. But if the function throws an exception, IPython will try to call the next function, and the next, and the next, until it finds one that's actually successful. So let's see an example of a hook. Here we are registering our own function that will be executed when the editor is opened. This function will try to use the jed editor instead of the default one. An interesting piece of code here is the TryNext exception. It's used to indicate that this hook failed and IPython should try the next function. So if for some reason there is a problem with the jed editor, IPython will try to open another editor instead of failing. Moving on to the next feature: debugging. IPython is my default debugging tool. It all started because I was using Sublime Text for a very, very long time, and I only recently switched to Visual Studio Code, which has a pretty good debugger, but using the one from IPython still works for me in most cases. So how can I use IPython as my debugger? Well, the first thing you can do is embed IPython anywhere in your code. To do that, you need to import the embed function from IPython and then just call it. I like to put those two statements on one line so I can remove them with just one keystroke, and also so all the code linters complain about it and I don't forget to remove it when I'm done. Now, I can run my script, and when the interpreter gets to that line in the code, it will open the IPython shell. I will have access to all the variables set at that point, so I can poke around and see what's going on with my code. When I'm done, I just exit IPython and the code execution continues. Also, if I change some variables from IPython, those changes will persist after I close the embedded session. So embedding is nice, but it's not really debugging.
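The embed trick, as the one-liner described above; the surrounding script and function are made up for illustration:

```python
# somewhere_in_my_script.py (hypothetical)
def process(data):
    total = sum(data)
    from IPython import embed; embed()  # execution pauses here and drops
                                        # into an IPython shell with access
                                        # to data, total, and everything else
    return total
```

One line means one keystroke to delete it, and linters will flag the import-not-at-top, so you won't forget to remove it.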
To actually run the debugger, you can run the magic function run with the -d parameter and then specify the file name. IPython will then run the file through the ipdb debugger and put a breakpoint on the first line. The ipdb debugger is just a wrapper around the standard pdb debugger that adds some features from IPython, like syntax highlighting, tab completion, and other small improvements. And now my favorite part of IPython: the post-mortem debugger. So imagine you're running a long-running Python script, it's almost there, and suddenly it crashes, because that's what programs do. And you're probably sitting there thinking: man, I wish I had run this script with the debugger enabled. Now I have to enable the debugger, run this slow function again, and wait to see what the problem is, right? Well, no, you don't. At least not when you're using IPython. You can run the debug magic command after the exception happened, and it will start the debugger for the last exception. You can inspect variables and move up and down the stack trace, the same stuff you can do with the standard debugger. Finally, if you want to automatically start the debugger when an exception happens, there is a magic function called pdb that you can use to enable this behavior. So that was debugging. Another interesting set of functions is related to profiling your code. If you're curious how slow your code is, or, more importantly, where the bottleneck is, IPython has a few magic tricks up its sleeve. The first magic function is called time. It's the simplest way to measure the execution time of a piece of code. It will just run your code once and print how long it took according to the CPU clock and the wall clock. Kind of boring. So there is a much more interesting function called timeit. By default, it will automatically determine how many times your code should run to give you reliable results.
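The debugging workflow above, sketched as a session; my_script.py is a hypothetical script:

```text
In [1]: %run -d my_script.py    # run under ipdb, break on the first line
ipdb> continue

In [2]: %run my_script.py
ZeroDivisionError: division by zero

In [3]: %debug                  # post-mortem: debug the LAST exception,
ipdb> up                        # no rerun needed; walk the stack,
ipdb> print(some_variable)      # inspect variables at the crash site
ipdb> quit

In [4]: %pdb                    # from now on, drop into the debugger
Automatic pdb calling has been turned ON
```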
For a very fast function, it might run a few thousand times, and for a slow one, it might run just a few times. There is also a cell magic version of the timeit function. It's more convenient if you want to profile code that spans multiple lines. One nice thing about the cell magic version is that after the arguments, you can pass some setup code that will be executed but won't be part of the measurement. Once we know that our code is slow, we probably want to see why exactly it's slow. What's taking so much time? So we can run the prun magic function, and it will show us a nice overview of how many times a given function was called, what was the total time spent calling those functions, where a given function is located, et cetera, et cetera. So here we can see that our slow function runs for 12 seconds and performs 50 million function calls, and most of the time is spent in a function called check_factor in a file called myfile.py. So now we can go there and check what's wrong with this function and whether we can make it better. Another interesting type of profiler is the line profiler. prun will report how much time each function took, but the line profiler, %lprun, will give you even more detailed information and show you a line-by-line report of how your code was executed. Since this profiler is not included with IPython by default, you have to install it from pip and then load it as an extension. Once you do this, you can use the %lprun magic. Now, to run this profiler, you need two things. You need a statement, so a function or a piece of code that will be executed, and then you need to specify which functions you want to profile. Let's see an example. So here I'm running a function called long_running_script, and I want my profiler to check two functions: long_running_script itself and the one called important_function.
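IPython's %prun is built on the standard library's cProfile, so here is a rough stand-alone equivalent; check_factor and slow_function are made-up stand-ins for the functions on the slides:

```python
import cProfile

def check_factor(n, f):
    # made-up stand-in for the hot function from the talk
    return n % f == 0

def slow_function():
    # deliberately quadratic so the profiler has something to show
    return sum(1 for n in range(2, 500)
                 for f in range(2, n)
                 if check_factor(n, f))

# In IPython you would just type:  %prun slow_function()
# The plain-Python equivalent of what that does:
profiler = cProfile.Profile()
profiler.enable()
slow_function()
profiler.disable()
profiler.print_stats(sort="tottime")  # check_factor dominates the listing
```

The report's columns are the ones described above: ncalls, tottime, the per-call times, and the file and line of each function.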
So the line profiler generates this nice report for each function I specified, where I can see how many times each line was run, how much time Python spent on that line and what percentage of the total running time was spent on that particular line. And the last profiler I want to mention is called memory profiler. As the name suggests, it can be used to profile the memory usage of your programs. Again, to be able to use it, we have to install it from pip first and then load the extension. You run it basically the same way as the line profiler: you specify which function you want to profile and then a statement that needs to be run. And you get output that is, again, similar to the one from the line profiler: you see how the memory usage changed between each line of your code. So in IPython, the evaluation part of the REPL happens in a separate process. It means that the process evaluating your code, called the kernel, can be decoupled from the rest of IPython. This has one great advantage: IPython is not limited to just the Python programming language. You can easily swap kernels and use a completely different language. The interface won't change, but a different interpreter will be running your code. So if you want to quickly run some Ruby or JavaScript code, that's one way to do it. So how can we change the kernel? Well, first, we have to find the kernel we want to use on the list published in the Jupyter GitHub repo. It will contain a link to the documentation explaining how to install the kernel. Since each kernel has different dependencies, there is no single standard way to install kernels. So let's try to install the IJulia package. And once we do this, we can start IPython with the Julia kernel. As you can see, the REPL still looks the same, but now you can use Julia syntax. And if we try to write Python, we're going to get a syntax error. The new kernel will work with both the IPython REPL and Jupyter notebooks.
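One note on the memory profiler mentioned above: it's a third-party package, so it may not be installed everywhere. If you just want a quick look at memory usage without installing anything, the standard library's tracemalloc module gives a rough, coarser-grained equivalent:

```python
import tracemalloc

# tracemalloc tracks Python-level allocations: start tracing,
# allocate something noticeable, then read the counters.
tracemalloc.start()

data = [bytes(1_000) for _ in range(10_000)]  # roughly 10 MB of payload

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
```

Unlike memory_profiler's line-by-line report, this only gives totals, but it's often enough to confirm where memory is going.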
And while installing a custom kernel to use with notebooks is a pretty good idea, installing a custom kernel just to use with IPython might be a bit of an overkill. I mean, nowadays programming languages have very solid REPLs of their own, so it's probably easier to use those instead. Unless you really, really want to use IPython all the time. And if you really, really love IPython, there is still a bunch of crazy stuff that you can do, but I don't have time to discuss it all, so I'm just gonna quickly show some of it. So you can enable autocall, so you don't have to use brackets when calling functions. Or you can start a line with a comma, so you don't even have to put quotes around the parameters when calling a function. You can enable the autoreload extension that I mentioned before, so you can change the imported modules on the fly and then you don't have to re-import them each time. And if you're writing doctests, you can turn on the doctest mode to make copying code from IPython easier. And you can use IPython as your shell, which would require, for example, changing the prompt to show the current directory, enabling autocall and running %rehashx to create aliases for all the executables on your PATH. Or you can add custom keyboard shortcuts or input transformations, or, if you're brave enough, AST transformations. And since this is already a talk about Python REPL replacements, it wouldn't be fair not to at least mention the alternatives. There are three main ones: bpython, ptpython, and xonsh. The first alternative is bpython. In the quest of making a replacement for the default Python REPL, bpython took a more lightweight approach. It has a lot fewer features than IPython, but it has the essential ones, like syntax highlighting, smart indentation, auto completion, and suggestions as you type. And it has a very interesting feature called rewind that basically lets you remove the last command from the history like it never happened. You can see it in a moment.
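For reference, some of those toggles can be made permanent in an IPython profile's configuration file. This is a config fragment, not a standalone script (the `c = get_config()` object is injected by IPython when it loads the file); the option names are the ones IPython documents, and the values are illustrative:

```python
# ~/.ipython/profile_default/ipython_config.py (fragment)
c = get_config()  # provided by IPython at startup

c.InteractiveShell.autocall = 1                       # "len x" -> "len(x)"
c.InteractiveShellApp.extensions = ["autoreload"]     # load the extension
c.InteractiveShellApp.exec_lines = ["%autoreload 2"]  # re-import changed modules on the fly
```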
Here it is. Next, there is ptpython, a Python REPL built on top of prompt_toolkit. It's slightly more advanced than bpython, as it contains a few more features. The obvious ones are syntax highlighting, multi-line editing with smart indentation, auto completion, or shell commands. But there are some more innovative ones, like syntax validation that checks whether your code is correct before actually executing it, Vim or Emacs key bindings, or those nice menus for configuration and history. And finally, there is xonsh. Well, xonsh is quite different from IPython, bpython, or ptpython, because it's not really a Python REPL, it's a shell. It's a shell that adds Python on top of Bash, so you can actually use both. And it has a massive amount of features, so if you're interested and want to learn more, there are two good talks about xonsh. The first one is from Anthony, who's the creator of xonsh, and the second one is from Matthias, who's a core developer of IPython and a user of and contributor to xonsh, so you can go check them out. And that's all. So thank you for coming, thank you for listening, and I would also like to say thank you to the creators of IPython for making such an awesome tool. If you could give them a big hand, that would be appreciated. And we have time for questions. Okay, do we have a question? And again, we have time to thank you for your great talk and for providing a link to the slides so that we can rewind them at a slower pace. Thank you very much and give another round of applause for Sebastian.