Thank you. At first glance this looks like a Star Wars talk. Let me put on my cap... now it's better. And I'd like to send a picture to my mom, she is in Brazil. Hello! Thank you. So, I know the background is orange, but the subject is green threads. Does anyone know what green threads are? Your hand? Okay. So: do or do not, there is no try.

This is our agenda. First we will understand what threads and processes are, then threads versus multiprocessing. After that: understanding green threads, applying green threads, talking about concurrency and parallelism, and why, when, and how to use all of that. This talk is about CPython, so the behavior in PyPy or Cython could be different.

Threads and processes. In Python, threads are real. What does that mean? It means the threads live at the kernel level: they are KLTs, kernel-level threads. Concretely, they are pthreads, using POSIX, and they are completely controlled by the operating system. That creates a few problems for us, such as context switches happening at the operating-system level, and the operating system choosing the priority of which thread runs first. Here is our KLT picture: this is user space, this is kernel space. This is a process; inside the process I can have threads, and both the process table and the thread table live at the kernel level. So context switching and thread priority are entirely controlled by the operating system. You can suggest that the operating system run a certain thread first, but that doesn't mean the operating system will behave the way you think or the way you want.

And Python has one specific thing that changes this behavior: the GIL. The GIL is not so bad. It works for us in single-threaded programs, and it's good for working with C libraries. Many people say "the GIL is bad", but that's not entirely true. It becomes a problem for multi-threaded programs.
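The claim that CPython threads are real kernel-level threads can be checked directly: since Python 3.8, `threading.get_native_id()` returns the id the kernel assigned to the thread. This is a minimal sketch of my own (not from the talk); the barrier only keeps both workers alive at the same moment, so the kernel cannot reuse an id:

```python
import threading

# Each Python thread maps to a real kernel thread (a pthread on POSIX),
# so the OS gives it its own native thread id (Python 3.8+).
native_ids = []
barrier = threading.Barrier(2)  # keeps both workers alive simultaneously

def worker():
    native_ids.append(threading.get_native_id())
    barrier.wait()  # rendezvous: both threads exist at this point

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(native_ids)  # two distinct kernel-level thread ids
```

Because the ids come from the kernel, not from the interpreter, this is the same number you would see for the thread in tools like `top` or `ps`.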
This is the behavior of Python threads. When you create a few threads, say I have three threads now, the first thread starts, but the other threads are stopped. Why? Because the GIL runs a release-and-acquire cycle. The GIL controls the threads, so at any given moment only one thread is running, never several. In Python 3 this changed: starting with Python 3.2, the new GIL was implemented. What does that mean? It means Python 3 behaves differently: at a fixed interval, normally around five milliseconds, the GIL starts a graceful handover, the release-and-acquire process. It helps with a few things, but it's not a full solution.

Threads and multiprocessing. As I said before, if you see a Yoda, there is live code. So, a Yoda! Great, let's see. I have a very common piece of code here: Fibonacci. Why Fibonacci? Because it has an Italian name, and we are in Italy. First I'll show the GIL at work. This is my code: it computes Fibonacci of 34, and I will run it twice. Why twice? Because I want to. First I'll run it on Python 2. Why Python 2? If you are doomed to work on legacy code, you should know what happens in Python 2 too. fib_01... and after a few seconds the code comes back: five seconds. Great. Now I'll execute the same Fibonacci, but using threads: the same two runs, in two different threads. It could be faster. It should be faster. But I don't know what happened... oh, so slow. It's not faster at all. Why? Because the release-and-acquire process creates a gap while it decides which thread runs now. The two threads never really run together, and that is a problem for us.
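The demo code isn't reproduced in the transcript, but the comparison can be sketched like this (the function and timing harness are my reconstruction; I use `fib(25)` instead of 34 just to keep the run short):

```python
import threading
import time

def fib(n):
    # deliberately naive, CPU-bound recursive Fibonacci
    return n if n < 2 else fib(n - 1) + fib(n - 2)

N = 25  # small enough for a quick run; the talk uses fib(34)

# run it twice, sequentially
start = time.perf_counter()
fib(N)
fib(N)
sequential = time.perf_counter() - start

# the same two calls, but in two threads
start = time.perf_counter()
threads = [threading.Thread(target=fib, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

# on CPython the threaded run is NOT faster: the GIL lets only one
# thread execute bytecode at a time, and the handover adds overhead
print(f"sequential: {sequential:.2f}s  threaded: {threaded:.2f}s")
```

For pure CPU-bound work like this, the threaded version takes about as long as the sequential one, and often a bit longer.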
But there is a solution of sorts: the same code using multiprocessing. python fib_03... and with multiprocessing it is faster. But what's the problem with scaling your application with processes? Memory consumption can get really big. Threads are not that heavy, although they are not free either; with multiprocessing the cost can be brutal. So let's come back.

Understanding green threads. A green thread is a ULT, a user-level thread. User-level threads are controlled by the runtime, or by your VM. The name "green threads" comes from Java. Why Java? The Java developers who worked on threading were in a team called the Green Team. Not a basketball team: the Green Team. They implemented threads at the runtime level, and that's where the name green threads comes from. A green thread is a lightweight thread. Same picture as before: here is user space, here is kernel space, here is the process with its threads. But now the threads are controlled in the runtime, so we control the switching, we control the priority, and the thread table lives in user space too. The usual model for green threads is that they all run inside just one real thread. They can start together and compete for the resource; many times a green thread waits, or finishes earlier than another. That is the behavior of green threads.

Applying green threads. Oh, I think we have live code. Here I have a little server that I wrote with Japronto. I like Japronto because "já pronto" means "already done" in Portuguese. Its API feels like Flask, it's built on a very fast HTTP parser, and it's very fast. So this is my service. I will consume this service, at first in synchronous mode: I make a request, parse the JSON, get the variables, show the variables, and I do this 10 times. First, let me run the server: python server.py.
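The synchronous client isn't shown in the transcript; as a stand-in, here is a sketch where the slow service is simulated with a plain `time.sleep` (function names and timings are mine, for illustration only):

```python
import time

def fetch(i):
    # stand-in for one request to the slow HTTP service from the demo;
    # in the real code this would be an HTTP GET plus JSON parsing
    time.sleep(0.1)  # the service sleeps on purpose, so every call is slow
    return {"worker": i}

start = time.perf_counter()
results = [fetch(i) for i in range(10)]  # ten requests, one after another
elapsed = time.perf_counter() - start

# the waits add up: roughly 10 x 0.1s
print(f"{len(results)} responses in {elapsed:.1f}s")
```

Each request only starts after the previous one finishes, so the total time is the sum of all the waits. That is exactly what the live demo shows next.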
And I start a few workers to get my information. Now, which one is it... 005. python 005.py. Okay, I'll consume it: one, two, three, four, five, six, seven, eight, nine, ten. Oh great, at least I can count. My service is too slow, because I put a time.sleep in it; it's a bad service on purpose. And this happens in real life: some APIs really are that slow. But now I can run the same application asynchronously. For this case I'll use gevent. Why? Because gevent is, in my opinion, the best option for Python 2 users, and you can use the same library in Python 3 too. In this case I get my information with the same fetch method, but now I create green threads with spawn, and after that, joinall. Now look how fast. They start to compete for the resource and spend about two seconds to get all the information; in another run maybe just one second. No, two seconds. Okay.

So, in Python 3, a good option is asyncio. "Oh, but asyncio isn't green threads, it's coroutines." Yes, but green threads, coroutines, lightweight threads: all these things are like green threads. They work like green threads, with a few differences. First I'll show the difference between asyncio and gevent, and I think it's the big one. Let's run this code; which example is it? 007. 007. It starts, it runs, and it was slower than gevent. Why? Because, by default, asyncio works to prevent race conditions, and that creates a cost: it spends more time than gevent. gevent doesn't try to prevent race conditions while getting the information. But I want to see asyncio fast again. How can I do that? You can use an executor to split your processing. Now I'm going to run the event loop, but executing inside an executor. That should be example 008. It was fast, like gevent, because now I dropped the race-condition prevention.
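The talk's gevent version uses spawn plus joinall; the asyncio counterpart of the same idea can be sketched with `asyncio.gather` (again the slow service is simulated with a sleep, and all names here are mine):

```python
import asyncio
import time

async def fetch(i):
    # stand-in for awaiting the slow HTTP service (e.g. via aiohttp);
    # asyncio.sleep yields to the event loop, like a slow network wait
    await asyncio.sleep(0.1)
    return {"worker": i}

async def main():
    # schedule all ten requests at once and wait for them together,
    # the asyncio counterpart of gevent's spawn + joinall
    return await asyncio.gather(*(fetch(i) for i in range(10)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start

# the waits overlap, so the total is ~0.1s instead of ~1s
print(f"{len(results)} responses in {elapsed:.1f}s")
```

The ten waits now overlap inside one real thread, which is why the total time collapses from the sum of the waits to roughly the longest single wait.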
So, if you want to assume the risk, you can use executors for these tasks. But I showed you the behavior in Python 2; what about Python 3? How does it work for CPU-bound code, for example? We can run our Fibonacci again: it spends five seconds, and that was the sequential code. Now with threads it spends basically the same time; in this case, thanks to the new implementation in Python 3.2, the GIL's release-and-acquire process is not so costly. And with processes, same as before: it is fast.

But okay, can I use asyncio for CPU-bound work? Normally we use asyncio for I/O-bound work, but with executors you can use the same API for CPU-bound work. "Show me that, I don't believe you." It is the same Fibonacci code, with 34, but now asynchronous. I have two coroutines, and I set a ProcessPoolExecutor. By default, asyncio uses a ThreadPoolExecutor, which is good for I/O; for CPU-bound work we use a ProcessPoolExecutor and set it in the event loop. Let me see... fast. So we can use asyncio for CPU-bound work too, not just I/O-bound. It's not usual and I don't recommend it, but if you want to play with it, it can be a good idea. Just to play. "I want to use this all the time." No, just to play, please.

And here I have a few more things I'd like to show. Here is a Flask app: a normal, plain Flask app. Okay, it's nice. Let me see what happens with my Flask code. I'll start it here... where is it... flask app. So Flask is running, and now I'll use a benchmarking tool to get a few metrics. Let me check the port is okay. Yeah. A benchmark with a concurrency of 62 connections and 2 threads, running for 10 seconds. Okay: close to 1,000 requests per second. But if I run it in sequence again, a few problems can happen. Let me see.
10 seconds is a lot of time. Now it's just 400 requests per second. Why? Because Flask doesn't finish one piece of work before taking the next: the first request is still being processed while the others keep trying to get my answer. Okay, thanks. And now I have another file here: I use gevent to patch the server. Where is gevent used in the wild? Gunicorn, for example: Gunicorn can use gevent, and then green threads do the work. This is my code using gevent with Flask: the same Flask app, the same Flask method, but now running on top of gevent. I'll stop the old server and start app_gevent. Let me see. It starts; I can't believe it started. I open it in the browser: "hello world". Now, the same test, 10 seconds. Okay: 2,500 requests per second. And if I run it again, it's the same. Why? Because my requests are caught by my green threads, which start working on them. I'm using one real thread, no extra workers or anything like that, because green threads are doing the work.

Okay, I'll continue. Concurrency and parallelism. Are concurrency and parallelism the same thing? "Yeah, exactly." No. To explain concurrency and parallelism, I'll use the attack on the Death Star, because I think it explains this really well. I have a few tasks: my X-wings. These X-wings start together and try to destroy the Death Star. But if you remember Episode IV, the Death Star has just one flight path for throwing the missile at the target. The tasks start together, but they compete for the same resource: that is concurrency. So Star Wars teaches concurrency; Star Wars is a pretty good movie. Now, if I want parallelism, I should have a number of resources proportional to the tasks. Now it's no longer a Death Star but a Death Constellation, because there are two Death Stars. That is parallelism. What is our conclusion? Multiple green threads could provide parallelism; they will always provide concurrency.
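The Death Star analogy maps neatly onto a semaphore, where the trench is the shared resource; this sketch (all names and timings mine, purely for illustration) shows the same tasks first competing for one slot, then having one slot each:

```python
import threading
import time

def attack_run(trench):
    # every X-wing has to enter the trench (the shared resource) to shoot
    with trench:
        time.sleep(0.1)

def squadron(n_xwings, n_trenches):
    # one trench slot: tasks can only compete (concurrency);
    # one slot per task: the waits can overlap (room for parallelism)
    trench = threading.Semaphore(n_trenches)
    threads = [threading.Thread(target=attack_run, args=(trench,))
               for _ in range(n_xwings)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

one_trench = squadron(4, 1)     # four tasks, one trench: runs serialize
four_trenches = squadron(4, 4)  # four tasks, four trenches: runs overlap
print(f"one trench: {one_trench:.2f}s  four trenches: {four_trenches:.2f}s")
```

With a single slot the four runs serialize to roughly four times the single-run cost; with four slots they overlap, which is the talk's point that parallelism depends on the resource, not just on how many tasks you launch.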
And parallelism doesn't depend only on our side, on the number of tasks, but on the resource being used. Many times you think it will be parallel, but it will only be concurrent, because the resource is not able to answer all of you at once.

Why, when, and how to use this? Why: to control the communication between tasks. Trying to do that with processes or threads can be harder than with green threads, because with green threads everything is compatible and divided inside the same event loop. It is also easier to control failures: gevent, for example, reports that there was an exception, which kind of exception, and its message, while the rest keeps processing; and it reduces the complexity of concurrent applications. When: to provide asynchronous solutions, and every time concurrency can be applied, especially for I/O-bound work. How: in Python 2, and in Python 3 as well, you can use gevent, eventlet, and greenlet, and I think these three are the best options for doing this in Python. In Python 3, I strongly recommend curio or asyncio. Why do I recommend asyncio more strongly than curio? Because you can change the asyncio engine, for example: we can change from the normal event loop to uvloop, and the change is pretty easy. Obviously this example is not the best one for seeing uvloop's performance, but uvloop is an asyncio engine that is faster than the standard one.

And now: may the green threads be with you. Vinicius Pacheco "Kenobi". That's me; I like Obi-Wan Kenobi. Questions? I'll take off my cap so I can hear. Any questions? No? Thank you.