Are we ready? All right. Well, let's get started. Good morning. Everybody recovered from the Rackspace party last night, I hope. My name is Chuck Thier. I'm a software developer for Rackspace. I'm one of the original developers of Cloud Files, which ended up becoming OpenStack Swift, and I've been actively involved in the community since then. I've been a Python developer for almost 10 years now, been involved in many different projects, working on many different things, using Eventlet for quite a while, as my screensaver turns on right then. And I've used a lot of other frameworks, too, building big systems based on Twisted and other projects like that. So I have a lot of experience working in these realms. And the main purpose of this talk, and hopefully maybe some discussion later on if we have some time, is: A, we're a very quickly growing community. People coming into this project are new to Python, and if you're new to Python, you're certainly gonna be new to Eventlet. For people that have used Eventlet for a while, hopefully we'll all learn some new things here today. But also, there's always this underlying thread that if we have some problem, it's like, oh, it's just Eventlet's fault, and we'll just forget it and go on. And so I wanna kinda address some of those threads a little bit and try to re-encourage you guys to dig in and say, let's look and figure out what really is causing these problems and how we can fix them. Eventlet as an async library is very capable and, if used correctly, can work very well for us. So hopefully the purpose of this is to inform everyone here today. And if you learn a little bit, that's great. If not, then please come tell me and teach me how to do this stuff so we can all improve and be better. So to get started, let's talk about the Zen of green. So the evilness of Eventlet that everybody talks about is always the, oh, the monkey patching. It's gonna be so horrible.
So there's lots of different ways you can interact with Eventlet. Now, the very top example I have up there, the bare eventlet.monkey_patch() with no arguments: we don't do that in any of our source code right now, and unless you're just experimenting, please don't. That basically monkey patches the world. It's a bad idea because you may end up with unwanted patching going on, and just weird stuff happens, right? One of the big mottos, which is actually part of the Zen of Python, is explicit is better than implicit. So if at all possible, the best way to use Eventlet is to import what they call green modules. Under eventlet.green there are versions of the standard Python library modules that have been pre-patched and set up so they work cooperatively. So if you're using urllib, httplib, or threading and you wanna patch threads to use Eventlet instead, all these different libraries have already been set up, and that's the best place to start. And this is actually one of the great benefits Eventlet gives us: it lets us reuse the current standard library. With other frameworks like Twisted, you pretty much have to use a different implementation of every single library you want to use. You want to do HTTP communication, you want to use some other resource, you have to use something that's very specific to that framework. So that's a very nice advantage of Eventlet. If you're using a library that's not in the standard library and doesn't have a green implementation, Eventlet provides an import_patched function call. What that does is import some module, inspect it, and for any modules it knows how to patch, it will patch those modules.
So if it's using socket, if it's using httplib, if it's using urllib, that's a very useful thing when you're importing something that doesn't already have Eventlet support. And if all else fails, sometimes it's not too bad of an idea to do a targeted monkey patch. You can call eventlet.monkey_patch(all=False, socket=True), which means definitely don't patch anything else, but let's make sure socket is always patched. We do that in Swift, just to be sure, because we're interacting with other libraries and things like that, and it can be useful sometimes. So: don't monkey patch the world. Let's be explicit instead of implicit and do good things. Next, the Zen of cooperation. For those that are new to Eventlet, Eventlet allows you to write async code in a blocking style. What I mean by that is you write code like you would normally write code that's not threaded. What happens is the current code path keeps running until control is either explicitly yielded, like when you call eventlet.sleep, or you use a library or method that makes network calls, like reading or writing on a socket, or using the green httplib to send out HTTP calls. At that point, Eventlet uses a technique called trampolining, which is basically the mechanism by which the scheduler within Eventlet cooperatively multitasks across all your different green threads. The way Eventlet works is it builds off of greenlet, which is basically a port of the stack slicing techniques from, gosh, why did the name just evade me? From Stackless, thank you very much. I don't want to get into too much technical detail, but basically it's just an easy way to give us these really lightweight coroutine-like threads. They're not real threads, so they're cooperatively multitasked.
And Eventlet provides this kind of runtime on top of that, the scheduler and so on. So as soon as you either sleep or network activity happens, control is passed off to the other green threads to handle their work in the meantime. The really nice thing about this is your code path is deterministic. If you're using a regular threading model, since it's preemptive, your task could switch at any point in time. At any point while your function is running, the runtime could say, task switch over to this thread now. So for critical sections you have to do a lot of locking to make sure threads aren't going to stomp on each other. One of the nice things about Eventlet is you're very much in control of when this switching happens. That also means locking usually isn't needed when you're sharing data structures, which is really, really nice and attractive. Now, there is one side point to this, and it's very important, and it's also where people get tripped up pretty often with Eventlet, especially in a project with a ton of dependencies. If you're not aware of the libraries you're using, there's the opportunity to write some functionality using another library, and that library makes some sort of networking call that allows the task to switch when you don't realize it. So the one downside to this method is that you do have to have a pretty good understanding of the libraries you're using; you can't just go blindly use them. Next, the Zen of failing fast.
So this is one of the concepts that I hope is new to most of you, even those who have been using Eventlet for a while. It's something we use in Swift quite a bit, and it's one of those little things inside Eventlet that's a bit unique to it, super handy, and it really helps out a lot. When you're writing distributed applications, one of the tenets, at least that we've taken in Swift, and that's usually taken in other large systems people are building, is you wanna be able to fail fast. You wanna be able to recover from those failures. That's kind of hard to do in Python. Who knows what the default socket timeout is in Python, and has messed with trying to set it appropriately? It's a pain, and there are cases where it's very useful to have different sets of timeouts. So for example, this is something that I just pulled out of, I think, the proxy server, so it might be a little bit confusing, but what Eventlet gives you is a nice little context manager: with Timeout, some amount of time. So up here, ConnectionTimeout in Swift is actually just a subclass of Timeout, so that we can differentiate between connection timeouts and regular timeouts. What's really cool is we can do a ConnectionTimeout with a very short amount of time; I think the default is three seconds. Internally within Swift we usually have a very well-controlled network, and we know that if it can't make a connection within three seconds, something's probably wrong. So it'll throw an exception, you catch the Timeout, handle it, move on, do something else, and recover gracefully. Then once we've got the connection, we go down to with Timeout(self.app.node_timeout). That's just the config value we have for the basic operations that are going on; I think that's 10 seconds.
So basically what would happen here is some chunks would be read in, and if one of those chunks being read doesn't happen within that time, say you have a machine that's behaving badly and not accepting data very well, it times out and throws the exception. We can catch that Timeout and recover very quickly from those failure scenarios. You can use this in a lot of different ways; we use it all over in Swift, and it's been one of the things that's really helped us handle these failures. There is one sticking point that's very important and will get you if you're not paying attention. The Timeout exception, if I remember this correctly, derives from BaseException rather than Exception. So if you just do a bare except Exception here, it's not gonna catch the Timeout; you have to explicitly catch it. In the Eventlet community it kind of went back and forth on whether it should be that way or not, and this is how it ended up, right? So this is one of the things you wanna be careful of. Don't just do except Exception, blah, blah, blah, and hope it catches it. You have to specifically catch the Timeout exception. Yeah, go ahead. Yeah, anytime you guys have questions, feel free to ask. So in this case, what we're doing, and this is a lot like the way Swift handles incoming data, is we'll write that data out in chunks. What's actually happening here is a very tight loop of write a chunk, write a chunk, write a chunk, and in between those chunks we actually do an eventlet.sleep to make sure it's cooperating, making sure all those chunks can be written across the disk and through all the Linux buffers and all that stuff, right?
And so what happens is, if writing one of those chunks is taking too long, because either the disk is too slow or there's a network issue or something like that, it'll kick out and raise an exception there. Sure, if there's nothing to bring you out into Eventlet's trampoline, basically, the cooperative multitasking, then yes. And that's why we have the sleeps between the chunk writes, because those writes are blocking, right? It's kind of like a CPU-intensive task. And we'll get to the blocking stuff in a minute; there is a way you can handle some of that, and I think that's either my next slide or the slide after. But anyway, this is really, really useful, and if you don't learn anything else today, hopefully this is new to most of you, because when I look through the code, there's very little of this used outside of Swift in the rest of OpenStack. So yes, my next slide: the Zen of blocking calls. Say you have something that blocks and isn't able to cooperatively multitask. That can be CPU-intensive tasks like what you were talking about, it can be disk IO, it can be database related. Eventlet has this idea of a tpool, and the tpool is basically a thread pool that it has running internally. It's very similar to the deferToThread idea from the Twisted world. So in this example, Swift has to fsync quite a bit. One of the things you don't realize when you're just devving on your own machine is that fsync usually seems pretty fast, right? It turns out that fsync under a lot of contention and disk IO can be very slow. We didn't have this at first because in all our tests it was like, oh, this is fine, it's not taking all that much time.
But then what happens is you run into IO contention and all of a sudden that fsync blocks. When you run into blocking code, whether it's IO contention or CPU or things like that, other operations in that whole process that are trying to be processed in other green threads don't happen at all. So it's very important. Something that should probably be discussed, and we were talking about this a little earlier: inside of Nova, one of the biggest points of contention, I think, is always the database calls, right? There is a basic implementation of using a tpool connection for the database calls, but it's not enabled by default, and if I recall correctly there were some weird issues around it. But what that also means is, for every database call that happens in Nova, if it's not running in a thread pool, nothing else is happening in the rest of that process. All those other green threads are gonna be sitting there blocked, waiting until that database call completes. Yeah. Sure. Great question. So the question was, why don't we use one of the great event-oriented database libraries? There's a couple of answers to that. I'm much more of a Postgres fan than a MySQL fan, which doesn't get much love in OpenStack, but Postgres does have a great async Python library, and it works really well with Eventlet; I wish more people would take a look at that. Unfortunately, there isn't a really great pure Python implementation that works well with MySQL, and there's not an async interface for MySQL that works very well, so it becomes a little more problematic, right? I think a couple of people have done experiments with the pure Python MySQL libraries, and the reason the default library doesn't work very well is that it's a pure C module, so there's no way for Eventlet to monkey patch the socket calls that are happening inside it. Does it work? Oh, well, then...
Oh, really? Well, it's been a while since I've looked at the MySQL stuff, but I know a lot of people have looked at it, and there have been other implementations that have tried to make it work before, and it's been very problematic. So if it's now a solved problem, that would be great to look at. But I just really encourage you guys, because in the more Nova-oriented architecture of OpenStack, this is one of the glaring pieces, I think, where we're really not using Eventlet very well. If we could just spend a little bit of time to figure it out, whether it's fixing whatever issue there currently is with the thread pool in Nova for the database stuff, or figuring out a better driver library for MySQL, I think it would make a huge impact on the overall performance of Nova API calls and everything going along that way. So, the Zen of concurrency. Another nice little utility, and this one actually is used quite a bit in most of OpenStack, is the idea of a GreenPool. What the GreenPool gives you is a great way of controlling concurrency. You can basically specify a pool of however many green threads, whether it's 100 or 1,000, that are available to be used. This is just some code I pulled out of swift-bench, which basically sets up a GreenPool for however much concurrency you want the benchmarking tool to run at. And then what it does is spawn a lot of green threads. The spawn and spawn_n calls are basically just: spin up a lightweight green thread and run this function; here it's running the self.run function. The total is also how many total requests I wanna make, so it loops for as long as it needs to keep making requests and tells the pool to spawn this function.
And so if the pool gets full, it blocks until another space opens up, then it'll keep running and keep running. There's a couple of different ways you can interact with the pool here. This one is just waiting for all of those to finish; you can also make calls to say, hey, are they all done yet? You can do a lot of different stuff with it. There's also a slight variation on this called a GreenPile. I didn't have a chance to put it up there, but one of the nice things about the GreenPile is it creates a pool and allows you to iterate over the results of the things you ran in the pool. You look like you were gonna raise a hand for a question. Oh, okay. So this is a great way to control concurrency, whether you're writing benchmarking tools or you're trying to do a lot of concurrent operations but don't want to do them all at the same time because you might overwhelm a system. If you have an API call coming in that has to send out a ton of API calls to other systems, you might want to do that in a pool to limit how many connections are going out. The Zen of sharing. So this is one of the things that really confused me for quite a while when I first used Eventlet a long time ago: it has the idea of a GreenPool and also the idea of a Pool, from eventlet.pools. So, to help with the understanding of it: a Pool is basically a very simple helper that allows you to share resources between green threads. A really good example of this, also from swift-bench, and I was gonna put that example up here but the code was too much to show: swift-bench keeps a set of persistent connections open to the proxy servers it's sending requests to. But you need some way to control all of these green threads having access to those connections, and Pool gives you a really nice way, so you get this.
So you create a Pool. One way of doing this is to define a class that subclasses it; there's also a helper that lets you do this by passing a function. But you have a create method that tells it how to create the pooled item. This is just some example code up here pulled from the docs, but in swift-bench, create would make the connection to the proxy server. And then there's a context manager, so with an item from the pool as whatever you're using, in the swift-bench case a connection, do stuff with it. The context manager then handles putting it back in the pool of usable items for later. This can be kinda handy when you have to share connections or other kinds of things that you want to synchronize between different green threads. So, the Zen of debugging. Debugging Eventlet can be a bit difficult, as with all async frameworks, but there's a couple of tools that help us with that. There's eventlet.debug.hub_blocking_detection. If you turn that on within your server, it sets up a special little timer around all the stuff that's going on, and if it detects that the code is spending too much time, and I think the amount of time is configurable, sitting there blocked inside a piece of functionality, it uses an alarm signal, SIGALRM, to give you a nice little message: here's where it's blocked, here's the traceback of where my code was getting blocked. So it can be really useful in debugging. The other piece that's really useful is the eventlet.backdoor. They kinda got this idea, I think, from Twisted, which has a very similar thing. But what that allows you to do, for testing situations or when you're trying to debug a certain server, is this.
You can spin up a green thread running this backdoor server, which just opens up a TCP port; you open it up on localhost, telnet into it from the local machine, and you get a nice little Python prompt. From that prompt, you can actually introspect and inspect the objects inside that process that's running right now. Nova actually has support for that; I think Matt Dietz added it. If you set the backdoor_port configuration parameter on your Nova service, and I can't remember if it was Nova API or one of the other services, it allows you to telnet into it, look at how many green threads there are, look at the objects within the system. It's a nice little thing that can be very useful if you're having a really tough time figuring out what in the world's going on. There are also quite a few other helpers within eventlet.debug, various switches you can flip to enable throwing extra exceptions when certain cases happen, like exceptions in thread pools and things like that. Which reminds me of something I meant to talk about during the thread pool section: whatever unit of work you're doing, make it as small a function as possible within the thread pool. Don't write a whole bunch of code and just throw all of it in a thread pool, because there are some problematic parts, like catching exceptions correctly inside a thread pool. So you really want to limit the amount of stuff you throw in there. Also, the thread pool is limited by default to just 20 threads.
So if you overfill your thread pool... I guess I missed a part of my slide, but the question is, why don't we just thread pool all the things, right? There are some limitations. It can be a little bit slow at times, and it can also overrun the pool: if you run out of threads in the thread pool, it'll actually sit there and block, waiting for a thread in the pool to become available. Okay, so this one I'm calling the Zen of Caring. This is a fun little project one of my co-workers and friends is working on. Eventlet is an open source project very much like we are, right? And we're as guilty as everybody else of immediately saying, well, that's a great idea, patches welcome, right? And so we're very often the first to criticize: oh, why does Eventlet do this? Why can't we do that? But there is still a community around it, and they are more than happy to accept patches and help, or test cases where things are failing, so they can fix issues. Anyway, as we've been using it in Swift, there are a couple of areas we've been wanting to help make better. So Michael Barton has been working on what he calls Swift Accelerator, basically. Some things he's been working on: a libevent-based eventlet hub, which can give you a little more performance for a lot of concurrency on larger systems. He's also working on a faster WSGI server with sendfile support. A lot of this is written in Cython, which basically allows you to write Python with certain parts of it written in C, for the parts that can really use some speed. What happens is you take that Cython file, it generates C code off of that, and then that gets compiled as a Python module that you can use. Yes? Sure.
No, so the question was, does this bypass the global interpreter lock, or does that affect it? So the global interpreter lock only affects threading within Python itself, so it's a bit orthogonal to Eventlet in general, because Eventlet is cooperative multitasking, unless you're doing things in threads, then that's gonna apply, right? Right, so the main thing about the thread pool is that while they are true OS threads, because of the global interpreter lock no two threads are ever gonna be running Python code on more than one core at the same time. So that doesn't give you the capability, just like threading in Python by itself right now doesn't, to scale to multiple CPUs. There are already really good patterns, we do this in Swift and I know we do this in Nova as well, of forking processes, for example for your API servers and things like that. So you maximize what you can do on one core using cooperative threading with Eventlet, and then you fork to parallelize that across cores, right? That turns out to be very effective and has worked very well for us in Swift. But the purpose of this work is to try to speed up some of the things that happen in the WSGI server that are currently just pure Python. All of this, by the way, is still very experimental, very much work in progress, but it's an example of some things we can look at and play around with. If you guys are interested in poking at this type of stuff, it might be interesting to look at, but also, these are ways we can contribute back to Eventlet. He's also working on a faster, a little more flexible tpool implementation.
There was a discussion earlier in the summit about Swift in general, about how we could better interact with disk drives in Swift, having a custom thread pool for each disk so that we can have better disk IO within Swift itself. Sam Merritt was doing some interesting stuff there, so hopefully those guys might be able to get together and work on that a little bit. There's also some initial work on doing async file IO, which is kind of difficult to do well in Linux, so I don't know if that will actually work, or get to the point of working all the way, but there's some interesting work getting started there. The main reason for putting this up here, though, is this whole idea that we can make it better, right? We're all part of the same community, so let's find some ways we can make this better and work better for us. And then my final slide, the Zen of the green grass, because the grass is always greener on the other side, right? And so the questions always pop up: well, what about gevent? What about Twisted and all these different things? The reality is, you're gonna have similar problems in all the async frameworks. How you deal with blocking stuff is a very similar problem in Twisted and gevent as it is in Eventlet, and you're gonna have to solve it there too. But there's interesting work going on. PEP 3156, and if you're not familiar with PEPs, if you're new to the Python world, a PEP is a way of introducing new functionality and new ways of doing things in Python that get vetted by the community and then implemented. So this is a definition of a way that Python 3 could possibly standardize on an async framework in general. And so rather than worrying about, oh, should we be using gevent, should we be using Twisted, we should really be evaluating this and getting involved with it and seeing what they're doing.
Cause I think this is really gonna set the groundwork for the future of async in Python. Also, with all the talk of trying to port all of our tools and services to Python 3, I think this relates well with going that route too. So I would really like to see more of us be more interactive in this process of figuring out what async is gonna be like in Python down the road. So at this point, I wanted to finish a little bit early so we could have time for questions. Because we have a very broad audience here today and a lot of new people, I didn't want to go technically too deep. Hopefully at least everybody walks away with something new they can use with Eventlet in their projects, but I also wanted to open up for any questions about specific parts of it, or specific ways we're using it currently in OpenStack. Yeah, mm-hmm, right. And well, you know, that's why we have code reviews, right? And testing and things like that. Yeah, yeah, I know, it's not always obvious, and like I was talking about earlier, that is one of the downsides: you do have to be aware of the effects of whatever libraries you're using, the calls you're making and things like that. Now, on logging specifically, you bring up an interesting point. I saw several references, and I'm not real familiar on the Nova side with what the specific issues are with the logging over there, but we did see logging issues in Swift, and if you look in swift.common.utils, we do some stuff to patch logging much better than Eventlet currently does. So if that's causing you guys issues, I would highly recommend taking a look at that, because it definitely helped with a lot of the issues we had with logging. Sure, there are no stupid questions. I see what you're saying.
So the question is, other types of coding environments and programming languages in general have this way of annotating, this is a non-blocking section of code, right? Is there anything similar we could do in Eventlet? Outside of general testing and some of the debugging stuff I was showing earlier that could help with that, there's nothing in the current library proper that you would want to run all the time in production that I can think of. That said, it doesn't prevent us from possibly creating something like that. You could create a context manager or something that utilizes that blocking-detection code and says, this should be non-blocking, it's like a critical section or something. But that might be going a little bit too far; I don't know, it'd be interesting to try out and experiment with, right? Any other questions? Actually, why don't we line up at the microphone? That'll be easier for everybody to hear and for me to see, because these lights are very blinding. Have you guys experimented with PyPy and Eventlet? Not yet. So PyPy, if you're not familiar with it, is a re-implementation of Python within Python itself. How they did that is they created a subset of Python called RPython, which stands for restricted Python, and then they implemented the Python language in this RPython. Doing that allows them to do some really interesting things, like create this JIT. It's only been recently, like in the past month or two, that they've enabled the JIT with the stackless-like functionality that's within PyPy, so it's just now getting to the point where I think it might be possible to start playing with it, and we just haven't had time yet.
And if you haven't messed with PyPy yet, it's still a bit unwieldy; you have to compile it yourself, which is a day-long process, but it's definitely something to watch, something that's getting there, and I think it has interesting implications for the Python community in general. From a scaling standpoint, how does it compare to the other libraries like asyncore, Twisted, and gevent? I've played around a little bit with these libraries, and one of the use cases I was trying out was concurrent downloads at a large scale, and I was curious on your thoughts on how Eventlet would scale compared to the others. So, depending on which benchmark you look at, and a lot of people have played around with and benchmarked them, they're all gonna be pretty close in reality, right? And all these benchmarks are very trivial benchmarks, so they're not really good test cases of what these libraries do. The important thing, I think, is not so much, can I get 1,000 requests per second versus 1,100 requests per second on this one node, right? Because in general, in distributed systems, especially in Swift's case, we're scaling horizontally. We have so many disks as it is that we have plenty of servers to serve requests, so as long as we can scale reasonably horizontally and that's not holding us back, then we're doing good, right? It might be a little different on the Nova side because you have a certain number of API servers, but I don't think there's anything that limits us. For example, a non-trivial data point for you, and it's been a while since I've done this benchmark, but just our object server alone, which handles writing things to disk and things like that. 
When I was doing some benchmarking not too long ago, where it's doing the forking to take advantage of the extra cores in the server, and I think it was either a 4 or 8 core server, I can't remember, it was doing almost 9,000 requests a second to that one storage node, and that's a non-trivial workload too: that's writing files to disk, sending out updates, doing all this different stuff. So that's a non-trivial amount of performance; that's one data point for you. And I would expect that whether you're using gevent or Twisted or things like that, you would probably get very similar performance, right? Our biggest problems aren't so much the library that we're using, but more where we're blocking and where we're misusing things. So for us, our problems are network and disk I/O. For Nova, your biggest problems are gonna be database access regardless of whatever framework you're working with, because that's what's probably eating up most of your time, and a lot of the computational types of things, where you guys are getting much fancier in how you determine where instances get placed and things like that. So that's really where you're gonna run into these performance issues, and I don't think the framework plays much into that side of things. Any other questions? Oh, yes. Sorry, why don't you walk up to the mic if you have questions? It's hard for me to see with all the lights, and it'll be easier for everybody to hear. I got a question. Do you have any tools that will trace for performance reasons, like profiling, that work with green threads, that you know of? So, off the top of my head, I can't remember what it's called, but there are a couple of tools that people have built. You can use some of the basic profiling stuff, though it's a little weird. 
There are some gotchas with it, but you can use some of the general profiling stuff within Python. Someone did create this really cool visualization tool that introspects what's going on and gives you a graph of when green threads are getting created and removed and what they're doing, things like that. I just can't remember what it is right off the top of my head. I never got a chance to use it, but it looked really cool, and I've heard from other people who have used it that it's pretty effective. I actually just personally use strace whenever I run into issues to see what's going on. From a performance perspective, if weird stuff's going on, I can usually see that something's blocking. A funny side story: the default Python HTTP library doesn't buffer the header reads. I was trying to figure out why my benchmark was not performing as fast as it could, and when I ran strace, I saw all these one-character reads going through the system. I was like, what in the world's going on? When I traced it down, it turns out they fixed it in Python 3 and somewhat in 2.7, but that's why we have the buffered HTTP stuff in Swift. When it's reading in those headers over HTTP, it was doing it one character at a time, which was kind of silly. So there are definitely tools out there; it's not great, unfortunately, but that's an area that could definitely be improved on. Oh yeah, that's fine, sure. So the question was, can Eventlet monkey patch the blocking calls in MySQL? That goes back to one of the things I was talking about earlier. One of the difficulties with the default MySQL library for Python is that it's written in C and exposed as a Python module, and all the socket calls that are happening are happening in the C code. There's no way for Eventlet to inject anything into that C code, and that's why those MySQL calls block. 
Now, there are pure Python MySQL libraries that speak the wire protocol directly, but I've been told their performance isn't as great as the normal C library's, so people have kind of shied away from those. But if it's pure Python all the way down, then yes, Eventlet can monkey patch all the MySQL calls that are going on. Did that answer your question? Okay, cool. Any other questions? Yep, sure. So the question was, what's the future of Eventlet, and how does that relate to Tulip? If you're not aware, Tulip is a reference implementation of PEP 3156. It's kind of playing around with this idea, trying to figure out how a default implementation of this might look. There's nothing that prevents an implementation of Eventlet on top of PEP 3156. PEP 3156 provides all the base framework that's required to build async frameworks, and Tulip provides a kind of reference implementation on top of that. So Twisted will certainly have an implementation based off of PEP 3156, I'm sure. It's quite possible that either Eventlet or, what I would imagine, some combination will emerge: gevent was basically a fork of Eventlet that changed some of the functionality a bit, and I imagine something between Eventlet and gevent will create something new that runs on top of PEP 3156. This is just my guess, looking into the future, which nobody can really do; it's more that something like that will happen. Somebody will write some functionality on top of this PEP that implements coroutine-based non-blocking I/O with greenlets and things like that. Did that answer adequately? Cool. Thank you guys very much for your time. I'll be down here if you have any other questions you wanna talk about, or if you see me around, feel free to ask me. Thank you for being here.