Hello, everyone. Welcome. Saúl is giving the next talk, about libuv. Please welcome him. Thank you.

Thanks. So I'm Saúl, or saghul everywhere else on the internet. I'm one of the libuv core contributors. Can I get a quick show of hands, who knows about libuv? All right. So you may learn something, hopefully. For those who don't: it's a cross-platform asynchronous I/O library, which does a little bit more. So we do networking stuff, but also other cross-platform stuff that we need. It's relatively small: about 30,000 lines of code without tests, which is not a lot for a C library that tries to do anything and everything. We have an extensive test suite, so I didn't count it for this, but we try to test everything. And we have a vast CI infrastructure, kind of donated by the Node.js project, which makes it robust. But of course I would say that, right? It's designed for C programs, which means the joy of callback hell in JavaScript, I guess, and it's used by many, many projects these days. It started off as a way to bring Windows support to Node.js, in a good way. So Windows is a first-class citizen here, in case you want to support Unix and Windows with a consistent API. And we have a wiki link there with all the projects that use it. If you want to use it from programming language X, it's very possible that somebody wrote bindings for it, because there are many bindings out there. I personally wrote the ones for Python, and that's how I got started in all this.

So what is it we do in libuv? It's an event loop, a single-threaded event loop, and everything that surrounds this slide is actually tied into this event loop. So we can do timers; signal handling, so there are no problems with where the signals get dispatched; child process management; TTYs; TCP; UDP; named pipes; file system operations.
This is, for example, a thing that you don't typically find in a networking library, but if you want to do a cross-platform application, you will need to access your file system, and Windows is a mess and nobody knows how it works. So we take care of that for you, and you just have to use it. We have some threading utilities, and the coolest logo in the open source community.

As for how to approach libuv, I think the best way is from the outside in. In a nutshell, we have three constructs. The biggest is the loop: pretty much everything runs in the context of a loop, all operations are related to a loop, and it's where all the magic happens, if you will. And then we have handles and requests. A handle represents a resource, something that is there to do some job, let's say a TCP connection. So that's our representation of something: a handle. And a request represents an operation that has a start and an end. For example, writing on a TCP connection is a request; we use a request for it, and this way we can know when the operation ends and whether it finished successfully or not. So we have this differentiation, and requests always operate on some handle. And then we also have a vast array of other utility functions, for example a way to get high-resolution time, which you can use for benchmarks and the like.

Let's look a little bit at a block diagram here. We do a bunch of stuff, as I said: network I/O related things, file system I/O related stuff, and then other OS-independent stuff, because in some cases we have implemented it in such a way that it's not necessarily tied to the operating system, and then we can reuse it. For example, when it comes to the networking I/O part, we have TCP, pipes and TTYs, which we abstract as streams. So they have a certain API, and they behave like streams.
So they get read callbacks, you can write to them, send file descriptors over them, and so on. UDP and poll handles also deal with network I/O or sockets, but they are not streams. And they are all backed by this internal layer, which is what abstracts us from I/O polling on the different operating systems on the Unix side. On Windows we don't have this, because Windows works differently. But on Unix we have this layer, every different Unix system sits on top of it, and we can implement them easily. For file I/O and related utilities, we have file system requests and work requests, which allow us to take a piece of work, spawn it to a thread, do the work there, and then come back. And we have name resolution functionality as well: getaddrinfo blocks, but we run it on a thread and give the result back to you.

Just a quick word on threads. We use threads just for file system I/O, not for network I/O. The reason why is very nicely summarized in a blog post by the BitTorrent guys: there is no way to do asynchronous file I/O cross-platform in a reliable way. Our default thread pool size is four, and let me say it one more time: we don't use it for network I/O. The internet is wrong oftentimes, and I've seen many diagrams of people trying to explain what libuv is, incorrect diagrams with queues and thread pools and I don't know what. We don't do that. We only use threads for file operations. So it's single-threaded; there is a thread pool, but that's for file operations, and we get the results in the loop thread anyway. So to the eyes of the user, there is no thread pool.

We have other stuff that you can use as well. We have timers, some other types of handles that operate at different points of the event loop execution, and then signals and processes, which are operating system dependent, for instance.

So how does our event loop run? Well, we start by asking: do we have to do anything?
Because if we don't, then we're done. Then we run the due timers, timers that are due right now because we scheduled them in 20 milliseconds and they hit, so it's time to run their callbacks. Some pending callbacks we also run at the top; for example, for callbacks that happened as a result of a write operation, we report the result there. Then we run other types of handles, the loop watchers, things that run right before polling and right after polling. When we poll for I/O, all the read and write operations run. And at the end, we run close callbacks. So when you close a handle, when you want to dispose of it and not use it anymore, this operation is asynchronous: you call uv_close, and then when the callback hits, you can free the memory. This is because we sometimes need to do some work in the background.

Now, I mentioned that libuv came about because Node wanted to use it. So let's have a quick look at how it's used within Node. The Node event loop, in a simplified way, basically runs timers. Now, the thing is, Node.js, for performance reasons, doesn't use one libuv timer per Node timer. They coalesce them: one libuv timer potentially backs multiple Node timers if they are scheduled at the same millisecond, so they have different buckets there. Then we run some pending callbacks. The polling happens, so all the data-received callbacks fire, and connection callbacks as well. And then there are two weird things happening. The first one is setImmediate. setImmediate runs on a check handle, which is after polling for I/O. So it's called setImmediate, but it doesn't run immediately. Yeah. And then there is process.nextTick, which you probably know, which is supposed to run a function on the next tick. But what the tick is, nobody knows. And in a nutshell, it doesn't. Those callbacks actually run every single time we call into JavaScript from the C++ code.
There is a helper function called node::MakeCallback, and it drains a little bit of the process.nextTick callback queue. It's a little bit counter-intuitive, and you should never program or architect your application with any of these in mind. It should be transparent to you. Not like, "oh, I'm going to schedule these, and because the developer is going to do that...". No, don't do that. Because hopefully one day we will get this sorted out, and then your application will break. So not a good thing to do.

If we look at it from Node's perspective, we follow an onion architecture. You have your net socket wrapping a TCP wrap in C++, wrapping a libuv handle in C, wrapping a file descriptor on Unix or a handle on Windows. The idea being that you can happily pick any of these layers, and as long as it's above this bottom one, the abstraction level should be high enough that you don't have any problem there, you don't need to take into account that Solaris does I-don't-know-what and that macOS behaves some other way.

Of course, a good way to learn all this is to write your chat application. Why not? So I wrote one; it's in this repository. I wrote it to show different usage patterns in using libuv. It's a TCP server, it accepts multiple connections, and each user that joins the room gets a Pokémon name assigned. The idea is that you can see how the different moving parts work together, and different patterns for how to deal, for example, with memory allocations. Because we have little time, I'm not planning on showing it to you here. So the idea is you go and look at it yourself, and let us know if you run into any issues or whatever. Only yesterday I learned about two other applications using libuv while I was in a different dev room. So what do you know? If you're already using it, please do come talk to me and let me know.
And sometimes problems may happen in your event loop, so I want to give a shout-out to everyone in the core contributors team. It's seven of us at the moment; five of us are active, and we work on it. The release cadence is: when we feel like it's a good moment to do a release, and sometimes when Node.js asks us, hey, can you please do a release because we want these features in. It's basically you-want-something-done-driven development: if you want something, you do it, and otherwise, well, things stay as they are. But we're actively working on it, and hoping that maybe this year we can get a 2.0 release, cleaning up some cruft like Windows XP support, which was like 2,000 lines or something. That was very nice to delete. And if you want to reach out, our website is just a quick way to arrive at the others. Our API documentation is at docs.libuv.org. There's an IRC channel, #libuv, also a Google Group, and we are on Stack Overflow as well. And I believe I have time for maybe one or two questions, if there should be any.

I have a question comparing it with Boost.Asio, because it uses a similar model. What is the difference from the Boost library?

Right, well, so libuv is very small and very self-contained; it doesn't depend on anything. I haven't used Boost myself other than through other projects, so my perception of it is that it's a big library with different components. libuv, from the beginning, was designed to be a small thing that you could use in any project, and it would abstract you as much as it can, but it's not the kitchen-sink solution. We're actually thinking about creating a new project, called libuv-extras, where we're going to add some more stuff that doesn't belong in core but can be useful for some people. libuv is also written in C, so there's also that difference, but in a way they solve the same problem.
So you want to do some cross-platform networking operations and also file system utilities, and abstracting all this is a job as well. I would, of course, say libuv is easier to use, but that's the one I know. So that's pretty much the answer: they solve the same problem, but they are different. Anything else?

Would you recommend libuv for a very multi-threaded application?

Well, as I said, the event loop is single-threaded, but the loop is the context, so you can essentially run multiple threads with a loop on each of them, and as long as you don't do cross-calling, because our API is not thread-safe, that's fine. Many projects do this. For example, if you want a model similar to what NGINX does, multiple processes and then some event loops as well, you could run multiple event loops on multiple threads, and that's perfectly fine.

So I think that's all. Very quick last one; I don't know if the alarm clock wants to allow it. I don't want the clock to blow up on me. Let's try. Since we all like swag here, I'm going to drop some things on the table right outside, FYI, after I leave, in case you're interested.

Hello. Yes, I would like to ask about libuvxx, the C++ wrapper, whether it's official?

Oh, so no. Basically libuv is written in C, and that's the one and only official anything there is. However, of all the projects out there that wrap it or don't wrap it, some of them are close cousins, let's say. For example, I co-maintain the library and I also wrote the Python bindings, but those are not official, let's say. We don't bless any binding.

Here goes the alarm clock. Thanks a lot, Saúl, for the talk.