OK, welcome to my talk, and sorry for the technical problems. My name is Bartosz Gołaszewski; you can call me Bart. This talk is about a little project I created and maintain called libgpiod.

I've been around for close to 15 years doing all kinds of stuff: kernel development, user space, bootloaders, RTOSes. I maintain the GPIO subsystem in the kernel. I created the subject of today's talk, the libgpiod project. I contribute significantly to Yocto and meta-openembedded, and I also recently picked up an interest in Zephyr, which has led to me becoming the maintainer of LVGL, the graphics stack, over there.

So, how many of you have ever used libgpiod? OK, for those who haven't, here's a little recap. Even though we, as the GPIO maintainers, try to push for mostly writing drivers that use GPIOs in kernel space, many users still want to control GPIOs from user space. Historically, this role was fulfilled by the sysfs GPIO class, which has existed for many years. But for many reasons that interface has many shortcomings, and this led us to creating the GPIO character device.

The character device was first released in Linux in 2016. Being a character device, it requires you to use low-level C code to interface with it through system calls: open, close, read, write. That has led many people to simply bounce off of the new interface. So we created libgpiod, which wraps that functionality in a convenient C API, and also provides high-level language bindings and a set of command-line tools that let you replace the shell scripts that would otherwise use sysfs with calls to command-line programs.

If you want to picture the whole stack, it looks like this. At the very bottom, in kernel space, you have the GPIO drivers that talk directly to the hardware. On top of that sits the GPIO abstraction layer, gpiolib, which provides interfaces for GPIO drivers to expose their resources and for other drivers to use them. On top of that is the layer that exposes the character device to user space. The libgpiod project is then the first layer in user space: it's comprised of the core C library, high-level language bindings built on top of it, and the GPIO command-line tools. Users can build their programs on the library, the bindings, or the tools.

When I started writing this project, it was part professional assignment, part pet project. I was caught by surprise when, suddenly, after I released it, distros packaged it, people started using it, and I began receiving a lot of mail with issues and problems being raised. That is how I learned that API design is actually pretty hard, and that it's easy to commit all sorts of errors. The problem is that a library's interface is usually carved in stone: unless you do the next major release, you're stuck with your mistakes.

The lessons learned during development were, first, that you should always plan to throw the first version away anyway; this is why so many projects have multiple major releases. Second, that you may not know what you don't know. In my case, I wasn't aware that many people who work with hardware are not experts in low-level Linux interfaces.
They simply won't understand certain concepts, and I should have made it easier for them to get into the project than I initially did. Another lesson is knowing your programming languages. When I started working on this, I was pretty confident in C, but my Python was more that of a C developer writing Python, and my C++ was outdated.

These lessons are relevant both to kernel interfaces and to user space projects, because we already had to redesign the GPIO interface in the kernel, the character device, which I'm going to talk about in a second. But first and foremost they matter for user space.

So, before I get to libgpiod v2 itself: the character device v2. The thing about the kernel interface is that we released it in 2016 and were pretty happy with ourselves. But then users started coming up with ideas for extensions. They said: OK, we now have reliable event reporting to user space, but we cannot configure the debounce period, so what good is it? These requests were valid, but it turned out that the way we had designed the API, kind of by omission, was not ready to be extended. We didn't leave any padding in the structures, and we committed some other mistakes that made it difficult or impossible to extend the character device.

So last year we released the new kernel interface, the GPIO character device v2, which comes with a lot of new features. You can now configure the debounce period for interrupt reporting. You can read the sequence number of events in case, for some reason, events get reported in the wrong order, which sometimes happens. You can configure the internal pull-up and pull-down resistors in GPIO controllers. You can use different clocks for timestamping. You can also be informed by the kernel about changes in the status of lines: for chips we have events when they appear or disappear, but for individual lines there was no mechanism to learn about changes in state, like being requested, released, or reconfigured. And probably the most important new feature is complex configuration for requests, which I'll talk about in a minute when discussing the library. This time we also paid attention to making the kernel API extensible, so that if any valid request for a new feature comes up, we should be able to fulfill it.

Now, a little statement. I really tried my best to release the library before this presentation, but we don't want to rush it; we want to do everything, or as many things as possible, right this time. So libgpiod v2 is not out yet. The C and C++ APIs are complete and not likely to change much. The Python part is still being reviewed, but it's also pretty much complete and should get merged soon. We will still do some reworks to the GPIO tools; I'll talk about that in a minute, but there have been some requests and complaints about certain things, so the tools are going to be reworked too. I really hope the library can be released before ELCE 2022 in Dublin, and that I'll be able to present it there.
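To give a taste of how those new kernel features surface in practice, here is a rough sketch against the v2 Python bindings. This is my illustration only: the API was still pre-release at the time of the talk, so field names may differ slightly, and the chip path and offset are arbitrary.

```python
from datetime import timedelta

import gpiod
from gpiod.line import Bias, Clock, Edge

# Sketch only: names follow the v2 bindings as they are shaping up and
# may still change before the release.
settings = gpiod.LineSettings(
    edge_detection=Edge.BOTH,                    # report rising and falling edges
    bias=Bias.PULL_UP,                           # new in cdev v2: internal bias control
    debounce_period=timedelta(milliseconds=10),  # new in cdev v2: debouncing
    event_clock=Clock.REALTIME,                  # new in cdev v2: timestamp clock choice
)

# Request line 5 of an arbitrary chip with the settings above.
request = gpiod.request_lines(
    "/dev/gpiochip0", consumer="cdev-v2-demo", config={5: settings}
)
```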
So, libgpiod v2. My goals for this major release were to make the usage more intuitive, because certain design decisions in v1 made some things complicated, convoluted, and not very intuitive, and required reading the docs and studying examples to understand. So the first goal was to make usage as intuitive as possible, to make it harder to commit programming errors, and, for the high-level bindings, Python and C++, to let you perform whatever you need to do in the fewest lines of code possible.

The main mistake in v1 was that we tried to really replicate the data model that the kernel uses for GPIOs in user space. This time, we simply tried to make it work well without necessarily following the kernel model.

In the kernel it looks something like this: you have providers and consumers. The provider is usually a GPIO controller; it exposes a set of lines, and the consumers request those lines for exclusive usage. Lines are represented in the kernel as struct gpio_desc, one descriptor per line, and you can also package them into special containers called, I think, struct gpio_descs or something like that.

In v1 I tried to follow that model. By the way, I'm not going to bore you with code snippets on slides, because nobody reads them anyway; I'll present diagrams of the data model instead, and you can look up the code examples in the repository. So in v1, when you open a character device, you get a chip structure. The chip structure lets you query the chip for various information, and also get a line, or multiple lines in a special container we refer to as a line bulk. This looks like what we do in the kernel, except that at this point, when you call get_line, the object you get does not yet represent a line requested for exclusive usage. That is a bit confusing.

To avoid that, and also the duplication of get_line and get_line_bulk, we approach it differently in v2. You still open the chip and get a chip handle, except that now you call either get_info or get_line_info, and you get objects that represent either the chip information at a given moment or the line information for the line at a given offset. These structures are immutable snapshots of information at a given moment.

This matters because in v1, when you have a line and query its properties, whether it's used, what its direction is, what its other settings are, those settings can change in the kernel, and we have no way of informing the user about the changes. You can re-read the line and see if anything changed, but there is no dynamic way of being notified. So the user may think a line has certain properties when it no longer has them in the kernel. In v2, it's clear that the info objects we return are snapshots taken at the moment of the call, and we can now also be informed about any changes, which I'll get to in a second.
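As a minimal sketch of that flow with the v2 Python bindings (pre-release at the time of the talk, so details may shift; chip path and offset are arbitrary):

```python
import gpiod

# Both calls below return immutable snapshots of the state at the moment
# of the call, not live objects.
with gpiod.Chip("/dev/gpiochip0") as chip:
    chip_info = chip.get_info()
    print(chip_info.name, chip_info.label, chip_info.num_lines)

    line_info = chip.get_line_info(4)  # snapshot for the line at offset 4
    print(line_info.name, line_info.used, line_info.direction)
```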
The next thing is requesting lines for exclusive usage. In v1, you take the line or line-bulk objects returned by the previous calls and request them. This leads to certain confusion: you can, for instance, take several lines, request only some of them, package them together into a single container, and then try to perform various operations on those lines, reading values, setting them, waiting for events, except that none of it will work for the non-requested lines. This is confusing, requires additional error checking in the library, and has lots of corner cases.

So what are we doing in v2? You still have the chip object you got from the previous call, and you call request-lines on it. This returns a new object called a line request, and this line request is the single entry point for all line operations. You use the request to read lines, set them, wait for events, reconfigure them, everything. What's also interesting is that the objects are now disconnected; they are not interdependent. So you can request your lines, then close the chip, forget about it, free its resources, and keep using the request for just the lines you need.

Now, the configuration. When requesting lines, you need to configure your request: set all the properties of the lines, direction, default output values, whether you want to watch interrupts or not. In v1 we have a single request-config structure which, apart from not being opaque, which makes it hard to extend, is used both for requesting the lines the first time and for reconfiguring them later. That means it contains information that is not needed, or cannot even be used, when reconfiguring the lines.

In v2 we have split the configuration into two parts. The request config contains information such as which offsets to request, the name of the consumer, and the size of the kernel buffer for the interrupt queue. The line config contains the exact properties of the requested lines: output values, direction, bias, drive settings, and whatnot. The request config is used together with the line config when requesting the lines; when reconfiguring the same request, you only use the line config.

Next, reading events. In v1, again, you see this pattern of duplicated interfaces for single lines and for sets of lines, which not only duplicates code but also makes it more difficult to check for all the corner cases. You would take your line object, use the interfaces for polling it, and then read events either from a single line or from a bulk; and if several lines were requested at once, they would actually share a file descriptor, which is another confusing thing.

In v2, the request is, again, your point of entry. There is a helper structure, which I'll describe in a second, called the edge event buffer. You poll the request for events, you read them from the request into the buffer, and then you get individual events out of the buffer. We do it this way to avoid unnecessary memory allocations, which I'll also describe in a second.
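A sketch of that flow with the v2 Python bindings (pre-release, so details are illustrative; the path, offsets, and buffer size are arbitrary): the request-level options and the per-line configuration are passed separately, and the returned request object is the single entry point, independent of the chip.

```python
import gpiod
from gpiod.line import Direction, Edge

with gpiod.Chip("/dev/gpiochip0") as chip:
    request = chip.request_lines(
        consumer="demo",       # request config: consumer name
        event_buffer_size=64,  # request config: kernel event queue size
        config={               # line config: per-line properties
            (2, 3): gpiod.LineSettings(
                direction=Direction.INPUT, edge_detection=Edge.BOTH
            ),
        },
    )

# The chip is closed here, but the request remains usable on its own.
print(request.get_values([2, 3]))
for event in request.read_edge_events():  # blocks until events arrive
    print(event.line_offset, event.event_type)
```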
Now, about watching changes in the status of lines. As I said, in v1 we have this problem: say you get your line handles from your chip, and then someone requests those lines. Unless you actively try to request them and fail, or you regularly re-read the information, you will not be informed that something changed. If a chip appears in the system, you get a new event; if a line gets requested or released, you get nothing.

So now we have a separate mechanism called the line-info watch. When you call the watch-line-info function on your chip, it works like get-line-info: first it returns a snapshot of the information about the line, but it also immediately starts watching that specific line for any future changes. If such a change happens, the line gets requested, released, or reconfigured, I think that's all of them, you get a so-called info event. The chip has a file descriptor you can poll, or you can use dedicated functions; a flag is raised, you get notified that an event is pending, and you read it. The event tells you what type of change happened and its timestamp, and it also carries new line information, a fresh snapshot of the changed line.

Here's an example. You have the chip, you call watch-line-info on it, and the line information you get back says the line isn't used. Then someone comes and requests the line, either in the kernel or in user space. You get notified about the pending event, you read it, and the line info carried by that event now says the line is used. This makes it easy to monitor anything that happens with GPIOs.

Now, the event buffer I was talking about. Reading lots of events is the use case that should be made as fast as possible; that's what I gather from the mail I get from users. So now we have a container that you allocate once, the edge event buffer, and you read your events into it. The memory for the events is part of the buffer, so whenever you read new events, nothing gets reallocated. Events are stored in the buffer, and you can either get an event, which returns a pointer to the event inside the buffer, whose lifetime is tied to that of the buffer, or copy the event so that it can outlive the parent container.

That's how it works in C, but many users who need speed don't want to use Python, for speed reasons, and use C++ instead. In C++ we have a whole mechanism ensuring that the event is not copied unless you copy it explicitly; if you get the event from the buffer by non-const reference, it will be copied behind the scenes. I think this works pretty nicely.

Now, probably the most important feature, and one that's really important for the unified line request, is configuration overrides. You have the line config structure, and say you want to request eight lines. You set the default configuration: I want input mode, I want to watch rising and falling edge events on all lines by default, I want the pull-up resistor enabled, and I want the real-time clock for timestamps. But in the same request, you can also set overrides for certain lines. You can take an offset and set its direction to output, drive it to some specific value, change the drive setting to open source or something else, or override the default output value or the debounce period or whatnot.
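As a sketch of that eight-line request in the v2 Python bindings, where each group of offsets simply gets its own settings (pre-release API, so this is illustrative; the path and offsets are arbitrary):

```python
import gpiod
from gpiod.line import Bias, Clock, Direction, Drive, Edge, Value

# Defaults for seven lines, plus an override turning offset 7 into an
# open-source output driven to a specific value.
request = gpiod.request_lines(
    "/dev/gpiochip0",
    consumer="override-demo",
    config={
        (0, 1, 2, 3, 4, 5, 6): gpiod.LineSettings(
            direction=Direction.INPUT,
            edge_detection=Edge.BOTH,    # rising and falling edges
            bias=Bias.PULL_UP,           # internal pull-up
            event_clock=Clock.REALTIME,  # real-time clock timestamps
        ),
        7: gpiod.LineSettings(           # the per-line override
            direction=Direction.OUTPUT,
            drive=Drive.OPEN_SOURCE,
            output_value=Value.ACTIVE,
        ),
    },
)
```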
This allows you to create very complex configurations if you need them. There are limitations, though. The library itself doesn't limit what the user wants to do, but at the moment of making the request, the kernel has to translate this into its own configuration, and because the structure that can be passed to the kernel during the request has a limited size, the configuration can simply become too complex, in which case the request will simply fail.

The next interesting thing is sequence numbers. Every edge event, every interrupt being reported to user space, carries a global sequence number and a local, per-line sequence number. The global one counts events across all lines in a given request; the local one counts events for a single offset. It works like this. You get the first edge event, for offset 0, so both the line and global sequence numbers are 1. Then you get another for offset 0: again both are the same, now 2. Then a third event, this time for a different offset: the line sequence number is 1, because it's the first event for that line, while the global sequence number is incremented to 3. Then a last edge event back on offset 0: its local sequence number is 3 and the global one is 4. So if you're reading lots of events and it so happens that they get reported to user space in the wrong order, you can always put them back in order; for instance, if you want to implement some kind of bit banging or anything else that requires proper ordering.

We also have a new kernel testing module. libgpiod v1 uses a kernel module called gpio-mockup for testing; it exposes simulated GPIO chips, but it has been around for years, and it requires you to unload and reload the module to change any configuration. When you want to run lots of tests with different chip configurations, you have to unload and reload it all the time, and it only had a very limited set of configuration options passed as module parameters.

The new testing module for GPIOs is called gpio-sim, and it works differently, using configfs and sysfs. How does it work? You have a configfs directory for gpio-sim. Inside it you create a new GPIO device; inside that you create one or more GPIO banks; then you enable the device using a special attribute, and bam: a new GPIO chip pops up in the system with the banks you configured. Through special sysfs attributes that we export, you can control it: you can set the values that the user of the chip will read, and you can read back the values that it is setting. The chip can technically be used from the kernel, but it's mostly aimed at libgpiod. This allows easy testing of the GPIO character device, libgpiod, and the GPIO subsystem in general, and we have a huge set of tests that try to verify that all the corner cases work fine.
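Roughly, the setup flow looks like this; a sketch only, assuming configfs is mounted at /sys/kernel/config, the gpio-sim module is loaded, and you're running as root (attribute names follow the merged module, from memory, so double-check against the docs):

```python
from pathlib import Path

# Create a simulated chip through gpio-sim's configfs interface.
dev = Path("/sys/kernel/config/gpio-sim/demo-device")
bank = dev / "gpio-bank0"
bank.mkdir(parents=True)               # create the device and one bank
(bank / "num_lines").write_text("8")   # the bank exposes 8 lines
(bank / "label").write_text("demo")    # optional chip label
(dev / "live").write_text("1")         # enable: the chip appears in the system

# The simulated pulls can then be set, and the driven values read back,
# through sysfs attributes exported under /sys/devices/platform/.
```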
Some general new features of the library. This time, all C objects are opaque, as they should be; we simply expose interfaces for manipulating them. All C++ classes have their implementations hidden, so we no longer risk changing the ABI unnecessarily. The Python interface has been reworked and consulted with some Python experts, so this time it really follows the Python guidelines. And we have Rust bindings coming up; Viresh Kumar is taking care of that, and I hope he can get them in soon.

We also have some reworks to the GPIO tools. There's going to be a new program called gpiowatch, which will use the info-watch interface to report any changes to lines. We will have an interactive mode for gpioset. An issue that many users have raised is that gpioset doesn't keep the lines requested, so you have to keep the gpioset process alive in order to keep driving the lines. You will still have to keep the process alive, and it will stay like this, because that is inherently how the GPIO character device works, but this time there will be a proper, explicit interactive mode that says: OK, this is the console for your GPIOs. So this is coming up.

We will also be able to specify lines by name everywhere. We already have a program for finding line names, but you had to look the names up, put them in some kind of variables, and then pass them to the other programs, which by default refer to lines by number; you had to translate names to numbers yourself. Now all the tools will be able to refer to lines by their names.

And why are we doing all this? This is the only code sample in the talk. The library right now is 20,000 lines of code, but that is all so that you can get the desired effect in five lines of code if you're using, for instance, Python or C++, or Rust in the future. C being C, it's much more elaborate, but here is an example of how to request a set of lines, watch them for events, and print those events in Python. I think it's quite simple.
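Such an example looks roughly like this against the v2 bindings; a sketch of the slide's idea, not its literal contents, since the API was still pre-release (the path and offsets are arbitrary):

```python
import gpiod
from gpiod.line import Edge

# Request two lines for edge detection and print every event as it comes in.
with gpiod.request_lines(
    "/dev/gpiochip0",
    consumer="watch-events",
    config={(0, 1): gpiod.LineSettings(edge_detection=Edge.BOTH)},
) as request:
    while True:
        for event in request.read_edge_events():  # blocks until events arrive
            print(event)  # type, offset, timestamp, sequence numbers
```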
OK, so I only have three minutes left, so here are the links. The first is obviously the repository. We have a branch called next/libgpiod-2.0; this is where all the code lives right now. It's soon going to be merged into master, and we'll release a release candidate. Development happens on the linux-gpio mailing list; please help us review the new interfaces. I will soon repost the entire series, so if you're subscribed to linux-gpio, you can help us review it. And special thanks to Kent Gibson, from Perth in Australia, who helped me immensely with this interface: he does an amazing job reviewing patches and is also behind most of the code for the character device in the kernel. And yes, I will take questions. I made it in time.

Q: Thank you, Bartosz. When we enable preempt RT, for example, and we go to your code example, what level of noise should we expect within, let's say, a read-event call? And is there any sense in expecting consistency in that noise between versions of libgpiod? Are you doing any sort of timing analysis of the flow from user space all the way through the kernel?

A: I'm not sure I understand what you refer to as noise.

Q: The time taken from the user space application calling into libgpiod, let's say a read-event call, to the point where I get the value back. How consistent is that time period, if I put it in a loop, for example?

A: I don't have the answer right now; I haven't tried it with preempt RT. It's a good question; I will run some tests. In general, it's pretty consistent without preempt RT, but I don't have an answer for preempt RT. That's a good point, though; I will get back to it.

Q: Can you specify drive strength for some of the pins? In a driver you can manipulate the registers directly, but does the API allow setting characteristics like that for those pins?

A: The whole point of libgpiod is that you cannot write the registers of the controllers directly; it's an abstraction layer on top of the drivers.

Q: Right. But in the API, do you have a place to specify things like drive strength, or pull-up and pull-down?

A: Yes, it has all sorts of options. I didn't mention all of them, but if you go to the current v2 branch and build the docs, and look at the line config structure, you will see all kinds of options you can set. You can override them, you can set defaults; it's quite extensive. You can set the internal pull-up and pull-down resistors, you can set the way the lines are driven, push-pull, open-source, open-drain, you can set your own debounce period, and so on. Pretty much everything you would want to use from user space is available.

Q: OK. But what if some chip doesn't support that?

A: It depends on what the driver does. Some drivers just ignore those configuration options; some return errors, like operation not supported, and you will get whatever the driver returns.

Q: Right, makes sense. OK, all right.

A: Thank you so much.