So, I'm Jonathan Cameron, I'm the maintainer of Industrial I/O, and I've been doing this for a while. It turned out, when it came time to submit papers for the conference, that it had been pretty much ten years to the month since the first few emails went out and the first few RFCs. So I thought we'd do a talk on that period of time and what happened. I'm going to break it down into a number of different things. First of all, to understand basically any piece of code, one of the fundamental questions is: why did it end up as it did? We all know that if we started again, we'd end up with a different answer to any given software project than we actually did. So I'm going to provide a bit of information about what I wanted to do, what some other people wanted to do, and how we evolved into where we are now. Then some stuff on interface principles, because one of the biggest things Industrial I/O really is, is a user space interface. This is true of pretty much any kernel subsystem: you can change anything inside the kernel, but what you can't change is anything that would cause a regression in user space. I have a very brief summary of the IIO architecture, which is more about understanding some of the issues we'll come on to talk about. This talk is not meant to be a tutorial; I've got some good references at the end if that's what you're looking for, and it's been covered reasonably well in previous presentations. Then we're going to focus quite heavily on some of the mistakes we made. I made most of them, so I can be rude about them. And finally, I think one of the most important things about an area of the kernel is the community that's built up around it, so I'd like to add a few words on that as we get towards the end. So first of all, some history. To quote from The Cathedral and the Bazaar, the famous open source text, every good work of software starts by scratching a developer's personal itch.
So what was mine? I was working on an academic project called SESAME, which is an incredibly contrived acronym that I can never remember, but has something to do with sensors and sport. What we were doing was sticking sensors on athletes; the main target was national-level sprinters. And that was somewhat challenging: there weren't many platforms out there ten years ago to do this with. We did have one, a thing called the Imote2, which was very similar to the early Gumstix boards, if anyone remembers those. It was based on a PXA270 processor. It had basic wireless capabilities, but not Wi-Fi or anything like that. And it had early versions of what we now call capes or shields: you could plug another board in via a standard set of connectors. One of the first things we did with it was to get a Wi-Fi chip running on it. The very early SDIO open source drivers were just beginning to appear, so that was the Marvell Libertas driver. To do that, we needed the current kernel, as opposed to the one we got from Intel Research, which was some years out of date. So we did that, and upstreamed it; the board support is still there. Then we came on to what we actually wanted to do, which was to measure things. So we were looking at sensor drivers, and the obvious question was: well, we've got used to this upstreaming thing, it's quite nice. The code goes forward with new versions of the kernel, we get the new features, we support new parts. So, let's upstream the sensor drivers. We had a bunch of them from Intel, in various states. And it was a question of: OK, so how do we do this? So now we come to some requirements. What did we actually want? We wanted a simple interface, obviously. Very, very straightforward.
Hardware monitoring had been around for a while, used for monitoring things like fans and temperature sensors. You just read a sysfs file and you get a value as a nice string. It's very simple. We wanted efficient streaming, however, because you don't get to do much inertial measurement of a sensor on a sprinter's leg if you only poll it a few times a second via sysfs. We could look at input, which has the whole event system, and it's moderately efficient, but it's targeted at one very particular application, which is human input. So we did what you did in those days: we asked the Linux kernel mailing list. The key thing was that, back then, a reasonable number of people actually read the Linux kernel mailing list, because it wasn't quite such a firehose as it's become. It was still a good few hundred messages a day, but you could actually get replies. And the answer was: oh no, we don't really do that in input, we don't really do that in hardware monitoring, it doesn't really make sense to expand them. You have to do something new. Now, here's the first of the issues we're going to raise during this talk. These were my requirements. I had one particular project; I had that personal itch. As we'll see, quite a number of other requirements came in from other users. So what is IIO? It's called Industrial I/O for reasons that, I'll be honest, I can't quite remember. We got into a whole naming debate, and that's what came out. I'll start with a slightly backwards definition, because this is one people often get a bit confused about: what isn't it? It is not intended to be a replacement for things that are already done well. So here I've listed that it's not a replacement for hardware monitoring, and it's not a replacement for input. They both do their job. These days, we've had a few sensors that overlap with audio applications; again, if it's an audio device and they're using it for audio, leave it in audio. IIO is broad.
We're not focused on one relatively narrow area. We actually tried that, because during the very early development of IIO, someone working for Intel sent a driver for an ambient light sensor and said, well, it's silly: IIO is a big complicated subsystem, and all I want to do is measure a light signal. So we gave it a go. We got all the way to a pull request to merge our new little subsystem; we moved over a couple of drivers from IIO at the time, and various people chipped in at the end, including Linus. And he made a very good statement. As you know, if you've ever had a pull request rejected, it's somewhat of a knockback, but he was right. He said: "I do think that it's crazy to start doing new subsystems for every little thing. That way lies madness." So we went back and carried on plowing on with IIO. So now we get on to what devices we support, because that in effect defines what IIO is. Basically anything that's an ADC, so it takes an analog signal in and gives you a nice number somewhere in your processor. Digital to analog converters, so DACs, go the other way; those were added a while later. But you get parts that do both, so it makes sense to share as much of the infrastructure as possible, and there are certain similarities, although there are also differences, between data coming in and data going out. So I've just listed a few things here: ADCs, accelerometers, gyroscopes, magnetometers, IMUs, light sensors, chemical sensors. A lot of volatile organic compound sensors fairly recently, things we measure in pollution. Dangerous gas sensors have started turning up. Health sensors, things like pulse oximeters, where you shine a light through someone's finger and can measure their pulse rate fairly indirectly. Rotation sensors and others. DACs tend to be a bit simpler: it's mostly actual DACs and digital potentiometers.
So, I mentioned earlier that the key thing is the interface. What is the interface for IIO? The purpose of having this interface is to allow generic user space code. I've listed a few examples here: libiio, iio-sensor-proxy, which is part of GNOME, I think, and some work on an Android sensor HAL. There are others. Lots of people spin their own because they're doing one very particular application. And we have example code in tools/ in the kernel. Now, the key thing is it must be consistent. One of my personal aims in any user space interface is that, if at all possible, you shouldn't have to read the documentation to work out what it's doing, or what that value you just read from sysfs actually is. So there are no magic numbers. I've listed a few of the principles we built upon. We decided, and this is different, for instance, from what input does, that all control and metadata would be via sysfs. When you read from the buffered interface and the data comes out on a character device, you just get data. You don't get anything telling you what that data is, because it's considered to be predictable: you've configured what you want to come out. We do have single-channel polled reads via sysfs. This is, again, similar to what hardware monitoring does. It gives you a very quick and easy way of just reading the value in the sensor right now. If nothing else, even on high-speed systems, it's awfully useful for debugging just to be able to poke it with cat. We then use character devices for the actual main data flow when we're running at any significant speed, with FIFOs so that you don't have to have user space constantly ready to get the data; it can come back in an asynchronous fashion. We also use character devices for events. We do have some other interfaces for high-speed devices, but I'm not going to touch on those much today, just because there's too much to cover.
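Because the attribute names and the raw/offset/scale convention are fixed, generic user space code really can be tiny. Here's a minimal Python sketch of a single-channel polled read, assuming the standard sysfs layout; the device directory and channel name are whatever your particular board exposes.

```python
import os

def read_channel(dev_dir, name):
    """Polled read of one IIO channel via sysfs.

    Follows the documented convention: the processed value is
    (raw + offset) * scale, with offset and scale being optional
    attributes that default to 0 and 1 respectively.
    """
    def attr(postfix, default=None):
        path = os.path.join(dev_dir, "%s_%s" % (name, postfix))
        if default is not None and not os.path.exists(path):
            return default
        with open(path) as f:
            return float(f.read().strip())

    return (attr("raw") + attr("offset", 0.0)) * attr("scale", 1.0)
```

Something like `read_channel("/sys/bus/iio/devices/iio:device0", "in_voltage0")`, which you can sanity check against a plain `cat` of the same files.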
So, a very brief summary of the architecture, and this will lead on to our second issue. There are two fundamental ways to use IIO. You either use a simple polled read, where you just read from sysfs, or you use this concept of triggers and buffers. There was a talk earlier today looking at UAVs which covered this quite well, so I'm only going to touch on it fairly briefly, but the basic concept is that a trigger is not like an oscilloscope trigger, where one signal fires off a whole range of captures afterwards and you get whatever fits on the screen or whatever your buffer's set to. It's much closer to the sort of trigger you'd use on a camera, where you're saying: capture me all of the pixels as close as possible to now. So what you do is gather a series of concurrent samples from all the enabled channels. As I mentioned a moment ago, we use buffering; it's actually just implemented as a kfifo in order to allow asynchronous reads. So here's the simplest path. This is just a sysfs read. We start there on the top left of the green block and we read a sysfs file. It calls down into the IIO core, down it goes via callbacks, picking up some metadata so it actually knows which channel it was, associating it with the file you actually read from. And at the end it talks to the hardware. Typically this is over I2C or SPI or any number of other buses; pretty much anything, including the weird and wonderful custom buses that people seem to like with sensors occasionally. Then, on the way back, the value is returned up to the IIO core, which is responsible for formatting. It takes a number in a particular format specified by the driver, defined as an integer plus some micro part. And this is just to give us something that's a bit close to floating point, but is a bit more constrained and easier to handle in the kernel.
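That "integer plus micro part" representation can be sketched in a few lines. This loosely mirrors what the core does for a driver reporting IIO_VAL_INT_PLUS_MICRO; the real kernel code also handles nano and fractional variants, and its negative-value handling is subtler than shown here.

```python
def format_int_plus_micro(val, val2):
    """Render a (val, val2) pair as the decimal string user space reads,
    where val is the integer part and val2 is in millionths.
    e.g. a scale of 0.061035 is passed by the driver as (0, 61035),
    so no floating point is ever needed in the kernel."""
    sign = "-" if val < 0 or (val == 0 and val2 < 0) else ""
    return "%s%d.%06d" % (sign, abs(val), abs(val2))
```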
And then we return our string up to user space. Now, the classic question we got asked in the early days, and it's come up relatively recently with hardware monitoring, which is slowly moving in the direction we did, is: why do you have the core in there? You're just reading a file and your driver's returning a value; it could do everything. The way it was done in hardware monitoring was that the ABI was enforced by review. You just looked very, very carefully at what every driver was providing in the way of a sysfs interface and ensured that it met the spec. If nothing else, that makes review really tricky, because you've got a lot of sysfs files and names to read. So that's one of the advantages: you enforce an ABI by making it structured, so that the core gets a definition of what it is going to read and you have to keep to that. All of the file names and suchlike are generated from it. The other use is the first of our requirements changes that came in. This came, particularly, after a conversation at this same conference five years ago in Edinburgh with Mark Brown. He said: well, I kind of want to do SoC ADCs, and they get used for everything. I want one subsystem. I don't want a different subsystem for doing touchscreens, for measuring battery voltages, these sorts of things; it's all the same hardware. Obviously we need some layer above to do the user space formatting and to provide that information to where it needs to be. Now, the problem was that in IIO at the time, we didn't really have that layer of separation. But because we had this pass through the core, we could have a slightly alternative one. And so this is the in-kernel interface, which, as you can see, looks almost exactly the same. You do a read, but now it's not from user space, it's from another driver. And it goes down through the same path: callbacks, adds metadata, all the way back.
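To make the "ABI by construction" point concrete, here is a toy Python version of the name generation. The real thing lives in the IIO core and works from struct iio_chan_spec; this simplified sketch just shows why a driver can't invent a nonconforming file name: it never writes the string itself.

```python
def attr_name(ctype, postfix, index=None, modifier=None, output=False):
    """Generate a sysfs attribute name from a channel description,
    e.g. in_voltage0_raw or in_accel_x_scale.  Simplified: the real
    core also handles shared attributes, differential channels, etc."""
    direction = "out" if output else "in"
    if modifier is not None:          # directional channels: x, y, z
        chan = "%s_%s" % (ctype, modifier)
    elif index is not None:           # plain indexed channels
        chan = "%s%d" % (ctype, index)
    else:
        chan = ctype
    return "_".join([direction, chan, postfix])
```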
But this time we don't pass a string; we just pass the actual values, and the description of what format they're in, to the consumer driver. This is used for a number of things. We've got a bridge driver to hardware monitoring, for those occasions when you've got a nice fast ADC doing a slow job. People quite often do hardware monitoring with one-megasample ADCs, which makes no real sense except for the fact that they had a box of them on the shelf, so they use them. And we're not going to have two drivers in Linux to support the very, very slow case where you're just reading a temperature a few times a second. So we do it using this approach of an in-kernel call into an IIO driver. And there are other users: thermal, battery monitoring and other IIO devices. We'll have some examples of those in a few minutes. So this is the more complex flow: the one where we're doing the triggers, pushing to the buffers. Down in the bottom left there, we've got a typical device. It has a data ready signal. It's self-clocked; it has its own sequencer, and it's just feeding data out on its own internal tick that you've configured. So the first thing you see on the left here is that we actually have a separate chunk in IIO, which is known as an IIO trigger. Now, this doesn't have to be coming from the same hardware at all, and it also doesn't have to be fed just to the same hardware. In this example, we've got it split two ways: we're actually running two different IIO devices off one data ready trigger. It seems an odd thing to do, but if, like me in the early days, you're working with inertial sensors, it's quite common to want to grab the gyroscope at the same time as the accelerometer, or at least as close as you possibly can, within some constrained time difference. So from there, we go across and we say: OK, we now know there's data to read.
So we go off to the hardware for the data, up it comes, and we call this thing called push to buffers, which originally just pushed into a kfifo and up to user space. But again, because we had this ABI abstraction layer, we had the option, when we needed the SoC ADCs, to come along and say: well, actually, what we can do here is put a demux in there. We can split the superset over all the channels that anyone's asked for into different data streams, and we can send one off to our consumer device, or to multiple consumer devices (you can have pretty much as many of those as you like), and one up towards our user space interface. So on the left, in that branch, it goes to an internal, in-kernel user, and on the right it goes towards user space, where we add it to the tail of the kfifo, and we have things like watermark interrupts. You can control how much data there is before you send an interrupt up to user space, and the user space portion of the code can be sitting there using select or poll to wait until that much data is available, and then it goes in and does the read and gets the data. So the problem is that we've now ended up in a situation where we can handle internal consumers and we can handle user space, but what we still have is very tight coupling. That user space interface is not currently optional. It's been on the list of things to fix for a very long time, and we're slowly, occasionally, moving in that direction. The idea is that in many SoC cases you actually don't care about the user space IIO bit, because no one's going to read it. Now, I don't know, did that look a little bit complex to anyone? The classic question we always get asked is: why did you end up with something so complex? And there's basically one word for that: flexibility. We had a number of different use cases.
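The demux step is conceptually simple: every scan captured from the hardware covers the superset of channels anyone asked for, and each consumer gets only its own subset back out, in capture order. A sketch, with made-up channel and consumer names:

```python
def demux(scan_order, scan, consumers):
    """Split one scan (concurrent samples of all enabled channels)
    among consumers.

    scan_order: channel names in capture order, i.e. the superset
    scan:       one value per channel, same order
    consumers:  mapping of consumer name -> set of wanted channels
    """
    return {
        name: [v for ch, v in zip(scan_order, scan) if ch in wanted]
        for name, wanted in consumers.items()
    }
```

In the split shown on the slide, one consumer is the in-kernel user and the other is the kfifo feeding the user space interface.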
We had people starting to do software defined radio, although that needs a whole load of extra infrastructure, which we'll talk about later, because the data rates are very high. We had myself, still trying to measure stuff on athletes, and we had people trying to do power monitoring, similar sorts of applications, or light sensors, or taking someone's pulse. So the key thing here is that not all devices have to do it all. Pretty much everything in here is optional, and when you first write a driver, you may well implement only some small subset. You might just do sysfs polling, the simple reads of a channel. Some devices never go beyond that: they're slow, and there's never any reason to support the other interfaces. And as I mentioned with the triggers, you can actually end up in a situation where one device provides the trigger, and this could be something like a high resolution timer; it doesn't even have to be a piece of explicit IIO hardware. We have sysfs triggers, where you just poke a file in sysfs and that results in all of the sensors you've got attached to that trigger grabbing a set of data. And the other aim, ultimately, is that IIO user space should just be yet another in-kernel user. The reason for this is that we'd only end up with one code path, so there's less code to look after. Flexibility also lets us do cool things. So I like these examples. This is from a fairly heavy contributor of recent years, Peter Rosin. He always seems to want to measure really strange things. In this particular case, he was looking at envelope detection. This is where you've got a relatively fast moving waveform, and all you actually care about is the maximum value it ever reaches and the minimum value it ever reaches. Obviously you could sample that with an extremely fast ADC, but the approach that's often used is to use a comparator. A comparator, if it gives you a value as your output, is effectively an ADC.
It's just an ADC measuring the maximum or minimum, one at a time, of the waveform. So he implemented this using those consumer interfaces. What we actually have is a comparator driver that takes the input signal you're trying to measure, but it is also driving a DAC via a consumer interface (kind of the wrong name for it when we're driving a DAC, but anyway). It controls the DAC, and then you get a nice interrupt out if your waveform crosses the threshold. If it does, you change the value of your DAC, try again, and run a bit longer. And sooner or later, you converge on the right place. For reasons I never quite established, he also didn't actually have a DAC. What he had was a digital potentiometer. So we ended up with this chain of devices: he implemented the DAC using a digital potentiometer, via the internal consumer interface, which was then used by the comparator. And that's the ultimate thing: we ended up building one device out of a number of independent components, and you can replace any of them with an alternative implementation. Just recently, we have also gained our first generic ADC touchscreen driver. This is a problem for a lot of touchscreen controllers, because they often have some strange built-in sequencing hardware, so they're not easily mapped directly onto an ADC. But there is a class of relatively simple devices, particularly resistive touchscreens, where you can do this. So you can have one driver: you plug your touchscreen into an ADC and you can use it as a normal input device. So this brings us to the biggest issue with trying to design a subsystem. Even over ten years, and I've tried to think what it would be like over a longer time period, it's very difficult to predict the future. Some of these, arguably, we should have noticed. ABI mistakes. So, things not to do, should you be trying to come up with an ABI.
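Before getting to those mistakes, the envelope detector above is worth a quick sketch, because it shows what composing independent IIO devices buys you. Here the comparator-plus-DAC pair is modelled as a plain function, and the binary-search strategy and all the names are mine, not necessarily what Peter's driver actually does.

```python
def track_envelope_max(comparator_fired, dac_min=0, dac_max=255):
    """Find a waveform's peak using only a comparator and a DAC.

    comparator_fired(code) sets the DAC threshold to `code`, watches
    for a while, and reports whether the input ever crossed it.
    Returns the highest DAC code the waveform still exceeds."""
    lo, hi = dac_min, dac_max
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if comparator_fired(mid):   # crossed: the peak is at least mid
            lo = mid
        else:                       # never crossed: the peak is below mid
            hi = mid - 1
    return lo
```

Flipping the comparison direction gives the minimum of the envelope, the other half of what the driver tracks.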
Don't ever think: oh, wouldn't it be nice if we could just get rid of that index there, because it's hardly ever used, so we can make it optional. Because you end up with things like this. Initially we thought: well, if it's an accelerometer, it'll have a direction, typically associated with an axis, and we defined x, y, z. We also have, separately, the ability to give channels an index, because if it's an ADC, obviously x, y, z doesn't make a lot of sense. But then we defined it so that typically you'd only use one or the other. Of course, lo and behold, along came a three-axis accelerometer with two accelerometers on each axis, covering different ranges. And suddenly we ended up having to make user space support what was in theory always possible but never implemented: both at the same time. We should just have had the index there from the start. Here's another one. In the early days, we were trying to remain as compatible as possible with hardware monitoring, thinking: well, there's loads of code out there, let's try to use the same naming, let's try to use the same units, so that people can just use their existing code. It's an ADC in a different subsystem, but otherwise you've just got to point the code at a different set of files. Now, hardware monitoring is targeted at one very particular application: monitoring currents, voltages and things like fan speeds and temperatures on a motherboard. And those tend to come in nice, well-defined ranges. So you tend to measure things in millivolts, because, well, you're probably not going above 12 volts, so there's plenty of room in a reasonably small number of bits. And then along come some of the three-phase power monitors and some of the kit where suddenly we're up at small numbers of kilovolts, and then some extremely precise ADCs measuring at the other end, where we're down at very, very small numbers of microvolts or picovolts.
Now, we do have the ability to cover that range because, as I mentioned earlier, we have this concept of a type that allows you to basically redefine what the value you passed out was: a sort of pseudo-floating point without the nasty maths. What we should actually have done in the first place is just say: we're going to pick a standard unit. We're going to go with normal SI units: volts, amps, watts. Don't get into the game of trying to match. The reason this is a problem is that you do have to look at the documentation to know what the unit is for some of the very standard measurements, whereas if we'd just gone with the base units, everyone would have known. And I'd say once or twice a year one gets past us, and we end up with a fix patch going: ah, this was out by a factor of ten to the six, which is only noticed once it gets to some user space app and they get a crazy graph. So here's another one. We had this wide range in IIO; we have lots of different devices. We spent a lot of time trying to work out how to abstract a new class of device so that we could represent it in a way that is consistent with what we've already got. We did this for counter drivers, so things that are measuring quadrature encoders, typically. They're measuring something that's related to rotation, but perhaps not directly. They don't fit well. We kind of made it work, but it was getting stretched more and more. We were abusing various interfaces. It wasn't pretty. So we've now got, in fairly final stages of review, a separate counter subsystem, because keeping it in IIO didn't make sense. And having moved that out, we've ended up with a much cleaner abstraction. It's flexible in the right places, rather than allowing you to do things that make no sense on quadrature encoders. However, we do, of course, have to maintain the historic ABI.
So those drivers that have been in IIO for a while are going to have to do both, which is not great. This is probably the most complex one; I'll only touch on it briefly. One side effect of bolting in the SoC ADC use case with the internal interfaces is that we have a problem with our original sysfs interface. If you've got an in-kernel user, say a touchscreen driver, sitting there wanting to read the X and Y axes at nice high rates, it's very hard to stop that process just because someone came in with a sysfs read and wants to ping some random battery measurement. And this is because there's no indication of interest. There's no way of saying: actually, I'm going to want that in a minute; could you add it to the channels that are going to be captured every time, and buffer it so I can grab it later? We don't have any clean way of doing that, and I'm very open to suggestions of how we get ourselves out of that hole. Right now, we just dodge it by not supporting it. Now, here's another common question on IIO, which is high performance devices. Rather helpfully, there's a load of Analog Devices guys here, so if you want to ask about that, they're over there. But it brings some problems. You need a different way of getting data out of the kernel. You can't be running it through a kfifo; the overheads are way too high. You've basically got to be DMAing into a buffer that's immediately visible from user space. Now, we've had DMA buffers for quite a while, but there are limitations in the interface, because we've tried to bolt them into our existing buffer infrastructure, and the one classic limitation we were chatting about last night in the museum is that we don't support multiple buffers, and certain devices are going to DMA different channels into different regions of memory.
Often these devices have very complex triggering systems, because you're capturing at extremely high rates and you don't just want to capture the same eight channels, 1, 2, 3, 4, 5, 6, 7, 8, over and over; you want to do something more complex. You might have simultaneous sampling of certain channels and not others. It gets much more sophisticated than what you typically do on a simple SPI ADC. You often need inline metadata: sometimes these devices will auto-range at very high rates, so you can't just read the range from sysfs. They can be self-describing, so you will often get the device putting a little record at the beginning of every thousand samples, or every ten thousand samples, or something like that, saying what's in those samples. We don't yet handle these in mainline, so it's an open question exactly how we do it in a nice generic way that works for a wide range of devices. So I think this is my final issue: the hardware we're dealing with is getting more complex, perhaps, every day. There are a lot of things that used to be done by a proprietary microcontroller wired up to a sensor, and now there's a nice Linux system there, so the job just moves onto that. A classic example of this is something like a pulse oximeter. A pulse oximeter works by shining an LED from one side of your finger, typically putting a sensor on the other side, and measuring how much light gets through. It's a bit more complex than that, but more or less. Now, the algorithms to do that conversion from a light signal through to a pulse rate rely on long-term data; you typically have to capture a significant number of seconds' worth of data to do it. And they're all heavily proprietary. There are open source implementations, but they're usually things that came out of academic papers, so use them at your own risk. It's not something that's considered generic.
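To make the self-describing idea concrete, here is one purely hypothetical record format: a small header (a sync byte, a sample count and a gain code) in front of each run of samples. Nothing like this exists in mainline, as just noted; it only illustrates why generic parsing is an open question, since every device will pick a different layout.

```python
import struct

def parse_stream(buf):
    """Parse a hypothetical self-describing stream: each block is a
    4-byte header <sync:u8, count:u16, gain:u8> followed by `count`
    little-endian s16 samples, scaled by the in-band gain code."""
    samples, off = [], 0
    while off < len(buf):
        sync, count, gain = struct.unpack_from("<BHB", buf, off)
        if sync != 0xA5:
            raise ValueError("lost sync at offset %d" % off)
        off += 4
        raw = struct.unpack_from("<%dh" % count, buf, off)
        off += 2 * count
        samples.extend(v * gain for v in raw)   # apply the in-band gain
    return samples
```

In a real device, a gain code like this would let the hardware auto-range mid-capture without a sysfs round trip. That aside, back to the pulse oximeter problem.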
Now, the way we've handled this so far is that the actual signal you're measuring is still something we can describe in a nice, consistent way. It's a light measurement, or often a series of light measurements. And then we move to user space to deal with actually turning those into the signal you want. It's not ideal, but it's where we are. Now, I mentioned earlier the importance of community. To be honest, the IIO subsystem is, to a great extent, the community. It's not really the code. The code is fine, and the interfaces are very important, but the rest of the code isn't nearly as important as the feedback that everyone gives each other and the enormous amount of review. So, let's go through a bit more of the history, just to jump back again. How did IIO get into the kernel in the first place? First of all, we did the classic: we made some posts to LKML. There was nowhere else to send it, because it wasn't an existing subsystem. We got some feedback; every few months we'd put a new version up, everyone had forgotten what the code looked like, and we got the same feedback again. Sometimes we got new feedback, and new people got involved. It wasn't great, but this was just around the period when staging came about. So we thought: well, we kind of know our code's OK, but we don't really know where we're going. We hadn't actually figured out what the right answer for the user space interfaces was. And Greg said we could take a subsystem through staging. So we did. And actually it worked pretty well. We got a lot of great feedback, because it allowed people to get around to it when they had the time. We had people giving whole days' worth of review comments; they obviously spent considerable time going through the drivers. Arnd Bergmann in particular did a very detailed review that changed a whole chunk of the interface, and it just came out of the blue, because the code was sat there in staging. It was great.
After a while, we got to the point where our user space ABI was stable and we could finally look at moving out of staging. I had fun gathering data to show the progress over time. On the left there, we started off with our principal three drivers, I think it was, in the first batch to go into staging, and there were a couple more on the mailing list at the time. Around about 2.6.36, we started getting significant interest from other people; I think I'd written most of the drivers up to that point, with a few contributions from others. And it took us a while, three years, before we'd actually pinned down the interface enough to really consider moving out of staging. By then we had a number of companies involved, submitting drivers in the hope that we'd someday get it out of staging. So we did. And as you can see, a load of drivers moved over from staging quite quickly. A load more new drivers turned up, rather faster than we got the existing ones out of staging, and there were still about 20 in staging. We've had peaks where things have got very, very busy; I think the biggest was 20 drivers turning up in one kernel cycle. But now, typically, we're getting four, five, six drivers per cycle, which is a more reasonable rate. And it is perhaps slowing down a bit. I think this is mostly because the hardware manufacturers have finally started standardizing their interfaces. If you look at a new ST Micro accelerometer, for instance, it's often pretty much the same interface as the previous one, whereas a couple of years ago it was: oh, new device, completely new interface, every register changed, so you couldn't share a driver. Often now it's just a question of adding an ID and maybe a couple of little parameters to say what the range of the device is. So who wrote these drivers? Well, it wasn't me. I think in the current IIO subsystem we have 260-something drivers, and I think I wrote three of them.
I've got hardware for a few more, but not really very many. There are quite a few companies involved; I think we're at somewhere over 20 easily identified companies. I have no idea who some of the people work for, or if they're hobbyists or whatever. We do have quite a lot of hobbyists, and I'll come on to why I think that is in a minute. We have some students. We've also always had, oh, for a long time anyway, extremely good contributions from some of the outreach programs: the Outreach Program for Women, then Outreachy, and also Google Summer of Code. Anyway, you want to know who wrote something in the kernel. What do you do? You fire up gitdm. I was a bit annoyed, actually, that I still end up at the top of both these lists, but only just. So it's moderately close. There are some very familiar names, who are sat in the audience, on this list. But what's actually interesting is not really who the top couple of people are, to my mind. It's how long those tails are. Obviously there's a lot of data on this slide, but we can pick out the sort of level that means someone wrote a driver. A short driver might be less than this, but if we take greater than 1,000 lines of code, that means we've had 65 people contribute a driver, of our 260 drivers. We've had 11 people who have realistically contributed five drivers, or one really, really big one. And yeah, significant contributions in huge numbers. So overall, 512 people have contributed to IIO. We do always get the cross-kernel API changes, but they're down in the small numbers of patches or small numbers of lines of code. So now I think it's worth saying a few things about what makes a good community. I think this first one is very, very important; it also stops the maintainer burning out. You need good reviewers for a subsystem, people who are doing this a couple of times a week: they send out replies to emails, give excellent review, give mentorship.
Often informally: they'll just spend some time talking to a new contributor, talking through the process, doing the classics, pointing out, please don't top post, and all of the other elements that come with it. Now, the key thing with these reviewers, and we do have, I don't know, a steady maybe 10 people who do this now, is their willingness to engage. And one thing I'd put here is also their willingness to be persuaded. Often, and I do this myself, I'll review something and go, no, no, you've done this all wrong, as politely as possible. And then three or four emails later, they go, okay, I don't really understand why you're saying that's wrong. And sometimes it'll turn out they were right, or it's somewhere between the two. So why do we get so many new contributors? One is that we have extremely tangible things. It's motion devices. They're cheap. That's good. You can start simple: just getting a sysfs interface up and running is a few tens of lines of code. We have a history of new contributors. This is useful because there are an awful lot of sources of information. It isn't our documentation; our documentation is awful. So if anyone is interested in writing documentation, please talk to me afterwards. We'll get to it one day. I'll just mention here briefly the outreach programs. We've had some great mentors: Daniel, Octavian, Alison and Greg have all mentored people over the years. And we've had some great students. I haven't listed them here because they're in the reference list at the end. So one final slide, which is how to get involved. Perhaps this has motivated you to do so, or perhaps you've already been thinking about it. First of all, yeah, the usual: subscribe to the mailing list. I definitely prefer people to send emails there, but I will quite happily reply to personal messages if you're uncomfortable with doing that, certainly for first posts and things. But you can get some cheap hardware and mess around with it.
I do occasionally send out to-dos, as do other people, where we've found something that we're just not quite getting around to and it's a suitable task for someone who's just starting out. If there isn't one at the moment, feel free to send a message to the list saying, hello, I'm new, is there anything I can do? And yeah, we'll find something. We've still got a number of drivers in staging that need cleaning up. So to finish off, some references. As I said, this wasn't a tutorial; some of those references are tutorials, some of them aren't, and some of them are fun applications. I may have missed something, incidentally, so if anyone's looking at that list thinking, you missed my talk, this is what Google gave me, because I couldn't remember them all. These talks were this week, and one is still to come. So, yep, there's an Outreachy talk, which I think is tomorrow, in which Georgiana, who was a student earlier this year, is presenting, so do turn up to that. The other two people, incidentally, are around if you want to hassle them about the talk you missed, or if you weren't going to the EL workshop. So here, very briefly, are the intern blogs. These are great if you're getting started, because one of the things that all of these projects encourage is for people to document what they did. If I get a new bit of hardware, I often check those first, because it's a lot quicker than trying to find the setup guide for a random I2C controller. And that's pretty much all I had to say. So, if you talk really quickly, you might get a question in. I kind of overran. Any questions? If not, I'm around, so just grab me. We're out of time anyway. Thank you very much.