Next talk is by Arjan Scherpenisse, who is from Amsterdam, and he's also my boss, so that's why Daniel asked me to introduce him. He knows both of us, and we organize meetups in Amsterdam, mostly about Elixir. Today Arjan wants to talk about Nerves, because sometimes in his spare time he likes to tinker with old telephone systems and that kind of thing. He has also contributed a lot to the Elixir community recently. One more thing I want to mention: if you want to have a drink and a chat this evening, you probably won't have much time to talk now, but we're organizing a meetup at BrewDog, in the center of Brussels, at 9 o'clock. So if you have time, please find us. And with that said, let's listen to Arjan. Give him a big round of applause.

Hello, hello everyone. Thank you, Tonći, for that nice introduction. So, my name is Arjan Scherpenisse, and I'm very honored to be here. I work at Botsquad, which is a company that does things with Elixir and chatbots, but today I'm not going to talk about that. I'm going to talk about what is basically a hobby project of mine, but still, I think, very interesting: it's about Nerves. Let me give you some context first. I've worked a lot with artists to build interactive installations in the past, and lately I've started using Nerves for that. One of these projects, which this whole talk is about, is called EatTech Kitchen: an art performance by two ladies who do cool stuff with food and technology. It was a series of performances, but it was, and still is, also an interactive installation.
So, if the ladies are not there, you can still go into the kitchen, interact with the installation, and get a nice recipe at the end, which is like a mini performance; you can, for instance, eat a bite of the internet. They made these crazy food things for you to experience. And the way you interact with the installation is through this thing over here: a Google Home. It's a device you can talk to, and it will say something back. Sometimes it doesn't make any sense; sometimes it does. So you interact with this Google Home device and... sorry, I have to stay within range here. Okay. So, you interact with the Google Home, it tells you what to do, and at the end it prints a recipe for you. The recipes are on the wall; you follow the instructions and then you get your nice food experience.

The setup works like this. The Google Home is connected to the internet and talks to a chatbot that I built, a very simple, very basic chatbot. Eventually it prints something on a printer, one of those simple point-of-sale thermal printers. And of course something needs to drive that, so there's a Raspberry Pi here that I programmed to do it. I've been doing things with printers for the artist I work with for quite some time. The first versions of this software I wrote on plain Raspbian, you know, the Debian distribution for the Raspberry Pi. You can do a lot with it; it's basically a mini computer you can put somewhere, like this one over here. I needed to interact with the chatbot, which uses Phoenix Channels, an interactive way for Elixir and Phoenix projects to talk over WebSockets, so I wrote something in Node.js. I also needed to drive some LEDs, because there were some fancy lights, so I wrote some more Node.js.
But then I needed to download an image, so I thought, fine, I'll just use wget, because it's installed, right? I didn't want to learn how to do that in Node. And I know a bit of Python, so I thought, let's just use Python for the printing, because the printer driver for Python was apparently better than the one written in Node. But there was a printer driver, so that was good. So there was a whole bunch of software stacked together on the board.

But then, suddenly, when this was live, I got a phone call: "Sorry, but the Raspberry Pi broke, can you come?" So I had to go there. It was broken, and it turned out the SD card was corrupt. When you unplug the power of a Raspberry Pi while it's writing data, your SD card can be broken beyond repair. It's really easy to destroy SD cards by just letting Linux write to them and then having somebody unplug the device. That's not ideal. So I googled it, and somebody said: you should use a read-only file system with an overlay in memory, blah blah blah. I typed some more commands, and then I had a read-only version of the software. Every time it was unplugged and plugged back in, it came back up and restored that version, and Linux could no longer write to the file system, which is pretty handy. But it also meant that every time I needed to update the Node script, I had to boot again with a special flag, make the file system read-write again, et cetera. So there were a lot of moving parts.

So, about a year later, when Klasien asked me, "do you want to do this project again with me?", I said, yes, sure, but I'm going to use a different technology this time. I needed an excuse for a Nerves project, and here it was. So, who of you doesn't know what Nerves is at all? Okay, that's quite a lot. That's good.
So, I'm going to give a very short two-slide primer. Nerves is an IoT framework, I guess, for Elixir and other BEAM languages; correct me if I'm wrong. It supports devices like the Raspberry Pi and the BeagleBone, basically any small Linux device that's not very, very small, because if it doesn't have a memory management unit it doesn't really work well. It needs to be able to run a normal Linux kernel, and then you can boot into Nerves. It's not a Linux distribution: basically the only things that run on your Raspberry Pi when you use Nerves are the Linux kernel, which then boots directly into Erlang. There's nothing in between: no systemd, no init scripts. Well, I think there's one small thing in between, but it's very little. Basically everything runs inside your Erlang virtual machine. You write all your software services on the BEAM, and the BEAM is like the container for your whole application and your whole hardware thing. That's the only thing that runs there, and it makes it very small: for the "hello world" of Nerves, I think the image you need to flash on the card is about 12 megabytes or so. At that size it's very quick: you write it, you upload it, and it runs. You configure the network. And it's actually pretty handy to do it this way, because you have just one code base to maintain. You have your software project, you put it on a device, and you know for certain there's nothing else on the device that can wreak havoc on you. Everything is managed with Erlang, Elixir, and the BEAM, where the supervision mechanism of Erlang really replaces a lot of the things systemd would normally do for you. And the builds are reproducible: every time you build, you can be pretty sure it's the same thing as last time.
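To give an idea of what "you write it, you upload it, and it runs" looks like in practice, here is a sketch of the usual Nerves workflow using its standard mix tasks (the project name and target are just examples):

```shell
# Sketch of a typical Nerves workflow; project name and target are examples.
mix archive.install hex nerves_bootstrap   # one-time: install the Nerves project generator
mix nerves.new hello_nerves                # generate a new Nerves project
cd hello_nerves
export MIX_TARGET=rpi3                     # pick the target hardware, e.g. Raspberry Pi 3
mix deps.get
mix firmware                               # cross-compile a complete firmware image
mix firmware.burn                          # write the image to an inserted SD card
```

The `MIX_TARGET` environment variable is what selects the cross-compilation toolchain and base system for the board; with it unset, the same project builds and runs as a normal Elixir app on the host.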
And what's also very handy is that it has an update mechanism. It actually has two firmware partitions, an A and a B. While it's running, you can copy a new firmware to it, and when the firmware is uploaded correctly it reboots into the other partition. If that doesn't work, it falls back to the old one. So it has all these nice things for small devices built in. And the last thing, which is fairly new, is NervesHub, a remote management system. You can imagine that, while I use this for a hobby, if you have hundreds of these devices out there managing sensors for you or whatever, you don't want to upload firmware by hand. You want some kind of management system so you can be sure a given version exists on your whole fleet of little devices out there. NervesHub has all that built in, plus some cryptographic security, so you really know it's your stuff and not somebody else's, and that nobody else can access it, et cetera.

So, with Nerves out of the way, let's see how I implemented this little device. Version two of the Raspberry Pi printer had these requirements: we needed to join a Phoenix channel, because we needed to get messages from the bot in real time; the bot would send an image, like "hey, print this please"; we would download that image and then print it on the printer. For joining the Phoenix channel there is, of course, an Elixir library, so that was not such a big deal. There's one called phoenix_channel_client, and I think there are several others. For downloading the image file there are also lots of options, because that's just an HTTP request, right? In the case of Tesla it's also very easy, and I think it uses Hackney under the hood, or it can at least.
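The download step really is just one HTTP GET. A minimal sketch with Tesla (the URL is hypothetical; in the real installation it would come out of the chatbot's channel message):

```elixir
# Minimal sketch: fetch a PNG as raw bytes with Tesla.
# The URL is hypothetical; Tesla's default adapter is used here,
# though the project could configure the Hackney adapter instead.
{:ok, %Tesla.Env{status: 200, body: png_bytes}} =
  Tesla.get("https://example.com/recipe.png")

# `png_bytes` is now a binary holding the raw contents of the PNG file.
true = is_binary(png_bytes)
```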
So basically you just get the image, and it returns the body as raw bytes: the PNG image. That's also not a big issue. But now we have the bytes; what do we do with them? We need to convert them to something the printer understands, and then write that to the printer. The first step, of course, is: you have a PNG, and you need to get the pixels out of it, decode it back into what it was. For that first step I looked on the hex.pm website. It doesn't seem like a very interesting task, right? Load the file and just get the pixels out of it. And there are actually several libraries that do something like that. For instance, ex_image_info, but no, wait, it only reads the metadata. You can see how big the image is, what kind of colors it has, some stuff like that, but not the pixels. png is an Erlang library, I think, but it only writes images, and we need to reverse that process, so that didn't help me much either. exmagick is an ImageMagick wrapper based on a NIF, and a NIF is a natively implemented function, a way to call out from Erlang into a C library. image looked actually pretty interesting, but it does not expose the raw pixel data either; it just lets you transform images. Then I stumbled upon imago, I don't know how you pronounce it, and it looked fairly interesting, because imago has a read_pixels function, and that's exactly what I needed. You give it a file, and you get a size, a width and a height, and a list of RGBA values, et cetera. So this is just the raw data. Although a list of bytes is not really the way to do it in Elixir, because it takes up quite a lot of memory.
You would probably use a binary for that instead, because a binary is just a chunk of memory, basically. Anyway, it still could work, because the images are not that big. But it turns out imago is written in Rust. I don't have anything against Rust, but the thing was, it didn't support the latest version of OTP, and I think Nerves only runs on the latest version of OTP, or there was some hard constraint like that. It was also quite large: it's a full image library, pulling in all those decoders, so it got quite big, and it felt like using a sledgehammer to crack this simple nut. And it turned out it doesn't quite cross-compile to Nerves, and that was the hard part that I could not figure out. I mean, I know a bit of C, et cetera, but I don't know that much about Rust and cross-compiling and environment variables and all that. I think everybody, especially in the Nerves Slack channel, agrees that it should be fixed, and maybe it's already fixed by now, I don't know. But it was kind of hard. I tried to investigate a bit, got into screens like this, and more screens like this, and at some point I thought: I'll just do it myself.

Actually, wrapping a C library in Erlang, writing such a NIF, is not very hard. It's basically taking something, putting some glue on it, and passing the result back to the VM. So I looked for C libraries that consist of just one file: a single header or a single file without any other dependencies, because those you can easily compile without a lot of dependency juggling. I named the resulting library pixels. pixels basically takes two C libraries: one called lodepng, which is one file, and a micro-JPEG decoder, also just one file.
And I just linked those together into a NIF, and that way I finally got my RGB data as bytes. Right now the only function it exposes is read_file; PRs are welcome, of course, if you want write_file or whatever. So that problem was solved. But there are actually two more problems I want to cover. How am I doing on time, by the way? Oh, okay, I have to speed up a bit, sorry; there's a lot to tell.

We need to convert this raw data into bitmap data, and then write the bitmap data to the printer. Printing is actually pretty easy, because on Linux, if you just plug in the device, you get this lp device and you can just write to it; that's how a line printer works, you just write characters to the device. But if you want to do graphics, like proper graphics, you have to actually read a spec like this. I looked at the Python library, and it turns out it does a lot of stuff: it builds up this hex-encoded string, then decodes it to raw bytes and writes it. I don't know. I'm going to skip over this a little bit, because I want to do a demo at the end. But basically, bitmap encoding means you take your bitmap and pack the pixels into bytes, one bit per pixel. So if the width is 8, you can represent a row as a list of binary-encoded integers, which is basically just this: this image encodes to this raw byte string here. And having said that, Elixir and Erlang have a really nice way of dealing with these bytes. You can very easily deconstruct binaries and pattern match on them, and also write bit by bit, not just byte by byte; you can append single bits into a new chunk of memory. That's basically what this code does: it recursively goes over the entire input data and decides, for each pixel, whether it's white or black.
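That recursion over the pixel data can be sketched like this. The module name and the threshold are illustrative, not the project's actual code:

```elixir
defmodule Bitmap do
  @moduledoc """
  Sketch: pack RGBA pixel data (4 bytes per pixel) into a
  1-bit-per-pixel bitstring for thermal printing. Illustrative only;
  the talk's actual code may differ in names and details.
  """

  def to_bits(data, acc \\ <<>>)

  # A pixel whose red component is above 200 counts as white (bit 0);
  # everything else becomes a black dot (bit 1).
  def to_bits(<<r, _g, _b, _a, rest::binary>>, acc) do
    bit = if r > 200, do: 0, else: 1
    # Append a single bit (not a whole byte) to the accumulator bitstring.
    to_bits(rest, <<acc::bitstring, bit::1>>)
  end

  def to_bits(<<>>, acc), do: acc
end
```

For example, eight white pixels pack into a single zero byte, and eight black pixels into `<<255>>`.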
If one of the color components is larger than 200, it's white, and otherwise it's black. Then we just append it here: take the bit string and append one single bit to it. So in the end you have this nice chunk of data.

Now I have this nice chunk of data, but I still need to get it to the printer. That was fairly easy before, right, because we had this nice lp device. But it turns out that on Nerves you don't have these lp devices, because there's no printer driver running. And that was actually quite a problem. So I had to resort to raw USB, to libusb. Is it working? Yeah. And luckily somebody had already made a libusb wrapper for me, so I used that to send the data. Although, I think nobody actually uses this library, because it is in a bit of a sad state, and that's really unfortunate, because there were some open issues I had to fix; it kept crashing on me. I'm going to skip over these a bit, but it was not very straightforward, and I still have some PRs open. I hope the original maintainer hears this talk: please merge them. But once I did that, it was fairly easy: I could open the raw USB device and then send my data to it using this bulk send command. And that basically got my print working.

Then I only needed to ship it as firmware, put it on the device, and put it into production. This is how my development setup looked; it's a typical Nerves development setup. You need to have a monitor, right? So I have a TV. I think this is the actual Raspberry Pi, connected to the printer and to some keyboard, and I had to move some toys to the side. Anyway. At some point it started crashing on me, and I had to take pictures of the screen, because it was scrolling so fast I couldn't see what was going on. So it made this nice GIF for me, although I didn't intend to do that. But with some help from the Slack channel, I got it all working.
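For reference, the framing the printer spec defines for raster data is the ESC/POS "GS v 0" raster bit image command. Building that byte string is easy with Elixir's binary syntax (the module and function names here are mine, not from the talk's code):

```elixir
defmodule EscPos do
  # Sketch: wrap 1-bit raster data in the ESC/POS "GS v 0" raster bit
  # image command: GS v 0 m xL xH yL yH d1..dk. The width is given in
  # bytes per row and the height in dots, both as 16-bit little-endian
  # integers; m = 0 selects normal (unscaled) mode.
  def raster_image(bitmap, width_bytes, height) when is_binary(bitmap) do
    <<0x1D, ?v, ?0, 0,
      width_bytes::little-16, height::little-16,
      bitmap::binary>>
  end
end
```

The resulting binary is what gets pushed to the printer over the USB bulk endpoint.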
For this talk I actually prepared a slightly different demo than the EatTech Kitchen one: I had already made a Hitchhiker's Guide quotes voice action for the Google Assistant, and I decided to hook it up to the printer. So, let's see if it works. "Okay Google, talk to the Hitchhiker's Guide quotes." Mm-hmm. No. Wait. One more time. "Okay Google, talk to the Hitchhiker's Guide quotes." Sorry, we'll cut that out. "Okay Google, talk to the Hitchhiker's Guide quotes." "Here's the best search result." (in Dutch) Okay. So, once that worked, it would print on this printer. Well, I also did it on my phone. So now it actually prints random quotes from the Hitchhiker's Guide, and you can say to the Google Assistant, "print it for me", and then it will actually start printing it for you. Anyway, there comes another one. You can try it; I'll keep this online so people can print their own quotes.

So, what else is there? The lessons learned here: I think Nerves is a really powerful framework. It's really well thought out, there's a lot of support from the community, and it gets better every release. In the last release they shipped a new networking library that can automatically set up an access point, so you can log into it, configure the Wi-Fi for the device, and then it automatically tries to connect to that Wi-Fi. So it's really smooth; you don't have to hard-code Wi-Fi credentials into your firmware like I have to do now. It's really starting to become more solid and more user-friendly. And there's a lot of support for different hardware, different little boards and LEDs and whatever. But if you want something specific, like I did, you have to get your hands dirty a little bit. Still, it's a very helpful community. And there's also a conference coming up in the U.S., I think in October.
So, you should go there, or send in a paper. Anyway, thank you very much for listening. This is the software; you can find it online. Any questions?

"Does Nerves have an interface to deal with the GPIO pins on the Raspberry Pi, and if it doesn't, are you able to use a port command, like gpiod or something like that?" Yeah. So, the question is: does Nerves have built-in support for the built-in ports of the Raspberry Pi, like GPIO? Yes, it does. There's a library; I'm not sure, I think elixir_ale, that's the old one, and there's Nerves Circuits now. Nerves Circuits. That supports basically all the circuits, also SPI and I2C and all those kinds of things. Those are all supported, so if you have a device that works like that, you just have to write the protocol. You don't have to do any C.

"You showed your development setup; with that new release, I've heard that you can SSH into Nerves and debug from your laptop. Is that a thing?" Yeah, because of this setup here, I can actually show that. Or maybe it's this one. Oh, there we go. So, I'm now inside that device, basically. If I try to do another print, it might show something. Let's just try that. And there we go. It does something, you see? So that's standard: you can go in, you can debug, you can run code here.

"So you could develop it on your own laptop?" Oh yeah, of course, I always develop on my own laptop. It's basically only in a situation like this, where the device is a black box and doesn't respond, that you don't know at which stage it fails to boot, and then you have to connect the monitor. For the rest, I can now just upload a new firmware and have it reboot, and I don't need to do anything special.

"Did you end up using NervesHub for the art installation, so you didn't have to go there anymore?"
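For reference, the GPIO support mentioned in that answer looks roughly like this with circuits_gpio (1.x-style API; the pin number is just an example, and this needs real hardware to run):

```elixir
# Sketch: blink an LED on GPIO pin 17 using circuits_gpio (1.x-style API).
# Pin number is an example; requires real hardware, e.g. a Raspberry Pi.
{:ok, gpio} = Circuits.GPIO.open(17, :output)

Circuits.GPIO.write(gpio, 1)   # drive the pin high: LED on
Process.sleep(500)
Circuits.GPIO.write(gpio, 0)   # drive the pin low: LED off
```

The same library family covers SPI (`Circuits.SPI`) and I2C (`Circuits.I2C`), which is what makes the "you just have to write the protocol" answer work in practice.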
Well, I actually tried, but I had just one device. I ran into some issue, but it was very specific: the device kept crashing due to some cryptographic error. But I would definitely use it. For this, with N equals 1, it's not really helpful; only if you have more than a couple of devices does it become really helpful, I think. Or if your installation is remote, right? Because that's also really handy: if your installation is in a museum somewhere, you cannot SSH into it, but you can go to NervesHub and then debug it live. Yeah. All right. Thanks.