This one will be Joel Stanley with FPGA, Kill the Radio Star. Please give him a warm welcome. Thanks everyone, and thanks to our room hosts for pointing out the first mistake in my talk. I got the title wrong, so thanks for coming. So today I'm going to be talking to you a bit about a project I've been volunteering on for the last couple of years. It all started off when some of my friends, particularly Tim, got involved in recording conference talks. And it turns out there's a whole heap going on behind the scenes, as well as the volunteers we've got here: there are people back in the NOC putting videos together and making all this conference streaming happen. So we're going to go through a bit of that today. We're going to go through some of the open hardware I've been involved in to make that process a bit smoother, and I'll talk about the TimVideos project that brings all those different projects together. And I'll go into a bit of detail about the Summer of Code project that I mentored this year, some of the troubles we had, the problems we solved, and what we're going to do next.

So, as I said, recording conference talks is hard work. linux.conf.au, PyCon, OSDC, all these conferences record pretty much all the content that's produced for the entire week, and often they also stream it online live. And there's a lot that goes into that in terms of setting up beforehand, getting laptop images installed, getting the software going, getting the hardware prepared. And it takes a long time. So there's a lot going on behind the scenes, I guess, is the message I'm trying to get across.

And this is your typical kind of setup for something like linux.conf.au. We've got the speaker at the front, that's me. We're recording my laptop screen here at the moment. We've also got a video camera over there recording me speaking, and a room operator who's mixing the two together live. That gets streamed up to the internet somewhere so that we don't saturate the conference's link with all the people trying to watch the video; we replicate it using EC2 or something in the cloud like that, and then remote viewers watch it on the internet. My specialty, the thing I've been working on, is the little green boxes, the HDMI-to-USB thing. They're the bits that take the pixels and shove them into a PC. The rest of it's just software.

So, the hardware that we're currently using. We've got one just here, I think. No. Yeah, it is one, sorry. It looks a bit different upside down. So that's a TwinPact. linux.conf.au has been using them for a while now. And it's expensive, it's end-of-life, it's proprietary, and it only does VGA. Those of you who've got a laptop out, have a look at the side: I'm pretty sure you won't find any FireWire ports on it. That's a technology that used to be a lot more popular for doing video, not so much these days. And that's a big inconvenience, because now you're not only buying the expensive box, you're also going to have to buy some kind of card to get the signals into your laptop. One of the things that this hardware does do, and it does it really well, is accept all kinds of inputs. When you rock up here as a speaker, you've probably seen speakers fumbling around with cables trying to get the right resolution out. This thing does a pretty good job of presenting the right things to the laptop to make it work correctly.
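"Presenting the right things to the laptop" is mostly a matter of EDID, the 128-byte descriptor block that a display (or a capture box pretending to be one) hands back over the cable to describe the modes it accepts; it comes up again later in the talk. As a concrete taste, here is a minimal Python sketch of the one hard rule every EDID base block must satisfy, the checksum. The board's real EDID handling lives in hardware, so this is purely illustrative.

```python
# EDID base block: 128 bytes, fixed 8-byte header, and all 128 bytes must
# sum to 0 mod 256, with the final byte acting as the checksum.

EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def checksum_ok(edid: bytes) -> bool:
    """True if the 128-byte EDID base block has a valid checksum."""
    return len(edid) == 128 and sum(edid) % 256 == 0

def fix_checksum(edid: bytearray) -> bytearray:
    """Recompute byte 127 so the whole block sums to 0 mod 256."""
    edid[127] = (256 - sum(edid[:127])) % 256
    return edid

# Example: start from a blank block, stamp in the fixed header, fix checksum.
block = bytearray(128)
block[0:8] = EDID_HEADER
fix_checksum(block)
assert checksum_ok(block) and bytes(block[0:8]) == EDID_HEADER
```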
What the guys have found is you can buy certain hardware that works really well with Macs but is rubbish with other laptops, or it might work really well with other laptops, but a Mac comes along and it just doesn't know what to do. So this one's a good compromise: it works with lots of different things, including lots of operating systems. And linux.conf.au is one of the real torture tests for hardware like this, because you've got people with Macs, people with Lenovos, people running Windows, OS X, and Linux, of course. So if it works at a conference like this, it's going to work pretty much anywhere.

The other inspiration for this was that Tim sat down and decided to make a conference in a box. So think of your local Linux users group: you want to be able to record that and stream it out live, so that attendees who can't make it along can still watch at home. Ideally you'd be able to unzip a suitcase or some kind of box, plug in a projector, press go, and off you go. And so Tim tried to build this. This is him at Christmas 2011 putting the box together. This thing had a laptop in it. It had one of the TwinPacts. It had a 3G modem and nice colour-coded ports for the user to plug in all their cables and whatnot. It was semi-successful; it would work. But there were a bunch of things that Tim wanted to do that couldn't be done, things like detecting when the user plugged in a cable. Because it's a black box that you can't see inside of, you couldn't sit there and get a signal, some kind of notification saying, hey, the user's plugged in a cable, let's do our thing now. If you had open hardware, you could obviously hack it and make it do that for you. So a bunch of those limitations meant that project fell flat after a little bit of effort, and we started looking a bit further abroad.

Now, this is the NeTV. This is made by Bunnie. He gave a keynote at LCA in Canberra, if any of you were there. He's made a bunch of different open hardware over his life, and this one was kind of an inspiration for our project. It had lots of features that we thought were useful for us. It had a little FPGA in there that would take in an HDMI signal and pass it through to the other side, which is a requirement for what we were doing. It also had a Linux system that interacted with that, and that was useful because we obviously wanted to get the signal out, and what's better to manage signals than a little Linux image? It also had some other cool features that we wanted to adopt in our system. One of those was EDID override. So that's where you plug in your laptop and it presents a million and one different resolutions and you have to pick the right one. It would be much nicer if you could override that and say, no, we're just going to present the one useful one to you. So that was something we wanted to implement as well. And being FPGA-based, all of a sudden your hardware problems become software problems. So it's a lot more malleable and you can sit there and improve it on the fly. FPGAs are a little bit hairy to deal with, as we'll talk about later, but it was a nice start. And so we started looking at what to do next. Can we take this little bit of hardware and turn it into something that could be used to record video, HDMI input from a laptop at a conference? And that's a nice little point to look at the bigger TimVideos project. So what is TimVideos?
It's a bunch of projects unified by the goal of developing a simple conference and presentation recording system, so that people around the world can participate in conferences like this without the expense of flying down to Auckland. So there's a bunch of different software projects. There's the streaming system, a virtual machine image that runs in the cloud and encodes and streams the video. There's gst-switch, which the guys were hacking on here in the Catalyst offices last week, which mixes video and potentially one day will do things like speaker tracking with automated cameras and whatnot. But the one I'm going to talk about is HDMI2USB. Tim likes his names simple, straightforward, descriptive. So just like the name of the project, TimVideos, HDMI2USB does what it says on the box: it takes in HDMI and turns it into USB.

So we did a bit of brainstorming: what is the ideal hardware device? Open hardware, number one requirement. We need it to be debuggable, and we want people to be able to contribute fixes as they run their conference and find new bugs in it. And it just makes lots of sense. I've been involved with a couple of different open hardware projects; it's fun to do, collaborating with people from around the world, and it was really the only way to go forward. We wanted digital connectors. VGA is still pretty prevalent — we have VGA here — but this is the first conference I recall speaking at where every single lectern had an HDMI input, and I imagine we'll see that more and more going forward. So the way forward is some kind of digital connector, whether it be DisplayPort or HDMI or whatever. We wanted the output to be USB and Ethernet, connectors that are on the side of your laptop, so it's a lot easier to get the video out. With Ethernet, potentially you don't have to have a laptop up the top anymore like we've got here today; you could just run the cable back there and it could all stream straight back to the NOC. The other advantage of Ethernet is higher throughput: gigabit Ethernet can carry a lot more data than USB 2 can. And for USB 3, the ICs are pretty expensive, so we haven't gone down that path yet. Fewer resolutions: as I just said before, we override the EDID information so we only present the user a single widescreen resolution and a single 4:3 resolution, so they plug their laptop in, say, all right, I've got widescreen slides, and off they go. And, as I said again, debuggable. We really wanted this thing to be debuggable, a bit like a bit of software: you get a core dump out of it, or some kind of trace or logs, that either the user can use to diagnose their own problems when they're up here, or maybe you capture it and send it back to the development team and they can go, oh look, we weren't handling that corner case, let's fix it up for the next rev.

So the first step was to take the NeTV and try to make it record video. It was designed specifically to never be able to record video, because it was an HDMI encryption proof-of-concept device, and if it could record video that would potentially make it violate the copyright rules it was intentionally trying to work around. So Tim said, hey, let's do that anyway. Turned out that didn't work. Unfortunately, Bunnie did a good job, and it wasn't really feasible to use that board to do what we wanted to do with it.
But the colleague that Tim did get in touch with, who helped him start evaluating designs, suggested this board. This is a Digilent Atlys board. Digilent make lots of different prototyping and dev boards for different FPGAs and whatnot, and this one was pretty ideal. It's got two pairs of HDMI inputs and outputs. It's got USB 2 for getting the video out. It's got gigabit Ethernet for later expansion to do Ethernet streaming. Potentially sound as well, for capturing audio. And, in case you're interested, a Spartan-6 FPGA, so a pretty beefy FPGA for what we needed. So we grabbed that one and started playing with it, and that became the HDMI2USB prototype. I've got one out the front here. I was going to stream my talk through it, but I hit some trouble, which I'll talk about later. So it takes in HDMI, passes it through, and spits out video over USB, simple as that. It's got a bunch of switches and LEDs and a serial port and whatnot for debugging. And we've taken that a fair way. It's more or less usable in a conference, some stability issues notwithstanding.

One of the problems we had to solve in doing this was getting a video stream down that thin USB pipe. A raw 720p, 24-bit-per-pixel stream at 30 frames a second is about 600 megabits a second. And the usable bandwidth of USB 2 is about 40 megabytes per second — it's 480 megabits per second ideal signalling rate, but you get about 40 megabytes per second in the real world going through that pipe. And obviously that doesn't fit. (There's a back-of-the-envelope sketch of these numbers at the end of this section.) So we added a JPEG encoder into the mix. It subsamples the pixels a little bit, but mainly what it's doing is compressing the video stream for us. And so now, all of a sudden, it's 2 megabytes per second — no worries, that fits down our USB 2 pipe easily. To do that encoding, we used a JPEG core from OpenCores. In the FPGA world, you talk about IP blocks; in the software world, you might call them libraries. So this is a library that you put in the FPGA, and it will take in a raw video frame and output a JPEG frame to another buffer. We used that to do the proof of concept and managed to get about 15 frames a second at 720p. Not horrible, but not quite what the goal was. We wanted to get 30 frames a second, so that if a presenter put some video on their screen, it could record that. And also, potentially, you could use it to record off the camera, if you wanted to stream the camera over Ethernet, for instance. So there was some work to be done.

Sorry, this is just going over that again a little bit more. Tim gave a lightning talk last year in Perth at linux.conf.au. We were getting about that 15 frames a second, and we believed it was limited by the JPEG encoder. We knew that there were commercial encoders that would fit on the same FPGA that would do 1080p at 120 frames a second. So we knew that the hardware itself was capable; we just had to optimise our design enough. Just.

So what do you do when you've got a hard problem to solve? You give it to an intern, right? And in the open-source world, we don't quite have interns, but we do have the Google Summer of Code. I've been involved in the Google Summer of Code program since 2007, when I went over to America and worked for One Laptop per Child. And since then I've mentored a few projects as well, and it's lots of fun. The way I love explaining the program is: flip bits, not burgers.
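Before moving on, here's the back-of-the-envelope bandwidth sketch promised above. Nothing here beyond the numbers from the talk; the only added assumption is the 30 fps figure for the raw input.

```python
# Why raw 720p doesn't fit down USB 2, and why MJPEG does.

width, height = 1280, 720          # 720p
bytes_per_pixel = 3                # 24 bits per pixel
fps = 30                           # assumed input rate

raw_bytes_per_sec = width * height * bytes_per_pixel * fps
print(f"raw 720p30: {raw_bytes_per_sec / 1e6:.0f} MB/s "
      f"({raw_bytes_per_sec * 8 / 1e6:.0f} Mbit/s)")    # ~83 MB/s, ~660 Mbit/s

usb2_practical = 40e6              # ~40 MB/s real-world USB 2 throughput
gige = 1000e6 / 8                  # 125 MB/s signalling rate on gigabit Ethernet

print("fits USB 2 raw?          ", raw_bytes_per_sec < usb2_practical)  # False
print("fits gigabit Ethernet raw?", raw_bytes_per_sec < gige)           # True, just
print("fits USB 2 as MJPEG at ~2 MB/s?", 2e6 < usb2_practical)          # True
```

The gigabit Ethernet line is also why the later Ethernet-output work, mentioned towards the end of the talk, can consider skipping JPEG entirely.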
So, flip bits, not burgers: instead of spending your summer holidays working at Macca's, you can work on an open-source project, spend your time developing your skills, potentially get a job out of it, and stretch your mind instead of just your flipping muscles. Working on an open-source project for three months is what the students commit to. In return, the project provides a mentor to guide the student through how to do open-source development, how the project works, and what the problem is they're going to solve. The student tends to propose the project, and propose how they're going to solve it, with guidance from the mentor. So this year TimVideos had eight projects, which was pretty huge for such a small project, but students from around the world — we had a bunch from India and different places — worked on all the projects listed here, as well as the most important one, which was HDMI2USB, the one I mentored. And I've got my student here; he's come over from India to join in and hack last week, so thanks.

So these were the requirements of his project: we were currently getting about 15 frames a second, and we wanted to get 30, simple as that. So he sat down and came up with a bunch of ideas. One of the ideas we had was to divide and conquer: split the video image into two halves, duplicate the encoding hardware inside the FPGA, encode half each, stitch them back together, and off you go. A few problems with that. JPEG doesn't quite work like that, but also we'd be reaching the limits of the FPGA: the number of logic elements inside — it would have fit, but we wouldn't have had much room for future improvements. Another idea he had was to optimise the JPEG core. He had a bit of experience through his university studies taking VHDL and optimising it, so his proposal was to pipeline the design a little further and try to get a bit more performance out of it. So here he is in his dorm room — he's, I think, stolen his roommates' laptops — getting the HDMI2USB going. But we weren't seeing immediate improvements, immediate obvious areas to work on.

It's probably a good time to step back and look at the architecture of the FPGA system. So what we've got here is the HDMI input and output. That goes into an HDMI matrix; that's the bit of logic that decides which input goes to which output. We also calculate the resolution there, and it has that EDID override hack that I spoke about earlier. That goes to an image selector, which picks from either the HDMI matrix or the pattern generator. The pattern generator is a little bit of logic in there that generates a test image. I'll fire it up in a little bit and we'll be able to see the test image, the stripes, on the screen (there's an illustrative sketch of the idea below). So that way you can see that something's happening without having to have a valid input there. That goes into a buffer, and that buffer sits in DDR memory. So at this point we've slurped in the HDMI frame, shoved it into DDR, and then it gets sucked back into the FPGA by the JPEG encoder, which does its thing and spits it out to another buffer back in the DDR, and it gets taken from there down the USB port by a little Cypress microcontroller. So that's the basic architecture of the system. We thought we were going to focus our efforts on the JPEG encoder and make that go faster.
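As promised, a quick stand-in for the pattern generator mentioned above. The real one is a block of logic inside the FPGA; this illustrative Python equivalent just produces the same kind of image — vertical colour bars at 720p, 24 bits per pixel — as a raw RGB frame you can pipe into a viewer. The exact pattern on the board may differ.

```python
# Generate a 1280x720 colour-bar test frame as raw 24-bit RGB bytes.

WIDTH, HEIGHT = 1280, 720
BARS = [  # classic colour-bar order: white, yellow, cyan, green,
          # magenta, red, blue, black
    (255, 255, 255), (255, 255, 0), (0, 255, 255), (0, 255, 0),
    (255, 0, 255), (255, 0, 0), (0, 0, 255), (0, 0, 0),
]

def make_frame() -> bytes:
    row = bytearray()
    for x in range(WIDTH):
        r, g, b = BARS[x * len(BARS) // WIDTH]  # which bar this column is in
        row += bytes((r, g, b))
    return bytes(row) * HEIGHT   # every row of the pattern is identical

with open("testpattern.rgb", "wb") as f:
    f.write(make_frame())

# View with e.g.:
#   ffplay -f rawvideo -pixel_format rgb24 -video_size 1280x720 testpattern.rgb
```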
So the first thing we did was: if we increase the clock speed, surely we should get a few more pixels through, and if we decrease the clock speed, we should get a few fewer pixels through. The graph doesn't illustrate it as well as I hoped, but what I'm showing there with the blue and the red lines is that as the clock speed goes up, we expect the frame rate to go up a bit, and as the clock speed goes down, we expect the frame rate to go down a bit. What actually happened was: nothing. So, head-scratching time. As any good engineer will tell you, when you don't understand a problem, you should stop and measure what you're doing. Before you even try to optimise a system, you should sit there and get some good benchmarks, get a good understanding of the system, so that when you make your improvements, you can understand what they're changing and how they're changing it.

So he implemented a bunch of debugging features in the FPGA. It would collect counters from different parts of the system and spit them out over USB, and he wrote a little curses program that would display those stats. We could know the state of it: it's detecting input on HDMI1, it's encoding at a certain quality, the resolution is this, the expected frame rate of the input is this, the actual frame rate of the output is this, and whatnot. Those numbers gave us a better idea of what was going on inside the system. It still didn't give us the answers we were after, though. There were a few theories thrown around. Maybe the DDR was too slow, because we were pushing the image into the DDR and taking it back out — DDR being the RAM that sits next to the FPGA. Maybe that was too slow, maybe we needed to go faster. But after a bit more investigation and reading through the code, we discovered that we were dropping input frames. On the previous slide, which was a real capture from early in the project, you can see the input frame rate is 50 frames a second, or 49, and the output is 15. So where were all those frames going? It turned out that we were sitting there sucking in a frame from HDMI, putting it in DDR, and then kicking off the JPEG encoder. And while the JPEG encoder was running, we weren't bringing in the next frame; it was just getting dropped on the floor. The other thing is the JPEG encoder only needs 16 lines to get started, yet we were waiting for the whole frame to land in DDR before kicking off the encoder. So there were a few pipelining issues in the original design that weren't immediately obvious, because it worked, right? But as we were trying to speed it up, we realised there were some flaws in the design. So, to fix that: instead of starting the JPEG encoder when the frame got to the bottom, we started it when we'd got 16 lines in, which is the minimum amount required, and all of a sudden we got a huge frame rate improvement. The other improvement we made was to create a second buffer. By kicking off the JPEG encoder as soon as possible, you more or less get the entire frame encoded before the new frame comes in, but there's that small crossover in the last 16 lines where the new image has started to come in. With a second buffer that fills up while we're encoding the first, that problem went away.

So, here we are with the results. Our goal was 30 frames a second, and we reached more than 45 frames a second with the work he did. So the Summer of Code was a huge success. That's the firmware that's running here now.
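To make the buffering fix concrete, here's a toy timing model of the two designs. The numbers are made up but labelled: input frames arriving at 50 fps, and an assumed 25 ms of encoder work per frame. It isn't a simulation of the real gateware, just an illustration of why overlapping capture and encode changes the frame rate so dramatically.

```python
import math

FRAME_PERIOD = 20.0   # ms between input frames (50 fps, as in the talk)
ENCODE_TIME = 25.0    # ms of encoder work per frame (assumed, illustrative)

def fps_single_buffer():
    """Old design: wait for the whole frame to land in DDR, encode it,
    and drop every input frame that arrives while the encoder is busy."""
    t = FRAME_PERIOD + ENCODE_TIME                  # capture, then encode
    t = math.ceil(t / FRAME_PERIOD) * FRAME_PERIOD  # wait for next frame start
    return 1000.0 / t

def fps_double_buffer():
    """New design: start encoding 16 lines into the frame, and capture the
    next frame into a second buffer meanwhile. Capture and encode overlap,
    so throughput is set by whichever of the two is slower."""
    return 1000.0 / max(FRAME_PERIOD, ENCODE_TIME)

print(f"single buffer: {fps_single_buffer():.1f} fps")   # 16.7 fps, ~the old 15
print(f"double buffer: {fps_double_buffer():.1f} fps")   # 40.0 fps

# With the real encoder a touch faster than this toy 25 ms figure, the
# overlapped design tracks the 50 fps input, which is how the board got
# past 45 fps.
```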
We've pressed on with further work since, so that was really good. The other work that we've been doing is trying to keep this free and open source as far as possible. There are some limitations with working with FPGAs that, at the moment, we just have to deal with, things like the toolchains being big, proprietary, horrible things. But where we can, we've made it open. So all the VHDL and Verilog that runs on the board is MIT licensed. Initially, we were using a proprietary JTAG firmware that Digilent provided; we replaced that with open source code. Then there was the Cypress firmware, which I'll talk about in a minute. It was open source code, but it had a big blob, a library that the vendor provides. We've replaced that now, and we build it with SDCC. So instead of installing a Windows VM, or going through a complicated dance getting Wine working with a Windows-only toolchain, you can apt-get install the tools, type make, and off it goes.

So, the Cypress. It's a pretty common chip; you'll find it in lots of embedded devices. It's got a little 8051 microcontroller, a tiny core that's not clocked very fast. But what it does is suck data from this 8-bit bus and DMA it into a USB FIFO, and that appears on your PC. So pretty much the way it works is: you describe the USB device to the PC, it enumerates, and then it just kicks off this DMA process and away it goes. The data is not going through the CPU at all, and that's how you get USB line-rate data coming out of a tiny little microcontroller. So we have it spitting out video to the PC. We also have a debug UART on there, so you can see the debug information coming from the FPGA, and you can also use that for control. So while there are buttons on the board, if you want to automate it, you can do that through the serial port by saying, you know, set this resolution, off you go (there's a small scripting sketch of that at the end of this section). The same device is used to program the FPGA. As you can see up there, it's got the data buses, and it's also got those GPIOs — GPIF, I think they're called in the diagram. They're used to bit-bang JTAG to the FPGA. So we load the FPGALink firmware onto the Cypress, then we send down the FPGA bitstream, and that gets programmed into the FPGA by this microcontroller, which is useful: it means you don't need a dedicated programmer to program the FPGA. You can plug in the USB port, have a udev rule that sits there and first loads the FPGALink firmware, sends the FPGA bitstream down, then loads the HDMI2USB firmware, and it re-enumerates as a USB device. So, as I said, it appears as a webcam, and there's a serial port.

Just a bit of an aside about USB devices. There's this concept called USB class devices, and class devices generally have in-built drivers for your operating system, no matter which operating system you choose to run. Things like your USB keys are mass storage class devices; that's why you can walk up to anyone's PC and plug one in without having to install a driver first. There's a similar concept for webcams. Most webcams, like the one in your laptop, are USB video class devices, and to get the basic functionality working you don't need to install any software. So in theory they plug and play on all OSes. And we know how well that works. So, as I said: perfect plug and play, no worries. In theory it works out of the box, no worries.
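And here's the small scripting sketch promised above, using pyserial to drive the board's control/debug serial port. The device path and the exact command strings are assumptions for illustration only; the real firmware's command set may well differ.

```python
# Minimal sketch: automate the board over its serial control port.
# Requires pyserial (pip install pyserial).

import serial

PORT = "/dev/ttyACM0"   # hypothetical: wherever the serial port enumerates

with serial.Serial(PORT, 115200, timeout=1) as ctl:
    ctl.write(b"status\r\n")                 # hypothetical command: dump the
    print(ctl.read(4096).decode(errors="replace"))  # debug counters and state

    ctl.write(b"resolution 1280x720\r\n")    # hypothetical command: force a mode
```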
So — just ignore the slide for a second — we present ourselves as a USB video class device, which means you can plug this thing into any old laptop and start recording video off it, which is pretty handy. It means you don't have to worry about configuration of the laptop anymore. You don't have to worry about specific drivers. Anything that can play a webcam, like MPlayer or VLC or GStreamer, can display that video data and record it.

Let's talk a little bit about the EDID hack. As I said before, you come to present, you plug in your laptop, it reports a million and one resolutions, and the laptop will always pick the wrong one — a bit like plugging in a USB cable, it always goes in the wrong way up first. So if you only show one, you don't have this problem anymore. Bunnie implemented this idea in the NeTV. In their case they were intercepting the EDID bytes as they went past and rewriting them. In our case we don't even pass them through; we present our own custom EDID that's hard-coded into the FPGA. So in theory there are no problems. In practice we've had a few bugs with detection. It may be the EDID, it may be something else, but that's what the team spent time debugging last week. So if you've got interest or knowledge in this area, come see me afterwards.

One other thing: as I said, these IDEs are horrible, if you've ever had to use an FPGA toolchain. You download about five gig worth of installer, you install it, you've got 20 gig of data on your drive, and then you have to go and get a license tied to that specific machine, and it's all just a headache. So I did a bit of work trying to figure out how to get continuous integration working for an FPGA-based system. What I did was write a small FUSE userspace filesystem that just logged all the files that got touched (sketched below). I'm not sure if this already existed, but it wasn't hard to knock up. So we did a firmware build with the toolchain mounted through this filesystem, and I got a log of all the files that were actually needed. It turns out that of my 19.5 gigabyte install, I needed 250 megabytes worth of files. And that was interesting, because it means it's feasible to have a small cloud image that dynamically pulls down this 250 meg bundle, runs the build process, and then deletes itself. In theory, that might be useful for running on something like Travis CI, which gives you virtual machines that are destroyed at the end of the build run. In practice, it's no good, because you still have to worry about the licensing problem, and then there's the whole kettle of fish of EULAs and copyright on the FPGA toolchain and pulling it down off the internet like that. So that didn't end up being a good enough solution. What we did end up implementing was a machine in the cloud, pre-installed with a licensed version of the toolchain. Travis gets poked by GitHub when a commit comes in, fires off a build job over on that VM, and reports back once it's done. This is all in the TimVideos repository. So if you've got a situation like that, where you've got a big, ugly toolchain that you can't install dynamically in the cloud, because it's not open source or whatever, you can still use tools like Travis to drive your build process, which can be handy. I really recommend Travis.
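Here's the FUSE trick from above in miniature, written against the fusepy library (pip install fusepy) — an assumption, since the talk doesn't say which FUSE binding was used. It mirrors a directory tree read-only at a mountpoint and appends each opened path to a log file; point the firmware build at the mountpoint and the log tells you which toolchain files were actually touched.

```python
# Usage (paths are examples): python loggingfs.py /opt/Xilinx /mnt/ise used-files.log
import os, sys, errno
from fuse import FUSE, Operations, FuseOSError

class LoggingFS(Operations):
    """Read-only passthrough filesystem that records every file opened."""

    def __init__(self, root, logpath):
        self.root = root
        self.log = open(logpath, "a", buffering=1)  # line-buffered log

    def _full(self, path):
        return os.path.join(self.root, path.lstrip("/"))

    def getattr(self, path, fh=None):
        try:
            st = os.lstat(self._full(path))
        except OSError:
            raise FuseOSError(errno.ENOENT)
        return {k: getattr(st, k) for k in
                ("st_mode", "st_size", "st_uid", "st_gid",
                 "st_atime", "st_mtime", "st_ctime", "st_nlink")}

    def readdir(self, path, fh):
        return [".", ".."] + os.listdir(self._full(path))

    def open(self, path, flags):
        self.log.write(path + "\n")   # the whole point: record what's touched
        return os.open(self._full(path), os.O_RDONLY)

    def read(self, path, size, offset, fh):
        os.lseek(fh, offset, os.SEEK_SET)
        return os.read(fh, size)

    def release(self, path, fh):
        os.close(fh)
        return 0

if __name__ == "__main__":
    FUSE(LoggingFS(sys.argv[1], sys.argv[3]), sys.argv[2], foreground=True)
```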
Travis, once you get it going, just works, and the advantage of it is that whenever someone forks your project on GitHub, they also fork the configuration file for Travis, so they get continuous integration themselves, as long as they can access your build machine.

So that's about it on the different aspects of the project and the different debugging that we did. Just in general, the Atlys was a really good dev board; it was a good decision to go with that one. The problem is, it's not open hardware. You can get schematics for it, but they're copyrighted. And it's quite expensive, because it's got a bunch of ports and a bunch of IO and whatnot on there that we just don't use. Also, the FPGA on there is much bigger than we needed, so it's more powerful than required, which means you're paying more for it. Also, Tim was getting bored at this point, so he decided he needed a new project and created some custom hardware. We've got some of these that you can come see afterwards. These are some photos of doing the bring-up on my kitchen table. So this board, similar to the Atlys, had an FPGA and a bunch of connectors on it, but in this case we had footprints for all kinds of connectors, and you can see there it's populated with some DVI connectors, and it could also have the HDMI ones as well. The bring-up was mostly successful. We had a few bugs, but considering the complexity of the board, and the fact we did it completely with open source tools, it went really well. I think version two is in production at the moment, so maybe my next conference talk will be a report on that one.

The other bit of hardware we created was by another one of our Summer of Code students, who's over here in New Zealand at the moment. Rohit created a VGA input board for doing legacy input. It's a little ADC with two VGA connectors on it, so that way you can still get VGA in for situations where you want that fallback for users whose laptop doesn't have digital output or whatnot. Again, that one's all open hardware, all done in KiCad, and the source is up on GitHub. We also, as part of the Summer of Code, prototyped Ethernet output on the Digilent Atlys. That would let us do higher-bandwidth streaming, so maybe you don't need to do JPEG encoding; you can output the raw stream, because you've got that throughput there. And apparently the software guys did some work as well.

So, in the future, we're going to fix some bugs. As I said before, this thing sometimes works perfectly, sometimes not so much. I had four of the Atlys boards set up last week, and we had about 20 laptops in the room, and people cycled through plugging their laptops in, and for the most part it all worked perfectly. We only found one laptop that didn't work. But the previous day, my laptop wouldn't work at all. So we've got some kind of intermittent issue there. If you'd like to volunteer to help us find it, come see me. I've bought myself a Novena laptop, the open source ARM laptop made by Bunnie again. It's got an FPGA inside it, and an HDMI connector attached to that. I plan to make a board for it so that you can do pass-through and recording using that system. You wouldn't have to have any of the USB stuff, because the FPGA is connected straight to an ARM using a high-speed bus there. But yeah, that one will be another fun project. We've also had a look at doing an all-in-one system.
So, back to the whole conference-recording-in-a-box thing: maybe a little embedded Linux system connected to the FPGA that could suck in the video and encode it itself, or maybe store it straight to a hard drive. We've been playing with one of these boards, a Zybo, which has a little Xilinx part on it with a dual-core ARM and an FPGA in the same package. So, a bit like the Novena, pairing an ARM with an FPGA, you've got an all-in-one there, and hopefully it simplifies the design of the system. That's about it. Has anyone got any questions? Sorry, point to me.

[Audience] A lot of your problems seem to be related to, you know, fitting the video signal inside USB 2.0, with all the JPEG encoding. Would changing to USB 3.0, and not needing to encode to JPEG, solve any of your problems, or does that open a whole new can of worms?

So, potentially. We've got the JPEG encoding working, so that's not a problem now. But one of the things about USB 3.0 is the price. This little Cypress chip, I think, is worth about $5. The USB 3 equivalent is worth about $50 to put on the board. So for a board that might cost $200 or $300, that's a significant chunk of the parts cost. So that's why we haven't really focused on USB 3.0. Hopefully the ICs come down in price as time goes on. But yeah, for now, you're better off going with Ethernet for that, just because of the cost.

[Audience] Any plans later on for wireless standards like AirPlay and Chromecast and, you know, the stuff that comes stock with new laptops?

Sorry, I don't quite understand.

[Audience] A lot of the new equipment that comes out, the new laptops and stuff used by presenters, basically comes shipped with the new wireless standards like Chromecast and AirPlay and stuff like that. Even though they're non-free — although Chromecast is kind of free-ish — any plans for support for those? To be able to stream into the device?

If you had one of the Linux-based ones, I guess you could do that. If people start using that for presenting at conferences, then it obsoletes all of this, because you can just capture it directly. But yeah, if you're talking about being able to receive it into here — I mean, you can plug a Chromecast into your HDMI port already, and that would just work, if you'd rather present using that. Yeah.

[Audience] The second lot of hardware that you've done, the custom hardware, what's that based on? I missed it if you did say.

It's custom. It's based on this board in that it's got the same FPGA in it.

[Audience] Okay, so it's using the same FPGA. Have you looked at, just out of interest, the Zynq range?

So that's what the Zybo has on it; sorry, I didn't explain that very well. It has a Zynq on it.

[Audience] Okay, cool.

So that's a Zynq-7000.

[Audience] Hi, Joel. Do you want to give a demo?

Oh, I'm glad you asked. No pressure? There's the picture, and the other one. All right. Oh, that's working, I promise. It wasn't working before. So yeah, that's it; there's not much else to see. Anything you want me to show off in particular? Oh, that's me. Hang on. I will, as soon as I get the video streaming working — which is not working. Cool. Yeah, yeah. So what we've got set up here is my laptop going into the board, with the conference recording and projection system coming out of it, and USB here, which I programmed earlier. At the moment, you have to program it at power-on, so we programmed the board then. And so it's passing through; that's the pass-through signal. JPEG isn't working. Go demos. So sorry about that.
But I'll reset it while someone asks me a question, and we'll see if it works. Thanks. Sure.

[Audience] Just interested what video standards the USB video class allows you to export. Does it support MJPEG?

Yeah. So it supports raw and MJPEG, and these days it supports things like VP9 and whatnot. I'm not sure how you'd go on operating-system support for that. I guess, you know, on Linux, if you're using MPlayer, you'd be fine. But one of the advantages of sticking with MJPEG is that it meets the requirements, and it's also supported on everything, so you don't have to worry about installing any codecs anywhere.

[Host] Okay, we have time for one more question.

[Audience] What's the test pattern? Does it handle the sound mixing and the video mixing for the presenter and the presentation?

In theory you could do mixing on the board. It doesn't handle sound at all at the moment. But yeah, in theory you could have some kind of mixing controls on there and have it do the mixing in hardware. The guys have spent lots of time developing software to do that, using GStreamer, so I'm not sure that would be the best use of our time. But yeah, you potentially could. Well, it's got two inputs, so you could take both streams in and maybe do picture-in-picture or side-by-side on the board, yeah.

[Host] Okay. Thank you, Joel. And could everyone give a big thank you to Joel? But before that...