Tim "mithro" Ansell has come all the way from Australia to talk to us about dissecting HDMI and developing open, FPGA-based capture hardware for sharing talks outside of the room. He'll be explaining how to dissect it, and I'm looking forward to hearing the talk in a second. So please give Tim a round of applause. Thank you.

Okay. Hi, I'm Tim, and in theory, if my slides change, you would see that. I kind of have too many projects, and I'm going to be discussing one of them. This is another project that I gave a lightning talk on earlier; if you didn't see it, it's a microcontroller that goes in your USB port. People wanted to know when they could hack on it: tomorrow at 2 p.m., apparently.

First, I want to say that I'm a software developer. I'm not a hardware designer, I'm not an FPGA developer, I'm not a professional in any of that. I develop software full-time, so this is my hobby. This information comes from a couple of projects that I started, but a lot of other people did the majority of the work, and I'm just telling you about it because they're too shy to come up and talk about it themselves. So a big thank you to all these people who've helped me in various ways. In these slides, any of the blue things are links, so if you're playing along at home you can get to them via that URL and click on them. There are probably other people I've forgotten who are not on this list; I'm very sorry.

The title of this talk could be "software guy tries hardware and complains". I've had a really hard time figuring out what to call this talk, and you'll see some other attempts at naming it better.

So, a bit of history: how did I end up doing HDMI stuff? TimVideos is a group of projects trying to make it easy to record and live stream user groups and conferences like this event. However, we want to do it without needing the awesome team that is doing the recording here. These guys are really, really organized and professional; we want people with no AV experience at all to be able to just make it happen. So this is how you record a conference or user group. I'm going to be talking about these two things here: the HDMI-to-USB devices that we created. In our setup they're used both for camera capture and for capturing slides.

HDMI2USB is FOSS hardware for doing HDMI capture, and it actually has a bit of history with the CCC, because it was inspired by a speaker who spoke here: bunnie, who presented his NeTV board, an FPGA man-in-the-middle attack on HDCP-secured links. His talk is really awesome; it's way more technical than mine and gives you some really awesome details about the cool things he did to make that work. Mine is much more basic: you don't need much experience with HDMI to follow my talk.

Our device works like his does, except his was deliberately designed to not allow capture. Our design allows capture: it effectively man-in-the-middles the connection between the presenter's laptop and the projector, and provides a high-quality capture out the USB 2.0 port. It uses an FPGA to do that, because using an FPGA makes hardware problems into software problems, and as I said, I'm a software developer; I prefer software problems to hardware problems. The way it works is it appears as a UVC webcam, so you can use it with Skype or Hangouts or any of those things without needing any drivers on sensible operating systems like Mac and Linux.
On Windows you need a driver that tells it to use the internal driver; it's kind of weird. It also has a serial port, because we have the ability to switch which input goes to which output, kind of like a matrix.

This is the open source hardware we designed. It's in KiCad, you can find it on GitHub, and I'm quite proud of it. It's quite a good little kit; we don't use all its features yet, but it's pretty awesome. And it's in use: we've used this technology to capture at a bunch of conferences. PyCon AU in Australia, linux.conf.au in Australia (as I said, I'm Australian), and DebConf, who are not Australian; they used it in South Africa, I think. And there are a whole bunch of other people around the world using this, which is pretty awesome. The main reason I wanted it to be open source was so that other people could use it, learn from it, and fix problems, because there are lots of problems we've run into.

The other thing is, this is all full of Python. We use Python to create the firmware for the FPGA and all these other areas. If you want to find out more about that, go and watch my talk at PyCon AU, which was recorded with the very device I'm talking about, which is kind of cool.

But as I said, this is going to include lots of problems. The first one is that people still use VGA. This kind of makes me sad, because VGA is not HDMI. It was invented in 1987 and it's an analog signal. While HDMI shares some history with VGA, you can't use the same techniques for capturing HDMI that you can for VGA. So why do people still use it? It's old and bad. We developed a VGA expansion board to effectively allow us to capture VGA using the same system. By "developed" I mean we have designs and some boards exist, but nobody's actually finished the firmware to make them work yet. So I'd love help there.

There's also another problem. I want to do this all open source, as I said. The HDMI ecosystem has commercial cores you can buy, and they work reasonably well, but you have to buy them, and you don't get the source code, or if you do get the source code, you can't share it with other people. As well, I want it to be open source because we wanted to solve all those problems people have when plugging in their laptop and it not working, and the commercial cores aren't designed to let us solve those problems permanently. So we created a new implementation. Anybody who's ever done a re-implementation knows what that means: you've got new bugs, which I will describe quite a bit. So this talk could be called "debugging HDMI" rather than "dissecting HDMI", because it includes a lot of information about how things went wrong.

Okay, so that's the introduction of why we're here and why I'm talking about this. So how does HDMI work? Well, HDMI is actually reasonably old now. It was created in 2002 and is based on the DVI specification. DVI was created in 1999, so DVI is 17 years old, and DVI was designed to replace VGA and shares a lot of its history. HDMI is backwards compatible with DVI electrically and protocol-wise, but uses a different connector, and this is an HDMI connector. You've probably seen them all before. If you look closely you'll see that there are 19 pins on the HDMI connector; that's pin one. So what do all these pins do? Well, there are five pins used for ground. There's one pin used for power; it gives you five volts at 50 milliamps.
You can't do much with 50 milliamps, except maybe power some kind of adapter or converter, or a whole microcontroller. Some cheap devices try to draw like an amp from this, which is not very good, so that's another thing you should watch out for. There are three high-speed data pairs which transmit the actual video data, and they share a clock pair; that's these pins here. And then there are five pins used for low-speed data. So that's all the pins on the HDMI connector.

You might have noticed there were a whole bunch of different things I said there, and you need to understand a whole bunch of different protocols to understand how HDMI works. There's a bunch of low-speed ones and a bunch of high-speed ones. I'm not going to talk about all of those protocols, because there are just too many to fit into an hour. The low-speed protocols I'm not going to talk about are CEC and the audio return channel, and I'm not going to talk about any of the high-speed auxiliary data protocols, or HDCP (if you want HDCP gone, look at bunnie's talk; it's much better than mine), or Ethernet. What I will be talking about is the EDID and DDC protocols, the 8b/10b encoding of the pixel data, and the 2b/10b encoding of the control data. Interestingly enough, this is actually DVI: I'm not telling you about HDMI, I'm really describing to you how DVI works. Again, many titles.

Starting with the low-speed protocol: EDID, or DDC. I'm going to use those two terms interchangeably; they've been so confused by now that they are interchangeable, in my opinion. This is something HDMI inherited from VGA. It was invented and added to VGA in August of 1994. It was for plug-and-play of monitors, so that you could plug in your monitor and your graphics card would just work, rather than requiring you to tell your graphics card exactly what resolution and settings your monitor worked at. It uses I²C and a small EEPROM. These are the pins it uses: 15 is the clock pin and 16 is the data pin, and then it uses the ground pin, and the 5 volts is used to power that EEPROM. In some ways it also uses pin 19, because pin 19 is how you detect that there's something there to read from.

I²C is a low-speed protocol that runs at either 100 kHz or 400 kHz. Technically EDID is not quite I²C, because it only mandates the 100 kHz version, though in practice everything on this planet can be read at 400 kHz. It's also very well explained elsewhere, so I'm not going to explain in detail what I²C is or does or how to implement it. The EEPROM is a 24-series part, found at I²C address 0x50. It has 8-bit addressing, which gives you 256 bytes of data. Again, this EEPROM and how to talk to it is very well described on the internet, so I'm not going to describe it here. If you've used EEPROMs over I²C, it's likely you've used a 24-series EEPROM, though probably bigger ones; 256 bytes is pretty small, so you've probably used something like a 16-bit-address part, but EDID only supports the 8-bit ones.

The kind of interesting part of EDID is the data structure. It's a custom binary format that describes the contents of the EEPROM. Again, Wikipedia has a really good description of this, so I'm not going to go into much detail, but the important thing is that it describes the resolution, frequency, and format for talking to the monitor. This is really important, because if you try to send the wrong resolution, frequency, or format, the monitor's not going to understand it. That's what EDID is used for. This is where things start getting a bit hairy.
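To make that concrete, here's a minimal sketch in Python of what pulling a few fields out of an EDID block looks like at the byte level. It parses a 128-byte base block from a file (on Linux you can often find one under /sys/class/drm/*/edid); the header bytes, checksum rule, manufacturer-ID packing, and detailed-timing layout come from the public EDID specification, but the function names are mine and this is an illustration, not the project's actual tooling.

```python
import struct
import sys

EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def parse_edid(block: bytes):
    """Pull a few interesting fields out of a 128-byte EDID base block."""
    assert len(block) >= 128, "EDID base block is 128 bytes"
    assert block[:8] == EDID_HEADER, "bad EDID header"
    # Every 128-byte block must sum to 0 modulo 256 (byte 127 is the checksum).
    assert sum(block[:128]) % 256 == 0, "bad EDID checksum"

    # Manufacturer ID: three 5-bit letters packed big-endian into bytes 8-9.
    mfg = struct.unpack(">H", block[8:10])[0]
    vendor = "".join(chr(((mfg >> s) & 0x1F) + ord("A") - 1) for s in (10, 5, 0))

    # First detailed timing descriptor (bytes 54-71) holds the preferred mode.
    d = block[54:72]
    pixel_clock_mhz = struct.unpack("<H", d[0:2])[0] / 100.0  # units of 10 kHz
    h_active = d[2] | ((d[4] & 0xF0) << 4)  # low 8 bits + upper nibble
    v_active = d[5] | ((d[7] & 0xF0) << 4)
    return vendor, h_active, v_active, pixel_clock_mhz

if __name__ == "__main__":
    vendor, w, h, clk = parse_edid(open(sys.argv[1], "rb").read())
    print(f"{vendor}: preferred mode {w}x{h} @ {clk} MHz pixel clock")
```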
Presenters come up to the front, and the first question anybody asks is "what resolution do I use?". They get a panel like this with a bazillion resolutions to select from, and the thing is, despite your monitor saying it supports many formats, they lie. It turns out that projectors lie a lot more than normal displays; I don't know why they're special. This is what a "supported" format looks like. It's really great.

As well, I care about capturing the data: I want things in a format that is easy for me to capture. I also don't want to be scaling people's images and text, because scaling looks really bad. If somebody selects a really low resolution and we scale it up, it looks horrible. It makes text unreadable, and presenters are renowned, especially at technical conferences, for using tiny, tiny fonts. We need as much resolution as we can get.

How we solve this is we emulate our own EEPROM in the FPGA and ignore what the projector tells us it can do. We tell the presenter that this is what we support. You might notice that this also solves the problem of which resolution to use: offer a single option, and it becomes very hard to choose the wrong one. So that's good, we solved the problem.

No, we haven't solved the problem. We were recording PyCon AU and we found that some Mac laptops were refusing to work. To understand the cause of this, you need to understand a little bit about how the world works. There are two major frequencies in the world: 50 hertz and 60 hertz. 60 hertz is used in America, Japan, and a few other places, and 50 hertz is used in most of the rest of the world; that's a very rough division. A laptop sold in Australia, which is 50 hertz, part of the rest of the world, you'd think could do 50 hertz. Plus, everything's global these days: I can plug in my laptop's power pack in the US or Australia; it should work everywhere. No. Sad.

So we solved it by claiming we were American and supporting 60 frames per second rather than 50. So I guess it's a display with an American accent. We deployed this hotfix on the Friday evening, and on Saturday all the problems we were having on Friday went away. This is the power of an open source solution and having complete control of your hardware. Nowadays, we actually offer both 60 and 50, because for slide capture, if you're changing things faster than 50 frames per second, you're probably speaking a lot faster than I am.

And it's really weird: these 128 bytes are really hard, and they're the number one cause of a person's laptop not being able to talk to the projector. It gets a trophy. To try and figure out why that is, we created edid.tv. It's supposed to be a repository of EDID data. It was a Summer of Code project: Python, Django, Bootstrap, and an EDID grabber tool that you can run on your laptop. I'd love help making it work better; it hasn't had much love since the Summer of Code student made it work. But it would be really nice to have an open database of everybody's EDID data. There are a bunch of closed ones I could pay for, but I'd really love an open one.

As well, maybe we don't need the whole capture solution; maybe we can just override the EDID. The C3VOC developed a version that overrides EDID for VGA. I have a design which works for HDMI; it just uses a low-cost microcontroller to pretend to be an EEPROM. DisplayPort is not HDMI, but it has an auxiliary channel carrying things like EDID and CEC, and I have boards to decode it here at CCC.
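If you pretend to be the EEPROM and serve your own EDID, the one invariant you must keep is the block checksum; a laptop will reject a block that doesn't sum to zero. A minimal sketch of the fix-up (fix_edid_checksum is my own hypothetical helper, not something from the project's repos):

```python
def fix_edid_checksum(block: bytearray) -> bytearray:
    """Recompute byte 127 so the 128-byte block sums to 0 mod 256."""
    assert len(block) == 128
    block[127] = (-sum(block[:127])) % 256
    return block

# Example: after editing any field (say, swapping a 50 Hz detailed timing
# descriptor for a 60 Hz one), re-balance the block before serving it.
edid = bytearray(128)
edid[:8] = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])
fix_edid_checksum(edid)
assert sum(edid) % 256 == 0
```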
If you're interested in that, come and talk to me, because I would like to do similar things for DisplayPort.

That's the low-speed data; what about the high-speed data? Each pixel on your screen is basically three colors in the DVI standard, red, green, and blue, and each one is a byte in size. Each of the colors is mapped to a channel on the HDMI connector; you can see the red, green, and blue channels. Each channel is a differential pair: you get a plus, a minus, and a shield. They use twisted pair to reduce the noise picked up, because these are quite high-speed signals, and each pair has a dedicated shield, again to reduce the noise that is captured. This is where the "differential signaling" part of TMDS comes from; TMDS is the name of the protocol used on the high-speed data.

All of those channels share a clock, called the pixel clock, but each of the channels is a serial channel: every clock cycle, 10 bits of data are transmitted on each of them. So there is one shared clock, and each channel effectively runs at 10 times that clock.

This is what the whole system looks like. You have your red, green, and blue channels; you take your eight bits of input data on each channel, convert it to the 10 bits we are going to transmit, it goes across the cable, and then we decode on the other side. The question is: what does the 8-bit to 10-bit encoding look like? How do you understand it? It is described by this diagram here. It is a bit small, so I will bring it up. This is what it looks like. Yeah, sure. What? I have spent hours looking at this diagram, and it is extremely hard to decode; very, very hard to understand. It turns out that the encoding protocol is actually quite easy: it is three easy steps, approximately. I am going to show you all how to write an encoder or decoder. That diagram is just for the encoder; they have a similar diagram, which is not simply the inverse of this one, for decoding. Again, almost impossible to read.

The three steps are: first, choose whether we are doing control data or pixel data, and then go into either encoding control data or encoding pixel data. A couple of important points to go through first. The input data, no matter how wide it is, is converted to 10-bit symbols. We talk about "symbols" when the data is on the wire being transmitted; once it is decoded back into pixels, we talk about "data". The question of why 10 bits, when pixels are 8 bits, I will explain in the pixel data section; the important point is that all our symbols are the same size, so we are always transmitting 10 bits every clock cycle.

The other point is that things need to be kept DC balanced: long runs of ones or zeros are bad. There are lots of reasons for this. I tend to think of it like this: HDMI isn't AC coupled, but you can kind of think of it as if it were. It is not for clock recovery; we have a clock pair that gives us our clock signal. There are lots of lies on the internet saying the reason we keep things DC balanced is the clock, but no, that is not the case.

What does DC balance mean? A symbol is considered DC biased if it has more ones than zeros, or more zeros than ones. This symbol here has lots of ones; if you add up all the ones, you can see it has quite a positive bias.
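The "bias" here is just ones minus zeros. A tiny sketch of how you would measure it for a 10-bit symbol (the helper name is mine, but the arithmetic is exactly what I just described):

```python
def disparity(symbol: int, width: int = 10) -> int:
    """DC bias of a symbol: (#ones - #zeros). 0 means perfectly balanced."""
    ones = bin(symbol & ((1 << width) - 1)).count("1")
    return ones - (width - ones)

print(disparity(0b1111100000))  # 0  -> balanced
print(disparity(0b1111111100))  # +6 -> strongly positive bias
```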
If it were the inverse, with lots of zeros, it would have a negative DC bias. That DC bias, accumulated over time, causes those problems. Those are the two important things to keep in mind when looking at the rest.

The first thing we need to figure out is: are we transmitting control data or pixel data? It turns out that what is transmitted to your display is actually bigger than what you see on your screen. This is not to scale; the control data periods are much, much smaller. The control data is in orange and the pixel data is in purple-pink. Why does this exist? Because of old CRT monitors. For those in the audience born after CRT monitors, this is what they look like. The way they work is they have an electron beam that scans across the screen, lighting up the phosphor. This electron beam can't just jump straight back to the other side of the screen, or back to the top. The periods where we transmit control data were there to allow the electron beam to get back to the position where the next run of pixels starts. That is why it exists. Why do we care? Because the encoding schemes for control and pixel data are actually quite different. This is the main difference; I am going to come back to this slide a bit later. An important thing to see here is that despite the encoding schemes being quite different, the output is always 10 bits in size. That first step, choosing whether it is pixel or control data, is described by this bit of the diagram; you might notice it is not the first thing in the diagram.

So how do you convert control data to control symbols? First, we need to know what control data is. There are two bits: the hsync and the vsync signals. They provide the horizontal and vertical synchronization. They are left over from VGA. We don't actually need them in HDMI or DVI to know where the edges are, because we can tell the difference between control and pixel data, but they still exist for backwards compatibility. This means we have two bits of data to convert into 10 bits, so it is a 2b/10b scheme. How they did it is they hand-picked four symbols to be the control data symbols. These are the four symbols, and they have some interesting properties. They are chosen to be DC balanced: they have roughly the same number of zeros and ones, so we don't have to worry much about their DC bias. They are also chosen to have seven or more transitions between zero and one in them. This number of transitions is used to work out the phase relationship of the different channels. If you remember this diagram, we have a cable going between the transmitter and the receiver. These are very high-speed signals, and even if the transmitter transmits everything at the same time, the cable isn't ideal and might delay the bits on one channel longer than on the others. By having lots of these transitions, we can find the phase relationship between the channels and then recover the data. That is why the control symbols have a large number of transitions in them. More on that later when we get to implementation, and I'm running out of time. This part of the diagram is the control data encoding; the sketch below spells out those four symbols and their properties.
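The four control symbols are fixed by the DVI/TMDS spec, so you can check the two properties just described (near-zero DC bias, at least seven transitions) directly; only the helper names here are mine:

```python
# (c1, c0) -> 10-bit control symbol, as fixed by the DVI/TMDS spec.
# On channel 0, c0 carries hsync and c1 carries vsync.
CONTROL_SYMBOLS = {
    (0, 0): 0b1101010100,
    (0, 1): 0b0010101011,
    (1, 0): 0b0101010100,
    (1, 1): 0b1010101011,
}

def transitions(symbol: int, width: int = 10) -> int:
    """Number of bit flips between adjacent bits of the symbol."""
    x = symbol ^ (symbol >> 1)
    return bin(x & ((1 << (width - 1)) - 1)).count("1")

def disparity(symbol: int, width: int = 10) -> int:
    ones = bin(symbol).count("1")
    return ones - (width - ones)

for bits, sym in CONTROL_SYMBOLS.items():
    print(bits, f"{sym:010b}",
          "transitions:", transitions(sym), "bias:", disparity(sym))
# Every control symbol has >= 7 transitions and a bias between -2 and +2,
# which is how a receiver spots them and locks its bit alignment.
```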
What about pixel data to pixel symbols? In DVI, each channel of a pixel is 8 bits, and the encoding scheme is described by basically the rest of the diagram, but again, it's actually really, really simple. This encoding scheme is called 8b/10b because it takes 8 bits and converts them to 10 bits. However, there's a huge danger here, because IBM also invented an 8b/10b scheme that is used in everything: it's used in DisplayPort, it's used in PCI Express, it's used in SATA, it's used in pretty much everything on the planet. That is not the encoding TMDS uses. You can lose a lot of time trying to map this diagram onto the IBM coding scheme and going "these are not the same". That is because they're not the same; it is a totally different coding scheme.

So, encoding pixel data is a two-step process (I did say it was three-ish steps overall). The first step is to reduce the transitions in the data. Why do we do this? Because this, again, is a high-speed channel and we want to reduce the crosstalk between the lanes; they're actually quite close to each other. By reducing the number of transitions, we reduce the probability that the signal propagates from one channel to the next.

How do we do it? We choose one of two encoding schemes: an XOR encoding or an XNOR encoding. The XOR encoding is actually pretty simple: we set the first encoded bit the same as the first data bit, then each following encoded bit is the previous encoded bit XORed with the corresponding data bit, and we repeat until we have done all eight bits. The XNOR encoding is the same process, except instead of XOR it uses XNOR. How do we choose which one to use? If the input data byte has fewer than four ones, we use XOR; if it has more than four ones, we use XNOR; and there's a tiebreaker for exactly four. The important thing here is that the choice is determined by the data byte only: there is no hidden state or continuous change; every pixel value has a one-to-one mapping to an encoding. Then we append a bit on the end that indicates whether we chose XOR or XNOR. That converts our 8-bit input pixel into 9 bits: the eight encoded bits, plus one bit indicating whether XOR or XNOR was used for that byte.

This encoding is actually very good at reducing transitions. On average, we had roughly eight transitions previously; now we have roughly three-ish, so it's pretty cool. I have no idea how they figured this out; I'm assuming some very smart mathematicians were involved, because discovering this is beyond me. That describes the top part of the process, and it's where the "transition minimized" part of TMDS comes from: that step there, the encoding process.
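Here is that transition-minimization step as straight Python, following the rule just described. The function names are mine; the selection rule, including the tie-break on exactly four ones, is the one from the DVI spec, so treat this as a reference sketch rather than our gateware:

```python
def popcount(x: int) -> int:
    return bin(x).count("1")

def minimise_transitions(d: int) -> int:
    """Stage 1 of TMDS encoding: 8-bit pixel byte -> 9-bit intermediate.

    Bit 8 records which scheme was used: 1 = XOR, 0 = XNOR.
    """
    assert 0 <= d <= 0xFF
    ones = popcount(d)
    # Use XNOR when the byte is ones-heavy; the "exactly four ones" tie
    # is broken by looking at bit 0 (per the DVI spec).
    use_xnor = ones > 4 or (ones == 4 and (d & 1) == 0)

    q = d & 1                      # first bit is copied through unchanged
    prev = q
    for i in range(1, 8):
        bit = (d >> i) & 1
        nxt = prev ^ bit           # XOR scheme...
        if use_xnor:
            nxt ^= 1               # ...flipped to XNOR if selected
        q |= nxt << i
        prev = nxt
    if not use_xnor:
        q |= 1 << 8                # bit 8 = 1 means "XOR was used"
    return q

print(f"{minimise_transitions(0xFF):09b}")  # ones-heavy byte takes the XNOR path
```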
But there's still one more step: we need to keep the channel DC balanced, as I explained earlier. How can we do that? Our pixel symbols, unlike the control symbols, aren't guaranteed to have zero DC bias. We do it by keeping a running count of the DC bias we have transmitted. If we have a positive running DC bias and the next symbol is also positively biased, we invert it; likewise, if we have a negative running bias and the symbol is negatively biased, we invert it. Inverting a symbol converts all the ones to zeros and vice versa, so a negative DC bias becomes a positive one. Because we were already negative and the symbol was negative, inverting it makes it positive, which means we drive the running DC bias value back towards zero. We might overshoot, but the next stage will keep oscillating up and down, so on average, over time, we keep a DC bias of zero. Then, to indicate whether we kept the symbol straight or inverted it, we add another bit on the end.

That's how we get our 10-bit encoding scheme: eight bits of encoded data, one bit indicating whether XOR or XNOR encoding was used, and one bit indicating whether or not the symbol was inverted. That describes the bottom part of the chart. Now you can see partly why this chart is confusing: it's not laid out in what I think of as a logical order. It might be how you'd implement it in hardware if you already understood the protocol, but it's not a very good diagram for explaining what's going on. As you can see, it's actually pretty simple.

In summary, this is the interesting information about the two different encoding schemes. Because we minimized the transitions in the pixel data, we can tell control data and pixel data apart just by looking at how many transitions are in a symbol: if it has six or more transitions, it must be a control symbol; if it has four or fewer, it must be a pixel symbol. You now know how to encode TMDS data, and how to decode it too, because to decode you just run the process backwards. Congratulations.

How do you actually implement this? Well, you can just write the XOR logic and a little counter that keeps track of the DC bias and all that type of thing in the FPGA. I'm not going to describe that because I don't have much time, but if you follow the process I've given you, it should be pretty easy. This is what we use currently. You could instead use a lookup table: what we're doing is converting 8 bits of data to 10 bits of data, and that is a lookup-table kind of process. FPGAs are really good at lookup tables. It also lets you extend the system to the other protocols, like the 4b/10b encoding used for the auxiliary data; we're looking at that for the future. It uses a few more resources, but it's a lot more powerful.

This is what your encoder and decoder will look like; it's quite simple. The decoder takes in your 10 bits of data and outputs either 8 bits of pixel data or 2 bits of control data, plus a data-type flag. If you went into our design and looked at it at a high level, in the schematic, you'd probably see a block that looks like this. The encoder is slightly more complicated, because you also have the DC bias count to keep track of; again, data goes in and data comes out. That's simple. Cool, right? This extends to auxiliary data, or to error handling: there are 1024 possible symbols in 10 bits of data, and not all of them are valid, so if you see one of the invalid symbols, you have an error.

However, things happen quite quickly when you multiply them by 10. Our pixel clock for 640 by 480 is 25 megahertz; multiply that by 10 and you get 250 megabits per second per channel. When you're doing 720p, you're doing 750 megabits per channel, and 1080p is 1,500 megabits per channel. FPGAs are fast, but the ones in a price range I can afford aren't that fast. I'm sure the military has ones that go this fast, but I'm not as rich as them. But FPGAs do include a nice hack to solve this: they're called SERDES blocks. They basically turn parallel data into serial data, and this is what the blocks look like.
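Decoding really is the encoder run backwards: strip the inversion bit, strip the XOR/XNOR flag, and undo the transition minimization; control symbols are recognized by table lookup. Here's a sketch, reusing minimise_transitions() and popcount() from the previous sketch (so not fully self-contained). Note that the DC-balance stage here follows the talk's simplified description; the real spec adds extra tie-break rules when either bias is zero:

```python
CONTROL_SYMBOLS = {
    (0, 0): 0b1101010100, (0, 1): 0b0010101011,
    (1, 0): 0b0101010100, (1, 1): 0b1010101011,
}
DECODE_CONTROL = {sym: bits for bits, sym in CONTROL_SYMBOLS.items()}

def tmds_encode(byte: int, running_bias: int):
    """Stage 2, simplified: invert the 9-bit intermediate when its bias
    has the same sign as the running bias, and flag that in bit 9."""
    q = minimise_transitions(byte)
    ones = popcount(q & 0xFF)
    bias = ones - (8 - ones)
    if bias != 0 and running_bias != 0 and (bias > 0) == (running_bias > 0):
        return ((q ^ 0xFF) | (1 << 9)), running_bias - bias  # send inverted
    return q, running_bias + bias                            # send as-is

def tmds_decode(symbol: int):
    """10-bit symbol -> ('control', (c1, c0)) or ('pixel', byte)."""
    if symbol in DECODE_CONTROL:
        return "control", DECODE_CONTROL[symbol]
    q = symbol & 0xFF
    if symbol & (1 << 9):               # bit 9 says bits 0-7 were inverted
        q ^= 0xFF
    xor_used = bool(symbol & (1 << 8))  # bit 8 says XOR (1) or XNOR (0)
    d = q & 1
    for i in range(1, 8):
        bit = ((q >> i) ^ (q >> (i - 1))) & 1
        if not xor_used:
            bit ^= 1
        d |= bit << i
    return "pixel", d

# Round trip: every byte survives encode -> decode, whatever the bias does.
bias = 0
for byte in range(256):
    sym, bias = tmds_encode(byte, bias)
    assert tmds_decode(sym) == ("pixel", byte)
```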
You give them your parallel TMDS data and they convert it to high-speed serial data for you. They're a little bit fiddly to use, and your best option is to find a person who has already configured them for your FPGA and follow what they did. Mike Field (hamster) has really good documentation on how to use these in the Spartan 6. These blocks are also unique to your FPGA: different FPGAs have different control schemes. But if you're using a Spartan 6, go and look at what Mike Field does to configure them.

Remember how I said our system has a serial console? Because we have that, we can delve quite deep into what's happening internally in the system and print it out. This is debugging output from one of our systems. The first thing you can see is the phase relationship between each of the channels. The next is whether we're getting valid data on each channel, then the error rate for that channel, whether all the channels are synchronized, and then some resolution information. You can see this one has a 74 MHz pixel clock. There are three columns because there are red, green, and blue channels. This gives us some very interesting debugging capabilities: if you plug in a cable and you're getting errors on the blue channel but nowhere else, it's highly likely there's something wrong with that cable. This is a very powerful tool for figuring out what's going wrong in a system, and it's something you can't really get with the commercial versions of this.

What about errors? Everything I'm talking about now is a little bit experimental; we haven't actually implemented this, but here are some ideas about what we can do, because we now have complete control of our decoder. As I said, there are 1024 possible 10-bit symbols, of which 460 are valid pixel symbols, 4 are valid control symbols, and 560 should never, ever be seen, no matter what. That's roughly 55% of our symbol space that should never appear. It's actually better than that: we know, because of the running DC bias, that there are only 256 valid pixel symbols at any one point. If you've got a negative DC bias, you can't receive a pixel symbol that would continue to drive you negative. So actually, about 74% of the symbol space is not allowed at any one time. This means that a huge number of the invalid symbols are only near one other valid symbol, and we can correct them: we can say "this isn't a valid symbol, so it must be a single-bit error of this other, valid symbol". This is quite cool: we can correct about 70% of single-bit flip errors in pixel data. Sadly, there are some we can't correct, but we can still detect that we got invalid pixel data, and the fact that there's an error is important.

In this case, we've received two pixels correctly, then a pixel we know is an invalid value, then two more correct pixels. Imagine this is the blue channel: the first pixels were not very blue, the decoded value for the bad one is very, very light blue, and then there are more not-very-blue ones. This looks really bad; this was probably a solid blue block, and a one-pixel spike of that size is probably not a real value. So we can cover it up: take the two pixels on either side, average them, and replace that pixel. This allows us to conceal a whole bunch more of the errors that occur.
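You can play with this idea yourself by enumerating the whole 10-bit space. A sketch, again reusing minimise_transitions() and CONTROL_SYMBOLS from the earlier sketches, and deliberately simplified: it counts both the plain and the inverted form of every pixel byte as valid, which slightly over-counts relative to the 460 valid pixel symbols the spec's full DC-balance rules actually allow on the wire:

```python
VALID = set(CONTROL_SYMBOLS.values())
for byte in range(256):
    q = minimise_transitions(byte)      # 9-bit intermediate, bit 9 clear
    VALID.add(q)                        # sent straight through
    VALID.add((q ^ 0xFF) | (1 << 9))    # sent inverted for DC balance

correctable = {}
for sym in range(1024):
    if sym in VALID:
        continue
    hits = [sym ^ (1 << b) for b in range(10) if (sym ^ (1 << b)) in VALID]
    if len(hits) == 1:                  # exactly one valid neighbour: fixable
        correctable[sym] = hits[0]

print(len(VALID), "acceptable symbols;",
      1024 - len(VALID), "should never appear;",
      len(correctable), "invalid symbols sit one unambiguous flip from valid")

# Invalid symbols with several valid neighbours can't be corrected, only
# detected -- and detection is enough to conceal the damage by averaging
# the horizontal neighbours, e.g. fixed = (left + right) // 2.
```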
As we're about to take this data and run it through a JPEG encoder anyway, this doesn't really affect the quality of the output, and it lets us fix things that would otherwise be giant, glaring glitches. So that's some interesting information about how you do TMDS decoding and how we can fix some errors. The thing is, we can do even better than this, because it's an open source project. Maybe you have an idea for improving the SERDES performance, or for doing TMDS decoding on a much lower-power device than we use. It's open source: you can look at the code, you can improve it, and we would love you to do it. The thing is, I have lots of hardware but not much time; if you have lots of time and not much hardware, I think we can solve each other's problems. These are links to the HDMI2USB project and the TimVideos project, and all our code and hardware, everything, is on GitHub under open source licenses. Here are some bonus screenshots that I wasn't able to fit in elsewhere. You can see these small errors; that one was kind of a big error. This is what happens when your DDR memory is slightly broken. That is my talk.

Excellent. Thank you very much. As you've noticed, we have a couple of microphones standing around the room. If you have any questions for Tim, please line up behind the microphones. We have a question from the internet.

Yes, thank you. Do you know if normal monitors do similar error recovery or hiding?

I know of no commercial implementation that does any type of error correction. The solution for the commercial guys is to effectively never get errors. They can do that because they don't have to deal with angry speakers on the ground going "why do my slides look weird?". As well, they're probably working with better quality hardware than we're using. We're trying to make things as cheap as possible, so we're pushing the boundaries of a lot of the devices we use, which makes us more likely to get errors than they are.

We have quite a lot of questions, so remember: questions, not comments. Microphone number one, please.

Yes. Sorry, I don't quite understand what's going on; we have an audio problem. I'll be around afterwards if you want to chat to me; we can work through it on the computer afterwards.

Second question, from microphone number three, please.

Hello. Can you determine the quality of an HDMI cable, for example by measuring the bit error rate of each of the three pairs, and maybe also some jitter on the clock, that kind of thing?

Yes, we can. A quality HDMI cable should have zero bit errors, so anything with a non-zero bit error rate we chop up and throw away. This gets interesting with very long cables: we can actually see that the longer the cable, the harder it is for it to keep zero bit errors. So yes, we can kind of judge the quality of a cable, but it's also hard, because it depends on what the sender is doing. If the sender is lower quality and the cable is lower quality, you might get bit errors; but if the sender is high quality and the cable is lower quality, they might cancel each other out and still be fine. So we can't just declare "this is a good cable", because we don't have any control over how powerful the sender on a given device is. If we could turn down the sender and see where things start going wrong, that would be pretty cool.
If anybody wants to look at building such a device, I would love to help you do that.

We have another question from microphone number five. Your HDMI-to-USB hardware: is it available to simply order, or does it have to be soldered by hand?

You cannot solder this board by hand, unless you're much, much better at it than I am: it uses ball grid array parts, because it's an FPGA. This is one here. You can buy them; we're working with a manufacturer in India who builds them for us. We worked with them and it was pretty awesome. We're also working on new hardware. I've got a whole bunch of FPGA hardware down here that you can come and have a look at, and I'll probably move it out into the hallway afterwards. Again, if you're interested in the hardware and you have a use case, chat to me, because I like to solve the problem of people not having hardware, and my employer pays me too much, so I get to use my discretionary funds to help people doing open source stuff.

At least four more questions. Microphone number two, please. Do you think it would be possible to get a 1080p image out of the open source hardware board you produced?

Yes, I do, but it requires some hard work that we haven't had time to do yet, and for us, 720p at 60 frames per second is good enough. The USB connection is limited in bandwidth because we don't have an H.264 encoder; we only have MJPEG. If somebody wants to write us an open source encoder, say WebM rather than H.264, that might start becoming more interesting. We also have gigabit Ethernet on this board, and it should be pretty easy to stream the data out over Ethernet. Again, we need help: the Ethernet controller works, we can telnet into the board and control it, we just need somebody to actually connect up the high-speed data side. We use it for debugging and stuff. Mike Field again (a really big thank you to him; he is an amazing designer) built a 1080p60 version that is a little bit out of spec but actually works really well, on hardware that is almost identical to ours. He also did DisplayPort, like a 4K DisplayPort, which we could do on our board. If you only need one or two of the 1080p inputs, DisplayPort connectors can be converted to HDMI quite easily, and you can do that with them. So yes, I think it's possible, but again: open source, hobbyist, need developers.

We'll take one question from the internet. Thank you. Have you considered JPEG 2000?

No, I have not. The main reason is that I want to pretend to be a webcam, and the UVC standard, the USB webcam standard, does not support JPEG 2000. There's no reason we couldn't support JPEG 2000 when connected to Linux; we could fix the Linux driver to add JPEG 2000 support. But I also don't know of any good open source FPGA implementations of JPEG 2000, which is another blocker. If you're interested in helping out, come and talk to me. As I said, I would very much love to chat to you and help solve the problems you're having with getting going. We have t-shirts; I'm wearing one, and I will send a t-shirt to anybody who contributes, whether that's fixing our website, helping with documentation, helping people on IRC get set up, anything. You don't need to be an expert on FPGA stuff to help out. We're also working on a little project to run MicroPython on FPGAs. If you're really into Python and you like MicroPython, I would love you to help us do that. It's kind of working; we just need more peripheral support.
We have two more questions from microphone number one. Is there some sort of dedicated processor on that board, or do you use a MicroBlaze in the FPGA?

We use an open source soft core, one of three; we can change which soft core we're using with a command line flag. We can use the LatticeMico32, which was produced by Lattice Semiconductor, the OpenRISC 1000, or a RISC-V processor. We generally default to the LM32 because it has the best performance-to-FPGA-resource trade-off, but if you like RISC-V or OpenRISC better for some reason, say you want to run Linux on the soft core, you can do that with a one-line command line change. We're looking at adding J-Core support early next year; J-Core is quite big compared to the LM32, though, so it probably won't fit on some of the very small devices.

So it's a Lattice FPGA? No, it's a Spartan 6 FPGA, and our new boards will probably be Artix-7, but we're still in the process of making them exist. I've also been working with bunnie's NeTV2, porting our firmware to it, which has been really awesome. He's doing some cool work there, and he kind of inspired this whole development by showing that, yes, you could do this, and you shouldn't be scared of it.

Good. One more question from microphone number one. Do you have any plans for incorporating HD-SDI into your platform?

Yes and no. We have plans and ideas for how we could do it, but HD-SDI and all of the SDI protocols are much harder for the average consumer to access, and we want to drive the cost of this down as low as it can go. HDMI is a consumer electronics thing: you get it on everything, you get it on your five-buck Raspberry Pi, so HDMI is probably a really good solution for this. We haven't developed an SDI core or anything like that, so I can't say we're doing anything there, but if somebody is interested, again, I like to remove roadblocks and we would love to have people work on that.

We have one more question from the internet, and we have two minutes left. Okay, thank you. The question is not related to HDMI but to FPGAs. FPGAs are programmed in a high-level language like Verilog; after simulation you compile, and every vendor has created their own compiler for their own hardware. Are you aware of a move to open source compilers or vendor-independent hardware, and do you see a benefit in open source FPGA compilers?

Yes. If anybody knows about FPGAs, you'll know they use proprietary compilers, and these proprietary compilers are terrible. I'm a software engineer: if I find a bug in GCC, I can fix the bug; I've got those skills, and I can move forward, or at least figure out why the hell the bug occurred. That is not the case with FPGA compilers. The FPGA compiler we use is non-deterministic: you can give it the same source code and it produces different output. I'd love somebody to reverse engineer why that occurs, because I've removed all the randomness sources I can find from it and it still manages to do it. I'm really impressed. Clifford Wolf has done an open source FPGA toolchain for the Lattice iCE40 parts, and he has said he's going to work on the Artix-7 FPGAs. Please donate to him and help him. If that existed, I'd owe people like a bazillion beers, because the sooner I can get off proprietary toolchains, the happier I will be, and it'll make my hobby so much nicer. Please help him, and give him a big round of applause.