Tim 'mithro' Ansell has come all the way from Australia to talk to us about dissecting HDMI and developing open, FPGA-based capture hardware for sharing talks outside the room. He will explain how to dissect it, and I'm looking forward to hearing the talk in a second. So please give Tim one more round of applause. Thank you. Okay, hi, I'm Tim, and in theory, if my slides change, you would see that. I have too many projects, and I'll only be discussing one of them. This is another project that I gave a lightning talk on earlier, if you didn't see it: a microcontroller that goes in your USB port. People who want to hack on it can apparently find me tomorrow. The first thing I want to say is that I'm a software developer. I'm not a hardware designer, I'm not an FPGA developer, I'm not a professional at any of that; I develop software, and this is my hobby. This information comes from a couple of projects that I started, but a lot of other people did the majority of the work, and I'm just the one telling you about it. So a big thank-you to all these people who helped me in various ways. On this slide, and all my slides, anything in blue is a link, so if you're playing along at home you can take the URL directly and click on it; that will be easier. I might have forgotten people on this list, and I'm really sorry for that. The title of this talk could also have been something like 'how the people who make the hardware try as hard as possible to stop you seeing your content in other ways', but here it is. So, a little bit of history: how did I find myself doing things with HDMI?
So TimVideos is a project and a group in which we want to make it easy to record and live-stream events like this one. Normally a conference recording has a whole, very organised and professional team behind it; we want people who don't have that experience to be able to do it anyway. That's how we normally record a conference, and that's why we created HDMI2USB, the hardware we built to capture HDMI; it's used with our setup to capture the slides. A little bit of history: this was partly inspired by a presenter who came here before, bunnie, who talked about his NeTV board and about attacks on HDCP-secured links. That talk is super complicated and technical compared to mine, with a lot of very interesting detail for understanding how it all works; mine is much more basic, much less technical, and you don't need a lot of experience with HDMI to understand it. Our device works similarly, except that the NeTV wasn't designed for capture, and ours was: it sits between the laptop and the projector and captures at full resolution. We use an FPGA to do that, and because FPGAs give you both hardware problems and software problems, and I prefer to only have software problems, we keep as much as possible in software. The way it works is that the device appears as a UVC webcam, the same kind you'd use with Skype or Hangouts, so you don't need a driver; it works on Linux, Windows and so on. It's a bit weird, but it works. It also exposes a CDC serial port so you can control it, for example to switch which input goes to which output. So this is the hardware that we designed. It's open source; you can find it on GitHub and download everything. I'm quite proud of it, I think it's not bad. We don't use all the features yet, but it's in use: we've used it to capture multiple conferences, like I did in Australia.
Like I said, I'm from Australia, and we use it there, but I think they also use it in South Africa, and a lot of other people around the world use it, which is really cool. The main reason I want it to exist is for other people to use it and fix the problems, because there are a lot of problems. It's largely done in Python, and it was used for conferences like this one; talks have been recorded with my tool, so you can see it in action. As I said, there have been a lot of problems. One problem is that people still use VGA, which makes me sad, because VGA is not HDMI: it was invented in 1987 and it uses an analog signal. HDMI shares no heritage with VGA, so we can't use the same techniques. But people still use VGA, so we developed an expansion board to help with capturing it; and by 'developing' I mean we took a design that already existed, which no one had finished, and completed the work. There is also another problem: I want to do all this in open source. There are a lot of commercial HDMI cores, but you have to buy them and you don't get the source code, and if you don't have the source code, you can't share it. I want it to be open source because we want it to answer the problems people actually have: they connect their computer and it doesn't work, and commercial cores don't allow us to solve those problems ourselves. So we made a new implementation, and for those who have already done this, that means brand-new bugs, which I'm going to show you. This talk could be called 'debugging HDMI' rather than 'dissecting HDMI', because it includes a lot of information about how things went wrong. So that's the introduction and why I'm talking about this. Now, how does HDMI work? HDMI is pretty old: it was created in 2002, and it's based on the DVI specification, which was created in 1999, so DVI is 17 years old. DVI was created to replace VGA, and HDMI is backwards compatible with DVI, but
it uses a different connector. You've probably already seen this connector, and if you look closely you can see that there are 19 pads, 19 pins. So what do all these pins do? There's ground, and there's power: it gives you 5 volts at around 50 mA, so not much. You can't do a lot with that, but it can be used by an adapter or converter to power a microcontroller. There are some Chinese boards that try to pull a full amp out of it; that's not a good idea. Then there are three differential pairs to transmit data at high speed, sharing a clock pair, and there are five more pins to transmit data at low speed. So those are all the pins on the HDMI connector. As you've seen, there are a lot of different things here, and so you need to know a lot of different protocols to understand how HDMI works, some low speed and some high speed. I'm not going to talk about all of these protocols, because there are too many. I'm not going to talk about the low-speed protocol CEC, and I'm not going to talk about the high-speed auxiliary protocols, nor about HDCP; if you want HDCP, look at bunnie's talk. I'm not going to talk about Ethernet either. What I am going to talk about is EDID and DDC, the 8b/10b coding of the pixel data, and the 2b/10b coding of the control data. What I should say is that all of this is really just DVI, so strictly I'm not explaining HDMI, only the DVI subset. Starting with the low speed: the DDC protocol. EDID or DDC: people use these two terms interchangeably, and I think that's fair enough. This is something that was inherited from VGA; it was added to VGA in August 1994 so that you could just plug a monitor in and use it directly, rather than telling your graphics card exactly what resolution the monitor wanted, that kind of thing. It uses I2C and a small EEPROM, on these pins: 15 is the clock pin and 16 is the data pin. Then it uses
ground and the 5 volts for the EEPROM, and it uses pin 19, because pin 19 is hot plug detect, used to sense whether something is connected. So it uses I2C, a low-speed protocol running at either 100 kHz or 400 kHz. Strictly speaking, DDC is not technically exactly I2C, because it only requires support for 100 kHz, but in practice 400 kHz often works too. I'm not going to detail what I2C is. The EEPROM is a 24-series EEPROM: it sits at I2C address 0x50, uses 8-bit addressing, and holds 256 bytes of memory. The way you communicate with this kind of EEPROM is very common, so I invite you to look it up on the internet; there's a lot to see, and if you have an EEPROM lying around, it's probably a 24-series part. 256 bytes is pretty small. There are also 16-bit-addressed EEPROMs, but EDID only supports the 8-bit kind. Now, the data structure: it's a custom binary format, and Wikipedia has a very good description, so I'm not going into the details. The important thing to remember is that it describes the resolutions, frequencies and formats the display supports, to communicate them to the computer. It's important because if you get these values wrong, you won't be able to talk to the screen. And this is where things start getting terrible. Presenters come up to the front, and their first question is what resolution they should use, and they have a window with a lot of different resolutions that can be selected. Even when a monitor says it supports a lot of formats, it lies, and projectors lie a lot more than the rest of the devices. So this is what a list of supported modes looks like. Now, I'm interested in capturing the data, so I want presenters to use a format that's easy for me to capture, and I don't want to scale the image, because scaled text is generally ugly, completely pixelated, and it can make the text unreadable; and presenters at technical conferences are famous for using really small fonts. So we solve this problem by emulating our own EEPROM in the FPGA.
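As a concrete illustration, here is a minimal Python sketch of reading that data structure. The layout (fixed 8-byte header, checksum over the 128-byte block, preferred-timing descriptor at offset 54) follows the VESA EDID 1.3 format; `edid` is assumed to be the raw bytes you read out of the EEPROM at address 0x50:

```python
# Sketch: validate an EDID block and pull out the preferred resolution.
# Layout per VESA EDID 1.3; `edid` is the raw 128 bytes from the EEPROM.

EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def parse_edid(edid: bytes):
    assert len(edid) >= 128
    if edid[0:8] != EDID_HEADER:
        raise ValueError("bad EDID header")
    if sum(edid[0:128]) % 256 != 0:
        raise ValueError("bad EDID checksum")
    # The first detailed timing descriptor (offset 54) is the preferred mode.
    d = edid[54:72]
    pixel_clock_khz = int.from_bytes(d[0:2], "little") * 10  # stored in 10 kHz units
    h_active = d[2] | ((d[4] & 0xF0) << 4)  # low 8 bits + high nibble
    v_active = d[5] | ((d[7] & 0xF0) << 4)
    return pixel_clock_khz, h_active, v_active
```

This is exactly the kind of parsing a capture device has to get right: advertise one clean mode here, and the laptop has nothing wrong to pick.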
We provide our own EDID interface, presenting exactly the resolution that should be used, so it solves the problem of choosing the resolution: if there's only one option, it's difficult to choose the wrong one. No, in fact we didn't solve the problem. We were recording in Australia and we realised that some Mac laptops didn't want to work, and to understand the cause you have to know a bit about refresh rates. There are two main frequencies, 50 Hz and 60 Hz: 60 Hz is the United States, and 50 Hz is most of the rest of the world, including Australia. So we assumed the laptops would happily do 50 Hz; after all, I can buy a power supply in the United States and use it in Australia, or the other way around, so it should work, right? We solved the problem by pretending to be American and advertising 60 frames per second instead of 50; it's a bit like a display with an American accent. We deployed this fix on the Friday, and on the Saturday we didn't have any problems; that's the advantage of an open source solution. Nowadays we actually offer both, because if you capture at 50 and play back at 60, the recording runs at 120% speed and everyone speaks much faster than I'm doing now, which is really weird. And when a laptop won't talk to a projector, the first job is working out why. So we created edid.tv, a project made with Django and Bootstrap, which lets you upload EDIDs directly from a laptop. I would really like help making it better; not a lot of people have been interested in an open database of all these EDIDs. I could run a closed one, but I would like it to be open. And maybe we don't need the whole capture board; maybe we just need to fix the EDID. The C3VOC made a small device that does this EDID trick for VGA, and I've started working with them. It uses a microcontroller and looks like this.
DisplayPort is not HDMI, I should say, but it has an auxiliary channel carrying things like EDID and CEC, and if you're interested in decoding and using that, come and see me, because we would like to build that solution too. So that was the low-speed side; now I'd like to talk about the high speeds. Each pixel on your screen is basically three colours, by convention red, green and blue, and each of them gets its own channel. You can see them directly on the pins: each colour has a differential pair, a plus and a minus. Differential pairs are used to reduce noise at these speeds. And this is where it gets interesting: all the parts of the code we use for the high-speed data. All these channels share a clock, called the pixel clock. Each channel is a serial channel that transmits 10 bits per pixel-clock cycle, so each channel runs at ten times the pixel clock. And that's how the system looks: take the blue channel here, 8 bits come in at the top, they're converted to 10 bits, sent through the cable, and decoded on the other side. So the question is how we encode, what it looks like when we go from 8 bits to 10. We can see it with this diagram; it's a little small, so I'll make it bigger. That's what it looks like. As you can see, I spent hours on this diagram, and it's really hard to read and understand; yet the protocol it encodes is actually quite simple, just three different steps. I'm going to show you how we encode and decode. This diagram is really just the encoder, and there's an equivalent one for the decoder that's just as hard to read. So, the three steps:
first, decide whether this is control data or pixel data, and then encode it accordingly. An important point to go over first: whatever goes in, what comes out is always 10-bit symbols, and it's 10-bit symbols that get decoded on the far side. So why 10 bits at the end, when a pixel is 8 bits? I'm going to explain how we use the extra bits for the pixel data, but keep in mind that we always transfer 10 bits, and that one goal is to keep the DC balance of the link: when you send a lot more 1s than 0s, or the other way around, it's not good, for a number of reasons, mainly because HDMI is AC-coupled. It is not about recovering the clock; we have the dedicated clock pair for that. A lot of people on the internet say the DC balance is there because of the clocking, but no, that's not the case. So what does DC bias mean? A symbol with many more 1s than 0s is considered to have a DC bias: if it has more 1s than 0s, like this symbol with a lot of 1s, it has a positive bias; if it were the other way around, a negative bias. And when too much bias accumulates over time, it causes problems. So those are the two things to keep in mind for the rest of the explanation. The first thing we determine is whether we're transmitting control data or pixel data. It turns out that inside a video signal we transmit something bigger than what you see on the screen (the drawing isn't to scale): the margin is reserved for the control data, shown in orange, and the pixel data is the pink and violet region. So why does this exist? Why is it like that?
It exists because of old CRT screens, for those of you who came after CRTs. They work by having an electron beam that scans across the screen and makes the phosphor glow. The beam can't just jump instantly from one side of the screen back to the top, so the periods where we transmit control data were there to allow the beam to return to the place where the next set of pixels had to start. That's why it exists. And why does it matter? Because the encoding schemes for the control data and for the pixel data are quite different. Here's the main difference, and I'll come back to this slide a little later, but what's important to see here is that even though the encoding schemes are different, the output is always 10 bits. So the first step is choosing whether it's pixel data or control data, and that's the first decision in the diagram. Now, how do we convert control data into control symbols? First we have to see what's in the control data: there are two bits, hsync and vsync, the horizontal and vertical syncs. That's something left over from VGA; we don't strictly need it any more in HDMI or DVI to know where the edges of the screen are, because we can now tell the difference between control and pixel periods, but it's still there for backwards compatibility. So that means we have to convert two bits of control data into a 10-bit symbol. How?
Simply: four symbols were chosen in advance to be the control symbols. They were chosen to be roughly equalised on the DC plane, with about the same number of 1s and 0s, so we don't need to track the DC bias on the control symbols. They were also chosen to have seven or more transitions between 0 and 1. That transition count is used to work out the phase relationship of each channel. If you remember, we have a cable going from the transmitter to the receiver, carrying signals at very high speeds, and even if the transmitter sends everything at the same time, the cable isn't perfect: it can delay the bits on one channel a little more than on another. By looking at these transitions we can find the phase shift between the channels, and that's why the control symbols guarantee a certain number of transitions. We'll come back to that when we talk about implementation. So this part of the diagram is the encoding of the control data. Let's talk now about the pixel data. Again, in DVI a pixel component is 8 bits, and the encoding scheme is described by all the rest of the diagram, but in reality it's very simple. This encoding scheme is often called 8b/10b because it takes 8 bits in and puts 10 bits out. But there's something important: IBM invented an encoding called 8B/10B, which is used in DisplayPort, SATA and many other things, and this is not that system. You can waste a lot of time trying to understand the relationship between this diagram and IBM's scheme, and you will waste it, because it's a completely different encoding system. Encoding the pixel data is, as I said, a process of about three steps.
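Before moving on to the pixel encoding, the four control symbols just described are fixed by the DVI specification, and the two properties above (near-zero DC bias, at least seven transitions) are easy to check. A small sketch; the (c1, c0) indexing convention here is an assumption, as sources differ on which bit is hsync:

```python
# The four fixed TMDS control symbols (per the DVI 1.0 spec), indexed
# here by the two control bits (c1, c0); which bit carries hsync vs
# vsync is a convention that varies between write-ups.
CTRL_SYMBOLS = {
    (0, 0): 0b1101010100,
    (0, 1): 0b0010101011,
    (1, 0): 0b0101010100,
    (1, 1): 0b1010101011,
}

def transitions(sym: int, width: int = 10) -> int:
    """Count 0<->1 transitions in a `width`-bit symbol."""
    bits = [(sym >> i) & 1 for i in range(width)]
    return sum(a != b for a, b in zip(bits, bits[1:]))

def dc_bias(sym: int, width: int = 10) -> int:
    """Number of 1s minus number of 0s in the symbol."""
    ones = bin(sym).count("1")
    return ones - (width - ones)
```

Running `transitions` over the table confirms every control token has seven or more transitions, which is what makes them usable for phase alignment.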
The first step is to reduce the number of transitions in the data, because these are high-speed channels and we want to limit the amount of crosstalk between them: fewer transitions reduce the probability that the signal couples from one channel into another. How do we do this? We choose one of two encoders: either an XOR encoder or an XNOR encoder. For the XOR encoding, the first output bit is the first input bit, and each subsequent output bit is the XOR of the previous output bit with the next input bit; the XNOR encoding is the same thing with XNOR. So how do we choose which of these encoders to use? If the input byte has fewer than four 1s, we use XOR; if it has more than four, we use XNOR; and if it's equal, there's a tie-breaking rule. What's important is that the choice is determined entirely by the data being processed: there's no state, no hidden rule, and each pixel value maps one-to-one to its encoded form. After that we add a bit indicating whether we used XOR or XNOR, so 8 bits of input become a sequence of 8 encoded bits plus a ninth bit recording the choice. And this encoding is actually very good at reducing transitions: on average we go from roughly eight transitions down to roughly three. I have no idea how they found this; I think there's a very smart mathematician behind the discovery. That's the top part of the diagram. And now we're into TMDS proper: Transition Minimised Differential Signalling. The next idea is that we have to keep our channel DC balanced. So how do we do that? Well, you have to realise that individual encoded values carry no promise of zero DC bias, so we keep a running count of our bias.
If the channel's running DC bias is positive and the symbol we're about to send is also positively biased, we invert it; and if the running bias is negative and the symbol is negatively biased, we also invert it. Why? Simply because inverting a symbol inverts its DC bias: a negative DC bias becomes a positive DC bias. So, as we said, if we already have a negative running bias and we get a negative symbol, we invert it, which pushes the running bias back towards zero. It may move past zero a little, but the next steps make it oscillate, and averaged over time we keep a DC bias of zero. Then, to indicate whether or not we inverted the symbol, we add one more bit at the end, and that's how we get to our 10-bit encoding: 8 bits of data, one bit recording the XOR/XNOR choice, and one bit recording whether the symbol was inverted. And that describes the bottom part of the diagram. Now you can see partly why this diagram is difficult to understand. I suppose it's logical, and maybe it's useful for a hardware implementation if you already understand the protocol, but I don't think it's a very good diagram for people trying to learn what's going on, because the protocol itself is actually pretty simple. In summary, there's one more interesting property of the two codes: since we minimise the number of transitions in the pixel data, we can recognise the data type by counting how many transitions are in a symbol. If there are more than six transitions, it must be a control symbol; if there are fewer, it must be a pixel symbol. So now you know how to encode TMDS symbols.
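Putting both stages together, here is a behavioural Python model of the whole pixel encoder, following the DVI 1.0 flowchart, together with the stateless decoder. This is a sketch for understanding the protocol, not the gateware we actually ship:

```python
# Full DVI/TMDS pixel encoder: stage 1 (XOR/XNOR transition
# minimisation) then stage 2 (DC balance via conditional inversion,
# tracked by the running disparity `cnt`).

def tmds_encode(d: int, cnt: int) -> tuple:
    """Encode an 8-bit value into a 10-bit symbol; returns (symbol, new cnt)."""
    # Stage 1: pick XOR or XNOR from the number of 1s in the input.
    ones_d = bin(d & 0xFF).count("1")
    use_xnor = ones_d > 4 or (ones_d == 4 and (d & 1) == 0)
    qm = prev = d & 1
    for i in range(1, 8):
        prev = (prev ^ ((d >> i) & 1)) ^ (1 if use_xnor else 0)
        qm |= prev << i
    qm8 = 0 if use_xnor else 1            # bit 8: 1 = XOR was used

    # Stage 2: invert the low 8 bits when that moves the running
    # disparity back towards zero; bit 9 records the inversion.
    ones = bin(qm & 0xFF).count("1")
    zeros = 8 - ones
    if cnt == 0 or ones == zeros:
        invert = qm8 == 0
        cnt += (ones - zeros) if qm8 else (zeros - ones)
    elif (cnt > 0 and ones > zeros) or (cnt < 0 and zeros > ones):
        invert = True
        cnt += 2 * qm8 + (zeros - ones)
    else:
        invert = False
        cnt += -2 * (1 - qm8) + (ones - zeros)
    low = (qm & 0xFF) ^ (0xFF if invert else 0x00)
    return low | (qm8 << 8) | ((1 if invert else 0) << 9), cnt

def tmds_decode(sym: int) -> int:
    """Decode one 10-bit symbol back to 8 bits; completely stateless."""
    low = sym & 0xFF
    if (sym >> 9) & 1:                    # stage 2 inverted the data bits
        low ^= 0xFF
    xor = (sym >> 8) & 1                  # stage 1 choice
    d = low & 1
    for i in range(1, 8):
        bit = ((low >> i) & 1) ^ ((low >> (i - 1)) & 1)
        d |= (bit if xor else bit ^ 1) << i
    return d
```

Feeding the encoder a worst-case stream of identical bytes shows the disparity counter oscillating in a bounded band around zero instead of drifting away, which is exactly the DC-balance property the second stage exists to provide.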
And decoding is just running the reverse process. Congratulations. So how do you actually implement this? You can write the XOR logic and the little bias counter, everything you need, directly in the FPGA. I'm not going to walk through it here because I don't have much time, but if you follow the process I described, it should be pretty easy, and that's what we currently use. Alternatively, you could use a lookup table that converts between the 8-bit and 10-bit data. FPGAs are very efficient at lookup tables, and that approach also extends to other schemes, like the 4b/10b coding used for the auxiliary data; it uses a little more resources but it's more flexible. So this is what your encoder and decoder look like: the decoder takes 10 bits of data and gives you either the 8 bits of pixel or the 2 bits of control, plus the data type, and if you look at our design at that level, you should see something that looks like this. The encoder additionally has the bias counter. It's pretty simple, and again this extends to the auxiliary data and to error handling: there are 1024 possible 10-bit symbols, and they're not all valid, so you can detect when you receive one that isn't. You do have to be careful, because things happen pretty quickly once you multiply the pixel clock by 10: at a 25 MHz pixel clock you're already doing 250 Mbit/s per channel; at 720p it's about 750 Mbit/s, and at 1080p it's about 1,500 Mbit/s. FPGAs are fast, but the general-purpose fabric in the ones I can afford to buy isn't that fast. However, they do include a nice trick that lets us cheat a little: dedicated serialisers that turn parallel data into serial data.
So you give them your parallel TMDS data and they convert it into serial data, and you have to play with them a bit. The easiest approach is to find someone who's already done this on your type of FPGA; Mike Field ('hamster') has written very good documentation on doing this with the Spartan-6. Different FPGAs have different serialiser primitives, so on a different FPGA the schematic will look different. Remember how I said our system has a serial console? Because we have that console, we can reach quite deep into what's happening in the system and bring it out. This is us debugging one of our systems: the first thing you can see is the phase alignment between the channels, the next is whether we're getting valid data, and then whether all the channels are synchronised. It gives us quite interesting debug capacity: if you plug in a cable and you get errors on the blue channel but not on the others, it's likely there's a problem with that cable. It's a very powerful tool for finding out what's going wrong, and it's something you can't really get with commercial devices. But what are the errors? What I'm going to talk about now is rather experimental; we haven't really implemented all of it yet, but it's something we want to do, because now we have complete control of our decoder. There are 1024 possible symbols, of which 460 are valid pixel symbols and 4 are control symbols, and 560 symbols should never be used. That's about 55% of the symbol space that should never appear. But in fact it's even better than that, because of the DC bias: among the valid pixel symbols, at any given moment, if you have a negative running bias, for example, you can't receive a symbol that would make that bias even more negative. So at any given point, about 74% of the symbol space can't legitimately occur.
So that means an invalid symbol is often close to a valid one, and so we can correct these errors: we can say, for example, that this invalid symbol should have been that valid one, because it's only one bit of error away. And that's not bad, because we can correct roughly 70% of the errors where one bit has been flipped. There are some we can't correct, but we can still detect that we have an invalid pixel, and knowing that there is an error is itself important. For example, say we have two good pixels, then a pixel we know is invalid, then two more correct pixels, and imagine this is the blue channel: the first two are not very blue, but the corrupted value decodes as very bright. That looks really bad; this was probably a solid blue block, and a single pixel that different is probably not a real value, so we can cover it up: we take the two pixels on each side, average them, and put the result in the middle. That allows us to correct even more errors. And since we're about to take this data and run it through a JPEG encoder anyway, this doesn't really affect the quality of the output, while it lets us remove what would otherwise be really visible errors. So that's how we do TMDS decoding and how we correct a few errors. And in fact we can do even better, because it's an open source project: maybe you have ideas for how to improve it, how to improve performance, how to do TMDS decoding on a device that uses less power than ours. It's open source, you can look at the code, you can improve it, and we'd love you to do that. The thing is that I have a lot of hardware and not much time, so if you have a lot of time and not a lot of hardware, I think I can help you. These are the links to the HDMI2USB project; the code and the hardware designs are on GitHub, under open source licenses.
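A sketch of this single-bit correction and neighbour concealment in Python. One caveat: naively enumerating both the inverted and non-inverted form of every byte gives 512 candidate pixel symbols, while the talk counts 460 actually-valid ones (the DC-balance rules mean a compliant encoder never emits some variants), so treat this valid-symbol set as a simplifying assumption:

```python
# Experimental error handling: snap an invalid 10-bit symbol to a valid
# one within a single bit flip, and fall back to neighbour averaging.

def _stage1(d: int) -> int:
    """TMDS stage 1 (XOR/XNOR chain), used only to enumerate symbols."""
    ones = bin(d & 0xFF).count("1")
    xnor = ones > 4 or (ones == 4 and (d & 1) == 0)
    q = prev = d & 1
    for i in range(1, 8):
        prev = (prev ^ ((d >> i) & 1)) ^ (1 if xnor else 0)
        q |= prev << i
    return q | ((0 if xnor else 1) << 8)

# Every structurally well-formed pixel symbol: for each byte, the
# stage-1 word either as-is (bit 9 = 0) or with its low 8 bits
# inverted (bit 9 = 1). A simplification, as noted above.
VALID = set()
for d in range(256):
    qm = _stage1(d)
    VALID.add(qm)                          # bit 9 = 0
    VALID.add((qm ^ 0xFF) | (1 << 9))      # bit 9 = 1, data bits flipped

def correct(sym: int):
    """Return a valid symbol within one bit flip of `sym`, or None."""
    if sym in VALID:
        return sym
    for i in range(10):
        cand = sym ^ (1 << i)
        if cand in VALID:
            return cand
    return None

def conceal(prev_px: int, next_px: int) -> int:
    """Last resort: replace an unrecoverable pixel with the average
    of its horizontal neighbours."""
    return (prev_px + next_px) // 2
```

Since so much of the symbol space is unused, flipping one bit of a received invalid symbol very often lands back on a valid one, which is where the roughly-70% correction figure comes from.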
And there are a few screen captures as a bonus. Here you can see a few errors; that one was a bit bigger, and that's what happens when your memory is a little broken. And that is my talk.

Excellent, thank you very much, mithro. As you've noticed, we have a couple of microphones out in the room; if you have any questions, line up behind the microphones and I'll let you ask them. Do we have a question from the internet? Yes, thank you. Do you know whether normal monitors do this kind of error correction? I don't know of a commercial implementation that does any kind of error correction. The commercial solution is to make sure there are no errors in the first place. They can do that because they don't have to deal with marginal setups the way we do, and they probably also use better hardware than we do: we try to use components that are as cheap as possible and we push some of the parts we use quite hard. A reminder that questions are not comments, so please stick to questions. Microphone number one. [A question is asked in sign language; there is some confusion while an interpreter is sought, and in the end the speaker offers to discuss it afterwards.] I'll be around afterwards if you want to discuss it, and we can work through it on a computer later. Another question, then, at the other microphone, please. Yes, hello. Can you judge the quality of an HDMI cable, for example whether there are problems with the clock? Yes, we can.
So, the quality of an HDMI cable: it should have zero errors, and anything with more than zero bit errors should be thrown away. What's interesting is that with very long cables you can actually see that the longer the cable, the harder it is to keep zero errors. So yes, we can judge the quality of a cable, but it's also difficult, because it depends on what the transmitter does. If the transmitter is poor quality and the cable is poor quality, you may get errors; but a high-quality transmitter with a poor cable can compensate and still give a correct result. So we can't simply declare a cable good, because we don't really control how good the transmitter and the receiver are. If someone wants to build a setup to progressively degrade the signal until it breaks, I would love to help. Next question: your HDMI2USB hardware, is it available to simply order, or does it have to be soldered by hand? You can't solder this board by hand, unless you're really much better at it than me; it uses an FPGA with very small components. We work with a hardware manufacturer in India who does that for us, and working with them has been really good. We're working on a new version of the hardware, and I have a lot of FPGA hardware here that you can come and see; I'll probably move it out into the corridor afterwards. And again, if the hardware interests you and you have use cases, talk to me, because I like solving the problems of people who don't have hardware, and I can use what I have to help open source people. We have time for about four more questions. Do you think it would be possible to do 1080p? Yes, it would be possible, but it would mean hard work that we haven't had time to do, and for us 720p at 60 frames per second is enough. The USB connection is also limited in bandwidth, because we don't have an H.264 encoder, only MJPEG.
If someone wants to write us an open-source WebM encoder, for example, instead of H.264, that could start to get interesting. We also have gigabit Ethernet on this board; it should be easy enough to get the data out over Ethernet, it just needs a little bit of help. The Ethernet works, we can communicate with the board, we just need someone who knows how to stream the data; right now we only use the Ethernet for debugging and such.

And Mike 'Hamster' Field, again: a big thank-you to him, he's a great designer. He got 1080p60 working. It's a bit beyond the specification, but it works, on hardware almost identical to ours. He also did DisplayPort, 4K DisplayPort. What can we do with our boards? If you only need one or two connectors converted, you could do that on the DisplayPort connector; yes, I think it's possible. But once again, this is open source and I'm a volunteer; we need developers.

We'll take one more question from the internet.

Yes, thank you. Have you ever considered doing JPEG 2000?

No, I haven't. The main reason is the webcam side: I want to use UVC, the standard for webcams, and it doesn't support JPEG 2000. There's no fundamental reason we couldn't support it; we could patch the Linux driver to support JPEG 2000. But I'm not convinced there's a good open-source JPEG 2000 implementation for FPGAs, so there's a blocking problem there. If you're interested in helping, come and talk to me; as I said, I'd love to talk to you and solve the problems you have, so that everything works.

And we have t-shirts, I'll show them to you: there's a t-shirt for all contributors. There's documentation on the website, and there are people on IRC who can help you get started; you don't need to be an FPGA expert to help. We're also working on a small project to try to run MicroPython on the FPGA. If you like Python and you know MicroPython, I'd love to have your help with that.
We just need some support. Any other questions?

Is there some sort of dedicated processor on the board? Do you use a proprietary core or open-source software?

We use an open-source soft core, and we can choose which one with a command-line flag. By default we use the LatticeMico32 (LM32), which is provided by Lattice Semiconductor, but we can also use OpenRISC or RISC-V. We generally default to the LM32 because it's the most performant for the least FPGA resources. But if you'd rather have RISC-V or OpenRISC 1000, for example if you want to run Linux on it, you can do that with a one-line change to the command line. We're also looking at adding J-Core support; J-Core is quite big compared to the LM32, so it won't fit on very small devices.

What do you have as an FPGA?

It's a Spartan-6, and we're going to have a new Artix-7 project. I've also been working with Bunnie's NeTV2. He's doing some cool stuff, and he inspired this whole project by showing that you can do this and you shouldn't be afraid of it.

Any other questions? Yes.

Do you have any plans for incorporating SDI in the platform?

Yes and no. We have plans, we have ideas about how we could do it. But SDI and related protocols are much harder to get hold of as a consumer, and we want to keep the cost of this as low as possible. HDMI is consumer electronics: you have it everywhere, you have it on a $5 Raspberry Pi, so HDMI is probably a very good fit for that. We've never actually done SDI, so I can't tell you that it will work, but if someone is interested, don't hesitate; we would love to have people working on it.

We have one more question from the internet.

Thank you. The question is not about HDMI but about FPGAs. FPGAs are programmed at a high level and have to be compiled, and all the vendors make their own programs. Do you know if there is open-source software for that?

Yes. If you know anything about FPGAs, you know that there are proprietary compilers here.
And the proprietary compilers are horrible. I'm a software engineer: if I find a bug in GCC, I can fix it, I have the ability to move forward, or at least I can try to work out how the bug is produced. That's not the case for FPGAs. The FPGA compilers we use are not even deterministic: you can give them the same source code and get two different outputs. I'd love for someone to figure out why that happens; I tried to remove all the sources of randomness, and it still turns out not to be deterministic. Clifford Wolf made an open-source toolchain (Project IceStorm) for Lattice iCE40 FPGAs, and he has said he's going to work on Artix-7 FPGAs; please donate to him. I'd like to make donations too, and if it works I'll owe these people millions of beers, because the less I have to use these proprietary compilers, the happier I'll be. So please, all of you, support them.
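As a footnote to the soft-core discussion earlier, the "choose the CPU with a command-line flag" idea can be sketched as follows. This is a hypothetical illustration only: the flag name, core identifiers, and descriptions are assumptions, not the project's actual build interface.

```python
# Hypothetical sketch of build-time soft-core selection via one command-line
# flag, in the spirit of the talk. Names here are assumptions, not the
# project's real interface.
import argparse

SOFT_CORES = {
    "lm32": "LatticeMico32 - default, best performance per FPGA resource",
    "or1k": "OpenRISC 1000 - e.g. if you want to run Linux on it",
    "riscv": "RISC-V",
}

def parse_build_args(argv):
    """Parse the (hypothetical) gateware build arguments."""
    parser = argparse.ArgumentParser(description="gateware build (sketch)")
    parser.add_argument("--cpu-type", choices=sorted(SOFT_CORES),
                        default="lm32",
                        help="soft core to instantiate in the gateware")
    return parser.parse_args(argv)

# Switching cores is a one-flag change on the command line:
args = parse_build_args(["--cpu-type", "or1k"])
print(args.cpu_type)  # or1k
```

The point of the design is that the core is a build-time parameter of the gateware, so swapping LM32 for RISC-V or OpenRISC does not require touching the rest of the design.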