Hello everybody. It's really great to be back here in Berlin for CCC. It's been a long time since I've been able to make it here; previously, around this time, I've always been preparing for another conference, CES, but I'm done with that now, thank God. And it's really good to be around so many really brilliant people. It's very humbling, so thanks for having me here.

Today I'm going to talk about an implementation of a man-in-the-middle attack on HDCP. The device that we use for the attack is actually being used right now to overlay Twitter feeds on top of the HDMI feed that is going to the projector. Of course, I had trouble setting it up just a moment ago, but it seems to be running now: if you tweet to bunniestudios or 28c3, you should see your comment going across the top, provided the Wi-Fi doesn't fall over.

So with that said, let's get started. First, what is HDCP? It's High-bandwidth Digital Content Protection: pixel-level encryption operating on the link layer. The cipher structure is a little bit odd. It's a stream cipher that generates 24 bits of pseudorandom data per cycle, and internally it has a structure of a pair of block functions that run once per clock cycle. There's an LFSR-based key scheduler, sort of, that whitens the block functions at the beginning of every horizontal retrace, and the block functions themselves are initialized with a bunch of different things, including a 64-bit initialization vector that also evolves during the vertical blanking intervals.

The interesting thing about HDCP is its key management component. They use a key management system that lets them distribute private keys, with some sort of key revocation.
It's pretty weak. A public key, in their parlance, is called a Key Selection Vector (KSV), and it's a 40-bit number consisting of 20 zeros and 20 ones. The private key is a vector of forty 56-bit numbers, and there's a dot product between the two that generates a shared secret.

All of the private keys used in the system are derived from a master key consisting of a 40-by-40 matrix of 56-bit numbers. The master key was leaked about a year ago to this day, and it can be computed directly from a collection of private keys, recovered perhaps from a set of video cards, as shown here. I had nothing to do with it, but thank you to whoever did it. It's actually been a lot of fun for me doing this, and a lot of respect to the people who did it.

So, why HDCP in the first place? Why would someone want to use it? Well, I don't want to use it, but why do people put it into things? The idea is that they want to encrypt video transmissions: they want to create a studio-to-your-TV-set fully encrypted path. So it complements AACS and BD+, other encryption technologies, to create the studio-to-screen cryptographic chain. However, that chain was broken a long time ago. AACS was the weakest link; people can rip movies all they want. So it turns out that when the HDCP master key was released, it was a little bit of a no-op from the content-access standpoint, because people already had access to the content through other vulnerabilities. And it's also possible to build strippers based upon legitimate HDCP keys that have been available for a long time.
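The dot-product key exchange just described can be sketched in a few lines of Python. Everything here is a toy: the master matrix is random (not the leaked one) and the KSVs are made up; the point is only to show why both sides of the link land on the same 56-bit shared secret.

```python
# Toy model of HDCP 1.x key math: a random 40x40 "master key" of 56-bit
# numbers, KSVs with exactly 20 ones, and the dot-product shared secret.
import random

MOD = 1 << 56  # all arithmetic is over 56-bit unsigned integers
rng = random.Random(1)

# Toy master key: 40x40 matrix of 56-bit numbers.
K = [[rng.randrange(MOD) for _ in range(40)] for _ in range(40)]

def make_ksv(rng):
    # A KSV is a 40-bit value with exactly 20 ones and 20 zeros.
    bits = [1] * 20 + [0] * 20
    rng.shuffle(bits)
    return bits

def source_private_key(K, ksv):
    # Source key vector: KSV-selected sum down the matrix rows.
    return [sum(K[r][c] for r in range(40) if ksv[r]) % MOD for c in range(40)]

def sink_private_key(K, ksv):
    # Sink key vector: same selection against the transposed matrix.
    return [sum(K[r][c] for c in range(40) if ksv[c]) % MOD for r in range(40)]

def shared_secret(priv, other_ksv):
    # Dot product of your private key vector with the peer's public KSV.
    return sum(p for p, bit in zip(priv, other_ksv) if bit) % MOD

ksv_src, ksv_snk = make_ksv(rng), make_ksv(rng)
priv_src = source_private_key(K, ksv_src)
priv_snk = sink_private_key(K, ksv_snk)

# Both ends of the link arrive at the same 56-bit shared secret Km.
km_src = shared_secret(priv_src, ksv_snk)
km_snk = shared_secret(priv_snk, ksv_src)
assert km_src == km_snk
```

The symmetry falls out because both shared secrets reduce to the same double sum over the master matrix entries selected by the two KSVs, which is exactly why knowing the matrix lets a man in the middle derive anyone's keys.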
Basically, people take an HDCP receiver chip and just hook it up to the LCD port or some frame buffer, and they can get the data they need out. So the master key itself, from the content-management standpoint, wasn't big news.

So then the question is: why would I bother to implement this man-in-the-middle attack on HDCP? It's all about control. At the moment, broadcasters and studios control your screen. You don't actually get the right to manipulate the video that is being broadcast to you, because the DMCA makes it illegal for you to decrypt the video in order to manipulate it. So there's a sort of control war going on over who can put something on your very own screen, and it irritates me that I am not able to go ahead and modify my own data within my own home because of some technicalities due to the DMCA.

Just an overview of the things that are restricted: if you want to do a picture-in-picture from two feeds, that's not possible because of HDCP. If you want to do content overlays, like what's happening up here, that would not be possible. If you want to do filtering and modification of the image, that's not possible, because it's all encrypted. And as a result, there are actually very few commercial HDMI video-mixing solutions that can operate on broadcast and movie content.
So the goal of implementing the attack is to allow consumer-side content remixing. The idea is that we want to be able to add web content — internet content — to existing TV. Live comment and chat: if you're watching your favorite program, you can tune in to a Twitter feed and see what people are kibitzing about. Over-the-top advertising: you could imagine, based upon some algorithms, being able to detect ads and eliminate them, or replace them with entirely new ads that are more targeted to you. Interactive TV: we can add interactive elements to broadcast TV. And another goal of the project was to be compatible with any TV — hence the name NeTV: "any TV", but also "net TV".

So that's the goal. The question is how we do it, and that's what I'm here to tell you about. This is a bit of an eye chart, but it outlines the overall sequence of events and also includes what the NeTV does. On the left here is the video source — your set-top box, your Blu-ray player, your PS3, whatever it is. This is the video sink — your TV set, basically — and there's a set of handshakes that go back and forth. Down here I've put the man-in-the-middle attack box that's sitting on the link. There's a set of steps here — A1, B1, B2, C1 — that go step by step through the handshake, and then here is the corresponding set of things I do, while the handshake goes on, to intercept it and do the man-in-the-middle attack.

So the first thing we do — actually, not what we do, but what the set-top box does — is query the monitor: "Hey, monitor, what are you capable of doing?" So we'll talk about that step first. This one is important to man-in-the-middle as well, because it's a single bus, called DDC — basically I2C over HDMI — and it's shared between two functions: the monitor capability identification and the key exchange itself. And there are two things that we need to do.
We need to snoop it, and we also need to squash it. Snooping is easy: you just watch the key exchange and see the bits go by. The squashing is a little more complicated and interesting. With squashing, you basically want to force the TV to advertise only the characteristics that we support. The reason is that some TVs have 3D, some TVs do some strange scan modes that we don't support, and we would like this to work as a seamless consumer experience. So we actually rewrite the EDID record on the fly as it's negotiated back and forth between the TV and the set-top box.

So this is the circuit that enables the snoop and override. It's a standard sort of I2C circuit here: you have a driver, you have a receiver, and a pair of pull-ups, and then on top of it we add our own. Snooping the clock is easy — you just listen to it — and snooping the data is likewise easy. To override a one with a zero, because it's an open-collector bus, is easy: you just have a pull-down transistor. To override a zero with a one, you essentially have to overpower an actively driven zero, so we put a big honking FET on there, with little current-limit resistors so that if they're both on at the same time you don't burn out the board. It works.
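As a rough illustration of the electrical trick being described, here is a toy behavioral model — Python standing in for the hardware — of how the bus resolves: snooping leaves the line alone, forcing a 0 is a free pull-down, and forcing a 1 relies on the strong push-pull driver winning over the device's open-collector pull-down.

```python
# Toy behavioral model of the DDC (I2C) snoop-and-override trick.
# On an open-collector bus the idle level is 1 (pull-ups) and any device
# may drive 0. Forcing a 0 over a 1 is therefore free; forcing a 1 over
# an actively driven 0 needs a strong driver that overpowers the
# device's pull-down (the big FET plus current-limit resistors).

def bus_level(device_drives_low, mitm_force):
    """Resolve one bus sample.
    device_drives_low: True if the legitimate device pulls the line low.
    mitm_force: None (transparent snoop), 0 (pull down), or 1 (strong high).
    """
    if mitm_force is None:
        return 0 if device_drives_low else 1  # just listen
    return mitm_force  # the strong driver wins either way

# Transparent mode: the device's bits pass through and we just log them.
snooped = [bus_level(d, None) for d in (True, False, True)]
assert snooped == [0, 1, 0]

# Override mode: squash a bit (force 0) or assert one (force 1).
assert bus_level(False, 0) == 0   # easy case: pull the line low
assert bus_level(True, 1) == 1    # hard case: overpower the device's 0
```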
We can go into more details later as to how it affects reliability, but in fact it shouldn't have any impact at all upon the reliability of the link.

Now, in order to do the snoop and override, the implementation uses a highly oversampled I2C implementation. I2C itself is very slow — a 100 kHz clock — and we oversample at 26 MHz. Basically, whenever the clock transitions, we have a small window in which we can say "whoops, we didn't want a zero, we want a one" and switch it around, so that by the time the next edge comes along at the sampling window, the data is what we want it to be.

The way it works at the protocol level is that we're listening basically only at the address phase, and then we decide at the acknowledge bit whether we want to override the data. So we say: okay, you're reading this particular part of the record — I don't care, let it through; now you're in the next part of the record — I do care, I want to squash the 3D TV capability, so I go ahead and rewrite the record on the fly. This allows me to change only the bits that need changing during the monitor identification protocol.

We also override the hot-plug line. There's a little wire that is used to detect when you actually plug in your TV set, and the hot-plug line also has a FET on it, so we can simulate the cable being unplugged without actually unplugging the cable. This is important because the hot-plug event kicks off the whole state machine again, for resynchronization and reloading of all the records, and we do this a couple of times, particularly at boot, to manipulate all the EDIDs and all of the key state.

So, back to the diagram: at this point we've intercepted and modified the basic handshake of capabilities.
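The EDID-squashing step can be sketched like this. The 128-byte block that sums to zero mod 256 is the real EDID checksum convention; the block contents and the particular byte and bit cleared here are placeholders for illustration, not the actual bits NeTV rewrites.

```python
# Sketch of rewriting an EDID block on the fly. Every 128-byte EDID
# block must sum to 0 mod 256, so any capability bit we squash forces
# us to repair the checksum in the last byte as well.

def fix_checksum(block):
    block = bytearray(block)
    block[127] = (-sum(block[:127])) % 256  # make the whole block sum to 0
    return bytes(block)

def squash_capability(edid, offset, mask):
    """Clear feature bits at `offset`, then repair the block checksum.
    `offset` and `mask` here are made-up placeholders for whichever
    capability (e.g. an unsupported scan mode) is being deselected."""
    edid = bytearray(edid)
    edid[offset] &= ~mask & 0xFF
    return fix_checksum(edid)

# Build a dummy 128-byte "EDID" with a valid checksum.
edid = fix_checksum(bytearray(range(128)))
assert sum(edid) % 256 == 0

# Pretend bit 0x40 at offset 88 advertises a mode we can't pass through.
patched = squash_capability(edid, offset=88, mask=0x40)
assert patched[88] & 0x40 == 0
assert sum(patched) % 256 == 0   # still a well-formed EDID block
```

If the checksum were left stale, a well-behaved source would reject the record, which is why the rewrite has to happen byte-coherently on the fly.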
We now have to intercept, listen to, and do something with the actual key exchange. The key exchange is a pretty simple set of steps. The link controller writes a 64-bit random number — the initialization vector — then it sends over its public key and reads back the other side's public key. Once the public keys are exchanged, you initialize a cipher state, and then you go. The man-in-the-middle is again listening to and capturing the key exchange, and it creates a shadow record of everything that's going by; then we initialize our cipher to exactly the same synchronization state, so that on a pixel-by-pixel basis we can encrypt data to the identical cipher state and swap pixels on the fly.

Let's see, is there anything I didn't say here... right. Capturing all the initialization vectors is accomplished with the I2C sniffing, and once the key exchange is captured — the last write is to a defined address — that triggers an interrupt to the host CPU. This is all done in an FPGA, by the way, which signals the host Linux CPU; that then triggers a udev event that does all of the math for computing the shared keys.

So here I like to write out the math all together, because a lot of times I read these equations and I don't get them, they're too complicated, so this helps me think it through. When you're computing the private keys, you take K, which is the master key — this is the matrix of 40-by-40 56-bit numbers — and you take its dot product with the public key, and you get a vector of forty 56-bit numbers, which is the private key vector for the source. Then you take the transpose of the master key, multiply it by the other public key, and you get the other private key vector. This is done by the Linux host computer. And once you have these private key vectors, we then multiply back again by the key selection vectors to derive the shared secrets. The shared secrets
should be identical; I compute both to make sure we haven't got corruption on the line. And then, once the shared secrets are computed, we're good to go on the ciphers.

The ciphers themselves: as I mentioned earlier, once you plug the keys into the hardware and initialize the key schedules, the cipher state has to evolve — based upon the pixel clock first, but then also on h-sync, v-sync, data guard-band timings, and a bunch of other bits and pieces. So when you have a frame of video — this is a frame, the active pixels are here, and then there's overscan on the edges for the h-sync and v-sync periods — they only turn on the encryption for actual pixel data. So if you don't synchronize the cipher state, turning it off when video data isn't being transmitted, your cipher gets unsynchronized and you get white noise at the back end. So we had to build a system that also passively observes the guard bands, the sync patterns, and so forth — those are all unencrypted — and aligns our cipher state. In addition, the way they do audio transmission and other things inside of HDMI is with data islands hiding inside the sync areas. They're marked by a special coding region, and once those happen, the next 16 or 32 clock cycles are a data island, so we have to extract all that timing as well and use it to synchronize the cipher state.

So again, this is sort of a recap of how the pixel-by-pixel swapping happens. Here is our encrypted video — I've hatched over it to indicate
it's been encrypted. Say the number one goes over the cable. We synchronize pixel by pixel, and the NeTV renders only the portion of video that we really want to swap out, encrypts it with the synchronized cipher, and then just takes those pixels and swaps them with the pixels the source had encrypted. On the TV side, when it's decrypted, instead of getting the number one you get the letter I. Basically, that's how the overall system works.

So that's all the math and the crypto stuff. The actual implementation is much hairier — there's a lot of real-world stuff you have to deal with — so I'm going to go into a bit of that. The overlay pixels need to be exactly timed to the video pixels, and the overlay itself — for example, this stuff that's coming across the top here — is all being computed by the local Linux computer that's on the NeTV. It looks like /dev/fb0: we're just writing /dev/fb0 with WebKit to create this overlay.

There are a bunch of challenges to doing this. The Linux interrupt jitter for a frame sync is way too high — you can't just use that to synchronize frames. The local oscillators also drift over time: up to hundreds of pixels per frame of drift is allowable within the ppm spec of the local crystal oscillators, and if we don't have absolutely tight synchronization, you would see the overlay jittering and looking very nervous on the screen.

So there are a few techniques we use to synchronize the frame buffers. First, we don't use the local pixel clock to clock the LCD controller on the SoC; we actually take the pixel clock from the incoming video and feed that back into the processor, saying "here's your clock". That entirely gets rid of all the clock drift and PLL drift issues you could have — it's a very convenient thing that helps solve a lot of the problems. The next thing we do is listen to the data going by, and we have to derive all the timings
dynamically from the video, and set the frame buffer properties to match. So on the fly we listen and say: the sync is this long, the video is this wide and this high, that corresponds to this CEA mode, and then we program our frame buffer to match. And of course we have to do that programming exactly right — if you're off by even one pixel, the frame starts drifting with time. You would think this would be quite easy to do, but in fact a lot of vendors have buggy implementations — they're not actually CEA-compliant — so you really have to measure and use heuristics to try to be bug-compatible with all the weird equipment out there that doesn't quite work the way the standard says.

The other thing is that when the LCD frame buffer starts its DMA, we trigger that with an interrupt based upon the v-sync from the video stream; but as mentioned before, the interrupt latency of Linux is quite long and has a lot of jitter. So there are about eight lines of elastic buffer within the FPGA that let us absorb the jitter, and we have a system that actually tries to measure the interrupt delay — between when the interrupt edge arrives and when the actual DMA starts — and we pre-compensate for it: we start the LCD DMA a little bit early, so that the first DMA pixel arrives just as the actual first pixel comes in on the frame. So there's a whole bunch of bits and pieces we do to get that to work just right.

The way we do the overlay over the video is with a chroma key. Many people know about it, but I'll go over it here in case you don't. If I were to turn off the overlay, you would just see a big pink screen like this, with just the data that you want to overlay. Essentially, every pixel that is "magic pink" — F0 00 F0 — is swapped with the background video. It's a little bit tricky in some spots to do this, because you can have anti-aliasing turned on and the pink will bleed into your images and so forth, so you have
to be a little careful when you design the content, so that the magic pink doesn't bleed and create pink rings around everything.

And this is what the FPGA implementation looks like on the inside — here's a block diagram of it. The HDMI comes in, we deserialize it to RGB; we have the I2C DDC bus, which is being snooped and squashed; and then we have a local PLL that reserializes the data. There's a multiplexer here that takes the genlocked data coming from the FIFO, encrypts it, and then selects on a pixel-by-pixel basis wherever the incoming overlay data matches the chroma key. That's basically the entire implementation of the pipeline for the device.

Now, there are a few optimizations we had to do to get this to work well. One is key caching. For any given video source — when you plug your Blu-ray player into your TV — the shared secret actually never changes over the life of the device, and people rarely re-plug things. So even though we can almost compute the shared secret on the fly, some devices are very fast — they run a little faster than the protocol allows — and they won't catch it, and you get white noise. So what we do is cache the computed key: the first time we compute it, we write it to a temporary location, and thereafter we just check that it's the same. If it's not the same, we yank the hot-plug line and re-initialize.

The other thing we do is EDID caching. This is a little more important, because without EDID caching — the EDID being the record that describes the monitor's capabilities — you see a double blink of the screen: the first blink to read the EDID value, and then a second blink to overwrite it, and that creates a bad user experience. So we do EDID caching.

And so that's the core FPGA. I want to talk a little more about the bigger system picture of how it's all wrapped together and works in practice.
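The pixel-by-pixel chroma-key multiplex described above boils down to a one-line selection. This sketch uses plain 24-bit RGB integers for clarity; in the real pipeline, the substituted overlay pixels are first encrypted with the synchronized HDCP cipher state so that the TV decrypts them correctly.

```python
# Sketch of the per-pixel chroma-key multiplex. Wherever the overlay
# frame buffer shows "magic pink", the incoming (encrypted) video
# passes through untouched; everywhere else the overlay pixel is
# substituted.

MAGIC_PINK = 0xF000F0  # the chroma-key color: RGB (0xF0, 0x00, 0xF0)

def composite(video_px, overlay_px):
    """Select the output pixel: video where the overlay is magic pink,
    overlay everywhere else."""
    return video_px if overlay_px == MAGIC_PINK else overlay_px

video   = [0x112233, 0x445566, 0x778899]           # incoming pixels
overlay = [MAGIC_PINK, 0xFFFFFF, MAGIC_PINK]       # white text on pink
out = [composite(v, o) for v, o in zip(video, overlay)]
assert out == [0x112233, 0xFFFFFF, 0x778899]
```

Because the selection is an exact color match, anti-aliased edges that blend toward the key color don't match it and get drawn literally — which is exactly the "pink rings" hazard mentioned above.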
There's the FPGA up here, which we saw the block diagram of. We use a PXA168, an 800 MHz ARM CPU, 128 megabytes of DDR2, and a micro SD card for firmware. We have an infrared remote detector, so you can use a remote control to control it; we have a USB OTG connector on it; and we have Wi-Fi on the board.

And of course I like hardware, so I include pictures of hardware. This is the FPGA here — it's a Spartan-6. You can see that in fact the HDMI pins go straight up into the FPGA; there are no intervening buffers. The Spartan-6 is capable of driving and interpreting HDMI on the fly without any help of its own. Micro SD card — you can swap it for a bigger one if you want. That's the PMIC, the IR receiver, some status LEDs — must have the blinking lights. This is the Wi-Fi chip, the CPU, and a little boot SPI NOR. And this is an extender: typically you're going to hide the device behind your TV, so you can plug in a little infrared receiver on a long wire and put it where you need it to be.

The solution is also a complete open stack. A lot of times we focus on sort of u-boot to Linux to WebKit and so forth; with this solution we actually open-sourced the plastics, the PCB, the FPGA Verilog — and of course u-boot, Linux, and WebKit. The widgets are open source, stored on GitHub. And then we go one step beyond that: we actually give you a provisioning server, so that if you were to develop your own system, you can bring up your own update network and so forth, and a build system on top of that, based upon OpenEmbedded. I'll go into that a little more in a moment.

So, the application environment for this overlay here — and it looks like the Wi-Fi fell over, which is why we don't have any more tweets going across — the application environment is basically HTML and JavaScript. The CSS template is configured to make the background magic pink, so anything you draw over it shows up on the screen. The apps are all JavaScript/HTML programs,
but you can extend it to use anything that writes to /dev/fb0 — so whether you use SDL, Flash, whatever it is, you can also use it with this infrastructure. We use GitHub to store all of our demo apps, so every time you reboot the client, we do a git pull, and if there are new widgets or something, they just pull right down to your device, and you have new widgets on the device. The firmware updates are also served from an EC2 infrastructure, and we provide an AMI that allows you to replicate this yourself.

Another thing we provide is an HTTP API. The events that you saw previously scrolling across the top here — you can actually issue them from any device on the network. We broadcast ourselves with zeroconf (Bonjour), so you can discover the device with a smartphone, for example, and based upon the discovered IP address, if you get an SMS on your smartphone, it can forward the event to the NeTV, and the SMS then scrolls across the screen. This is what an example call looks like — it's just a GET request — and you can send events to the NeTV. The idea here is that if you had a smart-home environment, or you want to overlay something on your screen without doing a lot of work or programming, you just discover the IP address and use HTTP to overlay the events very quickly and easily. This allows you to avoid having to do a lot of coding to get some new functionality.

Also, we provide a turnkey build system. We have a public Amazon EC2 instance with a pre-built Ångström distribution. It's great that you can get source, but the problem is that if you want to build the kernel, Qt/X11, and everything, you're downloading source for seven hours, and then you're building for another seven hours, and then you find out it doesn't work — you misconfigured something — and then you're spending another seven hours just to get going.
It's a big pain in the butt. So what you can do instead is just go to Amazon and use the public AMI, and it comes up, and essentially it's like we hand you the keyboard of our dev environment, and you can just go from there with a pre-built image. It comes with a local git repo and Buildbot, so basically you can go to our AMI on Amazon, launch it, commit your code to the local git repo on the inside, and when you do a commit, it automatically triggers a build for you. You can watch your builds going and so forth, and at the end of the day there's a set of generated images, served via a web page from the server. There's one big binary file that you need to download once and burn onto your device; but once you've done that, the device is keyed to go back to your server for updates.

So then code development becomes not "code something, scp it on, and run it" — you simply do a git push, it does the build, and then through opkg it triggers an update, and your stuff just ends up on your device. This is something you get without having to do a lot of work. We call it a 30-minute running start: we want to get developers going on the platform without a lot of mucking around with cross-compilers and so forth.

The hardware is open. I'm a hardware guy — I like hardware — so of course I'm going to include the hardware porn. This is the case design; it's done in SolidWorks. You can download it and look at it.
I put graffiti on the inside, and bits and pieces. It's made with steel tooling out in China. I love the way it looks and the way it smells — it's true: if anyone's ever seen me get a new gadget, I always open the anti-static bag and smell it, because — I've been in the factories — you can tell what flux they're using, what process they're using, and sort of how the weather was in Shenzhen when they built it. That all comes with the box. And the other thing is, the Chinese will tell you that you can only use German steel for making these tools, because Chinese steel is too soft. I don't know what it is, but they always say "we can only buy German steel", and they tell me that as an advertising point. So the steel comes from Germany.

The schematics are, of course, open and online. I design in Altium; you can get the source files, download them, and play with them, and the PCB layouts are likewise available for you to browse and modify. So this is a complete open solution that we've provided: the hardware schematics, the PCB design, the FPGA, the software, a complete turnkey build environment — everything from top to bottom. We tried to be very completionist about this, and the goal we had is to go, in half an hour, from getting the device to essentially a production-grade build environment, where you can actually give it to someone else and say: here, have the device, we can push updates to you, and you can use it for your own application. And you can get it now, actually, at Adafruit.com — they're kindly being my distributor for the product.

So, just to recap — and I said I'd go fast, and I did go fast — I've demonstrated an HDCP man-in-the-middle implementation. It's a complete solution: we intercept the key exchange on the fly, we derive the shared secrets and synchronize the transmit ciphers, we multiplex overlay video using a chroma key, and we avoid decrypting data.
So if you notice, in the entire path we never actually decrypt data, so I call it DMCA-safe. The DMCA says it's illegal to circumvent copyright protection; people say, "wow, aren't you doing exactly that?" — but I didn't circumvent anything, I didn't decrypt anything, so how can you be mad at me under the DMCA? We also modify EDID records to force compatibility, and we enable a video compositing function: if you had an old TV, a legacy TV that wasn't a smart TV, you can go ahead and add that to it. You can modify video content now — you can block ads, you can show live internet commentary, and so on and so forth, on top of your device — and it's a completely open hardware and software stack.

And I like to emphasize that this is a non-infringing use of the master key. There's a sort of legal notion that if you have something that has only an illegal purpose, then owning it is tantamount to committing the crime — if you have a bunch of, I don't know, TNT in your backyard, then they think you're a terrorist, right? Prior to this, the only use of the HDCP master key was circumvention. And the thing I like the most about this is that it blurs the association of the master key with piracy. Now you can have the master key and there's a legitimate use for it: it's not infringing, it's commercially useful, it's been well implemented, it's not a sham. There's a sort of argument that says, well, you have this kind of "backup" use or whatever, and everyone says "backup, backup", and everyone knows that's tantamount to something else — but this one's a bona fide use. So it helps legitimize the distribution and existence of the master key.

So, thanks for your attention. I'll take questions now.

[Moderator] Okay, normally you would queue up, but I'll make an exception for you.

[Question] Hey bunnie.
[bunnie] Hey.

[Question] So, given that you're doing pixel-by-pixel substitution, is there a limit with your FPGA and your processor on how many pixels you can substitute, given the overall screen resolution?

[bunnie] Right. So it all happens on the fly, and the limit is set by the pixel clock rate of the FPGA. A standard Spartan-6, without the really expensive transceivers, is limited to a 95 MHz pixel clock — which is basically a gigabit line rate — and that corresponds to 1080i, or 1080p24. It doesn't do 1080p60, which is a common resolution, but you can actually overclock the FPGA, and it actually works most of the time to get to that. As a commercial product we block the overclocking, but it's very easily disabled — you could do it yourself if you wanted to.

[Moderator] If you have questions, please try to queue up at the microphones, but we have one from the internet, I think.

[Question] Okay, we have two questions. What can this do other than overlays?

[bunnie] Well, the UI you see here is a WebKit browser, right? It's a little bit tough in this environment for me to do the demo, but basically there are Android apps and iPhone apps that work with this, and I can take pictures and share them on the screen — just sling the picture to the screen — and you can also enter a URL and view a web page on the screen as well. Or anything else you can imagine coding with JavaScript and HTML. This is just sort of an example demo that we did to give you an idea of the flavor; it's not actually the final thing. It's really up to you guys to tell me what you want to do with it.

[Question] Okay, and another question: why don't you hook up your own EDID — or read the original and provide a modified copy from the controller?

[bunnie] Can you say that one more time? I didn't quite understand it.

[Question] The whole question, or the last part?

[bunnie] The whole question.
[bunnie] Why don't I hook up my own EDID — okay, why don't I just blast in a generic EDID? Because not every TV is a 720p or 1080p TV. There are some really crappy TVs out there, and you can damage them. If someone somewhere in the world still had a CRT, and we weren't careful about it and overdrove the sink, you could burn out the tube or something. So we have to read the monitor's capabilities and respect them, and then deselect the modes that we don't allow. That's why we do the overriding.

[Question] Okay, and the last one: how complicated would it be to use your own hardware to decrypt HDCP?

[bunnie] Right — how hard is it to decrypt HDCP? I knew that one was going to come. So, for, I guess, strictly legal reasons, there's a substantial barrier that I've put in to decrypting. It's not an impossible barrier; it's more of an effort barrier. For example, this device here — the actual device itself has the master key, but it has no public key of its own. It must borrow the public keys from a source and a sink for it to work, and it actually checks to make sure you have a properly licensed device on both ends of the link. That's for another legal reason: you want to make sure you're following all the licensing rules and so forth. In order to have it actually decrypt on its own, it needs to provide its own public key, and someone would have to put that in there. That's possible — to the extent that, in this crowd,
I mean, there are many smart people in this crowd who could do it, but a regular consumer could not easily go and hack the Verilog to do that. So it's theoretically possible, but for legal reasons I made sure there was at least a barrier, so it's not like one wire gets clipped and you have a radio that can listen to restricted bands or something like that, which would create a blurring of the line of the legitimacy of the product.

Okay, question over there.

Hi there. How many people have been working on this pretty impressive piece of hardware, and how long did it take to actually develop and build?

Okay, so the question was how many people are working on it and how long did it take. This hardware was designed in Singapore; I moved there about a year and a half ago. We conceived it about this time last year, with just some sketches shown to people to see what they thought. We did a stretch card, basically just the FPGA itself, plugged it into the LCD port of an existing Chumby device we had at the time, and demonstrated the overlay in May. Then we finalized the boards, did the tooling and so forth, and had production by October, and basically we've now been going through the process of getting it out to distribution. We had a team of, at its biggest point, about five or six people working on it, very talented people, and I'm really lucky to be working with them. Most of them are software guys; actually, all of them are software guys. I'm the only hardware guy on the team, so the hardware, the case design, the board layout, and the board bring-up were done by me, and the rest of the guys do all that really insane stuff with the build system and Linux and the software bits that I don't get quite so well.

Okay, another one over there.

How full is the Spartan, and do you have enough room to do a full alpha mixer instead of chroma key?
Right, so the Spartan is at 87%, which is pretty full as far as FPGAs go, but it's got some space, and there's stuff you can strip out. A lot of it is the line buffers; it turns out we don't actually need eight lines, you can get away with probably two lines in most cases, so that would free up a lot of resources. As for alpha blending: if you are willing to just outright decrypt the data, do the alpha blending, and then re-encrypt, it is actually trivial to do; it's just a very small path, and it wouldn't take much logic. Some very smart guys here at this conference that I've been talking to have given me really good ideas on how to do it without ever decrypting the data. I haven't implemented that fully, but we're going to give that a try and see if we can do something there. It should be able to fit.

Okay, another question from the internet, and then over there.

Okay: would it be a footprint-compatible replacement to use the Spartan-6 parts with the high-speed transceivers?

Yeah, so the question is, can we put in the T-series Spartan-6 that has the gigabit transceivers, which lets you do 3D TV and stuff like that. Unfortunately, Xilinx did not make them footprint compatible; you actually have to re-lay out the board to move between the two, and that's difficult. So there is a part you could drop in that would allow you to do the 3D TV standards and so forth, but it's like 50 bucks, and the current part I have is about a tenth of that cost. The feeling I have is that the market wouldn't bear that much added cost to the device, so we decided to use a much lower-cost part. There's a thought that we'll do a pro version, a higher-end version, for people who are willing to pay that much more for a device, but it would require a full re-spin of the PCB and so forth, and a lot more money.

Okay, two questions over there.

Now, the sliding up of the overlay, which could be seen a few times during the talk: is it an artifact of the overlay going out of sync with the video feed, or what is it? And can it be alleviated, or is it just there?

Yeah, as you saw, I was struggling a bit with the demo here. There's a bit of a hack going on: it's going from DisplayPort to HDMI to the overlay to a VGA converter, and so on and so forth, so somewhere along the way the sync is getting a little corrupted, and when I lose vsync it rolls a bit, just like a TV set would roll if you lose vsync. I think if I didn't have this VGA converter in the middle it would be much more stable; I never see it on a fully digital link. But that's my guess as to what's going on.

Okay, then we have another one from the internet, and then there.

Okay, this is a long question; I will try to make it short. There are commercially available HDMI capture interfaces, but unfortunately there are no OSS drivers for them yet. How difficult would it be to adjust this FPGA work to make it part of OSS drivers for this commercially available HDMI capture hardware?

I had trouble understanding: how difficult to write what kind of driver? OSS, open-source software. So basically, how difficult is it to take this hardware and turn it into something that does HDMI capture with open-source drivers? The question seems to hinge on the fact that there exist commercially available boards with an FPGA, an HDMI connector, and a PCI Express connector, and he wants to know how difficult it would be to adjust my approach to run on the FPGA in these existing hardware boards. It should be easy, I think. I mean, the Verilog is open source, so you can get that and compile it, and the interfaces are there. So if you had an existing dev board with PCI Express and two HDMI ports on it, you just have to map out the pins and get it to compile.

Okay, then there's a question.

Hi, is it possible to split one source to two or more different videos and TV sets? And what about merging audio, because HDMI can also transmit audio signals?

What was the last part of the question? Merging what? Audio, yes, okay. So the first part, I believe, was asking: can you split it to two different outputs? Yes, you can. I mean, the board itself doesn't have a second port on it, and one of the limitations of the chip I have is that I've now saturated the number of LVDS outputs, so you would have to go to a higher-end chip. But if you just want to replicate to N different ports, that's actually quite easy; it's not hard at all. You just have to pick which one is the master EDID that does the transaction and handshaking, and the rest of them slave off of it. It also assumes the stream is not encrypted, because each device will have its own key, so if it's all encrypted you can only go to one. As for audio, the wires are there to do audio.
It's a firmware update, just a firmware update, to do audio. Audio itself is a little more complicated, because there are CRCs around the audio, and some clock drift adjustment and so forth. I haven't gone through and put in all the code to do audio, but that's one of the things you should be able to do in the future: overlay audio as well, inject audio.

Okay, is there another question from the internet? Oh, we don't know. Okay.

Why don't you use an ARM FIQ to avoid the Linux kernel IRQ latency?

Oh, okay, why don't I use an FIQ? It was harder than the other way I did it, I think, as I seem to remember. A lot of the kernel stuff was done by some of the guys I work with, and I help with that, but I think the explanation I had was that we would have had to do a lot of tweaking on the existing kernel to get the FIQs to work, and it would have been difficult, so it was easier for me to do something in hardware to wrap around it. So that's what we did.

Okay, so it's a hardware solution. I may speak to that: I also had some similar problems, and the FIQ also has some latency, so it's not even a completely foolproof solution.

Yeah, even then. Okay, any more questions from the audience or the internet? Otherwise, we seem to be finished. Yes, give a warm applause to