Hello and welcome, everybody, to the very last session in this room: testing and remote access to embedded systems, DPI and LVDS display output, by Marek. Please, take it away.

Thank you. Hello everyone, welcome, I hope you are enjoying Prague. My name is Marek Vasut and this talk is going to be about testing and making DPI and LVDS displays available remotely.

I divided the talk into five or six sections. First of all, I'll talk about the motivation: why I actually even had to build my own hardware for this stuff, and what the process behind this was. Then I'll explain to you, hopefully, the behavior of the hardware, so you have some sort of an idea what it is that we will be capturing with this custom hardware. After that, I'll talk about two approaches which I took during the development of the hardware, because this slide deck is actually a combination of two slide decks. Two or three weeks ago I discarded the entire slide deck, as I found out something new, something which made it all obsolete, and I booked it all as a failed approach. Now I have another approach, which is so much better, so much simpler and so much nicer, so I'll talk about both. Finally, I'll talk about how you can build your own hardware, because what I'm going to talk about here is something which you can build at home, and you actually don't even have to solder if you really want to do it on the cheap. But if you want to build your own PCB, I'll show you how to do that as well. I'll primarily talk about DPI, but at the end I'll talk about LVDS a little bit too, and then we'll wrap it all up.

Quickly about me.
I work mostly on the U-Boot bootloader and the Linux kernel; every once in a while I send the odd patch to OpenEmbedded. But the part I'll talk about here is the FPGA hobbyist part, because this is what I do for fun.

So much about me, but let's get to the motivation part of this talk. This all actually started with some vendor: they had this development kit, I was testing software on it, and they had a bajillion of display modules attached to it. Every time I had to test the software, I had to test all the display modules, and I was thinking: yeah, this is not really good, I have to keep unplugging and replugging the modules, and it sucks. So I decided it would probably be a good idea to build some sort of a device which would allow me to not do that, and instead maybe capture the stream from the display interface directly, so I can just verify that what is coming out of the board is actually the expected signal, and I wouldn't have to plug and unplug these displays all the time.

Then, when I told him this, a colleague of mine said: yeah, if you can actually capture everything, then we can also use it in CI. Because if you have the whole thing from the display, the whole waveform, the entire frame, then you can store it in memory, analyze it in CI, and validate between different software versions that the stuff coming out of the device is still the same. It can sometimes happen that a software update slightly changes the timing of the display interface, and the display vendors are very sensitive to that. If you are operating their displays out of specification, they may not accept warranty returns when your devices start failing. So if you can do this in CI, you can validate that your display is still operating within specification.

And slightly after that, another colleague of mine, who is located in Brazil, has been bringing up some sort of a display, and the display has been in Germany.
So I was like: yeah, okay, so you have this webcam hanging above the display; this mode of operation is not really optimal, right? And it kind of clicked in my head at that point: if we had a device which would, again, be able to grab the display interface directly and stream it out somehow, then there would be no problem. There would be no webcam, there would be no distortion of the image captured by the webcam, and it would all be super nice. That is why I decided to build myself some hardware.

Now, I was also exploring the potential software options: what could I do to avoid building any hardware at all? One of the options which came to my mind was: okay, we can start grabbing framebuffers directly from the memory of the embedded device and then stream them out over the network one way or the other, either from fbdev or from the DRM subsystem; or we can use Weston with the RDP backend and just payload it onto the network and send it out. The problem with all of this is that fbdev is obsolete, and the DRM subsystem does not necessarily guarantee that you will have the full frame buffer in memory.
There can be damage rectangles on top of it, for example. Not everyone is running Weston with the RDP backend. And what all of this has in common is that you need to modify the software on the device, which may not be welcome.

Besides all that, all the information this would give you is that the buffer in memory has been rendered correctly. You would have no idea what is coming out of the device. That means that from the point where the CRTC, the scan-out engine, picked up the buffer from memory, as it is passing through the display pipeline all the way to the display connector and the display itself, you would have no information about what is going on in there.

This can partly be solved by using the functional safety functionality of some of these bridges and CRTCs, because some of these devices allow you, as a frame is scanned out through them, to calculate a CRC of the frame and return you the CRC. This is a functional safety feature; not all the CRTCs support it, not all the bridges support it, but the DRM subsystem does have support for it, and the Intel IGT tools do make use of it for CI testing. But then again, you do not get a frame out of it, you cannot stream anything out on the network, so again, it's not really useful for this. So I decided to build my own hardware, basically to grab the interface itself as it is coming out of the board, the embedded device.

Now, there are three different types of buses which you will find in embedded devices when it comes to display output. I'm not going to talk about the pluggable ones; that means no HDMI, no DisplayPort, no VGA, no nothing. I'll talk about the ones which are directly soldered on the board or somehow attached through some sort of an FFC connector. The oldest one is the DPI interface. A lot of you probably know it under the moniker of RGB; the name stands for Display Parallel Interface.
It's literally the oldest one. It works in such a way that the embedded device generates a clock, and on each edge of the clock the device generates pixel information. This is clocked out on data lines, of which there can be anywhere from one all the way up to 24 or maybe even more, and then there are three synchronization signals: horizontal sync, vertical sync and display enable. The signaling is usually 3.3 V LVTTL, but people are getting really creative in what they put on the pixel data lines. We'll talk about the very standard variant of this.

The other interface, which is really common in newer systems, is called FPD-Link; it stands for Flat Panel Display Link, and very often it is called LVDS. The reason for this is that this interface does use LVDS signaling on its differential data and clock pairs, but the encoding on those differential clock and data pairs is called FPD-Link. This interface uses one differential clock pair and three or four differential data pairs, unless it's dual-link LVDS; I'll talk about that later. It's basically a serialization of the DPI interface, so it's possible to turn the problem "I want to capture FPD-Link" into the problem "I want to capture DPI", and I'll show you how.

The latest and greatest interface is the MIPI DSI interface. It's a standard by the MIPI consortium, and it is quite different from the other two interfaces: it is packet-based. It still uses differential pairs:
So, one clock pair and one or more data pairs, at least in the D-PHY implementation, and in the D-PHY implementation it again uses two different voltage levels on those pairs. I'll not talk about this here, but maybe there will be a follow-up talk on how to investigate DSI.

First of all, I'll talk about the DPI bus, because this really is the simplest one. You basically get a clock out of the embedded device, and per clock you get pixel data, plus horizontal and vertical synchronization. Now, if you have an embedded device and it has a display, what you see on the display is the picture. But what is on the bus is actually not just the picture; there is more. The picture which you see on the display itself is just the active area. But in order for the display controller, the chip on the back side of the display, which is called the TCON, the timing controller, to do its own internal management operations, there have to be what are called margins. This is the stuff around the active area.

So when a frame is clocked out on a DPI bus, what actually happens is: we start in the top left corner, and at that point both the horizontal and vertical synchronization signals are asserted. For a few clocks the horizontal synchronization signal is kept asserted, then the horizontal synchronization signal gets de-asserted. At that point we enter the horizontal back porch; after that we enter what are still dark pixels, but if this were an active line, we would be clocking out active pixel data. So there is a line of dark pixels which is as wide as your display image; after that we enter the horizontal front porch. This repeats for the entire duration of the vertical synchronization pulse, which is here. After that there is a vertical back porch, which is again a couple of lines, and only once we get out of the vertical back porch do we have the first line where the horizontal sync is asserted, then it is de-asserted, we are in the horizontal back porch, and then there
is the first line of active pixel data. That is what you actually see on the display, clocked out on the DPI interface. After that there is a horizontal front porch again, and this repeats for all the lines which we actually see on the display. Once all the active lines with actual valid pixel data are clocked out, there is the vertical front porch, and once all of this is clocked out, that's when the frame actually ends. So it's not just the active pixel data.

Now, if we want to capture this kind of an interface, with these kinds of timings, we need to decide what it is that we want to capture. We basically have two options. One of them: we capture just the picture which is on the display. But that's kind of useless, because then we cannot analyze the display timing and tell whether there is maybe some sort of a timing problem. So we probably want to capture the whole thing, and this is what I decided to do: capture literally everything.

I was also thinking: okay, I don't want to capture just the pixel data, I also want to capture the state of some of the control signals, that means the horizontal sync, the vertical sync, potentially even the PWM. And this actually works very conveniently, because if I have an RGBX pixel format, I can store the pixel data in the RGB bytes, and I can store the state of the auxiliary signals (h-sync, v-sync, data enable, potentially PWM) in the bits of the last byte of the RGBX pixel. This way I will have the state of these signals, per pixel clock, included in the data which I capture.

Now what I had to do was some sort of a calculation of how much data I would actually be getting out of such a capture. I had a look into the Linux kernel, into panel-simple, and looked at the highest-resolution DPI display I could find. That was 1024 by 600 at 24 bits per pixel, and obviously this panel refreshes at 60 FPS.
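To make the RGBX packing concrete, here is a small Python sketch of what one captured sample looks like, and of the resulting data rate. The byte order and the bit positions of the sync signals in the X byte are my own choice for illustration, not necessarily what the firmware uses; the timing numbers are the ones from the talk.

```python
# One captured DPI sample: 24-bit pixel data in the R, G, B bytes, and the
# state of the auxiliary signals packed into the spare X byte.
# Bit positions in the X byte are illustrative assumptions.
def pack_sample(r, g, b, hsync, vsync, de, pwm=0):
    aux = hsync | (vsync << 1) | (de << 2) | (pwm << 3)
    return (r << 24) | (g << 16) | (b << 8) | aux

word = pack_sample(0x12, 0x34, 0x56, hsync=1, vsync=0, de=1)
assert word == 0x12345605

# Data rate estimate: the 1024x600 active area grows to 1344x635 once the
# margins (porches and sync) are included; 60 FPS, 4 bytes per sample.
total_w, total_h, fps, bpp = 1344, 635, 60, 4
rate_mb = total_w * total_h * fps * bpp / 1e6
print(round(rate_mb))            # ~205, i.e. the roughly 204 MB/s from the talk
```

Note that one extra aux byte per pixel pushes the rate up by a third compared to raw 24-bit pixel data, which matters for the interface choice discussed next.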
So that is what we have to calculate with, but we also have to factor in the margins, and if we factor in the margins, what we are getting is not 1024 by 600 but 1344 by 635. That's the resolution of the frame. We have to multiply it by 60 and we have to multiply it by 4, because 60 FPS and four bytes per pixel, and if we do that, the amount of data which we will be getting every second is 204 megabytes. So, 204 megabytes per second. But that number grows very quickly: if we pick a full HD display, 1920 by 1080 plus margins, it's roughly 500 megabytes per second which we will have to capture.

So we need some sort of a high-speed interface, and we have basically two options. Gigabit Ethernet is not an option, because that caps at 125 megabytes per second of raw line bandwidth, very far from 500. Any sort of bonding of Ethernet interfaces, or 10G Ethernet, is not an option either, because you don't have it on your laptop; it's just not ubiquitous, which makes it hard to use, and doing 10G Ethernet in an FPGA also just isn't super easy. But there is another interface on your laptops which has been there for a while, which is USB. USB 3 conveniently gives you a raw line bandwidth of 625 megabytes per second: 5 gigabits per second divided by 8. Of course there will be protocol overhead, so it will again be less than the 625, but it is way above the 500, which is fantastic for us.

The bonus is that there exist FIFO chips which can accept external data and turn it into USB 3 packets, and they handle all the USB communication for you. So all you have to do is use one of these chips, feed data into it, capture the data on the other side, on your PC, one way or the other, and then somehow turn it into an image, right?

I found two of these chips which are USB 3 capable. One of them is from FTDI, and this is actually an entire series of chips: they either make a 32-bit FIFO to USB 3.0, or they even make a chip which behaves like a UVC webcam. UVC stands for USB Video Class, which is basically
what all modern webcams implement. And it sounds fantastic, right? If I could just turn the DPI image which I get into a webcam-looking stream, it would be super compatible with all the existing operating systems right away. Well, that's not true, and I'll explain why shortly.

The FTDI device is also good in that it doesn't need any firmware. It just needs to be configured one way or the other, with some sort of EEPROM programming or some sort of an FTDI tool, and then you can use it. The downside is that the FTDI device generates the clock on its parallel interface, either 66 MHz or 100 MHz; its clock is an output. DPI itself is also a clock output, so there has to be some sort of glue logic between those two devices to do the adaptation.

The other device I found is from Cypress. It's called the FX3. This one is far more flexible. It has a 32-bit CPU in it, ARMv5, which is ancient, but okay. It has DMA, and it has a flexible 32-bit interface, again on one side, with USB on the other. But the thing is, the 32-bit interface can also be programmed to receive a clock, so it doesn't only generate a clock, it can receive one as well. That's cool. There is also some sort of a state machine in it which can be programmed with their tool, but I'm not using that much. They even have a UVC demo which runs on the ARMv5 core, which is also nice. And one more thing which is nice: there is documentation for this chip, and it's complete. It's a really decent datasheet; everything is documented in there.
It is well documented, cool. But now the downsides. Downside one: their SDK doesn't fully run on Linux; the tool for configuring the 32-bit interface is just not capable of running on Linux in this day and age. The other downside is that their SDK contains blobs, so it's not fully open, and the license is kind of dubious. So, that's unfortunate.

But let's take a look at the possibilities, and then maybe how this could be solved. I took two approaches to this. With the first one I decided to go for maximum compatibility, because the UVC looked super appealing. Basically, if I could build a device which you plug into any OS, any computer, and it would just be able to capture the DPI interface, it would be fantastic, right? And both of these chips could do that, which is very nice. But this failed spectacularly. So I then decided to go for the simpler approach: just take the DPI, feed it into the bridge chip, and then do the capture directly on the computer, doing the soft processing of the data which I get out of the bridge chip on the computer, and displaying it somehow. This worked really well, and in the end it also removed the necessity for the extra glue logic.

The failed approach looked like this. I basically had the device under test, which was the DPI source. Then I had to have the glue logic FPGA there, and then the bridge chip, for which I used the FX3, and then I streamed the data over USB 3 into a PC. One of the problems was the USB Video Class, because it doesn't provide enough flexibility for me. The other problem was that the bridge chip in the UVC example expects a parallel camera sensor, which is also a little bit problematic.

Let's talk about UVC. UVC is the USB Video Class; it's a standard by the USB Implementers Forum.
It's a standard by usb implementers forum And the standard defines the following basically It says that there are some pixel formats which are supported and sadly 32-bit rgbx pixel format is not one of them as far as I can tell But there are non-standard extensions defined by various os vendors which are documented at some random websites poorly One of them is rgbx 8888 or 32-bit rgbx So I implemented this extension. I now had to patch the linux kernel. Okay patch is upstream It's part of linux stable back ports as well. So you probably have it on your pc now But that's already a problem with the uvc That you have to patch the kernel At least I had to The other thing is when you plug in your webcam into a Any pc and the uvc kicks in it reads the usb descriptors and figures out the resolution Which the webcam provides And this is done once when you plug the device in basically when it's enumerated Uh, it the uvc doesn't support dynamic resolutions So the problem I ran into was that um Sometimes I wanted to receive lines which were of varying length And if the uvc video driver in the kernel detects something like there's like a frame Which is just short a few bytes or something it will drop the frame Oops, apparently other oses do this as well Uh, luckily the linux kernel has a module parameter for the uvc video which is called no drop Um, so that way I can at least receive a frame which is short and get it into linux user space But then linux user space tools like gstreamer and ffmpeg will do the same check They will see whether the frame might be just short and then they will discard it So I had to patch also gstreamer and ffmpeg and ultimately the benefit of using the uvc is just gone Now um, the other problem is um, these uvc Chips and the firmware Expects parallel camera sensor. 
This works differently than DPI display output. The parallel camera sensor also has two sync signals, line valid and frame valid, but they indicate when valid pixel data is produced by the sensor. The stuff around the valid pixel data, these margins, is called dark pixels, and at that point the sync signals from the sensor are not active.

Here is a better infographic. Basically, what we have to do at that point is: we need some sort of an adaptation layer which would, in the FPGA, turn the DPI input synchronization signals into something which looks like CPI, camera parallel interface, synchronization signals for the bridge chip. So there has to be FPGA glue logic; there is no way around it. Luckily for us, what we want to capture is everything, and that means we can basically confuse the UVC bridge in such a way that we say: okay, we support a fixed resolution, we report it to the computer, and then we implement what is called an asynchronous FIFO.

An asynchronous FIFO is a device which allows you to transfer a lot of data from one clock domain into another clock domain in an FPGA; this is what is often used for that. The idea, at least in this case, is that DPI would be the slower clock domain. We would pull as much data as possible into the async FIFO, and on the other side of the async FIFO we would detect the horizontal synchronization pulses within the data. When we detect one, we would wait until the FIFO fills up with roughly one-and-something lines of the DPI data, and then, when it reaches some sort of a fill level, we would start draining the FIFO. We would drain data from the FIFO until the end of another horizontal synchronization pulse. This way we would be able to pull basically one full line of DPI data out of the FIFO, and if that line was short, compared to what we configured into the bridge chip and what we reported to the Linux kernel in the UVC descriptors, then we would just be sending blank pixels. This
way we can always send a full line of data, as the uvcvideo driver would expect. The other problem is that we also need to send the correct number of lines, and this is again problematic. We basically have to do line counting, and we have to use a trick where, when we have sent a certain number of lines, we just start generating fake pulses to confuse the state machine in the bridge chip, so that it thinks we already sent it the whole frame. The state machine in it is kind of simplistic, so this can be done, but again, it is another complication.

By the way, I tried this on an FPGA, a Cyclone IV E, with timing constraints and everything in place, and I could only get the async FIFO output frequency to 65 MHz, which is just not enough, especially for the 100 MHz capable input of the FX3. So I decided: okay, the UVC is not worth it. It's just not flexible enough, the firmware patching of the FX3 and of their UVC example is also problematic, and then I would need this DPI-to-CPI FPGA logic, which is also not great.

So I decided to go about this differently and really simplify it. I took the display output from my embedded device and connected it directly to the bridge chip, and then the bridge chip to the PC. The idea is that I basically start capturing the DPI interface as the clocks come into the bridge chip, and I just get the raw pixel stream. I get the pixel stream into a buffer on the PC, and then I somehow process it and do something about it. The software processing is not a problem; you just get a stream of data.
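As a brief aside, the line-padding trick from the failed UVC approach above (split the stream on h-sync pulses, pad every short line to the fixed width advertised in the UVC descriptors) can be modeled in a few lines of Python. This is only a software model of the FPGA logic, with made-up sample tuples, not the actual HDL.

```python
def pad_lines(stream, fixed_width, blank=0):
    """Model of the FPGA line-padding logic: `stream` is an iterable of
    (pixel, hsync) samples. Each rising edge of h-sync closes the current
    line, which is padded with blank pixels up to fixed_width."""
    lines, line, prev_hs = [], [], 1
    for pixel, hs in stream:
        if hs and not prev_hs and line:   # h-sync rising edge: line boundary
            lines.append(line + [blank] * (fixed_width - len(line)))
            line = []
        line.append(pixel)
        prev_hs = hs
    if line:                              # flush the last (possibly short) line
        lines.append(line + [blank] * (fixed_width - len(line)))
    return lines

# Two lines of unequal length both come out fixed_width pixels long.
stream = [(1, 1), (2, 0), (3, 0), (4, 1), (5, 0)]
print(pad_lines(stream, 4))   # [[1, 2, 3, 0], [4, 5, 0, 0]]
```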
It's no problem. The only problem here was the vendor tooling for the FX3. All right, and this is actually where my previous slide deck ended.

Then I was discussing something on the sigrok channel. sigrok is an open source logic analyzer and measurement equipment control suite, and so on. There are multiple projects associated with sigrok, like PulseView and like fx2lafw; you should definitely look into it if you're into electronics and you are interested in open electronics control tooling. fx2lafw is an open firmware for the predecessor chip of the FX3, the FX2, and it basically makes the FX2 behave like a logic analyzer. It allows the FX2 to oversample its input bus at a clock generated by the FX2, and stream each of the samples over USB into sigrok; sigrok has support for this. This way you can build a logic analyzer on the cheap.

Now, I was thinking: yeah, if only there was firmware like that, but for the Cypress FX3, that would be fantastic, right? It turns out there actually is one, written by this person, Marcus Comstedt. The suggestion was actually given to me completely out of band, by someone who comes to the channel under the nickname tank, so I'm very grateful for that suggestion, thank you very much. They basically said: yeah, I'm using the FX3 as a logic analyzer and it kind of works. It didn't click in my head immediately, but after a few days I was like: hmm, is this fully open? It actually is. Wow.

So then I realized that all we would actually have to do is take this and flip the clock direction, which should be easy. And since there is even sigrok integration for the fx3lafw firmware already, it's just a few patches; I could use that. So I rebased those patches, I tried fx3lafw, and it worked: I had a high-speed, actually SuperSpeed, logic analyzer out of this. Cool. So what is left?
All I had to do was flip the clock direction in fx3lafw. Since this is fully open, no blobs, no vendor stuff, no proprietary goo, I could easily do that. Then all that was left was to use sigrok to capture data somehow and then visualize it, and then potentially remove the dependency on these tools, because I didn't want to have too heavy dependencies, right?

So I patched fx3lafw. It was super easy, because the documentation for the FX3 chip is available, so I just flipped two bits. I did find out that one also has to disable the DLL when the clock input is activated, so I did that. fx3lafw, just like fx2lafw, even has a control endpoint, so you can configure it this way from your host PC. I added configuration for the pixel clock polarity at the same time, and I now have a patched firmware which can capture on an input clock. Cool. You can download it here; there is a link in the slides, so just grab the slides and look at the links. The slides also show how to compile it, so you can look it up later.

Now, the next step was to use sigrok to actually capture the data from fx3lafw, and sigrok supports this. But I was thinking: yeah, okay, I have to capture my data into a file. So I asked on the channel, and Gerhard was like: sure, look at the --continuous switch, which basically makes sigrok just continuously capture data into something, possibly a file, in different formats. sigrok also supports soft triggers; you can see the entire incantation here at the bottom. The soft triggers allow me to wait for the v-sync signal to toggle; that's basically the cue that this is the start of the next frame, and sigrok will automatically align the start of the capture to that start of the next frame. So now I'm getting frames into a file. So what's the next step?
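The same alignment the soft trigger does can also be done after the fact in software: scan the captured 32-bit samples for a rising edge of the v-sync bit in the aux byte. A sketch, assuming little-endian samples with the aux byte in the low byte, and an assumed bit position for v-sync:

```python
import struct

VSYNC_BIT = 1   # assumed position of v-sync within the aux byte

def find_frame_start(buf):
    """Return the byte offset of the first v-sync rising edge in a raw
    capture of 32-bit samples, or None if there is no frame boundary."""
    prev = 1                                 # treat an initial high as already asserted
    for off in range(0, len(buf) - 3, 4):
        (word,) = struct.unpack_from("<I", buf, off)
        vs = (word >> VSYNC_BIT) & 1
        if vs and not prev:                  # rising edge: start of next frame
            return off
        prev = vs
    return None

# Four samples with v-sync pattern 1,0,0,1: the edge is at the fourth sample.
buf = struct.pack("<4I", 0b10, 0, 0, 0b10)
print(find_frame_start(buf))   # 12
```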
Well, maybe I don't want to capture these frames into a file. The idea is: can I put these frames into a named pipe, that means a FIFO? It turns out this also works. If I put GStreamer as a consumer, a GStreamer filesrc, at the other end of the named pipe, and have the capture stream data into it, that just works. I'm getting frames into GStreamer, and I have the DPI capture there. This is the GStreamer pipeline that I used; I actually had to specify the explicit width and height and other caps of the pipeline, so that GStreamer would correctly interpret the data it's getting from the FIFO. But then I just got the GStreamer sink window, and the video data looked okay. Cool.

Then I decided to write my own tooling, because I didn't want to depend on sigrok, which has its own set of dependencies, and I didn't really want to depend on GStreamer immediately either, although it seems like it's a good dependency. So I've written my own tool, which just opens the USB device, sends the fx3lafw firmware a "start streaming" command, and then reads bulk data out of the USB. This tool allows you to do three different outputs: either it writes the data which it captures from the USB interface into a FIFO, or it can display it in an X window, which is the easy way to see what's on your display, or it can feed it into a GStreamer pipeline, which then also allows you to do an FPS counting overlay and sync signal visualization.

So, a demo, actually. This is how it looks in the tool. This is a Linux kernel booting on some sort of a machine. As you can see, there is a little bit of a gap here; these are the margins, this is the blanking area, and the actual active area is here. I can do the FPS overlay, cool.
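As an illustration of the kind of soft processing the tool does, stripping the aux byte out of the captured stream leaves plain 24-bit RGB data that any display sink can take. A pure-Python sketch, assuming an R, G, B, aux byte order in the stream (the real byte order depends on the firmware):

```python
def rgbx_to_rgb(raw):
    """Drop every fourth byte (the aux/sync byte) of an RGBX byte stream,
    leaving a plain 24-bit RGB stream for display."""
    out = bytearray()
    for off in range(0, len(raw) - 3, 4):
        out += raw[off:off + 3]       # keep R, G, B; skip the aux byte
    return bytes(out)

# Two RGBX samples turn into two RGB triplets.
print(rgbx_to_rgb(b"RGB\x05rgb\x00"))   # b'RGBrgb'
```

In practice one would do this with NumPy or SIMD rather than a Python loop, but the operation is the same.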
I can do the sync signal visualization here. You see no sync signals, because this particular display has a horizontal and vertical sync of one pixel, but you will see it later; I have a capture from a different display. I can also do FPS display using GStreamer running on the embedded device itself, just displaying it in Weston, and then I can have another FPS overlay on the host PC. So yeah, we can do this. Cool.

One thing which you have to be careful about is that when the embedded system reboots, it will stop generating the pixel clock, and the FX3 is a little sensitive to that. There are two ways to deal with this. One of them: you just disable the interrupts completely on the FX3, and then the FX3 just recovers when there is a new pixel clock, when the embedded system is done rebooting. That's the easy way. The FX3 also has functionality which allows you to detect clock loss, which is super cool: the FX3 internally generates an interrupt which says, oh, there is no more pixel clock, do something about it. I used the simpler option: just disable all the interrupts and let the FX3 recover on its own.

Now let's talk about hardware. How did I build this? Well, actually, I bought a development kit from Cypress with this chip. It has, at the bottom, two 2-by-20 pin headers, spaced 41.5 millimeters apart, and you can plug into them these easy-to-obtain jumper cables. With those you can just hand-wire it to your embedded device, and this will actually work; I got it to a 70 MHz pixel clock this way. It's not how it's supposed to be done, but okay, it worked. So essentially you need the development kit and a few cables, and you are good to start with this.

But then I decided to do a little bit more permanent solution, so I designed my own PCB in KiCad. And now there are going to be a lot more pictures. So this is the
So this is the Uh schematic which I used in chi-kat as you can see here is the board connector of The development kit which I used here is the two two by 20 connectors of the fx3 development kit The rest is just wires Um, I used the chi-kat pcb designer And had it manufactured this pcb in some sort of a pcb house then I populated it with just connectors And this is the actually the end result. So this is the fx3 development kit. This is my embedded hardware which I wanted to test and the PCB is actually underneath this so you cannot see it but hey You saw the captures from this A little bit before that Um now what I still want to talk about is lvds pass Um, so the deal with lvds is that it's basically a serialization of dpi. It uses One clock lane differential and three to four data lanes also differential um Low voltage differential signaling But the point is with lvds you can deserialize it into dpi and then capture the dpi So you basically turn lvds problem, which you have into a dpi problem, which you just solved Easy This is how the pixel formats on the lvds look like there are only three so that makes it again easier This is all you have to deal with And um the chips which you have to use for this They are available from multiple vendors From ti from on semi from time here are some types which you can take a look at The only problem with this is that you have to be careful about routing the differential pairs for lvds You have to add termination just before the chip. 
There are 100 ohm termination resistors in front of each differential pair of the deserializer, and soldering the TSSOP-56 packages takes a little practice. But then you can probably have your PCB house just manufacture the board for you and populate it for you, so it's just fine.

One extra detail about LVDS is that there exists something called dual-link LVDS. Single-link LVDS usually caps at something like 1280 by 800. Display vendors obviously wanted to pump full HD through it, so the idea was: okay, let's duplicate the LVDS bus completely. So they now have two clock lanes and eight data lanes, and they just send two pixels per clock cycle of the bus, one odd pixel and one even pixel. To capture this stuff, I guess the best approach would be to capture it just like single-link LVDS, get two buffers, one with odd pixels and one with even pixels, and then just merge them on the PC using SIMD instructions, and that will be that.

So, a demo of the LVDS-to-DPI capture. Here, as you can see, there are more margins, because this display actually has long sync signals. Again, the FPS overlay works. Here you can actually see the synchronization signals. The purple stuff here, that's the horizontal sync asserted; the blue stuff on top, that's the v-sync asserted; the gray stuff here, that's h-sync and v-sync both asserted. So that's the sync signal visualization. Linux is still booting. Cool.

Here is an interesting behavior. As you can see, there is this kind of a smear, which shouldn't be there, right? This is the controller actually keeping the data lines in the last state, from when the last pixel was clocked out, just before the controller entered the blanking period. That is why the pixels are just replicated all around the image here. I would have never noticed this if I was looking just at the display, right? But here I can see it, because I'm capturing the pixel data all the time. Cool, right?

And then I built a schematic for this in KiCad again.
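To close the loop on the dual-link capture idea mentioned above: once you have one buffer of even pixels and one of odd pixels per line, rebuilding the full line is a plain interleave. The talk suggests SIMD for this; here is a pure-Python sketch of the same operation, with made-up sample values.

```python
def merge_dual_link(even, odd):
    """Interleave the even-pixel and odd-pixel buffers of one line of a
    dual-link LVDS capture back into a single full-width line."""
    assert len(even) == len(odd)
    merged = [0] * (2 * len(even))
    merged[0::2] = even               # pixels 0, 2, 4, ...
    merged[1::2] = odd                # pixels 1, 3, 5, ...
    return merged

print(merge_dual_link([0, 2, 4], [1, 3, 5]))   # [0, 1, 2, 3, 4, 5]
```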
The only thing which I had to do was design myself a schematic symbol for the deserializer chip, in the KiCad integrated schematic symbol editor. Otherwise, I again designed my PCB in KiCad. This is the 3D visualization of the PCB, and this is the actual PCB in reality, as it was manufactured and populated. This one has been hand-soldered, so it's kind of eh, not nice. And this is the end result; this is what you saw the captures from, this is the thing in operation. So again the FX3 devkit; the PCB which I designed is actually visible here, and this is the development board which I'm using. So this is the LVDS capture going on right there.

And so, what are the next steps? Well, obviously MIPI DSI, right? So what do we do about MIPI DSI? MIPI DSI is a little bit more difficult, and it will definitely need an FPGA. Since the MIPI D-PHY is kind of ubiquitous, I started looking around, and I found an app note from Intel which basically says: okay, if you have an FPGA which has a suitable bank voltage configuration, then you can use a couple of resistors, connect the DSI D-PHY to the FPGA, and the FPGA will be able to receive the signals. Now, there has to be an IP in the FPGA which does the byte and lane synchronization, the depacketization and so on, and the upside is that the open FPGA people have already implemented this: there are actually two implementations of a CSI-2 RX for the D-PHY. And this is fine, because CSI-2 and DSI D-PHY RX, at the point where the depacketization happens, are basically identical. So we can just pull that part from either of these CSI-2 RXes, feed the packets into the FX3, and then again do something on the computer, maybe analyze the packets or whatever. So there is that.

Let's wrap it up. What I wanted to say is: the hardware is easily obtainable, you buy the FX3 devkit and build an adapter board for cheap; the software you can download at the links in the presentation; and the build you can do at home, trivially, sure. And
with that, do you have any questions?

Unfortunately, we don't have any time for questions. So if you have any questions for Marek, you can ask him right now, because I don't think there's any further session after this one. Thank you.