So, okay, it's time for an XTRX update. For those who haven't heard about XTRX: it's a mini PCI Express form factor board, 30 by 50 millimeters, a really small board. We tried to use two PCI Express lanes to get the maximum performance, up to 10 gigabits per second of raw bus speed. We use the LMS7002M RFIC, and we actually use both of its channels, so we do 2x2 MIMO here. And as all our SDRs do, it has a GPSDO on board — we started with our first SDR having a GPSDO, and we're still putting one on board, so it's our signature feature. We also managed to fit coaxial clock synchronization and all the other stuff like GPIO and a SIM card reader. As you can see, it's really packed, so there's no more space to add extra features. So, what's inside? We use an Artix-7 FPGA, currently the 35T. We have a level-translation chip for the SIM card. We have a GPS module for the GPSDO, which can also output raw GPS data over UART if you need it. Since it's a PCI Express device, it has really low jitter and really low latency, so it's easy to make, for example, an IEEE 1588 clock source for your network, because it has GPS. But the mini PCI Express card specification only mandates one PCI Express lane — the second lane is optional — and most motherboards don't support the second lane. For that we developed a special adapter; I'll talk about it later. And for motherboards that don't have the PCIe lines routed at all, we added a USB 2.0 PHY chip. It's a last resort: if you don't have PCI Express, you can at least do something over USB 2.0. XTRX has 12 GPIOs, five of which are differential pairs — that's LVDS, so you can do really high-speed data on these GPIOs. XTRX uses a very efficient power system; I'll talk about that later too. And we support all bands available from the LMS7 chip. The main goal was to support the highest sample rate we can, which is above a hundred megasamples per second.
The first talk about XTRX was actually in 2016, so I'll try to reconstruct the chronicle of events during our development. It's been a long while of developing and polishing things, but we are finally close to the final revision, the final production run. As you can see, there were several samples, and each time we weren't happy with something and tried to improve it in the next one. It seems we ran into the classic second-system effect — XTRX is our second SDR, and when you're doing a second system, it's always easy to slip into endless improvements. So what issues were we fighting with? As I said, we tried to get the maximum sample rate possible. The LMS7 documentation says it can theoretically do 160 megasamples per second, so we thought we could do that easily. Our first revision showed it's not really achievable, because even at 100 megasamples per second you get a 200-megahertz DDR signal on the LML bus, and the LML interface is CMOS — and CMOS at those speeds is really tricky. Typical DDR RAM, DDR2 for example, uses a different electrical interface for exactly that reason. What we managed to do on our boards is 90 megasamples per second in MIMO and 120 in SISO. And if you do some black magic, let's call it that, you can do significantly more. So what kind of black magic? We introduced an extra DC-DC converter for manipulating the so-called VIO voltage. The VIO rail for the interface can be changed dynamically from 1.0 to 3.3 volts, trading off performance against power usage for your application: if you're running a low sample rate, you can reduce the voltage and get extra power savings, and when you need the maximum performance, you can increase it. But what we noticed in our design is that above 90 megasamples per second in MIMO, it stops working when the temperature rises.
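As a purely illustrative sketch of the VIO trade-off described above: one could drive the rail from a simple policy keyed on the requested sample rate. The thresholds and voltages below are invented for the example — the real board is programmable anywhere in the 1.0–3.3 V range, and the optimal points depend on the silicon and temperature.

```python
# Hypothetical VIO selection policy: raise the LML bus I/O voltage
# only when the requested sample rate demands the extra drive
# strength. All numbers here are made up for illustration.

def pick_vio_voltage(msps: float) -> float:
    """Return a VIO voltage (volts) for a requested sample rate (MSPS)."""
    if msps <= 10:
        return 1.2   # low rate: minimize switching power
    if msps <= 60:
        return 1.8   # mid rate: a common CMOS I/O level
    return 2.5       # high rate: stronger drive for signal integrity
```

The point is simply that the voltage only ever goes up with the rate, so power is spent only when the signal integrity actually needs it.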
And what I've achieved currently is around 115 megasamples per second in MIMO mode, but it only works up to 30 degrees Celsius, so you need to actively cool the whole system to run it. Moreover, you need to do a special calibration. What kind of calibration am I talking about? It's all about the DDR signal. A DDR receiver needs to know the right time to latch the data on the bus — the so-called phase delay. DDR3 RAM, for example, has a built-in mechanism in the PHY to find the optimal point for this latch. DDR4 RAM is even more complex: it does channel equalization for each bit and finds the best attenuation and other parameters. We tried to do something similar here. We wrote a special program that drives an LFSR stream over the parallel bus at different phase offsets and checks which data lanes introduce errors; based on that, we can find the optimal phase for the mode. You can see that in this 140-megasamples-per-second SISO mode only the two least significant lanes, bit number two and bit number one, are flickery — they pick up noise somehow — and these bits are what limits that speed. If an 8-bit RF signal is enough for you, you can push a bit higher, since the lowest bits will be rubbish anyway. But that's not the only issue at high sample rates. Even if you can get 100-plus megasamples per second out of the hardware, handling it in software is also challenging. For example, a typical fosphor setup on my laptop breaks at 65 megasamples per second — it can still display something at higher speeds, but it constantly drops frames, so you can't see the picture in real time. More optimization is needed there. Another challenging part is 12-bit raw streaming, because the 12-bit to 16-bit transformation at those rates actually takes a lot of CPU. I started writing special SIMD code for it, but haven't finished yet, because doing it in a really optimal way is tricky.
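To show what that unpacking step involves, here's a minimal sketch in plain Python. The real implementation would be SIMD C, and the exact wire layout here is an assumption for illustration — two little-endian 12-bit samples packed into three bytes:

```python
def unpack_12bit(data: bytes) -> list[int]:
    """Unpack pairs of 12-bit signed samples from 3-byte groups
    into sign-extended 16-bit integers.

    Assumed layout (an illustrative guess, little-endian, sample A
    in the low bits):
      byte0 = A[7:0]
      byte1 = B[3:0] << 4 | A[11:8]
      byte2 = B[11:4]
    """
    samples = []
    for i in range(0, len(data) - 2, 3):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        a = b0 | ((b1 & 0x0F) << 8)
        b = (b1 >> 4) | (b2 << 4)
        # Sign-extend each 12-bit value to a Python int in [-2048, 2047].
        for v in (a, b):
            samples.append(v - 0x1000 if v & 0x800 else v)
    return samples
```

Even this tiny loop makes the CPU cost visible: every pair of samples needs shifts, masks, and a sign-extension branch, which is exactly what SIMD shuffles and arithmetic can collapse into a few wide instructions.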
So we'll release that later. And what can we really achieve in terms of bus performance? PCIe 2.0 x2 means 10 gigabits per second — that's what the specification says, but it doesn't mean that's the payload you can actually achieve. First, the physical layer uses 8b/10b encoding, so at the link layer you get about 8 gigabits per second. Then at the transaction layer, each transaction carries at most 128 bytes of payload on most systems — some modern Intel and Xeon systems can do 256 bytes, but my laptop does 128 — and that, plus the control traffic and other overhead, limits you to about 6.8 gigabits per second. So it's not a perfectly efficient bus, but that's what you can really get. And our DMA implementation is really efficient: what I could achieve is 6.7 gigabits per second, so it's almost the theoretical limit. USB 3.0 is the same picture. It says 5 gigabits per second, but that's only at the line level; it does the same 8b/10b conversion, so you get about 4 gigabits per second. What we achieved over USB 3.0 is only 3.1, but I'll talk about that later. Another point was power optimization. The mini PCIe form factor strictly declares 2.5 watts of power consumption maximum. When we did our first board revision, we used three DC-DC converters and two LDOs, and it turned out we were above three watts in most operational modes. We found that a single LDO dropping 3.3 to 1.8 volts for the LMS chip was dissipating almost a watt of heat by itself. So we completely redesigned the whole power system of the board. Now it's eight DC-DC channels, all programmable from software over I2C. You can set any voltage from 1.0 to 3.3 volts, but you need to be very careful, because applying a high voltage to a low-voltage rail can simply burn the chip. That needs special care, which is why it's all hidden from the normal software layer.
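The bus-efficiency arithmetic above can be sketched in a few lines. The 24-byte per-packet overhead is an assumption (framing, sequence number, header, and CRC for a typical TLP); real overhead varies with header size and link-layer traffic, so treat the result as a ballpark rather than a spec value:

```python
# Rough PCIe 2.0 x2 payload-rate estimate: 8b/10b line coding
# (80% efficient) plus an assumed ~24 bytes of per-packet overhead.

def effective_gbps(raw_gbps: float, max_payload: int,
                   tlp_overhead: int = 24) -> float:
    link_gbps = raw_gbps * 8 / 10                      # after 8b/10b
    tlp_eff = max_payload / (max_payload + tlp_overhead)
    return link_gbps * tlp_eff

print(round(effective_gbps(10.0, 128), 2))  # 128-byte payloads -> 6.74
print(round(effective_gbps(10.0, 256), 2))  # 256-byte payloads -> 7.31
```

With 128-byte payloads this lands right around the ~6.8 Gbit/s ceiling mentioned above, and it also shows why systems supporting 256-byte payloads get a visible boost.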
If you need to modify it, you have to dig into the low-level library and tweak things there. We dedicated three DC-DC rails specifically to the LMS chip — it uses three supply voltages, so we have a separate channel for each of them to get the maximum power savings. And the result is actually pretty good: for most cases we are within the spec, but if you do something crazy — running both MIMO channels, receiving and transmitting at the same time, doing preprocessing on the Lime chip, and pushing the maximum sample rate — it unfortunately goes above the spec, and it's basically impossible to improve further here. Another point: although PCI Express is a really good bus — robust, low latency — it's hard to work with, because most laptops nowadays don't have a mini PCI Express slot. And Thunderbolt 3 is not really widespread yet. During the previous year Intel announced they're removing the license fee for Thunderbolt 3 chip manufacturers, opening the specs more, and so on — but there's still not much Thunderbolt 3 in the consumer sector right now. So we had to stick with USB 3, especially for debugging, for development, and for general convenience. Initially I thought it was completely impossible to bridge from USB 3 to PCI Express, but luckily we found the USB3380 chip. It became famous in security R&D for DMA attacks and other research on the PCI bus — people use it for many uncommon purposes, but we used it for exactly what it was developed for: bridging PCI Express to USB. Since the bridge is not transparent — you can't just do DMA straight through a USB 3 endpoint — you need to write a special layer of software compatibility. So we developed a special library. It's already published on our GitHub, and you can use it for XTRX or any other PCI Express device you want to drive over USB 3. And the performance is actually okay.
We were able to get more than 100 megasamples per second in SISO mode, so it's really usable. And one more interesting feature: this chip has 4 GPIOs, and I thought it would be really good for JTAG. When I first did JTAG over these GPIOs with simple bit-banging through libusb, just rewriting the full FPGA image took me almost 40 minutes. That's not the user experience you want. After optimizing the libusb side to batch multiple transactions, I got it down to only 27 minutes — still not usable. But using the 8051 microcontroller built into this chip, I got it below 4 minutes initially, and with assembly optimization of the bit-banging I got it to slightly more than a minute. And a minute is okay — you can wait a minute to upload a new image; it's not an hour. Another challenge for the bridging is that this chip has a limited number of endpoints — four of them — and each endpoint can buffer four kilobytes maximum. For really massive transfers that can be a problem, and it actually was. When I wrote the initial design using one endpoint, I could achieve up to 160 megabytes per second on transmit and 128 on receive. So I implemented what I call interleaved mode: I send odd packets to endpoint zero and even packets to endpoint one. That effectively doubles the FIFO size, because both endpoints are operating at the same time, and I could achieve more speed. Interestingly, RX got more gain out of this than TX — I don't know why. The real disadvantage is that you need to take care of USB timeout events: if you get a timeout on the first endpoint and no feedback from endpoint zero, your data can get messed up, so the error handling in this scheme is not so easy. Initially I also wanted to add a CDC interface for our GPS and SIM card, but there are no endpoints left to do that.
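The odd/even interleaving just described can be sketched as simple bookkeeping. This is an illustrative model, not the actual driver code — the class and method names are hypothetical — but it shows the two pieces: distributing packets round-robin across two endpoints, and merging them back into sequence order on the other side.

```python
from collections import deque

class InterleavedStream:
    """Toy model of dual-endpoint interleaving: even-numbered packets
    go to endpoint 0, odd-numbered to endpoint 1, doubling the
    buffering in flight. Names are hypothetical."""

    def __init__(self):
        self.endpoints = [deque(), deque()]  # one queue per endpoint
        self.seq = 0

    def submit(self, packet: bytes) -> int:
        """Queue a packet on endpoint (seq % 2); return the endpoint used."""
        ep = self.seq % 2
        self.endpoints[ep].append((self.seq, packet))
        self.seq += 1
        return ep

    def drain(self) -> list[bytes]:
        """Merge both endpoint queues back into sequence order.
        A timeout on one endpoint with none on the other breaks
        this pairing - that is the hazard described above."""
        merged = []
        while self.endpoints[0] or self.endpoints[1]:
            for ep in (0, 1):
                if self.endpoints[ep]:
                    merged.append(self.endpoints[ep].popleft())
        merged.sort(key=lambda sp: sp[0])
        return [pkt for _, pkt in merged]
```

The sequence numbers are what make the merge safe; lose a packet on only one endpoint and the remaining stream pairs up wrongly, which is exactly the timeout problem mentioned above.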
And that's why it's not exported as a usual CDC-ACM device. Another thing I noticed working over libusb: above 250 megabytes per second, you can't get the same low jitter as in PCI Express mode. Even a real-time kernel, or the real-time options of a stock kernel, don't help much — they definitely help, but they're not a complete remedy. A kernel driver would probably help, but we don't want to do that, at least right now. For lower speeds, though, USB 3 works great. So what does it look like in the FPGA? We tried to keep our FPGA design as simple as possible, with the fewest dependencies possible. There are basically two buses. One bus is 64-bit at 125 megahertz — the DMA bus, the high-speed bus. The other is an AXI bus for peripherals: we have a simple UART, a simple I2C, and a special QSPI core for firmware updates and for storing extra data, for example calibration data and other things. And we have the USB 2 core for the cases where PCI Express doesn't work. We also introduced a RISC-V soft core. Initially it's used only for USB 2 enumeration — that's done in the soft core — but it's very useful for other applications: we can do tuning from the soft core, run commands, do calibration, anything like that. So what's our current utilization? We're on the Artix-7 35T now, and we occupy 100% of the BRAM. It's mostly the RX and TX FIFOs, and it can be reduced — it's overkill so that it works well with any system. For example, on an Atom or Xeon motherboard I could use a quarter of this buffer and it would work without any overruns. But on my laptop, with its fancy 4K screen, switching between windows can easily stall the bus for some time, and without proper buffering you can't keep the same level of performance. For some systems it's a must. We don't do any DSP here, so that part of the fabric is completely free.
And we want to use it as an extra accelerator. We're thinking of doing some acceleration for 4G, for LTE and other networks, but there's no release yet. And almost a quarter of the LUTs are free, so you can add whatever else you want. As I mentioned, we added USB 2 support: not all mini PCI Express slots actually have PCI Express lanes wired, and not all users actually need 100-plus megasamples per second. For things like Raspberry Pis, USB 2 would be really useful. Currently I have an implementation of the ULPI interface to the endpoints. Enumeration is already done, but no host libraries are provided yet, so development on top of it can't start. I hope to finish this work right after we ship the boards. So what's the current status of the software? All our host libraries and the kernel driver for PCI Express are already on GitHub, so you're welcome to use them for XTRX or any other projects. And we provide support for third-party software. We definitely provide a SoapySDR plugin — not all features are implemented in it yet, but we're working on it. So if you have some application in mind, let us know and we'll test whether that feature works fine. gr-osmosdr definitely works fine; we tested it some time ago. As we discussed yesterday, OsmoTRX can also use our native interface. That was really interesting, because in our stack only OsmoTRX depended on UHD, which pulled in Boost and tons of other dependencies, and by switching it to the native XTRX interface we could get rid of tons of unneeded dependencies. The FPGA code is not published yet; it will be published as we ship the boards. We're also working on multi-XTRX synchronization right now. And we definitely want to deliver a better user experience, so if you know what to test, tell us and we'll definitely test it.
So once we got everything working, we thought about what to do next, and we decided a crowdfunding campaign on Crowd Supply was a great way to push XTRX. One question was what exactly to put on Crowd Supply, because we had so many ideas: XTRX could be combined with different FPGA options, different temperature grades — industrial or commercial — with GPS or without GPS. Initially we wanted to offer many options, but that's a really bad idea for crowdfunding: most likely you'll sell a few pieces of this and a few pieces of that, you run into higher costs, and you just end up increasing the price of the most popular options. That's why on Crowd Supply we offer just one variant of each thing: one XTRX, one USB 3 adapter, one PCI Express to mini PCI Express adapter, and one massive-MIMO package. And since it's just one USB 3 adapter, we figured it was a good idea to put everything we could do, everything we could imagine, into that single adapter. Here is what our first design looked like. In the center — if it's not clear — is an OLED display that can show not just text but graphics; the initial idea was to draw a spectrum there and other fancy stuff. But we were afraid of spending another year of development on it, so we got rid of it. You can still connect a display to the driver data lines, though. So that's a brief history of our USB adapter. This picture shows our first revision — actually a pre-revision — and the last revision uses a six-layer board, just to get rid of emissions on the USB 3 and PCI Express lanes, because the earlier design had the high-speed lanes on the top layer and you could actually see that in the spectrum. And here is what the final design looks like: essentially the same, but without the display. I have it here — I can pass it around if somebody hasn't seen it. So what's interesting in this design?
As I said, we were fighting power consumption, and we want this SDR to be passively cooled. That's why we introduced a special thermal design: the case itself acts as a heatsink for the board. The aluminum brick in the center sits right below the LMS7 chip, so when you screw the board down, the chip attaches to that heatsink and all the excess heat is dissipated into the case. This is what's inside the brick, and this is what it will look like with the final board. Our boards are actually being manufactured right now; we'll get them during the next week, and then we can test everything including USB 3. And another nice thing about the board: you can remove the PCB from the case and work with it on the table, or install it in the case — it's up to you. Another product we put on Crowd Supply is our PCI Express to mini PCI Express adapter. The main reason for it is that XTRX uses two PCI Express lanes, and there's no adapter on the market that actually wires up two lanes. Our adapter goes from a standard PCI Express x4 slot to a mini PCI Express socket, so it actually uses both lanes, and it also has preamps, LNAs, JTAG, and other stuff. That said, I should add that if you don't need all this fancy stuff — no LNAs, no amps, no x2 PCI Express — you can buy a very simple mini PCI Express to regular PCI Express adapter on Alibaba for five dollars or so. The only limitation is that it will be just one lane. Here's the performance of ours: the pre-driver gives almost 20 dB across the whole band. It actually has a small gain dip at low frequency, which we'll optimize with different coupling capacitors, and the LNA gain is not so flat, but the LNA works almost everywhere here. So, what's the overall status of our Crowd Supply campaign? There's good news and bad news. The good news: our hardware design is finished, and the final prototypes are either in hand or will be manufactured and delivered soon.
So no more optimizations are planned — we just need to manufacture it. The bad news is that we can't make the initial delivery date, because one of the components — our crystals — will only be available in July, and we can't just use another crystal, because there's no alternative at that price and performance. So, what's in the future? Since the Crowd Supply version has just one XTRX variant, we're thinking of doing two options: a Pro option, mostly for industrial and embedded applications, and a Lite option with a smaller FPGA for commercial usage. And XTRX is not just a board — it's almost a framework we'll use to build more products. Currently there are two things we're thinking about. One is an M.2 version, but with M.2 it's always complicated to decide which form factor and which key to use — if you have a good idea, please give us feedback, because for us it's really hard to guess. Another thing we're considering is an XTRX variant with an ARM on board to make it self-contained. Maybe during this year, maybe not — we'll see. So thank you. Q: As far as I understood you, you are not doing digital signal processing on the FPGA at the moment? A: Yeah, because everything is done on the Lime chip. Q: But so you are streaming digital baseband? A: Yeah. Q: Well, I mean there is a downsampler built into the LMS7 chip, so we can sample at a higher frequency than what we are streaming. I just wanted to comment — you said you have a native backend for OsmoTRX or something like that? A: Native support for OsmoTRX, yeah. Q: But so we are not talking about the UHD one then? A: Yeah. Q: Okay, and where's that — on your GitHub? A: Yeah. Q: Not in the Osmocom one, right? A: Not in the Osmocom one. Q: Okay, are there plans to put them together? A: Yeah, there is a plan, because I need to do more refactoring: what I did was remove all the UHD stuff and put my own stuff in, so it can't be used with UHD anymore.
A: So we need to decide how to merge this better, because there's no abstraction in OsmoTRX to use one backend or the other — I had to replace some parts to make it compatible with XTRX. Q: The cheap approach is what we do in OsmoBTS, which would basically be to simply link: when you build OsmoTRX, you link one executable that talks to UHD, you link another executable that talks to XTRX, and you link another one that talks to the USRP. And I think that's fine — in the end we're not talking about a 200-megabyte executable but a very small binary. So I think it's okay that this way we don't need to introduce big infrastructural changes now. We just link different object files and create one binary for each target. I think that's perfect. A: Yeah, that can work.