So, the next speaker in this hall is Tim "Mithro" Ansell. He will give you an introduction to SymbiFlow, or basically, yes, finally, the GCC for FPGAs. He's an open source developer, a Python guy, and maybe a few people know him from his other project, HDMI2USB, about which there will be another talk later this Congress, so also catch that. And now I can wish you a lot of fun and a lot of informative takeaways from Tim. The stage is yours. Thank you. So, firstly, I'm going to give you a quick promo for another project I do. You may know me from a project called Tomu. That was an ARM microcontroller in your USB port. Since I really love FPGAs, we decided to do an FPGA in your USB port. I have a whole bunch of pre-production hardware here. If you help us out, I will give you one, but be warned, it does not have a bootloader yet. I'm still working on that. You can use it as a RISC-V CPU, if you want, rather than as an FPGA. It runs MicroPython, that definitely works, and it has a fully open source toolchain, i.e. what I'm just about to talk about. So if you're interested in playing with it, I'm interested in getting you hardware. If you contribute, this is a way I incentivize people. So the first question is, what is SymbiFlow? SymbiFlow is a fairly new project; you may not have heard of it before. The first and foremost thing is that SymbiFlow is a community. It's not just me; I am not the person who's done 90% of this work. These are all people who contributed in some way to the SymbiFlow project. This is important to understand, because the reason I'm up here talking is that we need you to contribute to SymbiFlow. To get to where we want to be, where we can replace the proprietary tools with fully open source equivalents, there's a huge amount of work that needs to be done. I don't have the time to do it all myself. I need everybody in the audience to contribute in some way to make this possible.
So that kind of tells you what the project is, but what you probably came here to learn about is the toolchain. And I have no idea if the people in the audience are FPGA experts or not, so I'm going to start at the basics. If you think of software tooling, you have a bunch of software, and then you have a compiler that turns it into a binary. You can split the compiler into a generic front end, which has a bunch of generic optimizations, and a target-specific back end; there's an ARM back end and an x86 back end, and maybe a MIPS back end if you're into that. Ultimately, when you're doing C++ for, say, x86, you take the C++ software, run it through a generic part that is shared and then an x86 back end, and so you're turning C++ files, through this compile step, into a binary that you run on your x86 hardware. We can parallel this in the EDA tooling ecosystem. You have a description of your hardware in a language like VHDL, Verilog, or SystemVerilog. Then you have synthesis tools, which take that and convert it into digital logic. And then you have a back-end-specific part that converts it into, if you're doing ASICs, the masks that a chip gets made from, or for FPGAs, a bit file that you load onto your FPGA. SymbiFlow is currently Verilog tooling: it takes Verilog and generates that binary, or bit file, that goes onto your FPGA, end to end. This is the SymbiFlow toolchain. As you saw in the title, we describe this as the GCC for FPGAs. When I say that, I don't mean we're taking C and C++ and converting it into Verilog or directly into something that runs on the FPGA. I'm using it as an analogy: some of the core things that make GCC what it is are that it's completely free and open, it's cross-platform and multi-platform, and it has a pluggable, interchangeable architecture. That is what we mean by it, and what we want for SymbiFlow.
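To make the analogy concrete, here is a hedged sketch, in Python, of what the open source iCE40 flow looks like in practice: Yosys for synthesis, nextpnr for place and route, and icepack from Project IceStorm for bitstream packing. The tool names are real; the file names, top module name, and part choice (`up5k`) are just example assumptions.

```python
# Hypothetical driver for the open-source iCE40 flow. The tool names
# (yosys, nextpnr-ice40, icepack) are real; the file names and part
# choice are example assumptions for illustration only.
def ice40_flow_commands(top="top", part="up5k"):
    """Return the command for each stage: synthesis, place & route, packing."""
    return [
        # Synthesis: Verilog in, technology-mapped netlist (JSON) out.
        ["yosys", "-p", f"synth_ice40 -json {top}.json", f"{top}.v"],
        # Place and route: netlist + pin constraints in, routed ASCII bitstream out.
        ["nextpnr-ice40", f"--{part}", "--json", f"{top}.json",
         "--pcf", f"{top}.pcf", "--asc", f"{top}.asc"],
        # Bitstream packing: ASCII bitstream in, binary bit file out.
        ["icepack", f"{top}.asc", f"{top}.bin"],
    ]

for cmd in ice40_flow_commands():
    print(" ".join(cmd))
```

Each stage is a separate tool reading and writing open formats, which is exactly the pluggable architecture the GCC analogy is about: any stage can be swapped for an alternative.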
We want an end-to-end, fully free and open source toolchain. We want a toolchain that targets multiple platforms, multiple types of FPGAs, not just one vendor's FPGAs, and we want to enable an ecosystem which has a whole bunch of different attempts and different ways of plugging the tooling together. For a bit of history of how we got here: Clifford Wolf, who will be giving the next talk, started us all off with Project IceStorm, which he actually presented at CCC, I can't remember what year, and he demonstrated a full end-to-end toolchain that targeted the Lattice iCE40. This was pretty impressive, because up until that time there was always this idea that doing an FPGA toolchain was too hard for open source. This is obviously silly, because people said the same thing about a C compiler being too hard, and a C++ compiler being too hard, and every time it turned out to be wrong. And Clifford delivered the proof that this was at least possible. Then Clifford just likes to keep getting stuff done, and so he started a project called Project X-Ray targeting Xilinx Series 7 FPGAs. These are much bigger and more comprehensive FPGAs than the iCE40s. And this is when SymbiFlow, or the idea of SymbiFlow, kind of started. All of a sudden we had two different parts from two different vendors that we wanted people to start contributing to. So I talked to Clifford, and we decided that what we really needed was a project to bring everything together. And so at 34C3 last year, we launched what we're calling SymbiFlow, which was an attempt to target both the older iCE40 architecture and the newer Series 7 architecture. At that time, it was pretty primitive. But one of the important things we did with Project X-Ray was to document the process we use for documenting the bitstream format.
And this then inspired Project Trellis, which is a project to document the bitstream of the ECP5 FPGA, another large FPGA. We were originally targeting a thing called Verilog to Routing. Verilog to Routing has been around for a long time, but some people on the team wanted to create a new place and route tool from scratch, and so the people at Symbiotic EDA launched a new project called nextpnr as an alternative to Verilog to Routing, to provide another option for doing place and route. And so this is what we call the SymbiFlow tool suite. This is what enables you to take a Verilog file and convert it into a bitstream that you can load onto an FPGA. Going into more detail: the first thing you need to create this toolchain is documentation for the bitstream that goes onto the FPGA. Sadly, the FPGA vendors don't provide this documentation, for various interesting reasons, most of which don't make a lot of sense. And this is where Project IceStorm, Project X-Ray and Project Trellis fit in. They're trying to create documentation that anybody can use to create new tools. We consider them part of the SymbiFlow project, but other people can take the documentation and target these FPGAs without having to ask us. Then you have the place and route tools. We're still continuing down the line of supporting Verilog to Routing, but we've also got nextpnr. There's also Yosys. Yosys, I think, is the best example of a really great open source project that does synthesis, and we use it as the front end no matter what the back end is, whether that's nextpnr or Verilog to Routing. And then we have this last thing that we call the architecture definitions. The architecture definitions are highly related to the bitstream documentation.
The bitstream documentation tells you how to set the bits to turn on features, but it doesn't necessarily tell you what those features do, or why or how you might want to use them. That's what the architecture definitions are trying to capture. They feed into place and route. Place and route is the part of the system that tries to figure out how to map your design onto the FPGA, and so the architecture definitions describe the features inside the FPGA and feed into Verilog to Routing; we're still trying to figure out how they might feed into nextpnr. They also feed into synthesis, because synthesis needs to know what things inside the FPGA to map to. Lastly, we're trying to make these architecture definitions describe the functionality that's actually in the chip, so that we can do verification, testing and simulation of what is actually going on inside the FPGA. This should allow us to do interesting things like formal equivalence checking, to prove that the design you started with, when mapped to an FPGA, actually matches. That is invaluable in proving that our toolchain actually works. The other idea is that this architecture definition is executable documentation for what the FPGA is actually supposed to be doing. And so it should include all the silly things like, for example, on the iCE40, you cannot access the BRAMs for the first 40 clock cycles after the system powers on. That is something we want to include in our models, because we want accurate simulation of what actually happens, so that you can test things in, say, a continuous integration environment rather than having to load them onto hardware. That is what we want to do; in practice, this is still very preliminary. We're also hoping to make it so that you never have to visit a vendor's website to get the documentation you need.
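As a toy illustration of the formal equivalence checking idea (this is not how the real tools work; Yosys and SAT solvers do it symbolically rather than by enumeration), here is a minimal Python sketch that proves two tiny "netlists" implement the same boolean function by checking every input combination:

```python
from itertools import product

def equivalent(f, g, n_inputs):
    """Exhaustively check that f and g agree on every input combination."""
    return all(f(*bits) == g(*bits) for bits in product((0, 1), repeat=n_inputs))

# The same 2:1 mux written at two different levels of abstraction.
mux_rtl = lambda a, b, sel: b if sel else a                  # behavioural
mux_gate = lambda a, b, sel: (a & ~sel & 1) | (b & sel & 1)  # gate-level

print(equivalent(mux_rtl, mux_gate, 3))  # -> True
```

Real equivalence checking compares your input design against the design as mapped onto the FPGA's primitives, which is exactly why accurate architecture definitions matter.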
For example, if Lattice became very hostile to the open source community, we don't want to have to depend on the documentation that Lattice provides about the features in the FPGA. We want our own version of that documentation that describes what the things are and how to use them. So that's that part. What is the actual status of all this? If we look at the bitstream documentation projects, this is a summary of where we're at; I'll go into more detail now. If you look at the Lattice iCE40, this is the first part that documentation was done for. There's support for pretty much all the features you can find in the iCE40. There are a couple of iCE40 devices that aren't heavily used that still aren't supported, so if you want to contribute, we would love your help adding support for those. But pretty much everything, including the logic, the block RAMs, the hard blocks and the I/O, is already documented and usable right now. We also have pretty good Verilog models of most of the things. Some of the hard IP could use good models which demonstrate some of its peculiar behaviors, and we still need some techmap replacement libraries. For example, if you have a design that was originally targeting the proprietary tools, it would be good to have libraries which allow you to use the proprietary-tool version of that design on the open source toolchain. But if you're using the open source toolchain directly, you don't need those. So that's the iCE40. iCE40s are quite small FPGAs, and reasonably simple, but they're great for very small devices like the FPGA Tomu that goes inside your USB port; obviously there's not much space there, which makes them perfect for this type of device. If we instead look at the Lattice ECP5, which is Project Trellis: Dave Shah started on this after us and yet somehow managed to finish before us. Dave's amazing.
He has support for pretty much all the ECP5 parts out there. The ECP5 goes up to 85,000 four-input lookup tables, and it has high-speed transceivers. I believe the documentation includes the high-speed transceivers, so you can use them with open source tools. Pretty much every single bit that we know of is documented so far; maybe there are some bits that we have yet to see in a design, but pretty much everything you could think of is documented. Again, it would be really nice to have some better Verilog models. We have some basic ones, but for things like the DSPs, we don't have a really good model of what happens internally. And we still don't have techmap replacement libraries here. If you want to understand exactly what the bitstream inside the ECP5 looks like, you can go and watch the video of the talk Dave Shah gave about two or three months ago at a conference called ORConf, where he goes into excruciating detail about exactly how the bitstream works and some of the very interesting decisions around how things work. Now, this is the one everybody is probably here to hear about. I would highly recommend that if you don't have to use the Xilinx 7 Series, you seriously consider the ECP5. For most people, the ECP5 is a viable alternative that is cheaper, better for open source use cases, and fully documented. Project X-Ray, however, is coming along really well. At the moment we're mainly targeting the Artix-7, specifically the XC7A50T, which also covers the 15T and 35T, and the Kintex-7. We have some experimental support for the Kintex-7, and very recently, like a week ago, some Zynq stuff started landing. Our goal is to support all of the Series 7 parts: the Artix-7, the Kintex-7, the Virtex-7, and all the Zynq parts with a Series 7 style fabric. We already have complete documentation for all the bits in the CLBs, the logic tiles, and the distributed RAM, and pretty much all of the routing.
We have partial documentation for the block RAMs and, just recently, the IOBs. We still have no documentation for any of the hard blocks or the DSPs. We also don't have very good Verilog models for any of these yet; help here would be lovely. We also don't have the techmap replacement libraries for a lot of the compatibility with Xilinx designs. I'm assuming most people here probably have existing Xilinx designs that they want to port to the open toolchain. These techmap replacement libraries are very important for that, because there are a lot of primitive objects that Xilinx says exist which don't actually exist, but are instead just a configuration of a more primitive object. These techmap replacement libraries would also be really awesome to have. If you look inside a Series 7 Artix-7, these are all the tiles that exist, and this is about what we understand already. As you can see, there's a lot of green. There's still a lot of red, but it's getting there. This is actually slightly out of date: the IOBs should probably be orange, and some of the block RAM should be orange. It's quickly becoming more and more green, and this is enough to do a lot of things. These are, for example, the CLBs, the logic slices. You can see that we have zero unknown bits for the logic slices. The SLICEMs, which provide the distributed memory, we also know all the bits for. John McMaster, the person you can see on the side there, is here at 35C3. He is spending most of his time at the OpenFPGA assembly, and he would love to help you get into bitstream documentation for the Series 7. We're not going to be able to do this all ourselves, and John wants to have a life as well, so he would love help. If you, for example, want to use the PCI Express hard block, we would love your help documenting it, and we have a pretty good process for doing this. In summary, you can see that these three projects give us a semblance of that GCC for FPGAs we were talking about.
We have three types of FPGAs from two different vendors, and we have lots of bitstream documentation for many different parts. But because we want to expand this to every FPGA out there, we're doing something interesting as well. This here, if people know their history, you might recognize as probably the first ever commercial FPGA logic block. It comes from the Xilinx XC2064, which was available for the really awesome unit price of $55 each. It came with about 64 tiles. It didn't have any block RAM, which is why there's no documentation for block RAM, but we have complete bitstream documentation for this part. You might ask me: why the hell are you documenting a part that hasn't been used in 20 years? The reason is that, because it's so old, it's a really simple part, which allows us to demonstrate how you might go about adding support for a part to the toolchain. That's why we're targeting it. We would love for you to help take this and turn it into a tutorial that lets people understand how to add a new FPGA to the full SymbiFlow suite, because we're very interested in expanding to, as I said, every FPGA out there. This is a full summary of which parts we currently understand the bitstream for. You might notice that Project 2064 has a lot of orange; that's because the part doesn't have things like DSP blocks or any of these types of complicated features. We need your help to do this. We're not going to go and document every FPGA out there, because we just don't use every FPGA out there. So if you're using an FPGA that isn't a Xilinx 7 Series part, we would love your help in documenting it, and through our learnings on the previous ones we've come up with a fairly good process that should make it pretty easy for you to replicate our success. We would also love for people to look at, for example, the Spartan 6 and the Spartan 3.
These are very popular FPGAs; there are millions and millions of them out there. I'd very much love to see support, but they're old parts that I'm probably not going to use in any new designs, and so not something I'm currently targeting. We don't have support for any Altera part. If there are Altera fans out there, we would love your help there. There have been some attempts to document some Altera parts; I don't know the exact status of that. I believe Robert, who was working on that, may be in the audience or may be here at CCC somewhere. You should go and chat with him and see if you can help get some of that stuff upstream into SymbiFlow. But just documenting the bitstream is only like a third of the battle. You also need tools to generate the bitstream. The bitstream needs to be documented before you can generate anything for it, but documenting it is not enough on its own; you also need the tooling. So what's the status of that? As part of doing the mapping and place and route, you need support in both Yosys and one of the place and route tools. Both place and route tools use Yosys as a front end, so you're going to need support in Yosys to be part of the SymbiFlow toolchain. Both nextpnr and Verilog to Routing do timing-driven place and route. This is what all the commercial tools do. If you've used arachne-pnr, you'll know that it's not a timing-driven place and route tool, which was another reason nextpnr was started as a replacement for arachne-pnr: to give us timing-driven place and route. So this is a summary of where synthesis, mapping and place and route are at. nextpnr is probably the best solution at the moment for the Lattice iCE40. You can see the awesome GUI they have here. The status is that you should just be using nextpnr for the Lattice iCE40 right now. It should beat arachne-pnr in everything apart from runtime, and even in runtime it should be pretty close.
The people at the OpenFPGA table told me that you should just be using these tools and reporting bugs. So if all you do is help us by using the tools and reporting bugs, that would be awesome. It would also be good if you could start updating your tutorials. There are a lot of open source tutorials out there that talk about using arachne-pnr for place and route; it would be much better if they talked about using nextpnr for place and route now. nextpnr is also being used at the tutorial sessions that the Symbiotic EDA team and Esden, who created the iCEBreaker board, are running. I believe there may be a few slots left, but I'm not sure; they were running out. If you go and beg, maybe they'll run a few more sessions. I highly recommend you go along and find out. This will get you started on one of the best dev boards to get started with, taught by the experts. As well, as I said previously, the Tomu FPGA is an iCE40 design. If you're interested in playing with one of them, I have lots of these here, but again, the bootloader doesn't quite work yet. That's a job for me tomorrow; I'd love help, and I will give you boards if you have the ability to help. nextpnr also supports the Lattice ECP5. Again, this is the awesome work of Dave Shah. It supports what I call IoT-microcontroller-level system-on-chip designs, things like little 32-bit RISC-V processors, but it scales up to doing a full Linux-capable microprocessor system on chip. Dave has an example of Linux running on an OR1K core on the ECP5. It's still early, but it's definitely ready if you want to experiment with it. Dave told me that, again, one of the best things you could do for him is use the tools and report the bugs. If you have a project where you could use the ECP5, it would be really useful for you to try your designs and see whether they work. I would not say to put this in your critical path yet, but it's definitely ready for experimenters. That's what nextpnr supports.
Clifford will be giving a talk on nextpnr right after this one, so you can get all the really gritty details. I believe some of that may even include how to add new architectures to nextpnr. If you're interested in that, please stay around. Then we also have support for the Lattice iCE40 in Verilog to Routing. You can see some images of the Verilog to Routing GUI here showing some designs. As you can see, it's a lot less pretty than the nextpnr one; this is the difference between having academics do GUIs and other people do GUIs. The Lattice iCE40 in Verilog to Routing has support for the logic and the block RAMs, but it only has a really simple IOB model, and it doesn't have any of the DSP blocks or hard blocks supported yet. You can definitely do a blinky or an IoT microcontroller SoC using Verilog to Routing, but I would not say it's a production toolchain yet. There are definitely a whole bunch of serious issues with it that we need to fix, like the fact that it might use more than 4 GB of RAM to do a fairly small design. None of this is fundamentally caused by Verilog to Routing; more of it is caused by us experimenting with how to describe these things and learning along the way. We would love your help in improving a lot of this. A lot of it is just Python scripts that generate the routing graph. Making those Python scripts more efficient, or rewriting them if you're so inclined, would make a lot of this much, much quicker and use a lot less RAM. I'm terrible at graph traversal; my graph traversal is probably O(n¹⁰⁰⁰). I'd love people who have better algorithms to come and make it O(n log n), or whatever the correct complexity for these things is. I am not good at that type of thing. It works; please come and help make it better. But the thing that's probably more interesting is that we have basic support for the Xilinx Artix-7 in Verilog to Routing. With this, there's logic and distributed RAM.
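For anyone wondering what "better graph traversal" means here, the sketch below is a hedged illustration, not SymbiFlow code: finding the minimum-delay path through a tiny, made-up routing graph with Dijkstra's algorithm and a binary heap, which is O(E log V) rather than repeatedly rescanning every node.

```python
import heapq

def route(graph, src, dst):
    """Minimum-delay path in graph {node: [(neighbour, delay_ps), ...]}.

    Dijkstra with a binary heap: O(E log V) instead of naive repeated scans.
    Returns (total_delay_ps, path). Node names and delays below are made up.
    """
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already relaxed via a shorter path
        for nbr, delay in graph.get(node, []):
            nd = d + delay
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Two ways from a LUT output to a flip-flop input; the slower one loses.
fabric = {
    "LUT_out": [("wire_a", 200), ("wire_b", 500)],
    "wire_a": [("switch", 300)],
    "switch": [("FF_in", 400)],
    "wire_b": [("FF_in", 100)],
}
print(route(fabric, "LUT_out", "FF_in"))  # -> (600, ['LUT_out', 'wire_b', 'FF_in'])
```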
This is enough to do a blinky, like a counter; you can also do a RAM tester that writes a whole bunch of values into the distributed RAM and reads them out. We know that because that's what we've been doing to test whether distributed RAM is working. Again, this is not a production-ready toolchain, but it's getting there. We're very, very close to having an IoT-level microcontroller SoC working. I was really hoping to get it working before CCC, but I got distracted doing Tomu FPGA stuff. This is, as I said, very close to working, and we would love your help in making it work. A lot of it is just working through the process and then debugging when it fails. Some of it is a little bit complicated and fiddly, but a lot of it is just walking through the process and reporting where you got stuck; most people should be able to do that. This gives you a summary of where we're at. As you can see, the iCE40 is pretty well supported by everything. Project X-Ray is currently only really supported by Verilog to Routing. I know the nextpnr guys are chomping at the bit to add Series 7 support to nextpnr, but they have been holding off until we can prove that it works in Verilog to Routing. If you're using the ECP5, then nextpnr is your best bet, because VPR doesn't support the ECP5 at all. And the 2064 doesn't work at all yet. We would love people to work on the 2064, because it's a great beginner project and it would really help us figure out the type of documentation we need to write, for example: how do you add a new techmapping to Yosys? The problem with having been doing this for like two years now is that I've forgotten all the things you don't know when you start. So we really need help from, in some ways, beginners who are technical, to help write that documentation and write tutorials on how to add something like the 2064 to things like nextpnr and Verilog to Routing. So, as I said at the beginning, this is a big project.
It's probably as big, if not bigger, than GCC, and it needs your help to become successful. We're putting in a lot of work and we're slowly getting there, but if you want this to happen faster, we need your help. If you know Python, I am 100% sure I can find something for you to do, because almost all our scripts are written in Python. If you know C++, there's definitely a huge amount of stuff you can do. Both nextpnr and VPR are written in C++, and a lot of the shared libraries are written in C++. We would love your help improving those. Some of it is as simple as writing tests or improving the input/output libraries. For example, the input and output library for VPR at the moment just reads the whole file in and then starts processing it. You don't need to know anything about how hardware works to fix that problem; that's just writing a simple event-driven parser. So even if you know nothing about hardware, if you're a C++ programmer, you can definitely help us. If you're one of those weird people who know Tcl, we would love your help, because I'm not going to learn Tcl, and pretty much all the EDA tools out there use Tcl. I know John has an interesting relationship with Tcl from using it through Vivado. I'm sorry that you had to do that, John. So if you know Tcl, please help us: pretty much all the tooling for doing the fuzzing needs at least some Tcl, and you could probably help relieve the Tcl stress on John. If you know Verilog: as I said, a lot of the simulations and models are written in Verilog, and we would love your help there, as well as with simple designs. Even if you aren't extremely competent with Verilog, writing things like the simple design that writes a bunch of values into the distributed RAM, reads them back, and checks that the values are correct, we need that type of simple test to verify functionality, and if you can write Verilog, you could probably write that pretty easily.
And we need a whole bunch of different things like that which aren't particularly hard to do but definitely need to be done, so you could definitely help there. Here's kind of a blast from the past: if you know XML, almost all the file formats going into Verilog to Routing are XML, and a lot of the formats coming out of Verilog to Routing are XML. A lot of that is done by printf, which would be nice to have replaced. Things like having schemas which confirm that your XML matches a description would be useful, and tools that transform the XML from one format to another are also really useful. So you can brush off those XSLT skills from the late 90s and early 2000s and help us out there. If you know English, which most of you hopefully do, otherwise you're probably very bored right now, you can help with things like documentation: improving the README, improving the getting-started documentation, improving the website. If you know JavaScript: I'm terrible at JavaScript, please come and help make our website better. If you're a sysadmin and know things like Docker, we would love for the setup to be much, much easier; things like Docker can apparently make that happen, and improving our CI system with Docker is another way you could definitely help. If you have time to contribute to this project, I am sure I can find some way for you to be helpful. And the thing is, I've been known to give people hardware for contributing to my projects. While I won't promise that I will give you hardware, I definitely have a lot of spare hardware that might find its way to your place if you contribute to projects like this. Plus, you'll get gratitude from everybody who's ever had to suffer through the proprietary tools. So, surprisingly, I got through this much faster than I expected; we're at about 43 minutes, apparently. I'm going to go into questions. I cannot see anybody out there, the lights are quite bright.
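The XML-heavy formats and the "event-driven parser" point from earlier can be sketched in a few lines. This is a hedged Python illustration (the element names are invented, not the real VPR schema): it streams through an XML file with `iterparse`, handling each element as it closes and freeing it, instead of reading the whole file into memory first.

```python
import io
import xml.etree.ElementTree as ET

def count_blocks(xml_file):
    """Stream an XML file, tallying <block type="..."> elements as they close."""
    counts = {}
    for _event, elem in ET.iterparse(xml_file, events=("end",)):
        if elem.tag == "block":
            kind = elem.get("type", "unknown")
            counts[kind] = counts.get(kind, 0) + 1
        elem.clear()  # free the subtree: memory stays bounded on huge files
    return counts

# Invented tags, purely to illustrate the streaming approach.
example = io.StringIO(
    "<netlist><block type='clb'/><block type='clb'/><block type='bram'/></netlist>"
)
print(count_blocks(example))  # -> {'clb': 2, 'bram': 1}
```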
So you're going to have to wave at me to ask questions or something. But first, thank our speaker for the talk. Please do stay around for Clifford's talk if you want to understand the really nitty-gritty details about how things like place and route work. And I'm sure Clifford would love help with nextpnr; I'm sure Dave Shah would love help with nextpnr. It's a very cool new tool that is less than six months old and is already useful for production stuff. OK, for the questions, remember to get close to the microphone so we can hear you, and if you really have to leave, do so quietly. OK, microphone one, please. Yes: how did you reverse engineer the bitstream formats? So, what we do is document the bitstream formats so that tools can be written that are compatible with them. What we don't do is reverse engineer the tooling directly. We are forbidden by our legal advisors from attaching, for example, GDB to Vivado. What we are allowed to do is basically put a lot of inputs into Vivado and take out a lot of bitstreams, and then you do a process of cross-correlation between the stuff you put in and the stuff you got out. What you're looking for is: every time this feature was on, this bit was set, and every time this feature was off, this bit was not set. Because you put in a huge number of designs, and you randomize a lot of the designs going in, this correlation approach allows you to resolve that bit uniquely. I'm, however, no expert on the bitstream documentation side of things. John McMaster has done a huge amount of work on the Series 7 documentation, and Dave Shah has done a huge amount on the ECP5 work. Both of those guys are here at Congress, and both of them will probably be around the OpenFPGA table most of the time. So if you really want to get into the super detailed stuff about how this works, you can go and talk to them.
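The cross-correlation process described in that answer can be modelled in a few lines of Python. This is a toy, hedged sketch: the "device" below is a fake stand-in for the vendor tools, and the bit numbers are invented; real fuzzers generate designs, run them through e.g. Vivado, and diff the resulting bitstreams.

```python
import random

def fake_device(feature_on, noise_bits):
    """Stand-in for the vendor toolchain: returns the set bits of a 'bitstream'."""
    bits = set(noise_bits)       # bits from other, randomized features
    if feature_on:
        bits |= {17, 42}         # the feature's bits, unknown to the fuzzer
    return bits

def correlate(samples, all_bits):
    """samples: [(feature_on, set_bits)] -> bits set iff the feature is on."""
    candidates = set(all_bits)
    for feature_on, bits in samples:
        if feature_on:
            candidates &= bits   # must be set every time the feature is on
        else:
            candidates -= bits   # must be clear every time the feature is off
    return candidates

random.seed(0)
all_bits = range(64)
samples = []
for _ in range(200):
    on = random.random() < 0.5
    # Noise models unrelated features toggling; it never touches bits 17/42.
    noise = {b for b in all_bits if b not in (17, 42) and random.random() < 0.3}
    samples.append((on, fake_device(on, noise)))

found = sorted(correlate(samples, all_bits))
print(found)  # -> [17, 42]
```

Each fuzzer in Project X-Ray is essentially a more careful version of this loop, plus the machinery to drive the vendor tools and decode their bitstream container format.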
As well, if you go to the Project X-Ray documentation on Read the Docs, every one of the fuzzers, which document the various different bits, like the various bits of the logic block, should have a README which describes what it does, why it does that, and those types of things. In some places we were being a bit naughty and did not include such a README. That's a bug: please log it, and we will try to add it. We're definitely trying to get this process described in a bunch of different places, to allow other people to do the same thing on other FPGAs. So please do come and ask questions. Please do come and tell us where you don't understand what we're trying to do or how it works, because fixing those things is definitely what we want to do, because there's no way this scales if only John and Dave are doing the documentation. We need it to be easy enough that any university student, or any person out there who has the time, can pick their favorite FPGA and do this documentation. Microphone 2, please. Is there any support for VHDL in the toolchain? Sadly not. There have been a bunch of attempts to do VHDL support in Yosys; none have been successful yet. If you happen to work for a company that has deep pockets, Symbiotic EDA will actually sell you a license for a proprietary library that does VHDL. They didn't write it, but it has been integrated into Yosys. I don't know many people who do VHDL in the open source world. It's kind of a chicken-and-egg problem: the open source world doesn't support VHDL, so everybody uses Verilog, and hence the Verilog support keeps getting better and the VHDL support goes nowhere. It would be awesome if people wanted to work on that. I personally would say you're probably better off switching to Verilog. I do think VHDL is in some ways probably a better language, but it's not so much better that I would suggest spending your life adding support for it to Yosys.
I mean, you'd get my gratitude if you did, but I can't actually recommend you do that.

Is there a question from the internet, Signal Angel? Yes. Go ahead. We have two questions from the internet. The first is: would it be feasible to add support for CPLDs?

Depends what you mean by CPLDs. Pretty much everything people call CPLDs these days are actually FPGAs internally. If you mean something like the older PLA-style logic devices, there is some experimental support for that type of device in Yosys. I'm not sure Yosys is the right tool, but Clifford will probably be able to tell you a lot more about whether or not that's a good idea. I definitely know that the CoolRunner-II, which has some experimental support, was a CPLD, and that does have the AND-OR type structure, but I'm not an expert on CPLDs. Definitely, though, most things people call CPLDs these days are pretty much just FPGAs with flash that lets them boot their logic designs basically instantly, and they are very similar to FPGAs in other respects.

OK, microphone 2, please. Can you explain the difference between nextpnr and Verilog to Routing? It seems like there's a lot of duplication of effort between what features are supported and which chips are being supported.

So I knew this question would come up, so I actually have slides for this one. There is a lot of duplication between nextpnr and VPR; that is correct. If you just want to know which one you should use, the first question is: which device do you want to use? If you want to use an iCE40, I'd probably recommend nextpnr; the work there is much more stable and production-ready. If you want to use 7-Series, at the moment your only choice is Verilog to Routing. If you want to use ECP5, at the moment your only choice is nextpnr. The reason both exist is that VPR has been around for a long time. It was the academic standard for doing FPGA research.
But because it was mainly targeted as a research tool, it wasn't really well suited to targeting real FPGAs, which tend to have little bits of area that are less regular than you really want to model when doing research about, for example, how big your FPGA should be. nextpnr was started because of that. nextpnr was an experiment to show that maybe you could do a place-and-route tool from a fresh start and produce something useful in a short period of time. The Symbiotic EDA team did a really good job of proving that, yes, you can. It has a really cool GUI, and it doesn't suffer from the fact of being over 15 years old: it's written in modern C++, and it doesn't have its own smart pointers and all the other things that codebases that have been around that long accumulate. So that's why it exists. There's still an open question, though, whether nextpnr is going to hit some type of barrier that VPR has already solved. VPR has been around for a long time and has already solved a lot of problems; it's solved so many problems that it's kind of forgotten about half the problems it solved. It's also been used as the basis for commercial place-and-route tools before: if you know Altera, the Quartus toolchain was originally based on VPR. That's also been a bit of a challenge, because when you look at VPR, it doesn't quite map to the Xilinx style of doing things, since it was much more heavily influenced by Altera-style devices. So that's kind of why these tools exist. And we actually think it's healthy to have some type of competition in the open-source space. If you look at GCC and LLVM, for example, GCC has gotten a huge amount better since LLVM became a viable competitor. If you look at the improvement in error messages from GCC 6 to GCC 9, it's been significant, and in some ways it's probably now better than LLVM.
Before that, they just didn't care, and they didn't have the competition to drive that forward. That's the parallel we see here. VPR, in this case, is probably closer to the GCC analogy, and nextpnr is kind of the LLVM one: the hot new one that started from scratch with all the learnings, while VPR is the older, cruftier one that has been around forever. It's also really good to have the academics who are doing all this research into new FPGA architectures and designs using the same set of tools that we're using for real designs. We think there is also importance in getting the academics to start targeting real devices rather than the virtual, fake devices they've targeted for the last 20 years, because that will help them produce better research that we can use in things like nextpnr. I'm sure Clifford will have an opinion about whether you should be using nextpnr or VPR; I'll give you two guesses what his suggestion is. But nextpnr is still new; it's only been around six months or so. Hopefully it will stand the test of time, but we don't really know. So yeah, that's kind of why this has happened. Also, in the open-source world, herding cats is basically impossible. I would love to have everybody work on one tool first, but I don't always get my wish. So yeah.

OK, we have time for a quick question. I think there was one from the internet left. That's already been answered. Perfect. Then we have one at microphone two.

Hi. I was wondering, you were talking about legal issues surrounding the reverse engineering of the bitstream. Are there any FPGA vendors that are willing to collaborate on this project? Or are they generally hostile?

There are levels. There are vendors who are actively hostile, there are vendors who are kind of neutral, and then there are vendors who are starting to think that maybe this could work.
No vendor has yet come out in support of the open-source tooling. I would hope that any vendor that does would then be well supported by you guys, because ultimately they care about the number of FPGAs they sell. They don't care about the tooling; they don't care about anything else; they care about their bottom line. And so it has to be a profitable thing for them to do. We believe that this is going to make FPGAs way more accessible, and so the FPGA market is going to grow to 100 or 1,000 times the size it is now. Even if they lose percentage market share, they're going to have a smaller slice of a much bigger pie and so be making a lot more money. But they're obviously skeptical; they have to report to their shareholders, and so convincing them is an uphill battle. If you have a project that needs a million FPGAs and are willing to go to a vendor and say, "I'm only using open-source tools, do your part, support it," that would be really helpful. Come talk to me. If you happen to work at a company that would be willing to buy a million FPGAs, please do come talk to me. I'm sure there's lots we can discuss.

OK. Then thank Mithro again with a warm round of applause.