We're ready to go. I'm talking about digital hardware design, and those of you who stayed for the previous talk now have a perfect introduction leading up to this one. When I started with computers, I started with software, and I did software for a very long time; only at the end of my university studies did I come to hardware. When you first look at hardware design, you typically take a long VHDL course at university, and they show you something like this. That's how hardware design looks at first sight: a very strict waterfall model. You write your requirement specs, a couple of hundred pages in Word or Excel. Then you do your design, with your architects. Then a couple of people do the implementation, and that's often outsourced to cheaper countries, because, as we've seen before, writing lots of VHDL can be extremely boring. Then comes verification, where you make sure everything actually works. And at the end you have maintenance, the problem that sticks with you forever and ever. That's the very simple version. If you're more than one person, you usually split it up into something like the V-model: the design work on one side, and a second branch where verification engineering happens in parallel with the regular development. So you start off with a lot of PDFs or Word documents and work your way through the implementation process, and design and verification really are done in parallel, usually. Why? We'll see in a bit. Now, coming from the software world, this is how the flashy software development processes look these days: you have all those agile methods, you have very short sprints.
You have a very dynamic process. You don't start with a 200- or 500-page PDF, then not see each other again for a couple of months, and then meet again for the final integration. Things look different, and they actually are different. Also note: in the hardware flow, maintenance came at the end. If you look at agile, and this is just a random picture I found on the internet, there's a party at the end. If that's not a good reason to look at why those processes are different and what we can possibly take from each other, I don't know what is. So why have hardware development processes been the way they are? And they are the way they are, as we've seen before, because making ASICs is expensive. Really expensive. You want to be first-time right: if you do a tape-out, you want to make sure it actually works. That seemed funny, sorry: "first-time right", and then there was this "exciting" thing in the picture. "First-time right" and "exciting" don't seem to go together. Let's see. The thing is, if you're doing ASICs, things that cost huge amounts of money to get produced and fabricated, you want your first tape-out to be correct, because otherwise you've just spent a lot of time and actually even more money. That's why we have this very strict flow. But we're in a world of FPGAs now, more and more, and they're coming closer to you: look at Intel putting them next to your own processor, putting them in data centers. You don't even have to buy a huge FPGA board anymore; you can just rent one by the hour at Amazon or Microsoft. This makes hardware development much more accessible. So the question I'll look at here is: can we treat hardware development that targets FPGAs, reprogrammable hardware rather than ASICs, a bit more like software development?
So we'll look at three things, and they're probably formulated in a way that will upset some people. First: be less confident in the quality you get. Second: iterate, iterate, and iterate again. And third: we should look at where we can actually make a difference with what we do. First of all: don't worry, be happy. Look at this very gut-feeling graph, which is showing up here but not there, yeah. The source is gut feeling, but it's more or less right from experience. You do a bit of testing and say, OK, it compiles, it's probably going to work. You test a bit more, and at some point you're pretty confident things work out. The thing is, with software you just run it, and if it performs reasonably well you say, OK, let's ship that to our beta customers, or to the paying customers; they're just going to come back to us otherwise. But if you want to produce an ASIC that costs a couple of million, you want to be really, really sure you're delivering a good-quality product. So in software development you can be OK with less confidence in your quality. That doesn't mean you intentionally ship crap; you just aren't sure that you're not shipping crap. So why can't we do that in hardware? We can: FPGAs, speaking in ASIC terms, enable cheap re-spins. You just reprogram, re-flash your FPGA, and that's about all you need to invest. You don't need to be first-time right; you can go with an 80/20 approach. The question that makes this approach still a bit harder to execute at the moment is: what does a testing approach for a purely FPGA-targeting system look like? If you google around and look at the standard tutorials, they either leave out testing completely, just as we've seen before, and that's not just here.
It's the same in university courses: we talk a lot about design, but not that much about testing and verification. So how do you actually do verification and testing for an FPGA-targeting design? There is not much information out there yet. This is a place where we need to get more active and publish our best practices, because I'm sure there are many people in this room who have experience doing exactly that. There are actually some software tools that help, but there is not much knowledge about them, and that's something we should change. To mention one simple example: there is cocotb, a testbench framework written in Python that is well suited to FPGA design and gives you very nice productivity. That's one example more people should know about. This 80/20 approach is the prerequisite, the required thing, if you want to move to a more software-like development flow. And software development flows work; they have been moving in this agile direction because you want to iterate. You want to be very fast in your trial and error. You don't want to start from one big specification; you want to go in small steps. If you look at how learning works, you're much more productive with fast feedback. Imagine being at university or school: if you write a test and only get the results back after three months, you won't learn nearly as much as with one-to-one instruction and immediate feedback. The sooner you get feedback, the more value you get and the more you learn. That's why software development is usually more fun for people: you see quicker results. Yeah, then looking at hardware, we say: OK, let's go with this very nice iterative flow.
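To make the cocotb idea a bit more concrete: a common pattern in such testbenches is to check the RTL design under test against a plain-Python golden model. The following is just a standalone sketch of such a model (class and signal names are my own invention, not from any real project); in a real cocotb test you would drive the same stimulus into the RTL FIFO through the `dut` handle and assert that each read matches the model.

```python
from collections import deque

class FifoModel:
    """Golden-reference model of a synchronous FIFO.

    Predicts what a correct RTL FIFO of the same depth
    should output for a given sequence of writes and reads.
    """
    def __init__(self, depth):
        self.depth = depth
        self.q = deque()

    def push(self, word):
        if len(self.q) >= self.depth:
            return False  # model is full: the write is refused
        self.q.append(word)
        return True

    def pop(self):
        # Returns None on an empty FIFO, mirroring an invalid read.
        return self.q.popleft() if self.q else None

# In a cocotb test, every word written to the RTL would also go into
# the model, and every word read back would be compared to model.pop().
model = FifoModel(depth=4)
for word in (1, 2, 3):
    assert model.push(word)
assert model.pop() == 1
```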
And actually, I've seen that in Brussels on Friday. Everybody who has used a synthesis tool flow knows that the main time you spend "designing" is actually waiting for synthesis to finish. It's not uncommon at all for synthesis of a larger FPGA to take an hour, two, or three. And if the feedback, an hour in, is that you made a syntax error, it's just annoying as hell. So FPGAs give us cheap iterations, but they don't yet give us fast iterations, and as we said, that's what we want for productivity. I've seen a couple of Sigasi guys around; there they are. I think they're heading in this direction: intelligent IDEs. We want feedback as soon as we can get it, meaning as you type. Static analysis and linting are things that can be done at much earlier stages; in most cases you don't need a full synthesis and place-and-route run for that. But the IDEs we can get today are, most of them, still stuck in the '80s, I think. They don't usually give us that feedback; they just do some syntax highlighting and are able to call some synthesis tool with one button. We also need more automation. Every large company, in hardware design too, of course uses continuous integration; continuous delivery is, I think, still pretty far off for them. But the nice thing about continuous integration, and I'll give you a hint later at how that could look in the future, is a nice side effect: you also get reproducible results, and that's always a good thing. Talking about IDEs: I just installed the 2016.1 version of an IDE from Mentor Graphics called HDL Designer. The user interface, as usual, looks like Tcl and crap, but you just get over that if you do hardware design; it's fine as long as it's usable. And then it asked me this very nice question: do you want to set up version control?
I said yes, that's what I want. And it gave me those options. As I said, this is the 2016.1 version of this product, and it's, yeah, not exactly 2016: the user interface doesn't look like it, and the software isn't, either. So there is much more effort needed in this regard. Last point: differentiate where it matters most. In software we see a lot of standardization; build tools are standardized. How many different compilers are you using these days? Probably GCC, LLVM, or, if you're targeting Windows, maybe Microsoft Visual Studio. That's about it for the compilers we usually care about. In hardware design, how many synthesis tools do you know of? How many different front ends that each parse your language in subtly different ways? So the question is: can we find common ground somewhere and differentiate where we can actually make a difference? I found this very nice picture of a keyboard that was modified into, I think, a CNC drilling machine. But it's still a standard keyboard anyways. Good. So, the question: what makes your HDL project better? Is it really the build system? Is it the way you include your dependencies, or, if you get dependencies from somewhere on the web, how you include those? Is it really the choice of your programming language, especially if you want people to contribute? Is it your coding style? Does it make that much of a difference whether you indent by three or four or five spaces? Oh yes, it does, I see. Is it your FIFO implementation that makes the difference in your whole project? Maybe it is; there are cases where all of that makes a huge difference. But the point is, in many projects it just gets in your way. You want to do some coding, build your stuff, and do it in a language that is reasonably sane to use. These are all questions I obviously don't have an easy answer to.
And it's always hard to get over personal preferences, of which I have very strong ones about many things. But what you see when people work together is that it's not that important anymore whether you indent by three or four spaces; just make it consistent in some way. That's what we see with a lot of newer programming languages: Rust, Python, or very modern PHP, for example, have a standard coding style by now. You're obviously not required to follow it, but the message is: just take that one and you're going to be reasonably fine. Same for build systems: just use automake or CMake and you'll be reasonably well off. It's probably not the perfect solution, not the one that makes you entirely happy, but it gets the job done, and it gets it done quickly. For HDL there is no such build system at the moment, nothing established across a wide range of people the way automake or CMake are. There are a couple of attempts in this direction. One I'm personally a tiny bit involved in, on the slides that don't move, OK, is FuseSoC, a project by Olof Kindgren. It's essentially a build system: you specify, in small files, "these are the VHDL or Verilog files I need for this component, and these are its dependencies"; it packages everything up, builds the dependency tree, and emits project files that Vivado or Synplify or other tools can actually make use of. The nice thing about that: now you want to drop in that other FIFO implementation, you just say "take this core, put it in there", and it already knows how to build it. Because what you do today is go to OpenCores, for example, download some Verilog files, put them manually into your Vivado or other project files, and that makes sharing very hard. Speaking a common language is probably trickier.
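For illustration, a FuseSoC core description for that hypothetical FIFO might look roughly like this. This is only a sketch: the core and file names are invented, and the exact schema depends on the FuseSoC version in use, so check the FuseSoC documentation for the authoritative format.

```yaml
CAPI=2:
name: example:ip:simple_fifo:1.0

filesets:
  rtl:
    files:
      - rtl/simple_fifo.v
    file_type: verilogSource

targets:
  default:
    filesets:
      - rtl
```

FuseSoC resolves the dependency tree from descriptions like this and then generates the project files for the backend tool you ask for.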
SystemVerilog and VHDL are here, and they're probably here to stay. They will be amended by other things, but I don't think they will go away. I usually compare this a bit to C in the programming world: it's not a nice language, it has a lot of flaws, but it will stay for the foreseeable future. And the huge benefit these standard languages give us is a lot of common ground. It makes it easy for others to get started, because they don't need to learn a completely new programming language to join a project. So the trade-off you have to make is: how many people do I want to get involved, versus does another language really make me that much more productive? Something I came across writing JavaScript is this very nice feature: they also have this arcane JavaScript language, which everybody says is full of bugs, flaws, and weird things, but they're still able to evolve it. They don't replace it entirely; they evolve it in a way that is backward compatible. Essentially, they do a source-to-source compilation of the newer features down to the older features. That's what they call polyfills, and the same exists for HTML. So isn't there something like that we can do? For example, SystemVerilog has nice features, but the tool support just isn't there, because every tool has a different parser for SystemVerilog; Vivado even has two different parsers for it, so you can use a construct that works in simulation but not in synthesis. That's just annoying, and it won't get any better any time soon. So can't we have a source-to-source compiler that takes us down to the common denominator and lets us go from there? Yeah, and that's where the advertisement part starts. Right now, as I mentioned before, you usually get your cores from places like OpenCores.
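To make the source-to-source idea concrete, here is a toy sketch of lowering two SystemVerilog-only constructs to plain Verilog-2001, so a Verilog-only front end could parse the result. This is not a real tool; an actual compiler would use a proper parser, not regular expressions, and the two rewrite rules are just illustrative.

```python
import re

# Toy "polyfill" table: each SystemVerilog-only construct is mapped
# to an equivalent plain-Verilog spelling.
LOWERINGS = [
    (re.compile(r"\balways_ff\s*@"), "always @"),   # clocked process
    (re.compile(r"\balways_comb\b"), "always @*"),  # combinational process
]

def lower_to_verilog(source: str) -> str:
    """Apply every lowering rule to the given source text."""
    for pattern, replacement in LOWERINGS:
        source = pattern.sub(replacement, source)
    return source

print(lower_to_verilog("always_ff @(posedge clk) q <= d;"))
# always @(posedge clk) q <= d;
```

The point is not these two rules, but the direction: write in the richer dialect, compile down to the common denominator that every tool accepts.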
And OpenCores is there because it helps you collaborate: it helps you find other projects and integrate them into your own, so you can make a difference where your time matters most. Unfortunately, OpenCores is in hibernation mode, and there are no signs that this will change. So a group of people who were rather active on OpenCores and around the OpenRISC community, which published its OpenRISC CPU there, got together and said: this is something we need to change, and unfortunately we can't change OpenCores itself. So we launched LibreCores, a site with a similar goal: giving you access to very nice IP cores, making you aware of their quality, and teaching you a bit about coding and getting started in this ecosystem. It's online now, it works, but it's still in its early stages, so there are a couple of things we want to improve in the future. What we always hear is: I now have this listing of ten FIFOs; which one should I use? Which one is high quality? Which one do others tell me is actually useful? So we're looking at adding more quality metrics to the site. Those can be machine-generated, of course, but they can also be user-generated, things like reviews. One site I think got this pretty right is a repository site from a totally different area: Puppet Forge. They make it easy for you to say "I used this in production" and similar things, giving very useful but quick feedback from users. That's what we're trying to integrate. The other thing: if you've ever worked with Node.js, or with modern PHP and Composer, you know how easy it is to install dependencies. You don't need to go to some site, copy a zip file, and move it somewhere.
You just type one command or add one line to your configuration file, and you're done. So we're looking at whether we can take this repository site and integrate it into build systems like FuseSoC. One thing I'll just briefly mention without going into much detail: there is also a continuous integration setup, much like Travis CI, and we're currently trying things out in this regard. If you want to know more, there is a site, and there's also a linked presentation by one of the Jenkins core developers who got involved and is helping us make testing and compiling hardware designs, in the cloud or on your own PC, much easier. So, things to do for now: add your project to LibreCores; you can do it right now if you have one. There is Planet LibreCores, a blog planet that already gives you a very nice overview of hardware projects out there; if you have your own blog that you want listed, let me know. And finally, there is documentation on how to get started working on the LibreCores site itself: it's open source, hosted on GitHub, and we always welcome new contributors. Last announcement: FOSDEM is usually dedicated to software, but we see the hardware interest growing, just by how packed this room is. There is ORConf 2017. It started as a rather small conference and has now grown to over 100 people. The next edition will be in Hebden Bridge in the UK, somewhere in the middle where I've never been, September 8 to 10; see orconf.org. It's specifically about digital hardware design, and it's a very good place to talk to people and get very helpful insights into what the hardware community is doing at the moment. Thanks for your attention, and there's at least a bit of time for questions.
So, the question is about open-source licensing: we know the copyleft concept does not work well for HDL code, and it's a problem that I and a couple of other people are right in the middle of. Do I have an answer to the licensing question? Essentially: we know that permissive licenses work, which is why most cores around are MIT-licensed. We also know that the copyleft licenses are written in a way that makes it not entirely clear what they mean for hardware designs. I don't have an answer to that. I've been talking to many people, and we at the FOSSi Foundation, which runs LibreCores, have been in contact with many people trying to figure this out; so if you already have an idea in this direction, come talk to me. Essentially, what things look like: GPLv3 was written with hardware somewhat in mind, but it's not clear whether what they intended actually works out. So let's see. We definitely see a need for a copyleft license, but the problem is: how do you find the boundaries of copyleft in a hardware design? Until that's figured out, I think the safest way is to stay permissive if you want to make sure people are able to use your work. Otherwise you probably need to amend the license, or make explicitly clear in your licensing where your copyleft boundary is meant to be. Yes, there is the CERN Open Hardware Licence, and I think we might see a couple more editions of it. The thing is, digital hardware designs, chip designs, are a bit special, because the legal framework you can use is different from PCB designs or 3D prints and things like that. There is some overlap between the CERN Open Hardware Licence and a license that would work perfectly fine for digital hardware designs, but it's not a perfect fit yet. So we're looking at how to get this figured out. Any other questions? Thank you.