 Okay. Hello, everyone. Welcome to the fifth installment of OFE's open source policy series. I am Paula Grzegorzewska and I am a policy advisor at Open Forum Europe. Let me first say a couple of words of introduction and present our speakers, and then we can follow up with the presentations. So I guess most of you know us, but for those who don't, Open Forum Europe is a Brussels-based think tank working at the intersection of open technologies and public policy. Specifically for this event, we know that there have been vast developments in the open source hardware ecosystem in recent years in terms of technology, collaboration between different actors, and the emergence of new players and business models. With this event, we want to understand more about the challenges, the opportunities and the good examples that can be followed in order to reap the benefits of this growing trend for European businesses and citizens. Certainly, there is a reason why we talk more about open hardware. With the increased focus on technological independence, we need to think about all layers of the technological landscape, including the most fundamental digital infrastructure elements. Open hardware is already implemented today in many technologies, products and services, including 3D printing, maker spaces (which are a global movement in themselves), and the global efforts to develop open designs for medical appliances and ventilators during the pandemic, among many other products and services. Recently, we have seen the chip shortages in the automotive sector, which have proven Europe's dependency in the area of semiconductors. Moreover, the European Commission identified 137 products with significant dependencies in the most sensitive ecosystems in its recent industrial strategy, and the study that we have conducted on the impact of open source software and hardware has shown that there are significant gaps in policy thinking and research on the impact of open source hardware. 
Open chips can be one of the critical components of a digitally sovereign technology landscape, and there is a need to discuss it more at the European level. So, we will start this event with a few words from Rick O'Connor, President and CEO of the OpenHW Group, a partner of the sponsor of the OFE Open Source Policy Series, the Eclipse Foundation. We are thankful for their support, which enables us to put on this series of events. Then, we will have Andrew Katz, partner at Moorcrofts LLP and co-author of our study on the impact of open source software and hardware, who will present key concepts and definitions useful for discussing open source hardware. After that, we will hear from Javier Serrano from CERN, who will present a talk on combining the commons and commercial activity in public-private partnerships. Next, we'll have a short Q&A with the first two panelists, and after that, in the second part of the event, we'll have Zvonimir Bandić, who is a Senior Director at Western Digital and the Chairman of CHIPS Alliance, who will discuss how open hardware can be part of a successful business strategy. And then, we will hear from Calista Redmond, CEO of RISC-V International. Unfortunately, Calista wasn't able to join us today as she is currently on a plane, but she prerecorded her speech so that we can still hear her contribution to the discussion. And we will finish with yet another Q&A to answer any remaining questions and maybe have some follow-up discussion. So, just a bit of housekeeping before we get started. We want this policy series to be a space for open exchange, and we are very happy to take questions from the audience. If you'd like to ask a question, please write it in the chat or use the Ask Question feature in Crowdcast. Please also note that this event, like all OFE activities, is covered by the OFE Community Participation Guidelines, which you can read on our website. 
And a reminder, this event is being recorded and will be shared on our channels afterwards. So, now, without further ado, I would like to invite Rick O'Connor on screen. Okay, perfect. Hello. Okay, I'm minimizing myself and the floor is yours. All right. I've got a bit of an echo going on here. I'm hoping you can... Yes, we can see it well. Is there an echo for you? No, it works well. We see it. Okay, so I'm going to give a brief overview of some of the challenges and topics associated with open source hardware. And as Paula said, this discussion is about open source activities in semiconductors. People tend to think of hardware as everything from, like Paula said, 3D printing, the enclosures associated with electronic systems, big racks, the mounting screws; that's all hardware. What we're specifically talking about is the semiconductor industry, and digital and analog assets, around creating an open source ecosystem and the challenges associated with that. So I'm with the OpenHW Group, an international organization, 70 members and partners strong today. And we are partnered with the Eclipse Foundation, working to deliver open source artifacts to the industry. So let's start by taking a look at the challenges associated with chip development, namely the cost. This is a heavy OPEX and CAPEX investment for all forms of semiconductor development. Older technology nodes obviously have lower costs associated with them. But in a current deep submicron technology node, like the 12LP and 7 nanometer space, we're talking about tens of billions of dollars of development for a large-scale SoC. And the majority of that work is around the verification, the design work itself and the physical design work on top of that SoC. And if you add in the software associated with that IP, then we're looking at 90% of the development cost being tied up in those areas. 
And this is certainly warranted for highly differentiated IP that brings new innovative features and functionality. But for general-purpose IP, we could do better, and we should do better, by sharing these development costs through reuse across the industry. Okay, so that sounds lovely, but in an industry that has had a history of such heavy CAPEX investment just to play, there are very, very deep patent portfolios associated with all of the collateral, if you will, of the IP stack required to deliver high-end chips. And there are three significant barriers to adoption that we need to be good at overcoming in order to nurture an open-source development community. The first is the quality of the IP. If you've got a huge $50 million investment in an SoC, are you going to risk that investment as a large chip manufacturer by integrating an IP block that you downloaded off a repo on a website someplace? Maybe not. And even if you do convince yourself you can do this, how are you going to convince yourself you've got a roadmap and all the necessary support from an ecosystem perspective for that IP, whether that's development tools, operating system ports, as well as an actual roadmap for the IP itself that can deliver on a number of different PPA metrics associated with that implementation? Last, and certainly not least, going back to that deep patent portfolio: how are you protected by the open-source IP that you're using? And what about your own IP portfolio? How much of that are you exposing if you participate in these communities and decide to give back to the community? So these barriers are not insurmountable, but nonetheless the business development and legal departments of these commercial companies have a new consideration when choosing to engage in an open hardware development environment. So what about RISC-V? Why has it had such a significant impact? 
Some of you may know I was involved in getting the RISC-V Foundation started many years ago, and it's been a fantastic experience to see how RISC-V has penetrated different markets around the world. But what it's really enabled is unleashing a new frontier of processor design and innovation. You don't have to talk to anybody or get lawyers involved or anything from a licensing standpoint to decide that, hey, I want to start designing my own processor. So the group on the panel here, we could get together, download the RISC-V spec, decide that we want to create a new processor design, get that done over the weekend and throw it up on an open source repository online and say, hey, here you go, we've got an open core. And that's interesting and wonderful from an innovation standpoint, but how many cores do we really need? How do we create industry momentum and establish critical mass around a handful of cores that can be adopted in high-volume production, thereby increasing the quality and the comfort of all adopters in being able to use them? Much like today in the Linux world, there are five or six distributions that matter. In the early days, 15 to 20 years ago, there were many, many Linux distributions. Hopefully we don't take 20 years to figure this out, but part of the challenge is establishing this critical mass. So who are we and what's going on? What are we doing with the Eclipse Foundation? The Eclipse Foundation has a very solid and well-proven development process to curate and professionally manage open source artifacts and release those to the industry. And within the OpenHW Group, we follow that development process and are curating a whole range of open source RISC-V cores called the CORE-V family. We have over 70 members and partners around the world, many of whom are in Europe, with very strong industry and academia as well as individual contributors worldwide. So that's just a little context. 
The work that we're doing specifically with Eclipse has a focus on industrial-grade, robust, well-managed open source, and on being able to capitalize on this new innovation frontier that we talked about earlier so that we can have industry communities adopt open chip architectures to solve some of the technical problems that Paula talked about in her opening address. And we very much look forward to more collaboration with industry, academia and other organizations. We're a member of RISC-V International and are quite excited about the opportunity to establish a stronger footprint in the European community. So with that, here are some links you can go to if you want to learn more about the OpenHW Group, and I'll be around for the Q&A after the main speakers are done. Thank you, Paula. Thank you, Rick. I think there were some good questions in there. Now, as we are of course a bit conscious of time, let's welcome Andrew on the stage. Super, am I live is the question? Yes, Andrew, you're live. Hello everyone, thank you very much. Rick, thank you very much indeed for that. As Rick mentioned at the outset, there is a broad range of different types of open hardware. So I'm just going to give you an introduction, put that into context, and talk a little bit more about the broader range of open hardware and how that works. So let me press a few buttons and hopefully my presentation will appear shortly. Okay, that's great. Thank you. So the best way to understand what open hardware is... and some people say open hardware, some people say open source hardware; I don't get particularly excited about the distinction between the two, and I don't think a great deal turns on it. But the best place to start is to think about open source software. So a brief definition of open source software is software which is available for anyone to use for any purpose. It's available for studying. 
You can modify it and you can redistribute it, either as you got it or with modifications. And that is the fundamental characteristic of open source software. And many of you will already be aware of this. It's absolutely everywhere. Open source software is eating the world: just about every device that you look at will contain open source software of one sort or another. And it's even gone to Mars now. It's in the helicopter, apparently; not on the rover, unfortunately, but there is a Linux-based system on the helicopter. So open source software has gone to Mars. So why has open source software been so successful? Well, the way that I look at it from a commercial perspective is that it is an ultra-low-friction means of collaborative research and development between organizations that would not normally collaborate with each other, or, if they were going to collaborate with each other, would do so on the basis of really complicated collaboration agreements that go into great detail and great depth about who does what, what intellectual property is owned by whom, who's allowed to apply for patents on what, and so on and so forth. So the beauty of being involved in an open source development project is that the formal documentation required is absolutely minimal. It can be as simple as just agreeing to use a particular license, although some projects do expand that a bit and have contributor agreements and codes of conduct and so on. But in comparison to the sort of collaboration that you normally get in the commercial world, it is vastly, vastly more straightforward. And that also means that there are fewer competition law concerns, because it's open and inclusive. 
It's much more difficult for somebody to argue that organization A and organization B are collaborating on a project in an anti-competitive way if it's perfectly possible for their competitors, organization C and organization D, to get involved equally as well. The problem doesn't go away, but it certainly simplifies things significantly. And from a developer's perspective, they like working in open source. Most developers are now very used to working on open source software. Most of the tools that they use, and we'll talk about this a little bit later, are tools that are themselves open source software. They are used to working in a collaborative way with people distributed all over the world. All of the tools are available and designed very much with collaboration in mind. And it's just a working environment, a working mechanism, that developers are very happy to work in. And one of the other reasons for that is they know that they're not going to be asked to reinvent the wheel. They know that if they're asked to, for example, produce a library module that says, give me the day of the week for a particular date, they're not going to have to write that piece of code from scratch. They know that they can go out onto the Internet, find some open source code that will do that, and incorporate that code into what they want to do. So what they are doing, they feel, is going to be much more productive, because they're not going to be asked to do something that they know somebody has sweated over before; who wants to do that? And the way that it works best in a commercial environment is for non-differentiating characteristics. So what do I mean by that? I mean that if you're an organization producing a particular product, some of the characteristics of that product are differentiating, which means they're the reason people buy from you. 
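The day-of-the-week module Andrew mentions is a nice concrete case of not reinventing the wheel: in Python, for instance, the standard library already solves it, so nobody has to rewrite calendar arithmetic. A minimal sketch (the function name here is ours, purely for illustration):

```python
import datetime


def day_of_week(year: int, month: int, day: int) -> str:
    """Return the weekday name for a date, reusing the standard
    library instead of reimplementing calendar math from scratch."""
    return datetime.date(year, month, day).strftime("%A")


print(day_of_week(2000, 1, 1))  # → "Saturday"
```

The point is not the three lines of code but the economics: every developer who needs this reuses the same well-tested implementation rather than sweating over it again.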
So people might buy a BMW because of the engine that it has, for example, but they're not particularly interested in buying it because of the operating system that runs within the engine control unit. Nobody's particularly interested in that. So that means there is less incentive for BMW to work cooperatively on engine development, because they would feel that by doing so they're essentially helping their competitors; but they're not going to be helping their competitors if they are working together on the software that goes inside the engine control unit. So it makes much more sense for them to collaborate with their competitors on that basis, because it's going to reduce their own costs significantly without giving their competitors an undue advantage, and they can do the maths on that and decide when it works. So that's typically where open source works best: where it's working on things that are not differentiating for the product that you've got. So that was open source software. What do we mean by open hardware? Well, a very similar definition. Open source hardware is hardware the design for which is available for anyone to make, use for any purpose, study, modify and redistribute. So almost exactly the same as the case with open source software. And what sorts of hardware are we talking about? We mentioned a few at the beginning. My own involvement with open hardware many years ago was with an open source car, but you can have open source boats and houses. We're mainly talking about electronics and silicon chips here, but also 3D printed objects, and they don't even have to be solid. Liquids like beer and cola have been produced under open source licenses, and things like hydraulic fluids can also be described as open hardware. Even viruses can be described as open hardware as well. But today we're mainly going to be talking about electronics and particularly silicon chips, so open silicon. 
So let's talk about open source software development and compare and contrast that with open hardware development. There are a number of reasons for the great success of open source software. One of them is that the barrier to entry is very, very low. That minimal amount of friction is absolutely critical to the success of open source software. And you can be effective even if you've got very inexpensive equipment. You just need a fairly basic PC and an internet connection. And even most of the tools and the software that you need are themselves going to be open source: the operating system, Linux, for example; the compiler that you use can be GCC, the GNU Compiler Collection; and there are IDEs like Eclipse. An IDE is basically a sort of programmer's workbench that enables you to write the program and compile it all in one place, and so on. So all of these things are themselves available as open source. So for the expenditure of a few hundred dollars and access to an internet connection, you can get yourself a perfectly functional workstation that will enable you to get fully involved in the world of open source software development. And the thing is that the final product is itself a digital artifact. So everything that we're doing happens in the digital domain. It happens in software, it happens inside your computer. It happens by tapping away on the keyboard, looking at the screen and so on. And that's pretty significant for reasons we'll have a look at in a moment, but it is fundamentally based on this idea of a design, build, test cycle. And again, I'll talk about that shortly. But one of the key things, which Rick touched on, is that it makes it very, very easy to collaborate with anyone anywhere in the world. This is development without borders. If you're working in the digital domain, there's no reason why you can't be collaborating with somebody who is on the other side of the planet from you. 
And in fact, one of the problems that we had doing the impact study for the European Commission is trying to identify where a particular project is based, because sometimes you really can't put a finger on that. There are contributors from all over the world. And so that makes it extremely straightforward to create these communities. It's not always quite as straightforward as it sounds. Open source by definition is free to use, but that doesn't mean that there aren't licenses that need to be complied with, and the licensing can be somewhat complicated, but I'm not going to go into details about that at the moment. So, looking a little bit more at this design, build, test cycle. If you look at the diagram on the left, you'll see that basically you design something. In terms of software, that basically means thinking about what you want it to do and then writing it in a computer language like Python, Java, C++, whatever it happens to be. Then you build it, which basically means compiling it. Then you've got an executable which you can test. And as a result of that testing, you presumably detect a few problems with it. So you go back to the design phase, correct the errors, build it, test it again. You've got a cycle. And because that's all happening inside your computer, it can happen very, very quickly. It also means that it can be going on simultaneously on different people's computers. Different people can be working on the same project, changing different bits of it. And using technologies such as Git and sites like GitHub and GitLab, it's very easy for people to collaborate, identify where bugs are and change them, or suggest that those changes are made, and so on. And then eventually you reach a point where you want to release this product to the public, and then you go to the productise phase here. And that means that the product is available to the customer. 
And in the case of software, productisation might not even happen as a formal step at all. There are quite a lot of projects on GitHub where somebody just says, right, I think we're ready to release now, but there's no fundamental difference: it just means that people can go onto GitHub, download your project and use it. And then, of course, you'll also get some feedback from the customers, and that's where the yellow dotted line comes in and goes back into the test cycle again. So this can be very rapid, it can be very distributed and it can run very efficiently. In the world of hardware, you can have the same thing: design, build, test. But especially if you're talking about physical hardware like cars, for example, it's much more difficult for this to happen in the digital domain. Some of it can, and there is software that will help you to analyse things like aerodynamics and the effect of suspension component tuning and that sort of thing. But the reality is that to get very far, you are going to have to physically make this thing. So that's why we've got, in the right-hand diagram, this simulate step, which says that in the world of really hard hardware you can do a degree of simulation, but basically you've actually got to physically build something to test it. And obviously that slows down these cycles considerably. So you're still going to need a PC and an internet connection, but you're going to need a lot more than that. For something like a car, you're going to need a workshop, you may need 3D printers, you may need things like access to significant industrial amounts of power, three-phase supplies. You may need large industrial equipment like milling machines and lathes. If you're making open hardware viruses, you might need something like a PCR machine. If you're making beer, you're going to need a mash tun. And you're going to need feedstock for all of this. So for cars you'll need steel; for 3D printers, you need plastic filament. 
If you're making electronics, you're going to need electronic components. And if you're making beer, you're going to need oats, barley, yeast and water. All of those are additional constraints on what you're doing. Now, some of the digital tools, like the operating system, editors and some CAD tools, are available as open source. But many of them aren't. So there's going to be a cost involved: even if you're purely in the digital domain, there's likely to be a cost involved. And that, of course, means there's a greater barrier to entry as well. It means that the design, build, test cycle, while it may be digital in parts with hardware, will be slower than it is with software, certainly for the harder sorts of hardware, whether the manufacturing is local or remote. And it's going to be much more difficult to collaborate at a distance if you're dealing with hard hardware like suspension components than it is if you're dealing with software. It's also the case that licensing is complicated within the world of open hardware development, and for different reasons: there are many more kinds of intellectual property rights involved, like the various sorts of design rights and semiconductor mask rights, in addition to the usual things like copyright and patents, so the extent to which they impinge is more complicated as well. And those rights, other than copyright and, to a degree, patents, do tend to differ from country to country, which adds another layer of complexity. Open electronics sits somewhere in the middle. Here we're talking about designs for printed circuit boards, circuit diagrams and so on. And there is some open source software available; KiCad is an excellent example of that. But other sorts of software, for simulation and so on, tend to be proprietary and fairly expensive. 
And one of the things about printed circuit boards is that if they're fairly simple, you can make them at home; you don't need particularly specialized equipment. But there are also services available now where you can send your files over to a company, and they can make the boards very inexpensively and send them back to you within a day or two. So that in itself is fairly rapid cycling; it's much quicker than if you're trying to make an open hardware car, for example. And it's also something that can be fairly well distributed. Somebody can take your design for a printed circuit board, and if they're in Hong Kong, it's incredibly easy for them to find somebody who can manufacture that PCB for them; but even throughout Europe and throughout the Americas and so on, there are plenty of organizations that will be able to make those PCBs to your design very quickly and very inexpensively. So in that case the design, build, test cycle can be fairly rapid and fairly well distributed, and that means that open electronics is quite amenable to this open source development methodology. Now, when we're talking about open silicon, what we're talking about here is designs for silicon chips, so typically processors and so on, and their design using hardware description languages. These are languages that look very similar to computer programming languages, but they have a different, specific purpose: you write your logic in one of these hardware description languages and then you use a synthesis tool, which compiles it into a form that can be processed into a more primitive code that actually gets turned into the chip itself. I've got to say I'm running a little bit slow here; is it okay if I take another five minutes or so, or do you want me to speed up? Hi Andrew, maybe like four minutes, you know. Thank you. 
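To make the hardware description language idea a bit more concrete: an HDL such as Verilog or VHDL describes logic that a synthesis tool turns into gates, rather than instructions that execute one after another. As a rough software analogy only (this is ordinary Python, not an HDL), a simple combinational block like a half adder can be modeled as a pure function and exhaustively "simulated":

```python
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Software model of a half adder: returns (sum, carry).
    In an HDL, this XOR/AND logic would be synthesized into
    physical gates rather than executed as a function call."""
    return a ^ b, a & b


# A tiny "testbench": check every input combination, which is how
# small hardware blocks are commonly verified in simulation.
for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        assert (s, carry) == ((a + b) % 2, (a + b) // 2)
```

The design, build, test loop Andrew describes maps onto this directly: edit the logic description, re-run the simulation, and only commit to physical fabrication (or an FPGA) once the simulated behavior is right.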
So an awful lot of this can happen inside a computer in much the same way as it can with software. To that extent, the development of open silicon and of open source software is very, very similar, the main difference being that whereas if you're developing software the toolchain is going to be mainly open source, with open silicon that is unlikely to be the case. There are a number of open source products, and that number is increasing, but most of the products are still proprietary. And there are plenty of open silicon designs out there based on RISC-V, as Rick said, and there are other ISAs out there as well. But the thing about using proprietary software is that it includes proprietary IP blocks, and that is a challenge. So I'm going to very briefly explain what these magic devices, FPGAs, are; there's a little device there. They're basically chips that can be configured to do pretty much anything. If you think of them as a blank array of cells, you can configure each of those cells to become a logic gate, a transistor, whatever. That means you can configure an FPGA to be a processor, an interface, a system on a chip; virtually any collection of logic gates that you can think of can be programmed into an FPGA, and the only real constraints are how many blank cells you've got, how fast you want it to be and how much money you've got. They can also be reconfigured multiple times. So you can immediately see that when we're talking about the development cycle, these things are a fantastic asset. Not only that, they can be inexpensive; a basic one starts at a dollar or so. So you can actually take a chip design, load it into an FPGA and use that for production as well. So the design, build and test cycle is almost identical to that used for software. So where does that leave us from a policymaker's perspective? I think these are the main policy issues that I identified. I don't claim to have the answers to them, 
but these are things that we need to think about. So basically, open technologies are all about reducing friction and facilitating this distributed and collaborative research and development. Open source software has been around for much longer, so it can provide us with a lot of information to guide us, but there are significant differences and we always need to be aware of those. There is still friction in terms of intellectual property and licensing, and we need access to more low-cost and interoperable software in the world of open hardware as well. I mentioned the problems that we have with proprietary IP blocks: they tend to increase friction because they're proprietary and they're not as interoperable as they should be. And at the moment we have market dominance in the world of open silicon by three big players, all of whom are based in the US. There's an extent to which this market is still dominated by proprietary thinking, and that needs to be addressed. All of these things tend to reduce opportunities for involvement by individuals and SMEs, and I think the future of open silicon is very much down to the involvement of individuals and SMEs, in much the same way as that has driven open source software. So apologies for overrunning; thank you very much for listening. That's what I have to say. Thank you, Andrew, I found it very informative. And now, as we are running out of time, let's go straight to the next presentation from Javier Serrano. Okay, can you hear me? Yes, we can hear you. Okay, can you see my slides? 
Yes. Okay, great, thank you. So it's a great pleasure to be here to tell you a bit about open source hardware developments going on at CERN. I'm going to take the angle of a public institution, and I'm also going to speak about hardware which is not chip design; this is bigger hardware: printed circuit boards, enclosures and computer networks. So the aim of my talk is to describe public core, which is a paradigm that emerged naturally in our White Rabbit project and can hopefully serve as a template for policymakers when they try to decide the role of public institutions in public-private partnerships. I will start with a very quick intro to CERN and to the White Rabbit project for context, and then I will continue with some words on our open sourcing experience: things that we believe we did right and things that we think we can improve in the future. Then I'll move on to this public core concept and finish with some plans for the near future. So, CERN. As you probably know... this is taking some time to come up; I don't know if you can see anything on your screen. Okay. So CERN is the biggest particle physics laboratory in the world. It's made of a big network of accelerators, and the ultimate product is the particle beam that physicists use to examine the fundamental constituents of matter, in this bigger circle here, which is the LHC, the biggest particle accelerator in the world, by having collisions and analyzing what comes out of those collisions. Now, there is a lesser-known part of our mandate, which is sharing what we do, and it comes from our founding document, the CERN convention. It was drafted in the 50s and it says, among other things, that the things we do should be published or otherwise made generally available. For those of us working at CERN today, a legitimate question is how that mandate should be interpreted in the technological scene of the 21st century. So open science is a big thing at CERN, of course, and it encompasses a number of opens, 
like open source software, open hardware, open data and open access, and at CERN we're very proud that all these "opens" rely on a fundamental building block, a piece of infrastructure, which is the World Wide Web, invented at CERN. A few words about White Rabbit: in a distributed accelerator, a big accelerator like the LHC, we need different pieces of the accelerator to be synchronized with one another, and a few years ago we invented and developed a technology called White Rabbit, which is an extension of Ethernet. In a White Rabbit network you basically have switches and nodes; they are interconnected, and by virtue of the White Rabbit protocol you get a common notion of time everywhere. White Rabbit offers two extensions with respect to standard Ethernet. One is sub-nanosecond synchronization, so synchronization better than one billionth of a second. The other is guaranteed latency: the time it takes for a message to go from any point of the network to any other point has an upper bound, which is very convenient, very useful for controls and data acquisition. White Rabbit is fully open source (hardware, gateware, software), and it is also standardized under IEEE 1588, also called the Precision Time Protocol, which, along with the fact that it's open source, has helped its adoption in many domains, as I will show later. Let me see, the slides are coming a bit slowly; I hope that's not too penalizing. So, just a quick look at the fundamental building blocks of a White Rabbit network. This is the White Rabbit switch. It is an Ethernet switch; inside you have an embedded Linux system and an FPGA with the fast part of the routing of the packets, of the frames, and it is fully open source (mechanics, printed circuit board, gateware, software) and commercially available from at least four companies that I'm aware of. Then, in White Rabbit networks you also have the nodes, and we have a reference design, so people can start quickly, in the form of a PCI
Express board which has support for White Rabbit. You can also customize its function by plugging in different types of mezzanines, so it can become, for example, an analog-to-digital converter by plugging in an ADC mezzanine. Okay, so a bit of a look at our open sourcing experience, things we did right. First of all, we made things modular, and we adopted a layered approach whereby the problem we needed to solve at CERN is the synchronization of accelerators: we built this generic foundation, which is White Rabbit, and then applications can be built on top of it. We knew from the beginning that our friends in other scientific facilities would find that useful: people we know in neutrino telescopes, in cosmic ray detectors, for example. That was a very natural fit, and already there, for society, there were quite a lot of savings from the sharing, from not needing to develop this basic infrastructure again and again. The next natural adopters were metrology offices, so the people who make and distribute official time in each of your countries; they very naturally adopted White Rabbit. And then, through the magic of open source and standardization, White Rabbit found applications in many areas. It is a little-known fact, but many domains need very fine synchronization, including electric power distribution, mobile telephony and even finance, where you have to timestamp all the packets very precisely. We also did things right regarding companies: we involved them from the very beginning, and this is a bit of a difference with respect to free and open source software, in the sense that you can do open source software without companies if you really want to; you can download code from GitHub, compile it, modify it, publish it again. In open source hardware, not involving companies is not an option for all but the most trivial designs; you need at least a company that will make things, test and distribute for you. So from the very beginning we discussed with companies, and they are an
essential ingredient of our open source hardware practice. It's also a very nice and easy way to get extra talent into a project, and because everything is open source, there is no risk of vendor lock-in. Now, things that we can do a bit better. We anticipated that if White Rabbit were successful there would be a lot of support requests, and we thought we had a good plan for that: we thought that the companies would provide support and people would pay for that support. In the end, what happened is that companies offered support contracts but nobody bought them; people just bought the hardware and asked questions in the forum, which the White Rabbit experts felt compelled to answer, but it's really outside their day job, so it's a strain on them, and that's a problem for sustainability. The same thing applies to coordination, to managerial effort: we could do with a bit more effort on the steering side of things, and the lack of time devoted to these managerial activities sometimes results in missed opportunities, in confusion, and generally makes it less likely that people will come in and adopt White Rabbit. Despite this, and with these caveats, we think White Rabbit has been a technological success, and also a success from the point of view of knowledge and technology transfer. There are many ways to synchronize things to within one nanosecond, and White Rabbit has become a de facto standard; I believe the key reason is its openness, as it's the only option for doing this kind of thing which is fully open source. Also, very early on we decided that we would not choose between open and commercial: we would have both, and this is actually the winning combination, because open gives you freedom from the risk of vendor lock-in, and commercial gives you scalability, the fact that you will have commercial support and you don't have to do everything yourself on the public institution side of things. Okay, so I'm reaching the core of the talk,
which is this way in which White Rabbit kind of self-organized. I gave a name to it, Public Core, because I think there are some aspects of it which are new, and it's always easier to refer to these things with a name. So there is, in White Rabbit and in many other projects, a public core; in the case of White Rabbit it is made of the switch and the reference implementation of the nodes, things that people can build upon. The way things have been arranged naturally in White Rabbit is that public institutions have been contributing mostly to this public core, and in the periphery of the project there is proprietary innovation. The companies which do that can afford to have higher margins in the periphery, because these blocks are not open source and they can more easily monetize them. This is the way things have evolved; I know that in other projects the share between public core and periphery, the relative sizes of these blocks, is not exactly the same, but the reasoning still applies. For those of you coming from the software world, you might be familiar with the open core business model, whereby, for example, a company has a community edition of its software which is fully open source and free as in free beer, and then proprietary plugins that you can buy, which are not open source. In that case there is sometimes a bit of a conflict of interest, because the same company is in charge of the core and the proprietary extensions: they might have an interest in pushing users to buy the extensions, and in the course of doing so maybe make the core barely usable. One thing we have in our project is that there is no conflict of interest, because there is a really clear distinction between the communities driving the public core and the proprietary extensions; in particular, the public institutions are only concerned with the core, and private companies are in the periphery, but also in the core when it's appropriate, when it's in their interest, and this can
happen with public money, because a public institution pays them to work on something, or with their own funds, if they think it's in their interest. Another way of looking at this is that we have this public core, which is really a commons, and then different actors build applications on top of this commons. When you contribute to the public core, you're lifting the ground for everybody, so everybody builds on higher ground and the value gets bigger for all these applications; so it's really in their interest to make sure that the public core is in a healthy state. Now, there is also a very important time dimension. Sometimes the community driving the public core might find it important for the core to evolve in a given direction where there is proprietary development going on, and to get that into the public core; then the companies which were innovating there go someplace else in order to keep the high margins, and they keep innovating in a proprietary way. There is one case in which this cannot happen, and this can be an issue: it's when these proprietary innovations are patented, because then, of course, the public core cannot go in that direction. This is an issue, and I will say a few words in the next slide about how it can be dealt with. So, I have just presented this paradigm of public core, and I had introduced some issues before. How can these ideas help with the issues we identified in White Rabbit, and how applicable is this to other projects? The first thing we decided, a bit inspired by the many successful examples around us, including those which have been presented and are going to be presented today, is to add a collaboration agreement on top of this public core paradigm, to formalize things a bit. This collaboration will have as one of its objectives to bring in revenue, which can be derived from a number of things, none of which has an impact on the open source nature of the public core: things like certification,
training, consultancy and so on, and that will be used to pay for labor to deal with the shortage of support and resources and the managerial side of things. There is also a no-patent rule that can be part of this agreement, so that people agree that the extensions in the periphery of the project will not be patented, and the core can grow in that direction if needed. Why do I think a public core will work well? First of all, because it's not that new; it's the experience elsewhere in many projects: the Linux kernel is a big example, but also the collaborations we are going to discuss today. People are already used to collaborating and pooling resources to create common infrastructure, and building on that to monetize extensions; this is a very common theme. The only refinement is a clear mapping between public institutions and the open source core, so you don't ask from public institutions something which is a bit unnatural, in my opinion, which is to keep things secret, to not publish things; and conversely, you don't expect commercial companies to publish everything they do under an open source license. The other reason I think it will work is that, as I said, White Rabbit evolved this way very organically and naturally, so all the actors are already in places which they find natural. So, what next? We are going to explore this combination of a public core and a collaboration agreement, as I said, to bring in revenue. We have a draft collaboration agreement which has been sent to the White Rabbit community and is going to be discussed in a meeting on the 25th of this month. The goal is really to ensure the health of the open source core, but also, very importantly, to provide for a thriving economy in the periphery. We really hope this can become a template, a source of inspiration for policymakers, in particular to prove that commercial interests and open source hardware can not only coexist but actually reinforce each other, and also
something very important from our perspective as a public institution: in the past, many people have explored the ways public institutions can be economic engines, but mostly through patenting and royalties, and we really think there is a case to be made for being as economically active, as important an economic actor or more, without sacrificing this basic mission of contributing to the commons. So that's what I wanted to tell you, and I will be very happy to participate in the Q&A session now. Okay, thank you, Javier. Let's maybe bring Rick and Andrew on screen as well. I don't see any questions in the chat right now, but of course I have some questions myself, and I guess our other speakers might have some questions too. The first one that comes to my mind is that you showed us the outlook and Andrew showed us the challenges and, sort of, the questions... okay, we have a question from Zvonimir, so maybe I will give the priority to Zvonimir: does White Rabbit support P4 programming? I assume that's the question. Yes. I must say I don't know what P4 programming is, so maybe Zvonimir can say a few words about it, but that probably means no. P4 is a programming language for programmable Internet switches. Okay, I see. There are definitely open source and partially open source software stacks for P4, which indeed started on large FPGA chips from Altera and Xilinx before people started making custom silicon for P4 programmability. Okay, no, we don't have that in the base distribution. I believe it's probably something that can actually be developed in software; I don't think you need a design change. Yeah, we have a running embedded Linux inside the switch, and anything you can run in there you can add; things were added in the past, like SNMP support and other ways of talking to the switch, by developing software that runs in the embedded Linux. This would be kind of a hardware layer, I think; it's not software, because of the very low latency. You can check
out just by searching for P4. Thank you, Zvonimir. I think this shows quite well the blurry line between software and hardware and how they interconnect, and this is actually where my question was going a bit, because, I mean, Andrew, you talked about the challenges, and, Javier, you talked about the outlook and, sort of, the plans for how to make it a bit more sustainable. But my question is: is the open hardware landscape very diverse? Because we know that for open source software we can find some business models, we can quite easily take an example from other projects, other initiatives or business models; is it also the case for hardware that, even though it might be on a slightly different layer in terms of software and hardware-ness, we can still take the same example and just follow the model, in a way? It is very diverse; there are a number of different models. What has been interesting is that there are projects, like LimeSDR and MyriadRF for example, where you've potentially got a software community which is coalescing around the firmware and the software that you use to communicate with the project; then it's got an FPGA inside it, and you've got another community that's coalescing around writing the HDL to configure the FPGA; and then you've got yet another community who are working on the PCBs around that. Those communities work in different ways, and I think quite a lot of this has to do with the speed of the cycle, and also with the ability for people to collaborate at a distance. So basically, the harder the hardware, so the more that the physical part of the object is important, the more difficult it is for people to collaborate unless they're actually physically close to each other. Okay, thank you. I don't know if you have any thoughts on this, Javier? Yes, I agree: the more hardware-like the project is, the more added value there is in getting a bit organized as well. In software it's very
easy to have an informal organization that evolves very quickly and does the interesting things very quickly, in a lightweight way; in hardware, in my opinion, the more friction there is in this cycle of iterating, producing and so on, the more important it becomes to have a bit of organization, hence the success of the initiatives we are seeing in the form of consortia, foundations and so on. Okay, thank you. We have one more question, and then we will move to Zvonimir's presentation. I'm just going to read it: for large companies that have been completely invested in the patent philosophy for decades, what do you think could convince them to part with patents when collaborating with the public core? Is that a question for me? Yes, I believe so. Yes, I think there has to be an economic case. I trust companies, and there is something I never do: I never substitute myself for companies and tell them what to think. I think they are very capable, much more than me, of figuring out whether something is economically interesting for them or not, and if we set things up in a compelling way, including with their help (they can also tell us what is interesting for them, what things they would accept, or what would make it difficult for them to collaborate with us), then I think we can find ways in which it can be a compelling case for them. And the proof is that these collaborations are already working very nicely, with companies in them. Okay, thank you, Javier. I can add for myself that in our study on the impact of open source software, we found numbers that prove the economic viability of opening up within companies with very diverse business models, but we weren't able to find that for hardware yet, so this is a very big research gap. I think Franky is saying in the chat that we have people who could... it's a great switch to the next presentation, I think, because Zvonimir will
discuss basically this. So, for now, thank you, Javier, thank you, Andrew. I know that Andrew will join us in the second Q&A, and Javier has some other speaking engagements happening now, so thank you again, and now I will welcome Zvonimir on screen. Thank you. Should I now share the video? Yes, please. Alright. And then, if you want, you can also share your video. Alright, and now share the screen. Also, feel free to address the question that we discussed last, because I think there is some interest in hearing your opinion on that. Is this the question on the patent philosophy, you mean? Alright, okay. We see your full screen for now. Okay, perfect. Alright, so I'll introduce CHIPS Alliance, and I'll try to zero in on some of these questions about what exactly is the business reason to work on open source hardware projects. Very briefly, I'll introduce what CHIPS Alliance is; I'll follow that with the Blue Hat business model; and then, if I have time, I'll touch on the open source design tools which, after we have spent two years on them, are a foundational block for the enablement of open source hardware projects, bringing them to a level that's similar to where the Linux operating system is today. So, CHIPS Alliance: who are we? It's an organization which develops and hosts open source hardware code: IP cores, like open source CPUs; open source software design tools, which is our fastest growing component and something that definitely brings a lot of commercially funded projects into CHIPS Alliance; and interconnect IP, physical and logical protocols, which attracts a lot of interest, especially from academia. It's a very free environment for collaboration; it's a standards-organization framework when it comes to collaboration and development, and we have a legal committee as part of the organization that helps set this up. We also have a legal framework that's built around the Apache 2.0 license, and the general idea of why anybody would want to do this is basically sharing
resources, dollars and engineering time to lower the cost of hardware development for both IP and tools. It doesn't necessarily cover everything that exists in the worlds of ASICs and hardware, but it does cover those areas where companies have common interests, like the RISC-V cores that Rick O'Connor mentioned for the OpenHW Group. Who are the members of CHIPS Alliance? This chart keeps changing on me, and I probably have some logo incorrect or some organization missing: a large number of companies and universities, continuously growing. In terms of organization, we are organized similarly to RISC-V: we are part of the Linux Foundation, we are controlled by the board of directors, and we are managed by our general manager, Rob Mains. Rob is full-time dedicated to running CHIPS Alliance and brought significant experience from the EDA tools industry. We have Henry Cook, who chairs the technical committee, in which we onboarded a number of projects, and there is also professional staff from the Linux Foundation helping with legal and finance, and Brian Warner, who is operations community manager and a program manager for various projects and programs in CHIPS. Finally, we have Michael Gielda, who leads the outreach and marketing committee and is working on advocacy, outreach, processes, etc. There is a number of different work groups in the organization: interconnect, Rocket SoC, cores, tools, AI accelerator, the Chisel work group and several more; this chart is actually getting more and more difficult to manage, so for a detailed list I suggest you check the CHIPS Alliance GitHub. Major milestones that we achieved in 2020: we started a major project on the SystemVerilog expansion of Verilator, which is one of our biggest projects and which I will mention a little later; this is a simulation tool built in open source, a major building block for enabling open source design, and a very effective collaboration tool. Several new projects joined
in 2020: Chisel, which is the largest open source hardware compiler project, and OpenROAD, which is trying to build a complete tool chain, from the design to the mask set, in open source. We released the AIB 2.0 chiplet specification; it is a chiplet physical interface that is championed and led by Intel and adopted by a large number of companies. We delivered a new generation of SweRV RISC-V cores that came from the Western Digital team, and this includes the first open source dual-threaded high-performance core targeting embedded real-time. And we successfully demonstrated OmniXtend, a memory-centric compute system which is a cache-coherence-over-Ethernet architecture. So now, jumping to some of the motivation: why do people do this? I have tentatively named this the Blue Hat business model; I don't have a trademark, but the idea is to make it the analog of the Red Hat business model that was championed by the Red Hat company in Linux. The taxonomy I want to use I will go over quickly, as I think Andrew has explained it already. Open source software covers open source operating systems and applications where the source code is available under one of the recognized open source software licenses, like GPL, LGPL and Apache. The interesting point is that some require open source contributions to flow back to the community, like GPL, and some are more permissive, like Apache, which is currently dominant in open source hardware. Open source hardware can be open source hardware designs, typically done in Verilog or SystemVerilog, implementing specific IPs like a RISC-V core or a DDR controller; sometimes it can be just the source code in Verilog, but sometimes, especially when you include the PHYs, like what we do for the AIB 2.0 chiplet, it can include actual circuit designs, even a whole GDSII mask. Open source hardware design EDA tools, on the other hand, are open source software tools needed for hardware design, and as such they are the building blocks for
everything that follows; as we learned, this is a really important component of the complete flow of open source hardware IP, and it's currently getting a lot of attention in CHIPS Alliance. So, what can be Blue Hat IP? Anything that you produce that works can actually become a cornerstone of Blue Hat IP, and the business model theory roughly looks like this; it comes in phases. The idea is to collaborate with partners, customers and open source contributors to develop a certain technology, and this can be done in phases. In phase one, you have a community-driven project that comes via the Linux Foundation lifecycle, and the Linux Foundation lifecycle is something that really allows anybody to start any project, without any limitations, without any difficult boundary conditions and constraints. In phase two, we have an attractor project, where a lot of typically smaller companies, startups and academic groups get really interested, really attracted, because it's something new, like AI or a new type of RISC-V core, and this becomes a development vehicle: new versions are popping out on an agile schedule; it could be unstable; it moves fast; it's super suitable for academic projects and master's and PhD theses, but not necessarily ready for commercial application due to its stability. Phase three brings the opportunity to monetize, and the foundation of that idea is that the IP is open source but free not necessarily as in free beer; it's more like free speech. The open source IP from CHIPS at that point becomes the basis: it remains open source and available on GitHub under well-defined licenses like Apache v2, but it's used to build additional 24/7 customer support; this comes as integration support, as bug support, as additional services built around the IP, and it's an opportunity to monetize the open source IP. I think this is exactly what we've seen for White Rabbit: you can have products built around this IP, you can have additional subcomponents that
are not necessarily open source, and finally you can have services; in the case of ASIC IP, design verification offers an opportunity to build a specific service to support commercialization. Then, just to summarize this: phase three is really where the monetization comes, and service support is probably the first thing. For example, Codasip provides design verification services around the SweRV IP. Then there is value-added IP: again, in the case of Codasip, they use their design tools to add additional features to the SweRV core, and then they support the SweRV customers by providing bug fixes, guarantees for compliance, and several other value-added services. I added this section because I think it's interesting to explain, at one more level of detail, what kinds of things are important to develop in open source. If you look at the work groups, a significant amount of the activity we have in CHIPS Alliance is happening around the tool work groups (Verilator, Chisel, FuseSoC and several other software development projects), and this is the moment where it's interesting to remember history. The Linux operating system had a very important component to which it owes its success, called the GNU toolchain, and the GNU toolchain essentially represents a set of tools (compilers, debuggers, etc.) that were necessary in order to compile the Linux operating system source code and produce the binary that can run on various computers. It also became an essential building block for the open source licensing model championed by Linux, which is GPL, which required that contributions around the toolchain's workings had to be open source, and that defined how things would actually work. So then the question is: what would be the equivalent of the GNU toolchain when it comes to hardware? It's the hardware simulation and compilation tools that are the equivalent of the GNU toolchain in open source hardware. This is something where we
put some emphasis. If you look at this chart, it represents the typical ASIC flow: from the architecture specification, which can be a PowerPoint, Word or Excel document, to the RTL design tools, whose output is typically in the form of Verilog and SystemVerilog files; those can be run through the simulators, and through the verification IP and various verification test benches; after that come synthesis, timing, place and route, etc. This top part is where we put the emphasis in our first two years, and there are some significant and interesting successes. Probably number one are the projects around the expansion of Verilator. Verilator is a pre-existing open source design tool that has a significant performance advantage over the commercial tools, in both compilation time and execution time for hardware simulation, and we have worked inside CHIPS Alliance on adding SystemVerilog extensions to Verilator, which enables Verilator to become a major tool for the RTL verification part, for design verification. That project, which is going really well, has really set up a new playground where companies working on open source projects can collaborate entirely using open source design tools, which significantly simplifies how collaboration is done and also enables potentially new and improved licensing models. And then the second project of that kind is Chisel, and Chisel is literally the equivalent of the compiler in software: it has two components, the Chisel language and front end, and the FIRRTL layer, which can actually emit a synthesizable design for various targets like FPGA, ASIC, etc.
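Tools like Verilator compile RTL into a model that a test bench steps once per clock edge. As a toy illustration of that eval-per-edge loop, here is a plain Python sketch, with a made-up `Counter4` module standing in for real RTL; this is not actual CHIPS Alliance code, just an analogue of how a Verilator-generated model is driven.

```python
# Toy Python analogue of driving a simulator-generated model:
# Verilator turns Verilog into a class you evaluate on each clock edge.

class Counter4:
    """4-bit counter with synchronous reset and enable, like a tiny RTL module."""
    def __init__(self):
        self.rst = 0      # input: synchronous reset
        self.en = 0       # input: count enable
        self.count = 0    # output: 4-bit register

    def eval_posedge(self):
        """Update state as a simulator would on each rising clock edge."""
        if self.rst:
            self.count = 0
        elif self.en:
            self.count = (self.count + 1) & 0xF  # wrap at 4 bits

dut = Counter4()          # "device under test", as in a Verilator test bench
dut.rst = 1
dut.eval_posedge()        # hold reset for one cycle
dut.rst = 0
dut.en = 1
for _ in range(20):       # 20 enabled cycles: 20 mod 16 = 4
    dut.eval_posedge()
print(dut.count)          # → 4
```

The point of the sketch is the workflow: the design is an ordinary object, and verification is just software calling it cycle by cycle, which is what makes open source simulation such an effective collaboration tool.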
The interesting thing in Chisel is that the hardware design is actually done in a high-level language, based on Scala extensions, but it is still compiled into Verilog and, via the Verilog tool chain, further compiled into actual hardware; so these are the foundational tools that make this happen. Another very powerful one is FuseSoC; like Verilator, it existed before CHIPS Alliance, but it's actually used in almost all CHIPS Alliance SoC projects and provides a very powerful tool for the quick design of SoCs and the reliable reuse of IP. So I'm going to stop there, maybe put this slide up as a background, and I'm ready to take some questions. Okay, thank you. I have to say I haven't seen a new question pop up in the chat at this point, but maybe let's invite the other panelists who are still with us... oh, I'm sorry, I'm sorry, we are not at the question part: we are now going to Calista's presentation, so we will take questions after Calista's presentation. She pre-recorded it, so we are going to watch it now, and then we will go back to the questions, if that's okay. Okay, thank you. Hi, I'm Calista Redmond, CEO of RISC-V International, and I'm here to share with you a little bit about our journey and how to engage with us, as well as bring you up to date on all the latest things going on across the RISC-V community. First, let's just check in on an incredibly important fact: success is not sustainable if it's done in isolation. Success requires the collaboration and the input of the stakeholders around the solution. You know, there may be one-hit wonders that do appear successful in isolation, but I promise you, throughout time, throughout history, sustainable success relies completely on collaboration, and I will go further to say that in technology (software, hardware, tools, resources), open source has now become the fundamental base building block of the success that we see across the industry.
So, let's think about this. What does the open era of computing mean in the semiconductor space? Well, RISC-V is leading this initiative, and we are seeing that, through open collaboration, we are enabling a completely fundamental game change across the semiconductor industry. This enables design freedom and flexibility across domains and industries that has not been seen before. This collaboration, this open approach, is cementing the future for the next era of open computing, and we're seeing this at scale. Why? Why is RISC-V so disruptive? Well, it's two things: first, it's technology, and second, it's business. But let's talk about technology first. RISC-V is fundamentally a small and compact base ISA: 47 base instructions, which is so much easier to work with than the 1,500 base instructions that you see in legacy architectures. They didn't all start that way, but they got there through the incremental extensions that were added onto a base. RISC-V has taken the approach of freezing the base, keeping it small and easy to manage, which helps a lot when you get to power consumption and other variables, and upon that base you can add the extensions that you require: a truly modular ISA, rather than the incremental approach that has been the stalwart of the industry. Second, we are allowing absolute design freedom: pick and choose only those aspects that you need, rather than be burdened with everything. This is growing rapidly; this design flexibility is at the right inflection point for the business opportunities that our industry is seeing. The business model is disrupted too: not only are there no IP licensing constraints, there are no boundaries put on the designs that you may pursue. This opens up incredible opportunity, and I'm not just talking about where you can take your products, but also about the partners you can collaborate with, across industries and around the world, in the design and development of your solution.
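The modular naming convention reflects this pick-and-choose approach: a RISC-V ISA string is a frozen base (such as RV32I) followed by the extension letters a design opts into. The toy parser below illustrates the idea; it handles only a few single-letter standard extensions and ignores version numbers and multi-letter "Z*" extensions, which the real RISC-V specification also defines.

```python
# Minimal sketch of RISC-V's modular ISA naming: base plus opt-in extensions.
# Simplified for illustration; not a complete ISA-string parser.

STD_EXTENSIONS = {
    "M": "integer multiply/divide",
    "A": "atomic operations",
    "F": "single-precision floating point",
    "D": "double-precision floating point",
    "C": "compressed 16-bit instructions",
}

def parse_isa(isa: str):
    """Split e.g. 'RV32IMC' into (register width, base letter, extensions)."""
    isa = isa.upper()
    if not isa.startswith("RV"):
        raise ValueError("not a RISC-V ISA string")
    width = int(isa[2:4])            # 32 or 64
    letters = isa[4:]
    base = letters[0]                # 'I' (or 'E' for the embedded base)
    exts = [STD_EXTENSIONS[ch] for ch in letters[1:] if ch in STD_EXTENSIONS]
    return width, base, exts

print(parse_isa("RV32IMC"))
# → (32, 'I', ['integer multiply/divide', 'compressed 16-bit instructions'])
```

Each extension is an opt-in feature on top of the frozen base, which is exactly the design freedom described above: a deeply embedded core might be RV32IC, while an application processor adds M, A, F and D.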
This expands your markets, it expands your opportunity, it expands the solutions and adjacent spaces you can take your RISC-V solution into. Beyond just taking these barriers down, RISC-V is at the forefront of building and growing and seizing market opportunities, and this is being reflected by analysts far and wide as we look at the opportunities in front of us. First, 50 billion connected IoT devices are forecast by 2030. We're about halfway there already, and you're going to continue to see this growth across the next 10 years. This is explosive growth, not just in the enterprise but especially in the interconnected home; it ranges from your refrigerator to your wearable. PCs and smartphones are also adding a boost to this, and I will say that automotive is one of the fastest growing segments that we're seeing. This has incredible potential for RISC-V. In fact, Semico put out a research report a few years ago forecasting 62 billion RISC-V cores by 2025. That number has already jumped: in their latest report, just this last March, they're forecasting nearly 80 billion RISC-V cores by 2025, and I will say we are well on our way to this target already. This is a nearly 115 percent CAGR from 2020 to 2025; in five years, you can't argue with that kind of growth. This will compose over 14 percent of the overall CPU core market. This is already seeing traction. We are already seeing this today. In fact, last November Wilson Research found that nearly a quarter of ASIC and FPGA projects already incorporate RISC-V. Nearly a quarter. I will contend that no ISA has grown this fast in the history of computing. And it's not just the ISA, it's not just the hardware. The IP market, the software, the tools, and the additional ecosystem that surrounds RISC-V are also seeing rapid growth. This CAGR is at 54 percent, but watch this space, see where you can engage and where you can get involved.
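For context, the CAGR figures quoted here just compound an annual growth rate over the forecast window. A minimal sketch of the arithmetic (the numbers below are illustrative only, not Semico's underlying data):

```python
# Compound annual growth rate: the constant yearly rate that takes a
# starting value to an ending value over n years.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# Illustrative only: a quantity growing at 115% per year for 5 years
# multiplies by (1 + 1.15)**5, i.e. roughly 46x its starting value.
growth = (1 + 1.15) ** 5
print(round(growth, 1))  # 45.9
```

That multiplier is why a triple-digit CAGR over five years reads as such dramatic growth.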
That IP is coming from new designs, new ways of going after the parts of this market. And that market is growing across cloud and data centers and other scale-out implementations, where providers are looking to leverage AI and machine learning to really gain competitive advantage. This is being seen in automotive; I mentioned Semico saw this as the fastest growing segment, at more than 200 percent between 2020 and 2025. This is based on security and other critical features that go into a car, from whether or not the car stops to what's playing on the radio. Industrial IoT is continuing to grow, with small, compact, efficient designs that look for great power efficiency in a small envelope, as are mobile and wireless. The more you ask of your phone, the more processing power it needs and the more cores are required. This is continuing to grow generation after generation. As I mentioned, consumer-level IoT is continuing to grow, as are memory and storage. This is one of the largest fields where we see growth for RISC-V. RISC-V International, our community, is continuing to grow very rapidly. In fact, in 2020 we grew 133 percent. Already this year we have doubled our membership. This is an incredible testament to the amount of investment that is coming into RISC-V and the amount of opportunity that our members are seizing. We see a span of participation across nearly 70 countries. It's important for us to continue our steadfast focus on technical work, technical work that matters not just for making cool things, but that underpins the strategic foundation for the future of RISC-V. Around this, it's important that we have numerous ways to engage, numerous ways that we at RISC-V support the commercial success and industry success of our members.
And you'll see us running development partners, RISC-V labs, and development board programs, as well as working together with others in the community through alliances: with CHIPS Alliance, on security with GlobalPlatform, and about two dozen other alliances. This innovation roadmap has a foundation in technical deliverables. Everything below the timeline you see, these are the technical deliverables that we have on track or have delivered since the ISA was defined in the early 2010s. This also continues to proliferate in the programs and the support that we have for industry adoption. This ranges from AI special interest groups, graphics special interest groups, Android special interest groups, HPC, and other types of programs that we are continuing to proliferate across the industry. This is not done in isolation. We are also continuing to work with the software ecosystem and other aspects that are essential, that are critical to designing processors that stand the test of time. You will see on this map that we engage across various technical aspects and up through the stack into software, through to specific industry-type applications. These are incredibly important alliances that RISC-V International fosters for and on behalf of our community, such that we work in concert with others, that we upstream wherever possible, and that we elicit the best contributions to stand that test of time. This industry progress is already being reflected. We see accelerators being delivered by the European Processor Initiative. The European Processor Initiative, as well as the European community, has quickly embraced RISC-V as the foundation for processors and is engaging deeply in investing in digital sovereignty for Europe.
The RIOS Lab collaboration between Tsinghua University and Berkeley has continued to progress, bringing out a single-board computer, and we see additional progress being made throughout our membership, from Alibaba on cloud and edge servers to Andes with superscalar multi-core, SiFive with a personal computer, and more. This dedicated community would not be here if we did not engage with the stakeholders all around the solution: from students to multinationals, across industries from embedded to HPC, in services, fab and design services, as well as I/O, memory, storage, and software, and of course with our independent engineers and developer community. We have made incredible progress together, whether it's our 60-plus RISC-V workgroups, our labs and development partners, the more than 300 solutions posted on our website, or our 31 different local community organizations attracting more than 6,000 engineers. And continued engagement: as we look to continue fostering RISC-V, we are supporting and sponsoring six different programs to ensure our members' success. First, technical deliverables, delivering on what matters; test suites and compliance and verification; visibility, to amplify the news and information that's relevant to our market; learning and talent, to seed the industry with talent that is tuned to RISC-V; advocacy, whether it's our ambassadors or our alliances; and providing a marketplace for all things RISC-V. I am excited to be here and to share this message with you. Thank you so much for including me in your program, and please visit us online at riscv.org, follow us on Twitter, on LinkedIn, and on WeChat. We are there for you. Thank you. Okay, so we heard from Calista, and she talked a lot about the complexity of the community, but also about the business case for RISC-V and for other solutions and products within the open source hardware ecosystem. So I would like to invite Zvonimir and Andrew on screen again. If you can just share your video again, there is a button that you can click
there, so we'll be able to tackle the last question that we saw in the chat. Okay, and you also have to click to unmute yourself. Perfect. Andrew is here, and Zvonimir is still getting there. Yes, perfect. Okay, maybe let's tackle this question that we have in the chat. It would be great to have Javier on board as well, but I know that we have some people in the audience who are working a bit more on the public sector side, on the university side. I'm just going to read out the question so we are on the same page. It's a question from somebody who works in a university tech transfer office, and the concern is: my university, as a public institution, has no visibility for the value it contributes to generate, and of course does not share in the revenues deriving from the products and services developed by companies. The question is whether there is any way to capture the value that is produced within universities or public sector initiatives. I don't know who would like to go first, if you have any thoughts on that. I'm happy to kick off. So, Alessandro, thank you very much for that question. It's an excellent question, and I think you've really captured the core of it: the problem is that there is no visibility for the value that's contributed. We know, as a result of the study that we've just done with OFE, that value has been created, there's no question about that, and I think it's important, particularly in the world of open hardware, that more research is done so that we can actually quantify that. In some ways that should be straightforward, because, especially in the software world, we talk about public money, public code: if universities are funded by public money, then it's logical that their outputs should also be made public, and if we happen to know that public code is something that generates value for the entire community, then that should be the end of the story. But of course it's not as simple as that, because it's
a lot easier to claim that a department, a university, an organisation, or an individual is successful if they can say "I've just received a cheque for a million euros" rather than "I've increased the value of the economy by 5 million euros". So there's definitely some work being done in those areas, but in terms of specifically capturing value, this is where Javier really has a lot to say, so I don't want to try to second-guess what he would say. But there are a number of mechanisms that already exist in the world of open source software, and we've heard about the Blue Hat model, which is basically similar to the Red Hat model, and I think that's potentially very powerful as well. So from the university's perspective, I think the key is to try to nurture a community around that particular product, and then nurture a set of commercial partners who are operating around it, and then come up with a business model where it's those commercial partners that are commercialising that particular piece of open hardware, but where there's a flow of value back to the university, in the sense that the university's relationship with those partners can continue to provide commercial value to the partners: continuing collaboration, providing expertise, possibly even lending the name of the university or the name of the project as well. So I think there are mechanisms that can be applied, but in the world of open hardware we're still feeling our way to a degree. Thank you, Andrew. It sounds like it's a question of maturity, which comes up a lot when we talk about hardware, especially in the context of learning from the software space. Zvonimir, do you have any thoughts on this? I do. There are two angles to this question. The first angle is in your domain of expertise, which is policy: what exactly should the policies be for universities if they're publicly funded and receiving funding from governmental organisations? You could argue
that the benefit then should be for the whole community. Of course, this is not always the case: there are plenty of universities, especially in the United States, that actually do derive significant revenues from IP licensing, and that's completely okay as well. I would say that what applies to companies could equally well apply to universities. I see a piece of open source IP as an attractor that brings different parties together, a common denominator for different kinds of projects, which can be used as a base for building proprietary IP extensions or services. It's kind of difficult to see a university taking on those corporate things like services and support, but a university can definitely also release open source components with the intent of creating value-adds built on top of them. So I don't see a significant difference between a company, which needs to make money somehow, and a university that would like to generate revenue, assuming it agrees with public policies. And I think the way universities work is very different between Europe and the US. We actually think this is similar to any other case of innovation, because here we are talking about producing innovation. Maybe it's actually quite similar, just in a slightly different realm. We can say that it's more complex, but all innovation is complex, so yes, it is similar to innovation in other sectors. Yes, I agree. Alessandro, I hope that this answered your question. I think it would indeed be great to still have Javier on board, who would be able to answer this question from a more personal perspective, but as we are running a bit out of time, I would like to ask one more question to both of you, and then we can conclude the event. It's about the future outlook: what do you think needs to happen for open hardware, and in the realm of policies, what kind of support do you think would be beneficial, and for what
particular sectors, types of companies, types of research institutes, or types of collaboration? Just, what would you like to see in the next 5, let's say 10, years? Maybe let's start with Zvonimir, you are just next to me on the screen. I think for the next 5 years, I'm hoping to further extend the openness of the toolchain. When the toolchain needed for hardware design is open source, that really creates an incredible platform for innovation, because one of the obstacles to open source hardware innovation was that whenever proprietary toolchains were used, they were adding their IP into the flow and effectively preventing the sharing of some of the outputs. So that's something that we are hoping to develop in the next 5 years. We already have OpenROAD as a project in CHIPS Alliance, and we hope to continue growing and fostering it to enable that. We also think this is going to be an amazing tool for legal innovations, because I think once the legal arms realise that this is how things work now, in 5 years from today we will see where the possibilities are. Okay, thank you. Andrew?
I agree with all of that, and I think what's interesting is that, because a lot of what Zvonimir was talking about is very similar to the way that open source software has become so successful, and we already have plenty of examples of that out there, industry understands the power behind it. I think a lot of what Zvonimir was saying is actually inevitable, and you can almost argue that we don't need too much in the way of policy to make it happen, because it might happen anyway. But that's because we're talking specifically about the softer end of the hardware spectrum; I don't think that's necessarily going to be true elsewhere. It would be very interesting if somebody did come up with an open source suspension component for a car, for example. Whether that sort of mechanical design is something that is ever going to happen, that's still seen as being very much the hobbyist end of the industry. There's plenty of stuff going on, and there's a lot of really good stuff going on around medical devices with COVID, etc.,
and I've been involved in a number of initiatives. So I think what we really need to do is to look at the areas of open hardware, because anything that benefits open source software from a policy perspective is likely to benefit the softer end of open hardware as well. The idea of setting up open source program offices, for example, is a great one, because it basically gives organisations permission: it's no longer a weird thing to work in open source software, and therefore it won't be a weird thing to work in open source hardware when you're looking at the software end. What's a lot more interesting to me is whether we can move into the harder end of hardware, the 3D printing and the mechanical devices and so on, and whether we can get the same dynamics to happen there. That's when we start to talk about things like intellectual property, which is a bit too complicated at the moment and varies very significantly from jurisdiction to jurisdiction, so there's a lack of certainty there which can cause problems; and also the regulatory regime, because clearly, if you're providing goods into the marketplace, on the one hand you've got to make sure that consumers are protected, and that's always the case, but it's especially difficult when you're talking about things like medical devices, for example; and on the other hand, you must make sure that you're not restricting innovation to a small number of organisations by doing so. And I think those are really the challenges that I would like the policy makers to look at and to consider. Thank you, Andrew, this is very interesting. I don't know if you have any last words as a sum-up, but I think this was quite an interesting outlook on what can happen in the next 5 or 10 years, because we've been trying to move these discussions forward for the last one or two years, and I have to say it's not that easy in the policy landscape, because again we have these issues of maturity and complexity
and quite a diverse landscape. But I hope we'll have another event like this, and by then we'll have a bit more research that we can cite, or a bit more policy that we can actually develop or analyse. So, yes, we are running out of time, 20 minutes past, so thank you very much. I hope that you enjoyed it; I learnt quite a lot, and I hope that our audience did as well. We'll have another instalment of the open source policy series next week, on the 26th of June, so I hope to see you then. Thanks a lot, and have a nice evening, depending on where you are. Thank you so much. Goodbye.