Welcome back to our last keynote session of the event. Hope you've all been enjoying the week and had a great time yesterday. I was able to drop into a few of the sessions and was so happy to see not only the amount of engagement over the digital channels, but the amazing content that all of you are working on together at this event. It's really something amazing to see. Our first speaker today, Thomas Gleixner, is a long-time Linux kernel developer with an embedded background and a strong affinity for impossible missions. In addition to serving as CTO of Linutronix, a Germany-based FOSS consultancy and service provider, he's an active maintainer in the Linux kernel project and an appointed fellow of the Linux Foundation. Today he's going to talk to us about real-time and other merge conflicts. Please welcome Thomas.

Good morning, good afternoon, good evening. I hope this covers all relevant time zone greetings. First of all, I have to admit that the thought of giving a talk in front of a camera and not in front of an audience made me feel uncomfortable. So I decided to have at least a comfortable seat and no slides. Let's see how this works out. Trying to get intrusive changes merged into the kernel is a challenge. The real-time patches are quite special in that regard due to their complexity, but the general principles are pretty much the same. Let me look at this from both the submitter's and the maintainer's view. Most larger projects, but even trivial device drivers, started out as a proof of concept. Taking shortcuts, working around shortcomings in the existing infrastructure, or just being unaware of better solutions is nothing unusual. It's the normal way things start out. The real-time patches are not any different. But this is also the point where it gets interesting. In a one-off project scenario, the main goal is to make it work for the use case at hand. These one-off projects are nothing unusual in our industry. So the proof of concept gets some minor polishing, a few duct tape fixes for the testing fallout, then it's done and on to the next project. So getting your code accepted into the mainline kernel should be trivial, right? Polish it up a bit so it halfway fulfills the formal criteria, post it, and the relevant maintainers will just pick it up. Then it's done and on to the next project. If it were that easy, my talk would end right here. But there is this thing called reality which ruins everything. To be honest, I never thought about posting the real-time patches as-is for merging. That would have been some interesting fireworks. They were full of awful hacks, and obviously we were proud that we had made it work. But looking at an old version today just makes me feel ashamed. So we went and rewrote everything several times, polished it up, and merged it piecewise and gradually. Not every project is in the scope of the real-time patch set, and a golden rule with a 100% success guarantee does not exist at all. But from my experience as a submitter and maintainer there are a few things which help avoid the traps which have a 100% failure guarantee. Avoid saying, "I need to get this merged into kernel version X." Failure is guaranteed. Code is going to be merged when it is considered ready to be merged by all involved parties. Tell this to your manager and make sure he understands it fully. This will spare you and the people who need to review your code a lot of grief and frustration. Do not ignore feedback. Doing so is a guarantee for failure.
If you do not agree with the feedback, then start a technical argument and settle it upfront. Maintainers and reviewers are not always right, but you need to convince them that your approach is correct or even better. But you also need to take their arguments and concerns carefully into account. They have to maintain the code for a long time, even when you have moved on to a different project or a different job. Be aware that maintainers tend to keep track of their previous feedback, and most of them are not really amused if they are ignored. This is especially important when it comes to user space interfaces or other mechanisms which become part of a hard-to-change interface. Yes, this will take time, but it's valuable time spent. Go the extra mile. Especially with larger projects it happens that the reviewer or maintainer asks you to do some preparatory cleanups in order to make the change less intrusive and better to maintain. There is again an answer which is guaranteed to fail: "It's not in the scope of my project." This is not an option. Such requests are usually very, very reasonable and not outrageous abuse. And whether your manager likes it or not, at this point it is in the scope of your project. If you don't feel technically up for the task, just say so. Admitting a lack of expertise does not make you a bad engineer. You will get help and assistance, which makes you an even better engineer than you are now and gives you a deeper understanding of the bigger picture, which is in turn going to help you with your project and your next project. After 20 years of kernel experience, I still run into situations where I stand there like a duck in a thunderstorm and just feel stupid. Nothing wrong with that. Having full comprehension of 28 million lines of code, 24 architectures, and a large number of subsystems is simply impossible. Not to talk about the extra complexity of user space. Going the extra mile is not always a detour. It's often a shortcut, because the integration of your feature becomes way easier and the overall outcome more palatable for everyone involved. In the end it saves time and nerves. Last but not least: don't be stubborn and insist on being smarter than everybody else. I can assure you that the people on the other side are at least as stubborn and smart as you are. You might know more about your particular chip or concept, but the people you talk to probably know more about the overall picture and how your particular feature can fit into it. Let me stop here. There are tons of details which I could talk about, but that would be in the scope of a lecture series and certainly not doable within the time constraints of this talk. I picked a couple of prime examples which can make your life as a submitter harder or easier and have the same effect on the life of a maintainer or reviewer. At the very end, all of these examples are more about human interaction than about technical details. Human interaction is hard in general, but it becomes even harder when the interaction is based on fundamentally different expectations. Dirk Hohndel once said, open source is a social experiment. He's right about that. It's a social experiment which does not follow any smart, thought-out experimental setup. The only purpose of this experiment is to observe how it evolves over time. I've had and still have the pleasure of observing it for more than 25 years from different points of view.
User, occasional submitter, out-of-tree tinkerer, regular submitter, maintainer, and submitter who is trying to get the last crucial bits of the real-time patch set merged. The expectations of submitters and maintainers certainly went through several permutations over time. But they all have in common that the expectations on both sides were and still are different. In the early days, Linus' expectation was to get his teaching toy working, while some of the early contributors wanted to use Linux for real-world applications. At some point this culminated in a mail from Linus to the mailing list telling those people to get out of his inbox. During the early distro days a new set of conflicting expectations came along. At that time a lot of maintainers were still hobbyists and not necessarily interested in the enterprise world. Later on we got the enterprise versus embedded conflicts, and the conflicting expectations and interests during the 2.5 development cycle, which ultimately led to the rolling release model which we are using now. The constantly ongoing conflict of expectations around handling security issues deserves to be mentioned as well: secrecy, power games, lawyering, and marketing versus a pragmatic and truly collaborative approach of fixing the problems in the best way for everyone. Meltdown/Spectre was surely a prime example of that. The recent Bluetooth affair is just another proof that history has to repeat itself. In the past years we see conflicting expectations of a non-technical nature as well, like the way we communicate: email versus GitHub, old school versus Generation Z. We surely also have the conflicting expectations of corporations, managers, and the open source community: roadmaps, checkboxes, time to market versus take your time and do it right. Of course we also have conflicting expectations of all sorts within the community itself. But fortunately most of them are over technical and maintainability issues, and they usually get solved after both sides calm down and figure out that being stubborn forever is not getting us anywhere. So with all the differences and all the changes of expectations, there is one important expectation on the community and maintainer side which has not changed much over time: the expectation to create and preserve a high-quality, long-term sustainable and maintainable operating system. This is the expectation to focus on, and it's important to understand that this expectation is not negotiable at all. This expectation is one of the many reasons, but in my opinion a really important reason, why open source and Linux still exist and thrive. It's also the expectation which should become the common ground for all parties involved. If that happens, then the rest of the still existing conflicts of interest and expectation will become minor nuisances. From 20 years of kernel experience and 15 years of gradually getting real time into the mainline kernel, my main takeaway is that focusing on doing it right is the only way to get code merged into the mainline kernel. Meeting the non-negotiable primary expectation is the way to go. It's the easiest way to overcome the conflict of expectations and interests and to build trust with the people you're interacting with. It will take time, but ignoring this will take more time and cause grief and frustration on both ends for no reason, or it will result in a complete failure in the worst case.
At this point I want to take the opportunity to say thanks to the members of the Linux Foundation Real-Time Linux project for their support and deep understanding that doing it in the right way is the only option to get it done at all, despite their obvious desire to see it merged yesterday. With that I want to thank you for your attention and wish you a nice day, afternoon, or evening, depending on the time zone you're in.

Thank you, Thomas. Our next speaker, Professor Jesús Labarta, is Director of Computer Sciences at Barcelona Supercomputing Center, where his research team has developed performance analysis and prediction tools and is pioneering research on how to increase the intelligence embedded in these performance tools. Today he's going to share on the European Processor Initiative project, which aims at developing European processor technology for high performance computing and emerging application areas, with an important objective of the project being to develop a fully owned implementation of a generic accelerator based on the RISC-V vector extension instruction set. Please welcome Professor Jesús Labarta.

Hello, I'm Jesús Labarta, Director of the Computer Science Department at the Barcelona Supercomputing Center, and I'll be talking about the RISC-V vector processor in the EPI project. First let me just tell you that this is going to be, rather than a lot of details on the actual processor, some personal opinions on fundamental characteristics that we believe are important and that have directed us in the development of this processor. I'm somewhat biased, because I come from the HPC, high performance computing, world, and many of the motivations are in this sector, although I do believe that it can influence and contribute to improvements in computing in general in many other sectors. The motivation for the project is the actual situation that there is no European company designing processors, and that is what was intended to be fixed by starting initiatives in that direction. This is a project which coordinates vendors, companies, users, and HPC centers, trying to design a processor for HPC systems which could also be used in other domains that have intensive computing demands. The current EPI architecture is the one depicted here, and it's based on Arm, as a more mature technology, as of today more demonstrated and accepted in the HPC domain, integrating several of those cores into a chip. But my major focus today will be on a component of which there will be just a very limited number of tiles in the chip: the RISC-V vector-based accelerator. This RISC-V accelerator is architected to be made up of eight RISC-V vector cores, with vector processing units consisting of eight lanes and the ISA being able to support vector lengths of 256 elements. We also put special interest and attention on supporting high byte-per-flop ratios and on mechanisms to control locality in the shared cache that is available between the different cores. The tile will run Linux and will also have some additional accelerators for deep learning functionality and stencil algorithms, as well as variable-precision, large-precision arithmetic. I would like to, as I said, focus on the general characteristics, on the concerns or challenges that you have when designing a large-scale system; there are many and they can be overwhelming.
And the objective is to see whether we can describe them in a unified way: what are the fundamentals that allow us to really handle all these issues and challenges in an elegant way, with the possibility of maximizing the performance of the system and avoiding the collapse that we are seeing in this cartoon. So let me start by talking about the term holistic co-design, and what we understand by it. Co-design is probably a buzzword; it's been used a lot. And very often it essentially means that one of the levels pushes one contribution or suggestion to the level below it, along the vertical line here. I would like to see it more in a circular way, where everybody interacts, can contribute, and can suggest requirements to any other level. And the issue is to identify where to best place the solutions to the issues in a well-balanced architecture, where the forces naturally distribute between the levels so that minimum energy gets maximum results. In reality, when you are not able to do part of the system, you cannot do this holistic design where all levels contribute to all levels, and probably the system does not work that well. So from that point of view, the objective of the project is to really achieve a holistic system, including interaction between hardware and software. And it is really important, as we all know, that for designing very, very complex systems, interfaces and the open source availability of implementations are of key importance. That lets you leverage those interfaces and standards, and lets you contribute where you think you can innovate or contribute interesting or important characteristics for a given sector. This is the situation which we had in terms of availability of open standards, interfaces, and implementations at the software level. And when you go down to the hardware level, RISC-V is the natural choice as of today, and this is what we have taken and what we are integrating into this EPAC accelerator. The first observation is about the importance of really doing very detailed analysis and gaining insight into the behavior, and that co-design is aimed at getting fundamentals across multiple applications. That means you have to analyze, with varying granularity, many applications and get fundamental characteristics of the interaction between processes, the locality behavior, the contention on resources; what do we observe in existing machines, in order to use that information for the design of future ones. This is the kind of information that we leverage from the centers of excellence, which are activities also funded by the European Commission, and which let us gain that insight into the behavior of programs. And what I want to argue is that this very fine-grain analysis that is needed at the high level is also going to be very useful at the microscopic level, to understand the actual behavior of the microarchitecture of the processor: access patterns, locality, and the microscopic impact of different scheduling policies, cache replacement policies, and microarchitectural changes.
So, going into the four kinds of principles: the first one that I think is important is designing a hierarchical system, a balanced hierarchical system, where each level contributes a given amount of concurrency and is adapted to a given level of overhead. We are talking about huge amounts of concurrency, 10 to the eighth operations per cycle. It's a huge number, of course, and trying to orchestrate all of that at a single level is a really huge difficulty. So the granularities supported by the different levels are tuned so that each level handles a limited amount of work, and the innermost levels can support finer granularities. What we think is that the workflow level, for example, can support a fair amount of concurrency. MPI is very rigid, so it is one level, but it can actually support a large number of cores. Then within nodes you have potentially different levels of granularity being supported, and this is what we try to do with threading at several levels. At a given point we get to the cores, a limited number of cores, as we have seen in the architecture presented; each of those has a large number of functional units which are handled by the internal hardware and control of the microarchitecture. So essentially it is a situation where every level contributes several tens or several hundreds, some of them more, but essentially we adapt the contribution of each level and try to avoid some levels having to contribute a huge amount of concurrency, which probably might not even be easy to extract, or even be available, in applications.

A second thing is that we have to move from a latency-dominated view of the world to a throughput-oriented one. We have to be able to instantiate a lot of work, to specify a lot of computations to be done, with interactions between them, with order to be maintained, but not necessarily a pure fork-join, purely synchronous type of behavior; instead, enabling asynchrony. This is what we believe the OpenMP task-based model allows us to do at the threading level within nodes. This is what we are integrating into the higher levels with MPI, by allowing easy and productive specification of asynchronous executions. And this is what can also be supported, with essentially the same task-based models, at the coarse grain of computational workflows, in Python for example, that are automatically parallelized and distributed over a large number of resources. That task-based approach also appears at the low level of the hierarchy: the long vectors that we support are nothing more than tasks executed, interpreted, and orchestrated by the hardware itself. And operating on 256 elements is something that decouples the front end from the back end of the processor, and gives a fair amount of relief to these front ends, which very often today, in many processors, really have to rush to keep feeding their back-end engines, the functional units. The idea here is to have more laxity, to do those things in fashions that are more tolerant to variability. So this is a generic feature that we think we support at all levels, from the hardware to the software.
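As a concrete illustration of this task-based, asynchronous style, here is a minimal sketch using plain OpenMP tasks in C. This is not EPI or BSC code (BSC's OmpSs model, which influenced OpenMP tasking, has its own syntax); the block array and the compute function are illustrative assumptions.

```c
/* Minimal sketch of task-based asynchrony with OpenMP task
 * dependences.  Compile with: cc -fopenmp tasks.c */
#include <stdio.h>
#include <omp.h>

#define N 4

static void compute(double *b) { *b = *b * 2.0 + 1.0; }

int main(void)
{
    double block[N] = {1.0, 2.0, 3.0, 4.0};
    double sum = 0.0;

    #pragma omp parallel
    #pragma omp single
    {
        for (int i = 0; i < N; i++) {
            /* Each task may run as soon as its input is ready:
             * order is expressed through dependences, not through
             * global fork/join barriers. */
            #pragma omp task depend(inout: block[i])
            compute(&block[i]);
        }
        for (int i = 0; i < N; i++) {
            /* Each reduction task waits only on the block it
             * consumes (plus the running sum), so independent
             * work overlaps asynchronously. */
            #pragma omp task depend(in: block[i]) depend(inout: sum)
            sum += block[i];
        }
        #pragma omp taskwait
        printf("sum = %f\n", sum);
    }
    return 0;
}
```

Ordering is carried entirely by the data dependences, so independent tasks can overlap instead of meeting at global barriers, which is exactly the asynchrony argued for above.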
The third very important property is that applications should be malleable: they should be able to adapt their concurrency, and the way they exploit that concurrency, to the available resources, in a world which is going to be very dynamic and in many cases very unpredictable, with huge variabilities. We need applications to be adaptable to those variabilities and to be able to share resources: maybe MPI processes sharing cores between them, lending resources when not needing them and taking them when needed. This is a kind of elastic way of computing at the fine granularity level. This is what we do, for example, with dynamic load balancing libraries on top of OpenMP. One thing that would be very nice is for the decisions taken by the runtime to be coordinated, and this is important, coordinated across levels, with the Linux kernel for example. It would be nice to have hinted scheduling mechanisms by which the information that the higher levels have is communicated to the runtime, for it to try to satisfy them if possible; of course the kernel will always be in control, but it probably might benefit from hints that can help improve the behavior of the application running on top of it. Even at the architectural level, in RISC-V of course, but also in Arm's SVE, we have this vector-length-agnostic characteristic, by which programs end up having a negotiation phase, a discussion phase, where the program requests from the architecture, "I would like to do this operation this wide," the architecture comes back saying, "sorry, you can only do it that wide," and the application adapts to that. So the vector length is requested from the architecture, the grant is notified to the application, and the application adapts. This is a malleability which I believe is going to be very important in the future: adapting all levels to the levels below them, and trying to coordinate the scheduling decisions that are made at each of those levels. I think this is going to be very, very important in the future.

And the fourth characteristic is the need to homogenize heterogeneity: the need to hide from the programmer the fact that the resources underneath might be of different characteristics, different performances, different ISAs, different functionalities, or have access to possibly non-coherent memories. The idea that we have here relies on OpenMP, but thinking of it in a particular way: not actually using the OpenMP accelerator directives, but only the target construct for offloading, maybe to a device, or from the memory of the Arm cores to the memory of the accelerator, and not changing the way you specify the computations from the outer levels to the inner levels, using the same kind of nested task parallelism. We do believe that this is going to provide a very, very homogeneous view of the architecture, which will be highly beneficial for the productivity of programmers. And in reality, even the vector-length-agnostic characteristic that I mentioned before is just another way of homogenizing heterogeneity: one can have heterogeneous, big.LITTLE kinds of architectures, with the source code staying the same, the source code being able to adapt.
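To make that fourth characteristic concrete, here is a minimal sketch of using only the OpenMP target construct for data movement, while the computation keeps the ordinary host-style parallelism. The array size and the doubling kernel are illustrative assumptions, not EPI code, and an offloading-capable compiler is assumed.

```c
/* Minimal sketch of "target only for data movement": the arrays
 * are mapped to the (possibly non-coherent) accelerator memory,
 * but the loop is expressed the same way it would be on the host.
 * Compile with an offloading-capable compiler: cc -fopenmp offload.c */
#include <stdio.h>

#define N 1024

int main(void)
{
    double a[N], b[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 0.0; }

    /* Move data to the device, run the same kind of parallel loop
     * we would write for the host cores. */
    #pragma omp target map(to: a[0:N]) map(from: b[0:N])
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        b[i] = 2.0 * a[i];

    printf("b[0]=%f b[%d]=%f\n", b[0], N - 1, b[N - 1]);
    return 0;
}
```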
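And the vector-length negotiation described above looks roughly like this in code. This is a sketch assuming a toolchain that implements the ratified RISC-V vector intrinsics in <riscv_vector.h>; the EPI compiler work targeted earlier drafts of the vector specification, so the exact intrinsic names may differ.

```c
/* Minimal sketch of the request/grant vector-length negotiation,
 * using the ratified RISC-V vector intrinsics (an assumption; EPI
 * used pre-ratification drafts).  Compile: cc -march=rv64gcv vadd.c */
#include <riscv_vector.h>

void vadd(double *c, const double *a, const double *b, size_t n)
{
    while (n > 0) {
        /* Request: "I would like to operate on n elements."
         * Grant: the hardware answers with vl <= n, whatever the
         * implementation supports (e.g. up to 256 on EPAC). */
        size_t vl = __riscv_vsetvl_e64m1(n);
        vfloat64m1_t va = __riscv_vle64_v_f64m1(a, vl);
        vfloat64m1_t vb = __riscv_vle64_v_f64m1(b, vl);
        __riscv_vse64_v_f64m1(c, __riscv_vfadd_vv_f64m1(va, vb, vl), vl);
        a += vl; b += vl; c += vl; n -= vl;
    }
}
```

The same source runs unchanged whether the hardware grants 4 elements per iteration or 256, which is exactly the malleable, homogenizing property being argued for.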
You could even think of dynamically shifting between big and little cores, so we believe this is going to open a lot of possibilities that will allow future architectures to be really tuned and optimized dynamically to the situation, with applications responding accordingly. What is our current situation today? We have the design of the EPAC accelerator tile, and we are going to do a tape-out at the beginning of next year, on 22 nanometers, to experiment on and do the first demonstrations of the idea on real silicon. We do have a compiler which handles intrinsics for the RISC-V vector ISA and also does automatic vectorization, so that this is the actual way that will be proposed to and used by programmers, and it will reduce a lot of the complexity and the cost of using vector instructions. What else do we have? We have an emulation and analysis platform which lets us take these compiler-generated vector codes, run them on an emulator, generate instruction-level traces of the behavior of the program, and analyze those instruction-level traces: directly as raw traces, or even after passing them through simulation environments, which introduce coarse timing factors in the traces depending on the architectural implementation characteristics. These things are going to let us do real co-design and try to improve the hardware design of the system. As I mentioned, we also have heterogeneous Arm plus RISC-V cores. How do we develop system software for that? We have FPGAs with Arm hard cores and RISC-V cores implemented in the FPGA logic, and today we are actually booting Linux on the Arm side and Linux on the RISC-V side, allowing MPI and OpenMP applications started on the Arm side to offload computation to the RISC-V side, and even the RISC-V side being able to reverse-offload. So this is a platform which is going to allow us to do the design of the operating system parts, and then we will make available boards that implement on FPGAs the full EPAC architecture, with a few cores only, of course, because of capacity limitations, but that will be good software development vehicles. How do I want to conclude? I would like to look at the history of the Sagrada Família, where public donations at some point in time allowed either raising four towers up to the top, or raising everything up to four or five meters. The choice was to raise only four towers up to the top, because that allowed people to see that there was something probably special about that building, and it brought the support and the interest to continue. We hope to do something similar with this initial design and test chip of the EPAC processor, and we hope to demonstrate the potential and real interest, for HPC as well as for other embedded applications, of long vectors, the RISC-V architecture, and open software and hardware. Thank you very much.

Thank you, Professor Labarta. Our next speaker, Sachiko Muto, is the chief executive officer of OpenForum Europe, a Brussels-based think tank which explains the merits of openness in computing to policymakers and communities across Europe.
In a world of ubiquitous open source, unstable geopolitics, and with an EU looking to regulate digital markets, there are new opportunities and risks for open source across Europe. Sachiko will share how open source can help solve Europe's strategic challenges. Please welcome Sachiko Muto.

I'm really pleased to have this opportunity to speak at the Open Source Summit Europe. Many of you are old friends of OpenForum Europe whom I have known for years, yet I believe this is the first time that I'm presenting at a Linux Foundation event, so I'm really happy to be here with you. It really represents a conscious strategic shift in our organization and the way we communicate as the leading think tank promoting openness in IT in Brussels. We've been talking to policymakers for over 15 years, trying to convince them to take open source seriously. But today I'm here to make the case to you, rather, that it's time for the open source community to take public policy more seriously, and to argue that the time to do so is now. But first, perhaps you'll allow me to indulge in a personal story to illustrate how far we've come with open source in Brussels. A couple of years ago, I had just started working with OpenForum Europe, and with some colleagues I had registered to participate in an event organized by the European Commission, on standardization, I believe. I was told informally that the organizers had asked for extra security because the open source people were attending and we were seen as dangerous. A first, and probably a last, for me personally. Anyway, fast forward to last week, and the Commission has just published a communication that outlines its new open source strategy. It's a document that not only recognizes the role of open source in underpinning most of the technology today, but also fully embraces it, and spells out the role of open source and openness in achieving the twin transformations towards a green and digital Europe. So I urge you all to read it, because it's really a bold document. But where does this leave us? The open source development model now runs the show. We have a European Commission that's committed to the importance of open source for Europe's digital future. Is the job done? Can we all go home? No. It's really now that open source is big, that it's everywhere, that the real political challenges emerge. It's when your code is fundamental to the technologies that build our societies that you have new responsibilities to help solve the large challenges that society faces, and policymakers are expecting you to step up. I prefer to put it like this, as a positive call for action, but actually change is coming whether you like it or not, and really whether you get engaged or not. Governments everywhere, and in Europe, are stepping in to regulate the tech space in the coming years, and they have reason to. Big pieces of legislation are being prepared in order to hold platforms responsible, to regulate AI and how data is handled, and even to create a new industrial strategy based on the digital transformation, and that's just to name a few. Really, all these policy efforts will impact open source in different ways. Open source may not be the direct target of regulation, but by being everywhere, by winning as open source has done, it will, inadvertently at least, get affected. So these efforts, where do they come from? Why is this happening now?
Well, I think that in large part they stem from increased geopolitical tension. There is a sense of urgency coming from the impression that Europe is not autonomous, not resilient, or, as some like to put it, not sovereign. And indeed, this concept of digital sovereignty is really being heard a lot in the tech policy discussions right now, and is seen as an impetus for action. It's not new. Europe's autonomy has been a political concern for decades, but the way it's currently defined in the context of EU policy, I think, was introduced by Emmanuel Macron in his Sorbonne speech in September 2017, and has since come to underpin a lot of the digital policy discussions that are going on right now. But if we go beyond the current buzzword, what does this digital sovereignty mean in practice, and why is this both a challenge and an opportunity for the open source community? I believe it comes down to a sense of control: control of our infrastructure, especially the parts that are considered critical to core functions of our society. And as we know, open source is there, running large parts of it. As for this sense of control, in my view there are two paths to achieving it. The first is one where we make our world smaller in order to feel that we are in charge; for example, developing European standards with only European stakeholders, or buying only from European-based companies. The second, however, is control through openness. Looking at it from a software perspective, this means having access to the code, being able to choose and change vendors, avoiding lock-in. And if we bring that concept to the geopolitical level, you can see that the same messages of control through openness have the potential to resonate well in an unstable world. If politicians choose the first path, however, and there is the risk of this, I am convinced that we might get a sense of control, not actual control, and innovation and knowledge exchange will suffer, and by extension, open source in Europe will suffer. So, looking at open source from a geopolitical perspective, I see that we can help solve this challenge of control without making our world smaller. So what are we talking about when we say that open source can help us get more control, and why is it so crucial now? It's worth repeating. Consider the old story of lock-in on our desktop environments. That was the big fight; that's when I first got involved in open source. Compare it with the challenge of lock-in now, when we are digitizing all aspects of society, be it in healthcare, agriculture, or smart cities. How can we digitize our societies, in a global world, at the technological forefront, without losing control? I think the open source model and open technologies have a major role to play here. Avoiding lock-in, the classic advocacy fight of open source, is now geopolitical. This challenge, I think, is of way greater magnitude than the earlier political fights open source advocates have engaged in. And I include myself there. So, where are we? Open source might have won in the marketplace, but in order to step up and meet today's challenges, we have to sort out some of our limiting beliefs. The first is an internal belief, I believe, among members of the open source community. And again, I include myself here.
There is still a notion of being the challengers, the underdogs, the disruptors. Well, open source is now everywhere, including in public services, underpinning government services. And with that come challenges and responsibilities. It means you are the establishment. When you are big, that's when you need to start talking to policymakers and to take policymaking and regulation seriously. And here, paradoxically, I think we are up against another limiting belief. Notwithstanding the excellent open source strategy of the European Commission, I would remind you that there are, I would say, three general groups among public officials when it comes to open source. The first is the open source believers; they are excited about the possibilities I outlined earlier. Then there is the old guard, if I can call it that, which is still strong, that looks at open source with suspicion, is concerned that it is not secure, and sees it as just a cheaper alternative. And finally, we have really the biggest group: those who never think about open source. It has never been on their radar, or maybe they have a general feeling that it has something to do with hackers in basements. Nothing wrong with that. So, paradoxically, I think we are everywhere, and experts know this, but the vast majority in the public sector are not aware. They see open source as something niche, at best, I would say. And it is important, I think, to note that not being thought of is sometimes a good thing. But when governments look to regulate the tech space, being unknown can be dangerous, or at least have serious consequences. I'll tell you another short story to illustrate this. The Commission published a communication on standards and patents. When we asked about, or rather complained about, the fact that there was no mention of royalty-free licensing, no mention of open source and how it's impacted, the answer from the European Commission was simply that not a single open source company had come to them and asked to have a conversation, or filled out the consultation document. Think about this: not a single stakeholder had come to talk to them about it. Not being part of the political discussion had implications then, and I think it will have serious implications in the future if we don't engage. But, you know, it's not all doom and gloom. I think today we actually have a real opportunity to be proactive when it comes to engaging with policymakers. Last week, as I mentioned, the European Commission published its latest open source strategy, a very ambitious document, stating that open source plays an important role in the digital autonomy of Europe, and saying also that open source can give Europe a chance to create and maintain its own independent digital approach, staying in control of its processes, its information, and its technology. So what does the Commission want with these efforts and these statements? I believe we were told openly what they want by Pearse O'Donohue, a director at DG CONNECT, who spoke at OFE's Open Source Policy Summit in February this year. The Commission challenges open source, as a community of stakeholders, to show them what open source can do for society and for Europe. They are reaching out to you, to us, the open source community, to engage. They have already taken the first step, and really, now it's time for us to level up. So, where to start?
Right now, the easiest way for you to engage in the policy conversation, I would say, is to take part in this European Commission study on the impact of open source software and hardware. It's actually being carried out by OpenForum Europe together with Fraunhofer. This study is meant to guide open source policy for the next 10 years. I believe it will go beyond policy statements and strategy documents and will impact funding, investments, and procurement. So it's really important. As part of this study, we are conducting a big survey of open source companies, projects, and organizations, with the aim of capturing the real face of open source as it is today within companies, from small one-person ventures to tech giants. We want to hear from all of you. So go to our website or the Commission's website and you'll find the links. Where to start if you run an open source business? There are many ways that you can get involved and engaged as a stakeholder. Just this week, a European open source business association is formally launching in Brussels. It's called APELL, the Association Professionnelle Européenne du Logiciel Libre. It aims to be the voice of open source businesses vis-à-vis the European institutions. I think it's a milestone, really, for open source to be represented as a stakeholder in Brussels. So, are you a supporting member of a national business association for open source? Are you encouraging them to engage at the European level? As I mentioned about the standards and patents communication a few years back, there was no open source company that had spoken to the European Commission, and that's how you get inadvertently affected by policy. Today, I think the situation has improved somewhat, but the level of resources of the European open source business associations is nowhere near where it should be when taking their real-world importance into account. Other industries put way more effort into European representation. You can tell yourself that you won, that it's the government's responsibility to understand the importance of open source, and just sit back and wait for them to call us for our opinion. But you can also take responsibility for the situation and start the conversation. Build that bridge. Remember, open source, the community that you're all part of, has the capacity, I think, to help solve some of Europe's large strategic challenges. And this is not unique to open source: it usually takes a crisis or an immediate threat for smaller industries to react to political risks or opportunities. The big ones are at the table already; the establishment is there. But open source is not small anymore. It can sometimes look small, or niche, in the eyes of the policymakers, so we have to step up, improve the communication, and educate policymakers. And that's why I'm now turning to you, here at the Open Source Summit Europe. Join us in our conversation with policymakers. Don't wait for an existential threat; show policymakers how openness can solve their problems. Thank you, and stay in touch.

Thank you, Sachiko. Our final keynote, Dr. Allan Friedman, is director of cybersecurity initiatives at the National Telecommunications and Information Administration in the US Department of Commerce, where he coordinates NTIA's multistakeholder processes on cybersecurity, focusing on addressing vulnerabilities in connected systems across the software world.
You know, open source is such a critical part of the world's global software supply chain, and today Dr. Allan Friedman is going to share the latest on supply chain security. Please welcome Dr. Allan Friedman.

Good afternoon. Hi, my name is Allan Friedman. I'm with NTIA, and I'm really honored to be here today to talk about something that's near and dear to my heart, and something that we're thrilled to have partnership with a lot of folks in the open source community on, which is the idea of a software bill of materials, or SBOM. In the next few minutes, I'm going to try to walk through why we need it, where we're trying to go as a community, and ideally, hopefully, convince some of you that this is something important enough that maybe you'll get involved yourself. So, let me just take a moment to awkwardly try to set up the screen sharing; sorry, it's not like we've been doing this for a few months. So, what is the path towards a more transparent software world? First, let's start with something very basic, like a car. Now, this is a particularly nice car, and it's got a lot of fun things about it, but something that computer security expert and co-founder of the Cyber Independent Testing Lab, Mudge, noted a few years ago is that this car, pretty new at the time, came with libtiff and netcat, among other things. Now, those of you who are in the security research world or the pen testing world may recognize some of these as very vulnerable libraries and common tools that are used to exploit systems. So the punchline that Mudge set us up for is: your brand new car essentially came pre-owned. There's something bigger about this than saying, hey, what's in a device, which is that when we do discover a new security risk or a new security flaw, very few organizations, whether they're organizations that help make software or that use software, can quickly and easily identify if they're potentially at risk, or putting their downstream users at risk. And that's really a big part of what we're trying to do here today: to create greater transparency, for security as well as license management and efficiency, to help folks better manage their software supply chain. So, what are we going to be talking about today? I'm going to briefly walk through something that hopefully you'll find relevant, which is the general role that transparency plays in the marketplace today, and talk a little bit about why we're not already doing this. I'm going to talk some about how we've built this broad, international, multi-sector initiative to pull together experts in software from all corners of the world. And then hopefully I'll be able to convince you a little bit to join this community. So, we're all kind of familiar with some of the things that are in the marketplace today to provide transparency. For example, we're all familiar with a list of ingredients. Suppose you wanted this tasty, non-biodegradable snack, the Twinkie. Well, if you go and buy it, you may look at the back and the ingredients list and find out that Twinkies are not, in fact, vegetarian. That's okay. Maybe you don't care, right? You just want something that's delicious. But all of us know someone who has a dietary restriction or food sensitivity. And the vision here is about empowering us, and those we care about, to make the right decisions. This is the famous risk management approach to security.
And that's really the vision here: to understand what's the software equivalent of having this kind of transparency into our software supply chain. Now the software equivalent, I'd argue, is this idea of a software bill of materials. It's a formal record, and forgive the wall of text here, but it's a formal record that says: these are the different components; this is what my supply chain is. The list of ingredients, or, you know, an understanding of what the relationship between those ingredients is. Now, the value of SBOM is pretty spectacular across all parts of the software ecosystem. For those of us who build software, and I think that's many of us, it's very hard to claim that we have a secure development process if we aren't tracking the pieces that are going into our software. There are a lot of other parts that we want in an SDLC or a secure process, and this is just one part of it, but it's a very critical part. Similarly, if you're thinking about supply chain, where your software comes from, making sure that you're using the best quality parts, you need the ability to look inside the supply chain of your software. Hey, is the software that I'm about to use in my own project or in my organization helping us or hurting us when it comes to risk? You similarly need to know not just what the piece of software is, but what the underlying dependencies are. And of course, vulnerability management is something that we all care about today, which is to say, hey, where are the vulnerabilities? It's one thing to say this large, well-used project has a vulnerability, but often the things that we care about are buried deeper down. So maybe now you're like, yeah, this is a great idea. Why aren't we already doing this? Well, to give a little bit of history, someone already tried. Over five years ago, there was an attempt in the United States Congress to create a law saying that everything the US government buys comes with a bill of materials. This was not met with the greatest of enthusiasm by certain corners of the software industry. In fact, they tried to kill it with fire. But it's important to acknowledge some of the reasons why they reacted so strongly against it. One of the challenges is that licensing is tricky. And until very recently, I don't know how many of even the biggest and most mature companies in the world could honestly say they had all their open source licenses in order. Today, I'd argue that's a much better understood risk. There are commercial off-the-shelf tools that handle it for you. There are phenomenal efforts like the OpenChain initiative at the Linux Foundation to help organizations better understand how to use licensing. Every startup now knows that if they're going for an exit or even trying to raise money, they need to have some demonstration that their open source licensing is in order. And also, it's hard. If this were easy, we would have done it already, and I'll walk through some of the engineering challenges we're dealing with as a community. I don't want to overstate how easily this can be done, in part because it requires collaboration across different kinds of organizations. This isn't something that a single organization can solve for itself. And a big part of it is that no one's asking for this information, so no one's providing it. No one's providing it, so no one's asking for it. And that is what we call a chicken-and-egg problem. I have a little bit of a background in both cryptography and economics.
And on the economics side, we like market failures because, one, they make the math a little more interesting, but also, it's kind of a chance for the government to get involved. There's a famous phrase in America: we're from the government, we're here to help. But I think this really is a chance where we can play a key role, not in regulating, but in doing something a little different. So traditionally, how does the government help with this kind of work? Well, there's the carrot or the stick. The stick is regulation: we're going to make people do this. The carrot is paying people to do it. Now, no one gave me a budget, so I can't pay anyone to do anything, and my corner of the US government doesn't have regulatory authority. In fact, we're big into helping markets and the community solve problems. And so that's really what we've tried to do here. NTIA has built this multistakeholder process. It's open and transparent, it's consensus-based, and really anyone can participate. We're lucky enough to have some folks from the open source community involved. We love you; get more involved. But let me just take a little bit of time to talk about what we're trying to do. The scale is quite ambitious. We're trying to address this as a cross-sector initiative. One of the things that we're worried about is that the world will develop solutions that are sector-specific: there'll be a healthcare-specific solution, there'll be an aerospace solution, there'll be an automotive solution. As you may know, we all use the same software. And so by fragmenting the ecosystem, we actually make ourselves worse off. Similarly, we need to capture the entire supply chain, starting from open source, going into commercial, with special attention to embedded software, which is often a lot more opaque. And of course, getting the end users, the end organizations, involved, to say what data they need to understand the software that they're using. So this is very broad. We're really trying to address the entire software ecosystem. That's ambitious. So I also want to underscore that we're trying to stay focused on as thin a slice as possible. We don't want to solve everything. So, very briefly, in these categories, here's what we're not doing. This is not about regulation. This is not a US-specific initiative; we're working with global organizations. This is not a formal standards development process, nor a requirement to disclose source code. And we're also not trying to solve the entire world of security or supply chain. There are a lot of other great initiatives out there; our vision here is that SBOM should be able to complement them and slot in or dock with them as needed. We've been at this for about two years and we've made a lot of progress. There's now, I think, a clear appreciation of the potential value of transparency. When we first started, people tried to stop us, sometimes from inside the same organizations as folks who were enthusiastic about it. That's one of the things I love about learning about different corners of the world. There's been broad consensus on the scope of the problem, on the need for a global approach, and on the fact that we should really be focusing on a baseline, a minimum viable SBOM. We shouldn't try to solve every problem in the world, but any solution that we come up with needs to be machine readable and needs to be scalable. So I'm going to flag this website a couple of times: ntia.gov/SBOM. It's a chance to learn more. And so far we've made some progress.
We've developed the basics of the what, the why, and the how, and now we're focused on a path to adoption. So, Allan, you've been talking about SBOM for a little while now; what the heck is an SBOM? An SBOM is very simply a dependency graph. In our toy example, we have the Acme Application, which depends on exactly two components, Bingo Buffer and Bob's Browser. Bob's Browser in turn has at least one dependency, which is Carol's Compression Engine. And we know that Carol's Compression Engine has no further dependencies. So that's it: just a directed acyclic graph. One of the things that's important to acknowledge is that we're trying to make the known unknowns explicit. In this case, for example, we know that Carol's Compression Engine doesn't have any further dependencies. We have no idea if Bingo Buffer is similarly the root of the tree, or leaf, depending on how you draw your trees, or whether it is the top of a very large, dark branch. But that's okay, because again, we're not trying to solve all of this. We're trying to make sure that when we don't have information, we make that lack of information explicit, so that folks who care more can learn more about it. So for each of these components, we don't need that much information. You know, who's the supplier, right, what organization did it come from, whether it's an open source project or a commercial project; what's the component name; what's the version, right, so that we know if it's something that still has a vulnerability or it has been fixed; and what's the hash, so that you and I both know that we're talking about the same piece of software, and I'm really confident that I didn't just download a backdoored version or a typosquatted version from GitHub. How many levels deep are we trying to go? Well, I touched on this a little bit. Ideally, we go all the way, right, we have the entire graph, but I also want to acknowledge that as we start doing this, that might be difficult for some actors, depending on their resources and the type of software they're using. The minimum SBOM must include all the top-level includes. They in turn should ask for their includes' SBOMs, and then hopefully we can use recursion to get further down the graph. Ideally, we make a best faith effort for all known components. As folks start to use it, it's going to be up to them to figure out how much to ask of their supplier; that's going to be a negotiation, and we'll talk about that in a little bit. But why should we do this? I've talked a little bit about some of the different use cases, and really the benefits depend a lot on the hat you wear, right? All of us wear different hats. Some of us make software, some of us select software, many of us operate software. And there's value for each of these, and I don't want to walk through all of them. But I do want to say that for many of these, it's kind of crazy that we don't have them today. And there's real value, I think, in demonstrating maturity. The last bullet under "operate software," driving independent mitigations, is also key. So here's a case study on remediation with and without SBOM. At the top, we see the different ways this plays out. We discover a flaw in January. Well, that project now has to develop a patch, test it, push it. The second supplier down the supply chain doesn't realize that there's an issue until a patch has been pushed, and so on down the food chain. Once we have an SBOM, it doesn't magically solve the patching problem, it doesn't magically solve the remediation problem.
But now it allows everyone, while they're waiting, to say: I know I might potentially be at risk; either I'll just wait, or I can take other remediation steps. I can change my code to make sure that even if this bug is executable, I can detect it or prevent it. There are a lot of other things that organizations can do as the final user: they can tune their intrusion detection system, they can work with a threat intel system, they can segment their network. The vision here is that there are lots of potential mitigations once you know about a potential risk. It also, I think, is a great chance to reward those organizations that are producing good software through good processes. If folks downstream now have visibility upstream into the software supply chain, they can work with their suppliers to say, yeah, we want you to use open source projects that have good governance, or that are using certain best practices, or that have Linux Foundation badges, or something like that. There are a lot of details and approaches out there, but by providing visibility, we can reward projects that have the stability and resiliency that we've come to know we need in the open source world. So that's the what and the why. How do we do this? Because the vision here is to make sure that we can actually automate this. The good news is that there's at least one standard; in fact, there's more than one standard. There are three: SPDX comes out of the Linux Foundation; SWID tags are an ISO standard; CycloneDX is relatively new, it comes out of the OWASP world and is built exclusively for this. These are great standards. The vision here isn't that we're going to pick a winner or a loser, but that we're going to emphasize cross-compatibility and interoperability. The community has decided that a multilingual ecosystem doesn't offer too many challenges, and so we're focusing on translation. So for example, for those core SBOM fields that I talked about, here's how you can implement them in each of these three standards. And today, we're working together to collect tools that are already doing this, to make it easier and cheaper for folks to adopt it wherever they sit. My last slide flags more that's happening, with the healthcare sector, with global companies, because the intent wasn't just to start building out the standards and talking about it; they wanted to show that it was possible today. And in the middle of the world's worst public health crisis, the reason why, sadly, I can't join everyone today, the healthcare world has been moving ahead, where large medical device manufacturers have been generating SBOM data, and that data in turn is being used by some of the biggest and best hospitals in America for predefined security and IT management use cases. So we're showing that this has value today. What are we working on now, what are the next steps for the community? First, we're refining and extending the model. There are some key challenges that I'm happy to talk about with anyone who's interested, including the challenge of naming software, how we share SBOM data, and the fact that not all vulnerabilities are actually exploitable. There are a lot of efforts that we're trying to manage here at the architectural level. We're focusing, as I mentioned, on what tools exist today and also on what's the gap, what we don't have. We're getting the message out.
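To make the toy example concrete, here is a minimal sketch of that dependency graph as data, in C, with the baseline fields attached to each node. The names and versions are the hypothetical examples from the talk, the hashes are placeholders, and a real SBOM would of course be expressed in SPDX, SWID, or CycloneDX rather than hand-written structs.

```c
/* Minimal sketch of the toy SBOM graph with the baseline fields
 * (supplier, name, version, hash) and explicit "known unknowns". */
#include <stdio.h>
#include <string.h>

struct component {
    const char *supplier, *name, *version, *hash;
    int deps[2];    /* indices into the table, -1 = none */
    int deps_known; /* 1 if the dependency list is complete */
};

static const struct component sbom[] = {
    /* 0 */ {"Acme",  "Acme Application",          "1.0", "sha256:aa..", {1, 2},   1},
    /* 1 */ {"Bingo", "Bingo Buffer",              "2.2", "sha256:bb..", {-1, -1}, 0},
    /* 2 */ {"Bob",   "Bob's Browser",             "3.1", "sha256:cc..", {3, -1},  1},
    /* 3 */ {"Carol", "Carol's Compression Engine","4.7", "sha256:dd..", {-1, -1}, 1},
};

/* Is a (possibly vulnerable) component reachable from root? */
static int includes(int root, const char *name)
{
    if (strcmp(sbom[root].name, name) == 0)
        return 1;
    for (int i = 0; i < 2; i++)
        if (sbom[root].deps[i] >= 0 && includes(sbom[root].deps[i], name))
            return 1;
    return 0;
}

int main(void)
{
    const char *cve_target = "Carol's Compression Engine";
    printf("Acme Application affected: %s\n",
           includes(0, cve_target) ? "yes" : "unknown/no");
    return 0;
}
```

The recursive walk is the "am I affected?" question from the remediation case study: given a vulnerable component name, any consumer holding the SBOM can answer it without waiting for patches to propagate down the supply chain.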
We're interested in further demonstrations, and we've got folks from around the world in the energy sector, the finance sector, and the automotive sector who just want to say, hey, how do we start doing this today? So if you're interested in any of those sectors, or your own corner of the software world is interested, I'd love to talk more. One challenge that I briefly mentioned is the idea that not all vulnerabilities are exploitable. So we're trying to make sure that we have a way of communicating that in parallel to the SBOM, again, to maximize efficiency and minimize the amount of time anyone has to wait. So, summing up: transparency is, as we say in the United States, part of this complete breakfast. It's not going to solve all of our problems, but we think it will help in a lot of different areas, hopefully in an area that you're concerned about. There's been a lot of work, across sectors, from participants around the world. This isn't just a US-based effort. And so we'd love your participation. Think about building SBOMs for your projects today. And if you want to get involved, we'd love to have you as a participant. There's an email address; find us on Twitter, where a lot of folks are using the hashtag #SBOM. Find out what documents are there and chime in and say, hey, this doesn't meet my needs, I would like you to help me with that. So with that, please feel free to reach out. Thank you so much for your time this afternoon, and I look forward to collaborating with you on this and a lot of other fun projects. Thanks so much. And of course the awkward finish. There we go. Who doesn't love the awkward end of a Zoom meeting?

Thanks a lot. Thank you, Dr. Friedman, and all of our keynote speakers for joining us. That's the end of our keynote talks for this year's event. We have a lot of great conference sessions remaining for you to enjoy today, as well as a ton of great content tomorrow, including KVM Forum and the Linux Security Summit, plus a bunch of summits for a number of LF projects, including Open Mainframe, LF AI, Dronecode, FINOS, and more. Three highly recommended special sessions are the technical resume writing workshop, the "Cracking the Conversation Code" allyship workshop, and a mentoring session on writing change logs that make sense. Please enjoy the remainder of the event. Please stay safe. And we truly, sincerely hope to see all of you in person next year in Dublin. Thank you all. Have a great rest of the show.