We hope you had a great day yesterday, and thank you so much, everyone, for joining us today. Before we get started, I would like to make a few announcements related to the Linux Foundation's training efforts. First of all, today we are announcing that LPI Japan and the Linux Foundation have decided to team up to distribute Linux Foundation certifications in the Japanese market. We are starting with CKA and CKAD, the certifications for Kubernetes. These are available now from LPI Japan, who will also be combining them with others in their catalog to provide the opportunity for our folks to receive new stack certifications. In addition, we are launching a new Japanese-language Yocto Project training course in partnership with Linear. This is particularly valuable for the automotive sector, as Yocto is used by AGL's unified codebase, which we call UCB. In addition to Yocto, we already offer Kubernetes and blockchain training courses in Japanese, and we are also finalizing the Japanese Linux training course, which will probably be available this month. We hope many of you will take advantage of these Japanese-language training courses.

Now we have an exciting keynote lineup today, beginning with our first speaker, Arpit Joshipura. Arpit is the general manager of networking, IoT, and edge here at the Linux Foundation. He was voted one of the top five movers and shakers in the telecom industry and is an industry thought leader helping to drive major disruption in 5G, edge, AI, and cloud native. Today he joins us to discuss the state of open source networking and edge. Please welcome Arpit Joshipura.

Thank you, and thank you for having me here at the Open Source Summit, virtually. I'm going to talk about the open source networking and edge landscape and ecosystem. Some great stuff is going on here, and I'm sure you're all excited to hear about it. Let's go right in.
The three things I want to talk about today, in a very short time, are these. First, networking and edge, especially open networking, edge, and IoT, are extremely critical in the new normal we are in, across a whole set of vertical industries, and I'm going to tell you why and show you how. The second most important thing is that code is one thing, but open compliance, standards, harmonization, and above all use-case-driven approaches are gaining a lot of traction; that is the deployment focus in the area of open networking. And then, as we all know, edge is the new cloud, and we are really excited to host some of the most fascinating projects here at the Linux Foundation. So let's dive right in. Open industry is a new term that lots of you may not even know about; I call it open sourceification. The Linux Foundation published a white paper, which you should download, that goes through the transformation of vertical industries: automotive, whose event is co-located with this one and why we are here today; motion pictures; fintech; public health; energy; and, most importantly, telecom, which is at the forefront of this open sourceification. It's a huge deal for most people around the world that industries are going open. And in that openness, the two big things that are critical are the network and the edge; technology-wise, that really means 5G and edge. It's the new normal. As you see in the Forbes article, as businesses and governments establish the new normal, technologies like 5G and edge computing will be extremely important to deliver automation, performance, and insights for all these industries: manufacturing, healthcare, energy, utilities. So telecom is the critical plumbing layer, and we are excited to host this at the Linux Foundation.
The one thing that has changed in the mindset is that instead of the cost savings that were originally the driver, the reasons operators and end users are choosing open source are really market creation, adoption acceleration, and collaboration. So it's really a new mindset to come and play at the table with open source. Let's go into the second part of my presentation: why compliance, standards, harmonization of open source, and a use-case-driven approach are extremely important. If you remember, a couple of years ago, or maybe last year, I was presenting at OSS and we had this slide out, and we always focused on projects: we are hosting this project and that one; here is one project in Automotive Grade Linux, here is one in LF Networking, and so on. But there is a whole cycle below that, which is creating products from these projects and then taking them to market. So we are extremely focused, beyond code, on open compliance, open verification, open interoperability, open testing, and open training and certification. Some of these activities go well beyond code. Take LF Networking's Anuket project or the LF Edge Akraino blueprints: they focus on speeding projects into production. That's why it is extremely important to focus on the whole life cycle of a project. And you can't do it all by yourself; you have to do it with standards. We have harmonized our implementations with the standards bodies we collaborate with, so there are no two separate implementations: the code reflects the standards, and the standards reflect the code. I'm very excited about that, specifically as we host the O-RAN Alliance software at the Linux Foundation; that's a big upcoming initiative we are collaborating with the O-RAN Alliance on.
If you put it all together, you now have a full architecture from the enterprise to the user edge to the service provider edge into the core, and a stack that goes from cloud into infrastructure, control, and then management and applications, with all the applications, virtual network functions, and cloud native network functions running. I want to be very clear that these projects are built for use cases. The use cases could be network slicing, quality-of-experience optimization, nomadic broadband, onboarding, or multi-tenancy; I won't go into them all. But it is extremely important to look at open source from an end-to-end perspective and from a use-case perspective. So with that plumbing layer of networking and telecommunications in place, what's the next battleground? The next battleground is edge, and I'm saying edge is the next cloud. In fact, the edge market is sized at four times that of cloud computing over the next five years, so we're really excited to host this. If you look at all the analysts, they're starting to gauge which applications and use cases are going to be most important. For those of you who are not familiar with edge computing, which I would find very hard to imagine, it is basically bringing compute and storage as close to the application as possible, so that latency-sensitive applications run as if the cloud were coming right to your application. That means new vertical industries, new applications, new revenue, new use cases. This slide is basically a setup that goes from low bandwidth to high bandwidth and from low latency to high latency, with a set of applications that are going to proliferate at the edge. One thing we have done, and I would strongly encourage you to download this white paper, is define what edge is. In the past year, a lot of organizations and people have used terms like thin edge, thick edge, far edge, near edge.
They're all loose and relative terms. This terminology work was done by an open terminology project called State of the Edge, which we host at LF Edge. Effectively, there are two types of edges: a user edge and a service provider edge. The user edge is dedicated and self-operated; the service provider edge is shared. The last mile separates them; it's not a hard cut. If you look inside the user edge, you have extremely constrained devices, smart devices, and the on-prem data center edge. And if you look at the service provider edge, you have the access edge. So clearly we have not one but many different types of edges, and we have projects to address solutions and use cases for these edges. These solutions come to us in the form of a blueprint. A blueprint is essentially an end-to-end implementation of a use case. I'll give you some examples and talk about one in particular that is relevant to this conference. These blueprints range from MicroMEC on the left to cloud gaming, to a telco radio edge appliance, to connected vehicles, which I'll talk about. So please download these blueprints and use them; they are open solutions that accelerate deployment. Then if you map these blueprints to the use cases the projects under LF Edge are working on, starting from the left: you have anomaly detection, surveillance, and everything you need in a home, or the equivalent in an enterprise or a factory, with the Edge Virtualization Engine. Or take Fledge, a project that is all about industrial IoT: predictive maintenance, turbines, transformers, pumps, et cetera. Or Akraino, where you have all these connected vehicles, augmented-reality classrooms, and blueprints for the telco world. As well as EdgeX Foundry, one of the more mature Stage 3 projects, which goes into building automation, retail, industrial process control, et cetera. So we're very excited that use-case-driven approaches are proliferating.
Now let's look at one of the deeper examples, which is relevant because the connected vehicle blueprint is part of this conference. You have the architecture on the left, but this blueprint takes advantage of edge compute to improve location accuracy, enable smarter navigation, improve safety, and reduce violations; there is a huge amount of traction on this. I won't go into the details, but think about it: with a cloud close by, your vehicle can go miles, literally, without missing a beat. So to wrap up, I want to emphasize that the network is even more important in the new world, and end users have a vast array of options they can take advantage of. Open source collaboration goes well beyond software to build what I just talked about and to handle all the things that come our way; it really means faster security and faster time to deployment. And finally, edge is the new cloud. Again, four times the size. Don't miss out, especially because technologies like 5G, cloud, AI, and IoT all come together, and applications like connected cars are the next big thing, along with industrial manufacturing and retail. With that, I would like to wrap up. Thank you very much for the opportunity to speak here at the Open Source Summit in Japan.

Thanks, Arpit. Our next speaker is Masafumi Ota, Senior Business Strategist for SUSE Japan. Ota-san's career has focused on a variety of technologies, from Solaris to OpenStack to edge automotive systems and machine learning. He has been an active contributor to the open source community for many years and was the founder of the Japanese Raspberry Pi user group back in 2012. Today, he will discuss how SUSE is powering the automotive digital future. With that, please welcome Masafumi Ota.

Thank you for coming to my session. I will present how SUSE is engaging with the automotive and connected-car markets. Let me introduce you to SUSE Japan.
Let me start with Tessihara-san, the president of SUSE Japan, and the business we are enjoying here in Japan. SUSE's vision for our business rests on three words: simplify, modernize, accelerate. We acquired Rancher this year and started working together this month. And you may not know that twelve of the top fifteen automotive companies are using SUSE. So let's talk about SUSE Linux for the automotive market, and Linux for vehicles generally. As you know, the connected car is now like a smart device; it's like a smart home on wheels. Automotive development has to react quickly: agile, secure deployment, fast iteration, and fast prototyping for the connected car. Linux is very useful here and is constantly improving for automotive. We have collaborated with Elektrobit on the delivery and distribution of an automotive Linux distribution. So let's talk about advances in SUSE and automotive Linux. This automotive Linux is distributed by Elektrobit, and it is an automotive-grade platform based on SUSE: a continuous Linux for automotive use, with reliability and stability, maintenance and patch management, and safety and security upgrades. It is a secure Linux for the whole automotive solution. SUSE is also continuing work on SUSE MicroOS, a lightweight OS, and, as you know, K3s, the lightweight Kubernetes distribution from Rancher, is useful for automotive. And we know we still need traditional technology for automotive: the real-time kernel is a very important element in the automotive market. There is also a life-cycle advantage for automotive with SUSE, with over 30 years of support.
So, automotive inside SUSE: this is the visual approach in the car. You can see the central compute cluster, which is based on containers and virtual machines that SUSE deploys to automotive to improve things for the connected car. It is very easy to use and easy to handle: containers are very easy to move between architectures, and VMs are also easy to move between architectures, using containers with Rancher and Kubernetes. And, as you know, the ECUs work with the sensors, and that runs on Linux, so we can check the status of the car. In conclusion: today I talked about how twelve of the top fifteen automotive companies are using SUSE, and they are the drivers of the next distribution for automotive. With long-term support of over 30 years, and with K3s, we support your automotive business. Thank you very much.

Thank you. Our next speaker is Shuli Goodman, founder and executive director of LF Energy. LF Energy's ambition is to accelerate the energy transition and the decarbonization of the world's economies. Shuli has nearly three decades of experience in startups and in the ongoing support of governance and multi-stakeholder engagement bodies convened to enable decision making and provide steering capacity for high-visibility and high-risk initiatives. Today, she will share the latest on LF Energy and how you can get involved. Please welcome Shuli Goodman.

Hello and good morning. It's an honor to share this time with you.
I want to speak to you both personally and technically about the most important and existential challenge facing humanity. Every single person in this audience has a part to play in enabling the decarbonization of our economies. For the next 20 years, I propose that every decision you make, professionally and personally, is either going to bring us closer to or farther away from climate collapse. But I want to start with something that it is really my delight to announce from the main stage at Open Source Summit Japan: Sony Computer Science Laboratories has joined the growing and dynamic LF Energy community. With their membership comes a contribution to the world of a fully functional, field-tested, open source DC microgrid. Microgrids network energy. Think of them a bit like the fractal nodes that compose the energy networks of the future. Microgrids, while significantly more complex, are to energy a bit what the Apache web server was to the internet. Like the Apache web server, which enabled network scaling, it is my hope that this contribution will launch a revolution. It is a call to action to hardware manufacturers that we must up our game from a few thousand microgrids a year to tens of thousands of microgrids a month. We must shift the economics of microgrids from being precious to being ubiquitous. And for this reason, Japan is central and critical to the energy revolution. We've not had time to name the software nor design the logo, but in the world of good enough, we begin the journey today with this announcement. So please join us in about an hour, at 10:45 a.m., for the LF Energy mini summit, where Dr. Hiroaki Kitano, president and CEO of Sony Computer Science Laboratories, will deliver a presentation on the microgrid project and take questions. At the summit we will also show videos from five other LF Energy projects. The event is free to attend. Please check your schedules.
Okay, so 2020 has been quite a year, a year that has brought many of us to our knees. As challenging as it has been, I believe this is the year we really began our reimagining. This is the year the doors began to close on a chapter of human experience and new ones began to open, filled with immense possibility and hope. Nearly 150 years ago, with the advent of fossil fuel and the internal combustion engine, we entered a period of enormous growth that brings us to today, when the externalities and pollution from fossil fuels threaten life on earth. A pandemic is one face of this crushing reality. So 2020 is the year we collectively agreed it is time for us to find a way forward. Now we are nearing the end of that year. So as we turn towards 2021, I ask us all to use our grief, our fear, our hope, and our love to help propel us to make a credible leap from the comfort of our past into the future that awaits us. This is for future generations. While we are technologists and business people, first we are neighbors sharing a planet. So in a personal sense, to share with you where I'm coming from: this is the place that I love, that I call home. It is the view from my neighbor's house looking north. The smoke that you see is from two different fires, one less than five miles away, another about a dozen miles. We were surrounded by fire for three months. We lived in fear with bags packed, cars pointed out, prepared to leave everything. Along with COVID, many of us have also awakened to the realization that, as Terry Tempest Williams wrote, we have been living a myth. We have constructed a dream. We have cajoled and seduced ourselves into believing we are at the center of all things. This is a lethal lie that will be seen by future generations as a grave moral sin committed and buried in the name of ignorance and arrogance. So this is the challenge, then.
Do we continue to use business cases that solely privilege profit over taking leadership for the direction of our planet? I would say it's actually not an either-or: we have to do both. We have to make the transition to decarbonization economically viable, and we have to take responsibility for the direction of our planet, quickly. LF Energy and the Linux Foundation are a bet on the future, a future that requires collaboration, cooperation, and the ability to abstract the complexity of digital communication to orchestrate the supply and demand of energy through open source. We cannot act like pandemics and extreme climate events are normal and continue with business as usual. We have to name the time we find ourselves in, and I trust that in doing so, we will find our way through. LF Energy gives me hope, and I want to share that hope with you. The DNA of LF Energy is the Linux Foundation. It's all of you. I often carry this seed around. It is the seed of one of the largest and fastest-growing sequoia trees on the planet. When you look at a seed, you don't really know that it is going to become this great thing. Well, that's how it is with LF Energy. We have the DNA of the Linux Foundation. It's the kernel. It's LF Networking. It's Automotive Grade Linux. It's LF AI. It's Hyperledger, Node.js, CNCF, RISC-V. All of you have cleared a path. You have left breadcrumbs for us to follow as we must rapidly decarbonize our economies. While we are a young foundation in an old industry, we carry a mighty history. The grid of the future is composed of all of you, every single one of you. But I'll be honest: the energy sector is a modern-day version of a log jam. We actually don't have many log jams on the planet anymore, because we've harvested so much of our old timber. But what are the properties of a log jam? Stuck, slow, barely moving. Yet we must learn to move at the speed of technology.
So the thing about a log jam is that you cannot delicately pick it apart. The way you clear a log jam is with dynamite. And while that may seem scary, the reason I am talking about dynamite is that I am speaking to you all today as technology and hyperscaling businesses. You are the dynamite. You are the thing that is going to break this log jam. Many of you have created net-zero or carbon-negative business goals, and I thank you. I predict that the more you internalize the goal of decarbonization, the more you will become the de facto energy companies of the future. So if utilities don't take it upon themselves to blow themselves up, they will be blown up by your efforts. And I invite you to do that, not because I want to destroy anything, but because I want to get the river moving. A distributed energy future is not just about PVs (photovoltaics) and batteries; it's about internalizing energy, the production, the use, and the demand, and making energy central to the formula for all your products and services. I want to talk with you not just about what LF Energy is, but about what we can become if we all work together, if you join us, so that LF Energy and the Linux Foundation become ground zero for the digital foundations for decarbonizing the planet and our economies. Energy moving at the speed of technology is a call to action. At LF Energy, we are reimagining power and energy at all levels. We are building the digital foundations for the grid of the future. We were founded by transmission and distribution system operators in Europe. Together we are using open source and digitalization to accelerate the energy transition. Decarbonization is our goal, to restore climate balance and mitigate the worst of climate change. In many ways we're on a great adventure in which these immense challenges are the mountains we will scale to create a sustainable future.
To give you a sense of the potential for innovation: we have not even begun, as a species, as a humanity, to tap into the possibilities. Every hour the sun provides enough power for us to electrify the planet for a year. Think about that. There is so much opportunity to design a future aligned with the natural laws of the universe. So much will change in the next 20 years, and it is your imagination and creativity that will unleash it. Yet today, to power our devices, warm our homes, cool our food, use our consumer electronics, and drive our cars, we are spewing global-warming pollution into the environment. That is the grand challenge we must address in the next five to ten years; we don't have longer than that if we don't make a huge leap forward. And you are quite literally the people who must make these changes. It is your products, your services, your vision that are going to rapidly decarbonize power and transportation. Some of you may be asking: but why put an energy ecosystem in the Linux Foundation? My background is not as a power systems engineer but in what was enterprise information management and governance; I went on to a PhD, and what I studied was innovation, focusing on adoption and diffusion. Many of you have probably seen this typical adoption curve, gentle and predictable. Most innovations, be they beliefs, practices, or technologies, follow this curve. This is the diffusion curve we face in the next 30 years to avert thousands of years of climate collapse. The longer we wait, the steeper the curve. We have not a minute to lose. While steep, it has been done before. If you look at the diffusion curves for color TVs or smartphones or social media or the internet or the cloud, they all happened very quickly, in a really short period of time. So I have hope that placing LF Energy in the Linux Foundation gives us superpowers to navigate this. And we need your superpowers. So these are our marching orders.
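The claim above, that one hour of sunlight could power the planet for a year, holds up to a rough back-of-envelope check. A minimal sketch, using approximate figures that are assumptions of this illustration rather than numbers from the talk (roughly 173,000 TW of solar power intercepted by Earth, roughly 580 EJ of annual global primary energy consumption):

```python
# Back-of-envelope check: does one hour of sunlight rival a year of energy use?
# Assumed order-of-magnitude figures (not from the talk):
#   solar power intercepted by Earth: ~1.73e17 W (~173,000 TW)
#   global primary energy consumption: ~5.8e20 J/year (~580 EJ)

solar_power_w = 1.73e17      # watts of sunlight reaching Earth
annual_energy_j = 5.8e20     # joules consumed worldwide per year

# Joules of sunlight delivered to Earth in a single hour
energy_per_hour_j = solar_power_w * 3600

ratio = energy_per_hour_j / annual_energy_j
print(f"One hour of sunlight is about {ratio:.1f}x annual global energy use")
```

With these figures the ratio comes out close to 1, so the "one hour equals one year" framing is the right order of magnitude, even if the exact numbers shift with the estimates used.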
Remove carbon from our power and transportation systems to provide 75% of the needed reduction in CO2 emissions. And this is what we need: mass collaboration. We need to work with Automotive Grade Linux and the automobile manufacturers because we, like you, have to be able to provide the energy that is going to power vehicles. Vehicle-to-grid integration is going to become one of the greatest technical feats we accomplish in the next 10 to 20 years. So we need mass collaboration, like a starling murmuration. We need to design the grid of the future in relationship to the natural systems of the planet. And for that, we need the Linux Foundation, and we need you. We are more than just the most important open source platform in the world; I propose that we are ground zero for the digital foundations that will power the decarbonization of our economies. Like COVID, we cannot have a patchwork approach. We must work together. So I want to step back a little bit. This is basically what the old system looked like: you would generate, transmit over high voltage over long distances, and distribute. Then you would turn a switch on and, boom, the lights came on. The key principle that balanced supply and demand was inertia. It was rigid, it was centralized, and it was extremely reliable. The power sector has historically looked at investment in 50-to-80-year windows. It has been historically proprietary and not an easy system to change. But here we have the new grid. It's composed of variable energy from PV, wind, batteries, and automobiles, and it's massively distributed, with multidirectional communication capabilities. This is a problem we know how to solve. This is a problem you know how to solve, because we're talking about connected assets, connected homes, connected buildings, connected vehicles, connected infrastructure, and connected markets. And that is what the Linux Foundation does really, really well.
So the grid of the future is composed of loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, the digitalization of energy enables engineers and markets to make high-impact changes frequently and predictably with minimal toil. Does this sound familiar? Well, the engineers who helped drive the architecture of LF Energy took this from the CNCF. And why is it similar? Because distributed energy requires distributed computing. It's not enough to decarbonize Denmark, or France, or the United States, or even Japan; we have to decarbonize the world and our economies. And this is why Japan is so important: because you know how to take ideas, commercialize them, and spread them globally. So what I'm going to show you right now, in just a couple of minutes, is the functional architecture of LF Energy. I'm going to go through this pretty quickly, because the deeper dive is not tomorrow; it's an hour from now at the mini summit. The grid is the largest machine on the planet, so there's no way to create a singular top-down architecture. But what we can do is build a taxonomy that we work with, manage, and build consensus and mental models around, one that allows us to build the grid of the future. So these are the components. These are the capabilities, whether for a microgrid or the macrogrid. Then you have these buckets that drop down: the big buckets of functional requirements that network operators, again whether it's a mini-grid or a macrogrid, have to compose their grids with. And dropping down to the next level, you have the actual functional articulation of the grid of the future. This is incredibly important in terms of visualizing and realizing the components we need to build the grid of the future.
Now, I often say to people, in jest, that there are no thousand engineers in a basement somewhere building the grid of the future, about to throw it over the wall and save us all from ourselves. In fact, what you're looking at is really a microservices architecture that privileges fast, quick iteration and focuses on the interoperability between the parts. These are the projects that Linux Foundation Energy has. They run all the way from behind the meter and the distribution level to asset management, edge node control, and shared services. And we will get presentations for many of these: from OpenEEmeter, OperatorFabric, the Sony DC microgrid, and, I believe, Grid eXchange Fabric and PowSyBl. Those are the ones we're going to get presentations from in a few minutes. The other thing we've done as the projects have come on board is build frameworks. The functional architecture was the first framework we worked on; we're now working on a data architecture, and on infrastructure and security. These cross-project frameworks were officially integrated into our TAC in the fall of 2020, and we are using them to speed the diffusion of technologies. These are our members today: we have 34 members and growing. I actually think it's 35 as of today. So we are becoming a very healthy community of network operators, technology companies, vendors, and suppliers. It's been a fantastic year, despite, or maybe because of, the challenges. We're creating a movement; that's what I often say to folks. And we do that by laying the foundations and creating the conditions for the entire world to collaborate at scale to transform energy and power, and to transform transportation.
We're not in that swim lane, but vehicles are going to have to integrate with the grid, and the grid is going to have to be a green grid in order to power those vehicles and decarbonize. So we need to work together, in the same way that we need to work with LF Networking, because 5G is going to be critical. Everything we're doing, we're doing in community, and we are slowly but surely building those connections as we become more solid as a foundation. We are building relationships with the other foundations at the Linux Foundation. We have seven new projects, for a total of 13. Actually, today we got two more projects that I hope are going to come in this year: one of them is a digital bill of materials for supply chains, and another is a multi-protocol gateway built on Fledge, which is out of LF Edge. These collaborations are tremendous and important. We have probably about 30 or 40 meetings a month, and they're all open. If you go to our website and then to Community, you'll get access to the wiki. On the wiki, you can see that we're just beginning. In some ways, I want you all to think of this as visiting a frontier village, where we're just beginning to create the outlines of a future community that is going to hold the decarbonization of energy and power systems for the planet. We have many webinars a month, we have a new newsletter, and we have had lots of really great media coverage over the course of this year. So I want to bring us back to what we're going to be doing in a very short period of time: a mini summit. And I want you to join us. I tried to put the links in there, but if you go to the schedule, you'll see it; if you go to the OSS Japan website, you can see it. Please sign up. It's free and available to anyone.
I will continue to do more of a deep dive into the functional architecture, because I really do believe that it is the foundation for what the grid of the future is going to look like. Then Sony is going to give a talk, and then we're going to have five videos so that you get some sense of the projects at LF Energy. So, you know, in closing, I talk about the log jam, but I think we all need to remember what a free and moving river looks like. This is what we need to do. We need to break up the log jam and get the water moving. Let's do this together. We are the power of together. Join us, please. There's so much to do. Thank you. Be safe. This is how to reach out to me, and also reach out to Mike Dolan. We are there for you. Thank you, Julie. Finally, our last keynote speaker today is Jon Corbet, co-founder and executive editor of LWN.net. Jon is also a Linux developer and maintainer of the kernel's documentation subsystem as well as several device drivers. He joins us to share the Linux kernel report. Please welcome Jon Corbet. Hello, everybody. Thank you all for tuning in. My name is Jon. I'm here to talk about the kernel. I really wish I could be in Japan talking to you all about this, but that's not to be this year. And of course, the reason behind that, something you may have noticed, is that we are in fact in the middle of a pandemic. So if I'm going to talk about the state of the kernel, I should certainly be talking about how the pandemic has affected our community and how we are responding to it so far. There are a couple of aspects to this that can be looked at here. One, of course, as we're seeing right now, has to do with conferences. Many of our important events this year, including the Storage, Filesystem, and Memory-Management Summit and the Maintainers Summit, have been canceled outright as a result of the pandemic. Many others have gone online, some of which have done better than others.
But even when it comes to online events, we have lost an opportunity. We've lost the ability we had to get together, to meet people that we wouldn't have met otherwise, and to get to know them a bit. And this is important, because we are a worldwide development community that is connected almost exclusively electronically. We don't often get to see each other, and that can make it very hard for us to work as closely together as we need to. So the opportunity to occasionally meet in person, perhaps share a beer, and get to understand where our co-workers are coming from is really important. The loss of this is going to hurt, and if we continue to be unable to get together in person, it is going to hurt us more in the long term. Hopefully this is not going to be a long-term thing, but it is something to be aware of. That's only one aspect of it. The other question one would ask, of course, is: what about the code? Because that is what we are here for, to develop the Linux operating system. So I'll put up this chart, the usual sort of chart I put up in talks like this, showing the kernel release history over the course of the last year. And as you can see, we have continued to put out kernels about every nine or ten weeks, just as we have for many years. These continue to be very busy releases. In fact, one of them, 5.8, was the busiest kernel release we have ever made, with over 16,000 changesets in it. And these kernels all involve the participation of 1,800 or 1,900 developers or so. So we seem to be going about business as usual in that regard. Another way to look at this is to look at the history of the patches themselves. These two plots indicate when kernel patches that landed in the mainline were first posted, which is the top line, the blue plot, or committed to a repository, the bottom one. These plots go, as you can see, from the beginning of 2020, before the pandemic, through the present.
And it's really hard to see the impact of the pandemic at all in any of this. So once again, it seems that we're doing okay; one could say so far, so good. If anything, we've gotten a little bit more productive during this time. One might argue that the pandemic, for all the horrible things it has brought, has given us the opportunity to do what many of us wanted to do all along anyway, which is to hide away from people and just work on the code. Be that as it may, we're doing okay so far in this regard, and that's only a good thing. Moving on, though: I put up this table of kernel releases. These, of course, are mainline kernel releases, and very few of us actually run mainline kernels on our devices. So one might ask: what kernel do we actually run? I've talked in the past about stable kernel updates and all that sort of stuff, and that is all relevant. But if you look at what's actually running on more devices than anything else, that is, of course, the Android kernel, because Android systems do in fact run on a Linux kernel. The history of relations between Android and the kernel community has been a little bit mixed. There have been some tensions at times, especially in the early days; things are a lot better now, and they're getting better yet. That's what I wanted to talk about in particular here, because there's a development worth keeping an eye on, called the Android generic kernel image. This is a mainline kernel, actually an almost-mainline kernel; there is a small number of patches still applied. It is built by the Android Open Source Project. This kernel is based on a current stable release and tracks those releases going forward, so it's a current kernel. And the interesting aspect of this is that as of the Android 12 release, use of the generic kernel image will be mandatory for any device that wants to call itself an Android device.
Any code supplied by vendors can only be provided as loadable kernel modules. And this is important; this is a huge change. If you look at the history of vendor-supplied kernels on Android devices, you see an awful lot of out-of-tree code applied to them, often millions of lines of out-of-tree code. And this code has often done things like replace core components such as the CPU scheduler. Really kind of ugly in that regard. So what's going to happen now is that you can't really do much of this anymore, because the module interface for the kernel just does not allow the replacement of the CPU scheduler or many other changes to the core kernel. So it's going to very much restrict what Android device suppliers and vendors can do to the kernel, and bring us much closer to the mainline in that regard. Since the generic kernel image is based on stable updates and tracks them, Android devices themselves will get the latest fixes and the latest security updates, at least for as long as they are getting updates in general, which is a separate problem. But with regard to the updates they do get, they will be updated to current kernels, and that is a good thing. Finally, this policy gives vendors a strong incentive to upstream their code. For a lot of the things they might do to the core kernel, the only way to get the functionality they need is to get it into the mainline, since they can no longer apply it themselves after the fact. But even for things like device drivers that can be supplied as loadable modules, the life of these vendors will be much easier if they can get their code into the mainline, because this generic kernel image will move forward over time, and if they don't want to have to keep forward-porting their code, they will be much happier to have it in the mainline and have a lot of that work done for them. So the end result of all this is that it's getting closer to possible to run generic mainline kernels on Android devices.
Someday we'll actually get there, for at least some devices. This particular problem is mostly behind us now, and that's a great thing. It is a testament to the hard work of both the people in the kernel community and especially the people in the Android ecosystem, who worked very hard to solve this problem and to bring these two communities together. So that's one bit of interesting legacy behind us. I wanted to talk about a couple of other legacy problems while I was on the topic, and perhaps even get a little bit more technical than I often do while I'm at it, because I think again there are some interesting insights to be seen here. The first of those regards an internal kernel function called set_fs(). This is something that users of the kernel never see, but it is something that kernel developers have occasionally had to work with over time. So what is set_fs()? If you look at the way the virtual address space is laid out in Linux systems, you see something like this. This is a typical 64-bit address space for a pretty normal configuration on the x86-64 architecture. You see that this address space is split between kernel space and user space, with a boundary between them. set_fs() is a function that can change the location of that boundary. Now, the boundary exists in particular as part of the kernel's security mechanism to prevent system calls from acting on memory in kernel space. So if you call read(), for example, to read the contents of a file into a memory buffer, the kernel will not allow you to read into a buffer that lives in kernel space, because that's beyond this boundary, and that's just not something we want to allow users to do in general. But what happens if there's a case where you need to be able to do this? There are cases, for example, where the kernel itself needs to read a file into kernel-space memory.
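To make the mechanism concrete, the historical in-kernel pattern looked roughly like the sketch below. This is kernel-internal code, not a runnable userspace program; set_fs(), KERNEL_DS, and vfs_read() are real kernel names, but the surrounding details are simplified for illustration:

```c
/* Historical pattern (gone as of 5.10 on common architectures). */
mm_segment_t old_fs = get_fs();          /* remember the current boundary  */
set_fs(KERNEL_DS);                       /* widen it to cover kernel space */
ret = vfs_read(file, buf, count, &pos);  /* read into a kernel buffer      */
set_fs(old_fs);                          /* restore the boundary; easy to
                                          * miss on an error path          */
```

The modern replacement is a family of helpers such as kernel_read(), which operate on kernel buffers directly without ever moving the boundary.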
There are a lot of things relating to network configuration, for example, that involve invoking system-call code on buffers that are in kernel space. So what do you do in that case? In that case, you call set_fs(). set_fs() moves that boundary; it essentially removes it, making the entire virtual address space accessible to system calls and other such things. So now you can invoke your system call from kernel space, it will not run into the security barrier, and things will work. This is all just fine as long as you don't forget to put the boundary back when you are done. And anybody who has paid attention to how these APIs work knows that, in a case like this, somebody is always going to forget to put it back. In fact, that has happened on occasion. Sometimes it's just sloppy coding, or sometimes some particularly obscure error path manages to bypass the set_fs() call that restores the boundary. In any case, what you end up with is a really insidious vulnerability, because everything appears to work just fine. There's no visible bug unless you try to exploit it somehow, in which case it is there. So this has been the cause of a number of security problems, a number of CVE entries, over the years. But the good news is that, as of the 5.10 kernel, set_fs() is now gone, at least for a number of the more commonly used architectures. This is the result of work that has been done over the course of years, and especially over the last year. It required the replacement of a lot of system-call interfaces with internal interfaces that can do the sorts of things you need to do safely, without having to manually mess with the security boundary. As a result of all this work, we have something that most users will never see, except that perhaps they won't have to apply as many security updates going forward, which is a good thing. Another legacy problem that is worth mentioning is the year-2038 problem.
In January of 2038, the 32-bit time value that's used on 32-bit systems will overflow. Time values can no longer be accurately represented at that point, so you get corrupted times and other things go wrong. You might ask: why do I really care about this, since we're all using 64-bit systems now? Even my phone is a 64-bit system. But the truth of the matter is that 32-bit systems are still being built, they're still being deployed, and they will still be in use in 2038. So we want to fix this problem, and we want to fix it soon, because some of the systems being deployed now will still be around then. Again, there is good news: the work to fix this in the kernel was mostly completed in 2020, most of it earlier this year. There are still some loose ends, mostly in the area of filesystems, a few things that need to be fixed there, but otherwise we have managed to fix this problem in the kernel. It was a many-year effort to get there. There's still a fair amount of work yet to be done in user space and for distributors to do; they're working on it, they have plans, but the kernel side is just about done. So again, good news. There's a common aspect to these stories that I wanted to point out: neither of these problems was a small problem. Both of them required effort over months or years to get through and fix. That's fine, but the problem is that this is the sort of work in the open-source community that can be very hard to get funded. There are a lot of people who work on the kernel, and there are a lot of companies that support work on the kernel; we typically have about 200 companies supporting work on any given kernel release. But there are still areas that many companies just don't see fit to support. Most of them are very focused on getting their own products working well with the kernel and don't see much beyond that.
So the kernel, like just about any other open-source project, has certain dark areas of important core work that nobody sees as being their problem to fix. Nonetheless, we were able to get the time and the effort to fix these problems, and others as well. So this is good news, and I hope it continues. As long as we are able to do this, I think the kernel has a long and bright future ahead of it. If we ever get to a point where we really cannot get this work done, then we're going to have trouble. But for now, we are doing well. Okay, let's move on to a related topic, which is security. In particular, I want to talk about a couple of interesting upcoming security technologies that are worth keeping an eye on, just as a sign of where things are going in this area. So let's think about 64-bit machines again. A 64-bit machine, of course, uses 64-bit pointers to refer to addresses in the virtual address space. If you think about arm64 machines, for example, they have these 64-bit pointers, but it turns out that 64 bits can address an awful lot of memory, more than we can really expect to need anytime soon, even if you take into account all the past predictions of how we were never going to need that much memory. As a result, on an arm64 machine, only 48 of those bits, or in certain configurations on some hardware at most 52 of them, are used to actually address memory. The remaining 12 or 16 bits at the top of the address are not used, and thus could perhaps be put to a different good use. There are a few uses that have been laid out for these bits, but I want to talk about one in particular, called the memory tagging extension. It looks like this: you've got all those bits out there, so set aside four of them, in particular the lower four bits of the uppermost byte, and use that as a key value.
So you can stick any arbitrary value between zero and 15 in those four bits and call that the key for this particular pointer. The address pointed to by this pointer does not change, but it is now qualified by this key value, and that has interesting uses. Then look at what you can do with memory, because this technology also allows you to assign keys to memory at about cache-line resolution. I've tried to diagram it here with the different colors, where you see different regions of memory having different keys assigned to them, as indicated by the different colors. So now we have keys in pointers and keys in memory. When you go to dereference one of those pointers in a program, the processor will check the pointer's key and compare it to the key assigned to the memory. If the keys match, as is the case here, where you can see the colors are the same, then the dereference continues as always and everything works as it always did. If, though, those keys do not match, then the processor will reject the operation. In fact, it will probably kill the process that is trying to do it and flag the bug, because the key no longer matched what was there. So this is an interesting technology, and there are some interesting things you can do with it. For example, use-after-free bugs are often a source of security problems in the kernel, where kernel code frees a range of memory but actually isn't finished with it yet; that memory can be reallocated somewhere else, and then trouble results. If your memory allocator reserves one key value for memory that's not actually allocated to any purpose, and then, as soon as you free a range of memory, the key of that memory is immediately set to this unused value, then any use of the memory after freeing it will generate a trap that can be caught, and these sorts of bugs will no longer go unnoticed.
Similarly, buffer-overflow vulnerabilities can be found this way. If you ensure that adjacent objects in memory have different key values, then as soon as a pointer gets incremented past the end of one object, the key values don't match anymore; once again you get a trap and you catch the problem. This technology can also catch problems with stray pointers in general, whether it's a completely corrupted value from a bug in the kernel, or a pointer that has been corrupted by an attacker who is able to overwrite a pointer but does not know the correct key value to go with it, so the attack is caught. So there are a lot of ways we can catch security problems, both while code is being developed and in production, and we're getting more help from the hardware than we used to here. Or at least we will be once it's out, because you can't actually buy a processor that implements this functionality yet. But they're coming, they promise, and once the hardware is out there, the kernel will in fact be ready to use this technology and make our systems more secure than they were before. Another related technology I would like to talk about is something I'm calling memory-hiding techniques. So what are those? Let's take another look at this diagram I put up before of the virtual address space. What I want to point out here is that this is the address space as it is seen by a process running in user space, and it actually includes the entire kernel in the upper part. User space cannot access the kernel, because the page protections will block that, but in a sense the kernel is still visible there; it's in the address space.
Things don't have to be done this way, but they have been, really from the beginning, for performance purposes, because as soon as you try to separate those address spaces, you greatly increase the cost of every context switch between kernel and user space, and the performance impact is readily felt; we've not wanted to do that. So we kept those address spaces together, at least until the Meltdown vulnerability came about. Meltdown, of course, allowed the use of a hardware problem to bypass the page protections and read memory that was in kernel space, leading to all kinds of vulnerabilities. At that point we could no longer share things in that way, and the way Meltdown was fixed was to destroy that mapping of kernel space while user space is running, so that it is no longer visible in this way. We did in fact duly take a performance hit as a result, a fairly big one, but it was the only thing we could do to address this particular hardware vulnerability. So this is unfortunate, but we had to do it. And it points at a more general principle that people started to think about a little bit more, which is that memory that is not visible is more difficult to attack. As long as the kernel was visible in a process's address space, only the page protections were preventing an attack on that memory. Once it is not visible, once there is simply no path to it at all, it becomes much harder for an attacker to get at, because there's a whole separate set of obstacles that have to be overcome to get at memory you cannot see at all. So if you can do it here, perhaps you can do this elsewhere as well.
Some of the things coming in the pipeline in the near future include a new system call called memfd_secret. This system call will allocate a range of memory into a process that is absolutely hidden from the rest of the system: the kernel cannot access it, no other process can access it, and in fact it can even be kept out of the system caches. It's a fairly expensive thing to do, but if it is used in moderation, for example to allocate buffers to hold cryptographic keys, then it can increase the security of your system and increase your resilience against certain kinds of attacks. This system call I would expect to be added to the kernel sometime in the next year; it seems to be about ready to go. Secure enclaves are a hardware technology, based perhaps more on encryption than on hiding memory, but they allow virtualized guests in particular, or other sorts of code bodies, to run in a separate environment where their memory is encrypted. The processor itself handles the encryption and decryption, so that when you're running outside of that special context you cannot access it. So, once again, virtual machines can run with their memory encrypted; the host kernel cannot access that memory, and neither can other guests. This technology is out there. It has had some security issues of its own so far; one assumes those will be worked out over time, and this will be put into more common use.
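As a rough sketch of how memfd_secret might be used from user space: note that the call was only a proposal at the time of this talk (it was eventually merged in Linux 5.14, with syscall number 447 on x86-64, and some kernels additionally require the secretmem.enable=1 boot parameter), so this is illustrative rather than guaranteed to run on any given system:

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    /* 447 is the eventual x86-64 syscall number for memfd_secret;
     * glibc did not provide a wrapper when the call was new */
    int fd = (int)syscall(447 /* __NR_memfd_secret */, 0);
    if (fd < 0) {
        perror("memfd_secret (kernel support missing?)");
        return 1;
    }
    ftruncate(fd, 4096);                       /* size the secret area */
    unsigned char *key = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    /* This page is removed from the kernel's direct map and is
     * invisible to other processes: a place for key material. */
    memcpy(key, "secret key material", 19);
    munmap(key, 4096);
    close(fd);
    return 0;
}
```

Because the pages are unmapped from the kernel's direct map, the cost is paid in TLB and cache behavior, which is why the call is meant for small, rarely-touched secrets rather than general allocations.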
Meanwhile, though, people are working on other techniques, such as the thing called KVM protected memory. This is a kernel technique that allows a virtual machine running under the KVM hypervisor to, once again, remove its memory from the visibility of the host kernel and the hypervisor, so that only the virtual machine can see its own memory. This will protect the virtual machine against the kernel on the host it's running on, against other guests and virtual machines, and so on, for a relatively low cost. This particular work has a ways to go yet, so I can't really make a prediction as to when it will make it into the mainline kernel, but there's definitely some interest behind it, and I think we will see it come in sooner or later and, once again, increase the protection of our systems, especially those running virtualized guests. The last thing I wanted to talk about in the security context was the ongoing effort to simply clean up the code and fix bugs. This is happening in every kernel release: hundreds and often thousands of patches go into every single kernel release just doing basic code cleanups, fixing problematic code patterns, and so on. It's kind of thankless work; it doesn't bring the sort of glory that other kernel patches do, but we owe a big debt to the people who are doing it, because it makes our kernel both more secure and more maintainable going forward. I'm glad this work is continuing to be done. With my remaining time, I wanted to talk about one last topic, and that is tools, in particular the tools we use to work on the kernel itself. This is an area we have not always done all that well in. But, as I'll get to, let's go back to 1975. There was a guy named Frederick Brooks who put out an influential book at the time called The Mythical Man-Month. This book was famous for the quote "adding manpower to a late software project makes it later." If your project is running behind,
you cannot fix it by adding more developers to it; in fact, you will make things worse, because the overhead of just keeping all these people on the same page and getting them to work together will slow you down even more. So that was the case in 1975; one might ask if it still applies. We can look briefly at the kernel's development history. Version 0.01, released in 1991, had exactly one developer. By 2.6.20 we had 741 of them, and by the time we get to the 5.8 kernel this year, we had almost 2,000 developers working on it. So we are certainly adding developers; nobody can doubt that. Likewise, nobody can doubt that we are nowhere near finished; in fact, we may be further from finished developing the kernel than we ever were. So with regard to the kernel, perhaps it is really true that adding developers has only made things later; who knows. But in any case, that's not what I'm actually here to talk about. There was another observation in this book, where he was talking about how the teams organized around software projects should be set up, and he said that every one of these teams should have a toolsmith: somebody who is not working on the actual end product, but who is working on the tools that the other developers use to create that end product. And this is an area where we have tended to fall down in the past. Over the course of a year, there are over 4,000 developers who will contribute to the kernel, and, as I mentioned before, there are certain things that just don't get done, certain problems that nobody sees as theirs to fix; one of those has certainly been tools. The kernel project has in many ways lagged behind other free-software projects in terms of the tools that it uses, and that has hurt us at times. What's happening, though, is that the situation is getting a lot better. Over the last few years we have acquired a whole set of testing and fuzzing tools that have found hundreds if not thousands of bugs before we release our kernels, before users find them, or before
attackers can exploit them. So this has been a good thing, because we were not very good about testing for a long time. It was often said that testing the kernel was the reason we keep users around, so they can do that for us. But we understand we need a better story than that, and a better story still than we have now; things are heading in the right direction, though. The other aspect of this is tools for the development of the kernel itself. lore.kernel.org is a comprehensive archive of all of our kernel mailing lists, going back to the beginning wherever possible, something we never had until just a couple of years ago, but which has already rendered itself indispensable as a way of finding our discussions and understanding how it is that we decided to do certain things. An even more recent tool is a little thing called b4. If you give b4 the message ID of a patch of interest, or just pipe the patch into it, it will go to lore.kernel.org and download the patch; in fact, it will download an entire patch series, all the patches associated with that patch. It will also download any replies, extract any tags, like Reviewed-by or Acked-by tags, apply those to the patches they belong to, and package the whole thing up as a nice little mailbox file that you can then just apply directly to your git repository. It has eased a bunch of manual work that kernel developers all had their own scripts for, for years. It's a tool that we needed, but nobody actually sat down to develop it until Konstantin Ryabitsev at the Linux Foundation did so a couple of years ago. He's got a lot of plans for it; you can see the article linked there for some of the other stuff that's going on, including patch attestation and all that. But it has already really made the life of kernel subsystem maintainers easier. So we're doing better; we actually have toolsmiths, and I hope we get more of them. In fact, I would say to anybody out there that if you want to
participate in kernel development but you don't necessarily feel that you're up for working on the kernel itself, working on our tooling is an area that would generate a huge payback, and I would encourage people to look at the things we need done there and to help out in that regard. And with that, I'm basically out of time. There's a whole ton of things I could have talked about here; many of them could have filled a whole talk by themselves, but there's just not time to do that in a 30-minute slot. Much of this you can read about on LWN, of course, and in other places. There's a lot going on in the kernel community; we're staying busy, we're generating new kernels at a breakneck pace, and I think in general things are going pretty well. And at that point, I am done. I thank you all very much for your attention, and I wish you the best for the rest of the Open Source Summit Japan and for your holiday season. Thank you. Thank you so much to all the speakers for joining us. We now have a break before the conference sessions begin at 10:25 a.m. Japan time, so stretch, grab some coffee and a snack, then join one of the breakout sessions. And don't miss our live music duo performance of the Japanese koto, a Japanese string instrument, and the shakuhachi, a flute made of bamboo; that will begin at 2:05 p.m. Japan time. And with that, please have a great day and please enjoy the rest of the day. Thank you.