Hi, good afternoon. Let's start my session. My name is Tsukika Shibata. I came here from Tokyo, Japan on Sunday. It has been three days, but I'm still a little jet-lagged. But let's enjoy the conference. First of all, again, my name is Tsukika Shibata, and I submitted this proposal. I used to work for NEC, a Japanese company, but I stepped away from the company. I am still very much interested in working on Linux and open source, so I joined the Linux Foundation and also the Open Invention Network, and now I work full time on open source. In my presentations I normally talk about the statistics of Linux kernel development, and also about which version is the most usable for actual industry use cases. So let's get started. Linux is one of the most successful open source projects and has kept growing through the 2010s, expanding into a variety of use cases such as enterprise IT, cloud, networking, Android, embedded and IoT, and many other devices. It is developed and released under GPL version 2. Most of these statistics are discussed by Greg Kroah-Hartman or by Jonathan Corbet, who is a writer for LWN. Again, I will try to talk about the Linux kernel development community. Over 1,700 developers from around 230 companies participate in every release, and the kernel grows by nearly 1.5 million lines of code and over 4,000 files every year. And again, Linux was started in 1991, so it is now 28 years old. It is not in an early phase; it is very mature. The maintainers have great skill in managing their subsystems and professional knowledge of their areas of technology. In the very early days, every larger company had its own operating system, so developers worked on their own operating system or their own hardware. But nowadays many companies no longer have that kind of proprietary operating system, so those people have moved into open source and gathered in the Linux community.
That is the reason why so many great maintainers have gathered in the kernel community. And here is the latest status of the Linux kernel. The latest release is version 5.2, released on July 7, last month. It has about 26 million lines of code in roughly 64,000 files, and the development period was 63 days from the previous version, 5.1. Sixty-three days means nine weeks. Most Linux kernel releases happen on a Sunday, and the next update is also released on a Sunday, so nine weeks means a Sunday-to-Sunday rhythm. I will show you some more details. The current stable kernel is 5.2.9, so nine stable updates have already happened. And the current development kernel is 5.3, at release candidate 5, so five release candidates have already been released. So how long? Well, 5.3 being at release candidate 5 means that after the 5.2 release, the stable updates kept coming in parallel, which is why we are at 5.2.9. I will show some more details later. And how long does kernel development take? I counted the number of days between releases. Let's look at the year 2016: 4.4 was released in January 2016, and 4.9 in December, so six releases happened that year, and each release took 63 or 70 days; 70 days means ten weeks. 2017 started in February, and 4.14 was released in November, so five releases happened that year. Last year started with 4.15 in January, and 4.20 was released in December, so six releases happened. It is about two months, or less than three months, per release. And for this year, 5.0 was released in March, 5.1 in May, and 5.2 in July. Quite coincidentally, that was March 3, May 5, and July 7. So maybe the next one is September, hopefully September 9. But September 9 is not a Sunday, so maybe not. It's just a joke. But the pattern is easy to see, so it is easy to think about the time frame in which the next kernel will be released.
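Because the cadence is so regular, the next release date can be sketched in a few lines of Python. This is only an illustration of the nine-week, Sunday-to-Sunday rhythm just described, not a real predictor; the actual date depends on how many release candidates Linus ends up needing.

```python
from datetime import date, timedelta

def estimate_next_release(last_release: date, weeks: int = 9) -> date:
    """Estimate the next mainline release, assuming the typical
    nine-week cycle and a Sunday release tag."""
    target = last_release + timedelta(weeks=weeks)
    # Snap forward to the next Sunday, since Linus usually tags on Sundays.
    days_to_sunday = (6 - target.weekday()) % 7
    return target + timedelta(days=days_to_sunday)

# 5.2 was released on Sunday 2019-07-07; nine weeks later is 2019-09-08.
print(estimate_next_release(date(2019, 7, 7)))
```

Running it on the 5.1 date (May 5) reproduces the actual 5.2 date (July 7), which is why the "every two months, on a Sunday" rule of thumb works so well.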
The kernel has a very nice development process, and the time frame for the next release is easy to estimate. That is the result of a long history of development. And another very important point is the Linux kernel development policy: upstream is the only place to send patches. It is a single place; every new feature or fix should be sent upstream. Very mature, skilled maintainers review each individual patch and then decide whether it will be accepted or not. That is why the Linux kernel can keep such high quality. Every single patch is also tested without conflicts, and that needs a well-coordinated process, but the kernel developers discuss with each other to keep shaping the development process into its current form. So we need to understand that upstream is the only place. Sometimes we build our own product and create our own patches, but that is not good: if we continuously release our own kernel, it may diverge from upstream over time. That is a very important point. And here is the kernel development process. I have not yet uploaded my slides, but I will after my presentation, sorry. After 5.n is released, a two-week merge window starts, so everyone can propose new patches. Then Linus Torvalds creates the -rc1 release candidate, and roughly every week the next release candidate is provided. Linus reviews the patches and then releases a release candidate, mostly every Sunday, so developers can download that release candidate from Monday, and new changes are submitted until Saturday. Then Linus merges them and releases -rc2, -rc3, and so on. That is the development process. Usually it goes to rc7 or rc8, and then 5.n+1 is released.
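The merge-window-plus-weekly-RC process above can be sketched as a simple timeline generator. This is a sketch of the typical cadence only (a two-week merge window, then one -rc every Sunday, with the final release one week after the last rc); real cycles sometimes stretch to an rc8.

```python
from datetime import date, timedelta

def rc_schedule(release_day: date, num_rcs: int = 7):
    """Typical mainline cadence: after the 5.n release, a two-week merge
    window closes at -rc1, then one release candidate every Sunday,
    then the final release one week after the last rc."""
    events = [("merge window opens", release_day)]
    rc1 = release_day + timedelta(weeks=2)
    for i in range(1, num_rcs + 1):
        events.append((f"-rc{i}", rc1 + timedelta(weeks=i - 1)))
    events.append(("5.(n+1) final", rc1 + timedelta(weeks=num_rcs)))
    return events

# Hypothetical cycle starting from the 5.2 release on Sunday 2019-07-07.
for name, day in rc_schedule(date(2019, 7, 7)):
    print(f"{day}  {name}")
```

With seven release candidates, the final release lands nine weeks after the previous one, which matches the 63-day development periods counted earlier.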
That is why it takes seven or eight weeks of release candidates before the release. And here is a graph of how the Linux kernel code is growing. It is surprising: after all these years it is still growing, because it is not just enterprise IT; embedded and other use cases keep appearing, and newer drivers need to be merged, so with every new use case the code increases. But the Linux kernel has nice configuration options, so unused drivers are not included in the kernel: the source code is big, but any driver you do not use stays outside the binary. And upstream always contains experimental code, which is sometimes not mature. For people who want to use a stable kernel, not an experimental one, there is the stable kernel. Greg Kroah-Hartman, who is a fellow of the Linux Foundation, maintains it. These are the three-digit kernels, like 5.2.x, and they are released frequently, sometimes once per week, sometimes twice per week. A stable series reaches end of life when the next kernel version is released. So the stable series is useful for stable use cases, but its end of life comes after just 60 or 70 days, which is not a good fit for industry use cases where we want a longer-term kernel. For that kind of requirement there is the LTS kernel, which continues to maintain a single kernel version. Greg maintains this one as well. An LTS is selected just once a year and maintained for two years; initially it was always two years, but nowadays the term has been extended, and some are maintained for six years, which can be quite reasonable for products. And let's look back at what I explained: the latest kernel is 5.2, and its stable series has had nine releases so far.
And 5.1's stable series already had 21 releases, but it has reached end of life. Also, the current development kernel is 5.3, now at release candidate 5, so maybe two or three weeks later 5.3 will be released. That is the current situation. Like this, it is very easy to imagine when a kernel will be released. But sometimes an experimental feature creates a regression, and the schedule may be extended; that is why it is important to keep watching what is happening in kernel development. So let's go on to LTS. LTS, as I mentioned, is the only tree that keeps receiving fixes from the community. In real use cases we do not need experimental features, just a stable, tested, and confirmed kernel, and security fixes are released frequently until end of life, which is sometimes two years and sometimes six years. An LTS is selected around the November or December time frame, so we can predict which kernel version will become the LTS in that time frame, and, targeting that kernel, we can submit our own patches. OK, here is which versions have already been released. At the bottom, 3.16, released in 2014, is maintained by Ben Hutchings, who is a maintainer of the Debian kernel; Debian is committed to providing a stable kernel. The others are maintained by Greg Kroah-Hartman. 4.4 was released in January 2016; initially it had a two-year term, but it is now being maintained for six years, committed by Greg, with end of life projected for February 2022. And 4.9 has also become six years. And when I presented a similar slide in July, 4.14 had a two-year term, but it has now been extended to six years, because some large companies talked with Greg and had it extended. So we hope 4.19 also becomes six years. That is so nice, but Greg is just one guy.
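The LTS lineup just described can be captured in a small lookup table. Note that only the February 2022 projection for 4.4 is stated in this talk; the other dates below are illustrative placeholders I have filled in for the sketch, and kernel.org's releases page is the authoritative source.

```python
from datetime import date

# Projected end-of-life dates for the LTS series discussed here.
# Only 4.4's February 2022 projection comes from the talk; the others
# are illustrative placeholders -- check kernel.org for the real dates.
LTS_EOL = {
    "3.16": date(2020, 6, 1),   # maintained by Ben Hutchings
    "4.4":  date(2022, 2, 1),   # extended from two years to six
    "4.9":  date(2023, 1, 1),   # also extended to six years
    "4.14": date(2024, 1, 1),   # extended to six years in 2019
}

def still_supported(series: str, today: date) -> bool:
    """True if the given LTS series is still before its projected EOL."""
    return today < LTS_EOL[series]

print(still_supported("4.4", date(2019, 8, 21)))  # supported at talk time
```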
With a six-year maintenance period, and another LTS released every year, Greg would eventually have to maintain six different kernels at once. That is a heavy load, so we may need to do something: more automated testing, or making maintenance much easier. That is a big issue not just for the kernel community but also for us as users. Actually, I am trying to do something about it, but it is not yet solved, so this is a very difficult point in time. Here are actual use cases of LTS. For Android, it is already committed: Android (AOSP) will use the six-year kernels, and 4.4, 4.9, and 4.14 are used by AOSP. Google is always talking with Greg, helping him, and asking him to extend to six years. Also, Chromebooks initially used 4.4 but have now moved to 4.14, so the latest Chromebooks use the 4.14 kernel. And Microsoft has already announced the Windows Subsystem for Linux version 2, which runs a real Linux kernel on top of Hyper-V on Windows 10. Not every user can use it yet, only a test version, but the kernel version is 4.19, because Microsoft knows about LTS. I also like the Raspberry Pi, and I checked the July version of the Raspberry Pi kernel; it is also 4.19. And Amazon provides its own kernel: Amazon Linux uses 4.14 and 4.19. So from the Raspberry Pi to the cloud, Linux is running everywhere with LTS versions. LTS is a nice choice, as we can see. But it is not only a good thing. I counted the number of commits included in each LTS. The yellow bars are the LTS versions, and the others are normal stable kernels. So let's look at 4.4: it already includes 12,000 commits, while 4.5 has only 900. So a huge number of patches have already been provided, and these changes need to be applied to real products. That may be an issue, because most embedded products are shipped just once and receive no fixes afterward.
That is another issue. And look: 4.9 has 30,000 commits and 4.14 has 11,000. These large numbers of fixes are provided by Greg, and they should be applied; that creates more secure devices. But unfortunately, some vendors do not provide such fixes. In the case of Android, though, Sony is shipping changes every month or every three months, including these kinds of fixes. So that is one of the important points: provide more fixes to products that have already shipped. And how many fixes are provided yearly, in the case of 4.4? I took the total number of commits and divided it by the years: 3,000 to 4,000, up to 7,000 fixes are provided per year. So yearly, this kind of huge number of patches comes from the community, and we have to pick up these patches for our already-shipped products. That is one of the issues. So how do we handle such a huge number of patches? Testing the kernel by hand every time is actually hard, so use automated testing. There are open source automated testing frameworks such as Fuego, KernelCI, and LAVA from Linaro. That kind of open-source-based automated testing framework will help. And also use common test suites and share the results. I believe almost every company is doing its own testing, but the results are not shared, so everyone runs the same tests and hits the same problems, but nothing is open. If that kind of sharing happened, it could reduce a lot of duplicated activity. And make a consensus on common tests and develop them. We have LTP and some other test packages, but they are community-driven and sometimes do not fit industry use cases very well. So industry people need to get together and create common test packages. Then those could be run in the very early phase, against the release candidates, so that most of the problems would be fixed by the community and would not come back later to the many industry use cases.
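The commits-divided-by-years arithmetic above can be written out explicitly. The 12,000-commit figure and the January 2016 release date for 4.4 come from this talk; the exact day used below is an assumption for the sketch.

```python
from datetime import date

def yearly_fix_rate(total_commits: int, released: date, as_of: date) -> float:
    """Average number of stable fixes per year for an LTS series:
    the total commit count divided by the elapsed years."""
    years = (as_of - released).days / 365.25
    return total_commits / years

# 4.4 was released in January 2016; the talk counts about 12,000
# stable commits by mid-2019, which works out to roughly 3,300 per
# year -- inside the 3,000-4,000 range mentioned above.
rate = yearly_fix_rate(12_000, date(2016, 1, 10), date(2019, 8, 21))
print(f"{rate:.0f} fixes per year")
```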
That is another topic to be discussed. And KernelCI is already actively working; they are trying to set up their own project funded by some companies, but it has not happened yet, because people do not want to invest in the testing phase. It could create a common place, so I really hope KernelCI gets established, but it has not happened yet. And here are my recommended steps for the future. Anticipate the next LTS version and its release time frame, around November or December, and put our own patches into upstream; then those patches will be included and will be very easy to maintain for the long term. Also, choose a planned LTS kernel for your own product or service, which will be maintained for six years, and create a process to apply all the patches, including security fixes. That would be so nice, but I know it is not easy. That is why I present this kind of session: to try to solve this kind of problem. And OK, another thing: what is the next LTS version? I need a drum roll. I actually discussed this with Greg Kroah-Hartman, and he told me 5.4 will be the next LTS version, if everything goes fine. Last year Linus stopped maintaining the kernel for a while, so sometimes there may be delays, and sometimes other problems may happen; the community runs on contributions, so sometimes the development phase gets extended. But if everything goes fine, 5.4 will be released within this year, and that will be the next LTS. OK. Most of my presentation is finished, but I want to talk about what is happening in the kernel. I have two topics. One is the CPU vulnerabilities. Last year, in January 2018, the Spectre and Meltdown problems happened. That was a big surprise. We believed that CPUs did not have such problems, but Spectre and Meltdown were a big issue, and the kernel community worked so hard, even through January, to create good solutions to these problems.
That was happening last January, but after that, in May, another Spectre variant was announced. Then in June another one was announced, and in August L1TF, called Foreshadow, another vulnerability of the same kind, happened. And this year, in May, MDS, called ZombieLoad, RIDL, or Fallout, the same kind of CPU vulnerability, happened. And the impact we get depends on which CPU we are using, Intel, AMD, or Arm; on which generation of CPU, like Coffee Lake or Skylake or others; and on 32-bit versus 64-bit. These make a big difference. It also depends on which kernel: Linux, Android, a commercial OS such as Windows or macOS, or a cloud OS like Amazon's; and on the rest of the vulnerable environment, like the web browser or the microcode. This is not just a CPU problem; the impact covers a much broader range. The software version also matters: old versions have no workaround, so you must run the latest, upgraded software, with the right BIOS configuration. And performance degradation happens, so you may then need a higher-performance CPU. So it is not just a single vulnerability; the impact is huge. And finally, should we expect further problems in the future? The answer, I think, is yes; some others will probably happen. Fixing the CPU itself takes more time, so we need to think about how further vulnerabilities will happen and what impact they will have. That is the current situation. So regular kernel updates are very important. CPU vulnerability issues are happening everywhere, not just in software but also in the CPU and microcode and so on. So security problems are also the number one issue. If we ship embedded devices to customers, the customers always ask us: are there any security issues here? And we want to be able to answer: no, there are not.
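You can actually check how your running kernel mitigates these issues: recent kernels expose a per-vulnerability status file for each of Meltdown, Spectre, L1TF, MDS, and so on under sysfs. Here is a minimal sketch; the directory only exists on reasonably new Linux kernels, so the function degrades to an empty result elsewhere.

```python
from pathlib import Path

def cpu_vulnerabilities(
    sysfs: str = "/sys/devices/system/cpu/vulnerabilities",
) -> dict:
    """Read the kernel's per-vulnerability mitigation status from sysfs.
    Returns an empty dict on kernels (or OSes) without this interface."""
    base = Path(sysfs)
    if not base.is_dir():
        return {}
    return {f.name: f.read_text().strip() for f in base.iterdir()}

# Print something like "meltdown: Mitigation: PTI" for each known issue.
for name, status in sorted(cpu_vulnerabilities().items()):
    print(f"{name}: {status}")
```

A file reading "Vulnerable" rather than "Mitigation: ..." is exactly the signal that a kernel update is overdue, which is the point of this section.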
To do that, we should provide the security fixes, and we must work with LTS: no LTS kernel means no patch provider. So this is also a very important reason to use LTS. If we do not use an LTS kernel, these security fixes will not be provided; Greg only provides the patches for these security fixes on the LTS kernels. That is another important point. And sometimes we want to cherry-pick: oh, this one patch can solve this problem. But Greg always tells us that such a patch is never a single one; further patches will be provided later, so chasing the latest LTS is another very important point. I think Greg's keynote is on Friday morning, so if you are interested, please listen to it. And again, applying all the LTS patches is the best way, not cherry-picking. That is one topic. The second one is more recent: Fuchsia. I am not sure my pronunciation is OK, because I am Japanese. Fuchsia is provided by Google as a microkernel-based OS built on Zircon. A microkernel is very different from a monolithic kernel. I am kind of an old guy: when I was very young, nearly 30 years ago, I was a member of a Minix user group, and Linus sent patches to the Minix community. At that time, Professor Andrew Tanenbaum became angry: a modern kernel should be a microkernel, but Linus had written a monolithic kernel. There was quite a debate, but Linus did not change his mind, and that is why the Linux kernel is now monolithic. But almost 30 years later, Fuchsia is based on a microkernel. In the old days CPU performance was very low, and a microkernel architecture uses inter-process communication, IPC, very heavily, so on those CPUs the performance of such a kernel was not so good. That is one of the reasons why Linus chose a monolithic kernel. But nowadays CPU performance is much better than before.
So maybe this approach is good now, and Google has said they expect it to be used in next-generation embedded devices, maybe in five years. That may be very, very interesting. So maybe Fuchsia can replace the Linux kernel. And in contrast to Fuchsia, Huawei announced Harmony OS just this month. It is very easy to imagine, given how hard the US-China relationship is nowadays, why Huawei is already thinking about creating its own microkernel. It is called Harmony OS, and it is also microkernel-based. They say it is safer and more secure because of the microkernel, and they have a deterministic latency engine, which may give higher performance in some respects. And they mention it will be available in 2020, which is next year. So those two are very, very interesting. I come mostly from the OS layer, so these are very interesting to me. I hope they will grow up and become usable, and I would like to try them. But how does newer technology come up against Linux? Fuchsia and Harmony OS may become open source, but open source is a community activity: contributing code upstream with diverse developers. Can Fuchsia and Harmony OS do this? That is a big question. There is also the open and transparent development model: everyone can imagine when a release will happen and what kind of code will be included. Can Fuchsia and Harmony OS do that? That is a question. And security and bug fixing in a trusted and timely fashion is another point. And finally, long-term support by the community. This is not just about opening up the code; creating a better community is very important, and being a member of the community is very important. Until that kind of structure can be established for Fuchsia or Harmony OS, we must use Linux. That is the current situation. But I am still very much hoping to see what happens in the microkernel world.
And this is the last one. So what are the key pieces of maintaining open source in the long term? I think there are three key points. One is a long-term community: a community that continues to provide bug fixes over the long term, maybe six years or more, and that has an organization supporting its activities. Greg Kroah-Hartman is a fellow of the Linux Foundation, so the Linux Foundation supports his activities; that is why he can continue to do this. This cannot happen with a single person alone; an organization supporting it from behind is very important. The second is security fixes: provide security fixes with a trusted process. The community lives on trust between each individual party, so a trusted process matters, along with continuous updates, less downtime, and responding to many different risks. Sometimes that is not owned by the community; a company needs to do it. And the third one is compliance. We are already very serious about checking that our own products meet open source compliance, with the GPL and other licenses. But it is not just a single product: companies ship many different products, so a corporate-level internal standard should be created for compliance. And there is also a lot of discussion happening about supply chain compliance, because sometimes OEM and ODM vendors may do different things. OpenChain is trying to make the whole supply chain more compliant. That is another thing that is happening. So these three are the key pieces of maintaining open source in the long term. That is my final slide. Thank you so much.