So thank you very much, everyone, for coming here. Konnichiwa. I'm going to be talking today a little bit about the Zephyr project and some of the results. I've been managing open source projects for about 20 years now, roughly. The GNU compilers and the Linux kernel were some of the early ones. So I was there when Linux was younger than what we see today. And I was not a developer, I was a manager. I managed the development team, and trying to understand the practices and learning what the kernel did made a very strong impression on me. So when I ended up working at the Linux Foundation, I had various opinions and had worked on the question of, okay, how can we make an open source project successful? And the obvious one that has been successful, probably the most successful right now, is the Linux kernel. So what I'm going to show you first is what's happening with Zephyr today, and then I'll go a little bit through how we got there and where things stand at the moment. I encourage people to ask questions. Anyone who asks a question can get a Zephyr kite, okay? I also have a lot of Zephyr stickers with me, so if people want Zephyr stickers because they're working with Zephyr, just come up at the end and let me know. No problem at all. So let me start in. Okay, just a quick show of hands: how many people know what Zephyr is today? There's a few, but there's a few that don't. Okay, so I'll just let you know that it's an open source RTOS. We started the project aiming to get safety and security going. We also wanted to get a wide range of hardware enabled, and we wanted to be under neutral governance, with a license people were comfortable using and integrating into products. It's very modular, which means for an embedded RTOS you can get down to about 15K, okay? And then you just bring in exactly what your device needs.
So you're building up an image bit by bit to what you actually need and want. So that's kind of what we're doing. And right from the start, we knew we wanted to go after certifications with this. We focused initially on security, and now we're starting to focus on safety, as you heard this morning. So these are the stats. I went to GitHub on the 1st of December. And just a second, let me show you the method. Oops, sorry. This is the methodology. You can go and do this yourself to check my numbers, okay? All I did was manually go to the project and figure out the total number of commits, how many contributors, and over a month period, how many people have contributed to the project and what the rate is for that. As of December 1st, we were sitting at 1,777 commits for the month, which when you do the math is about 2.45 commits per hour. So we have an embedded OS here that's seeing roughly a quarter of the velocity of the Linux kernel in terms of commits and interactions with the community. And when you actually go and compare, the next closest ones are the community RTOSes — they are the most active. The next closest is NuttX, and you're seeing that we're about five or six times ahead in terms of contributors over the last month, and around that level, if not more, in terms of commits. So the question is, how has it become this active? What have we done? Are there other things we can do? Are these lessons others can learn and work from? That's what this talk is about today. And like I say, I do welcome questions. 2.45 commits per hour as of December 1st. And as you see, the average number of unique contributors is increasing each month compared to the other RTOSes out there. The developers are coming. They like the community. They like the code base.
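As a quick back-of-the-envelope check of the velocity figure quoted above — the exact measurement window isn't stated in the talk, so a 30-day month is assumed here, which is why the result comes out slightly above the quoted 2.45:

```python
# Back-of-the-envelope check of the "commits per hour" figure.
# 1,777 is the monthly commit count read off GitHub as of Dec 1;
# the 30-day window is an assumption, not stated in the talk.
commits_per_month = 1777
hours_per_month = 30 * 24  # assumed 30-day window

rate = commits_per_month / hours_per_month
print(f"{rate:.2f} commits per hour")  # prints "2.47 commits per hour"
```

A slightly longer window (closer to 31 days) would land right on the quoted 2.45, so the figures are consistent.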
And as a result, they're sticking around and figuring out they just need to do a little bit more for things to be effective for them. And we see also the average number of commits per month is going up. As developers go up, the commits go up. It's a pretty simple formula. We launched the project in February of 2016. And as you see, it went down and then it went up steadily. Where it goes down is where we were looking at which processes and what things we wanted to change in the project, beyond just being under neutral governance. And part of it involved listening to the community. So we've been running annual surveys, getting feedback, and that goes into our TSC and they work on the issues. It's a very inclusive community in that sense. More and more people are working with it: looking at GitHub clones and unique visitors, we're seeing those stats continue to increase year over year. There are about 880 unique clones a day and over 1,200 unique visitors. And when I looked at this six months ago, it's had a nice significant increase since then. It's on a steady ramp up — nothing terribly dramatic, but it is there. And if you'll notice, the GitHub stars, for anyone who considers that a valuable metric, are increasing. Like I say, the developers like working in the code base, since it's the developers who decide the policies — they set it up so it's things they want to do. So we see more contributors coming in each release, roughly about 30%. Each of our new releases is bringing in about 30% new contributors, which is what is fueling this growth, and the existing ones are sticking around and participating. So, you know — okay, lies, damned lies, and statistics — how does this all compare with Linux? Well, I also work on the real-time Linux project, and I also look at the Linux kernel and help with things there.
Especially on the embedded Linux side, so I see this sort of information fairly closely. If you look back at the 6.5 release, there were about 1,900 contributors from about 218 organizations over a six-week period. There were about 7,000 lines of code added daily, about 1,945 lines modified, and about 1,200 lines removed every day. That works out to about nine changes an hour. So while Zephyr is at 2.45, the Linux kernel is at nine. In fact, the Linux kernel is the most active open-source project out there that I'm aware of, and there are various stats to prove it. The Kubernetes community's foundation, the CNCF, does a velocity analysis a couple of times a year of the most active projects, plotted on a logarithmic scale, and Linux is always at the top of it, okay. But Zephyr is actually now the fifth most active project in the Linux Foundation. And given it was only open-sourced in 2016, we've had a tremendous amount of growth in this period. So what was Linux like when it started, though? Okay, there was a mixed, fragmented ecosystem out there. There were all the commercial distributions, there was Minix, there was BSD and so forth. Over time, they've all sort of slipped away — they're not as active anymore. Some of them are still active and have a very good, unique place in the ecosystem. But at this point, Linux pretty much dominates most of these markets, including the embedded systems market, which is where it intersects with Zephyr a bit. This is one of the Linux Foundation's slides; I just pulled it up to show that Linux is still the dominant one in embedded as well. But there are places we've learned that Linux just won't fit. It's too big. It doesn't get smaller than about three megs right now as an image.
And when you're working with a sensor and an actuator on a car, or in some agricultural device in the field, whatever — size matters, and power matters for that matter. You want to be able to work within the constraints and be as efficient as possible. So that was the reason we started Zephyr. Now, as you saw, the dip went down in 2016 after we launched, and part of this was looking at which lessons we should take from Linux. We were experimenting a little bit at the start with the release cycles, and we've now settled into our cadence like the Linux kernel has. But short release cycles are important. This actually all came out of a Linux kernel development report — these were pieces that were written up back then, and we looked at them. So: tools matter. Developers like to work with tools they're comfortable with, so work with that flow. The kernel is strongly consensus-oriented, so people don't feel like they're getting shut out. You've got certain goals the community has agreed to. And there are related factors, like the no-regressions rule: don't make things go bad. We've adopted that in Zephyr, too. And quite frankly, corporate participation in the process is crucial, because corporations have interests and they will put resources into the janitorial things. If you're doing this as a hobby, you don't want to do the things you don't like doing. However, the corporations that participate see the value in getting something that is stable across a range, and so they add that into the ecosystem. We just finished doing the Zephyr developer survey — we surveyed the upstream and downstream developers who wanted to respond and got about 400 responses. About 85% of our community right now is being paid to work on Zephyr, but 15% is doing it as a hobby because they like working on it. And the Linux kernel, when you actually go into the stats, is around the same percentages these days.
There also should be no internal boundaries within the project, and we've been working to do that. You can go and work in any part of the code you want in the kernel, okay? It's up to you to decide where you want to work, and we've been trying to emulate that in Zephyr, too. Some other lessons learned from looking at open source projects like the kernel: vendor-neutral environments are important for decision-making. There's a reason that Linus and Greg work with the Linux Foundation — it's not one company dominating and saying, I'm not going to pay your salary if you don't do what I want you to do. You have a mix of companies and individuals working on advancing things they're interested in. What are the new ones? The person who's working on picolibc is doing it as a hobby. He likes it. He likes doing small libraries. There'll be a tech talk, I think tomorrow or the day after, that Benjamin will record with Keith Packard about that. So people have things that they think are interesting and want to advance, and this is a forum for them to do it. The Linux kernel has been a forum for them to do it, and Zephyr is another one. Making it easy to put things upstream matters. The DCO process has been a major part of the Linux kernel's success: you don't have to sign a CLA, you don't have to transfer your copyright. These things are useful, and so is having work be public and reviewed — the Signed-off-by tags have been useful in the kernel as well. Consensus-oriented decisions. I should probably say nowadays it's not really email or in person — for us, most of it is happening on GitHub or in Discord, or occasionally in the TSC meetings. Most of the stuff is just flowing through reasonably well right now. We have a hierarchical developer model like the kernel, and we use Signed-off-by. There are no internal boundaries; tools matter.
And short, predictable release cycles with fixed merge windows, and stable releases. One of the lessons we learned in the kernel is that when developers get frustrated with the status quo, they come up with solutions — creative solutions at that. So we tried to listen to the Zephyr developers and help support them in shaping it into a code base they were comfortable with. And what happens when we apply those lessons? Well, our vision right from the start was to be a best-in-class RTOS for these connected, resource-constrained devices, and to be safe and secure. Zephyr's developers decide the technical directions. When we launched, we had Kconfig and Kbuild — we used those same technologies from the kernel. Kconfig was retained, and Kbuild moved over to CMake. Initially at launch there was a nanokernel and a microkernel, because that was what was there at the start. Discussions happened, and at some point the developers said, no, we're just going to go to a unified kernel — there wasn't enough value for the overhead as far as they were concerned. Our infrastructure was initially on Gerrit and Jira, and we moved to GitHub and its issues in 2017. Other areas like our APIs have been reworked, the modularization happened, the device tree support got incorporated. The release and LTS processes were refined based on input from the developers. And as a result, more developers like to work with it. So basically, as you see, everything is publicly documented. We've got the hierarchy, we've got our maintainers files, everyone can contribute, we've got our release cycles, and we've got long-term support. I think the long-term support has been key for Zephyr getting adopted in products, because as things are shifting all the time on tip, the people doing products want to put a product out and just leave it.
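As a small sketch of that Kconfig-plus-CMake flow in practice — the board, sample path, and the Kconfig symbol being toggled here are just illustrative, not anything the talk prescribes:

```shell
# Build an in-tree sample for an emulated board with west (Zephyr's meta-tool),
# overriding a Kconfig symbol on the command line (board and symbol are examples).
west build -b qemu_x86 samples/hello_world -- -DCONFIG_ASSERT=y

# Run the resulting image under QEMU.
west build -t run
```

The same Kconfig symbols can also live in the application's `prj.conf`, which is the more typical place for them; the command-line override is handy for quick experiments.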
They care about security fixes, but they don't so much want to keep up with all the shifting APIs and interfaces. So the LTS has been very useful for that side of adoption. [Audience: How long is the LTS?] Two years — it's a two-year LTS, we do an LTS every two years, and we overlap them by six months so that we have some support through that period for the backports. And the rule is everything goes upstream first and then gets backported into the LTS, just like the kernel. We took that lesson as well. Now, if people are shipping products — and I'll talk about the security aspects in a minute — they're just going to want to patch their own kernel with whatever gets backported, okay? But there's a stable code base, and that's the important part here. So right now, Zephyr is effectively supported on all the major architectures. We have the ports in the repo on the architecture side, so you can go look at that in the code base. If there's an architecture someone wants to contribute that's not on here, the community is certainly very welcoming. The stack is pretty comprehensive: there's basic scheduling in the kernel, the management and so forth, architecture interfaces, various low-level services for typical things like watchdog timers, as well as a whole connectivity stack above it that's been implemented natively. We also have Thread and 6LoWPAN and CAN bus and zbus and so forth. So most of the connectivity you're looking for is already out there for Zephyr. This makes it very easy for people, and it's all integrated together and it's all been tested together. So it's not like you're getting a kernel as its own image and then bolting libraries together with it.
It's all in one, and it's a unified test environment. That's what's making it powerful, I think, for people, because it's very easy to take it, get close to what you've got — you might want to make some changes for one board, or change the configurations you're doing through device tree or other things — and then get something up and started quickly. I'll give you more stats about this later, but what about Zephyr security? Well, again, right from the start we wanted to be secure, and we actually had a security committee start meeting right at the start of the project. We open sourced it and they started meeting, and they've been meeting every week or every two weeks depending on what was going on. This diagram was actually created back in 2015, before the project even launched, trying to figure out how we could be safe and secure. The key insight was: if the code is changing at about 2.5 commits an hour, that's going to be really hard for people doing products to keep up with — hence the LTS. And then a subset of that is what we do the analysis on, to take out to the certification authorities and so forth as an auditable set. We came out with this diagram, and that is what we've been working with until now. It still seems to be relevant. So the LTS, as I've mentioned, is product focused, and security updates are the focus for it. It's compatible with new hardware — if people want to put a new board into it, that's not an automatic no — but new functionality generally is: you put new functionality in tip, you don't put it into the LTS. It's tested by the project, and it's supported for about two and a half years. We've already retired our LTS1, the one we first came out with. We had four releases of it.
We had the CVEs fixed for it and so forth, and we delivered on it as we wanted to. We're working on the next one: this coming year, 2024, we should be getting our LTS3 out, and then we'll be retiring LTS2. So right now we're basically maintaining LTS2. The auditable code base is a subset of the LTS, and we try very hard to keep the code bases all in sync. If you go onto the Zephyr site, you'll find a lot of project security documentation. We basically built around secure development, secure design, and going after security certification, so that we pave the way and don't put any blocks in for people who want to do it. We haven't done a certification as a project because our members haven't funded it yet — there's been no consensus, and they've been doing it for themselves. However, it's something that's within our scope. You can go to our documentation to read the current overview and the threat information; we've got a threat model and so forth. And one of the things Zephyr's got that I'm personally very engaged with is that you can generate an SBOM just by invoking your build a certain way. In fact, you actually generate three SBOMs: a source SBOM for your Zephyr sources, a source SBOM for your app sources, and then a build SBOM that connects the actual images to exactly which source files made it into your ELF. This comes out of the build automatically, so you've got a very high quality SBOM there. I was talking with Greg earlier today and he's working on things like this for the kernel, but we've got it today already in Zephyr. Anyone who builds something with Zephyr with the west tool and CMake can get this, and it works with the latest versions and so forth.
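The three-SBOM flow described above is driven from the build, along these lines — flags and output names here follow the Zephyr `west spdx` documentation as I understand it, and the board is a placeholder:

```shell
# Initialize SPDX tracking in the build directory BEFORE building,
# so the build system records which files go into which image.
west spdx --init -d build

# Do the build as usual (board and sample are placeholders).
west build -b qemu_x86 samples/hello_world -d build

# Generate the SBOMs from the recorded build metadata; this is where the
# three documents come from: app sources, Zephyr sources, and the build
# itself (which ties source files to the produced ELF).
west spdx -d build
```

Because the relationships are captured during the build rather than reconstructed afterwards, the resulting build SBOM can state precisely which source files ended up in the image.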
It basically uses these relationships to say: this app file is sitting in that ELF image, this .c file is sitting in that ELF image. So you're being very precise, which means when vulnerabilities come in against versions, if you know the file isn't there, you can say: I'm not affected, I don't need to do any work — and prove it to people. You don't have to update things unless the vulnerability is present. And if the vulnerability is present, then you have to update and fix it, obviously. We learned this lesson from Amnesia:33, which involved an FNET vulnerability a few years ago. Realistically, it hit all the RTOSes, okay? FNET was visibly being used, and they flagged a version. But when we actually looked at the source files, it wasn't even in our tip — in our LTS it was; there was some FNET code there. However, when we looked at the code, the files with the vulnerabilities weren't even present. So we didn't have to do anything. At the time, though, the only way we could tell people that was to write a blog post. So if you go look in our history, there's a blog post sitting there. But this is part of the reason I've become a very strong believer that we need things like VEX to signal "you're not affected" in a consistent way that's automatable. I'll show you a link to a dashboard later that you can go look at. The other thing we did, early in 2017, is we worked with MITRE to become a CVE Numbering Authority. So we actually have our own security incident response team, a PSIRT, and we issue our own CVEs, and we triage anything that gets sent in to us before we assign a CVE number to it. So we had a bit of control over scope, because we didn't have one company that took over the mission for us, and we didn't have a foundation that said they were going to take over the mission — but we wanted to make sure we were doing security best practices.
And I think you'll probably see the kernel starting to go in some of this direction, maybe doing something similar next year — we'll see. But it is important to actually cut through the noise out there. The other thing we did, about the same time Zephyr was launching, is the Core Infrastructure Initiative launched their best practices badging program, which OpenSSF has now taken over and maintains. Our first goal was the passing level. So we were passing, and then as they added higher levels up to gold, we figured, okay, let's go for it from the security side. And we've actually had gold status — I think we were the fourth or fifth project — since February of 2019. On a personal note, I was rather pleased that we got it before the Linux kernel did. They got it shortly thereafter, but nonetheless, it's like: okay, let's race to do the best things we can possibly do. So you can go in and read it. The thing I kind of like about the program is that occasionally, when our infrastructure changes over, we stop being gold, and then we have to go fix it — fix the documentation and get ourselves back to gold. That's happened twice now for us. So it tracks certain things and checks that at least certain things haven't gone stale. And when you have that checking going on, I start to have more trust personally when it fails things. So we've fixed it a couple of times. And then the NCC Group did an assessment of Zephyr for some of their clients, and they gave us bulk vulnerability reports, which sort of said: oh, I guess we've got to change some processes here. So we listened to what was needed. We initially just had a 60-day embargo window for vulnerabilities in the project. And it was pointed out that the people doing products need time too — 60 days alone isn't wide enough.
So we've been taking the view of trying to commit to having vulnerabilities fixed within 30 days, and then giving vendors 60 days before we make the CVE numbers — well, the issues — public. And, you know, this was the Amnesia:33 situation I was referencing, and you can read the blog about it from 2020. We weren't affected, and we had to figure out how to say so. Hopefully now we'll be starting to use VEX as a way of signaling this type of thing, but that's something for next year. So I guess in summary: we're using the OpenSSF best practices badge, we're doing weekly Coverity scans and MISRA-style checks, and we're doing SBOM generation. These are some of the best practices we see in the industry, and we're trying to apply them in this project. I would encourage any project to look at at least these to get your security posture up as a starting point. So what's all this about Zephyr and safety? A lot of the people who want safety certification work with a V-model process, and it's been typically difficult to map stereotypical open source development onto a V-model. Getting this evidence pulled together and coherent has been something the project wanted to go toward, because of the places people want to use it. So we've done a fair amount of analysis of what the safety analysis is, who gets access to it, who's managing it, and so forth. And we've been working on these documents in the background. Some of them are private to the members, others are public for everyone. Anything to do with the code is pretty much public, and we're also going to be making the requirements public. Most of the input information for coverage is going to be in the code, so a lot of it's going to be public — it's just going to be a formal doc.
I think our members are doing it because it is not inexpensive to go through safety certification. We have a functional safety manager that the project has contracted with for the last couple of years. She's been getting us ready, and we've actually signed a contract with TÜV SÜD — they've reviewed our processes and they think it will work from their perspective. So we are getting everything assessed, and we're trying to get everything ready for our next LTS to be able to be certified. Our initial scope of focus — like I said, the auditable set is a subset — is the kernel interfaces. We're going after IEC 61508 SIL 3. Yes, Route 3s: SC 3 with Route 3s. And we actually have the option of going for ISO 26262 with that, with some additional work and an additional fee, of course. So we're likely to go after that as well next year, once we're getting close, because there are a lot of people who seem to want to know that it can be done. The idea of what we're going to try to do is do this all in the open, and have as much of this linkage — requirements to code to tests — with the requirements open, the code open, the tests open. So people can see the pattern, and, hey, if they want to start working on a component and have that component in their system and get it done, maybe they'll contribute it into the project. The same way people contribute code, maybe we can get requirements coverage contributed into the project. That's the experiment we're trying this year: to see if we can get it so that people can scratch their own itches when it really matters to them. That's what's not been out there publicly up till now. So let's see if the experiment works — I'm hoping it will. We're working with StrictDoc in the Zephyr project.
We're basically looking at connecting up all the requirements we've been generating from looking at the code and at our documentation pages and so forth. We're looking at capturing them there, and the StrictDoc folks are going to work on generating SPDX and importing SPDX, so that we should be able to create this stuff in one tool and have other tools import it. That way, as things move throughout the ecosystem, it'll be useful. So what's happening right now? We've got a safety committee, members only, and they basically control what is in scope for talking to the assessors and paying the money for. And then the safety working group is working on the qualifications, looking at the evidence, and setting up the requirements management and so forth. That one is open to anyone to join. If you care about safety, it meets every two weeks, and help is always welcome to start hooking this all up, okay? So what are the results of applying these best practices? Well, we've got over 5.4K forks of Zephyr in the wild at this point in time. And it's always fun to see where they show up, or when other people point them out to me. We have a lot of end-user products on Zephyr now. Because it's open source, and they don't have to tell us they're running Zephyr, it's a bit of — how shall I put it? — a treasure hunt to find these products. So if anyone knows of a product that's running Zephyr, please come tell me. I will happily give you a kite, and we will reach out to them and see if they'll let us know we can use their product on our pages. There are over 550 boards in the repo right now, okay? And there are over 170 sensors already integrated. So there's a pretty good chance that for whatever you're going to work with, you've got a starting point, at least. It may not be as polished as you might want, but there are things there to work from — and please contribute back.
So we've got a wide range of connectivity options now, as I talked about a little earlier. We have a native IP stack that works in most of the modes, including TSN support and the BSD sockets API. We have the Bluetooth host and mesh — we work very closely with the Bluetooth SIG, and there's a working group between the two groups that uses Zephyr to prototype the specs before they come out. So as soon as a spec becomes public, it's in Zephyr; we can turn off the flag that hides it. Zephyr is keeping pretty close to the tip of Bluetooth work, which is very useful for the project, as is the low energy controller — it's compliant with the 5.3 spec right now. There's an active USB device stack sitting there supporting a wide range of families, and we've got native emulation on Linux with this stuff as well. Power management: we want to use as little power as possible. It ties into the scheduler, it's handled by the kernel, and it can be customized through device tree parameters and so forth. Device tree, as I've said, is here. There's device tree in U-Boot, there's device tree in Zephyr, there's device tree in the kernel. There was a really good talk at Linux Plumbers about some of the differences and subtleties of the different uses and implementations of device tree. But we're trying to use that modularity: pick what you need and work from there. And obviously we need security, so there's secure boot and managing devices and updates. We work with the MCUboot project, and we've also got over-the-air updates integrated with some of the other higher-level communication protocol stacks. There are cryptography APIs, and Trusted Firmware is already integrated now. And we've basically built in POSIX support, so some of those interfaces are available, and we can also run on top of Linux for emulation, which is where most of our automated testing is done.
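To give a flavor of the device tree customization mentioned above, a board overlay might look something like this — the controller label, sensor, and address are illustrative examples, not taken from the talk:

```dts
/* Illustrative board overlay: enable an I2C controller and describe a
 * BME280 sensor behind it. Node label, compatible string, and address
 * are examples; the real values depend on your board and wiring. */
&i2c0 {
    status = "okay";

    bme280@76 {
        compatible = "bosch,bme280";
        reg = <0x76>;
    };
};
```

Dropping a file like this next to the application lets the same source build against different boards, which is the modularity being described: the hardware description changes, the code doesn't.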
And then there are people with boards and so forth who will test the specific things they care about. We've also started trying to get a handle on our benchmarking. This is some initial preliminary data using one of the cores, looking at context switches, threads, and so forth. More recently, in the last year, we're seeing more and more graphical user interfaces getting incorporated. The LVGL stuff came in recently, and various interfaces and device drivers are there for you. For inter-process communication we've also got zbus in there now, and we have support for tracing and debugging. So most of what most developers want to use is mostly there now. And if it isn't, start the discussion. We're finally building up a vibrant ecosystem, which was one of my goals for ecosystem enablement right from the start. You'll see that there are people who have tools for it. We've got multiple compilers, multiple tracers and debuggers, and emulators and simulators are there. Training is a weak point for us, but our members are stepping up. Golioth and Nordic have made training available for free to people — a once-a-month type of deal, things like that. So you can get some training that way, and there are people building free courses for it right now; we'll see when they get rolled out. There are also people who will help you deploy your stuff if you don't have the developers who want to work on it. And then we have the security pieces with mbedTLS, and wolfSSL is integrated as well. A lot of these sensors are on the edge, and they need to do some filtering and some analysis before passing signals forward — out of all the noisy information coming in from these sensors, what's significant to pass forward?
So some of them are starting to use TensorFlow Lite Micro or Edge Impulse and so forth for running trained models, and there are examples in the repo right now for that. We're working with the MicroPython and JerryScript folks, and also, sorry, the ROS folks, my bad. There's Sound Open Firmware and its audio libraries there, and then Memfault has some tooling available as well. Remote management is a key use case here, so we've got a variety of options, including some that have a specific security focus, and then we're looking at the robotics stuff. These are all integrations we've seen and found from the community. So the ecosystem has become much more vibrant in the last two years or so, I'd say. Just like the kernel. And if you want to see SBOMs at scale, go to this dashboard here. Basically, just go and look at it, and for anything that says built or passed or generated, if you click on it, you can download the artifacts. You'll see the ELF image, which you could run on the simulator if you wanted, but you'll also see those three SBOMs I told you about. This gets run on commits periodically, I guess every day or every other day, by the Renode folks at Antmicro, and they use it to test their simulator. So they're using this to test the simulator, but they're generating these SBOMs as a byproduct, which shows how easy it is to do. And that's much more accurate than a tool going in afterward and trying to figure it out, because the information is known during the build. I'm getting close to my time, I suspect. If you're interested in learning more, come join us. Yep, the five minute mark. I'll also point out we've got over 8,000 developers on Discord right now, and we have a lot of channels, so there's likely to be a community for the things you're potentially interested in. There are mailing lists, the code is there, and an overview is up on the webpage.
So feel free to join in where you're most comfortable, look around, and kick the tires a bit. And that's, I think, pretty much what's going on with Zephyr. As you can see, we've used lessons from Linux to try to make Zephyr a good place for developers to work, and it seems to be paying off, because they seem to enjoy working in the community. And with that, I will ask: does anyone have any questions? Go for it. Just mine, probably. Here. Do you want to take it? So about the hierarchy of maintainers: who's at the top? Is it an individual? Is it a group? Is it the TSC? Who pulls it all together? So it all gets pulled together in the TSC. And this is one of the things that's different from the kernel. There was a very strong push in certain projects to have one person at the top. What we decided to do with Zephyr, when we formed it and wrote the governance up, is we made that a position electable by peers, okay? So realistically, the TSC chair is as close to someone like that as possible. He sets the agenda for the TSC meetings and rallies everyone around to have a discussion on issues when there are problems. And then we actually formally vote, and it's publicly voted; these TSC meetings tend to be public for everyone. There are one or two exceptions: if you're voting to let someone new from the community come into the TSC, some of those types of votes might happen in private to avoid hurting people's feelings. But the TSC is composed of representatives from member companies as well as representatives from the community. And to get into the TSC from the community side, without being a member, you have to have contributed to the community in a significant way. That's kind of how we're running it, and it seems to be roughly working for us. So that's it. Does anyone else have questions, or does that answer mostly what you're looking for? Yeah, good. Okay.
Any other questions? Please. Would you like a kite? Okay. Can you give him a mic and then bring it back? Is there a portal that aggregates security information for developers? I'm worried because we have a lot of new developers, and I'm worried that I don't understand it properly. So we have a security working group that's open to the public, and that is where anyone can come in and ask security questions. That's also where they're working towards certifications, and where our security architect sits. So if there are things that are unclear on the security side, that's the place to ask questions. And hopefully, if things get clarified for you, you'll contribute a patch upstream to the documentation to make it clear for everyone. So we've got portals, we've got places. We also have the ability, for anyone who's making a product and can show us that they've got a real product effort, to add them to the notification list for vulnerabilities for free. They don't need to be members to be notified; they just basically let us know that they've got a product. And that's how we go. Is that what you were looking for? Okay, thank you. Any other, one last question? Okay, go ahead, Lukuchi-san. Thank you for the talk. My question is very similar: do you have maintainers in the project? The background to my question is that in the Zephyr project there are many tasks other than coding, like testing, making an SBOM, checking safety, and so on, so many activities. So how do you manage such a large set of tasks, and how many maintainers or TSC members do you have? So the TSC is about 40 members, I think, something like that right now; I'd have to go and double-check the numbers, but it's getting around the 40 count. And then for the maintainers, the maintainers file is public. And we have some spots where we don't have a maintainer, and that's a known issue.
So it's not perfect, but we've also introduced the concept of having co-maintainers, so the burden isn't all falling on one person. That's one of the experiments we're doing. So we are using hierarchy structures: we're using maintainer hierarchies, and certain maintainers have to sign off on things. The challenge for us is when things cross multiple subsystems, or when we're doing a massive change; those sorts of discussions sometimes end up in the TSC. Uh-huh. Do the maintainers also do the testing, or is that other people? Yeah, we actually have a test team as part of the project, and they're the ones who manage the CI. We've got someone who's monitoring our CI, and then we've got a test team that's basically working on evolving the test infrastructure. So that's already part of the project too. Okay. Thank you. Would you like a kite? Oh yeah. Okay. Thank you for the questions, and thank you very much, everyone. Thank you.