Good morning, everyone. Let's get started. We're going to go through this fast — we have about 30 minutes, and we still want to leave some time for questions. My name is Ricardo Salveti, and I'm here with Mike Scott. What I'm going to show is how backporting is an old concept, and why we need to start thinking differently about how we maintain and update our software. The talk covers some of the common problems when designing an IoT product, the traditional embedded model people are used to, some examples from the Linux side of how traditional embedded Linux products are built, and then Mike will go over Zephyr: the problems you hit when building IoT products based on an RTOS for MCUs, and why you should try to use and work with the latest software all the time.

Some of the common problems when designing IoT products today: traditionally, embedded products were really simple. You could isolate them, you could replace them at any time, and you didn't need to maintain them for long — they didn't have a long product lifetime. That's changing with IoT. Quite a few of these devices need to stay on all the time, and they need to be on a network. As a consequence, they have a larger attack surface, so you have to handle more problems than you did with the old style of embedded products.
Also, nowadays products bring a really complex architecture: you need to handle security, how the device talks to the rest of the Internet of Things, and the device itself keeps getting more complex over time. As a consequence, you need to keep maintaining your product after you ship it — otherwise it's just going to be one more target for botnets, and you're going to lose control of your product if you don't plan for that.

So what is the traditional approach to embedded products? It's still pretty common, unfortunately: when you design a board or a product, you start from a vendor BSP — a kernel tree, or even vendor drivers in user space. You do your hacks in there, you build your own OS, you customize it, and you fork quite a few projects along the way. You get it working, do some QA, and release — you're done with it, right? And when you have an update mechanism at all, it's usually in-house and not necessarily following best practices. Then, for maintenance, the traditional approach is to be purely reactive and only push updates when really, really needed.
For example, I have a router at home from a big manufacturer, and I've received exactly one update — it was after the WPA2 KRACK issue, where the problem was in the specification itself, and it was clear that if you didn't receive an update you were probably vulnerable. One single update, and that was probably the only reason it happened.

There's still this idea that maintaining a product will be easy: you just cherry-pick stable updates to deliver fixes to customers — cherry-pick from the stable trees into your product tree, or simply backport the patches. The mentality is that this is an easy job, not too complicated: you cherry-pick, you backport onto your already heavily customized OS, you do the same level of QA, and you release to the field. But let's look at how complex that chain really is.

Take the kernel: there's a long supply chain behind a BSP tree. Usually the SoC vendor starts from a ready-made release — a long-term supported kernel from kernel.org. They add board support packages, a lot of code, drivers and patches, sometimes thousands and thousands of lines. Then if you depend on a distribution, the distribution adds its own special sauce on top of the vendor BSP. And depending on the board vendor, they add a bit more sauce on top of that.
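The "just cherry-pick it" plan can be sketched with a scratch repository. Everything here — file names, branch names, commit messages — is fabricated for illustration, but it shows the classic failure mode: the fix depends on upstream changes the frozen product branch never took.

```shell
# Scratch repo standing in for upstream, plus a frozen product baseline.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "v1 driver" > driver.c
git add driver.c && git commit -qm "initial driver"
git branch product-1.0                      # product forks here and freezes
echo "v2 driver (refactored)" > driver.c    # upstream refactors the driver...
git commit -qam "refactor driver"
echo "v2 driver (refactored, fixed)" > driver.c
git commit -qam "fix security bug in driver"   # ...then fixes a bug on top
fix=$(git rev-parse HEAD)

git checkout -q product-1.0
# The "easy" plan: cherry-pick just the fix onto the old baseline.
if git cherry-pick "$fix" >/dev/null 2>&1; then
  result="clean"
else
  result="conflict"        # the fix assumes the refactor we never took
  git cherry-pick --abort
fi
echo "backport result: $result"
```

On a real BSP tree the conflict is rarely this obvious — the fix may apply cleanly but silently depend on behavior your fork doesn't have, which is how backports introduce regressions.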
And then when it comes time to create the product, it's common for the product builder to add even more changes on top. So at the end of the day you get a super complex, very long supply chain. If you have a problem — a vulnerability, say — and you want to apply a fix, it's really hard to trace back who is responsible for what, and to make sure that an upstream update doesn't cause a regression in the SoC vendor's BSP tree. It becomes really hard to maintain over time.

To give an example, here's a quote from Greg Kroah-Hartman, from a thread discussing how to backport the Meltdown and Spectre fixes for the arm tree to 4.9 and 4.4 — they already had them in 4.14 — and it turned out to be a really complicated backport, because they had to change a lot of things inside the kernel. This case is interesting because the software was actually correct: the bug was at the hardware level, but it had to be fixed on the software side, and you couldn't have prepared yourself for that kind of thing in advance. Nowadays it's common to see bugs in specifications, in hardware, and in software, and everything is becoming more complex. So it's a big problem to make sure you're always prepared to handle these issues.

And stable maintenance itself is a complex job — it's not easy. Even if you're always following the fixes going in upstream, it's pretty complicated to identify what you should be backporting.
How do you identify which fixes are security-related versus just ordinary fixes? Sasha Levin upstream is doing a lot of work using machine learning to identify stable candidates — he had a presentation yesterday describing the process — and it's getting better, but it's still a really complicated process to apply in the end. And when you add SoC-specific or board-vendor changes on top, it becomes even more complex. It's also pretty common to see people introduce new issues and regressions while backporting fixes from upstream.

And even if you stay on a stable tree, you may want new features — kernel self-protection work, for example, like stack-overflow protections and a few other things that might be useful in your product. It gets complicated to have those if you're pinned to a stable tree.

To give an example of how bad this gets: here's a CVE that was opened this year, but the issue was introduced in 3.10. The bug was reported and fixed upstream, landing in December 2015. It was fixed in the 4.4 LTS, and Ubuntu picked it up, but not every distro did. The CVE was only requested and published long after the bug was originally identified. There are mailing lists where most distros get notified — linux-distros and oss-security — and that notification only went out in August. CentOS, for example, was vulnerable up to that point.
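One concrete mechanism behind "what should I backport?" is the upstream convention of `Fixes:` and `Cc: stable` tags in commit messages, which stable maintainers (and tooling like Sasha's) key off. A minimal sketch, using a scratch repo with invented commits but the real tag format:

```shell
# Scratch repo; the commits are fabricated, the Fixes:/Cc: stable
# tags follow the actual kernel commit-message convention.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "feature" > f; git add f; git commit -qm "add new feature"
bad=$(git rev-parse --short HEAD)
echo "feature, bounds-checked" > f
git commit -qam "fix out-of-bounds read

Fixes: $bad (\"add new feature\")
Cc: stable@vger.kernel.org"

# Stable candidates: commits whose message carries a Fixes: or stable tag.
candidates=$(git log --oneline --grep='^Fixes:' --grep='stable@vger')
echo "$candidates"
```

The catch the talk describes is exactly that not every security-relevant fix carries these tags — which is why tracking them alone still leaves a vulnerability window.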
So even if you're staying current, it depends on how you manage it and how you identify what to bring in — there's still a big window where you might be vulnerable even if you're tracking all the CVEs. Another example, from Ubuntu: when they were dealing with the L1 Terminal Fault fixes, the backports introduced regressions on both the kernel side and in user space, which is of course not good. And coming back to Meltdown, one pretty surprising case was a backport that caused a bunch of systems to fail to boot. So this is a complicated thing to manage.

This ties back to the upstream discussions happening this week, and at the kernel summit yesterday: this idea of maintaining a product over a very long lifetime based on the same trees — for 20 years, say — is just madness. Products and technologies are getting more complex over time, so we need to start thinking differently about how to handle this, because the current approach is not going to scale.

Now I'll pass it over to Mike to talk about the Zephyr side.

Yeah — I thought it would be important, when we're talking about backporting, to step away from a Linux product and look at other projects. We're going to talk about Zephyr today, but this could really be any open source project your product is based on: how do you stay a little closer to tip, so that as bug and security fixes come in, you can integrate them more easily? So here's a little description of Zephyr.
It's an RTOS — best in the world. I'm sold; we're going to use it. In fact, let's go on a little time journey: we're a company now, we're going to build a product, and we go back to October 2017 and select Zephyr as our software. We're going to do a wearable. It's going to be great, we're going to make a lot of money. The plan is to download the source, start jamming on it, and release one year from today. What could happen, right? It's going to be fantastic.

But as we go through our development cycle, here's December: a new version of Zephyr. What could possibly be fixed — do we really need to take the updates? Oh look: a major overhaul of the build system, HTTP API changes, and they've replaced the zoap (CoAP) library on us. Now it's March, another Zephyr release, and we're seeing bug fixes to LwM2M. This may be a repeating theme — I pick on Zephyr a lot because I work on Zephyr myself — but what we're starting to realize, now that we're into June, is that there are around 1,900 commits in every release: networking and scheduler rewrites, all sorts of things that are really hard to backport. You're not going to get those back into your code without spending a ton of time and effort staying up to date on all these patches. There needs to be a better way.

And here we are getting closer to the day we're supposed to release our wearable — notice all the security fixes we should have taken along the way, and this is only a sample of what's still coming. So take a moment and ask: what kind of product would we have if we had stayed on Zephyr 1.9 and tried to cherry-pick all those fixes in? It just wouldn't work — that's our belief. This is an arms race.
There's really no such thing as secure software — everybody is trying to break it and get their exploits in. The latest software is the most secure software: that's where all the bug fixes are landing, that's where drivers get fixed. Can you imagine contacting a developer and saying, "Hey, I'm working on Zephyr 1.9 from a year ago, I've got this bug I really think you need to look at"? They're just going to say: move to the latest software. On the latest, it's much easier to report issues and land fixes — you get fast review and testing iterations, less load on upstream maintainers, and so on down the list.

So let me talk about how to handle that: how do you take a piece of software with such a large amount of churn and integrate it into your workflow so you can stay a little closer to tip? I want to say up front that it's not easy — this is not something you just roll right through — but it's worth it, and it's actually essential, because as Ricardo mentioned, we have products in the field that need these security updates, and companies' brands depend on being trustworthy and responding as security flaws happen.

Number one: you can't grab 1,900 commits at a time and shove them into your project. That's hard to test, and if you do hit a bug, it's hard to bisect and regression-test. You have to take upstream in small batches, and you have to understand what those 100 or 200 or 300 commits are doing. You may not land all 100 in your project at once; you may find you need the next 200 to go with them so you get a complete set to land together. Once you hit a group that's stable in your development tree, that's the commit you merge in, after it passes your internal testing.

Number two, moving on to testing: who tests the testers? You don't test just to test.
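The small-batch strategy above can be sketched as follows. A scratch repo stands in for upstream, six commits play the role of 1,900, and the batch size and test step are placeholders for your real values:

```shell
# "upstream" plays the project's tip; "product" is our tree.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git checkout -qb upstream
echo base > f; git add f; git commit -qm "base"
git branch product               # our product tree forks here
for i in 1 2 3 4 5 6; do         # six commits stand in for 1,900
  echo "change $i" >> f
  git commit -qam "upstream change $i"
done

git checkout -q product
batch=2                          # e.g. 100-300 commits on a real project
n=0
for c in $(git rev-list --reverse product..upstream); do
  n=$((n + 1))
  if [ $((n % batch)) -eq 0 ]; then
    # run your internal test suite here; merge only batches that pass
    git merge -q --no-edit "$c"
  fi
done
echo "product merged up to: $(git log --oneline -1)"
```

Merging at a known-good commit every N commits, rather than jumping straight to tip, is what keeps bisection tractable when a batch does regress.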
You have to pick why you're testing, what you're testing, and how it affects your project — you have to have a goal in mind. Obviously, a lot of projects have unit tests, and those are good things to examine. Zephyr has something called sanitycheck: it literally runs through every internal structure and runs internal tests, a lot of the time using QEMU. What I actually recommend is running sanitycheck on your hardware, which you can do if you configure it correctly. These are things you can run on every commit, internally or on the tip of the tree — you can set up a git trigger to run those tests automatically. Then look through mainline for samples that demonstrate what you're doing. If we're building a wearable, maybe it's a BLE connection — there's a BLE connection sample. Every time mainline changes, you can run that sample and make sure it works the way you expect. Same thing if you need an HTTP connection or something like that: there are samples that flow through the use cases that match yours.

Number three is understanding the development cycle. Every project develops a little differently. In Zephyr's case — this is on the wiki — each cycle starts with a development period where literally hundreds or thousands of commits go in within the first couple of weeks. Those are large commits, very likely to break you or cause regressions. During that heavy development period you may need to extend your batches: if you normally work in batches of 100 or 200 commits, you may have to go to 400 before you reach an atomic set that leaves your stuff in a sane state — upstream may have broken the system and then fixed it a little later. It gets complicated.
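The "git trigger" idea — having your tests fire automatically whenever new commits land in your tracking tree — can be sketched with an ordinary git hook. The hook body below is a placeholder; in a real Zephyr tree it might invoke sanitycheck or flash a BLE/HTTP sample onto hardware:

```shell
# Scratch repo; the post-commit hook stands in for a real CI trigger.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo ok > f; git add f; git commit -qm "base"

# Hook fires after every commit/merge in this tree.
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
# placeholder for: sanitycheck run / sample smoke tests on hardware
echo "tests triggered for $(git rev-parse --short HEAD)" >> test.log
EOF
chmod +x .git/hooks/post-commit

echo change >> f
git commit -qam "batch merged from tip"   # hook fires here
cat test.log
```

In practice most teams hang this off their CI system rather than a local hook, but the principle is the same: every merged batch gets tested without anyone having to remember to run anything.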
So that's a very heavy period of testing — you can probably plan on doubling your work effort during that window. Then it slows down, and towards the end of the development cycle it reaches a more stable point. That's when you can hone in on your own features and improvements and land more of your own work. So those are the one-two-three of controlling the churn, at least a little. Now I'll hand it back to Ricardo to talk about Linux.

Back on the Linux side, it's not too different. But one thing that helps a lot, when you're trying to always use the latest technology, is realizing this isn't just a matter of more QA and testing and following upstream — it's also about changing your development practices. Depending on the product, it can be really complex, with many kinds of components, and one good practice is to break it down and isolate components from each other, so you can move them independently when required. With Linux we're used to having one single build that produces the whole stack — from the bootloader to the base system, the graphics stack, the applications — so when you need to update one single thing, you have to move everything forward at the same time, and that becomes really complicated to manage. The good news is that with Linux there are a lot of technologies out there — containers, runtimes and so on — that let you isolate those pieces and give you the flexibility to move one at a time when desired.
Another really nice thing happening on the kernel side: the only way to be prepared to use the latest upstream when you need it is, instead of focusing all your QA on your product baseline — which is already a fork — to do more of your QA on the upstream project as it gets developed, ahead of time. That way you prepare yourself for possible regressions and for new features that may be coming. And as Mike said, it's a lot easier to talk with the developers and upstream maintainers working on a project when you find an issue there. If you find an issue in a stable tree that's been out there for many years, it's almost impossible to find a developer who's eager and able to help you with it, because the developers are already well past it.

Two examples here, from Yocto and OpenEmbedded. There's always master, of course, which you can continuously test and help validate. But for the core layers the project also maintains master-next, which runs ahead and helps validate what's coming before it even reaches master. If you hook your testing and QA up to those branches, it's a lot easier to handle things later when you hit an issue. One caveat: master-next gets rebased, so its git history will go away.

KernelCI is also a really interesting project — I think there's a talk later today about it. The whole idea is connecting boards and making sure the kernel is tested as it gets developed.
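The master-next caveat shows up in a tiny sketch — scratch repos stand in for the Yocto/OE layer repo, the branch names mirror the real ones, and the point is that a rebased branch's commit IDs don't survive:

```shell
# Scratch repo; "master" and "master-next" mimic the Yocto/OE layout.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git checkout -qb master
echo a > f; git add f; git commit -qm "master: base"
git checkout -qb master-next
echo b > g; git add g; git commit -qm "master-next: candidate patch"
old=$(git rev-parse master-next)

# master moves on; the maintainer rebases master-next on top of it
git checkout -q master
echo c > h; git add h; git commit -qm "master: new work"
git rebase -q master master-next
new=$(git rev-parse master-next)

[ "$old" != "$new" ] && echo "master-next rewritten: track results by patch, not by SHA"
```

So when you report a master-next test result, reference the patch or series under test rather than the commit SHA you happened to build, since that SHA may no longer exist by the time anyone looks.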
If you're interested in the project, check out those talks. I know there's an automated testing summit, I think tomorrow, that's invitation-only, but I believe they'll be publishing all the talks and recordings as well.

So what we need to do, instead of focusing our QA on maintaining forks of trees, is make a more joint effort across SoC vendors, hardware vendors, IP vendors and so on: continuously test upstream as it evolves, and make upstream the target. Even if you don't think you need to update your product right now, you need to be ready for when something bad happens, so that you can react.

And just to finish — this was really nice, and we weren't expecting it, but Jon Corbet covered quite a bit of this on Monday in his kernel report, with the whole stable-tree maintenance thread and discussion. Even if you want to stay on stable, the only really safe way to manage this is to always be on the latest stable. And to stay on top of the latest stable, you also need to always follow upstream, because that's where development happens — it's the best kernel we as a community know how to make. I think that's where this ends up eventually: the only way to react fast, as the technologies improve, is to keep up with the latest, instead of focusing on the ancient trees — the long-term supported kernels like 4.9, 4.4, or even older. We need to start thinking differently about how we move forward and how we maintain products, especially for IoT. That's pretty much it.
We have about 10 minutes for questions.

Q: In my opinion, this is pushing a problem away from us, up to our customers. If you stop maintaining the old kernels, you might limit which customers can use this — for example, the automotive industry. They rely strongly on the software, and if you change the operating system to a really new kernel, it might require a major test loop to be repeated — all the validation and so on — which is really, really expensive. So I think customers might decide to stay with commercial operating systems, because they'll be certain that when it's really needed, someone will fix the specific bugs without forcing them to change the operating system version.

A: Let me take that through the Zephyr lens. There are a lot of industries that are risk-averse and have this mentality of staying put and keeping things stable. But at the same time, even if you were able to stick with a known release and just bring in stable fixes over time — Meltdown came along to show that backporting can be super complicated and can create new problems. So there's no silver bullet. Which is why, ideally, instead of focusing your QA only on one release, even while you're on a stable tree, you should be doing the same level of testing — at least partially — on upstream as development flows.
That way, if it becomes too hard to backport a fix onto the stable baseline you're relying on, it may not be too complicated to jump to a newer version of the kernel — and that might even be better in the end, because it might be simpler. So instead of focusing on one thing, you still do the same level of testing on what you deliver, but you also focus on testing upstream as it goes, to make sure the quality is there.

I like that you brought up certification — that's a great point, and I think our industry needs to address it. There's a delay built in: we force companies to pay money to get certified all the time, and if they're forced to pay, they're actually disincentivized to release updates, which is counterintuitive. We need to make the certification process easier, and maybe less expensive, so we can certify quicker and these companies don't feel penalized for releasing updates.

We need to get the compliance machinery that's out there — functional safety and so on — to move, to change, to become more agile. The SIL2LinuxMP project, for example, was attempting to show the Linux kernel on a multiprocessor system to be functionally safe, with the idea that other people would then just do a delta on top of what had been done. That's where all of us need to go push those compliance agencies on this testing. I came from a military hardware background; I understand what you're talking about, and I know the costs. But as you said — Meltdown, Spectre — you're not going to prevent these things.
And so you just have a false sense of safety and security. I mean, it's completely, completely false because it has been proven time and time again that you have new vectors, new attack vectors and surfaces that you never expected and you didn't test for and your compliance is actually worth zero except that we're not getting litigated enough to make it matter. We'll go back here real quick. So if you're moving, if you're trying to move with upstream all the time, how do you make sure that the devices like hardware that you use is still supported? So I was on a talk yesterday about the ELT, Extremely Large Telescope. They plan to support it to 2060. So it's 40 years from now. How do you make sure that like hardware that they use is still there? I'd like to address that. If you're more active in the upstream, doesn't that mean that your hardware is gonna stay by very nature more supported? Whereas if you're staying more towards the long old ancient kernels, it's much harder to keep your hardware supported on a newer kernel. So I think as people move to this sort of idea that you're gonna stay a little closer to tip, you'll find that the hardware that they want is actually gonna be easier to support in the future. I think because they're gonna be more active, they're gonna be the ones finding bugs and keeping that hardware current. Kernel CI, exactly. I wanna touch on an interesting point that ties in with your guys' question is that I think you said we're pushing the support then onto our customers, right? Because the APIs change, which is absolutely true. I think at this point in time, like retail consumer market aren't aware of these security issues, right? And so there's no incentive for them to push back on the products that they're buying right now, saying I want this to be a secure product, like a car. There's more and more telematics and connectivity that's being added to cars. And if that gets exploited in a way, it's really bad for your brand. 
Now, I don't think it's happened at the scale where people say, "I won't buy this car brand because it's been exploited," but if that happens, that's a problem for the customer. And I know it's hard — it's a little future-reaching to say we need to be ahead of the curve here — but either government regulation is going to come down and force this model, because that's really the only way to solve these hardware bugs, or it's going to come from the retail side: consumers demanding more secure products, or saying, "I never want to buy this brand again because of the security flaws that were exposed." We're in a hard spot right now because the general public doesn't realize the impact of security in software. One of those two is going to come to a head, and we just have to be ready for it. I don't think there's a silver bullet yet, other than running the latest.

Q: If all the downstream consumers move to master, see the issues as they come in, and integrate them as they go, don't you think there could be a feedback loop to the maintainers saying, "Slow down the pace of changes, otherwise we can't follow and we'll move to other, more stable project trees" — and in the end that limits the pace of change and the evolution of the project?

A: I don't think you're necessarily going to limit the development. But if you're continuously testing, you can make sure that when a bug happens, the upstream maintainers are notified, and I'm sure they'll care enough. That's the beauty of working with the latest: if you're the developer or maintainer, it's a no-brainer — you make a modification, there's a regression, someone complains, you need to be aware of it, and you're more than happy to work on it.
I don't think the pace of change is going to slow down as the technology gets more complex. But if we're testing together with upstream all the time, then at least when you find issues, the upstream projects are probably going to be more than willing to stop and help you fix them as you go — and so avoid regressions.

One of the other things, if you start to adopt this model: today, as you said, everybody is building their own product, their own kernel, their own everything from scratch, so all the testing they do only benefits their own product. There isn't that feedback loop. If we start building our products closer to tip, or the latest stable, whatever it is, but tracking it, then everybody's testing makes everything better, and everybody can see it. You don't get this "I'm not going to buy that product from China because I don't know what's in it." If you build on the same foundation, you get the network effect: everybody's testing improves it, and the maintainers want to work on it, because it's the latest stuff — not, as you say, ten years old.

And on certification, I agree with what was said: at the end of the day, the certification process for all products — industrial, healthcare, safety — has to change. You've seen this in the phone space: you're supposed to certify absolutely every change through the carriers, and guess what, that doesn't happen now, because there are too many things that have to be fixed too quickly. Look at how quickly your iPhone or your Pixel is updated — and most phones aren't updated at all, which is pretty scary. It only takes one massive thing like Spectre/Meltdown — and that applies to IoT devices too — and you have to be able to update your product immediately.
So if we don't move towards this model, I think we're all going to be in trouble at some point.

One other thing to think about: years ago, there was pushback on test-driven development. People did not want to do it because they thought it was extra work. I look at this as the same kind of idea. Staying on top of master, on top of tip of all of these things, is just a little bit of extra work along the way, just like writing tests was a little bit of extra work along the way. If you keep doing that and you spread it out, it's completely manageable. If you wait for these big jumps every six months or every two years or whatever your cadence is, you create an immense amount of work, and every single time you start a new project you have completely obsoleted yourself and created immense amounts of future technical debt. We can't live like that anymore. There aren't enough programmers and developers in the world to do the work that this future technical debt demands. Especially when we're relying on open source projects, where not everyone necessarily gets paid to be fully focused on them. Those developers are not going to be willing to go back in time and maintain stuff for a long, long time; they're always looking forward, which is why it's critical to be together with them, to make sure that as we move forward, we move forward safely.

And to add to that, I think we've also reached a point where the consumer base is more comfortable taking updates. Ten years ago it was always, oh, I don't know, I'm not going to update that, I might break my router. Now it's, I want that company to get me an update, because I know it's probably fixing something important. So there's a key shift there: you actually expect updates now rather than dread them.
It's like the WPA issue: there's a bug in the specification and you need to be updated, right? If you understand that, then when you have a router that wasn't updated, you know you're vulnerable, and there's nothing you can do. So the customers are expecting updates, and updates are actually becoming a good thing, because they're bringing fixes and making products more secure over time.

So do you think this model necessitates a change upstream, in how we do releases and in the release cadence? If you look at the evolution of the Zephyr project in particular, since that's what I'm personally working on: when we first started we had, I think, a monthly release cadence, then we stretched it, and now it's quarterly. And as we're talking about LTS, there's the potential that we might want to stretch the next one. There's this constant pressure to keep pushing that wider and wider. So, moving to this model and taking smaller chunks at a time, does that necessitate a quicker cadence? I'm wondering about that feedback loop back into the upstream: do we need to change how we're doing things?

I think one of the points is that it almost doesn't matter what the actual release schedule of a particular piece of software is. If you're consuming from tip in small chunks, whether it releases a final every six months or every three months, you can still have a fairly stable point along the way. And it's up to the company itself to find where that is, and where they merge that code back into their code base. If they run enough testing, they find that it's stable at this point in time, so that if they have a regression and need to release, they can push that into their code and then move forward. The goal is always just to be ready to respond.
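That "fairly stable point along the way" can be sketched as a tiny git workflow. This is a hypothetical illustration, not anything from the talk: the repo, the chunk names, and the grep-based "test suite" are all stand-ins. The idea is simply that a known-good tag only advances when tests pass, so there is always a point you could ship from.

```shell
# Hypothetical sketch of the "stable point" workflow: take upstream changes in
# small chunks, but only advance a known-good tag when the test suite passes.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name CI

echo v0 > app.txt
git add app.txt
git commit -qm "product baseline"
git tag -f known-good            # the point we could ship from right now

take_upstream_chunk() {          # stand-in for: git fetch upstream && git merge
  echo "$1" >> app.txt
  git commit -qam "merge upstream chunk $1"
}
run_tests() {                    # stand-in for the real regression suite
  ! grep -q broken app.txt
}

for chunk in 1 broken 3; do
  take_upstream_chunk "$chunk"
  if run_tests; then
    git tag -f known-good        # tests pass: advance the shippable point
  fi
done

shippable=$(git log -1 --format=%s known-good)
echo "shippable point: $shippable"
```

The tag stops advancing as soon as a regression lands, so even if tip keeps moving every day, a massive vulnerability finds you with a tested point to release from.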
As long as you're ready and you have a stable point, I think that's the most important thing. We actually do this; that's why we're giving this talk. We manage this sort of flow, and it's a lot of churn. There are a lot of commits that come into Zephyr, and we bundle them through. We run a lot of tests, we try to get them on the upstream side, and we also do regression testing and bisection, working with the upstream maintainers. We submit a lot of bug fixes. So what we do is bundle these together and write a really nice report on the highlights: things that have changed, things that have moved sideways, what to expect, what's coming. Our job is to make it a little easier for the next person that consumes that, so they know what's coming next. If we present them with these little stable bites, it makes their development cycle a lot easier. And the more people that get involved with that feedback loop, the easier it becomes. This is something that could really get easier over time.

We have one question in the back.

So instead of doing shorter releases, would it help for the project to promote stable points, saying, okay, this point is a coherent chunk of changes, you can do your testing on it? Because what you are doing, every customer will have to do, and this information is the same for everyone, so maybe it could be shared by the project.

That's kind of how Linux kernel development works. Your rc1 and rc2 may be pretty unstable, right? But by rc5, rc7, that's more of a stable point. Zephyr has that as well, right? Most projects do. One thing I've seen that we don't have in Zephyr, but correct me if I'm wrong, that is good in Linux, is the concept of linux-next.
For example, I know that with Zephyr, one of the complications is that during the merge window there's a lot of breakage, a lot of issues going on. In the kernel, what they did is merge ahead of time into a tree that can be tested, and when the window opens, it basically brings what's in next into master, so you know it's somewhat more stable. So maybe that's one practice that might help the Zephyr project as well. Yeah, it's possible; we should try that. Part of the problem is that you want people to think, okay, I've committed, now I can walk off to the next big thing, and they forget they have to make a spot for the upcoming release. So yes, people have thought about it.

Can you elaborate a little more on how you do this? Are you releasing every day? I'm trying to put myself in your proposal: we do updates, but we move to a new LTS every year; we don't rebase everything every week. It seems to me like you are taking changes every day, and that would imply that I have to run full validation every day. How often are you releasing?

This is a good point. What I would say is, you're already ahead of the curve by moving to the latest stable, and that's really commendable. In your case, what I would recommend is that you test so that you know you can move to the latest. You don't necessarily have to, but you know you're compatible, and if you had to, you can say: I know it boots, I know it passes the smoke test. So if there's a massive vulnerability, you're not sitting there going, I have no idea if mainline boots, or whether we can move to it right now. I think that's the model we're going to see, this intermediate step.
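The linux-next practice mentioned a moment ago can be sketched concretely. This is a made-up miniature, with hypothetical branch names: pending topic branches get merged into a throwaway "next" branch that is rebuilt and tested continuously, so the merge window pulls in work that already survived integration testing instead of colliding for the first time in master.

```shell
# Toy sketch of a linux-next-style integration tree (branch names are made up).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name CI

echo base > base.txt
git add base.txt
git commit -qm "mainline base"
base=$(git rev-parse HEAD)

# Two in-flight topic branches, as maintainer queues might look.
git checkout -qb topic/driver-a "$base"
echo a > driver_a.c
git add driver_a.c
git commit -qm "driver-a: new work"

git checkout -qb topic/driver-b "$base"
echo b > driver_b.c
git add driver_b.c
git commit -qm "driver-b: new work"

# Rebuild "next" from scratch: start at mainline, merge every pending topic.
git checkout -qB next "$base"
for t in topic/driver-a topic/driver-b; do
  git merge -q --no-edit "$t"
done

# Run the (stand-in) integration test on the combined tree.
if [ -f driver_a.c ] && [ -f driver_b.c ]; then next_ok=yes; else next_ok=no; fi
echo "next integration: $next_ok"
```

Because "next" is disposable and rebuilt from mainline each time, a failed merge or a broken combination is caught and reported without ever destabilizing master itself.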
I mean, I think everybody that's a developer for Linux would love to see everybody running tip, but that's not feasible right now. So I think it's a slow transition: move to the latest stable update, but be ready to move further because you have the CI loop running. It's mostly hands-off, and then: oh, hey, this broke, maybe we should look at it, because it's something we're eventually going to have to deal with the next time we move to a stable update. You're eventually going to have to deal with those problems anyway. So it's an early warning system, and, if you do it right, you're also prepared to make the move if you absolutely have to.

That's the point where the development is actually happening in the team, so when we move next year, we're already there.

Yeah. So we're running a subset of our CI, not our full CI, on tip. I bet three quarters of our CI workload is just stuff on tip. In a way it's not quite as important, but we're seeing what the breakages are. A nice thing is, I'm not doing the merge-ups these guys do, but they have an idea ahead of time: oh, this is a nice time to merge up, or, this is going to be a problem during the merge-up. You're going in eyes wide open, which is nice, because otherwise you're saying, I'm going to merge today and I don't know if it's going to take me a week, a day, or two months to get it all working. So we always kind of know, and when we think one looks pretty good, we tag it, do a full test on it, and see how things look. If it looks good, we put it out; if not, we keep waiting and get it stable. Whenever everything looks good, put it out.

But whatever you are doing the testing on is focused on what you are developing and working on. You are not testing everybody else's work. Correct.
And that's easy to do in a team focused on one thing, but trying to do it for everything yourself is another story. Which is why the key is that it needs to be shared across vendors and across the board, and fed back upstream. We were using Zephyr in the talk, and other people are using the same system. Sometimes it's all closed off and not open, and there are so many things happening where it's, okay, I know it's happening, but I really don't care what's going on there; I'm glad it's happening, and we should not have to worry about it.

Yeah, this is a talk designed for downstream: how a company can approach upstream projects. I just want to speak to that. Basically, if you set up your CI system to watch tip, creating commits for your own review system, with your smoke tests and everything running on that, you have your own basic "next" for your own product. It's always running in CI, and if you have some members of your team spending the time to fix the breakage there, you're aware of exactly what's going to break in your own stuff, and then you decide when you're going to pull in that next release and when it becomes your product. If you have that constantly running, again, you spread the work out over time, and because you're catching things that broke for you in upstream, if you're working with upstream, you can actually get that stuff fixed ahead of time for yourself.
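When that always-running CI does flag a breakage, the regression-and-bisection work mentioned earlier is largely mechanical with git bisect. Here is a toy, self-contained demonstration; the history and the per-commit smoke test are invented for the demo, not taken from any real project:

```shell
# Toy git bisect run: build a throwaway history with a known regression,
# then let bisect binary-search for the first bad commit automatically.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name CI

# check.sh is the per-commit smoke test: exit 0 means the commit is good.
printf '#!/bin/sh\nexit 0\n' > check.sh
chmod +x check.sh
git add check.sh
git commit -qm "initial: tests pass"
good=$(git rev-parse HEAD)

echo one > work.txt
git add work.txt
git commit -qm "harmless change 1"
echo two >> work.txt
git commit -qam "harmless change 2"

printf '#!/bin/sh\nexit 1\n' > check.sh   # the regression sneaks in here
git commit -qam "regression: breaks the smoke test"

echo three >> work.txt
git commit -qam "harmless change 3"

# Binary-search the history: git reruns check.sh at each candidate commit.
git bisect start HEAD "$good" > /dev/null
git bisect run ./check.sh > /dev/null
first_bad=$(git log -1 --format=%s refs/bisect/bad)
git bisect reset > /dev/null
echo "first bad commit: $first_bad"
```

With a scripted smoke test, nobody has to inspect commits by hand: the cost of pinpointing a regression stays roughly logarithmic in the number of commits since your last known-good point, which is exactly why taking small chunks often is cheaper than one giant merge.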
And I guarantee that if you've got a product connected to the internet for 10 years, with the added complexity we're seeing in products today, your cost of keeping that product up to date is far lower than having to deal with a crisis two or three years down the line. Even if you never deploy those updates because you never need to (it's a pacemaker in somebody's chest, and none of the security problems affect that pacemaker), you're ready, because somebody will break your product, somebody will get in, and it may not even be your fault. At that point you basically can't cope: you don't have time, it takes months to get these patches back into old software, and you honestly don't know where that software came from.

So we're out of time, but obviously, if you want to, come and talk to us afterwards. Thanks a lot. Thank you guys.