Okay, so good afternoon everyone. Thanks for attending this session. Today we're going to talk about OpenWrt and LEDE. So let's first ask a question: who knows OpenWrt in the room? Good. Who knows LEDE? Wow. Okay. All right. That's going to be interesting. First of all, let me say a little bit about myself. In 2004, I bought a Linksys WRT54G, which was this ancient and venerable router, and it got me interested in Linux, cross-compilation, and later on OpenWrt. In 2006, I became an OpenWrt developer, which essentially meant having an @openwrt.org email account. From 2006 to 2013, I was very active with the project. In 2013, I joined Broadcom to work on set-top box and cable modem Linux: kernel, toolchain, bootloader, root file system. When the LEDE project was announced, after a couple of months of reflection, I ended up choosing to request commit access so I could keep working on the code base while remaining in OpenWrt at the same time, which is kind of a weird thing to do, I guess. But here we are. So in summary, we're going to talk about OpenWrt and LEDE from a technical perspective: what they are, what they do, why they're useful in the existing embedded Linux landscape. Then we'll go a bit more in depth into the design and features, with a few examples that might be helpful. And finally we'll talk about the drama and the interesting bits about what's going on in the project: where we currently are, why we reached that point, and what to expect next as a user and/or developer. So let's start with an introduction to OpenWrt and LEDE. What are they? They're essentially three things. First, they're a build system, very much like Buildroot or Yocto: they have their own way of building packages, with recipes to build other software. They're also a Linux distribution, in the sense that it's not technically a GNU/Linux distribution.
It's more like a BusyBox/Linux distribution, and there's also a bunch of custom user space, due to the specific niche that OpenWrt and LEDE want to address, which requires this custom user space. And maybe more important than these two technical things are the communities surrounding the two projects: the wiki, extremely active forums, mailing lists, code repositories of course, a ton of users, fewer developers, and a fair amount of contributors to the two projects. So if we actually look at what we have: on the left side there is open source software that could come from pretty much anywhere, HTTP, Git, SVN, you name it, or a local file. You have the OpenWrt/LEDE user space components that are either versioned directly with the build system in the same repo or coming from a separate repo. Then there's the core of OpenWrt/LEDE, which consists of makefiles, a .config based approach that we'll talk about a little later, and several tools for different purposes. On the right side, in green, are the intermediate build products of the build system. Based on your configuration and selection, you'll get a kernel image for your platform, and a root file system in whatever format your platform needs: could be UBI, UBIFS, JFFS2, ext4, you name it, basically. If your platform requires a special bootloader, we can eventually build it as part of the platform; like if your bootloader is too ancient and you want to use a newer version of U-Boot or something like that, that's a possibility. And finally we have a toolchain; with OpenWrt/LEDE we can do different things there, which we'll talk about a little more, but it's mostly an intermediate build product used to build the other products. In the orange boxes you have the redistributable components. So packages, and by that I mean .ipk packages, similar to Debian or RPM packages, that you can install later on.
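As an aside the talk doesn't show: a current OpenWrt/LEDE .ipk is simply a gzipped tar archive wrapping Debian-style metadata. Here is a toy sketch of that layout built by hand; the package name, paths, and contents are made up purely for illustration:

```shell
# Build a toy .ipk by hand to show the layout opkg expects:
# an outer tar.gz holding debian-binary, control.tar.gz and data.tar.gz.
mkdir -p ipk-demo/data/usr/bin ipk-demo/control
printf '#!/bin/sh\necho hello\n' > ipk-demo/data/usr/bin/hello

# Debian-style control metadata describing the package.
cat > ipk-demo/control/control <<'EOF'
Package: hello
Version: 1.0-1
Architecture: all
Description: toy demo package
EOF

echo "2.0" > ipk-demo/debian-binary
tar -C ipk-demo/control -czf ipk-demo/control.tar.gz ./control
tar -C ipk-demo/data    -czf ipk-demo/data.tar.gz .
tar -C ipk-demo -czf hello_1.0-1_all.ipk debian-binary control.tar.gz data.tar.gz

# List the three members of the finished package.
tar -tzf hello_1.0-1_all.ipk
```

Unpacking a real .ipk from a download mirror with `tar -xzf` shows the same three members.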
You have firmware images: for 95% of the supported platforms, you can basically upload that file to the web interface of your router, and on the next reboot you're into OpenWrt/LEDE. We have an SDK that basically encapsulates the toolchain and the essential parts of the build system, so as an application developer you can create packages directly without rebuilding all of these things. And we have an Image Builder, which is more geared towards integration of existing packages into an existing image. Some of the design goals: probably the most important is the maintainability of what we're doing with this project. As much as possible we try to work with the latest technologies, and by that I mean the latest kernels, the latest toolchains, the latest TLS/SSL libraries as well, and not use ancient CGI web interfaces or things like that, but rather JSON-RPC and similar. As much as possible, and that's just like any Linux distribution, we try to make frequent updates to the different software that is shipped. Another goal of OpenWrt/LEDE is ubiquity, in that most off-the-shelf routers you can find out there will be supported within weeks or months after they've been shipped and sold to the public. Most of the time that's thanks to the community, which does a tremendous job of reverse engineering the vendor's tarball and doing what needs to be done in the kernel or in the tools that OpenWrt uses so you can actually get it running on your device. With LEDE there's a desire to extend the scope beyond wireless routers, and we'll talk about that and the reasons for this reboot slash fork. And ultimately, a lot of people in OpenWrt have tried to work with hardware vendors in the past to get OpenWrt enabled on their platforms as the actual standard platform that you would get as a consumer. The second part of the design goals is user empowerment.
Since it's open source, you can inspect the code, modify it, customize it, do anything you want, and, while I don't have exact data to back this claim, you should see superior quality and control compared to the vendor-provided firmware. If not wireless quality, and that's not the topic of today, at least control is definitely granted, because it's open source. One thing that's very tempting is to differentiate yourself a lot from existing projects, and OpenWrt and LEDE have done a very, very careful job in that area: there is only selected differentiation. The primary target is Wi-Fi routers and DSL/cable routers in general, so that specific use case needs to work very well, and it should give you a state-of-the-art network experience, not something that's hard to configure or use. And finally, because of that and because of other stuff that got added along the way, you could consider it a turnkey solution to build real products from. So where do we stand in the embedded Linux landscape? On the Y axis you have the complexity, by which I mean how difficult it would be to understand the code as a beginner, first-time user of the build system, and on the X axis you have the number of components and packages supported. We're kind of in between. Buildroot is simple to understand, has a fair amount of packages supported, and has its own feature set. OpenWrt/LEDE, the blue box, is subdivided into the core packages, which is what you get if you just download the build system, and then there's the notion of feeds, which allows you to extend the distribution with more packages. Once we start adding those, we're closer to the thousand-plus packages available and supported by the distribution. And finally we have Yocto/OpenEmbedded, whose scope is a lot different and which is, in my opinion, much more complex to comprehend as a first-time user.
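To give a sense of how approachable the build system is, a first build boils down to a handful of commands. This is a rough sketch; treat the clone URL and the job count as assumptions that may differ from what the project documents at any given time:

```
git clone https://git.lede-project.org/source.git lede
cd lede
./scripts/feeds update -a      # fetch the package feeds
./scripts/feeds install -a     # make feed packages selectable
make menuconfig                # pick Target System / packages
make -j4                       # build toolchain, kernel, rootfs, images
```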
In terms of release timeline: interestingly, Buildroot served as the basis for OpenWrt back in 2003. The Buildroot build system got forked and heavily modified to support exclusively the Linksys WRT54G, support for creating .ipk packages was added, and from there a new timeline existed until sometime last year, when LEDE spun off and created its own timeline. A hint: if you look at this timeline you'll see that, for instance, the Kamikaze 8.09 release, named after the year and month it was released, essentially covered three years, and this pattern kind of repeats itself. So one complaint that people have made about OpenWrt is the lack of frequent releases, and we'll see later that it's actually a big problem. LEDE has just released 17.01.0, I think it was two days ago or yesterday depending on the time zone, so congratulations. And the arrow we have in dotted line is whether OpenWrt and LEDE will reconcile. A word or two about router security. I'm very pleased to see that this has been a very important topic for this session of ELC. I can't stress it enough that having control over your router should be your top priority as a citizen, as a user, or as a developer, because routers are by far the most interesting attack surface for a lot of things, in particular botnets. Most of the time there's no monitoring software running on your router, and there's a ton of security flaws that nobody is looking into fixing, but rest assured that a lot of hackers are interested in exploiting them, and there are millions of vulnerable Linux devices out there, which makes them extremely easy to attack. And the industry has converged on maybe one, two, three specific architectures, so they're all going to be pretty much the same MIPS or pretty much the same ARM processor: you write one exploit for these guys and you're going to address a lot of devices. And it's the gift that keeps on giving: every day, or almost every day, or every month, you hear about a new security vulnerability affecting brand-X routers or QNAP NASes or what have you. So don't rely on the vendor to fix this; take control now. Next we'll talk about the actual design of the build system, toolchain, SDK features, with examples. The build system is written in GNU Make, not Python or BitBake or things like that. It produces .ipk files for software packages and kernel modules. That means that when you write a recipe for a package, the build output you get is an .ipk file that we later install into the root file system we're going to create, so everything is a redistributable package by design and by default. It abstracts all the autotools, CMake, bare-metal Makefile, and libtool patching, as you may expect. It has a make menuconfig based interface, which is mostly ncurses. There's a whole lot of effort put into dependency resolution and configuration validation; for instance, if there are kernel modules that are not available for your specific kernel version, they'll just not be selectable. It supports partial rebuilds of everything, so if you started and then interrupted your build midway and your toolchain is halfway there, it will restart where it left off; it won't erase the whole toolchain and start again. It supports building for different targets, for instance MIPS or ARM or ARM64, within the same source tree, so within a few commands you can actually switch between build environments. And it builds in parallel whenever possible. A natural question would be: why not use Buildroot or Yocto? Buildroot did not, and still does not, support packages; that's part of its design goals and that's great, but it was a great basis to start from. Yocto/OE is a little too complicated and also too slow, though maybe we're getting into holy war territory here, so it might not be that interesting. The menuconfig based interface kind of looks like this:
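The slide itself isn't reproduced in this transcript; roughly, the top level of make menuconfig looks like the sketch below. The entries are abbreviated and approximate, not the exact wording of any particular release:

```
Target System (Atheros AR7xxx/AR9xxx)  --->
Subtarget (Generic)  --->
Target Profile  --->
Target Images  --->                          (initramfs / SquashFS / JFFS2 / ext4)
Advanced configuration options (for developers)  --->   (binutils/gcc/libc versions)
[ ] Build the LEDE Image Builder
[ ] Build the LEDE SDK
Base system  --->
Network  --->
Libraries  --->
Utilities  --->
```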
At the top you choose your Target System. There's a notion of subtargets, which I'll explain a little later, which is kind of an additional layer of customization, and then another layer, profiles. In Target Images, for instance, you can choose what file system you want to target: initramfs, SquashFS, JFFS2. You can configure the so-called advanced configuration options, like which toolchain version you want: which binutils version, which C library version, which GCC version. You can decide whether to build the Image Builder and SDK, and then we move on to the actual package selection, which is organized in categories: base system, networking, utilities, libraries, and the more packages you add, the more the menu obviously expands. About the toolchain: OpenWrt and LEDE prefer using vanilla GCC and binutils. For some time we used the Linaro toolchain, and now we're back on the vanilla versions, with additional patches which are mostly customizations, like the code model for specific platforms. The only exception is ARC, which was just submitted by Synopsys and is still work in progress with upstream GCC. The default is the so-called internal build, where you let OpenWrt build the toolchain from scratch: it downloads all the sources, builds binutils, GCC first pass, the C library, GCC second pass, and then you're done. But you can also use an external toolchain if you desire: it could come from crosstool-NG, or it could be a CodeSourcery binary toolchain. OpenWrt/LEDE support glibc, uClibc-ng, which is the continuation of uClibc, and musl libc, although the default is musl nowadays. In terms of kernel, OpenWrt and LEDE try to use the vanilla kernel and track LTS releases, so right now we're at 4.4 and the next one is going to be 4.9. There are OpenWrt/LEDE patches, which are a collection of platform
agnostic patches that are necessary for the system to bootstrap and/or to extend functionality that's not upstream in the kernel yet, and there are platform-specific patches. As a developer you might prefer the option of building an external kernel, which could live in a dedicated directory, and/or you might want to clone directly from a Git repo, with a branch and everything. One particularity compared to other build systems is that the kernel configuration is managed via fragments. I'll explain a little later, but each layer can add to or remove from the kernel configuration, so it can be both a pain and a blessing at the same time. So if we look, for instance, at... can you guys read this? Yeah? Okay, all right, I'll move over here. This is an example of a package Makefile that you would write if you were to port an application and try to build it. In the topmost rectangle you define the package name and package release, which are used by the .ipk part of the build, and how you want to download it. This one, for instance, is coming from Git, so you specify the protocol as git and indicate the URL to download it from. The package source date: since it's a Git snapshot, the two key identification metrics are going to be the date and the commit, and then there's a hash to verify the download. In the third box we have the package license, which is useful for later extraction if you need to produce a compliance list or something, and the person responsible for the package. Then we include the Makefile macros that are going to utilize this information. In the next box we define where the package is going to appear in make menuconfig: this one is in the Base section and the category is... I can't even read it myself, sorry... the Base system. Then there's going to be a bunch of dependencies that
we can express, so this one depends on two packages: one is libubox and the other is libjson-c. Then a title, which is just a friendly name if you want to search for it in make menuconfig, and the URL of where the upstream source for that package lives. Probably the most interesting part is the second-to-last section, which is how we actually create the package. This one's very simple: you create a /usr/bin directory and then you stage your binary in there; it's called jsonpath in the sources and we call it jsonfilter in the final package. And finally we add the package to the build system, which is just an eval call of the BuildPackage macro with the package name. This heavily relies on GNU Make macros, so it's not as simple as, say, Buildroot, if you're familiar with it, but it's an abstraction built on top of plain GNU Make syntax, so humans can understand it, hopefully. Example workflow: if we take our package, from an existing build tree it's very easy to just clean, compile, and install the package. These three commands would of course expand to three different make targets, but it's as simple as that. So if you're iterating over the development of your package, you don't need to type make and wait until the build system arrives at the section where it builds jsonfilter; you can just do it directly. Another example: if you have ethtool, but it's not enabled in your current config, you can still override that by just enabling it, very much like you would do with the kernel config, and you can just say: download the sources, and that's what it does. And finally, one thing that OpenWrt and LEDE use a lot is something called quilt, which is a way to manage patches within a directory; it's kind of a poor man's version of Git from before Git existed. So if you have
patches that apply on top of an upstream package, you can manage them directly from the build directory: remove them, add new files, whatever. The platform layer: this is probably one of the most interesting parts of OpenWrt. There are four distinct layers in the build system. The first one is the generic layer. In that layer you find the generic kernel configuration that's common to all platforms, not just one, all of them; for instance, SquashFS is going to be enabled for pretty much all platforms, or JFFS2: everything that makes sense to share is there. You also find some patches, the OpenWrt-specific patches that I mentioned before; for a long time, for instance, we had a custom implementation of xz decompression on top of UBIFS that wasn't part of the mainstream kernel, so it was added there. And we have base files, which are going to be added into the root file system. Then you can subdivide that into multiple platforms. If you looked at the source tree you would see, for instance, the popular Atheros ar71xx platform being one platform, x86 being another, Broadcom DSL routers as another. These platforms can augment the generic kernel config and patches with their own specific changes, and the same thing goes for base files. One additional thing we can do at that layer is change the default package selection: if your platform wants, let's say, USB kernel modules, you could put that information there; it's not mandatory, but it's one way to do it. Another layer is the so-called subtarget. The best example I can come up with is: if your platform exists in two different endiannesses, one little-endian, one big-endian, you would most likely create a little-endian subtarget and a big-endian subtarget
and there you could define specific kernel configuration, most likely enabling little-endian and disabling big-endian and vice versa. And finally you can have a profile, which is a way to further customize the package selection and the base files, as well as the firmware creation. Some SoCs, for instance, exist in both routers and NASes: routers would typically run from flash, SPI NAND, and NAS devices may run from an internally attached hard drive, so if you need to make that kind of distinction, you can do it there. Hopefully this is a little more readable. If you want to define a platform, it's pretty simple. At the top you include the macros you need. You define the architecture, which is used by both the kernel configuration and the toolchain, here for instance ARM. The board name is realview, which is an emulation platform from ARM. Features is kind of what your platform supports: this one supports an FPU, which is going to influence how the toolchain is built, and supports ramdisk. We could put additional stuff in there if you have CPU-specific optimizations to specify: this one is an MPCore version that supports VFP, but if you had a newer version you could put, say, a Cortex-A15 that supports VFPv4, and you could target specifically that ABI and floating point model if you wanted to. We tell it which kernel patch version this is. The device type is a way to influence the default package selection; for instance, for a developer board you may not want iptables, bridging, stuff like that. We give it a short description to appear nicely as a user-friendly name in the menuconfig presentation. Finally, a kernel name: when you do make ARCH=arm, what is the final image that you want to build? This one wants a zImage. And finally we add it to the build system by just calling this line, same as with packages. It's reasonably easy
to work with the platform. You can build just the kernel modules if you want, you can build the kernel image and firmware, you can manage the patches with quilt, and, maybe more interesting, you can actually switch between environments. So if you have an ARM-based platform, you can create a new environment for it, which is essentially going to save your .config file and a bunch of other stuff that moves around between builds, switch to it, type make, stop there, and then switch back to, say, the MIPS platform you're working on. One thing that's pretty cool is that even kernel modules are packages, which means you can actually install them later on. It's the same idea as before: you define a kernel module package name, here it's Tigon3, the Broadcom driver; you tell the build system which kernel config option it's keyed on, what the dependencies are, and which file to copy, this one being drivers/net/ethernet/broadcom/tg3.ko. We don't use modprobe in OpenWrt/LEDE, for space constraints, so instead you kind of have to teach it the insmod ordering: if there are dependencies, you use a number to indicate the order. And finally you add it to the build system. One thing that's also very popular and useful is extending the default package base with something called feeds, which are essentially URLs to other packages. Here we have two examples. In the first one, src-git means you're going to download from Git; packages is the name of the feed, which is an identifier to find these packages later on; and then the URL where these packages live. You could think of the second one as your local development repository of package recipes, called custom, which is just a symbolic link or directory. From there, there's a feeds script within the sources that allows you to search, update, and install packages; install in that case means telling the build system
how to find them and how to build them later on. One thing I alluded to earlier is the Image Builder and SDK. The Image Builder is essentially a way for you to take an existing image for a router, say a WRT54G, where you've built just a minimum amount of packages, and later on you're like: wouldn't it be great if I could just add OpenVPN to this firmware image, create a new firmware image with that, and then install it on my router? That's exactly the use case the Image Builder is for. It contains the kernel image, a bunch of recipes and tools, and allows you to recreate a firmware image using pre-compiled packages on the left side as an input. The SDK is just the next step: if you want to redistribute a toolchain, that's essentially what you would do. You would use the SDK, which contains the toolchain, recipes, and tools, takes open source software, or proprietary software for that matter, as input on the left side, and produces a binary package that is redistributable. So if you link the two together, you can actually go from source to package, and from package to image. We are now into: why did we actually create all this custom user space in OpenWrt/LEDE? The reason is manifold. The first part is that modern systems require coordination between heterogeneous components. In a traditional embedded system back in 2003, for instance, you would have maybe a DHCP server and a PPPoE session, and your DHCP server doesn't even know about your PPPoE session, right? You could work things out with scripts and stuff, but it still wouldn't be the way it should be done. If you have any kind of user interface, CLI (command line interface) or web graphical user interface, they're going to change the system configuration, and that needs to be told to the applications that care about the configuration; it's not just a HUP signal that you send to the software. Networking devices are incredibly
more complex these days, and we'll talk about that a little more, but it's no longer just IPv4 and DHCP to provision your device; it's a lot more complicated. And finally, since we want to be able to update frequently, we need a proven and working update mechanism that does not rely, as much as possible, on the bootloader or anything vendor-specific. If we look at the OpenWrt/LEDE software stack, we essentially have six different components. The first one on the left is called netifd, which is essentially a network management daemon; it deals with event-driven networking. For instance, the Ethernet link goes down, everything gets disabled; the Ethernet link comes back up, everything gets re-enabled, VPNs get re-established, and things like that. It also deals with the stacking of network devices. We have something called procd, which is a process monitoring daemon, kind of like systemd if you want to think of it that way, that does jailing, hotplug message generation, watchdog, syslog, init scripts. Then we have LuCI, which is the web interface. It's written in Lua, it supports plugins and modules so it can be made dynamic as you install and remove packages, it uses JSON-RPC to talk to other parts of the system, and it's got a so-called ubus exporter, which I'll talk about. The binding blue arrows are the so-called ubus, which is the system bus: it's essentially like D-Bus on your desktop, except smaller and simpler. ubus is a socket-based IPC bus, very much like D-Bus. It supports ACLs, so you can say this daemon can talk to that daemon but not this one. It exports methods and signals, so you can call into other software through the bus, and you can also subscribe to notifications and events, and it supports a binary and a JSON data format. And finally we have UCI, which stands for Unified Configuration Interface, which is a central
configuration database to store everything that your system needs to know about, like my hostname, my network configuration, and things like that. It's a transactional model, so it supports commit and rollback in case something went wrong. System upgrades: system upgrades work consistently across all your devices. They're independent of the boot medium, so whether you use SPI NOR, SPI NAND, or eMMC, it's going to work the same way. The only thing we need is for the platform to tell us where to flash the firmware image, because we can't guess that, right? How to identify the image, as in: is this image really for my platform, and how do I, for instance, put the kernel here and the root file system there. What the system then does is freeze all system activity, do a pivot_root to /tmp on tmpfs, which means that your flash can now be rewritten and there are no dangling file descriptors and such, rewrite everything, and the next thing you know you're rebooting into the new version, with all the configuration files preserved. Another interesting thing is called overlayfs. If you're familiar with UnionFS, the idea is that you want to assemble two different file systems under a single mount namespace. What this is used for in OpenWrt and LEDE is to provide a read-only partition of your system that contains everything needed to boot to a shell and allow you to do recovery. So you can't do rm -rf / and have your whole system wiped out; if you want to, there's an option to do it, but by default there's at least the base system that's preserved. And then there's another partition mounted on top of that which allows you to do read-write operations, so installable packages work just fine. And finally there's a failsafe mechanism that's typically provided through device-specific buttons; most routers have a reset button or a factory default button, and when you press it for X amount of
seconds, you're dropped into a shell where you can do recovery operations. Networking today: back in the good old days we had Ethernet and DHCP and we'd be done with it. Nowadays your network connection can come from pretty much anywhere: Ethernet, 3G/4G, xDSL, DOCSIS. And you can have different provisioning protocols: DHCP, router advertisements plus DHCPv6, any IPv4 or IPv6 configuration protocol. And with IPv6 being massively deployed by ISPs, there are many different ways to provide IPv6 to end gateways: it could be 6rd, DS-Lite, MAP-E, MAP-T, 464XLAT, you name it. Since we have all of this today, the idea is that you should not have to configure much more than what's in these boxes. What's common to all of these setups is that you configure just the bare minimum for your network interface. So if it's Ethernet, you say: my wan interface is on eth1 and it's provisioned via DHCP; I also have a wan6 interface, which is a logical name for my interface, and it's provisioned via DHCPv6. Similarly, I'd do the same thing for 3G/4G, PPPoE, whatever, and then we let netifd do all the magic. netifd basically orchestrates all these device configurations in one central location. It calls into protocol-specific handlers, so if you have a new protocol you're working on tomorrow, it's very easy to extend; you don't have to rebuild the whole system. And it deals specifically with physical or virtual devices, like Ethernet, xDSL, tunnels, VLANs. Whenever your network device does something, it propagates through the ubus system bus: the firewall configuration can be adjusted, your dnsmasq DHCP/caching DNS server can be restarted or reconfigured, you can have network-aware services, say Samba, DLNA, UPnP, restarted or reconfigured, and it will fire up instances of protocol clients, for instance a DHCP client, things like that. Now, a few things that are
interesting in terms of security, compared to other embedded systems and build systems: OpenWrt by default supports full RELRO, which is read-only relocations for your ELF executable when it gets loaded; it's configurable, you can disable it or choose full or partial. It does format string security checking, so if you have, say, an unsafe printf or scanf statement, the compiler can tell you. It supports source fortification, has native support for stack smashing protection for both user space and kernel, and all .ipk packages are signed, so that avoids man-in-the-middle attacks where I could try to sneak a malicious update of a package onto your device. There are a few interesting runtime security features as well. There's a notion of jails, supported by procd, the process monitoring daemon; think of it as a dynamic chroot where you specify only the files that should be made available to your processes. It's super helpful for applications that are known to have security vulnerabilities, like Samba or dnsmasq. And on top of file-system-level restriction, you can do system-call-level restriction through the use of seccomp: you can list exactly which system calls your application should be allowed to use. Some other cool features and goodies: there are existing ARM, ARM64, MIPS, and x86 targets that run natively in QEMU, so if you don't want to flash a device and just want to run the system on your PC and tinker with it, there are images ready to be built. You can have your packages with separate debug information. There's also a tool that helps you with GPL compliance: if you're a vendor and you need to provide a list of packages and which license each one has, the build system can do that for you as well. You can work off a local package mirror and an alternate download directory, which is useful if you're in a corporate environment, and you can
customize everything you want in user space.

Areas of improvement, because there are obviously some. We essentially lack continuous integration testing. It's kind of hard; we have essentially the same challenges as the kernel people, maybe worse, because this requires actual physical devices, and what you can test with emulators is probably never going to break in the emulator but always going to break on a real system, like the upgrade mechanism, which is hard to test. Fortunately the community is active and provides feedback, but clearly having board farms would help. Something OpenWrt and LEDE have traditionally been criticized for is the lack of upstream work: there are about 170 patches against the 4.9 kernel right now just for the generic part of the kernel, not counting additional targets and such, which is way too much. There's an ongoing effort to migrate the most popular platform, Qualcomm Atheros, to Device Tree, which is probably going to be a fun exercise. Another thing that would be good to have is opt-in or opt-out security updates: automatic updates of, if not the whole system, at least the packages that are important or can be vulnerable. And, like pretty much any other open source project, documentation. One thing people regularly ask about is: what's the best supported device? If we could come up with a good metric for evaluating the support of a particular device, we could have a top 10 or something like that.

Conclusion: it works great on your router, but it would also work great anywhere else if you're willing to give it a shot, on a tablet, a PC, an embedded system, whatever. All the features that are router-centric, you can ditch and replace if you want. It's reasonably fast, versatile, and flexible. It's a turnkey solution for products, but if you're more into doing embedded systems
development or kernel development, you can also use it; it's very convenient for that purpose. And we have extremely active communities.

Now let's talk about the drama. So what happened last year: on May 3rd, 2016, a group of OpenWrt developers announced the formation of LEDE, and there were essentially two types of reaction. Most people immediately welcomed LEDE and just switched to it, while a smaller group did not acknowledge the problems being stated, and, well, a flurry of email ensued and went pretty much nowhere. Essentially, what that meant is that there was a problem with OpenWrt that needed to be fixed, and maybe not just one problem.

So why LEDE? First of all, more transparency. All decisions are now made public: there's a governance mailing list that lists all the decisions the project is making, all these decisions must be collectively approved, and everyone has equal decision rights. There are also established, clear processes and guidelines for operating the project: how to resolve conflicts, how to do external communication with vendors and the press, and how and when to make a release. There's also less centralization. OpenWrt was essentially hosted on one single machine that ran every service, which is a huge single point of failure; now all the services are spread out, GitHub for the source code but also a clone on a different server, a different mailing list provider, and so on, and therefore there's freedom to move any of the project services as requirements dictate. One big complaint from the user base, but also from internal developers, was the lack of predictability of the release schedule; LEDE 17.01 is a good example that this was addressed. And finally, there's an easier path from contributor, if you're submitting frequent patches, to developer, where you get commit rights and you're part of the development team.
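As an aside, to make the earlier netifd discussion concrete: the "wan on eth1 via DHCP, wan6 via DHCPv6" setup described above is expressed in UCI syntax in /etc/config/network. This is a minimal sketch, assuming your device exposes its WAN port as eth1 (interface names vary per board):

```
# /etc/config/network (fragment)
config interface 'wan'
        option ifname 'eth1'
        option proto 'dhcp'

config interface 'wan6'
        option ifname 'eth1'
        option proto 'dhcpv6'
```

After editing, `/etc/init.d/network reload` lets netifd pick up the change and invoke the matching protocol handlers; the firewall, dnsmasq, and other network-aware services then react to the resulting ubus events.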
Meanwhile, at OpenWrt, surprise (as in, not a good surprise): there were few people left, maybe six or seven, and even fewer active, maybe four or five. Clearly that wasn't a good thing, though it was probably a good wake-up call. Unfortunately there were inappropriate responses, and a complete lack of response from some people. People who were part of OpenWrt and moved to LEDE had their @openwrt.org email accounts disabled, which meant they could no longer receive email at addresses that were still in active use. This was a totally inappropriate response, an emotional response rather than a technical one.

So where are we today? The reunification terms are very simple: the LEDE code base will be used moving forward, because it's been actively maintained and has been receiving a steady stream of patches and contributions, while OpenWrt in the meantime has been completely quiet and flat. The OpenWrt team has been given LEDE repository access, so they can make changes; they're considered LEDE committers like anybody else. There are discussions about whether OpenWrt should stick around as the name: it's a trademarked name, it's also a brand, a lot of you know the name, so that's got to count for something, and it has the larger popularity. Unfortunately, right now it's a stalled discussion; nothing has happened over the past two or three meetings.

So, what next? The release is finally out, which means we should be able to focus our energy on bringing the two projects together rather than on getting a release out. Echoing my earlier slide: we desperately need a truly open source project that supports wireless routers, and not DD-WRT, not Tomato firmware, not what have you, but OpenWrt, because it's shooting to be open source, auditable, and frequently updated. And on the human side of the project, we essentially need to discuss and agree: discuss in person, discuss more
frequently, on the reunification terms, and literally bury the hatchet; I'm thinking about buying one just for the symbolism. I'm linking here an email from an OpenWrt developer entitled "State of the union", which essentially says that the reunification terms are accepted by OpenWrt, and this was three days ago, so hopefully we'll be moving on from there. If you want to browse the websites or look for references, you can find them here. And now, if you have questions, I'm happy to answer them.

No reactions? Really? Okay. So the question is: what's the purpose of the Image Builder, and who uses it? The Image Builder is for when you're not a developer, you don't know how to compile, you don't really know anything; the only thing you know is that you want this package inside your firmware image, and you want to get there as fast as you can. So it's usually end users who use that feature: they start with the base OpenWrt image and then customize it by adding additional packages they found on the internet. In Germany there are a lot of wireless communities, like Freifunk, and these guys heavily use that feature.

So the question is about reproducible builds. There's been an effort over the last year to get closer to reproducible builds, with the help of the patches the Debian folks have done. I can't tell you what our coverage percentage is or things like that, but there are definitely people looking into it, at least a couple regularly contributing changes to make that happen. And yes, this is super important for security.

So the question is whether I have any insights on the... what's the GUI? Oh, LuCI. Yeah, no worries. So, the state of LuCI: the deployed interface is currently LuCI version one. There's a LuCI 2 being worked on, so I've heard, but I've never seen the source code so
far, so I don't really know where we are. I don't even know what problems they want to address, other than maybe being faster, with more JavaScript and more client-side validation, things like that. The original version of the web interface was written in awk, which was, I don't know, fun, but very difficult to hack on.

Other questions? Not going once, going twice... thank you very much. Oh, sorry, no, go ahead.

So the question is: we have tons of little devices with very, very small memory footprints, typically eight megabytes of flash. Yes, there's ongoing work, but I mean, we'd be challenged to even get Linux to not run out of memory in eight megabytes of RAM, I think, as much as I wish we could do that. I know this was the motivation for some versions of the WRT54G that ran VxWorks instead of Linux: with half the RAM you could shave maybe a few cents off your product, and that was meaningful, but those were running VxWorks. So as much as I'd like to, I don't think it's feasible with a modern kernel; you'd probably have to go back to 2.4 for that. Thank you.
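As a concrete footnote to the Image Builder question above: the workflow is roughly to unpack the prebuilt Image Builder archive for your target and ask it for an image with your package list, no compiler required. The archive, profile, and package names below are illustrative examples; `make info` lists what's actually available for your target.

```sh
# Unpack the Image Builder for your target (example archive name for LEDE 17.01).
tar xJf lede-imagebuilder-17.01.0-ar71xx-generic.Linux-x86_64.tar.xz
cd lede-imagebuilder-17.01.0-ar71xx-generic.Linux-x86_64

# List the available device profiles.
make info

# Build a firmware image with extra packages (a leading '-' removes a default package).
make image PROFILE=tl-wr1043nd-v2 PACKAGES="luci tcpdump -ppp"
```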