My name is Andy Gross. I work at Linaro in the IoT group, and today I'm going to present on Zephyr OS configuration and some of the device tree work that's been in progress. So I'm going to talk briefly about how Zephyr works today, at least with regard to configuration. If you've looked at the Zephyr source tree, you'll notice that configuration information is spread across a number of directories. For any given board, you have a board directory, you have an arch directory which contains some information, and the information is in the form of Kconfig files and defconfig files, and there's also some auto-generated information. When you go through your build process, all of this is pulled in, the options are set, and different drivers are enabled or disabled based on those settings.

As such, the configuration is fairly hard-coded: depending on how you have your Kconfig and your defconfig set up, that's just what you get. If you look at the code, you'll notice places where there are multiple copies of things with #ifdefs around them, which use those options to decide whether or not something is compiled or used. There are also multiple sources for the definitions, whether it's the CMSIS files, which are generated or provided by the vendors, or other vendor-specific include files. And across different boards, even in the same family, you'll have almost identical directories with very similar information. So it's not very extensible: every time you add a board in a family, almost the same information gets included again.

So the idea was, how do we get away from doing that? One way is to start using DeviceTree to describe the hardware configuration, and even some of the Zephyr software configuration that you want. The nice thing about DeviceTree is that it's architecturally neutral.
It works on x86, it works on ARM, it works on RISC-V, it works on a lot of different architectures, and it's very easy to add new ones. If you get your configuration from the DeviceTree, you can actually remove a lot of the Kconfig options that are hard-coded right now. On a given board, if a specific device node is enabled, that means you're using it. Whereas before, you would have a Kconfig option that says, okay, I'm using UART0, I'm using UART1, 2, 3, 4, however many UARTs you have.

DeviceTree can describe any device node. If you go to the DeviceTree website, which I don't have a link to here and should have added, there's a whole document that describes how DeviceTree is used to describe nodes. It covers the format: it's clear text, there are specific fields that have to be present, and there's a specific syntax. If you follow all of that, you can run it through the compiler and get output. If you look at the architectures that currently use DeviceTree and grab any number of the files and take a look at them, there are lots of different pieces of hardware that people describe using DeviceTree. You can describe buses; you can describe just about anything. And if you need some generic property, you can add that property. It's not that big of a deal.

Now, if we look at how the Linux kernel uses DeviceTree, it's very much: you describe the hardware, you compile it, and you get a blob, which is a binary form of the description. When Linux boots, it uses that to create its device list, and that's how it probes and gets drivers loaded. For Zephyr, it's very different. In Zephyr, you don't have a lot of room. You don't have a lot of flash. You can't keep the blob in the image itself; it's too big. If you take a DeviceTree file and compile it, it's somewhere around 4 to 12K.
If you look at the flash sizes on some of the parts we run Zephyr on, that's just not going to work. So for Zephyr, we're not going to use the blob. We're only going to use the DeviceTree to generate some include information, and I'll get to that in a little bit. The nice thing about DeviceTree, as I said before, is that if you add a board in a family that's already supported, all you're really describing is how that board differs from other boards using the same SoC family. So adding new boards is a whole lot easier, which is pretty nice.

I showed a couple of these slides in a previous discussion. The thing about DeviceTree for Zephyr is that we have to be able to use it to generate some of the include file information that we use for builds. So how would we do that? You start by collecting a lot of information. You have a DeviceTree source, which may include files that exist elsewhere, whether it's a CMSIS file or a vendor file. You process that, like Linux does, with the C preprocessor to replace any macros you may use in the DeviceTree file. And lastly, you get output that you use to build your include information and/or device structures, if that's what you want to do.

So here's how it actually came out. In Zephyr today, with the 1.7.0 release, we have DeviceTree support for some of the boards: a TI board, the CC3200; some Kinetis boards from NXP; the ARM Beetle; and some of the ST boards. The way it works is you provide some DTS or DTSI files. The DTSI files are include files that are used to build a board DTS file, and you may have other include files where you pull in information. A good example is on the TI board, where we actually pull the UART address from one of the include files.
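To give a feel for the input, here is a rough sketch of what a node in one of those DTS files might look like. The node name, addresses, interrupt number, and compatible string are all invented for illustration, not taken from any real board:

```dts
/* Hypothetical UART node in clear-text DeviceTree source.
 * Addresses, interrupt numbers, and labels are invented. */
uart0: uart@40004000 {
        compatible = "vendor,example-uart";
        reg = <0x40004000 0x1000>;      /* base address and size */
        interrupts = <5>;               /* IRQ number */
        status = "okay";                /* enabled, so Zephyr uses it */
};
```

The `status` property is what replaces the old per-UART Kconfig options: if the node is enabled, that device is in use.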
You take those, run them through the C preprocessor and the compiler, and you get output. In our case, what we want is a dts-to-dts pass-through. When you run the dtc compiler, you have a choice: the input file and the output file can each be one of a couple of different formats. In our case, we want the input to be a DTS file, which is clear text, and we want the output to be clear text as well, so a DTS file too. That's important because we have a parser that only works on DTS itself, and we have to go through the compiler because we want a lot of things to be resolved.

When you run it through the compiler, all the phandles are resolved. A phandle is just a pointer to another device node. When you have something like an interrupt controller or a GPIO controller and you reference it from one of the device nodes — for instance, say you have a SPI device that has some pins it wants to set — you have to be able to resolve that back to the GPIO controller so you know how to reference that device, and also tell it which pin and maybe some parameters for that pin.

So when we run it through the compiler, we get our compiled DTS file. We still need to know how to extract information from it. You may have a lot of information in the DTS file that you don't really care about — you may even be using the same DTS file that's in Linux, where lots of things are described that we don't care about. So how do we extract the information? The answer is that we have another file, in a YAML format, that tells us: here are the pieces of the file that we care about. We match the YAML file to the compiled DTS using the compatible.
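To make the phandle point concrete, here's an invented before/after sketch — the labels, cell values, and node names are all hypothetical:

```dts
/* Before the pass-through: a SPI node's chip-select references the
 * GPIO controller by label, i.e. via a phandle. */
gpio0: gpio@40011000 {
        compatible = "vendor,example-gpio";
        #gpio-cells = <2>;              /* pin number, flags */
};

spi0: spi@40010000 {
        compatible = "vendor,example-spi";
        cs-gpios = <&gpio0 17 0>;       /* controller, pin 17, flags */
};

/* After running dtc in dts-to-dts mode (dtc -I dts -O dts), the
 * &gpio0 label is resolved to gpio0's numeric phandle, so the
 * extraction scripts can follow the reference without doing their
 * own symbol resolution. */
```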
If you're not familiar with device tree: every device node has a compatible field, which is used for matching. When you try to decide what driver works for a given device node, you do it off the compatible. We do the same thing here; the difference is that we're matching the YAML. Once we've matched the two, we can go through the YAML, look for the properties we care about, match those in the device tree, extract that information, and put it in a format we can use in the include file. So the output of the compiled DTS plus the YAML is a generated include file.

Let me skip forward to what the generated output looks like. For the ARM Beetle board — and this is only a snippet — the output is basically a bunch of #defines. If you look at the Zephyr code from before the device tree support was added, you'll see a lot of these #defines that were Kconfig options. The thing is that for a given board, you have a device node that sits at a certain address, and you still have to resolve that, say, UART0 is a specific node at a specific address. We've had to work around that a little bit for now, but that's going to go away. In any case, this is what you'll see in the output, and this file is included by the driver files and the other files in the system that require it.

Let me back up to the YAML. In this slide I talked about how we have a compiled DTS and a YAML file that we use to extract the information. Now I'm going to talk about how we're using YAML in Zephyr. We have devices described in device tree format, and we have the device node described in YAML, which tells us the compatible, the properties we care about, the cells, and what the cell names are going to be.
This is important because, as I mentioned earlier, with interrupt controllers and GPIO controllers, a device tree entry will have the phandle and some number of cells. What do those cells mean? For an interrupt controller, the first cell could be an IRQ number and the second cell some flags. For a GPIO controller, a GPIO pin and GPIO flags like the bias, the drive strength, things of that nature. So we have to have a way of defining that, and of using it in our extraction, so that we can actually use the include information we generate.

The YAML gives us a description of the contents. It gives us the definition of the properties, it tells us whether those properties are to be extracted, and it tells us the format of the output. We have two ideas for how to use the information coming out of this. One is include information, which is what you saw in the output. The other thing we want to create is data structures. If you look at the device driver APIs right now, you'll notice there's platform data that's created and encased in #ifdef statements. We want to get away from that. We really want platform data to be generated, and with this, we can generate it, because we know exactly how many nodes we have and what we need to extract. For a node that requires some number of GPIO pins, say, why not just build up the platform data? If you look at the driver examples right now, they actually do this by hand: they'll have some platform data structure with fields that say, okay, this is my clock gate, this is my GPIO pin. We can generate that, and all of that hand-written code can go away.

The other important thing about the YAML is that it allows us to validate the DT contents.
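As a sketch of the kind of per-instance platform data that could be generated instead of hand-written behind #ifdefs — the struct layout and field names here are hypothetical, not the actual Zephyr driver API:

```c
#include <stdint.h>

/* Hypothetical per-instance platform data; today this is hand-written
 * and wrapped in #ifdefs, but everything in it is knowable from the
 * device tree plus the YAML binding. */
struct uart_platform_data {
        uint32_t base;        /* register base address */
        uint32_t irq;         /* interrupt number */
        uint32_t clock_gate;  /* clock gate for this instance */
        uint8_t  tx_pin;      /* pinmux selections */
        uint8_t  rx_pin;
};

/* What a generated table might look like: one entry per enabled node. */
static const struct uart_platform_data uart_pdata[] = {
        { .base = 0x40004000, .irq = 5, .clock_gate = 3,
          .tx_pin = 9, .rx_pin = 10 },
};
```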
Validation is important because if you have a device tree file and a YAML file and the two don't match up, you're going to have issues, so we have to have a way of checking that. We're going to have some helper scripts so that when you create your DTS and YAML files, you can validate, one, that your YAML file has the correct syntax and format, and two, that the YAML file will actually work against that device tree. When a developer is working on this, we want them to be able to find these issues before they get to the point of actually building Zephyr.

And when I say user, in a lot of these cases I really mean the vendor. The vendor is going to supply the device tree files and the YAML, because they're the people who know how the hardware works, and they have hardware databases to generate this stuff from. The intent is not for the end user out there — the hobbyist, or whoever's using this — to go out and write device tree and YAML; I think that's a very high bar. What we really want is for the vendors to generate it. They have all the information on hand; they should be able to do it. It's just a different format: they're already generating CMSIS files, so it's very easy for them to generate something else as well. And some of these vendors are already dealing with Linux and already generate a lot of this stuff there, so there shouldn't be much extra tooling required for them to generate it for Zephyr too.

So, yes? No, you can actually use CMSIS as includes — or a subset of the CMSIS includes. Right now, the CMSIS files that are generated contain not only #define information but also structures, and that's a problem for the C preprocessor.
There are some vendors that have that information partitioned such that you can actually include it. The idea is that for, say, addresses, it would make much more sense for us to use the CMSIS addresses, so you don't get a fat-finger error where somebody has transcribed something wrong. For that to happen, though, we have to figure out how to knock out the structures in the CMSIS headers. The CMSIS files would need to include something that allows us to cut all that out when we give it, say, a compile flag — that would probably be the ideal thing — or the vendors could just split it out. It's more likely they'd wrap it in something that lets us cut out the structure stuff. But the intent is definitely to leverage CMSIS where we can, which at a minimum is going to be the addresses. If you look at the CMSIS files, they actually include a lot more than that.

Yes — and in fact, at Plumbers, this was exactly the subject: make the documentation go away. If people write these YAML files in a specific format that works, you can actually use them to validate the DT, and that can be an extra step in the Linux build. Then all the binding documentation goes away. If you look at the binding documentation in the Linux kernel today, you can take any one file and compare it to any other and find major differences, because — I know when I was writing documentation for that stuff — I would look at a couple of examples and just kind of decide what I wanted to do. With YAML, you don't have that: this is how it's going to be, and as long as you stick to that, I think things will work pretty well. If you want to talk to somebody about it, you can talk to Frank afterwards, because he was looking at doing that exact thing.

So here is just a small snippet of a YAML and a DT example.
This YAML file is describing the UART for the Beetle. On the right, you have the device tree snippet. You have the node name, you have a label, you have a compatible. The reg is the register location plus size. You have which interrupt it is. Then there's a Zephyr-specific property, the IRQ priority. And then you have a baud rate — the baud rate specified here is the one that's set when the driver comes up initially; that's not to say it won't change later, based on whatever the user is doing.

On the left, you have the YAML file describing this. At the top, there's an inherits field that pulls in a uart.yaml, which is generic across all the UARTs on every platform. This is important because there are fields in that uart.yaml file that you then don't need to specify in the SoC-specific one. You also have the Zephyr devices include, which is where the Zephyr IRQ priority field comes from. Then you have the properties that are specific to this binding. You have a compatible — which could probably be pulled into a generic file, because everybody needs a compatible — and it's going to be a string. The constraint is what we use to match against the device tree: the constraint has to match for this YAML file to be used to process this node. Then you have a reg field, which tells us this is an array. The generation is "define", meaning that we actually want to generate something out of this, and that generated information will be a #define. So the output of this, on the next slide, is the base address — that's what decides that it needs to be generated. And there are some aliases here: for base address 0, we always generate a plain base address define as well, because that's usually the main one people reference. If you look at interrupts, this node has one interrupt. It's required — and actually, there's a typo there.
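The slide itself isn't reproduced in this transcript, but a binding along the lines being described might look roughly like this — the field names and values are approximated from the talk, not copied from the Zephyr tree:

```yaml
# Rough sketch of a UART binding; names and values are approximations.
inherits:
  - uart.yaml            # generic UART properties, shared by all SoCs
  - zephyr.yaml          # Zephyr-specific properties (IRQ priority)

properties:
  compatible:
    type: string
    constraint: "arm,example-uart"  # must match the DT node to apply
  reg:
    type: array
    generation: define              # emit a #define (the base address)
  interrupts:
    type: array
    required: true
    generation: define              # emit a #define per interrupt
```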
And we want to generate a define for the interrupt as well. So if you look here, you see IRQ 0, and the define is 0, which matches the device tree itself. This is the kind of thing you're going to see across all the UARTs. Right now, the UARTs are the devices that are defined in the system, along with the SRAM base address and the flash address. We're going to start adding in other devices, and also capabilities like clock gating, pinmux, and GPIO.

Now, what I alluded to earlier: if you look at the bottom of this file, there's a fixup. In the Zephyr code itself, there are #defines that, the way it was done previously, would come from a Kconfig option. In our case, we need to map the generated name on the right side — the UART_40004000 IRQ 0 — to the APB UART 0 IRQ name the driver expects. This was a stopgap measure for the 1.7.0 release. It's going to go away, and we're going to do that by adding some fields to the YAML file, I think, to tell us how we want to generate that name.

So, the current state of development. We have device tree support in 1.7.0. It's base support: the base addresses for SRAM, flash, and the UARTs. The DTS Python parsing script library was part of 1.7.0 well before the release was cut. If you want to look at that script, it's in the scripts directory — it's the device tree parse Python script. It's not that large, but it goes through the DTS file and creates a Python structure — dictionaries and lists — of all the device nodes. There are additional Python scripts on top of that which do the actual extraction: they go through that device tree structure, knock out the nodes that aren't enabled, pull in all the YAML descriptions, and then go through every node, extract the information, and write it to standard out.
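As a sketch of that stopgap: a fixup just aliases the name the existing driver code expects to the generated, address-based name. The exact identifiers here are invented rather than the real 1.7.0 ones:

```c
/* Hypothetical fixup entries: the left side is what the driver code
 * references today; the right side is the generated name. */
#define CMSDK_APB_UART_0_IRQ        UART_40004000_IRQ_0
#define CMSDK_APB_UART_0_BASE_ADDR  UART_40004000_BASE_ADDRESS
```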
If you look at the makefiles, they redirect standard out into an output file, and that's how this stuff ends up there. We have the temporary fixups, and current support for the ARM Beetle, the CC3200 from TI, the STM32L476RG, and a number of the NXP Kinetis parts.

So, work for the near term. As devices are added, cleanup has to happen in the configuration directories for the boards: Kconfig options go away, and if you look in the driver directories themselves, there are Kconfig files in there as well, and a lot of those options are going to go away too. The drivers need to leverage the generated files — they need to start using this for their initialization — and what that means is that we need to start generating some of the platform data that right now is done as part of the #ifdef stuff. The other thing is that because we know what devices are enabled in the system, and we have the YAML files to tell us what Kconfig option each corresponds to, we can actually generate the configuration for building drivers from the status of the DT node. And lastly — I went a bit out of order — the platform data and structure support in drivers. That's where I think we're going to get a lot of removal of the existing code that's in there.

And that's the end. We already had a few questions; I'm sure we have some more. I can back up to whatever slide we're talking about — any questions?

Right. So with the platform data, the first use case I thought of was from a dynamic perspective: if a driver's turned on, you're going to have cases where you want to go into a low-power mode, so you're going to have more than one pin configuration. That's one case — you can actually define more than one pin configuration for a device. The other thing is that, as you enable and disable devices, if we describe all the devices in the device tree and the image is built —
if you want to actually use a device, it has to be built into the image, so all that information has to be there anyway. Whether the pins are configured one way or another really depends on whether that driver is enabled or disabled. As long as the driver has the information that describes those pins and how to reconfigure them, it should be able to go back and forth at will. The key point is having that information on hand. If you don't have it — I mean, if you look at Zephyr today, there are a lot of static configurations for the pins, and that limits you very much. So we want to get away from that. Got a question in the back?

So the question was whether the Python scripts and libraries we wrote leverage anything that exists today, and the answer is no. Initially, when I was doing this work, I leveraged some of the work from Neil, here in the room, which was Python parsing of the compiled blob. The thing about that is we don't want to compile to a blob; we'd rather leave it in clear text. And there aren't really any good solutions out there right now for plain DTS parsing. So I started writing one, really kind of dumb — I've written tons of Perl, but I haven't written a ton of Python — so I wrote a really stupid Python parser. Then we had a sprint, and one of the Intel guys, in one day, ripped it out and wrote a whole thing to parse it, which was awesome. So I switched over to that, and the parts I wrote sit on top of it. The library he wrote will parse DT and put it in a structured format — dictionaries and lists. If you're familiar with Python, that's basically how you build up a hierarchy of information. What my scripts do is winnow that down to the smaller set of nodes that I care about. So to answer your question: yes in the beginning, and no, not now.
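The overall pipeline can be pictured with a standalone sketch: parse into dictionaries and lists, drop disabled nodes, match a YAML binding via the compatible, and print #defines. All node names, properties, and binding fields here are invented; the real scripts in Zephyr's scripts directory are more involved.

```python
# Invented stand-in for the parser's output: dictionaries and lists.
parsed_nodes = {
    "uart@40004000": {
        "label": "uart0",
        "props": {
            "compatible": ["vendor,example-uart"],
            "reg": [0x40004000, 0x1000],
            "interrupts": [5],
            "status": "okay",
        },
    },
    "spi@40010000": {
        "label": "spi0",
        "props": {
            "compatible": ["vendor,example-spi"],
            "reg": [0x40010000, 0x1000],
            "status": "disabled",
        },
    },
}

# Hypothetical YAML binding, already loaded into a dict.
uart_binding = {
    "constraint": "vendor,example-uart",
    "properties": {
        "reg": {"generation": "define"},
        "interrupts": {"generation": "define"},
    },
}

def enabled(nodes):
    """Drop nodes whose status is not 'okay' (simplified: real DT
    treats a missing status property as enabled)."""
    return {n: d for n, d in nodes.items()
            if d["props"].get("status") == "okay"}

def extract_defines(node, binding):
    """Emit #define lines for one node if the binding's constraint
    matches the node's compatible."""
    if binding["constraint"] not in node["props"]["compatible"]:
        return []
    prefix = "%s_%08X" % (node["label"].upper(), node["props"]["reg"][0])
    lines = []
    for prop, spec in binding["properties"].items():
        if spec.get("generation") == "define":
            for i, val in enumerate(node["props"][prop]):
                lines.append("#define %s_%s_%d 0x%x"
                             % (prefix, prop.upper(), i, val))
    return lines

# The makefiles redirect this output into the generated include file.
for name, node in enabled(parsed_nodes).items():
    for line in extract_defines(node, uart_binding):
        print(line)
```

Running the sketch prints defines only for the enabled UART node; the disabled SPI node is knocked out before any binding is consulted.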
And the follow-up question is, how do we give back to the community? Actually, I need to talk to Frank about that. Right, right. We need to figure that out, and we definitely need it to be something that's fed back outside. Zephyr is an open source project, but that doesn't mean we can't go back to the projects where a lot of this stuff originated and give back. That's really where we need to go.

Yeah — you mean the end output? Yes. So it basically comes down to your YAML file and your device tree file. Whatever properties you add, that's fine: you add a property in the YAML file, you say yes, I want to generate this information, and it'll auto-generate it. The only — yeah, go ahead. I haven't really thought about that, frankly.

Yes, so the question was, do we try to be compatible with the Linux DTS? I'm guessing: if a platform exists in both, are they using the same DTS — is that your question? Yeah. So in some cases, yes. The vendor is going to have to decide whether those DTS files are going to be the same, and if they are, that's fine. But I think the board DTS file itself is probably going to leverage some of the DTSI include files that are common between Linux and Zephyr. The board DTS file is very specific: it says, I care about these nodes being enabled, this board has these ports. It's very specific to that board. In a lot of cases, I don't think you're going to see the same board DTS files in Zephyr that you see in Linux. You may see the base architecture support for the family, but you may not see the same boards.

Right. So, yeah, I mean, I've looked at the Linux kernel to get an idea — for UART and SPI and I2C, those are easy. Those are the easiest, because they're all very similar; everything works the same way.
But when you get away from those very well-known devices, it gets kind of gray. So, yeah, we've got to look at that.

Does this microphone — yes, actually using a microphone to ask a question. I'd like you to elaborate on the question I asked earlier about licensing. Right. Because the reason the BSD guys didn't wind up using DeviceTree — there was a thread on the Linux kernel list a while ago about whether the DeviceTree files are GPL, and the answer was yes, the ones in the Linux kernel are. You're talking about using DTSI files from the Linux kernel that are GPL to produce binary output that goes into Zephyr. So does that mean Zephyr is then going to be GPL on those targets? And you were also talking about the YAML files, about moving the documentation to a parsable format. If the Linux kernel guys do redo their documentation into the YAML format and you use that as part of your build, does that also make your DeviceTrees GPL? What are the licensing implications of importing Linux kernel code into Zephyr via the DeviceTree build?

Right. And I'll give the same answer I gave you before: the origination of the file is from the same source. A vendor who generates a DTS or DTSI file for Linux — I very much doubt they did it by hand. They probably have some database they're generating this file from. If they generate one format that goes into Linux, and from the same source they generate a file that goes into Zephyr, they have a choice. Yeah, they can license it however they like. Yes.

A format for checking the bindings — I guess that's fine if it's only a static syntax checker. But if you do reuse DTSI files from the kernel to describe SoCs or peripherals or things like that, you're essentially importing Linux kernel source in a way that generates binary output that ships on your device. No, I got you. And in this case, we haven't done that. For the files I've used, I didn't do what Tom said — lifting stuff.
What I did was: you look at all these devices, and for each device you ask, what's common across all of them? And you generate whatever you're going to generate. It just so happens that when you do that and then look at what Linux did with some of these things, it's like, oh, hey, look, they're very similar. But it's Zephyr's. As far as the YAML is concerned, I think we're beating them to the punch a little bit on that. My gut feeling is that whatever ends up in Linux is going to be different from what we have. It may do the same things, and whether or not we converge over time, I have no idea — that's to be determined. But when that information came out at Plumbers, it was an aha moment for me, because it solved a couple of my problems. We definitely need to converge in the future, because we're trying to do the same thing. There's no reason to have two completely different implementations that do the same thing; I mean, it's crazy.

So, a lot of the DTS files in the kernel are now dual-licensed, and they're very open to accepting more DTSes with the dual license — both GPL and more permissive — for exactly the reason you pointed out, Rob. And the people driving the validation stuff in DeviceTree — there are several, but one of them is Grant Likely — he's very, very sensitive to being able to share his work across different projects. So he's dealing with those license issues and trying to make it available across projects and beyond — but that's a whole other point. His whole validation effort, things like the YAML stuff: he's very aware of the licensing issues and of making it not just GPL, but shareable. And that definitely is the better solution long term. My solution was, I'm going to let the vendor deal with it, because they actually own the information.
There are always going to be people like me, doing these files by hand, but that's crazy — no one's going to do that long term. Any other questions?

So, on the previous slide, you listed future work and leveraging the generated files. That's what I would consider to be kind of like scaffolding. How do you see that going forward? Because there's kind of a chicken-and-egg thing there. Yeah, and that's the real problem. You have to get enough in place to be able to generate a lot of this information, and you also have to look at how you're going to use it beforehand, because those two go together. So it's kind of messy. The answer is that we're at a point now where we can start dealing with the platform data aspects. We have the parsing, and we have the extensibility built in, I think, to handle this type of thing. It's just a matter of generating it and then actually consuming it in the drivers themselves.

Our first step was very much: let's get our foot in the door. Let's get the board booted using DeviceTree for the base addresses and all that, and let's pick at least one standard device across all the SoC vendors and get it working. That was the UART, because that makes sense. The next thing is to get the things you have to configure for each one of the devices: the clock gating, the pinmux, the GPIOs, and all that. At that point it kind of all comes in at once, because then you can generate all the platform data you care about for a device. If you look across all the devices, really the only things they care about are base address, IRQ, maybe some pins, and a clock gate. That's it — generally; that shouldn't be taken as absolute.

So essentially that fixup code is where you're retrofitting? Yeah. It's a stopgap measure for 1.7.0. That's going to go away; that needs to be solved. And this actually brings up a couple of things.
One, how do you map the instance of a device, and the device type, to the generated information? That's one thing. And even inside Zephyr itself, if you look at a board and you look at the SoC and the naming and all that, it can be very messy, so that's another problem. And the third one is that for a given device, since we already have a YAML file, we could embed enough there to generate the #defines we need, because we know what the prefix is supposed to be for an instance. Then the only question is: is 0x40004000 instance 0 or instance 1? Because if you look at some SoCs, they sometimes invert their addresses. You can't just go from beginning to end; they do some crazy things. So, yeah, we have to work through that a little bit.

It sounds as if the system you're creating is going to be completely static, and you're trying to reduce the number of static inputs. Why not instead apply that same logic to the generated device tree blob? The thing is that for Zephyr, everything is done up front. You know what nodes you're going to turn on; you know what's going to be included in the system, for size constraints and all that. The other thing is that, for a given use case, you may or may not turn on additional ports. The base enablement for a board may be one thing, but when you go and try to do, let's say, Bluetooth, you may have to turn on UART3. So there's some device tree information that's going to have to be included for some of these use cases. But yeah, I don't know — the system's pretty static as it is right now.

Well, for a product line that has variations on a common SoC, it seems like you're removing one of the main advantages. I think it goes back to what you said in the beginning, about size. You're not generalizing; you want to shrink to fit the available resources, and the generic device tree approach uses a lot of resources. Right, and that's why I don't use the blob.
So if the blob was included, we could basically query out the information at runtime. But we can't do that, because the size of that blob is too big. On some of these parts, you could probably fit it. On an M0 part, no way.

And is there overhead specifically associated with loading it?

I'll give you an idea. I had one node, one node, and it was 4K. So there's a fairly large overhead to just the entry point of a device tree blob. And then there's the library portion that has to be included in the software as well.

If I may offer, since I have the mic: you're talking about Zephyr versus Linux. We think of it in the Linux world as being, you know, reconfigurable, and making use of that. But how often are you going to take one of these small-footprint devices and really, truly reconfigure it completely radically without reloading it? And that's what you're talking about.

I mean, as I said before, the idea is that you have one kernel that you're maintaining, and then you have many different hardware configurations around it. So say you have a product line in which you have basic, midline, and advanced hardware. They have different peripherals. If you're using the Linux kernel approach, then you could have different device trees on each of them and a common kernel. But here you're just compiling it down to a single binary, wanting that binary to run on any one of these devices.

Can I actually... When we went from a.out to ELF, ELF was able to do much better dynamic linking. That doesn't mean static linking went away. This information has to live somewhere. Having it in device trees, so that it's in the same format the Linux kernel is using, moving towards a common standard for representing this information, is good even if you are then going to statically link the result.

I think Mike has a comment.
I think the goals of the projects are different. There's a goal in Linux to have a single image which can run on multiple machines, and that's not a goal of Zephyr, right? You're just compiling an image to run on a single platform. So it's just an existentially different goal.

Any other questions? Pretty good discussion.

Just wondering how realistic... So you're expecting... I interpreted what you said as: the vendor is going to supply the device tree, or the guts of what is needed. And yet, I think about the hundreds, if not thousands, of variations of M0 and M3 and M4, and that's just ARM Cortex; there are other architectures. I have a hard time seeing that happening, and I think there's going to be more hand-hacking of device trees than you might expect.

Yeah, the insanity of doing it by hand may go on for a little while, but as I see it, the SoC vendors are already generating SVD files.

Sure, but you say SoC, and an M0...

Yeah, I use that term to encompass all of them, but when you're talking about an M0, you're right: it's a microcontroller. So I should just say microcontrollers. The vendor is already generating SVD files, and I think that's true across all vendors. Are there any vendors that don't? Yeah, for Cortex, I think. So to me, if there's already a database that they've pulled this information from, it's very, very easy to just regenerate that output in a slightly different format. And if they can get their heads around that, it becomes very easy. Yes, it's probably going to be a little bit of a discussion with these companies, but, you know, device tree is new to Zephyr, so people are going to have to wrap their heads around that. A lot of people haven't had exposure to it; they've lived in the microcontroller world solely. There are very few people that I've actually found who have encountered it before. So it's been a learning curve for them.
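Since vendors already ship machine-readable register descriptions, regenerating that database as device tree source is mostly a formatting exercise. A node for the kind of UART discussed earlier might look like the following sketch, where the compatible string, the vendor name, the address, and the IRQ number are all invented for illustration:

```dts
uart0: serial@40004000 {
        compatible = "acme,uart";  /* hypothetical vendor binding */
        reg = <0x40004000 0x400>;  /* base address and region size */
        interrupts = <5>;          /* IRQ line */
        status = "okay";           /* node is enabled for this board */
};
```

Everything the platform data needs, base address, IRQ, and enablement, is already present in a node this small, which is why regeneration from the vendor's existing data is attractive.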
It's also been a learning curve for me to try to explain device tree. So, I think you had something, Paul?

Yeah, okay. So, have maintainers of other architectures in Zephyr besides ARM expressed interest in using device tree already? Because currently it seems to be ARM-only.

Yeah, x86, I think, is going to be the next one. I need to actually talk to one of the Intel guys about that and kind of help facilitate that a little bit.

Okay, great. And what is the general roadmap for further device tree work in Zephyr?

For other what?

The roadmap for further work. For example, 1.7 is done. What would go into 1.8, and how long do you project it will take to have pretty good support for device tree and so forth?

Yeah, so I think for the next point release we definitely need to get the clock gating, pin control, and GPIO stuff built in. It's also my view that we need to pull in maybe I2C and SPI and some of the other serial interface stuff, to start to convert some of those drivers over. And then it just kind of grows out from there. There'll be some other interesting things; like, I had someone talk to me a little bit about sensors. So we might need to think about that, because that's actually a pretty important piece if we can start to work off some of those. Definitely at least figure out a YAML format for them.

So, anyone else? Okay, I guess that's it. Thank you.