Hi everybody. My name is Marco, and today I will talk about East, a companion tool for building NCS and Zephyr applications that we have developed at IRNAS. So, quickly, what is East from really far away? It's a command-line meta-tool written in Python, useful for creating, managing and deploying Zephyr and nRF Connect SDK projects. It's built on top of west and Nordic's nRF Connect Toolchain Manager; it's not intended to replace them but to be used alongside them, and it's open source and available on our company's GitHub. Essentially, East supports automated toolchain installation for NCS projects, it offers sandboxed development environments, and it does automatic release artifact generation. It has support for build types, which I will explain exactly a little bit later, and it includes some commonly used developer utilities, at least the ones we found ourselves using all the time. What will you gain from listening to this talk? You will learn why tooling is important and how it impacts our development, basically how it enables us to be a little more productive. You will learn how East was designed and why. And at the end there is going to be a hands-on demonstration of East. A little bit about me and my background: I'm an electrical engineer by education and a firmware systems engineer by profession. I have been working at IRNAS for about four years now, and in those four years I made the transition from PCB designer and assembly engineer to firmware engineer, covering all the possible things along the way. I have worked on consumer devices and medical devices, mostly IoT systems with a little bit of machine learning on the side. Internally I have also been working on our tooling, plus the DevOps for our projects. 
So, really quickly, what IRNAS does and why this is connected to East. We are, first and foremost, a development company: we design electronics, we develop firmware and software for them, and we create mechanical designs. We specialize in taking a prototype and bringing it to an end-to-end solution as quickly as possible. We are also an official Nordic design partner and a Zephyr project member. So this is basically our firmware stack at IRNAS. A few years ago IRNAS made a strategic decision to focus on one single platform, and we picked Nordic because of their low-power hardware, Bluetooth support and their excellent development platform. At the beginning this meant we were mostly doing our projects using the nRF5 SDK; as things evolved, so did we, and we switched to NCS, which, as you probably know, is based on Zephyr. My first bigger firmware project was quite interesting. It was a consumer device with motor control; it had to advertise over BLE, you could connect to it, and it supported DFU. It was actually four different products which had similar functions and all shared the same code base, and that code base had to support, at some point, two or three hardware revisions for each of the boards. So we were creating new images on pretty much a daily basis, and we turned that project from a prototype to 100k units in about nine months. We used the nRF5 SDK and the nRF52832, as you can see in the picture. This became our bread and butter for all the future projects that came in: they were similar in that they introduced new boards that had to be supported and needed to be developed quickly. Alongside this, we faced some challenges that, at that moment, we didn't have an answer for. 
For example: how to manage several different boards, and how to manage build variants. I had the hardware and could build my firmware fine, but I had a colleague who was working remotely, only had a development kit from Nordic, and was doing the Bluetooth work. He still needed to build for the board and test the functionality somehow, so we needed some kind of build variant that would disable the motor-control part and just do the Bluetooth part. Another problem was how to quickly and efficiently create releases. We had to be sure that whatever we shipped, the client could use it without breaking the device, uploading the wrong image, or uploading an image to the wrong device; we needed to make all of this much clearer. And also, how to create a reproducible build environment. At that point we didn't have a way to say "you should use this GCC version, you should use this version of make", so there needed to be a way to set that up fast. As more projects came in, we hit these issues all the time, repeatedly, so we had to solve them. In that kind of environment the first version of East came out. At the time it wasn't named that; it was just some internal tooling that we used, and we were 100% inspired by what Memfault was showing on their Interrupt blog. What we basically did was take the default nRF5 SDK Makefile and expand it a little so it could accept the target chip, the name of the board, software and hardware revisions that would later be baked into the final image, optimization flags and so on. Then we used a tool called invoke to call that Makefile. For those of you who don't know it, invoke is a tool written in Python and it's very, very similar to make. 
You create a command-line tool: you write a command, you write its help text, and you see that command listed there. It's a little easier to work with, and it also supports configuration via a YAML file out of the box. This worked fine for a while; we created a few commands and were using them, but we still had to solve the tooling problem. For that, again inspired by Memfault, we used conda. Conda is a tool for versioning and installing Python packages, but it can also do that for any kind of binary you can think of. For example, we used it to download GCC, to download make, and of course all the other Python-related stuff. This tooling was very useful for us, and we are still using it today, but it's only suitable for nRF5 SDK projects, not for Zephyr or NCS. This is a screenshot of the commands we used in that program; you can see very common ones like build, clean, flash, debug. nRF5 SDK projects had a whole setup with a few images: you have to create an application, build the application, build the bootloader, tie them together, sign them. This was first done completely manually, so we automated it. On this project we were also using Memfault, so there is one command there that creates an image and sends it to the Memfault server. This was quite useful for us. When we started working with NCS and Zephyr, we found we had much the same issues as before, plus some additional ones. The first one was that we were switching between projects all the time. Most of our developers actively work on one or two projects and maybe maintain a third one alongside, and all of the projects run different versions of NCS, so we had to switch between those versions and figure out how to do the tooling. 
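The invoke-based wrapper described above can be sketched roughly like this; the task name and the Makefile variables (BOARD, HW_VERSION, OPT) are my illustration, not the actual IRNAS tooling:

```python
# Sketch of an invoke task wrapping the expanded Makefile; all names are
# illustrative, not the real IRNAS setup. The fallback keeps the sketch
# importable even where invoke is not installed.
try:
    from invoke import task
except ImportError:
    def task(*args, **kwargs):
        if args and callable(args[0]):
            return args[0]
        return lambda f: f

def make_command(board, hw_version="1.0", release=False):
    """Expand build parameters into the Makefile invocation."""
    opt = "-O3" if release else "-Og"
    return f"make BOARD={board} HW_VERSION={hw_version} OPT={opt}"

@task
def build(c, board="dev_kit", hw_version="1.0", release=False):
    """Build firmware; `invoke --list` shows this command with its help."""
    c.run(make_command(board, hw_version, release), echo=True)
```

Running `invoke build --board my_board` then simply expands into the parameterized `make` call.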
At that point we solved this by downloading separate versions of NCS onto the computer and changing shell scripts to use a specific one. So for the build environment we didn't have a good answer at that point; we just left it at that, and it worked okay, most of the time. Also, and this was a big one, creating firmware releases by hand took way too much time in this era. One of my colleagues was working on a project where, after accounting for all possible boards and all the hardware variations, there were 81 different unique combinations in which you could build the firmware. There was no way you could do this by hand, and it was not done by hand: only the relevant ones were built. So if something broke, you would not know it until you built the image for that board and found out that it had broken several commits ago. We saw that these were problems we could solve, so we set out to create an internal tool to solve them, and that became East. The goals we wanted to achieve: first, it had to be familiar to the west user. We didn't want to create yet another tool that developers would have to learn; we wanted it to be as seamless as possible. We wanted it to automatically detect and install the toolchains for NCS projects, because that's what we mostly used. It needed a sandboxed development environment: we didn't want access to the system binaries when we didn't need them, or to have version clashes. It should also support automated generation of release artifacts for the entire project, it should support build variants, and it should be suitable for CI. Ideally we wanted a tool that we could later reuse in our GitHub Actions, call directly, and be done with it. 
We identified the building blocks that needed to be built. The first two are fairly trivial: I was developing this for myself, I know Python, and both of those things are well served by existing Python libraries, so we reused those. But the third one, the sandboxed development environment, was different: the choice there would shape what the whole tool looks like and how easy it would be for a developer to actually use it. So that was the main point we had to address, and we started looking at options. Of course, first we looked at conda, because we already knew it and it has a simple interface. You create one YAML file that says "I want this version of GCC, I want this version of make", you create the environment and activate it, and you can run your commands in it. On the downside, it's not a real sandbox: you still have access to your system binaries. On one of our projects I used it for quite a while before figuring out that I was using tools that were not even specified in the YAML file, and that's not good. Another thing is that it's hard to create an environment file for NCS, just due to the sheer amount of tooling required. Conda pulls packages from its repositories, which hold a huge number of packages, but if you don't find the package you need, you have to create it yourself. In our earlier case, for example, nobody had packaged the nRF Command Line Tools yet, so we had to package them ourselves in order to download them. If there was no package, you had to create it, and you had to maintain it. Packages can also break: I had instances where I was using the same environment for a while, it was fine, and then suddenly it didn't work. So conda didn't seem like a good solution at the time. 
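To make the conda approach concrete, an environment file in the spirit described above might look like this; the channel and package names are assumptions, and as noted, some packages (like the nRF Command Line Tools) had to be packaged in-house:

```yaml
# Illustrative conda environment file; versions and package names are
# assumptions, not the exact IRNAS setup.
name: nrf5-project
channels:
  - conda-forge
dependencies:
  - make=4.3
  - gcc-arm-none-eabi=9.2.1   # hypothetical: embedded GCC needs a custom package
  - python=3.9
  - pip
```

Creating and activating such an environment pins make and the compiler, but anything not listed still silently falls through to the system binaries.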
But we could still use it for bootstrapping our projects and managing all the other tooling we needed alongside the toolchain. The next stop was, of course, Docker. The idea was to have one image per NCS version and use the Python SDK for interacting with the containers we would be running. The advantages: it's a real sandboxed environment, you get exactly what you install in the image, and it was easy to create an image with the NCS tooling because existing examples were available to learn from. But it had all the other aspects we would have to solve, the whole problem of image management: how does a developer get the image? How does he work with it? Does he enter the container interactively? How do you avoid the constant startup times, and so on? Also, looking around the internet, I saw there were some issues with USB and serial-port communication. I didn't research this enough to know how real those problems are, but we saw that a deeper understanding of Docker would be needed and time would have to be invested. So Docker seemed to be a hard path, but we were going to take it because it seemed like the right way. This was the initial architecture of East, with all the building blocks and all the infrastructure you would need to run it. Keep in mind that the smaller block on the right side is what the user actually has to interact with; everything else is just for whoever is doing the infrastructure. The main idea was to just pass the west commands to the Docker container. So if you call, for example, `east build`, that gets turned into `west build` and is passed along to the Docker container to do its thing. When we made this, we thought, okay, this seems like it could work. 
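The "one image per NCS version" idea could have looked roughly like this Dockerfile sketch; the base image, package list and pinned version are assumptions, and this path was ultimately dropped in favour of the Toolchain Manager:

```dockerfile
# Sketch only: one image pinned to one NCS version.
FROM ubuntu:22.04
ARG NCS_VERSION=v2.0.0
RUN apt-get update && apt-get install -y --no-install-recommends \
        git cmake ninja-build gperf python3 python3-pip
RUN pip3 install west
# Pin the SDK at image-build time so the environment is reproducible.
RUN west init -m https://github.com/nrfconnect/sdk-nrf --mr ${NCS_VERSION} /ncs \
    && cd /ncs && west update
WORKDIR /ncs
```

Each NCS release would get its own tag, and the host-side tool would pick the matching image and forward `west` commands into the container.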
Let's talk to the Nordic guys and see what they think about it. We wanted to present the idea to them and hear their opinion, and of course to find out whether there were solutions we hadn't thought of or simply didn't know about. One of the things they suggested immediately was the nRF Toolchain Manager. This was known to us, and we knew it does its job very nicely: you can install the toolchains and update them, and every time a new version of NCS comes out it's available there, you download it and use it. The downside was that it's a GUI program, so you cannot interact with it programmatically, at least not easily. But then we found out from them that there is also an executable version of that program running in the background. That seemed very interesting. What does it do? It is the Toolchain Manager; the GUI is just a front end for it. It does all the things we needed: as you can see from the commands, it searches for installable toolchains, it installs them, it uninstalls them. What is also really cool is that it can launch arbitrary commands inside a sandboxed environment. This was exactly what we were searching for, and it would greatly simplify what we had to do to create East. For example, an East command like `east build <some board>` is just passed on to the nRF Toolchain Manager: you basically say "launch with that version of NCS", and after the double dash you add your command, and that's it. The middle block is the first version that we prototyped with, and we thought, okay, this seems great, this works. 
The version actually used currently is the second one, because we realized that error codes do not get propagated outside of the Toolchain Manager. If you send a command inside and the build fails, the error code is not propagated through the Toolchain Manager, so you have to do some bash magic: check whether a success.txt file was created, and based on the presence of that file determine whether the command ran successfully or not. So instead of that big block, it was just this. This is the current architecture: East's behavior depends on just two files. The west.yml file, which is present in any NCS project, and the east.yaml file, which is completely optional. If you're not using it you lose some of the newer features, but you still have access to the sandboxed environment and you can still write commands and send them directly to the Toolchain Manager. So this is a screenshot of the help output. As a bit of an overview, we split the commands into two different groups, which we call workspace and system commands. Workspace commands can be run inside a west workspace, which Zephyr and NCS consider to be anything that has a .west folder inside it, and all the subdirectories of that folder; we said, okay, some commands only make sense when run inside one. The other group, system commands, you can run anywhere you want. East has the common west commands like build, flash, debug, and as I said, you can configure it via a YAML file. One of the key features is the build type. To explain it: it's basically just a set of Kconfig fragment files. 
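The success.txt workaround mentioned above can be sketched like this; East's real wrapper differs in detail, and plain bash stands in here for the Toolchain Manager's launch command:

```python
import os
import subprocess
import tempfile

def run_wrapped(command):
    """Return True if `command` succeeded, judged by a marker file.

    Sketch of the workaround: the launcher swallows the inner exit
    code, so the command is extended with `&& touch success.txt` and
    the caller checks whether the marker file exists afterwards.
    """
    with tempfile.TemporaryDirectory() as tmp:
        marker = os.path.join(tmp, "success.txt")
        # In East this string is handed to the Toolchain Manager after
        # the double dash; plain bash stands in for it here.
        subprocess.run(["bash", "-c", f"{command} && touch '{marker}'"])
        return os.path.exists(marker)
```

The marker file is the only reliable success signal, since the launcher's own exit code stays zero either way.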
For example, if I'm working on some project and I see a need for a development version of the firmware, I would create a dev build type and put development-specific options into its .conf file; that would be my build type. To do this without breaking anything in how NCS works, we said: if you're using build types, the Kconfig fragment files have to be stored in a conf folder, and prj.conf is now called common.conf. If you do this, you implicitly have a release build type, which is just common.conf; anything else you create uses common.conf plus whatever you add on top of it. On the left you can see an example east.yaml file, where some example app has two different build types, a dev type and a debug type. Dev would enable debug optimizations, enable RTT and disable MCUboot, just because MCUboot takes a long time to flash; debug would just enable debug optimizations. With this YAML file there are three possible build commands you can use: without any flag, meaning the implicit release version that just uses common.conf, or dev and debug, which use their fragment groups. The next thing is `east release`. This was the number one priority for us to have. It basically just runs all the possible combinations of applications, boards with their hardware revisions, and build types; it's just a series of east build / west build commands. It can also build samples if you list them. The created artifacts are renamed and then zipped, so you end up with a release folder with all the zips, and you just upload them to a GitHub release, or to Memfault, or whatever you're using. And that's it. 
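Reconstructed from the description above, an east.yaml for such a project might look roughly like this; the key names are my best guess at the schema, not copied from the real file:

```yaml
# Sketch of an east.yaml with the two build types described above;
# "release" stays implicit and builds with conf/common.conf alone.
apps:
  - name: example_app
    build-types:
      - type: dev
        conf-files:
          - dev.conf      # debug optimizations, RTT on, MCUboot off
      - type: debug
        conf-files:
          - debug.conf    # debug optimizations only
```

A build without a flag then uses only common.conf, while selecting dev or debug layers the listed fragments on top of it.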
So we didn't have to do this by hand anymore. When to use East? It might be useful if you're working on multiple projects with different NCS versions, if you need to build firmware images for several different boards and build variants, and if you want to set up projects quickly. You might also want to use East if you don't want to manage the development environment on several development machines by hand. This is our small roadmap. We are thinking about changing the configuration format for build types a little. We want to support patch files: sometimes we want to change how something is done in Zephyr, or we find a bug and don't want to wait for it to be fixed upstream, so we fix it ourselves. There has to be a way for a developer to just run west update, or maybe east update, and have all the patch files applied automatically. It's just an optimization, so you don't have to think about it, and it avoids some debugging time. Also, because we had an issue with this before, we need a way to show deltas when changing Kconfigs, to show what actually changed. A developer of mine was very puzzled why enabling one option enabled the whole logging system when he didn't want it; the option was simply unspecified and then got enabled. If he had explicitly set it to disabled he would have gotten an error, but he hadn't. So that's one thing we learned. Okay, demonstration time. Let me change the view here. What I will show you: I have here the example application from the Zephyr project repository, and I checked out a specific version of NCS for this presentation. So you have the general structure here. 
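The Kconfig-delta roadmap item could start from something as simple as diffing the generated .config files. A minimal sketch, assuming plain `CONFIG_X=value` lines (the real generated files also carry comments and "is not set" markers):

```python
def parse_kconfig(text):
    """Collect CONFIG_X=value pairs from a generated .config file."""
    opts = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("CONFIG_") and "=" in line:
            key, _, value = line.partition("=")
            opts[key] = value
    return opts

def kconfig_delta(before, after):
    """Map each changed symbol to its (old, new) value; None = absent."""
    old, new = parse_kconfig(before), parse_kconfig(after)
    return {
        key: (old.get(key), new.get(key))
        for key in old.keys() | new.keys()
        if old.get(key) != new.get(key)
    }
```

A delta like this would have made it obvious that flipping one option dragged the whole logging subsystem in with it.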
If we go inside the application itself, this is the general structure of the application. I already did some modifications for this talk, just to avoid some of the waiting time, and I will explain what they are. One of the things I did was change west.yml to turn this into an nRF Connect SDK project; I checked out the specific version and already ran west update. I also created a few board overlays. So, let's go and build the application. If you want to build this with East, you just write, for example, `east build`. If I run this, East tells me I don't have the nRF Toolchain Manager installed on the system, so it cannot do anything; it's not useful without it. It tells me to run a command that will do the system setup and download it for me, and it puts it in a local folder. Right. Now let's try this again: I have it now and I want to use it. It tells me that it detected which toolchain is needed, but I don't have it installed. For every version of NCS you have a separate toolchain, and it needs to be installed. So let's run that. I already downloaded the toolchain, so this part is going to go really fast, it just has to unpack it; otherwise it would take about three to five minutes, so this saves a little bit of time. Let's try again and see if it builds. Yeah, it does. Okay. So, taking this into account: if somebody had to do this from the very beginning, they would have to install the tool, clone the repository, run west update, and install the specific toolchain. 
Then, depending on their internet connection, they would be ready in probably about 10 minutes; they could build the project, run it, and actually start working on it. So that's how it works. Now, to show off the build types, let's see what we have here. The example application comes with some configuration files: prj.conf, rtt.conf and debug.conf. If we look at each of them: prj.conf enables a sensor, so for this one we're going to create our conf folder and just move it inside and rename it. Debug.conf enables some debug optimizations, logging and the console; we're also interested in this, so we move it in there too. And there is rtt.conf, which just enables RTT, but without logging, so we know that if we want this, we need logging enabled as well; these two files have to be used in conjunction. So let's move it into the conf folder too. Now, to actually configure this, we need to create the east.yaml file. I have already prepared a little snippet here, because there is an example on the project site. So, east.yaml, and here it is: like we said, three build types. Release is the implicit one, the debug build type just uses debug.conf, and the rtt build type uses debug.conf and then adds rtt.conf on top. We should be good with that. Now if we go to build our application, let's clean the build folder and run it again. You're going to see some extra diagnostic messages here, that the build folder was not found and a CMake build is running, and you can actually see that common.conf is used here. East echoes the commands. 
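The file moves from the demo above, written out as shell commands; a scratch directory stands in for the example app here so the sketch is self-contained:

```shell
# Recreate the example app's fragment files in a scratch directory,
# then reorganise them into the conf/ layout East expects:
# prj.conf becomes conf/common.conf, the rest become overlay fragments.
demo="$(mktemp -d)/app"
mkdir -p "$demo" && cd "$demo"
touch prj.conf debug.conf rtt.conf

mkdir -p conf
mv prj.conf conf/common.conf
mv debug.conf rtt.conf conf/
ls conf/
```

After this, conf/common.conf is always applied, and debug.conf and rtt.conf are only pulled in by the matching build types.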
So you can actually see what commands are being passed to the nRF Connect Toolchain Manager. This was useful for me for debugging, and it's also useful for understanding how this actually works. In this case it did just that, and it detected it doesn't have to rebuild, because all the configuration is already the same. But if we clean it and run it again, after it lists all the available toolchains, it launches the command that drives the whole build: you can see our west build command, the conf file set to common.conf, and the touch success.txt part. So that's what gets created. Now, if we wanted a different build type, say rtt, East would detect that you changed the build type, so it has to rebuild the project, and it would add the two overlay configs here; you can see common.conf, debug.conf, rtt.conf. And it did that. That's basically it: you switch between build types however you want. The next useful thing is `east release`. If I just run it anywhere inside the project, it looks at the east.yaml file and tries to build all the possible combinations of the images that I have. I made it so that you get a small table telling you what kind of app you are building, whether it's an app or a sample, which board you are building for, and finally the build type. At the bottom you see a, let's say, redacted version of the build commands being run, so you know what is happening. If anything broke at this point, the build would stop, you would get the build output, and you could try to figure out what went wrong in the process. Now it has created the release files. If we go here, we can see there's a new folder inside: the release folder. 
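The combination logic behind `east release` can be sketched in a few lines; the app, board and build-type names below are illustrative, and this is also how counts like the 81 unique builds mentioned earlier explode:

```python
from itertools import product

def release_matrix(apps, boards, build_types):
    """Every (app, board, build-type) combination that a release build
    would expand into one east build / west build invocation."""
    return list(product(apps, boards, build_types))

# One app, two boards, three build types -> six images to build,
# rename and zip into the release folder:
builds = release_matrix(
    ["example_app"],
    ["custom_board", "nrf52dk_nrf52832"],
    ["release", "debug", "rtt"],
)
```

Each tuple becomes one row in the table the demo shows, and one renamed artifact in the release folder.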
Inside there are a few zip packages and a folder which contains all the created binaries. The zip packages basically contain what's inside the apps folder and are useful for pushing straight to GitHub. If we go into apps, you can see, for example, the example app that we named; you could have several apps in your project. Under it you have the different build-type versions; for example, under rtt you see that it was built for two different boards, and at the last level you get the final .bin, .elf and .hex files. Currently the naming format used here is not configurable; it's the one we use, and maybe it's a little verbose, but it helps prevent accidental mistakes. One specific thing is happening here: you can see that the last tag is used as the version. Because we checked out that tag, it appears here, but because we have modified the Git repository, there are some changes, so the system adds this qualifier, which is the Git hash of the current state, plus a '+' which says that your repository was dirty at the time of the build. This is useful, for example, when you are quickly debugging and need to perform the whole release process repeatedly, because you are handing the supposedly final image to a client or maybe a colleague, and at some point you need to be able to identify which one was working and which one was not; that's how we do it. If you check out a tag directly and there are no uncommitted changes, you're not going to have this qualifier. Yeah, I think that's about it. Thank you for your attention. Q&A time. Online? Sorry, is the mic working? We have a question online. Okay, I'll just talk, fine. 
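The artifact version scheme just described, sketched as a function; the exact format East uses is an assumption here:

```python
def artifact_version(last_tag, short_hash="", dirty=False):
    """Version string baked into release artifact names.

    Sketch of the scheme from the talk: a clean checkout of a tag
    yields just the tag; otherwise the short git hash is appended,
    with a trailing '+' marking a dirty working tree.
    """
    version = last_tag
    if short_hash:
        version += f"-{short_hash}"
    if dirty:
        version += "+"
    return version
```

So a clean tag checkout gives `v2.1.0`, while a modified tree gives something like `v2.1.0-ab12cd3+`, which is enough to tell two "final" images apart later.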
So Luke Seegers is asking: do you use this in CI/CD? If we need to install all of this every time in GitHub Actions, for example, it will take a lot of time; do you have a solution for that, caching, Docker, things like that? Sorry, can you repeat please? I didn't hear well. Are you using this in CI/CD? Currently, at the moment, no, because we didn't get that far, but at this point it is trivial to actually make it run in CI. The worry is, of course, that you're downloading something and that you would have to re-download it every time you do a build. This is not true, because in GitHub Actions you can use caching. East saves all the toolchains and the nRF Toolchain Manager in one folder, and you can tell GitHub Actions "I want this folder to be cached". You still run the same commands every time, and East just reports that everything is installed and continues with the build. It's going to take a long time the first time, but not for the repeated runs. Okay, I've got a personal question here: are you able to enable SBOM generation? This is actually one of the issues we have open, and we didn't get to it yet. Because I think there are some things you might be able to extend: the generation could capture things that would be useful, especially configs and things like that. At this point, right now, you can run it, but you cannot do it through East; you still need the west tools installed to generate the SBOM. 
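The caching idea from the answer above, as a GitHub Actions step sketch; the cached path is an assumption (wherever East keeps the Toolchain Manager and downloaded toolchains), and the cache key is one plausible choice:

```yaml
# Sketch of a workflow step; path and key are assumptions, not the
# documented East locations.
- name: Cache NCS toolchains
  uses: actions/cache@v3
  with:
    path: ~/.local/share/east   # assumed toolchain install folder
    key: ncs-toolchain-${{ hashFiles('west.yml') }}
```

With a hit on this cache, the setup commands see the toolchain already in place and fall through to the build immediately.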
We could, of course, add this command to East, and we intend to do so, but apart from basically aliasing the command, turning it from a west command into an east one, I guess the added value would be a different representation: here's your SBOM, this is what you use, maybe what's changed, and things like that. I think there's much more added value we can create with this. Yeah. Happy to have a discussion about how we can make it a little better, because I've got ideas here too. Oh, okay, cool. Yeah, sure. Thank you. So, congrats on the tool, it's really cool. The thing is, I was thinking: the NCS Toolchain Manager, the nRF Toolchain Manager, it's not officially released, is it? You're basically extracting it from the nRF Connect for Desktop package, aren't you? No, it's on GitHub. There's a pc-nrfconnect-toolchain-manager repository, and some people from Nordic basically said "we are using this, try it out". After we figured out what it does, we check every time whether there's maybe a new version, and we download the binary directly from the GitHub URL. Okay, I see, I understand. I'll bring this up internally to see if we can actually make official releases of it, because if you're using it, it makes sense. Yeah, we found it useful. I understand that this might be a specific use case; you could say I'm used to using this stuff on the command line, and I know that Nordic is actively developing VS Code and the whole environment around that. 
But we saw that even if you're using that, it's still very beneficial for us to use the same tool that we would use, for example, in CI, so we see what the expected result is. Developing stuff in GitHub Actions takes a long, long time to debug, the debug cycle is quite long, so we'd rather test as much as we can on the local machine. That was one of the things we wanted to do. So the other thing I want to ask is: do you test this on macOS and Windows as well, or just Linux? This tool currently only runs on Linux, because that's mostly what we use. We had some clients who wanted to run it on Windows, and we said, okay, internally some things in East would have to change; it's essentially a Python program, so it would have to become compatible with both platforms. I think some of the clients made it run in WSL on Windows, but not because of the Toolchain Manager, because I'm pretty sure the Toolchain Manager supports all of them. Yeah, it does; there are versions, I think, for Apple Intel, Apple Silicon, Windows and Linux. Yeah, but it was running in WSL, and as far as I heard they had no problem using it that way. But I guess it then depends on how you're doing the debugging and any extra stuff; I think it builds fine. All right, that's it, thank you. Thanks. Any more questions? Yeah, hi. The Python dependencies, are they installed separately or are they running in a virtual environment? The Python dependencies of East, or of Zephyr in general? The Python dependencies of Zephyr are contained in the nRF Connect Toolchain Manager. 
That's the great part about this: it's already built in, so you don't have to do it yourself. We are running out of time, so if anybody has any extra questions, feel free to talk with me. Thank you.