All right, thank you for coming. So as I mentioned in the keynotes this morning, this will be a talk mainly about Debian and OpenEmbedded/Yocto, and the whole development process from prototyping to commercialization: some issues to think about, benefits of each of them, concerns about each of them. But additionally, just an introduction to the Snapdragon 410 platform. Anyway, I'll get to the slides and I can explain it from there. So the 410E embedded platform is the chip itself, basically. And then there can be SoMs created from that, and other boards. But there's a reference board, the DragonBoard 410c development board, which is compliant with the 96Boards spec. It's available from Arrow. Linaro produces software for it: there's a Debian build and an OpenEmbedded build. The 410E processor basically is a quad-core Cortex-A53. It's got integrated connectivity: Bluetooth, GPS, Wi-Fi. It has a Hexagon DSP in it. Currently, the builds for OpenEmbedded and for Debian don't have Hexagon SDK compatibility; that's being worked on, and sometime in the near future that should also be available. There's an Adreno 306 GPU, which is supported by the open-source Freedreno driver. And then there are several other peripheral interfaces there at the bottom. So this is supported with an upstream kernel for graphics, video acceleration, audio, and I/O. One thing that is different with the 410c than some of the other embedded boards is that it uses the Android boot image format, and it uses the LK bootloader instead of GRUB or U-Boot. OK, this is my last slide on the 410c, and then I'll get into the other parts. So the difference here: the Snapdragon 410E part is a long-term-availability part. So if you're prototyping something, building something today, that part's going to be around for 10 years total. If you're looking at prototyping, the DragonBoard 410c is a great platform to do your prototyping on. And they're both available. 
They'll be sold through Arrow. There's also a link there to other embedded platforms if you want to see some of the other commercial options for getting the 410E. So back to Debian. Let me start with just giving you an overview of the Debian ecosystem, some of the pros and cons. So Debian has a huge repository of pre-built packages. If you're trying to get up and running quickly, I mean, this is the benefit that people have seen with Raspberry Pi and many of the developer boards: you don't have to build everything from scratch. You don't have to start with a nine-hour build time to get up and running with a root file system. You can basically apt-get install all the things you need; many things are prepackaged, and at least the dependencies are there for other packages. And that's huge for prototyping. That's really going to save you some time. There's a very active community of supporters and contributors to Debian. They do a great job doing updates, doing bug fixes, supporting the releases, being transparent. They have the Debian Social Contract, which talks about how the Debian packages will remain open, how they will also allow for closed-source packages to be used with Debian, and how any of those packages can be used not necessarily in the aggregate of Debian, but individually as well. It really lays out all the things that are useful to understand how this can be used commercially. The one caveat I would say is, if you don't have a long history and background in Debian, with the package management and the history and evolution of Debian, much of the documentation that's there is going to be very, very difficult, because it presumes a lot of understanding of the different packaging projects that have been there. They've evolved over time, and some of the documentation is a little stale. So those are some of the things you're going to find with Debian. Yeah, so releases. So Debian has stable releases and testing releases. 
At any given time, there's one stable release and one testing release, which have the support of the Debian security team. And then when a new version is released, when testing moves to stable, there's usually about an additional year of support provided for the old stable from the Debian security team. However, there is another group, the Debian long-term support (LTS) group, which is extending the lifetime of Debian stable releases to at least five years. So this is really, really great for companies that need to provide long-term support for platforms. And this is also being looked at by certain other projects, like the Civil Infrastructure Project. So the Debian build methodology historically has been native build. If you have something that you want to build in Debian, you build it natively on the platform that it's on: if it's ARM hard-float, you build it on an ARM hard-float platform, et cetera. For instance, if you have a 410c or a Raspberry Pi, any of those boards, and you have a simple project that you want to build, it's simple to just build it on the board itself. Install all of the necessary build packages and build it. Easy, super simple. However, if you want to build Chrome, you're not going to do that on your Raspberry Pi. You're not going to do that on most embedded boards. If you're trying to build Clang, I don't even think you can link it on most of these boards. So one of the other things that people use is doing a native build, but doing it in QEMU. Then you have access to a lot more RAM and you can get all the packages installed. You basically create a sysroot using multistrap, and then you can install all the dependent packages that you need. You chroot into that sysroot, and you can bind-mount everything you need so that you have network access and can apt-get install additional packages if you wanted. The caveat is that it runs at least about five times slower than your native system will. 
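As a rough sketch of that emulated-native workflow, assuming a Debian host with qemu-user-static and multistrap installed (the config file name and arm64 target here are illustrative, not from the talk):

```shell
# Create an arm64 sysroot from a multistrap config (illustrative file name).
multistrap -a arm64 -d ./rootfs -f multistrap.conf

# Copy in the static QEMU user-mode binary so ARM binaries run on the x86 host.
cp /usr/bin/qemu-aarch64-static ./rootfs/usr/bin/

# Bind-mount the pseudo-filesystems the chroot needs, and give it DNS.
sudo mount --bind /proc ./rootfs/proc
sudo mount --bind /dev  ./rootfs/dev
cp /etc/resolv.conf ./rootfs/etc/resolv.conf

# From here, "native" package installs and builds run under emulation.
sudo chroot ./rootfs apt-get update
sudo chroot ./rootfs apt-get install -y build-essential
```

Everything inside the chroot behaves like a native ARM system, which is exactly why it carries the roughly five-times emulation slowdown mentioned above.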
And that's the challenge. There is work going on, especially I believe in the Buster release, to improve the cross-build capabilities of Debian; some of it is there already. Debian also supports MultiArch, so that you can install packages for other architectures. So I could install the basic glibc and other packages onto my PC; say I could do arm64, load those packages in, and build and link an application. But there are many problems. For instance, say I need the Python dev headers. If I want to install those for a foreign architecture, they collide with the native ones, so I can't actually install them. So there are some problems just doing that. The cross-build work is not the same as MultiArch; it uses a sysroot approach as well. There are some things that are still broken, but it seems like it's evolving; it seems like the story's improving. I can't say that I have gone down that road, because it's fairly new, I think, in the sense that it may now be more viable. So, the .deb package format, for those not familiar with Debian: basically it has your control files in it, it has the description of the package dependencies, it has all of the package contents laid out. There are lots of tools for querying the packages and installing them. It's a package format also supported by Yocto, and any Debian derivative like Ubuntu uses it as well. But there's a really confusing evolution of packaging helpers to create these packages. One of the more recent ones is git-buildpackage: if you have a git repository, you can use it to build a package. There was pbuilder, which I think is still used. There's sbuild, which I don't even have any idea about, because I've never used it. And then there's dh, which is basically debhelper 7. You'll see references to these if you're reading the Debian documentation, and this is sort of the big learning-curve part if you're really trying to get into that. Typically, the Debian methodology is you would create a source package. 
And then the source package would be compiled to generate the binary package, which you would then deploy. However, if you have binaries that are pre-generated, that are closed source, that you need to repackage into your system, that's not typically the Debian methodology, and it's not a real good fit there. So one of the things I've done in the past is just basically create my own control file, install the files into that, and then run dpkg-deb --build just to package it up with the necessary dependencies. And then I can integrate it into my system: I can install it, uninstall it, and it makes it much cleaner than just throwing a tarball into the system. So Debian has very friendly terms for commercial deployment; some links there. If you are rolling something out and you don't want just an apt update and an apt upgrade pulling in all the different packages, if you need to have a tested release deployed at any moment in time, you can pin packages if you want, so you don't pick up certain packages. But you may want to control when an update can be done, obviously by not allowing root access and not allowing anyone to just do an apt update and an apt upgrade. There are some license compliance tools that are there, mainly done for the overall Debian repository. There are some tools available, if you're creating an application, that will go through and find out what all your dependencies are and then tell you what those licenses are. So if you're doing license compliance for your product, obviously this is a big, important thing. And having five years of support is a huge benefit of Debian. Currently the Linaro build for the 410c is based on Debian testing. So if you're prototyping now, depending on the life cycle of your prototyping phase, if you plan to roll out on stable, it's actually a good pipeline, rather than rolling out on stable and eating up a year of your support time. 
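A minimal sketch of that repackaging trick, with all names made up for illustration; a bare DEBIAN/control plus dpkg-deb is enough to turn a pre-built blob into an installable, removable package:

```shell
# Lay out the package tree; paths and package name are hypothetical.
mkdir -p myblob_1.0-1/DEBIAN myblob_1.0-1/usr/lib
printf 'dummy' > myblob_1.0-1/usr/lib/libblob.so   # stand-in for the real vendor binary

# Minimal control file declaring the runtime dependency.
cat > myblob_1.0-1/DEBIAN/control <<'EOF'
Package: myblob
Version: 1.0-1
Architecture: all
Maintainer: You <you@example.com>
Depends: libc6
Description: Repackaged pre-built vendor library
EOF

dpkg-deb --build myblob_1.0-1   # produces myblob_1.0-1.deb
```

Pinning, by contrast, is done with an entry in /etc/apt/preferences (a Pin-Priority stanza), so an apt upgrade won't move a package past the version you've tested.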
All right, so I'll switch over to Yocto and OpenEmbedded. So there's maybe some confusion as to what's the difference between Yocto and OpenEmbedded. There's a couple of good sites I found; strangely, Wikipedia actually gives you a fairly good description of the layers and the way that this works. Basically, what OpenEmbedded is is a series of layers of recipes for building packages, which then can be brought together to create a root file system that you can deploy. Koen Kooi has some really good slides that describe the terminology of this, if you're interested in understanding more or find any of the terms confusing. But just to cut to the chase: basically, OpenEmbedded is a build system based on BitBake, which is a tool to build the packages from these recipes. OpenEmbedded is not a distro; it's just made up of collections of recipes for BitBake, organized by layers. So what is Yocto? Well, Yocto basically provides a reference distro that's built with OpenEmbedded, but it adds a lot of additional tools and recipes, and I'll get into some of them. Yeah, so one of the big benefits of OpenEmbedded and Yocto: if you're building products that are cost sensitive, and you really need to reduce flash, reduce RAM, reduce the BOM cost, and you're super sensitive to the size of your root file system image, this is going to be better for you than a typical Debian system, which may have many more base packages or package dependencies. You have flexibility, you have control: you can decide what options you want to enable in a package when you build it. You can build with BusyBox versus a set of other packages. You can choose whatever toolchain you want; you're not stuck with the toolchains that are available in that particular release of Debian, for instance. They provide tools for software compliance. 
You can basically generate an SPDX report that will tell you what all the different packages are, from your root file system on out through its dependencies. Sometimes that's needed if you're providing things in a supply chain. But Yocto and OpenEmbedded have a huge learning curve for anyone who has not used them before. It is basically going to be a whole different way of doing things, compared to just taking your root file system, pointing your compiler's sysroot there, and building. You have all these recipes; you have Python layers that build it. I have definitely had to dive into the Python layers of BitBake to debug things and builds. So that is the challenge with it. It also takes a long time to build the images, depending on how big they are, if you want one with X11 and a bunch of extra packages. And depending on the speed of your machine, it can take... it's taken me up to like nine hours to build a root file system image. And if you want to change something and tweak the configuration of your build, you may end up regenerating the entire build again. So there are lots of challenges with this, but you get the flexibility that you need. It also requires lots of storage and processing power; you definitely want plenty of fast disk when you're starting to work with this kind of system. So, when I adopt this, what am I getting myself into? Basically, if you're using Debian, you're installing packages and you have a platform that developers can target: someone can write a third-party package, and they know how it's going to run, they know what the dependent libraries are. When you are using OpenEmbedded, you're building your own distro. You are your own distro maintainer, unless you're getting OpenEmbedded or Yocto from someone like a Mentor Graphics or someone else. You control the system updates. You control getting all those critical fixes in there, and when they're deployed, and whether users can install packages or not. 
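For reference, in current Yocto releases the SPDX generation mentioned above can be switched on from the build configuration. This is a sketch only: the create-spdx class ships with recent Yocto releases (3.4 "Honister" and later), not the releases contemporary with this talk, which relied on add-on layers for SPDX output.

```shell
# Enable SPDX manifest generation for image builds (recent Yocto releases).
cat >> conf/local.conf <<'EOF'
INHERIT += "create-spdx"
EOF

# SPDX documents for the image and its packages then land under
# tmp/deploy/images/<machine>/ alongside the usual image artifacts.
```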
But there's no third-party software ecosystem for your specific distro unless you create it. Basically, when you do a Yocto build, you can do a build of the SDK for your specific distro, and then that is what a third party, internal or external, could build with. So say that you have a development team that does not want to develop using OpenEmbedded, because they don't have the expertise, they don't have the time, they don't have the overhead; they're used to building with a sysroot. You can generate an SDK for your baseline platform and give it to that other team. They can generate their software product and then run it on top of your BSP. We've used that internally at Qualcomm. That's a good model for separating components when you're doing rapid prototyping. And there are a couple of ways to do these SDKs. There's the standard SDK, which is really just like a sysroot. And then there's the extensible SDK, which has a totally different workflow that lets you build and package your software and upload your recipes into it; you basically extend your BSP so that other people can create extensions that are compatible with your platform. There's a great talk from last year's ELC, which I gave a link for there. Let me check on time. All right, lots of time. So what's the workflow then for doing this? When you are using Yocto, you basically create your own layer of recipes that are custom. You can take existing recipes and just tweak them slightly using an append file, a .bbappend file, that says: take this recipe, but instead do this, or add this patch, or change this version. Or you can create a whole new recipe and add that in your own layer as well. So in Linaro's case there, they've created a meta-qualcomm layer that adds Qualcomm-specific recipes for things that are specific to that platform. You then need to aggregate all the layers. 
Whether it's a Yocto layer, an OpenEmbedded layer, or an external layer (for instance, there's a meta-ros layer available on GitHub that you could integrate as well), you then create this bblayers.conf file that puts all those layers together and says: here are all the different layers that make up all of the different recipes that I can pull from to build my root file system. You then set up your local.conf file that says: OK, this is the target I'm building, this is the compiler I'm using, the target machine, all my build flags, whether I'm masking any packages, and anything else that defines how I'm going to make my build. And then you can build either standard targets that are in Yocto, for instance bitbake core-image-minimal, or you can define your own images for your specific product, or you can build what are called package groups, which define a whole bunch of packages related to one thing, and then just build that particular package group. You can also build an individual package if you want to specify that. So that was the workflow for OpenEmbedded. There are some hybrid approaches between Debian and OpenEmbedded. One of them was presented here before: Isar was something that Siemens had initially started, and it basically was a package builder, not a distro builder. So it could take any Debian-derived OS, like Debian or Emdebian or even Ubuntu, and you could build compatible packages for that distribution using this system, which basically used BitBake and BitBake recipes and used all of the headers and everything from Debian. So it's a way to do a cross-build for Debian, basically, and it lets people who are used to creating embedded products with OpenEmbedded and BitBake do that for a Debian-based system. I've tried it, I've used it. There were some caveats: I had to run several things as root. 
It also maps your devices from your kernel into the sysroot, and if you rm -rf your sysroot, you remove all your devices off your machine, which just happened. So it's a little fragile, and it's not something that I would have felt comfortable rolling out to people, at least at the time that I used it. There's another one called Deby, which is a merger of the Debian approach to things and the Poky approach to things, and it became Deby. It is not a package builder; it's basically a distro builder, in the sense that you can build your own custom distro, but it uses all of the source packages from Debian to do that. And so what's the point, why would you do that? Basically, you have that five-year support for all those Debian source packages. The Debian community is committed to supporting those, so if you're creating a product, and you need to create your own custom one, and you don't want to rely on the packages in Yocto, which only have a year of support, you want to leverage those. This is a great way to figure out how to incorporate that long-term support with your custom platform build. This is what's being looked at by the Civil Infrastructure Project, which is currently based on Deby. And anyway, there's a link there to more about that. So, commercial deployments of OE and Yocto. This is kind of why it exists: commercial deployment. Yocto was created by a bunch of commercial companies in the OpenEmbedded ecosystem. There was Mentor Graphics, there was Wind River; I can't even remember all the different players that were there. And basically they wanted to consolidate the directions of things, and to figure out how to scale and not replicate common activities. And so, excuse me, that's basically why Yocto exists. There's no commercial distro, I would say, from Yocto. It provides Poky, which is a reference distro: basically, this is how you would put a distro together. And Linaro doesn't use Yocto per se. 
They create something called the reference platform build, and Poky is not part of that. So Yocto, as I said, makes two releases a year, and each release is only supported for one year, which is a challenge commercially. If you're deploying something that's based on Yocto, you now have to figure out: where am I going to get my fixes from? Where am I going to get security fixes? How am I going to pipeline those in? Who's responsible for that? Can I pay someone to do that? Those are questions you're going to want to answer if you're planning to deploy and ever update your product. And there are some companies that are offering commercial support: you can buy their BSP based on Yocto, customized for your hardware, and have a support contract through them. So then product... yeah, correct. So let me say, take the same Mentor Graphics example, right? In order for them to have a business at scale, what they would do is create a distribution, a reference BSP, that works across multiple SoCs, that basically has a common tested platform, and then they would do the tweaks to change the kernel for that particular SoC or others. And then the incremental difference is basically what you're charging for, so that it's scalable to charge to different customers. It wouldn't be possible for them to create a special-snowflake distro for every different customer and support every one of them for 10 years. It would be phenomenally expensive for someone to do that. If you're creating something highly fragmented, it's going to be very expensive. It all depends. Yes, true, true, fair enough. So certainly, the more common it is, the more scalable it is, and I think that is the point I'm really trying to make. Okay, so for planning, for productization, some potential gotchas. For a lot of Qualcomm platforms, I know that there are older compilers used, like a GCC 4.9 compiler, to build things. Some of those things are on Qualcomm Developer Network. 
And so if you're trying to grab some of that stuff and integrate it into something like a recent version of Debian, you're going to find that there could be some challenges involved. Say you have proprietary middleware that you've been using, say that you were actually using Android. So there is an Android build that you could run on a 410c, but it's not something that's commercially supported; it's really more for the community. You're going to find that it's a downstream kernel: it has access to certain pieces of hardware that are maybe not enabled in the fully upstream build yet. So again, something to be aware of if you've used Android and you're anticipating that all of these things are going to map over: you want to check that first. There are pre-built libraries that may have different C++ ABIs. You have the change that happened in GCC 5, where they moved to the proper C++11 ABI and broke Clang at the time. And so there were a lot of issues, depending on the compiler that you had through that timeframe. There are also people who are using the Android compilers, and those versions of them, and then trying to move to an IoT platform that has a much more recent compiler. There's a flag that you need to set, basically, to select the ABI that you're building against, and to make sure that you're building things that are compatible as you're putting all these libraries together. Commercial support and software updates, LTS kernels. Basically, there was an announcement from Greg Kroah-Hartman at the Linux Foundation about this: it was five-year support or just six years... six years of support, yeah, for LTS kernels now. And so there's the question of: do you want to stay on a kernel that old versus not? Sometimes you have to, but basically it provides you that option. And then, do you want to have a frozen OS, basically? 
If you chose Yocto and you chose to go with the Morty release, and you're going to support that hardware for 10 years, are you going to support the Morty release on that hardware for 10 years, or are you going to migrate users from Morty to Pyro to Rocko and so on as it goes? So you want to think about how you're going to do updates to your system, given the way that support is structured for bug fixes and security updates and everything for those platforms. And then open source compliance: what tools do you have? Do you need to provide SPDX for your... what is it... supply chain, that's the word I'm looking for, or for customers who use your product? And then on top of all that, if you have a platform, how many third-party tools do you want to leverage that may or may not be built for the specific custom distro that you make? If you have middleware components that you want to use: say AWS, for instance, may not be able to target and build custom for every individual BSP; it would depend on the size of the customer and those kinds of things. So if you have something like Debian, which is a scalable, addressable platform for third parties to create software for, you have a much easier way to roll out and support third-party packages on your device and platform. Robot Operating System, others as well. There can be lots of pain trying to incorporate ROS and all its dependencies and everything else into the Yocto platform, especially as it rolls from release to release. For Debian, it's basically an apt-get install. All right. So I wanted to leave you with some useful links. I hope the slides are useful standalone. But basically, if you want more information on the DragonBoard 410c, there are links off that page to both the Debian and OpenEmbedded builds for it. 
There's the Qualcomm Developer Network page that has lots of different projects that can be done on the 410c. Debian resources: if you want to find out more information for Debian, there's the support page there. OpenEmbedded: there's a guide on 96Boards, which is very useful, that talks about OpenEmbedded, gives a general overview, and then shows how you can use it on the 96Boards platforms. Yocto has a very extensive developer manual that you can get access to there for each of the different releases. Pardon me. And then Arrow Electronics is where you would go to get a DragonBoard 410c or any of the accessory boards that are available for it. So thank you very much for coming, and I'll take some questions. Yeah. So there's a trade-off; there's a break-even point, right? Say that I have 10 packages that I have to build. If I have to build 10 packages, it's way easier to do it in QEMU, because five times the cost of 10 packages is not the time of building an entire OS from scratch. Ah, one sec. But if you have 100 packages or more to build, your time starts to become almost as long as building the OS from scratch. It depends how much of the OS is your custom stuff and how much of it is differentiating. Correct. So the question was about building a single package, and that it doesn't take that long to actually build a single package. That's correct, in that once you've eaten the initial cost of building your BSP, and you have a package and you want to make a modification, making a modification to a specific package that changes no platform dependencies is relatively quick. And so that's not a big deal once you're used to the workflow. If you change anything in that package, or you change something in the platform for that package, you are rebuilding the whole platform. Yeah. Yes, yes. Correct, I'll summarize the comment. 
So, good point, in that what Yocto does is it doesn't make you build the entire distro from scratch every time you change something; it will track the dependencies of the things you changed and build only the components necessary for that change. So, correct. Right, so the DragonBoard itself is not a 10-year part. The chip, the Snapdragon chip, is what's available for the 10 years, and there are many hardware partners that are making either SoMs or boards based on it. Software-wise right now, if you're going through Linaro, there's the Debian build if you want to prototype on something, and if you're really going to, you could commercialize on that as well. If you want to use OpenEmbedded, certainly that's another path you can take. Linaro right now updates every time there's a new OpenEmbedded release; they'll rebase their OpenEmbedded build on that. So if you're going to roll with the OpenEmbedded releases and you choose OpenEmbedded, then that's basically your path forward. If you're going to freeze, then you've got to decide who's going to do the support for you. So I would not say that it's Qualcomm's issue per se; these choices are yours. But, correct, anything that's in the upstream kernel that supports that device is not likely to be removed from it. There are cases with old platforms where the maintainers would ask Qualcomm: is anyone using this anymore, should we remove it or not? If it's already supported in the upstream kernel, it's not going to vanish overnight. No, I don't think anyone is saying there's a long-term commitment to support something beyond what's already supported in the upstream kernel. Yeah, did you have a question? You were asking about building the toolchain three times. 
Yes: you're building the toolchain, you're building the Canadian cross toolchain or the bootstrap toolchain, and then you're building the final toolchain, and then you can do the build. Yeah, the 410c, and it is supported upstream through Video4Linux, V4L2. I believe there are OpenMAX drivers for the 410c; yes, I believe that's true. So there's a zero-copy pipeline that Linaro has put together for the upstream support for the video core. I believe it's based on the OpenMAX IL components wrapped around the GStreamer layer; the other parts are there, and you can leverage those directly. The Linaro guys are probably the best ones to talk to. Are there any other questions? Comments? Corrections? Great, thank you everybody for coming.