Thank you. Hello, and welcome. Thanks for joining my session today. My name is Tobias, and I want to tell you the story of how I fell in love with Zephyr. It's been a long journey that started well before the actual inception of Zephyr, but it has culminated in a statement I would now make every day: I don't want to start another embedded product development without using Zephyr. How that came about is the content of my talk. I also have to mention Stefan, who's in the front row. Most of the work I'm going to present today has been joint work with Stefan and a few other collaborators, and I'm glad he's in the room, because if you have questions on the more technical details that I will generously skip during the presentation, on devicetree macros and the like, Stefan will help me out. A few words about ourselves. I'm a staff engineer at a company called UL Method Park in Germany. That won't tell you much, but I'm typically engaged in projects where clients seek advice on medical devices, and I usually act as a system architect and also as a software architect. So I'm not a day-to-day coder anymore, unfortunately; I work more with the boxes and the diagrams, if you know what I mean. We'll come to that, too. As I said, we are in the medical device industry. Before my current position at UL Method Park, I worked for a medical device company for about seven or eight years. We've seen medical device projects all around the globe: here in Europe, in Asia, and in North America. I have learned and seen a lot in the embedded device industry, and a lot of it I wish I hadn't seen, but unfortunately I had to. That's also something I want to share with you today.
Beyond the development of embedded devices, we're also looking into connected health services, because nowadays every embedded device is also a connected device, as you know, and that is obviously true for the medical space as well. The two of us are also the maintainers of a project we launched on GitHub called the Bridle project, and Stefan will give another talk on Thursday where he will introduce it in a wider sense than I will be able to during this talk. Our mission at the moment is to bring Zephyr into the medical device domain, to really allow and empower medical device manufacturers to use Zephyr in their next medical device. We're not there yet; the Zephyr upstream project isn't there yet. There are still a lot of things to be solved, but a lot is happening right now, and it's all looking promising. So that's definitely something we're both willing to commit our time and energy to. We also do Zephyr trainings for our clients, so if you want to talk about trainings, come talk to us as well. This talk, as I said, is about a journey, the journey I took, and it started roughly ten or twelve years ago. You might say that back then, embedded product development might have looked the way I'm going to describe it on the next slide. But unfortunately, I've since seen many, many other embedded product development projects, and to date they still run the same way. That's why I call this the status quo of embedded device development, as far as I can tell. I know it's not true for most of us here in the room, but it is for a lot of the clients I work with and a lot of classical product developments. So what do I mean by that? This was a PCBA, a printed circuit board assembly, that was part of the very first product I helped to build. The workflow went roughly as follows.
We captured the requirements initially, all of them, of course. And then, for whatever odd reason, and this has remained an invariant throughout all the projects I've come across since, the second most important step is to select a processor. I don't know why that is, but typically the second step in any product development is processor selection. Then you get some eval kits for your software people. Then you design the hardware and make a couple of prototypes, typically five-ish. They work the first day they come off the factory belt, everything's great. The software people have already started creating the software on that eval kit; you port it to the prototype hardware that has been developed, you complete the firmware design on the prototype hardware, and then you ship it. That's the idea of embedded product development, at least as far as I can read it off the plans. In medical device development, everyone has to have a development plan, and these are the steps I typically see in product development plans. The point is, and I'm not sure about you, but I've never seen that happen in real life. It never worked that way. On that particular project, I still remember the pride we felt when we had signed off on the requirements and submitted them to the design agency that would then build that PCBA. We had a good evening, cheers all around. The next day, I already had the first change request in my inbox, from the life scientists who would eventually use that system to build the next medical product they were after. And then came the development of that PCBA, which took several months; if you want to count the connectors, there are more than 30 on that board, so there were many more subsystems that I'm not showing here. It took a couple of months.
We had more than three dozen change requests before the first prototype actually appeared in the physical world. And the porting of the software didn't take two, four, or six weeks; it eventually took nine months. That was when I felt something was wrong with product development in the embedded space. We tried to analyze it, and over the years I've come to recognize a certain number of anti-patterns that I see over and over again, which I've listed here. One of the most prevalent anti-patterns still in existence today is the implicit assumption that requirements won't change: you just need to get them right once, and then everything works. It has never happened on any single project I've worked on that requirements didn't change. The next anti-pattern, as I've called it here, is that electrical engineers typically tend to believe that two hardware versions are all they need. The software people pick that up from the hardware people and make sure that the software currently in the code repository only ever works on the most recent of those two hardware revisions. You cannot go back. That is another failure which has caused a lot of pain and grief in the projects I've seen: you could only ever run the latest firmware on the latest hardware prototype. I call that the port-forward-only policy for firmware. And then there is the belief that, for whatever odd reason, you should already save time, space, and money during development by omitting debug LEDs or any other means of understanding the bring-up of your board, because eventually you want to ship it right away in the product. That one isn't for the coders, obviously; it's more for the electrical engineers to pick up. But as I said, my role is that of a system architect.
So I need to understand how these two groups of people work and how we can improve the way they work together. All in all, you have dozens of reasons to believe that there are deadlock conditions in your development process, where either the software people are waiting for the hardware people, or the hardware people are waiting for the software people because the software to bring the board up hasn't been written. That just causes drag and friction in the development process, and it has in pretty much every single project we have supported customers in. And the linear sequence I showed initially: I came to believe that is just a fallacy. There is nothing linear in product development. The more innovative your product is (and in life science it is typically just not clear what the product needs to look like and what functions it has to have), the more you need to think about your organization and the system as something co-evolving. Being an architect, I love the word architecture on slides; that's why I put it on that slide three times. I had to understand and realize that it's not only the system architecture that I as a system architect need to think about; it's also the collaboration and the knowledge architecture. The knowledge architecture is about how we make sure that on a typical project (we typically do projects with anywhere from a dozen to 120 people) everyone on that product development team shares a common big picture, so that they all know what they're working towards. What's the common goal? How do you make knowledge and information available in an organization of up to 150 people, and how do you propagate new pieces of information? That ties directly into another setup you need to think about, which I've called the collaboration architecture.
And that is all about how the work gets done: whether these are scrum teams, because that was the latest fashion (well, it's not the most recent fashion anymore, but people still want to work in scrum teams), and also understanding that what I produce as an electrical engineer, be it a schematic, will eventually be consumed by someone else on the larger team to produce firmware. How do I make sure that information isn't lost or misinterpreted in that handover from an electrical engineer to, say, a software engineer? All these things interrelate with one another, and that's typically what I spend my days thinking about. This is why I think product development is just hard. It's inherently hard, because what I'm describing here is a complex system with nonlinear interactions due to feedback. One of the ways out that we've come to believe in, and that we are promoting, is on the one side to adopt the ideas of agile software development for hardware development as well, and I will talk about that in a second. The other, for the knowledge architecture, is to capture our knowledge not in written documents like Word documents or Excel spreadsheets, which are still the de facto standard in most companies. I mean, there are cloud-based versions with Office 365 these days, but the essence is still the same: it is not formal, machine-readable information. In contrast, what has been promoted for the last 15 years and more is what's called model-based systems engineering, and that's something we now try to apply to the projects we work on. It is a very structured approach in which you capture your information in models that you can let talk to one another. You can extract information from a model, and if you change something in one model, or in one view of a model, it is reflected at the other end. And typically you start with requirements.
They come from your users, and they come from industry standards; in the medical space, as you can imagine, there are many ISO and IEC standards that medical manufacturers have to follow. First and foremost, you capture them in a system model. That's my job; I typically work here. Then comes what's called downstream engineering: the different engineering disciplines translate various aspects of that model into design artifacts in their respective domains, be it a mechanical design, an electrical design, or software. What you want to make sure of is that these models keep talking to each other. For me (and we've spent a considerable amount of time thinking about this), it was about the interrelation between the ECAD models and the software models: how do we make sure that if someone makes a change to a PCBA design, the software people take notice in due course and can respond to it? That was the idea. And we've got all our tools; typically I'm using tools called SysML modelers, but I'm not going to talk about that today. You then extract documents, for instance for your design history file (that's the typical term in the medical space), as views on that underlying model. The idea is that your information is always up to date, consistent, and coherent. Consistency and coherency are a big deal in large projects: you have to make sure that all the documents you generate actually refer to the same thing. What we initially did actually predates Zephyr; then we found Zephyr, and it was such a nice fit. That's why I'm going to recap, very roughly, what we did more than eight years ago now, what we called agile hardware, which was our initial attempt to bridge the impedance mismatch between how hardware was developed (and for the most part still is developed) and how we already develop software, namely in an agile manner.
And the biggest impedance mismatch came from the different timelines. Typically, firmware developers wanted to run sprints of two or four weeks, whereas hardware developers would spend two, three, even four months on a revision of their hardware. That's what I called an impedance mismatch, because you could not co-evolve the design at the system level with such a slow, quote-unquote, hardware development. So we came up with an idea: in the beginning in particular, lots of what eventually needs to be done for your hardware actually isn't necessary yet. The very first realization we had (there were actually a couple of them) was that the important interface between the electronics world and the software world is not so much the PCB layout and the form-factor board; it's the schematic. If you're a firmware developer, you've read schematics already. So that is really the interface you need to agree on. From a single schematic, and that basically is the bottom line of what we then called agile hardware, you can derive different layouts if you want to. You can do prototype layouts, which we called development rigs, which were easy to make because you only populated the PCB from one side, you made it as big as you wanted, and you even applied the auto-router, just to have it quick and handy. And eventually, you would use the very same schematic, which you could also decompose into what we called schemlets, to derive another PCB (there's a typo on the slide, I apologize; it should say product PCB here) that would be functionally equivalent to what you already had, but now was in form factor and locked down. The idea of those dev rigs was that you could have these modules plug-and-play, you could play with them, and each of these single PCBAs we were able to do in a three-to-four-week turnaround.
So we brought these two velocities, the one from the software world and the one from the hardware world, closer together. And this is what it looked like, just to give you an example; we proved that it worked. This is what I called a development rig on the previous slide. So we had really clumsy, big PCBAs, and they were made for speed, not for appeal. Then we took the very same schematic that was on these boards and cut it and shrank it down, by some automated transformations on the schematic, to a credit-card-sized PCBA. And the firmware that was running on the big rig was then also running on the small rig, one to one. The porting, the integration, became trivial. That was the state we were in when we actually found Zephyr. So we're now fast-forwarding a little bit. Before I do that, for the slides to come to make sense, I need to introduce a slight deviation from the picture I just showed you. We had what we called a core board back then, and we also had shields already: on the core board, all you would put down is the MCU (think of a Nucleo board, if you will), and on a shield you would bring out the pins and peripherals of your MCU to various connectors that would then connect to the modules. For various reasons in the project I will take the next set of examples from, we combined the shield and the core board into a single board that I will now call the control board. And all the modules (we had more in the actual project, but I cut it down to three for the sake of brevity) we also put on one board, which from now on I will call the peripherals board. As you can see, we had three modules: we had a heater, because in life science you probably need to heat the liquid, the specimen that you want to analyze; and we had a motor to pump liquid.
And we had an analog front end to do some electrochemistry. You won't need to understand any more detail than that there were these three modules. And that brings me, eventually, to Zephyr. Again, this diagram predates Zephyr, but we found what a nice match the architecture we drew nearly ten years ago was for the Zephyr ecosystem and the functions Zephyr provides. I won't explain in too much detail what you can see here. Suffice it to say, there are services that we had defined back in the day. We wanted a shell, and we wanted some remote procedure call interface into the functions. We wanted some telemetry out of the functions, like "give me the temperature every 10 seconds" or something like that. And we also wanted to be modular in the functions, because we didn't really know which of those modules would eventually make it into the final product. So it had to be plug-and-play. We created functions module by module, and each module would have its core logic, which, for instance, for the heater would just be a PID control loop. And in the architecture we defined bindings to those services. If you have worked with Zephyr, you will immediately see how nicely all of that maps to primitives that are now available in Zephyr, and how naturally it draws you into the Zephyr ecosystem. To give you a bit of an idea (I'm not sure how I'm doing on time; let's see, so that we have enough time for Q&A), I will not show all the details. But if you go online and check out the slides, there are slides that show more of the details which I may omit right now. But let's dig into it a bit here. We defined a directory substructure very similar to the Zephyr upstream project: we called them subsystems, or we just took subsys; we had modules, we had services.
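The services named here (shell, RPC transport, telemetry) largely map onto stock Zephyr subsystems that an application enables declaratively. As a hedged illustration, not the project's actual configuration, such a selection in a prj.conf might look roughly like this (the symbols are real mainline Zephyr options, but which ones the project actually used is my guess):

```
# Hypothetical prj.conf excerpt: the architecture's "services"
# are mostly stock Zephyr subsystems, switched on declaratively.
CONFIG_SHELL=y             # the shell service
CONFIG_NETWORKING=y
CONFIG_NET_L2_ETHERNET=y   # link to the application controller
CONFIG_MQTT_LIB=y          # transport underneath an MQTT-based RPC service
```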
So already the architecture mapped nicely onto the structural view of the actual code base. We could then make use of the CMake integration to have conditional compilation: when we didn't want a particular function, we could just switch it off by means of Kconfig, and that would then, as you all know, transparently propagate to the CMake build system. Another thing we found very useful, and we're quite glad it exists within Zephyr's tooling (maybe you haven't heard of them before), are so-called Kconfig templates, which help you standardize on the Kconfig symbols you wish to offer for a variety of services or modules. We had such Kconfig templates for our modules, so each of our modules would just pull in the template and then provide similar symbols, distinguished only by namespace, say for the heater subsystem or the AFE subsystem. Each would also define the bindings, and we could refactor out the common pieces; the MQTT RPC bindings, for example, would only be enabled if the RPC service itself had been enabled before. This is where Kconfig really helps you map architectural constraints onto the actual software system. Another thing we spent a lot of time on was wrapping our heads around the devicetree. That has indeed been a journey, and one that I will now unfold in a couple of rounds, also to show you that for us this was a journey too. We probably haven't reached the end yet, and there's probably more to come in upstream Zephyr when it comes to the devicetree. Initially, as a system architect, I want a mapping, some relation, between my hardware domain and my software domain. And in the hardware domain, as I had shown on the previous slide, I had two boards connected by an FPC connector: I had the control board, and I had what I call the peripherals board.
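The Kconfig template pattern described above can be sketched as follows. All symbol, variable, and file names here are my own illustrative inventions; the mechanism itself (the `$(module)` macro expansion and the `source` of a shared template) is the same one mainline Zephyr uses, for example, for its per-module logging configuration templates:

```
# subsys/Kconfig.template.module -- shared per-module symbols.
# "$(module)" is expanded from the variable set before sourcing.
config $(module)_TELEMETRY
	bool "Publish periodic telemetry for this module"

config $(module)_MQTT_RPC
	bool "Expose this module via the RPC service"
	depends on SERVICE_MQTT_RPC   # common binding: only if RPC is on

# subsys/heater/Kconfig -- instantiating the template:
module = SUBSYS_HEATER
source "subsys/Kconfig.template.module"
```

Each module that sources the template ends up with the same symbol shape in its own namespace, which is exactly the "similar symbols, distinguished by namespace" idea.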
And the very first thing we did with those two boards was to ignore the fact that they were two boards: we just made one DTS, one board, in Zephyr. We called that the devrig DTS. That was a trivial mapping, and it kind of worked. But it's also wrong, obviously, because there are two entities in the hardware world that are now folded into a single entity in the software world. So it already starts to feel kind of wobbly. But what we already did at that level, and it proved very useful, was to make a lot of use of Zephyr's zephyr,user node to introduce aliases and property names that would be the only interface from the devicetree into the functions and services. We set a policy that from whatever service or function you were working on, you should not dig deeper into the devicetree than the zephyr,user node, because that gave a clean decoupling of these two functional concepts. I will probably skip the next couple of technical slides and just tell you what happened next: a reality check, which is just what happens in product development. We were faced with the fact that the motor module had to be revised, and that led to a new revision of the peripherals board, because the stepper motor driver initially used was just not strong enough. We just had to try that out, right? The life scientists had to figure that out. And there were a couple of other findings. So, short story: the board revision to peripherals board version two was coming. In order to cope with that, we made use of another very nice and clever mechanism built into the way the CMake system interacts with the devicetree. We did a refactoring of our existing system solely for the purpose of keeping the existing system running, because, as I said, it had been a pain for us that you could never go back to the previously running system with new functionality.
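The zephyr,user policy mentioned above can be sketched like this. The zephyr,user node is a real mainline Zephyr convention for application-specific properties; the property names and controller references below are invented for illustration, not the project's actual signals:

```
/ {
	zephyr,user {
		/* The only devicetree surface the functions may touch.
		 * Names are illustrative placeholders. */
		heater-pwms = <&pwm1 0 PWM_MSEC(20) PWM_POLARITY_NORMAL>;
		motor-en-gpios = <&gpioa 5 GPIO_ACTIVE_HIGH>;
		afe-i2c = <&i2c2>;
	};
};
```

In C, a module then fetches its resources only through this node, for example with `GPIO_DT_SPEC_GET(DT_PATH(zephyr_user), motor_en_gpios)`, so no function ever names the underlying pin controller directly, and board changes stay confined to the devicetree.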
So we kept that running, and we wanted to add another version of the peripherals board. This is when we found out about shields. We split these two boards into two abstractions: one board, which became the control board, and one shield for version one and one shield for version two. And I mentioned this clever mechanism: we really didn't recreate these things from scratch. We basically took all the nodes from the original devrig DTS that belonged to the peripherals shield version one and put them into a board-specific overlay on the peripherals version one shield. That is just the way the devicetree, these overlays, and these DTS includes are processed by the CMake system: you can have board-specific overlays for your shields. In fact, the peripherals shield version one overlay, this file up here, was actually empty, because at that point in time we had no plan to connect that shield to any other board. The point was that we could separate the things that were going to change from the things that stayed the same. And now we were able, on the west invocation line, to say declaratively: we want to build a firmware for version one of the peripherals board, or we want to build it for version two. That was the second step. Then reality knocked on the door again, and this time it was basically the corona crisis: we heard about supply chain issues, and we were facing them too. The story went like this. We had to build 200 additional rigs, because the life scientists were quite satisfied with the overall performance of the five or ten rigs they had been playing around with to that date, but now they really wanted to get into the statistics. So they really needed that many rigs. But at that time, we couldn't even build another five core boards, because the MCU on Mouser and all the other platforms had just gone out of stock.
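The shield split described a moment ago boils down to a small file layout plus a declarative build invocation. The board and shield names below are placeholders, not the project's real ones, but the directory convention and the `-DSHIELD` mechanism are standard Zephyr:

```
boards/shields/peripherals_v1/peripherals_v1.overlay         # generic part (empty at first)
boards/shields/peripherals_v1/boards/control_board.overlay   # nodes moved out of the devrig DTS
boards/shields/peripherals_v2/...                            # the revised board, same pattern

# Pick the hardware combination declaratively at build time:
west build -b control_board app -- -DSHIELD=peripherals_v1
west build -b control_board app -- -DSHIELD=peripherals_v2
```

Several shields can also be stacked by passing them as a list to `SHIELD`, which becomes relevant once adapter shields enter the picture.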
And we had lead times, as someone already mentioned in another talk I was visiting, of 53 and even more weeks. But to our fortune, we could still buy Nucleo boards. So we couldn't buy the STM32 anymore, the MCU to put on our own board, but we could still buy at least a bunch of those Nucleo-F7 boards. Our question then became: if we were to change our hardware development rigs and replace the core board, the control board, with a Nucleo, how would that change be reflected in the devicetree constructions we had built so far? It turned out we had to take one step in between, which I'll mention now. In the previous abstractions I showed you, we had already modeled the control board and the peripherals board. But it turned out that what we had missed in the software world was to explicitly model the connector in between. That now became important, because if you want to replace the control board with a Nucleo board, you want to plug that board into something else, so you need to model explicitly what that connector accepts, or what that connector needs from the board, to actually make them plug-and-play. So this time around we introduced another concept, which we called the peripheral interface, in addition to the devicetree of the control board, which was hiding all the nitty-gritty details of which peripheral port, say an I2C SDA line, would come out on. And we made these symbols transparent for use on the peripherals board. At that point, it was again refactoring, and we hadn't gained much. But just modeling this connector explicitly allowed us, in the next round, to introduce the Nucleo board down here. We had to build another shield where this connector was brought out to the Morpho connectors that you find on a Nucleo board. But all the technology had already been in place.
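Devicetree has a ready-made pattern for exactly this kind of explicit connector model: a nexus node with a gpio-map, the same mechanism Zephyr boards use for Arduino headers. A rough sketch with invented names, pin assignments, and mask values (the talk doesn't show the actual node, so treat this purely as the shape of the idea):

```
/* On the control board DTS: the FPC connector as a nexus node. */
peripheral_if: connector {
	compatible = "example,peripheral-if";  /* hypothetical binding */
	#gpio-cells = <2>;
	gpio-map-mask = <0xffffffff 0x0>;
	gpio-map-pass-thru = <0x0 0x3f>;
	gpio-map = <0 0 &gpioa 5 0>,   /* connector pin 0 -> PA5 */
		   <1 0 &gpiob 3 0>;   /* connector pin 1 -> PB3 */
};

/* A shield then names only connector pins, never MCU ports, e.g.:
 * motor-en-gpios = <&peripheral_if 0 GPIO_ACTIVE_HIGH>;
 */
```

With the connector as the stable interface, swapping the control board for a Nucleo means providing another DTS that maps the same connector pins onto different ports, while every shield overlay stays untouched.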
We didn't need to invent anything that was not already available in the Zephyr system to plug these many boards and shields into one another. And the same shields that we had previously been using with the control board now also started to work with that stack. In all of this, and I think this is quite remarkable compared to my other experiences in embedded software development, we never changed a single line of source in any of the functions or any of the core logic. Everything hardware-related that I'm talking about today happened in the devicetree and never reached the level of the source code. If you think about regression stability, that is an immense improvement. The invocations unfortunately did become a bit longer: we had to say, that's my board, and I now need these shields on top of each other. But at least you can see that in software we still reflect what's actually happening in hardware. Eventually, we also couldn't get enough of the F7 boards, and we had to bring in H743 and H745 boards as well. But with those abstractions now in place, all of these configurations, these eight configurations (and eventually there would be more, because we also built more of those peripherals board versions) could be built from a single source code base. For me, that was really impressive to see, because all the concepts, all the models, kept talking to each other, and we managed to cope with this variety of hardware artifacts that, due to the supply chain issues, we were forced to work with. All of that, though, was not yet the moment of true love. Before I stop on the slides, I just want to talk about that briefly. For brevity, I have only talked about the right half, the real-time half, of our architecture.
There was also an application half, which the actual scientists were using. On the initial system, that was an i.MX 8. We had deliberately designed the system to have two Ethernet ports: the real-time control, this connection here, was talking Ethernet, and on a secondary Ethernet interface the application controller was talking to the web browser of the life scientist. It turned out we also couldn't build the i.MX 8 boards anymore, because everything was just off the shelves. So we took another approach and put a Raspberry Pi 4 in as a substitute for the i.MX 8 board. Now, this turned out to be a minor issue, because the Raspberry Pi, as you all know, has only one Ethernet port, and we had to use that Ethernet port for the connectivity to the laptops, because Wi-Fi wouldn't work with 200 of those rigs in a single room; we really wanted them on a wire. It turned out that with Zephyr, and this is really the moment of true love, it was like three lines of change in the Kconfig. We switched the entire Ethernet stack and all our RPC, which was based on MQTT RPC, from using real Ethernet to USB CDC, a virtual Ethernet over USB, rebuilt the firmware (again, we didn't touch a single line of source), and the whole system came up in an instant. That was just a flashing moment for us. I wouldn't have believed that something like this was possible, and most of the people I work with when I visit clients still wouldn't believe it. But with Zephyr, it is possible, and I thank all the contributors and maintainers for this great piece of work, because it has made my life so much simpler. Since then, we have explored our ideas further, and I'll go through that briefly because I think I'm already a bit over time. We started the Bridle project, which is our attempt to share what we've learned in these product development projects with the open source community. It's on GitHub.
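Coming back to that three-line Kconfig change: I can only guess at the exact symbols (they depend on the Zephyr version and the USB device stack in use), but the flavor of the switch is roughly this, using option names that do exist in mainline Zephyr's legacy USB device stack:

```
# Before: RPC and telemetry over the native Ethernet MAC
CONFIG_NET_L2_ETHERNET=y

# After: same IP stack, but carried over USB CDC ECM (virtual Ethernet)
CONFIG_USB_DEVICE_STACK=y
CONFIG_USB_DEVICE_NETWORK_ECM=y
```

Everything above the interface layer (IP, MQTT, the RPC service) stays untouched, because Zephyr's network stack treats the USB network function as just another Ethernet-like interface.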
You can find the links in the slides. We replicated all these ideas with some open-source, free, or easily available shields and modules from the Grove system by Seeed Studio. All of that is now there for everybody to explore, toy around with, and provide feedback on. You can explore the same ideas we did, on the Nucleo board and with these Arduino sensors, on a Raspberry Pi Pico; it works the same as before. Now we want to bring it into the open source world, and we also want to develop it together with the community, because I think it is a valuable asset to share. I won't go into the constraints; there are a couple of them, and you may ask me about them if you're interested, or find me after the talk and we can go through them as well. As an outlook, and this is probably the last slide of the talk, I'd like to go further than that. I've already started looking into deriving at least parts of the devicetree straight from a schematic: annotating a KiCad schematic file in the appropriate way and using the Python API that KiCad ships with these days to extract the actual information from the schematic. Because the more we trained the electrical engineers and the software engineers to talk to each other about the devicetree, the more the editing of the devicetree shifted from the software engineers to the electrical engineers. Now, that is obviously a mild generalization of that idea. And, me being a system architect, there is another big standard coming in the system architects' world, called SysML v2, which is much more amenable to deriving something like a devicetree, or parts of a devicetree, from system architectures. With all that said, I thank you all for your attention, and I'm happy to take questions. Thank you. Right, thanks. So the question was about who in the software world nowadays can actually read schematics, and I would agree with you.
It really depends on the background the firmware engineers come from. The older generations, people even older than me, are typically electrical engineers by training who took up firmware engineering as their second profession; for them, that's no problem. The other way around is trickier, I think: when new engineers join teams from university, from a computer science curriculum, we typically need to train them in reading schematics. And that is part of agile hardware, which I didn't talk about today: we also typically sit down with electrical engineers and establish something similar to what we would now call clean code in software, but for electrical engineers and for schematics. Making schematics readable. That is something electrical engineers also need training in: they need to understand that the schematic they produce is not only for them to consume, but also for others. And that typically takes a bit of time. Okay. So the question was about my experiences with multi-core systems. I'm glad to say I haven't had any. Typically, for safety reasons, we have a strict segregation principle at play: there is a real-time controller, typically a single-core Cortex-M MCU, and there is an application processor, a multi-core Linux system, say an i.MX 8. And we've never run into performance issues of the kind you describe, so I cannot say anything about that. Yes. Yes. Yes. We took the bloody-nose road, yes. Yeah. Very good question. Right. Very good question. The question is: how do you actually break the habits of the traditional development processes, and how do you get people to start looking into Zephyr and agile hardware? Different attempts at different times, I'd say. Typically it was the stealth route, so we kind of sneaked it in. We had a proof of concept with a very small group of five or six people in a cross-disciplinary team, say two electrical engineers and two or three firmware people, and had them work together.
A good scrum master, a facilitator of team collaboration, works. And then they perform like a nucleus: other people look at this and like the idea, and for people who haven't seen these technologies before, something like the Raspberry Pi switch just looks like magic. And then it's just about convincing people: look, you can also weave this magic if you learn the tricks from us. But I have to stop; I just got the red sign. Sorry for that. Thanks for your interest. You can find me at the conference until Thursday, and I'm happy to talk about all of this stuff, and about agile hardware as well. Thank you very much again.