Good morning, good day, good evening. I can appreciate that for some of you who have joined, this might be fairly late, so thank you for accommodating my time zone — I'm on the West Coast of the United States. We're going to begin by talking about what software engineering is. I'll give a little introduction to my background so you understand where I'm coming from in this talk, and then point you to some resource materials that are available to you for free, so that once we've talked about what software engineering is, you'll have reference material you can go back to if you're interested in learning more. Then, more specifically, we're going to answer that fourth bullet there: how do you know when you're done? Once we finish talking about software engineering, I'll go into the differentiation between what we can expect to see from a quality framework and from a safety framework, and then cover the safety analyses and code coverage metrics that are associated with safety. I know I haven't said anything about myself yet, so actually, let's look at these questions first. When we finish this presentation, I want you to be able to answer them. The first is the one I already mentioned: how do you know when you're done? When we're talking about things from a functional safety standpoint — releasing software out into the wild, with people driving their cars or doing something in a safety-critical context — we want very high confidence that the software we created is in fact complete, and demonstrably complete. Then we're going to answer the question: what is quality? Because from my perspective, quality is a really important factor.
In fact, when we look at most of the safety standards, quality is the baseline set of expectations that the safety standards then build upon. And so for that last bullet, we're going to show you the distinctions between those two. First, a little bit about me. My current title is functional safety engineering leader for kVA by UL. kVA is an automotive functional safety consulting company that was purchased by Underwriters Laboratories about three years ago; I joined after that purchase, so I've been a UL employee the entire time. The term "engineering leader" basically means that I'm a first-level manager: I have about eight people reporting to me and several different peers. The people on my team mostly focus on the software and systems aspects associated with functional safety. As for my background — why is it that I know about this stuff? My first couple of jobs were in the aviation industry, specifically on jet engine control systems. I came out of school and immediately jumped into working on functionally safe software. In the first instance it was military jet engines, but in the latter half I went and worked on the commercial aviation side. They end up being very much the same, at least in that capacity, at least from a safety perspective. This was in the early 90s, so I've been doing this for a while. The avionics industry started laying off, and so I went off and did generic embedded systems: print servers — this was the early days of the internet, so I did a lot of embedded communication protocols and things like that — and cable scanners, for people certifying newly installed Cat 5 cables in buildings.
If anybody out there has used a Microchip Technology part — a PIC processor — I actually wrote the first version of the GUI for their in-circuit emulator and their device programmers; it's called MPLAB. From there I went to Intel, and I stayed at Intel for 18 years. Probably the most notable things I did there: I led an all-volunteer effort to establish a software engineering curriculum, to try to get us on a level playing field with regard to what the expectations were across different software programs. And the thing that got me back into safety-critical applications was that I was the lead software engineer for Intel's ADAS demonstration at the 2015 Consumer Electronics Show — ADAS is Advanced Driver Assistance Systems. From there I left Intel and moved to Portland, Oregon, which is where I'm based now, and worked for a small company focused on creating a software infrastructure to enable autonomous driving. But I came to the conclusion that "automotive startup" is an oxymoron, so I left and went to a company that uses a different safety standard, ISO 13849. When I left there, they were working on a 30-kilowatt laser that can cut 40-millimeter-thick steel at a meter a minute — these are not lasers you mess around with. Then, when I saw that kVA had positions available in Portland, I jumped at the chance, and I've been there for the last two and a half years. The work I did creating the software engineering curriculum at Intel got me involved in two specific things, the SWECOM and the SWEBOK, and we'll be talking about those later in the presentation, so I'll save the introduction to that material until then. It also got me involved with ABET.
ABET is the Accreditation Board for Engineering and Technology. It's the organization, at least in the US, that goes to different university programs, and if they have a software engineering degree program, I'm a program evaluator: I go to the university and evaluate their degree program for accreditation. In other words, the university can then say that their program and their graduates are accredited by ABET. This fall I'll be going to my fourth university to do that evaluation. I'm also currently working with the Linux Foundation: another peer and I are trying to take the material that's in the SWEBOK and create a free Linux Foundation class around it, so that further education will be available at some point for people who are interested in learning more. It's slow going, because there's a lot of material there and we have to figure out how to fit it all — and what we're going to fit — into that class. That class is specifically about software engineering, and certainly, if people are interested in working on it, volunteers are welcome; my assumption is that you can get my email address from the Linux Foundation. So, the references. These are all links, and we'll make them available to you as reference materials, such that if you're interested in learning more you can go there and dig in. The first item is the SWEBOK, the Software Engineering Body of Knowledge. The second is the SWECOM, the Software Engineering Competency Model. ABET is the Accreditation Board for Engineering and Technology — we talked about that. And then there's another reference, specifically from the ACM: CC 2020, the Computing Curricula 2020.
It's a document that gets published, I think, about every six years, and it goes through the different disciplines associated with the computing curriculum — and of course software engineering is one of those. So the first question we might ask, or the first question that I would ask, and it's not rhetorical, is: what is software engineering? This is not my definition; I'm referencing CC 2020, which says specifically that software engineering is, first and foremost, an engineering discipline — one that focuses on the development and use of rigorous methods for designing and constructing software artifacts that will reliably perform specified tasks. That sounds like an engineering discipline, and it very much is. It's very focused on the processes — all the things you have to put in place in order to be able to say, at the end, that we can reliably perform those specified tasks. But it's ironic: immediately following that definition, they discuss the term "software engineer", because that term is pretty generic. It covers anybody on the spectrum from people who write code to people who practice the discipline of software engineering that we just talked about. The term is much more broadly employed than software engineering as an academic discipline or a degree program — there are many more people with the title of software engineer than there are graduates of software engineering programs. I want to make that distinction up front, so that everybody who's listening will understand what I mean when I say "software engineering" as we go through this presentation. It's a much tighter discipline than we've come to expect, and I want to establish that before talking about the rest of the material.
So when we see software engineering, and we saw the definition that was there, I like to restate it — because this is something that comes through when we talk about a quality product or a safety product: software engineering is the method by which we, the software engineers, control the amount of systematic error that we build into the system. When we talk about quality I'll bring a bit more discussion into that. But we really have to think about it from this perspective — and this applies to safety as well — because software is not subject to random errors. Any bug or problem that occurs in a piece of software that we built is, by definition, something that we put into it. The software engineering methodology, then, is the mechanism by which we reduce the number of potential mistakes or defects that we add into a system or program as part of the software engineering process we might follow. And it is also the mechanism by which we can demonstrably prove that the software we've created is complete. I understand that when we talk in terms of an open source product, or Linux for that matter, it's in a constant state of change, and that's understandable — the Linux Foundation, and Shuah, who's one of the panelists here, go through a long, disciplined process of releasing that product. But when we talk in a safety context, we want something that may never have to change again. When we're putting it on a braking system, or putting it on a jet engine, we want to know that the software we've produced is complete, that we've demonstrated it is complete, and that it handles all the potential errors that might get thrown at it, prior to releasing it into the wild — into the field.
To that end, we can first talk about the SWEBOK, which stands for the Software Engineering Body of Knowledge. It's created and maintained by the IEEE Computer Society, and the link that I posted there is a wiki page. It covers a wide variety of topics that somewhat, but not entirely, overlap with the SWECOM, and it's generally available for your use if you want to go find more detail. It covers 15 categories of skills — software engineering is, by definition, a very broad topic — and we'll cover some of those skills when we go into the competency model. But one key problem I find with the SWEBOK, and part of the reason we're working on the class itself, is that it doesn't tell you why you're intended to do each of the things it outlines. It doesn't tell you why you're supposed to follow a process, why architecture is important, or why code coverage is something you have to pay attention to. What we want to do is bring that "why" into the discussion when we detail the material included in the class we're creating. I don't know if I mentioned it before, but please feel free to post in the chat, put questions in the Q&A, or even break in and ask a question if you think it's topical — I would prefer this to be more of a discussion, with some give and take — or feel free to save your questions for the Q&A afterwards. The second reference I listed was the SWECOM, which stands for the Software Engineering Competency Model. It's also from the IEEE Computer Society, and it isn't yet in wiki form, but the link I provided will let you download it.
You have to provide your name, your job function, your business, and so on, but then you can download it freely, and it's provided to you as a PDF. This one is basically a detailed list of all the different competencies associated with software engineering, at least at the time of publication — I believe it was published in 2014. The competencies may have increased in number in the intervening years, although, to be fair, the baseline competencies won't have changed very much in that amount of time, because they have been around for a long time. It gives you a mechanism whereby you can look at the different sets of skills associated with the competency model and then figure out where you are on the spectrum of competency against those skills. The skills are divided into two categories: life cycle skills and cross-cutting skills. The life cycle skills are what you might expect to see in any engineering process. The fact that we have requirements, design, construction, testing, and sustainment is immaterial to what type of engineering we're talking about — it could be software engineering, it could be electrical engineering. These are just the standard product development expectations when we're working on something that's going to be a product in an industrial, avionics, or automotive domain; we're expected to do all of these things as we go through each of them. And each one of these skills has further sets of skills inside it: if we talk about requirements, there's elicitation — going and talking to customers and figuring out what they actually want — and then analyzing what they've requested of you.
And then the verification process is how you determine whether or not the set of requirements accurately reflects what the customers actually requested, and whether they were developed to a sufficient quality level that you could actually go and create an architecture based upon the requirements that you have. So you can see there's a lot of nuance associated with each one of these skills, just within the life cycle skills. Those are the five life cycle skills listed in the SWECOM. As for the cross-cutting skills, I actually consolidated some of them in order to make them fit, at least in part because some of them overlap considerably. The first of these is process. This is the mechanisms and methods that you choose to follow — the methods you document that describe how you're going to do each and every phase of the development life cycle. In fact, the first thing I listed there is the software development life cycle, which basically describes how you're going to follow through from requirements to your testing. In addition to the process material, though, there is also assessment and improvement. It happens that when I was working at the avionics companies early in my career, they were terrified about competition coming in and taking business away from them, so they instituted, and trained us all in, this thing called continuous improvement. It's a process methodology whereby you look at each and every thing that you do and ask: how could we do this better? This is actually reflected pretty well in Scrum, if anybody's familiar with it — Scrum has an inherent sprint retrospective that is specifically the process improvement stage listed here. The next one is systems engineering.
I won't belabor this, but for me, software engineering and systems engineering end up being very similar, because if you look at the processes that are defined and the cycles that you follow — and especially when we look at the complexity of systems being built today — software engineers inherently have to understand the complexity of those systems in order to be able to write the software that controls that complexity. And of course, design and other skills are there for managing that complexity. Now, I listed quality, safety, and security as one topic, and they are actually three different ones in the SWECOM itself, but that's at least in part because quality, safety, and security are all specific sets of processes that you might follow. It's a matter of perspective, as you follow this development life cycle, which one you're actually looking at: are you going to follow a quality process, a safety process, or a security process? Arguably, for most modern systems that we're building, we want to follow all three. Nico has a comment here that FMEA is a good example to mention in that regard. Nico, I'm actually going to talk about the software FMEA in the safety analysis portion, so thank you for bringing that up. We also have configuration management, and measurement — which I call performance analysis, but really this is just benchmarking: how to do benchmarking effectively, how to tell whether or not the benchmark tells you what you think it's telling you, and then how to institute improvements. Of course, "improvements" is subjective, because you may want to optimize for size versus optimizing for speed. And the last one is human-computer interaction.
This is one of the newer categories that was added, and arguably there may be other new categories — AI, or CNNs, or other things associated with machine learning — that might become another specific set of skills in the next version of the SWECOM. Once the competency model has described all of those skills, it then defines different categories of competency: technician, entry-level practitioner, practitioner, technical leader, and senior software engineer. You can see that there's a verb that follows each one of them that describes where you are in terms of competency, according to which title you might have. I would argue that once you get to practitioner, you participate — that means you are now considered competent at that activity and can function independently. Below that are the learning stages; above it are the innovating and changing stages, and the mentoring stages, where you're teaching the junior engineers how to do this and how to get better at the skills they're working on. So, the first question that we talked about: how do you know when you're done? This is an issue on many occasions, especially when we're working in a safety-critical context. We have this thing called the software development life cycle — a framework with specific methods for each stage of development, and we've talked about those already. We talked about the requirements; arguably we would have architecture, with architecture being distinct from design, though you could call them high-level and low-level design. We have implementation, and then we have different forms of testing, and there are different goals associated with each type of test that you might run.
You can test the code itself — the individual functions. You can do integration testing, where you're testing the architecture. And then we have acceptance testing, where we're testing the entire system we just built as a black box. Each of those is a different stage in the overall life cycle, and there are a variety of process models that we could follow. I put "waterfall" in quotes here because you can actually look up this paper on the internet. It was published in August of 1970 in an IEEE publication by Dr. Winston Royce, and it's ironic, but he doesn't say to follow the waterfall model in the "waterfall" paper — he says that you should follow an iterative development life cycle. So the agile development methodologies that came later were just formalizing, or more rapidly iterating against, something that had already been put in place. Not a lot of people have heard of spiral, but this was an iterative model published by Dr. Barry Boehm in, I think, 1986 — still a long time ago. And the question remains: why do we follow a process? Does anybody have any idea? You can type it in the chat, or put it in the Q&A if you want to. Amir — I hope I'm saying your name correctly — says we need to be systematic in our development. But what do we get by being systematic in our development? Documented learnings. How about quality? If we have a process where we have a set of requirements, and a set of tests that demonstrate that each one of those requirements has been fulfilled, then we can demonstrate both that we are done and, potentially — depending on the quality of our requirements — that we have satisfied what our customers actually expect. The process — I mean the baseline software engineering process — is all about quality.
It's about us being definitive — being able to prove that the product we've just developed satisfies everything, or, the way I usually say it, that we can meet and/or exceed our customers' expectations. And there's another great one here: Wellington Diaz says control and measurement. Arguably that's a really good one as well, but that's not the purpose of the process — that's one of the main mechanisms whereby we improve the process. We can control and measure the portions of the process that we're applying to the development we're doing, and then improve them along the way. I wanted to make that point up front because one of the main system properties we're looking for is quality. In fact, when we look at most processes that are defined, there are a variety listed. One that's used a lot in the automotive industry is a thing called ASPICE — Automotive Software Process Improvement and Capability Determination (and yes, I know "determination" does not start with an E). But that one is specific to automotive. There are others: there's one called ISO/IEC 12207, which is a compendium of quality processes that we can cite and follow for any software development life cycle. So when we look at this, it's very generic — I'm not ascribing it to anyone specifically — but this is the typical modern process that we see, and it's actually cited as part of the safety standards. I know it's definitively in ISO 13849 and ISO 26262 — 13849 is an industrial safety standard, and 26262 is the automotive safety standard. The V-model exists in roughly this form in both of those standards.
It follows very much an outside-in approach. Specifically, what we want to avoid is specifying the details of the implementation before we have a good idea of what it is we're trying to build. We've actually demonstrated this: I had a buddy at Intel who showed that just by improving the quality of the requirements, he reduced the bug rate by something like 70%, because when they created the set of requirements for the product they were developing, the developers now had a target — they knew what it was they were trying to build. They didn't do these later parts, but as part of the implementation they had a much better sense of what the coding should be, prior to any of the testing they might do. It's important to recognize, too, that when we're doing software engineering, the tools we're using aren't necessarily the same as for systems engineering. They are specifically the set of software tools, whether that be continuous integration, the configuration management systems, your editor, or a graphing or diagramming tool — Visio, Enterprise Architect, Medini Analyze, or even MATLAB Simulink. All of those are available to us to produce the component and/or architectural diagrams that we're going to put out. And of course we can apply this development model at any level — it doesn't just have to be at the system level. We can also apply it to a fairly complex component, so we can reduce the complexity to a set of components that we can actually build. So how do you know when you're done? When you've demonstrated that all the requirements have been fulfilled. So now that we can demonstrate that the product is complete, how do we know that the product was built with sufficient quality? Anybody? Well, I'm just going to keep talking.
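To make the "done when all requirements are demonstrably fulfilled" idea concrete, here is a minimal sketch of a requirements-to-test traceability check. The requirement IDs, test names, and results are invented for illustration — real traceability would come from your requirements management and CI tooling.

```python
# Minimal requirements-to-test traceability sketch (illustrative only).
# "Done" means every requirement is covered by at least one test and
# all of its tests pass; untraced requirements block any claim of completion.

# Hypothetical requirement IDs mapped to the tests that verify them.
trace_matrix = {
    "REQ-001": ["test_fuel_flow_nominal"],
    "REQ-002": ["test_fuel_flow_overlimit", "test_fuel_flow_underlimit"],
    "REQ-003": [],  # no test yet -> cannot be claimed complete
}

# Hypothetical pass/fail results from a CI run.
test_results = {
    "test_fuel_flow_nominal": True,
    "test_fuel_flow_overlimit": True,
    "test_fuel_flow_underlimit": False,
}

def untraced(matrix):
    """Requirements with no test at all."""
    return sorted(req for req, tests in matrix.items() if not tests)

def unfulfilled(matrix, results):
    """Requirements whose tests exist but do not all pass."""
    return sorted(req for req, tests in matrix.items()
                  if tests and not all(results.get(t, False) for t in tests))

def done(matrix, results):
    """True only when every requirement is traced and fulfilled."""
    return not untraced(matrix) and not unfulfilled(matrix, results)

print("untraced:", untraced(trace_matrix))
print("unfulfilled:", unfulfilled(trace_matrix, test_results))
print("done:", done(trace_matrix, test_results))
```

The point of the sketch is that "done" falls out of the process artifacts themselves: if a requirement has no trace to a passing test, the gap is visible before release rather than after.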
So the first measure — I actually have three, maybe four, measures of quality. The first of those is meeting or exceeding your customers' expectations. Specifically, the quality of your requirements has a direct impact on the quality of the product that you're building, at least in terms of what the customers are expecting. And since quality is subjective, the set of expectations — how you capture what your customers want — is going to vary in terms of that perceived quality. But we can look at a very common example: Android versus iPhone. The perceived quality of the iPhone is very high, and I think that's at least in part because Apple pays a lot of attention to the details of how the UI works — the usability factor. That's a big motivator, a big improvement in perceived quality for the people who are using the phones. One of the next metrics — and I've seen this a lot — is the bug rate. That comes about because organizations don't necessarily have a process that they follow, and so they don't have anything that's measurable: if you don't have requirements, the only thing you can measure is bug rate. Unfortunately, bug rate is a fairly poor metric for measuring quality, and I say it's a bad metric specifically because it's too late. If we try to control the quality of a piece of software based solely on the bug rate, we don't know what the bugs are until after we're done writing the code. Trying to backfill the quality of the code based on the bugs we find presumes that we have an exhaustive test suite — a set of tests that demonstrates well enough what our customers actually want. And if we have a set of tests where we know what our customers want, then we should be able to capture those as requirements.
So it doesn't really flow very well in terms of controlling how we do our quality development. The last bullet there is adherence to process. Arguably, there's product quality — the first bullet, and maybe the second bullet too — and then there's process quality: how well we follow the process. In other words, we have quality metrics associated with the requirements themselves and with the architecture. There are properties associated with the architecture — how modular is it? How extensible is it? How reusable is it? — because if it's none of those things, then we don't have high quality at the architectural phase. We can have the same kinds of metrics at the coding phase; in fact, unit testing is arguably a mechanism to demonstrate the quality of the code more than anything else. So, in the process we choose to follow in order to generate high-quality software, adherence to process is a big deal with regard to demonstrating quality throughout the development life cycle. And I already covered why bug rate is a bad metric. Now, I know we came here to talk about software engineering and functional safety, and the first part of this was all about software engineering. So the question is: how do we rope safety into that? ISO 26262, the 2018 version — which is the spec we use in automotive, and the spec I train on the most — defines safety-critical software as software that enables the safe execution of a nominal function. In other words, we might have software that's controlling the fuel flow to an engine, or controlling the torque output from a battery management system to an electric motor in an electric vehicle. The nominal functionality is the series of functions that we write that control how all of that actually works.
In the fuel flow case, it may be a PID controller; in the battery management system it may still be, but arguably it's more of a torque limiter than anything else — it's the amount of current that we deliver to that motor to accelerate, control, or maintain the speed of the vehicle. The question becomes: how do we demonstrate that that nominal functionality is still working the way we expect it to? When we talk about safety, what we do is add safety monitors, or other mechanisms, into the software that allow us to check and make sure the nominal functionality is still working. There are a variety of things we can do: we can have a windowed watchdog, we can have a straight external watchdog, or we can have program flow monitoring, which is basically where we feed the nominal functionality a stimulus and say, "provide me an answer to demonstrate that you're still working the way I expect you to." First and foremost, any kind of timeout is the most important thing. If that monitor demonstrates that the nominal functionality has failed, then we want to have some kind of safe state, and usually we have a timing limitation on how much time is available to us to get to that safe state. In the case of engine control, we might turn on a check engine light, we might cut fuel flow to the engine, or we might simply put it in some kind of degraded mode that limits the amount of fuel that can go to the engine, so that the person has the opportunity to get the vehicle over to the side of the road before any kind of crash occurs. And last but not least, because hardware is subject to random errors, we have software that can detect, indicate, or mitigate those errors.
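The challenge/response idea behind program flow monitoring can be sketched very simply. This is only an illustration, not production safety code: the checksum function, the deadline value, and the state names are all invented for the example; a real monitor would typically run on independent hardware with a windowed watchdog.

```python
import time

DEADLINE_S = 0.1  # assumed fault-tolerant time budget for this sketch

def expected(challenge: int) -> int:
    """The answer the monitor expects for a given stimulus."""
    return (challenge * 31 + 7) & 0xFFFF

def nominal_selftest(challenge: int) -> int:
    """Stand-in for the nominal function's self-test path: it must
    transform the challenge in a known way to prove it actually ran."""
    return (challenge * 31 + 7) & 0xFFFF

def monitor_step(challenge: int, respond) -> str:
    """One challenge/response cycle: feed the nominal functionality a
    stimulus and demand a correct answer within the deadline."""
    start = time.monotonic()
    answer = respond(challenge)
    elapsed = time.monotonic() - start
    if elapsed > DEADLINE_S or answer != expected(challenge):
        # Wrong or late answer: command the degraded/safe state
        # (e.g., limit fuel flow, light the check-engine lamp).
        return "SAFE_STATE"
    return "NOMINAL"

print(monitor_step(42, nominal_selftest))   # healthy path
print(monitor_step(42, lambda c: 0))        # corrupted answer
```

Note that both failure modes the talk mentions are handled by the same check: an infinite loop or hang shows up as a missed deadline, and a corrupted computation shows up as a wrong answer.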
So we can be monitoring for errors in RAM, errors in communication systems, or errors in the instruction stream that we might be executing on a given processor. The most important thing to take away from this, though, is that every safety standard I've worked with has the expectation that your development work will follow a quality management life cycle process, which is why I covered all of that quality management material at the beginning: when you follow a quality management system, you get a significant portion of the safety life cycle simply by following the quality one. And here are the different options. I already mentioned ASPICE and ISO/IEC 12207, but ISO 9001 actually has a software development process description in it as well. I don't see it used very often; most of the time I see ISO 9001 used strictly in a manufacturing capacity, but it can apply to your development processes too. And I don't remember what IATF stands for, but IATF 16949 is a specific set of automotive additions on top of 9001 that specify what the development processes for your software, or your entire development life cycle, might need to be. So, the differences between quality and safety. Safety-critical software is expected to go through a safety analysis phase, and ISO 26262 describes two different mechanisms. The first of those is the software FMEA, where FMEA is failure modes and effects analysis. You must have the requirements and the architecture in order to do the software FMEA. That's why the quality management framework is the baseline expectation: what they want you to do is go through the architecture and look for potential faults in terms of how components within the architecture are exchanging information.
Specifically, you look at the inputs into each and every component and make sure that you have coverage if a parameter is out of range, or if it comes in too late or too early; there's a whole process associated with performing the software FMEA. Then there's the dependent failure analysis, which you only need to do if you have software of mixed criticality running on the same processor or in the same context. We do that because, when we talk specifically about the monitors monitoring the nominal functionality, what we don't want is for that nominal functionality to fail in some way, to drop into an infinite loop or to have a wild pointer, and in some way corrupt the safety mechanism. So the dependent failure analysis is the mechanism whereby we determine that what we call the safety mechanisms are free from interference from any lower-level or less safety-critical application that might be running on the same processor. We want a high guarantee that the safety mechanisms are there, are in fact protected against any kind of corruption, and are going to be able to continue to do their job, so that they will detect an error and get you to your safe state should that error occur. So, the differences between quality and safety: obviously you have the safety analysis phase, but there is also a code coverage metric that is expected in most of the safety standards I've seen, and they expect you to cover as close to 100% of the code as possible at the unit test level and at the integration level. It varies in terms of the type of coverage that you might need to do: statement coverage, branch coverage, or a thing called modified condition/decision coverage. And I can show an example of that if we have time later on.
But the point is that at the lower ASILs, the lower levels of criticality for software in automotive, we may only need to demonstrate that we can hit all of the statements. At the next level we need to demonstrate that all of the branches have taken all possible outcomes. And then there's MC/DC which, like I said, we'll go into later if we have time. At the integration level we have function and call coverage. So we want to start working on the actual target hardware and demonstrate that these components interact with each other appropriately and safely, in order to demonstrate that together they perform our system-level functionality, both the safety and the nominal functionality. So, summary, and I'm right on time. Software engineering is a very broad discipline; you saw that with the number of categories of skills we can associate with it. And if we follow this systematic approach, we can create software and demonstrate either quality, safety, or security, or all three. It's a matter of having the processes in place in order to be able to make that assertion, to demonstrate that this software is in fact done. And that is it. That's my email address, should anybody have any questions about the topics that we covered. So let's go take a look at some of the questions in the chat to begin with. Let's see: "when it meets the acceptance criteria." So I'm not sure how far to go back; control the measurement, follow a process. So, Surrender, you said "when it meets the acceptance criteria"; was that in reference to one of the questions that I asked, the quality one? Yes, that's right. Okay. The next one is from Luis: quality of the user experience and quality of code maintainability. Was that in reference to the different perspectives, the different types of quality that we can have in a system? Yeah. Okay.
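Since the talk offers an MC/DC example "if we have time," here is a small worked sketch of the three coverage levels just listed. The function and test vectors are illustrative assumptions, not from ISO 26262 itself.

```c
/* Illustrative decision with two conditions, to contrast statement,
 * branch, and modified condition/decision coverage (MC/DC). */
#include <assert.h>
#include <stdbool.h>

static bool fuel_cut_allowed(bool engine_off, bool speed_zero) {
    if (engine_off && speed_zero)   /* the decision under test */
        return true;
    return false;
}

/* Statement coverage: every line executes, e.g. (true,true) plus any
 *   failing case.
 * Branch coverage: the decision takes both outcomes, e.g. (true,true)
 *   and (false,false).
 * MC/DC: each condition must be shown to independently flip the outcome:
 *   (true,  true)  -> true
 *   (false, true)  -> false   (engine_off alone flipped the result)
 *   (true,  false) -> false   (speed_zero alone flipped the result)
 * So two conditions need three vectors, not all four combinations;
 * in general n conditions need n+1 vectors rather than 2^n. */
```

The practical payoff is that MC/DC exposes masked conditions (e.g. a stuck `speed_zero` term) that branch coverage alone can satisfy without ever exercising.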
So the question I would ask is how we capture or describe that. When we talk about the quality of a user experience, what does that mean? There should be usability metrics associated with what it is that we are trying to achieve. And I want to make this very clear: UI design is not my forte. I'm an embedded systems guy. So if I talk about UI stuff, it's going to be in very generic terms. I have done some UI development, but it was very clearly a UI developed by an engineer, not by a UI designer or a human factors engineer. But we can talk about quality. There's a book called The Design of Everyday Things by Don Norman, where he talks about inherent or intuitive design, obvious design: the better job we do of designing something, the more obvious it is to the user what they're supposed to do. And if we can capture that as a set of metrics, so much the better, as part of whatever UI design we're doing. So, Bhavneet, you said fixing the bug rate is about the quality of software maintenance and how coding is done. I'm not sure I understand that; was that just talking about why bug rate is a bad metric for trying to measure quality? Yes or no? Okay. Okay, great. Oh, there's another one here: why is quality considered subjective? Because, at least from a user perspective, quality is subjective. Now, for adherence to process there isn't a lot of variability, because in fact, when we look at the safety standards, and that's something I should have added, the safety standards do expect you to go through both an audit and an assessment.
So an assessment is performed on a product basis, and it's basically somebody coming in, looking at the processes that you have defined, and seeing whether or not you, the organization, and the product itself have conformed to the safety standard. And then an audit is performed on the organization, which is to say: how robust and complete is the set of processes that you have defined for your organization? The combination of those two gives you quality from a process standpoint and from a product standpoint, except in this instance it's safety; but the process that we follow in each case is the same. Okay. The next question is from Nico, and it says: in your perspective, how have the safety standards changed with autonomous driving systems, mostly the "algorithms 2.0" as somebody calls them? Are FMEA procedures more and more elaborate and complex now, in your opinion? So the first thing to understand about the autonomy standards is that ISO 26262 doesn't really apply to autonomy. Instead, there's a new standard, ISO 21448, often referred to as SOTIF, the safety of the intended functionality. It delineates an entirely different set of processes for how we go through and do, in effect, a statistical analysis of the autonomy systems you are attempting to qualify. And the criteria they use to apply those standards are based around empirical information gathered from different governments, essentially accident data from around the world. In the U.S. there's a thing called SHRP 2; I don't remember what it stands for. It may have been government funded, but the data was collected, I believe, by either Virginia Tech or Clemson.
Two different universities in the U.S. What they did was go out and collect a whole bunch of data about driving habits and patterns, taking account of the ages of the people in the car, the driving patterns, the genders of the people driving, and they looked at the statistics associated with the accident rates of those people. And from that they got a safe driving distance according to average usage. So when we look at the SOTIF, we can reference that as a baseline, in effect. The SOTIF says: if we're going to do this in an autonomy context, then we have to look at where the vehicle will operate, which is called the operational design domain. We have to look at how safe the vehicle needs to be in that domain, and then we have to be 50% or 100% better than an equivalent human driving in that same situation. And there's a beta factor you can throw in to make it even more difficult to achieve. Specifically, you then need to be able to demonstrate the amount of time, or the distance, that the autonomy system needs to drive in order to show that it is in fact that much better than an equivalent human driving in the same situation. So FMEA doesn't really come into it. Instead they do a thing called the HIRE, the hazard identification and risk evaluation, and they look specifically for functional insufficiencies, meaning our sensors may not be accurate enough, our algorithms may not be precise enough, or we didn't do enough sampling of the design domain, the scenario evaluation for where we're going to be operating. Okay, thanks a lot, Peter. Sorry, I'm having a lot of trouble with my connection today, but thanks a lot for your answer. You're very welcome, I hope. So, Shu, if you could post a link to the SOTIF standard as part of this whole thing.
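The "how far must it drive" arithmetic behind the answer above can be sketched with the standard zero-failure demonstration bound. This is a generic statistical rule of thumb, not something ISO 21448 itself prescribes, and the example rate below is a placeholder, not a SHRP 2 figure: with zero failures observed, claiming at confidence C that the failure rate is below a target λ requires roughly -ln(1-C)/λ units of exposure, which for C = 95% is about 3/λ.

```c
/* Back-of-the-envelope zero-failure demonstration bound, 95% confidence.
 * Assumes an exponential failure model; the constant 3.0 approximates
 * -ln(1 - 0.95) = ln(20) ~= 2.996. Illustrative only. */
#include <assert.h>

/* Miles of failure-free driving needed to claim, at ~95% confidence,
 * that the failure rate is below target_failures_per_mile. */
static double demo_miles_95(double target_failures_per_mile) {
    const double k95 = 3.0;  /* -ln(0.05), rounded */
    return k95 / target_failures_per_mile;
}
```

For example, a hypothetical target of one failure per hundred million miles yields on the order of three hundred million failure-free test miles, which is why the talk stresses distance-based demonstration: small improvements over the human baseline translate into enormous validation mileage.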
That is the SOTIF, I thought. That's the SOTIF standard? Okay, is that ISO 21448? Yes, that is correct. Let me check, just double-checking. Yes, 21448, right? Perfect, yes, that's correct. That is from 2019, of course. Yes, the first version was called a PAS, a publicly available specification, and I believe it came out in December of 2018. And I think the full standard is in draft form right now. Yeah, there is still a lot in draft. My curiosity is because I worked in autonomous driving; I'm not working in the autonomous driving space right now, but I still have a lot of interest in it. Absolutely, yeah. But that's the point I want to make: the safety standards, for the most part, and I will say this even of the SOTIF, the 21448, still specify that you follow a quality management process. They still want to see that you have the requirements, architecture, design, and all of the testing processes in place, because we want to have confidence that the software was built correctly before we go and trust it to do the training or the implementation of the convolutional neural network or any of the rest of it. Okay, perfect. Muhammad asked specifically: do any of the standards combine quality, safety, and security? And it's ironic, but no. There's a quality standard, there's a safety standard, and there's a security standard, and they each delineate a specific set of processes that you are supposed to follow. So quality management covers just the baseline quality expectations, but the safety standard has a bunch of work that you have to do up front. It's called the hazard analysis and risk assessment, because that's the mechanism by which we determine how safety-critical the software is that we're working on.
And that then gives you guidance throughout the rest of the development life cycle in terms of how much testing you have to do, how detailed that testing has to be, and how much work you have to put in with regards to the safety analyses and all the rest of it. By the same token, the security life cycle, ISO 21434, the automotive cybersecurity standard, doesn't have a HARA, a hazard analysis and risk assessment. Instead, it has a threat analysis and risk assessment, or TARA. But once you get past those early stages that give you the criticality, whether it be safety or security, from that point on the development processes end up being very, very similar to that baseline quality one. There may be slight variations throughout the rest of the development life cycle, for example penetration testing in a security development life cycle, but barring that, they're all pretty similar. And then Wellington, a great question, because I actually learned about Rust when I was at PolySync, just after I left Intel. You can use Rust, you can use C, you can use any programming language at all. What I discovered as part of learning software engineering, and I learned all of this empirically, is that it doesn't actually matter very much what language you're using. The vast majority of your time is not spent coding. Yes, you spend a lot of time coding, don't get me wrong, but you spend a lot of time writing requirements, doing architecture, doing analyses, doing unit design, and doing testing. So if Rust as a language improves your throughput or improves the quality of the code that you produce, go for it. But I would argue that you can also use C and follow the MISRA coding standard; MISRA stands for the Motor Industry Software Reliability Association.
Because I did an analysis of MISRA-based C code versus Rust, and I really didn't find much difference between the two, at least in terms of the types of errors that you might encounter. So by all means. The only thing I'll say with regards to Rust is that you're going to have an easier time finding C coders than Rust coders. Thank you. You're welcome. Okay, yes, Shu posted the ISO 21448 link that Nico asked about. Okay, so we're right at 11:30. Any other questions? Okay, well, I'm going to go ahead and stay on if anybody has any further questions. But by all means, let me know, and feel free to contact me at this email address if you have a further question that you don't feel comfortable asking in the more public forum. And then I'll pass it back to you, Candice. Thank you so much, Pete and Shua, for your time today. And thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation YouTube page later today, and a copy of the presentation slides will be added to the Linux Foundation website. We hope you are able to join us for future mentorship sessions. Have a wonderful day, everyone. Excellent, thank you, Surrender.