Hello and welcome to this presentation of the STM32L4 safety support. It covers the requirements for compliance with safety standards and how STMicroelectronics helps customers targeting safety in their projects. Safety requirements for electronic devices are continually increasing as the use of electronic control systems expands into a huge range of human activities. The massive expansion of these devices requires their compliance with specific safety standards. The primary goal is to prevent human death or injury, as well as environmental damage. But there are many other important factors at a lower level, such as the degradation of an industrial process, including the loss of important data, connections, power, or control, among others. The process of developing harmonized standards at both national and international levels is rather complex, sometimes involving completely opposite efforts, such as local market protection versus market globalization. In any case, the main influencing factors come from field experience, market requirements, insurance issues, and the globalization of trade and business. The standards are produced by specific legislative and executive bodies, while worldwide recognized testing houses inspect and verify the appliances concerned to ensure their compliance. Applications targeting safety can benefit from accelerated software development. Efficient and early diagnostics, using specific hardware features together with proper hardware and software methods, decrease the probability of hazardous events due to possible component malfunctions. Applying certain hardware design and manufacturing methods can even increase component reliability. ST supports two basic general safety standards: a specific one targeting household appliances, known as the Class B or Class C standard (IEC 60730), and a more common industrial standard targeting safety integrity levels (SIL), IEC 61508.
The second is a generic standard from which a large number of derivative standards dedicated to different fields of application are produced. ST, in compliance with these standards, addresses both systematic and random failures. Systematic failures are predictable, and their avoidance and monitoring are based on practical experience gained in the industry. Systematic failures can be avoided mainly by applying correct internal processes throughout a project's life cycle. These requirements are defined in specific internal quality documentation. Regular inspections and audits ensure that these internal rules are applied and comply with the recognized standards. To ensure integrity against random failures, specific software methods and hardware design techniques must be applied, as described in the following slides. Not all random failures result in a hazardous event; some may even be considered safe from a safety point of view. Basically, safety standards require monitoring to detect dangerous failures that may be directly or indirectly related to safety and have the potential to cause a dangerous situation. Both safe and dangerous errors can either be detected or stay hidden, undetected by the system. The more dangerous errors are discovered and prevented in time, the lower the probability of a failure propagating into a hazardous event. The time needed to detect dangerous errors and prevent hazardous events must fit within the overall process safety time (PST) available, which includes all possible delays and reaction times in the system, such as those of sensors or actuators. For quantification purposes, safety standards recognize the safe failure fraction and the diagnostic coverage. The safe failure fraction, or SFF, is the ratio of the rate of safe failures plus the rate of detected dangerous failures to the total failure rate, that is, safe failures plus detected and undetected dangerous failures.
The diagnostic coverage, or DC, is the ratio of the probability of detected dangerous failures to the probability of all dangerous failures. Random failures can cause permanent or recoverable errors. Hard failures cause permanent physical damage to the component, and the system is no longer able to operate normally. If no compensation is possible, the system has to be put into a safe state until it is repaired. Random soft latch-up and transient failures are recoverable, and some kind of recovery process is usually applicable. In addition to being detected, these failures can also be compensated for in certain cases. Latch-up failures can be handled by both hardware and software, while transient failures need fast hardware methods exclusively. Software tests can never compensate for these temporary and short-lived errors efficiently, as they are considerably slower and limited by their execution time. From a cross-product point of view, we can recognize single-point, latent, or common-cause failures. Common-cause failures require a special focus, as they can potentially defeat even quite complex safety structures. When random failures are detected and cannot be compensated for, especially after a dangerous error is detected, the system has to be stopped and placed into a safe state, or go through a recovery process such as a reset, rollback, or specific check function. Compensation methods usually allow the system to continue operating normally by using error correction, passivation, or masking functions. Generally, a voting process is used to identify the damaged part or incorrect data, which is then replaced by the correct one. Standards recognize hardware fault tolerance, the maximum number of faults that the system can absorb while still continuing normal operation. In addition to specific functional testing, redundancy is the essential diagnostic principle here.
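Using the usual failure-rate notation from IEC 61508 (λS for safe failures, λDD for dangerous detected, λDU for dangerous undetected), the two ratios just described can be written as:

```latex
\mathrm{SFF} = \frac{\lambda_S + \lambda_{DD}}{\lambda_S + \lambda_{DD} + \lambda_{DU}}
\qquad
\mathrm{DC} = \frac{\lambda_{DD}}{\lambda_{DD} + \lambda_{DU}}
```

Improving diagnostics moves failures from the λDU term to the λDD term, raising both ratios at once.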
Both detection and compensation techniques always require a certain level of redundancy to be efficient. Compensation is considerably more demanding than detection, as not only discrepancies but also the correct state have to be identified. To do so, specific comparison and voting mechanisms have to be applied in addition. The required level of redundancy can be achieved using a wide range of different software or hardware methods and techniques. Some of them are listed here, and others will be highlighted later in this presentation. The techniques can usually be implemented either in hardware or software, or a combination of both. From a safety point of view, a microcontroller is a relatively complex programmable electronic component which has to comply with specific requirements determined by the applicable standards. When supporting safety for a microcontroller, the vendor considers the product a component out of context, as its final application purpose and safety tasks are not known in advance. This is why we speak about components ready, or suitable, for a given common level of safety tasks. The effort is always to cover the component's overall reliability and fulfill the overall budget of diagnostic coverage defined by the standard for the safety integrity level required by the final application. A complex component like a microcontroller can be considered a set of partial components involved in various safety tasks, each with a different diagnostic coverage and weight in the component's overall safety budget. An effective way to ensure the required overall safety budget is to focus on crucial and generic parts of the microcontroller, especially those used by most applications. Any small improvement in the safety of these fundamental and significant parts of the design brings the biggest gain in the overall safety budget of the component, which benefits every application.
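As a minimal sketch of the comparison-and-voting idea described above, the following hypothetical C helpers (not part of any ST firmware) implement a bitwise 2-out-of-3 majority vote over three redundant copies of a value, plus a discrepancy flag that signals when one replica should be refreshed:

```c
#include <stdint.h>

/* Hypothetical 2-out-of-3 majority voter: each result bit is taken from
 * the majority of the three redundant copies, masking a fault that
 * corrupts any single copy. */
static uint32_t vote_2oo3(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);
}

/* Discrepancy detection: reports that at least one copy disagrees, so the
 * application can passivate or refresh the faulty replica. */
static int discrepancy(uint32_t a, uint32_t b, uint32_t c)
{
    return (a != b) || (a != c) || (b != c);
}
```

With one corrupted copy, `vote_2oo3` still returns the correct value while `discrepancy` reports the fault, illustrating why compensation needs more redundancy bookkeeping than plain detection.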
Once a microcontroller is included in an application design and the safety task is specified, the safety support can be deployed much more efficiently and cover just the very specific parts of the microcontroller involved in the required safety case. Many efficient methods can then be applied based on detailed knowledge of the application requirements, its design, the process, and the equipment under control. Redundancy and knowledge of the system behavior are crucial principles, applied either separately or together. Inputs and outputs can be duplicated or checked by feedback, and tested for logical state, value, or expected response in trends or time intervals. Processes can be monitored for correct timing and flow order. Correct decisions can be made by comparing results coming from redundant and independent flows, analyses, calculations, or data. STM32L4 microcontrollers feature dedicated hardware for efficient diagnostic testing and fast reaction to failures, with the potential to cover a wide range of lower-level safety applications. The hardware tests are autonomous, with minimal or no software control. This is especially helpful in detecting transient errors and consumes the least amount of the overall process safety time. Nevertheless, all the tests listed here, with the exception of ECC, are exclusively dedicated to the detection of failures. This is why additional software tests must be added when compensation or an additional check is required, for example to achieve a higher SIL. In this case, the user must ensure that the software testing period takes the process safety time into account. This slide lists application-specific safety features. A certain level of safety integrity can be achieved with an STM32L4xx microcontroller if the specific conditions and limitations described in the dedicated documentation are respected.
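The input/output checks mentioned above can be sketched as two small C helpers. Both names and the tolerance parameter are illustrative assumptions, not part of the ST firmware: one compares a commanded output level against a value read back through an independent feedback path, and one checks that two redundant input readings agree within a tolerance band.

```c
#include <stdbool.h>
#include <stdint.h>

/* Feedback check (hypothetical): the level commanded on an output must
 * match the level read back through an independent input path. */
static bool output_feedback_ok(bool commanded, bool read_back)
{
    return commanded == read_back;
}

/* Plausibility check (hypothetical): two redundant input readings must
 * agree within a tolerance band before the value is trusted. */
static bool inputs_plausible(int32_t ch_a, int32_t ch_b, int32_t tol)
{
    int32_t diff = (ch_a > ch_b) ? (ch_a - ch_b) : (ch_b - ch_a);
    return diff <= tol;
}
```

In a real design, a failed check would branch into the application's fail-safe routine rather than simply returning `false`.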
This slide lists the software checks included in the STSF solution, with a brief summary of their purpose. Generally, the firmware focuses on generic parts of the microcontroller, based on in-depth knowledge of the design, while packages dedicated to achieving SIL standards use more extensive testing methods whose efficiency is proven by a specific methodology. The packages are not available for free download; users should ask their local ST representative for the firmware. The flash memory is protected by an error correcting code (ECC), but an additional CRC test can be helpful to detect latent and multiple-bit failures early. The ECC status should be checked in parallel. This regular scrubbing can detect problems in parts of the memory which are used only rarely. A comparison with a pre-calculated CRC pattern can then verify the overall firmware image. Part of the embedded RAM is covered by a hardware parity check. This method reliably detects single-bit errors. Due to the design applied, the probability of multiple errors arising simultaneously within a single byte is very low. To prevent the accumulation of single-bit errors, the user can apply scrubbing, in which the overall memory area content is read out at regular intervals. This prevents latent faults as well. If memory where no parity bit is applied needs to store safety-critical information, the user should apply a March X algorithm functional test. This test additionally checks the data and address buses, but it is destructive to the memory content. It is suitable as an initial functional test of the entire RAM space. At runtime it has to be done in parts and in a transparent way to prevent corruption of the actual memory content. Clock cross-reference measurements can use dedicated timer interconnections: the ratio between two independent clock sources has to stay within an expected range.
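The firmware-image CRC comparison described above can be sketched in portable C. This is a minimal software stand-in, assuming the common reflected CRC-32 polynomial 0xEDB88320; the STM32 hardware CRC unit uses its own polynomial and would normally be used instead, and the reference value would be computed at build time and stored in flash alongside the image:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 over a memory image (reflected polynomial 0xEDB88320).
 * The result is compared against a pre-calculated reference pattern to
 * verify the overall firmware image. */
static uint32_t crc32_image(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                 /* initial value        */
    for (size_t i = 0u; i < len; i++) {
        crc ^= data[i];                         /* fold in next byte    */
        for (int b = 0; b < 8; b++)             /* process 8 bits       */
            crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
    }
    return crc ^ 0xFFFFFFFFu;                   /* final XOR            */
}
```

At runtime the image is typically checksummed block by block, so each step fits into the self-test time slice.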
One frequency is used as the timer input, while the other gates the timer input and raises timer capture events. A specific set of instructions can verify the CPU unit and its registers. For peripherals, it is suggested to verify the correct configuration at regular intervals. In principle, the self-test procedures are included as an additional task in the application main loop, initialized during system startup. This runtime self-test task provides periodic testing of the CPU, clock system, stack boundary, program flow, and both the volatile and non-volatile memories. The watchdog is refreshed upon completion if everything runs correctly. The memory areas are tested step by step, in parts, within the task. The test is synchronized by time-based ticks derived from timer interrupts. The interval required to complete the test depends mainly on the size of the memory areas under test, the frequency of the task calls, and the sizes of the blocks tested in a single step. Optionally, a one-time initial overall self-test can additionally be implemented at power-on or after an application reset. Whenever a malfunction or discrepancy is found during these tests, the fail-safe routine is called. It should put the application into a safe state and determine the next recovery possibilities. ST provides support for customers developing applications with safety requirements. Specific self-test libraries, certified by worldwide recognized safety inspection institutes, are available upon request for different products. Detailed documentation describes the specific conditions and limitations applying when the software is implemented. ST cooperates with external expert companies with the goal of providing comprehensive consultancy and support services from the start of design to the final certified product. For more details, please refer to the dedicated documentation and contact your local ST representatives for the availability, status, and possible delivery of the firmware and associated documentation.
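The step-by-step, transparent memory testing described above can be illustrated with a simplified C sketch. This is an assumption-laden stand-in, not the ST library: a trivial write/read pattern pass replaces a full March element, and the block's live content is saved and restored so the application data survives the test.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_WORDS 8u  /* illustrative block size tested per time slice */

/* Hypothetical time-sliced RAM check: one call tests one block with test
 * patterns, saving and restoring the live content so the test stays
 * transparent to the application (a simplified stand-in for one step of
 * a non-destructive March-style runtime test). */
static bool ram_test_step(uint32_t *block)
{
    uint32_t save[BLOCK_WORDS];
    static const uint32_t patterns[2] = { 0x55555555u, 0xAAAAAAAAu };
    bool ok = true;

    memcpy(save, block, sizeof(save));          /* preserve live data   */
    for (unsigned p = 0u; p < 2u && ok; p++) {
        for (unsigned i = 0u; i < BLOCK_WORDS; i++)
            block[i] = patterns[p];             /* write test pattern   */
        for (unsigned i = 0u; i < BLOCK_WORDS; i++)
            if (block[i] != patterns[p])        /* verify each cell     */
                ok = false;
    }
    memcpy(block, save, sizeof(save));          /* restore live data    */
    return ok;                                  /* false -> fail-safe   */
}
```

In the periodic self-test task, successive calls would walk a block pointer across the RAM area, and the watchdog would only be refreshed once a full pass completes without error.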