Thank you everyone for joining me today. I'm Natalie, the CEO and co-founder of Sternum, and we are here to talk about security, especially in the context of embedded devices and how hackers perceive embedded devices: vulnerabilities and opportunities for exploitation. So it's going to be interesting, and we are also going to do a bit of a deep dive into the technical aspects of exploitation and into integrating security solutions. While the talk will be focused on the Zephyr operating system, it is relevant to embedded Linux and real-time operating systems in general. I just like this quote, especially the beginning: the challenge for connected device manufacturers is to strike a delicate balance between security, reliability, scalability and, I think more importantly, performance and being deterministic in the way they operate, right? They cannot fail. A little bit about myself. I grew up studying computer engineering, starting when I was 14, which paved the way to Unit 8200, the Israeli intelligence unit. This is where I actually got my cybersecurity education, and I was an expert in finding zero-day vulnerabilities, especially in the Linux kernel, but also in Wi-Fi drivers, bootloaders and the Android operating system. Finding the vulnerabilities is the part that most people focus on, but actually the exploitation was the most interesting part of the job, because taking a vulnerability and then creating an exploit chain where you can actually get remote access to devices or assets and then gain intelligence is really challenging, even more than finding the vulnerable piece of code. So this was really what I did. I then led research and development teams until co-founding Sternum, on a mission to actually secure and bring visibility into the most embedded, resource-constrained, mission-critical systems. I'm talking about PLCs, medical devices, gateways, everything you can think of. And here I am talking to you today.
We are working with leaders across the globe, have raised $40 million to date, and we're happy to be one of the sponsors of the Zephyr Foundation. So I want to start with just an overview of how we protect applications and the different security layers used in the broader industry, not the embedded industry. We need to look at three main layers: the operating system and infrastructure, the user space where we actually deploy applications, and the application itself, our code, our program. In the infrastructure itself, we see techniques like memory isolation, segmentation, stack canaries. We see secure APIs with the user space, secure boot and over-the-air updates, hardware security in some places. That's really the infrastructure that we are provided with to secure our systems. Then comes user space security. In the user space, we also have stack canaries and other compiler flags to make our code more secure. We also usually have continuous monitoring in modern systems: endpoints, servers, we monitor them continuously to see what's going on with the user space in real time. We have permissions and policies to manage processes, and we have endpoint protection that protects the entire user space. Then there is application security, and I'm talking about what we are doing during development of the application and what we are doing post-production. During development, we will usually see static analysis tools, software composition analysis tools, dynamic analysis, and best practices from an engineering perspective, like implementing encryption, data protection, user management. And post-production, once the devices are deployed and talking to endpoints, the cloud and so on, we usually see some endpoint protection, something running on the asset, preventing attacks in real time. We usually see some zero-day attack prevention and detection, not only focusing on preventing known vulnerabilities, but also the next threat, what's called a zero day: a vulnerability that is not yet known.
We usually have real-time alerts: something is going on, we immediately get alerts. Actually, the situation today is that we have too many alerts in the traditional industry, and that's a problem as well. And we have application performance monitoring and other tools just to monitor our own application. So those are the standard layers used today in the industry, but let's take a look at the embedded industry's status for a second. From an infrastructure perspective, and especially if we're talking about FreeRTOS or Zephyr, some of the capabilities exist: TrustZone, OTA, secure boot, memory isolation. When we go a step higher, to user space security, sometimes we have permissions and policies, usually depending on how much the engineers invested in them. Stack canaries: if you compile your code with them, and if the third-party code provided to you comes with that security mechanism, then you have it. Continuous monitoring and endpoint protection are usually absent. In the application security stage, dynamic analysis is missing during development, which means that we miss complex bugs, memory leaks and so on. And in post-production, we're actually missing most of the capabilities. We're missing zero-day prevention and endpoint protection, continuous monitoring, real-time alerting. So we're really left blind in the post-production scenario with our own application. This means that vulnerabilities that attack the application or the user space, zero-day vulnerabilities or advanced RATs, cannot be handled with the existing tools. So what happened to real-time monitoring and protection? Why is it so difficult to take these standard tools from the industry and embed them into our own applications? Of course, IoT and embedded security is very different, from various perspectives. One is the deterministic nature. Endpoints and servers have a user interface. You can surf the web, you can download applications. You can get phishing emails.
So the attack surface is really different. In the IoT and embedded space, the nature of the system is deterministic. It's predictable. There is a minimum of input channels, which actually means that most of the threats are software vulnerabilities, not phishing attempts and the things that antiviruses are designed to handle. The different attack landscape means we need different solutions. Third, limited available resources: compute, memory, battery. So when you hear about integrating a new solution, usually the first thing someone asks, especially an engineer, is: what is the overhead? How difficult is it to integrate, and how much CPU and memory is it going to take? And those resources are so limited in some of these systems that integrating something so advanced seems very difficult. The diversification is also difficult. If you build endpoint protection, usually you protect Windows or Linux devices or servers. If you want to protect embedded devices, you need to support more than 100 different operating systems. How do you do that, when most solutions use the kernel and the operating system mechanisms to protect? That's very difficult when the operating systems are changing and diversified. So what do we see today in terms of security? We see that OS security features cover the first layer. I mentioned that Zephyr has some great security features that give you the infrastructure to build secure applications. We see some best practices used in the industry: static analysis, software bills of materials, encryption, vulnerability management — a lot of passive or post-attack tools that help us patch or understand the software that is within our devices. But what are the gaps? First, static analysis tools miss around 50% of the vulnerabilities. So instead of going out with 50 bugs per 1,000 lines of code, you're going out with seven. This is still a lot when you have a major code base.
SBOMs and vulnerability management take care of public, well-known vulnerabilities. They do not handle zero days, and around 50% of exploitations are exploiting zero days. None of those tools can really handle that. Patching takes time and money. Patching is not easy in the embedded space. It's not like updating your iPhone or mobile device. In some cases, if you are regulated, it can take six months. Six months is a lot of time to be exposed to a one-day vulnerability. Encryption: sometimes people think that if they encrypt the data, they are secure. But when an endpoint gets attacked, the data on the endpoint is decrypted. If someone is hacking your mobile, they can read your texts, because you can read your texts. So encryption really secures the data during transit. It cannot prevent software vulnerability exploitation, and it cannot protect your data if you're exploited. No real-time application security and monitoring, in most cases. OS memory protection does not prevent memory vulnerabilities in user space and in your own application. What I mean by that is that sometimes I hear people think that if Zephyr has stack protection or memory separation, they are not vulnerable to, let's say, a heap overflow. That's not true. If you have a heap overflow in your own code, Zephyr cannot prevent it. And we see a lot of those. The result is that embedded endpoints are far behind and blind when they go to production. We have no monitoring and protection in real time, and that leads to everything we all saw in the past few years. The reason security is such a topic right now is that we are seeing more and more attacks. And not just attacks. When we're not monitoring our applications in real time, in the real environment where they operate, we also see recalls, problems, battery issues, communication issues, customer calls and complaints. It's hard to debug when you don't have real-time visibility and logs into what's going on.
So the blindness leads not only to security issues but also to performance and quality issues. This is a nice video; I want to take a few seconds to look at it: exploiting a device that is actually implementing all of the best practices in the market. It's hard to hear, so what you're seeing on the screen is two people, I think from Germany, exploiting a Cisco business router remotely via a buffer overflow vulnerability and actually getting root access to the device. This exploit is actually publicly available. There might be some business routers that are still unpatched, and the router uses encryption and all of the best practices — and still, a simple programming bug enabled them to exploit the device. The reason I am showing this is that vulnerabilities are inevitable and endless. We can't really release products without vulnerabilities. And it's not me saying so; it's just the numbers and the statistics. 70% of Patch Tuesday fixes — Patch Tuesday being Microsoft's regular patch release — are due to memory vulnerabilities. So if Microsoft has memory vulnerabilities on every patch cycle, how can we produce products without vulnerabilities? We have 2,000 new CVEs each month, 50 vulnerabilities per thousand lines of code, as I mentioned, and third parties: for example, URGENT/11, BlueBorne, Amnesia:33, Ripple20 — all of those were found in widely used, publicly available libraries that were scanned by multiple static analysis tools, and all of the tools missed those vulnerabilities. And if you have a third party with a vulnerability and static analysis misses it, you need to go out and release a patch. So the only reasonable conclusion, from a hacker's perspective, is that in every device I'm going to look at, every library I'm going to look at — TCP/IP, Bluetooth — I'm going to find a vulnerability. And honestly, this is how we felt when we looked for vulnerabilities. There is no way we're not going to find one.
And this is why I personally don't believe that we should target vulnerability-free devices, because that's just not possible. And there are also many attack vectors. So how do hackers actually get into a device? I want to show you this simple diagram that shows how many components can be part of the entry point for the hacker. First, inside the devices — all of them, it doesn't matter if you are a medical device or a smart plug — you have chips and modules within the device. Those have software within them, processing packets: Wi-Fi packets, Bluetooth packets. This software can contain bugs and issues. You have third-party code. A medical device and a gateway can use the same third party — the Bluetooth library, the TCP/IP library. This is how hackers can actually gain access to multiple different devices and attack at scale, instead of investigating or researching just one device. Then your device communicates with the outside. So there are protocol vulnerabilities — Bluetooth, not parsing user input properly, connecting to a mobile application — network vulnerabilities, and so on. Every input that the hacker can control — a packet coming from the outside to the inside, a USB connection to the device — everything like that is an attack vector. If you are completely unconnected and you don't have any physical connections either, then nobody can hack you, even if you have vulnerabilities. A vulnerability, in order to be exploited, has to be exposed and triggered remotely. So the situation, to put it simply: as defenders, we have to stop all of the vulnerabilities, and as attackers, we only need to find one. So usually the hackers are the Messi of the game, and they usually score, even if you stop 99% of the shots. So let's take the video that we've seen and break down the attack very simply. There is a hacker on the internet. There is a CVE, a memory corruption exploit, publicly available.
He downloads the exploit, finds an exposed router and takes over the router, the VPN gateway, completely. From that point on, even if the router itself is not interesting, he has full access to the enterprise network exposed behind the router. So he can perform lateral movement, gain control, disrupt service, deploy ransomware and so on. At this point, after we are already attacked, we have limited options: react, patch, try to stop it, try to detect it as fast as possible — and we don't have data. Another example: Sternum actually disclosed multiple vulnerabilities in the past few months, in Zyxel devices, in Belkin devices, in QNAP devices. And this is actually a critical vulnerability we discovered in the Wemo smart plug. The company actually said that they were not going to patch, because the device was three years old. That was a problem for some consumers, and now Belkin has decided that they are going to patch. But the vulnerability itself was, again, memory corruption. So Belkin is using all the best practices — static analysis, encryption — but still, finding a memory corruption was very much possible, and it enabled us to exploit the device remotely and gain a complete takeover. You can read the full research on our website. It's actually very interesting, including the firmware extraction and how you go about researching a device. So the current approaches, as we already understood, are reactive. Patching is reactive and costly. Encryption — and this is a quote that I like by Adi Shamir, the S in RSA, actually a co-inventor of the most widely used encryption algorithm: usually there are much simpler ways of penetrating the security system than cracking the crypto. Nobody is going to crack the cryptographic algorithm if they can just find a buffer overflow and take over the device. That's the easiest way to do it. And we already discussed that. So what can we do to protect against zero days and unpatched vulnerabilities?
As you already understood by now, I'm not a big believer in preventing vulnerabilities, but the way of exploiting vulnerabilities actually has a unique fingerprint. When we come to exploit a buffer overflow, there are some things that we have to do. The attack chain has four main steps. The first step is the vulnerability. The second step is weaponizing code: taking advantage of the vulnerability to manipulate the system behavior. This is from Wikipedia: to exploit a vulnerability, an attacker must have at least one applicable tool or technique that can connect to a system weakness. An exploit is a piece of software, a sequence of commands, that takes advantage of a bug or vulnerability to cause unintended or unanticipated behavior. What can we infer from that? First, exploitation is software. It has a chain of commands to execute something on the system. Second, it connects to a specific system weakness. The way to exploit a buffer overflow isn't like exploiting a command injection; the vulnerability actually enforces a specific exploitation technique. I need different techniques for different vulnerability types. Third, it must cause an unanticipated or unintended behavior. If you remember, embedded devices are very deterministic, so if they are not behaving as intended, we can assume that something is interrupting the legitimate behavior of the device. So what does it mean to prevent exploitation? How does zero-day prevention really work? We at Sternum call it the exploitation fingerprint. We have four patents on it, and it's really about bringing the benefits of endpoint protection and RASP — runtime application self-protection — into the embedded space. Some of this fingerprint includes, for example, preventing memory overflows, command injections, information leaks, manipulation of execution flow. Why are we focusing on those things?
Because manipulation of execution flow means that someone is trying to manipulate the deterministic execution flow — which can be calculated for each piece of software — to cause unintended behaviors. If we can calculate the expected behavior, we can know in real time when there are deviations from the legitimate behavior. Memory corruptions: in order to exploit a memory corruption, you have to corrupt the memory. You have to overflow buffers, for example. So what if we monitor the memory in real time to make sure there are no overflows and no corruptions? During development, we would be able to find memory leaks and overflows, and in post-production, we would be able to prevent exploitations, and so on and so forth. It really means the power balance flips, because now the hacker actually needs to find an exploitation technique and a vulnerability type that is not protected — not a specific vulnerability — that he can exploit while bypassing all the continuous monitoring, anomaly detection and deterministic protection embedded in the system. That can be really tricky. When I used to find stack overflows, for example, and the system used stack canaries, it would all go to waste. I just couldn't exploit it, because the canary would prevent me from doing so. So I had to look for something else, more sophisticated. If you have stack protection and heap protection and command injection protection, and you also have continuous monitoring and anomaly detection, it becomes really hard to do something to your system without you noticing or preventing it. So, for example, we deployed our solution on some of the devices that we found to be vulnerable, and we were actually able to stop the exploitation.
So now, using this exploitation fingerprint technique, instead of a complete takeover, a notification was sent to the customer that their device is under attack, including the line of code and the memory addresses that were attacked, visibility into the bigger picture — and device integrity and real-time operation are actually maintained. Uptime and everything mission-critical — the Wemo is not a mission-critical device, but for other devices it could be very significant. And more importantly, no reaction is required. Everything happens automatically. So really, the question at this point is: this sounds interesting, but integrating this kind of protection and monitoring into an embedded device must be a nightmare and will take me a few months. So let me just walk you through the Zephyr integration with Sternum and how easy it is. It is just as easy to integrate with embedded Linux, FreeRTOS and other systems. First, you just add our directory and a few lines to the CMakeLists of Zephyr. No other changes are necessary. Sternum runtime security will immediately be auto-activated and can be controlled directly from Kconfig. Lastly, you can define your own traces. We didn't really discuss the Sternum solution, but Sternum allows you to collect any kind of logs, events and data from your embedded system. You can really easily define that in our system — collecting button presses, battery status, error codes, logs, crash logs — and then you can monitor those events in real time. So it really takes a few minutes, maybe 10 minutes, to activate, and maybe three days to decide what to monitor. And then you get four layers of security: memory protection, control flow integrity, and anti-exploitation on the operating system itself, like command injections, monitoring for file operations, and so on.
And lastly, based on the data that we collect and that you decide to collect, we apply automatic anomaly detection that learns from your specific, customized data to detect specific anomalies, including what the expected range was, what the legitimate pattern is, and what is actually going on in real time. The Sternum platform in general includes three components: embedded security, which we already discussed; continuous monitoring, which is about live remote monitoring and analysis, powered by anomaly detection and AI; and business and operational insights that give you a complete view of the many analytics and trends happening in your post-production fleets. How does it really look? Some simple examples. Our system is divided into security, business intelligence, continuous monitoring and compliance, plus some workspace elements that help you find events, monitors and traces and manage the fleet of devices. The kinds of things that you can view are really up to you, but here are a few examples: attack types, data sent by firmware version, reported errors, devices affected by recent CVEs. You can see zero days or attack attempts — you can see here that it was prevented — you can respond, you can click on the alert and view the exact details, including the IP addresses and the line of code that has the vulnerability. You can see correlations between different events, you can see security anomalies, you can actually get crash reasons and crash reports from the devices. And so on and so forth, including downtime per customer; some users want to monitor batteries because they cause a lot of pain to the organization. And what you don't see on the screen is really our device view and debugging capabilities, where for those events — an anomaly, a crash, an attack — you really get a complete view of what's going on in real time, and it can reduce solving an issue from a five-month process to 15 minutes.
So security and monitoring are actually tied together, and this is why we're doing both. Being able to collect any kind of data and learn the specific data from the devices — not predefined values like CPU or memory — combined with on-device proactive protection, very similar to endpoint protection in other industries, only delivered with less than 3% overhead and around a 7% increase in code size, you are able to really do a lot in terms of operations, security and scalability. Sternum is the device-centric security and data platform with the three layers I mentioned before. We are already embedded in millions of medical, industrial and consumer devices, and you can read some of the recent case studies on our website. Thank you very much. Time for questions. Memory corruption, stack overflows, things like that, right? So I'd love to hear your opinion going forward: whether you believe in the promise of Rust for avoiding those vulnerabilities, whether you believe that's true, whether people should pivot to those kinds of memory-safe languages. To pivot to Rust, to memory-safe languages. Yeah, so no, I don't believe that. First, there are a few types of vulnerabilities: software vulnerabilities, logical vulnerabilities, design vulnerabilities, and I think deterministic protection, or runtime protection that prevents zero days, can be applied only to software vulnerabilities — software mistakes like overflows and command injections. For the other types, you really need continuous monitoring and anti-malware techniques, and this is why we provide the cloud platform that continuously monitors the overall behavior, because we can't prevent everything. We can prevent a specific set of vulnerabilities that are very common, but it's not enough. Then you are asking about secure languages. Unfortunately, I've been in places where we exploited applications — like, you know, WhatsApp and others — usually written in Java or Python. Those still have these types of issues.
Moreover, they are based on libraries that eventually use memory allocations, whether it's libc or the kernel. So of course it helps — it's not like coding in C and C++ — but if it was really preventing memory vulnerabilities, we wouldn't have so many of them. Around 70% of vulnerabilities are eventually memory vulnerabilities, in applications, in mobile applications, and so on. So the SoCs I regularly work with usually combine a processor system and an FPGA. Have you ever looked into, or do you see any potential in, memory security concepts where, for example, critical data sits in a block RAM within the FPGA and is continuously monitored by a state machine, so that basically the FPGA bitstream is a second source of security, because you would have to tamper with that as well? Are such concepts suitable for mitigating memory leakage issues? Yeah — shall I repeat the question, or are we good? Okay. So of course, every layer that you add — and I even included hardware security in the infrastructure layers, and I'm talking about TrustZone and what you just mentioned — helps. It helps secure the data, but it just means there is another hop that the attacker needs to make. For example, there are always APIs between the TrustZone or FPGA and the application processor. If someone is controlling your application processor because they found a memory issue, that processor is of course not protected just because you have hardware security. And if there is also a problem in the API, they can exploit it and go into the TrustZone or secure environment, and then, yes, leak the data and tamper with other state machines and tools. It's harder, because you now need two vulnerabilities and not one, and this is why those tools are great. But one question to ask is: are you okay with your application processor being hacked? And second, can we be sure that there are zero vulnerabilities between the secure environment and the application?
And usually, that's a tough question. Thanks. Sure. Yeah. Quick question: what are the system resources required to run your monitoring system, in terms of memory and CPU cycles? Yeah, so around 3% of CPU overhead, or latency, and it can be reduced to around 1.5% in really mission-critical systems. There is a code increase of around 7% of the existing code size. We protect the entire code, including third-party libraries, which means that as you have more code, we need more space to protect it. And third, memory consumption is really close to zero in real-time systems; in embedded Linux it's bigger than that, but around five kilobytes. Just to bounce on that: you say 3%, but does that mean the worst-case execution time? No, this is the average. And the reason it's an average is that if you have a function that, let's say, performs a memcpy every three lines of code, then we protect each memcpy, and that means a lot of protections in that specific function. There are specific opcodes that are executed for each protected line of code, and the average is between one and 3%. But if you have a function that is very critical, you can put it on a blacklist and it will just remain untouched. Thank you, everyone. If you want to see a live demo, step over to our booth.