Okay, so hello everybody. I'm going to be your last speaker today in this room, talking about a trust-based adaptive access control architecture. My name is David Halász, and I work as a principal software engineer at Red Hat; I'm in my ninth year there. I'm also a PhD student and researcher at Masaryk University in Brno, Czech Republic, in the Lab of Software Architectures and Information Systems, and I'm an alumnus of this school. You've probably seen those ducks outside in the corridor; the one from 2016 with "David" written on it is mine. My PhD research is about trust-based adaptive safety in autonomous ecosystems, which I'm going to talk about a little bit more.

So what are software ecosystems? If we combine multiple systems into one big unit, we can call them systems of systems, or ecosystems. It's a kind of evolution, as we keep creating more and more complex structures, and such an ecosystem can provide much more than a single system. If you want an analogy from buildings: the architecture of a regular house would be a software system, while the architecture of an ecosystem would be like a city, with the plans of every individual building inside it. This is the main focus of my research.

Of course, once we bring autonomy into the topic, we're no longer talking about software ecosystems but about autonomous ecosystems, or autonomous cyber-physical ecosystems. We get a much higher degree of autonomy, because we have multiple actors in the world who are working together, collaborating or competing, and member systems can join or leave at any time. Context changes produce a lot of unpredictable situations and a lot of uncertainty, which brings us to the main topic of my research: the safe and secure behavior of such systems.

We're going to look into a really small focus field of this research: ecosystem coordination. By coordination I mean, for example, that if you have a set of cars on a multi-lane highway, you can coordinate them to move in a single lane, reducing aerodynamic resistance; basically vehicle platooning. This can be called an autonomous ecosystem of self-driving cars moving in one lane.

We can do ecosystem coordination in two different ways. One is sending messages: we can ask these autonomous vehicles to talk to each other, and they have some pre-implemented features that react to the messages they send. This is relatively safe, because you have pre-trained behavior that you already know in advance, but it doesn't really give you flexibility in uncertain situations. What we are looking into more is sharing software: sending some kind of code, something executable, between autonomous systems, which the receivers can execute. Obviously it's not as safe as sending messages, because you're executing things, but it gives you the opportunity to gain new features on the fly. For example, with these vehicles here, it's enough if one of them has the smart agent, the software module for platooning; it can share that module with the other vehicles that don't even support the feature, and if everything works fine, they can move into one platoon even though they didn't support it before.
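To make the contrast concrete, here is a minimal Python sketch; all the names are mine, not from the talk, and the "module" is just a plain callable standing in for a real smart agent:

```python
from typing import Callable, Dict

class Vehicle:
    """Toy vehicle: capabilities map message names to executable handlers."""

    def __init__(self, vehicle_id: str) -> None:
        self.vehicle_id = vehicle_id
        self.capabilities: Dict[str, Callable[["Vehicle"], None]] = {}

    # Style 1: message passing. Only pre-implemented reactions can fire.
    def handle_message(self, message: str) -> None:
        handler = self.capabilities.get(message)
        if handler is None:
            print(f"{self.vehicle_id}: unknown message {message!r}, ignoring")
        else:
            handler(self)

    # Style 2: software sharing. The module itself is transferred, so a
    # receiver that never shipped with the feature can now execute it.
    def receive_module(self, name: str,
                       module: Callable[["Vehicle"], None]) -> None:
        self.capabilities[name] = module


def platooning(vehicle: Vehicle) -> None:
    print(f"{vehicle.vehicle_id}: merging into the platoon lane")


lead = Vehicle("car-1")
follower = Vehicle("car-2")
lead.receive_module("platoon", platooning)

follower.handle_message("platoon")  # ignored: the follower lacks the feature
follower.receive_module("platoon", lead.capabilities["platoon"])  # shared!
follower.handle_message("platoon")  # now the whole platoon can form
```

The flexibility and the danger live on the same line of code: whatever arrives in `receive_module` will run on the receiving vehicle, which is exactly why the rest of the talk is about deciding how far to trust it.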
Of course, running a third-party software module on your autonomous vehicle in privileged mode is kind of a bad idea. I guess everybody here can agree that we shouldn't do that; but even if it's a bad idea, it can still be fun. Our view of how to make it safe is to use the concept of trust. And by trust I don't mean trust as you know it from software engineering in general, with trusted computing, trusted execution, key pairs, encryption, and the trust in HTTPS certificates. I mean trust as in human psychology or philosophy, where you have a relationship between a trustor and a trustee, and the trustor accepts some kind of vulnerability by trusting the trustee. We are trying to model something in this area. Somebody smarter than me said that reputation-based trust can be effective for securing communication, and our belief is that we can also use it for interactions, even physical interactions, not just communication.

So our idea is to use trust as a decision factor in real-time evaluation. We already know that this trust will not be binary. I'm not sure about the representation yet, but in a simple solution with binary true/false, trust/don't-trust, you can get a lot of very dangerous false-positive situations, so a binary solution is definitely not something we want. We are also looking into reputation, which we interpret as trust assessed by other actors; basically gossiping. Let's say one autonomous vehicle has an interaction with another one, has a bad experience, and shares it with the others. It's basically the same as people gossiping about other people, which is not nice, but in autonomous ecosystems it might be useful for us.

So, to calculate trust (if I say a number, it might be a number, so let's go with that): we would use this external reputation on the smart agents, the software modules. We can use static analysis to find some kinds of vulnerabilities there. And the important part, which is not my research but which I'm going to use as an input, is predictive simulation using digital twins; based on the results of these digital twins, we can do live compliance checking, which I will explain later.

Is anybody here unfamiliar with digital twins? Okay, so a digital twin in our case can model anything from the physical world; basically, it's a digital representation of a physical object. In our case, the physical object can be the software module. It doesn't have to be, but I'll explain why that was our solution. As you see in the picture, what we do with predictive simulation is that in a simulated world, we run ahead of time and run simulations on the digital twin, while the smart agent, which in the case of software is the exact same thing, runs in the real world. You can compare these two, and based on that you can assess trustworthiness: how trustworthy the model itself is, based on the predictive simulations, and also how much the two differ from each other, which is the live compliance checking I was talking about.

So I really don't know yet what the trust representation will be. My best guess, and in our papers we talk about a percentage, but more and more research is pointing towards a vector of different aspects, so maybe five or six percentages based on some kinds of metrics. But I'm pretty sure that we would like to go with this smart agent plus digital twin bundle, with a digital signature, because it gives us some extra safety.
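As a rough illustration of that vector idea, here is a sketch; the four aspect names and the equal weighting are my own placeholders for whatever metrics the final representation ends up using:

```python
from dataclasses import dataclass, fields

@dataclass
class TrustVector:
    """One score per aspect, each in [0, 1]; aspect names are invented."""
    reputation: float       # gossip: trust reported by other actors
    static_analysis: float  # how clean the module looked to the analyzer
    twin_fidelity: float    # digital twin vs. the preset simulations
    compliance: float       # live compliance: real behavior vs. prediction

    def aggregate(self) -> float:
        # Equal weights are a placeholder; a real system would tune these,
        # or keep the full vector and never collapse it into one number.
        values = [getattr(self, f.name) for f in fields(self)]
        return sum(values) / len(values)

trust = TrustVector(reputation=0.6, static_analysis=0.9,
                    twin_fidelity=0.8, compliance=0.4)
print(f"aggregated trust: {trust.aggregate():.0%}")  # roughly 68%
```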
So if we talk about digital twins, this is our generic idea: we verify the digital signature. If it fails, we don't care about the bundle. If it passes, we do static analysis; again, if it fails, we don't care about the bundle. Then we verify the digital twin on some preset simulations. If that fails, we don't just reject the bundle, we also propagate the trust score to the world, telling others that this might be an untrustworthy software module or smart agent. Then we do predictive simulations, execute the agent, and do the live compliance check based on those results.

That was the simplified version; the whole architecture looks like this, which is kind of complex, but we'll get into it a little later. What it actually does is predictive simulation of some scenarios using the digital twin: you have some reality, you can derive multiple future scenarios from that reality, and you can run all those simulations with the digital twin. Comparing the simulations with the actual results is the live compliance check. Based on that, we can calculate the trust score, which in this case can be a number, let's say 45%. And based on the score, we can set up a decision tree that either exposes certain features to this smart agent or, on the other hand, conceals them. Basically it's a kind of access control based on how much we can trust this module, given its reputation and other metrics.

So again, looking at this: we have some external reputation coming from other vehicles. We have the smart agent and the digital twin coming in to some kind of gatekeeper. We verify the digital twin on some preset simulations, then we load the twin into the simulator and the smart agent into the sandbox, and these two entities work in tandem. We calculate some kind of comparison that goes to the trust aggregator, which produces a trust score, and based on the trust score the sandbox does or does not get access to certain features of the vehicle's platform.

But here's something that might be interesting for you software engineers who don't really work with automotive, who work with, let's say, different technologies in the cloud. If you squint at this a little, what does this architecture resemble? You have some rules, you have some role, and based on the rules and the roles, you have some access. What if I say Kubernetes RBAC is something similar? And what if we could implement something trust-based in Kubernetes? Instead of RBAC, we could do something like T-BAC, which would implement a similar architecture in the cloud. I mean, it would probably be totally useless in ordinary software environments, but I see it as a viable way of doing a proof of concept. And I heard some rumors that there is a team at Red Hat trying to run containers on vehicles; okay, somebody is shaking his head, so it might not be true. It would probably be useless in a real cluster where you're running services in the cloud, because you might not be able to run those kinds of simulations, but in some edge computing cases it might be useful. So that's our vision of how and why we would like to run this in Kubernetes at some point.
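Putting the gatekeeper steps in order, here is a control-flow sketch. Every check is a hypothetical stub (a real implementation would call an actual signature verifier, static analyzer, and simulator); only the ordering of the gates, and the fact that a failed twin verification is the point where we start gossiping, comes from the talk:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bundle:
    agent: str          # the smart agent, i.e. the executable module
    twin: str           # its digital twin
    signature_ok: bool  # stand-in for a real signature verification

def static_analysis_ok(bundle: Bundle) -> bool:
    return "known_vuln" not in bundle.agent        # placeholder check

def twin_passes_preset_sims(bundle: Bundle) -> bool:
    return bundle.twin == bundle.agent             # placeholder fidelity check

def live_compliance_score(bundle: Bundle) -> float:
    # Would compare the twin's predictive simulations against the agent's
    # sandboxed execution; a constant stands in for that comparison here.
    return 0.45

def propagate_reputation(bundle: Bundle, score: float) -> None:
    print(f"gossip to the ecosystem: {bundle.agent!r} scored {score:.0%}")

def gatekeeper(bundle: Bundle) -> Optional[float]:
    """Returns a trust score, or None if the bundle is rejected outright."""
    if not bundle.signature_ok:
        return None                        # bad signature: drop the bundle
    if not static_analysis_ok(bundle):
        return None                        # static analysis failed: drop it
    if not twin_passes_preset_sims(bundle):
        propagate_reputation(bundle, 0.0)  # reject AND warn the others
        return None
    return live_compliance_score(bundle)   # predictive sim + compliance check

print(gatekeeper(Bundle(agent="platooning", twin="platooning",
                        signature_ok=True)))       # 0.45
```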
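And to make the RBAC analogy concrete, a toy "T-BAC" rule set might look like the following, where the continuously updated trust score, rather than a statically assigned role, decides which platform features stay exposed. The feature names and thresholds are invented for illustration:

```python
# Hypothetical per-feature trust thresholds: low-risk features need
# little trust, safety-critical ones need a lot.
FEATURE_THRESHOLDS = {
    "read_telemetry":   0.20,
    "send_messages":    0.50,
    "steer_in_platoon": 0.80,
}

def allowed_features(trust_score: float) -> set:
    """Expose every feature whose threshold the current score clears."""
    return {feature for feature, needed in FEATURE_THRESHOLDS.items()
            if trust_score >= needed}

print(allowed_features(0.45))  # {'read_telemetry'}: the rest stay concealed
```

Unlike a role binding, the score can drop while the agent is running, so a feature that was exposed a minute ago can be concealed again after a failed compliance check.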
So, to wrap it up, there is some future work we are looking into. Based on this architecture, we are trying to swap the smart agents for pairs of sensors, so we could calculate trust on sensors, trust or distrust them, and based on that believe or not believe what the sensors say.

Another approach is to use digital twins of whole autonomous vehicles and extend this architecture to an entire ecosystem. We could simulate other cars, calculate their trust scores, and behave accordingly, so the architecture wouldn't live inside just one vehicle but would be extended across multiple ones.

And the last one, we just submitted a paper on this and it's not yet accepted, is trustworthy execution in an untrustworthy environment. Let's flip the concept, so that the smart agent doesn't trust the execution environment it's running in. We had some initial discussions with people who understand blockchain, and this might be an unsolvable problem, because the vehicle manufacturer might have some god-like power over your smart agent, but there are still ways to detect that such a thing can happen. So stay tuned for that; it might have some interesting results.

And last but not least, I'd like to mention that my supervisor has a talk tomorrow at five o'clock, or actually I will give it, because she twisted her ankle today. So if she recovers, check out her talk; otherwise, see me tomorrow as well. Thank you.

Yeah, I tried to make it fast because dinner is waiting. Which one? This one? Sorry, I cannot hear you. Because that would fail on any other vehicle. This is an architecture that has some kind of static analysis built in, and the point is that everybody would implement this framework; they would have the same kind of static analysis, so they could catch the same issue. Anyway, the question was why the static analysis isn't notifying the aggregator.

So the next question is about how the aggregator gets protected from spamming, from false positives and false negatives. The results are actually not coming from the smart agent but from the simulator that runs the smart agent, so the smart agent cannot really spam it. The aggregator gets its messages from an existing component that's also under our control.

The next question was about track record: that regarding the static analysis, we might get some interesting reputation if a certain vendor keeps failing static analysis. We weren't really looking into specific vendors and track records, but that's an interesting aspect, thank you.

Go ahead; just please be louder, because I cannot hear you over the AC. So the question was about GPG and trust chains, that this resembles what we have with GPG and its trust chain. No, we were actually trying to get around the topic of trust chains and that static kind of trust. Our approach tries to be dynamic, and we try to avoid and throw away everything that was traditionally interpreted as trust in software engineering and computing.

Any more questions? Please? So the question was about vulnerabilities and the attack surface, specifically in the trust aggregator and the trust gate. Well, the trust gate isn't really in a shape yet where I can talk about the details; it's just a high-level thing. Regarding the trust aggregator, there is some research with results on how to do countermeasures against such attacks. The trust aggregation and the calculation isn't really my research, though; my focus is more on this part than on the left. Okay, I guess that's it; thank you for your attention.