Hello everyone. Obviously, my name is not Barbora Bühnová. I'm a last-minute jump-in because there was an accident: an argument between Bora and the staircase here at the conference, and she couldn't make it today. So my name is David Halasz, and I will be talking about trust management in digital ecosystems. Please try to sit in the first rows, because in the back you cannot hear anything. Stay in the front so you can hear, unlike yesterday.

Okay, so I'm going to stay seated, because first of all I'm tired, and second, I have some notes from my supervisor, who actually prepared this talk; I'm just the last-minute replacement.

At the beginning I would like to talk about our university, which is not this one but the other one. It was established in 1919. It's the second-largest in the Czech Republic after Charles University in Prague, and it has over 30,000 students. Our youngest faculty is the Faculty of Informatics, which has over 2,000 students. Bora is the vice-dean for industry partnerships, so I guess that's the reason why she had these slides in there.

So, as digitalization advances and becomes more and more normal in our lives, there are new challenges that we have to tackle. There is also the dual-use dilemma: technology can be used for good or for bad, and if we want to embrace the good, we actually have to use it; banning it is a bad idea. It's the same as with fire.
You can use it for good and also for bad. And if we talk about digitalization, there are challenges regarding hyperconnection: every device will be talking to every device, and humans will have more interactions with machines. Which brings us to the topic of dynamic, autonomous cyber-physical ecosystems, not just autonomous but dynamic, where uncertainty and unpredictable situations can happen on a larger scale than before. And obviously we have to secure and future-proof these technologies, so we can catch issues that we don't even know about right now.

Our research specifically is around critical infrastructure. What makes it critical is that usually human lives are at stake, or our way of life is at stake. So we talk about smart grids, autonomous vehicles, smart cities; it's basically everything around us that can harm us if it's used in a bad way. I mean, that's our view; there are different definitions of what critical infrastructure is.

We believe that trust might be the way out of this: the way systems trust or don't trust each other, or even systems trusting humans and humans trusting systems. So our first view on this is trust management in an internet of vehicles, a network of vehicles, or vehicular networks. We can talk about collision avoidance.
So if two vehicles meet, vehicle A may trust vehicle B if they behave so as to avoid colliding with each other.

Then there is vehicle platooning. As you see in the picture, you have multiple vehicles; you can move them into one lane, and the aerodynamic resistance gets lower for all except the first vehicle, so they can save fuel, for example. Can the randomly joining vehicle trust the other members of the platoon, and can the platoon trust the vehicle that's trying to join the network?

Then there is my research, which is about runtime updates: running smart agents in the vehicle and trying to assess some kind of trust in the software that's being run.

And of course, as I said, there is human-to-autonomous-vehicle trust. Let's say there is a human driving a vehicle that slowly starts becoming autonomous; I'm not talking about hours, but weeks or months. So a human starts driving the car, but the car slowly takes over as the human allows it. We can have situations like this, where we still have traffic lights, which wouldn't be needed in a fully autonomous system except for pedestrians.

So, if we look at how trust and trustworthiness were handled by the industry before: increasing the security, reliability, or availability of these services and technologies doesn't really improve trust in the system itself.
It's not a conventional problem in computer science, because trust as we interpret it is a human or social concept: a belief that the other system will not harm you when you expose vulnerabilities to it. And even though the system might say, "Hey, I'm trustworthy, I'm certified and everything's fine" based on some checks in the past, it can still behave badly in the future. We can even intentionally design devices whose sole purpose is to do harm and damage at a certain point in an ecosystem. Let's imagine a vehicle that behaves well for 14 days, and on the 15th day it just does something that collapses the whole city and kills a lot of people. That's what I was talking about: agents with malicious intentions. Banning this, which sometimes comes up in legislation, isn't really a solution, because somebody else will implement it anyway. That's one of the reasons we need to be proactive and come up with technologies that will solve these problems.

Again, understanding trust: we started doing a survey in fields of science other than computer science about what trust is, and we found these nice definitions. Basically, trust is a relationship between a trustor and a trustee. If you want to read these, I can give you some time. But what we settled on is the trust-in-automation definition, which suits our needs best: basically a belief, a relationship between the trustor and the trustee, in a context of uncertainty and vulnerability, where the trustor exposes some side of itself that the trustee could exploit, but in a safe environment that shouldn't happen.

This kind of trust is subjective: if A trusts B, it's not necessarily true that B trusts A, which is asymmetry. And it's transitive: if agent A trusts B, and B trusts C, then A might trust C as well, but not always. And we can go into reputation and how
social structures work. Let's say humans gossip about each other, or hold reputations of each other. Say one person is very famous, people talk about them, and you hear good things about that person; then you might trust that person implicitly.

So we can evaluate trust in various scopes. There is the local one, in a situation: two vehicles meet and can assess trust locally. Then, based on that history, based on that relationship, in the future they can build up some kind of reputation that can be consumed by other consumers of this trust framework. And of course it can be context-specific: you can trust a vehicle in a collision-avoidance scenario, but you cannot trust it in a vehicle-platooning scenario, for example because of a faulty component or a wrong implementation. So we are looking into some kind of dynamic evaluation where we deal with these things.

So we came up with this nice picture of how trust works. (Guys, if you go to the back you will not hear much.) Basically, we have direct and indirect trust. Direct is when you interact with some other entity directly; indirect is when you hear, again, some kind of gossip, if we talk about humans. Then there is some context information about the specific situation, and based on that we can aggregate these results and make a decision about trust. Which, in our case, as I stated yesterday if you were at my talk, should probably be non-binary, some kind of complex structure. We don't really know yet what, but so far we work with percentages in some proof-of-concept case studies; it will probably be something like a vector of different percentages.

If we evaluate trust directly, we don't talk about reputation, just trust only. Then there are quality-of-service metrics, when we talk about reliability, availability, or accuracy, and there are the social metrics, when we talk about the human aspects of friendship, honesty, benevolence, altruism
or unselfishness, and we need some ways to measure these.

There are two metrics we already know how to measure. There is openness, or transparency, and our view on this is that we could use digital twins. Is anybody familiar with digital twins here? Okay, the ones who were at my talk yesterday. So a digital twin is a model of a cyber-physical system, but existing only in the cyber world. You can use it to simulate what the actual system would do in the future if you put it into a certain context. And if the cyber-physical system shares this model with you, that means some kind of transparency: "Hey, I'm willing to give you my model, I'm open about it, you can use it to determine my future moves." So this is our way of defining openness in certain cyber-physical situations.

The other one is honesty: how good this model is, how honest this model is. Basically, if you run this model and it behaves in a certain way, but the vehicle doesn't behave in the same way, then we can assume that the vehicle wasn't really honest about this digital twin.

Again, we have these challenges that we are trying to solve. There is the question of update trust: the vehicle is trying to download some kind of upgrade, a black box. Can we trust it? Can we run it? Can we expose it to critical functions of the vehicle, or will it kill us? Then there is thing-to-thing trust, when you have two entities trying to assess each other's trustworthiness: how much can they interact, and how many of their safety features can they turn off to be more efficient?
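A minimal sketch of how that honesty metric might be computed: run the shared twin model, then compare its predicted behavior with what the vehicle actually did. The relative-error normalization and the speed traces below are my own illustrative assumptions, not the project's actual method.

```python
def honesty_score(twin_trace: list[float], observed_trace: list[float]) -> float:
    """Agreement between the behavior predicted by the shared digital twin
    and the behavior actually observed, mapped to [0, 1]
    (1.0 = the twin was perfectly honest about the system)."""
    assert len(twin_trace) == len(observed_trace)
    # Relative error per sample, so the score does not depend on units.
    rel_errors = [abs(p - o) / max(abs(p), 1.0)
                  for p, o in zip(twin_trace, observed_trace)]
    return max(0.0, 1.0 - sum(rel_errors) / len(rel_errors))

# Twin predicts the vehicle slows to 20 km/h in a school zone; it doesn't.
twin     = [50.0, 30.0, 20.0, 20.0]
observed = [50.0, 45.0, 40.0, 38.0]
print(round(honesty_score(twin, observed), 2))  # low score: twin was not honest
```

A trust framework could then feed this score into the aggregation step alongside the direct and indirect evidence.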
Usually, when we talk about safety features, they limit the vehicles, let's say by slowing them down. If you trust another vehicle, you will probably pass each other at a higher speed, because you trust it and you don't feel that vulnerable. But if the trust level is lower, then you probably do it more slowly, so you have more time for sudden reactions if something bad happens.

Then there is my research about adaptive safety: how much should we impose these safety features based directly on the level of trust, the trust score? And what happens if I make a false-positive or false-negative assumption? One of the partial solutions is, as I said, the non-binary percentage, where you have a much lower chance of false positives and false negatives, because you don't just have the two extremes. But we will probably take it further to some kind of vector, which is still in progress.

And the last one is governance: who should be responsible for keeping up such an ecosystem? Who should be managing the trust in a centralized way? Who should be responsible for certain things? And of course we also look into the ethical side.

So, regarding update trust, one of the scenarios is what I was talking about: a digital twin being run. Let's say a software module that enforces speed limits or certain traffic rules in a smart city. How do we ensure that this third-party software will not break our vehicle, and also fulfills its purpose, limiting you in school zones, let's say? We treat it as a black box.
We simulate its digital twin and run predictive simulations, and based on those predictive simulations we can do live compliance checking: assess how much we can trust it, and based on that, expose it to certain features or not. Or, if it's really bad, we can trigger some kind of safety system that disables it and doesn't allow the agent to run in the vehicle at all. This is the architecture I was talking about yesterday, so I'm not going to go into the details; it is supposed to somehow enforce that a software module runs in a secure way on autonomous vehicles.

The next one is collision avoidance. We have some experiments with drones that assess the trustworthiness of other drones based on previous behavior, also pulling reputation into the picture, and try to avoid collisions in mid-air.

Then there is adaptive safety, my research, where we are trying to adapt safety features based on the level of trust, safeguarding how vehicles will behave in an uncertain and unpredictable world. Not just vehicles, but we are trying to demo most of the things on autonomous vehicles. Again, this is the architecture I'm not going to go into details on. We also designed a kind of adaptive safety framework, where you have a model that calculates a trust score or value, we propagate it to the outside world, and there is a safety module that, based on a decision tree, turns on certain safety features or exposes certain features based on how the trust is changing over time.

And there is governance at the end, where we are looking into how to actually calculate trust, how to represent it, how to punish or reward systems that are decreasing or increasing their trust, and what kinds of attacks exist. We are also looking into evidence collection, some kind of forensic readiness, so that we can hand it to legal institutions later if there is some kind of misbehavior, because that might happen.
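In a first approximation, the safety module's decision tree could be as simple as a few thresholds on the trust score. The thresholds and feature names below are invented for illustration; the framework's actual decision tree would presumably be richer and, as noted later, dynamic.

```python
def safety_features(trust: float) -> dict[str, bool]:
    """Map a trust score in [0, 1] to enabled safety features: a stand-in
    for the decision tree inside the adaptive-safety module.
    Thresholds 0.3 and 0.7 are assumed values, not the project's."""
    if trust < 0.3:
        # Untrusted: keep every safeguard on and isolate the agent.
        return {"speed_cap": True, "extra_distance": True, "disconnect_agent": True}
    if trust < 0.7:
        # Partially trusted: keep the physical safeguards, allow the agent.
        return {"speed_cap": True, "extra_distance": True, "disconnect_agent": False}
    # Trusted: pass each other at higher speed, with smaller safety margins.
    return {"speed_cap": False, "extra_distance": False, "disconnect_agent": False}

print(safety_features(0.85))
```

Re-evaluating this mapping whenever the trust score changes gives the "features turned on and off over time" behavior described above.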
There is also pre- and post-incident evidence.

Then there are problems that we haven't solved yet. What kind of trust score should we assign to newly joining vehicles or entities in our system? We will probably also need some kind of erosion or inflation of the trust score, because you might have been trustworthy yesterday, but it's not necessarily true that you will be trusted today. And there are attacks, like black-swan blindness and other sorts, that we are also looking into.

So the challenges are: the scope is situational. As I said earlier, in one scenario you can trust a vehicle, let's say for lane keeping, but in another one, for speed control, you cannot. Trust is subjective. Again, there is the default trust score I talked about, and erosion. We also need to figure out how to detect malicious intentions that are hidden: let's say a system does well for 14 days and increases its trust score, then on the 15th day it just goes on a rampage and starts killing people. How do we ensure safety against untrusted agents? In a cyber-physical system you can limit the cyber part; you can disconnect vehicles from the network, but the vehicle will still be in your smart city and can still do damage. So we probably need some means to enforce our rules; the police will not be jobless at that point. And there is still a high degree of dynamism and uncertainty: we will never know in advance how a network of, let's say, autonomous vehicles or any kind of smart systems will converge, because they will change over time, and you will have millions of unpredictable combinations.

So, regarding attacks: one of my colleagues is looking into what kinds of attacks can happen. The important part is that there are individual and collusion attacks, and usually they end up either in unreliable decision-making.
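The newcomer default score and the erosion idea could be prototyped as decay toward a neutral value, combined with an asymmetric update so that trust builds slowly and drops fast; that also limits how much credit a "14 good days, then rampage" agent can bank. All constants below are assumed, not taken from the project.

```python
DEFAULT_TRUST = 0.5      # assumed score for a newly joining entity
HALF_LIFE_DAYS = 7.0     # assumed: half the deviation from neutral gone in a week
GAIN, PENALTY = 0.05, 0.40   # assumed asymmetric learning rates

def erode(score: float, days_since_evidence: float) -> float:
    """Decay a trust score toward DEFAULT_TRUST as its evidence ages:
    trustworthy yesterday does not mean trusted today."""
    decay = 0.5 ** (days_since_evidence / HALF_LIFE_DAYS)
    return DEFAULT_TRUST + (score - DEFAULT_TRUST) * decay

def update(score: float, interaction_ok: bool) -> float:
    """One bad interaction costs far more than one good interaction earns."""
    if interaction_ok:
        return min(1.0, score + GAIN * (1.0 - score))
    return max(0.0, score - PENALTY * score)

# 14 good days, then one bad one: two weeks of credit largely wiped out.
score = DEFAULT_TRUST
for _ in range(14):
    score = update(score, True)
score_after_good = score
score = update(score, False)
print(round(score_after_good, 2), round(score, 2))  # prints: 0.76 0.45
```

The half-life and penalty would have to be tuned per context; too aggressive and honest systems never accumulate trust, too lenient and the on-off pattern pays off.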
So we have some wrong trust evaluation, and based on that we do something that we shouldn't do. The other one is false trust recommendation: again we have a rating that is too high or too low, and we recommend it to another system, which can then do bad things.

Some of these attacks are, for example, self-promoting, when the system tries to say good things about itself. There is whitewashing; probably you've done it in your childhood: you created an email address, you did something on the internet, you got banned, and then you created another email address and tried again. There is a discriminatory attack, when the attacker targets other actors of the system and tries to pull down their trust scores. And there are other things like on-off attacks, when a system is sometimes good and sometimes bad; we need a system that can handle those situations. Then there is bad-mouthing. I can bring up an example from human social situations: in primary school, five kids start picking on one kid and saying bad things about that kid. This can happen with autonomous vehicles too. Or the other thing, ballot stuffing: five kids, or five vehicles, agree that one of them will have a high trust score by appearing to do good and good and good, and at some point, on the 15th day, it starts going on a rampage.

So these are the things we are working on, different aspects of all this. Mine is more the adaptive safety and the architecture around software modules, so that's the part I can take questions on; the rest of them I can try.

Thank you. Any questions? Yes? I cannot hear you, please come closer or shout. Do you need a one-word answer? Yeah, so I have to repeat the question. The first question was whether this decision tree is static or dynamic, and the second one was how it is designed. It will probably be dynamic; I'm not sure how dynamic, I cannot answer that right now. But in uncertain
situations you cannot have a static one. It might happen that vehicles will exchange their decision trees at some point; we are also looking into that, but we are not really deep into it yet. And how it will be designed, that's also an open question. Anybody else? Please. Okay, do we have any questions on Matrix? Do we have a session chair? Okay, if we have no session chair, then I guess this was all. Thank you, everybody, for coming.