Hi, everyone. Welcome to our session. Thank you for being here. My name is Nicolas Saqlampanakis. I am the Tech Lead for Blockchain and DLT in Digital Incubation, Fujitsu Central and Eastern Europe.

Hi, everyone. My name is Ege Kubilai. I'm a software engineer for Blockchain and DLT, also at Fujitsu Central and Eastern Europe, in the Digital Incubation department. Thank you all for showing interest and joining our session, Critical Reporting Chains with Hyperledger Fabric.

Okay, let's start by talking about critical reporting chains and why they are so critical. To get us into context, we can first talk a bit about critical infrastructures. To give a few examples, we can name transportation, water systems, defense infrastructures, energy-related installations, industrial units, and so on. Now let's take transportation, for example: take an airport, and imagine an alarm being raised for a potential fire that could eventually result in significant harm to public safety or other dramatic consequences. What happens next is that a series of entities with different levels of handling and different sorts of long and complex protocols will be involved, and at the very end of it there might be an auditing process by an auditing body to determine whether each level of handling was done correctly. Nicolas?

So based on what Ege just described, let us consider a simple scenario. Let us imagine an incident taking place in a train tunnel in the subway, close to a subway station, for example. This tunnel is monitored by some devices, by some monitoring equipment, that raise the alarm, and as Ege described, after this we have a process where different stakeholders are involved at different stages to handle it.
So the first one in our scenario could be the local security department in the station. They need to start taking action, they need to start following their own processes, and it could be that they send some security personnel to check on the situation, evaluate it, and report back. Based on the knowledge and understanding of the situation that has been gathered, they need to make a decision. It could be that it is something insignificant, or it could be something critical that needs to be escalated to the next stakeholder, which in our scenario could be a public emergency service. The public emergency service receives this piece of information and needs to start acting accordingly. So they start acting as their protocols dictate, which means they may need to shut down traffic around the station, and they would probably need to dispatch some emergency units that are in the area to take over the handling of the emergency on the spot where the alarm was created. At the end, we have the auditing body involved, which needs to audit whether everything went according to what was expected from all the stakeholders that were involved. They need to investigate and see if things went as expected.

So we have identified several challenges in this, and a major one is that the actors are expected to make critical decisions based on information that originates beyond the boundaries of their own organization and their own control. In the case of something going wrong, each actor must prove that they acted according to the protocol. Every actor, of course, has the motivation to deny wrongdoing or deny any negligence that would lead to liabilities. And obviously, this makes the auditor's job very, very hard. The auditor must draw conclusions based on evidence collected from each stakeholder, and each stakeholder makes claims from its own perspective.
And this makes the audit a process of high complexity, high effort, and high risk for the involved parties. Ege?

So we, as the Digital Incubation department at Fujitsu, partnered with Hexagon to try to address the challenges we just talked about. To give a bit more detail about us: at Digital Incubation, we are located in the Fujitsu Central Europe CTO's office, and we are basically a group of experts in novel technologies such as blockchain, artificial intelligence, and digital annealing. What we do is identify use cases with high potential for scaling from a business perspective and work with early adopters from different market segments on the co-creation of POCs and MVPs, which we then pass to production. And Hexagon is an established global company, a pioneer in making next-gen sensors, and a domain expert in the IoT monitoring field. Nicolas?

All right. So let us have a look at what the monitoring ecosystem looks like. A core component is the IoT platform of Hexagon. The IoT platform integrates sensors and gathers information that it provides to the dashboards of the different operators who need to make decisions. The IoT platform can collect and combine data to build context and enhance the situational awareness of the operator. The IoT platform can also perform actions based on rules that are defined in its configuration, utilize computer vision and AI, and, for example, decide by itself that the information it receives from the sensors is critical enough to automatically escalate the alarm. Another interesting component that we work with in this project is the BLK247. The BLK247 from Hexagon is a next-gen multi-sensor device. It comes with a lot of very interesting features, like laser radar, and it has thermal scanners on it. It is a device that is integrated with the IoT platform like the rest, but being so advanced, it can also speak directly to the monitoring dashboard of the operator.
And, for example, stream some data there. In this situation, the operator is expected to evaluate the data that is presented to him by the dashboard and start following the process he has to follow from the perspective of his own organization. He follows his protocol, he evaluates the information and the context he is in, and one decision could be that he needs to escalate. That means he would involve the next stakeholder, who needs to intervene and take over the situation if it is that critical. And we should assume that this is an out-of-band communication, because the next stakeholder, the next organization, is not necessarily tightly technically integrated with him.

So in order to address the challenges we described before, we introduced a layer of Hyperledger Fabric that crosses the boundaries of each individual organization, of each individual stakeholder, and we integrated all the relevant components on it. First, the BLK247. The BLK247, being a smart, next-gen device, could be integrated directly. What we get there is that the alarm events themselves and the relevant data are recorded, timestamped, and secured by the ledger. The device itself generates transactions transmitted to the ledger. This does not cancel out the pre-existing communication channels, but brings the additional benefit of an extra communication channel where we can address things like man-in-the-middle attacks on the communication channels. Because imagine that this is potentially a real edge device monitoring a location that might be targeted for malicious actions. So this is important. An interesting feature of the BLK247, especially in combination with a DLT, is the fact that the BLK has anti-tampering security measures on it.
So if you try to mess with its operation and tamper with it, it will raise an alarm over the ledger, saying to the whole ecosystem involved: you should not trust me so much from now on, you should think twice about the data I am transmitting. And we can trigger countermeasures on that. The next component is the IoT platform. This is obviously also integrated. The IoT platform can intercept alarms that are generated by a Fabric-integrated device like the BLK247. It can also create alarms on behalf of sensors that cannot be integrated with Fabric, serving as the trust anchor for the blockchain, for the DLT. Moreover, the IoT platform can enhance the context of an alarm as it is recorded on the ledger, by providing additional information. This is important because it enhances the information and the situational awareness of the operator who will take part in the process later on. As we described before, the IoT platform can even make its own decisions based on the data it receives and based on its configuration. These decisions from the IoT platform are also recorded on the ledger. So if a decision is taken by the IoT platform to escalate, we know exactly who decided to escalate, why, and in which context. The next component that gets integrated is the dashboard of the operator. Now we can cross-verify the information we receive from the IoT platform and from devices like the BLK against the ledger. We enhance the security. We know exactly how the context has been gradually built, and we know that in this context we, as operators, need to make some decision and proceed with the situation we are handling. Now, if we manage to integrate all this, the obvious next step is to integrate the next stakeholders, who previously were not tightly integrated.
Because of the decentralized nature of the DLT, we can enable APIs that allow additional entities to get involved directly on the technical level, which previously was perhaps not possible. This is granted and enabled by the decentralized nature of the DLT. We know that we can, in essence, grow beyond the boundaries of a single organization. This can obviously continue along the whole chain of involvement and engagement that the situation demands. At the very end, we have enabled the audit to be performed not against assumptions, indications, or claims of individual stakeholders, but on cryptographically verifiable and undeniable facts. As recorded on the ledger, we know exactly who made what decision, in which context, and at which time. We have a tight context of the situation the operator was working in when he made a given decision. Everything is timestamped, and all the information leads back to the ledger. So we manage to record the whole critical reporting chain on the ledger in an undeniable way, with cryptographically verifiable facts and actions. This reduces the risk and the effort required to perform an audit. Because we do that, and because we address these challenges in this way, we claim that it is a smart monitoring ecosystem.

So, to recap, let's see the major benefits of DLT enablement here. We have the alarms and the relevant data recorded on the DLT. The handling is recorded on the DLT, because we have the whole process recorded there, and the process itself is enforced by the rules of the chaincode. The application enforces the rules: who is required to engage, in which way, and when. Each decision and its relevant context are tightly bound to the time it was taken. And this leads to the auditing being greatly enhanced. We reduce the risk. Actors who perform as the protocols dictate are out of risk; they do not need to prove it.
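The chaincode enforcement just mentioned ("who is required to engage, in which way, and when") can be illustrated as a small state-machine check, of the kind a chaincode function would run before writing a new state to the ledger. The states, roles, and transitions below are invented for this sketch and are not the project's actual chaincode.

```go
package main

import "fmt"

// allowed maps current state -> role -> the next state that role may
// set. These transitions are illustrative assumptions only.
var allowed = map[string]map[string]string{
	"CREATED":      {"subway-security": "ESCALATED"},
	"ESCALATED":    {"police": "ACKNOWLEDGED"},
	"ACKNOWLEDGED": {"police": "RESOLVED"},
}

// nextState validates a requested transition the way a chaincode
// function could, rejecting actors who act out of turn.
func nextState(current, role, requested string) (string, error) {
	next, ok := allowed[current][role]
	if !ok || next != requested {
		return "", fmt.Errorf("role %q may not move alarm from %q to %q", role, current, requested)
	}
	return next, nil
}

func main() {
	s, err := nextState("CREATED", "subway-security", "ESCALATED")
	fmt.Println(s, err) // ESCALATED <nil>

	// The police cannot act on an alarm that was never escalated:
	_, err = nextState("CREATED", "police", "ACKNOWLEDGED")
	fmt.Println(err != nil) // true
}
```

In Fabric, the caller's role would come from the transaction's client identity (its MSP and certificate), so each recorded transition is also cryptographically bound to the organization that performed it.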
It is also easy to identify wrongdoing, if that is the case, or negligence, in a non-disputable way, and address potential liabilities. Finally, the technical ecosystem is extensible in a decentralized way: further stakeholders can be integrated into Hyperledger Fabric and enjoy the same benefits.

So let's see how this works again. Typically, we have the Fabric framework components that are relevant for each organization. So we have peers, certificate authorities, databases, and so on for each, and also the ordering service. Each integrated component of each individual organization is integrated through an API gateway; we follow the API gateway pattern. The API gateway externalizes Fabric calls as traditional APIs that are meaningful for more mainstream applications, let's say. And our API gateway is deployable in a flexible fashion: it can be deployed as a single binary, or it can be built up as a container.

To see how this is actually possible, let's have some insight into the API gateway itself. What the API gateway is, at the end of the day, is a modular application. It is modular because it has flexible internal components. These components are pluggable for us as developers: we can choose exactly the services that are relevant for the integration we need to provide. Typically, it will provide some external APIs, enabling other actors to call it, and typically it will also include a Fabric client, so that it enables interaction with the Fabric infrastructure. Whether it needs to handle configuration, connect to external databases, perform business logic, and so on, is something we can choose depending on what we integrate. And this flexibility is what makes it so great for us, because we can stay at the micro level and create a binary that we deploy directly on the BLK247 device.
But we can also build it up to be a genuinely containerized cloud application that provides enterprise-level APIs to other solutions, like the IoT platform, the dashboards, or whatever other infrastructure needs to be integrated by the different stakeholders involved. At the end of the day, it externalizes APIs, performs its own business logic, potentially works with other components, and provides Hyperledger Fabric functionality, Fabric calls, on top of that. And we have written it in Golang. Ege?

Yeah, last but not least, Fabric matches our project requirements perfectly. We definitely needed a permissioned network. Fabric offers a modular architecture, hence maximizing the flexibility and resilience of our solution. Fabric follows enterprise paradigms, thus meeting modern business demands. Finality of transactions is not an issue, as they are final instantly. Fabric offers features like channels, transaction anonymity, data access scoping, and privacy regarding who can access which sets of data, so we do not have to implement these from scratch ourselves. All in all, it certainly meets our project's expectations with regard to overall performance. Nicolas, I think it's time for a little demo.

All right. Let us try a little demo, as Ege said, and we can stay in the context of the example we brought up before. Let us consider that we are handling some alarms in a train station, or a tube station, where we have two virtual BLK247 devices installed. In this example, we have the monitoring dashboard of a subway security operator in a subway station in central Munich. We see the two virtual devices that are there, we see a dashboard for the local police department, and at the end we see information from the dashboard of the authority that needs to audit how the whole process went. We have a demo control environment here from which we can raise alarms from the two simulated devices.
The first sensor can raise fencing alarms, fencing violations. That means it can raise alarms matching the scenario where a perimeter defined with a laser radar has been compromised, and therefore an alarm needs to be raised. The second one works with computer vision: it can recognize, for example, a piece of luggage that has been left unattended for too long in the train station, and it therefore raises an alarm that requires intervention. Let's try to play with it a bit.

So I'll try to create a fencing violation first with sensor number one. Bear with me. I think I didn't register the click; I'm going to refresh. All right, and we see that an alarm is generated. As I said, this is an early preview of the dashboards we are working with, for storytelling reasons. The call makes it a bit slower to load, but let us see. So typically now we should be receiving the information about the alarm on the operator's dashboard. The map loads a bit slowly; I think it's being affected by the fact that we have an in-browser call and are sharing the session, which makes it very slow. All right, it loads. Let's wait for it to render nicely. So we now have information about the alarm and its nature. For storytelling purposes we have skipped the specifics, but typically what you would have here is information specific to this alarm being passed to the operator's dashboard. Okay, here we see only a summary of that; we are skipping a lot of information. Oh, this is glitching over the connection, I think. All right. So we see who created the alarm. It's coming from the BLK device, and it is a kind of fencing violation. In a realistic environment we would have much more information there: in the background we would have geolocation information and more technical information coming directly from the device and communicated here. And the operator is expected to make some decision based on that.
For example, escalate the alarm, because she says, well, it's critical, we followed our protocol, we dispatched some people. And this alarm should now be coming to the police dashboard, because it has been escalated. And the police intercept it. They see that there is an alarm coming in, and they need to handle it. First of all, they acknowledge that an alarm has been communicated to them, and then they start following their own protocols. They may, for example, need to shut down traffic to the station and so on, and dispatch some emergency units to the spot. And when the units get there, for example, here in our simple story, we say, okay... I think that sharing the browser session is killing the performance a bit. All right. So the unit has arrived, and this concludes, let's say, the story. This is again a very simplistic story, only to make the demo. The process has been concluded, so we can make an audit of it.

So we can filter by dates here. All right, we are in today, and we see the events that were created today. We can choose the relevant timestamps and apply the filter, and then we see exactly what happened, when, and by whom. We see that an alarm was created: there was a fencing violation from this sensor, which is managed by Hexagon. We see that it was intercepted by the subway security operator, and that there was a decision to escalate it and start an emergency process. We see that the local police department acknowledged it and started following their own protocol, and that an external unit that was dispatched arrived on the spot. Obviously, this is a very, very simplistic workflow that we describe here, more for storytelling purposes, and we have skipped the internal information that comes from the sensors and so on.
So obviously we can also have other decisions, like, for example, the subway operator deciding not to escalate an alarm and handling it internally, and so on. And all of this becomes transparent in the end. This audit view comes directly from retrieving data from the ledger. So even if we need some analytics to be performed, we can always do that, because every stakeholder involved has its own cryptographic material that signs the transactions identifying its actions in this series of events that handled the alarm. So that was our small demo, and I'm happy to see if you have any questions. Please use the Q&A section to ask any questions you might have about what we just presented.

So, to maybe... Hey, I think we already have three questions, Nicolas. I don't see them in my dashboard. I can read them for you if you'd like. Yeah, please. Sure. Could you tell me the advantages you have in using Fabric instead of a shared database?

Yes. So if you had a shared database, it would make the auditing process a bit more complicated, let's say. The benefit that we have here is the decentralized nature and the fact that the actions of each stakeholder, actions that could lead to liability and could have big impact, are cryptographically bound to the entity that makes the decision. These entities are potentially liable for their actions. As I described before, we might be talking about major events concerning critical infrastructure. Take the example we brought: assume that the local security guards decided to handle the event, the alarm in that tunnel, internally, and this led to serious damage. It ended up with an accident on a train, with people's lives or health at stake, and so on. That is something major. So you really want the possibility to nail things down and exclude any contradicting claims.
The fact that you decentralize this, and can really bind the identity of the actor to the action he took and the context he had at hand on the ledger, is the benefit that enables the process. The benefit also counts for the actor himself. If he didn't have the ledger directly showing that he acted as he should, as the protocols dictate, he would have to prove it himself. Now he doesn't really have to, because what happened is cryptographically evident on the ledger. I hope this answers the question.

Okay, one more. How quick is the escalation to the police? Is there a latency?

So with Fabric we have the possibility to really define how fast we want to finalize transactions. I just spoke about the instant finality that we have. We don't have to wait for some generic timeframe that dictates how long it should take; we are flexible to define it ourselves when we address the use case. And we can also be confident that when we get a transaction through the system, when we get the block and everything, it really is a final block. Because of how Fabric works, it cannot be that we are on a fork of the chain or something like that, which could happen with a public Ethereum chain, for example. So finality is practically instant. What you saw in the demo took place in real time, so we can define, for example, that the decision to escalate an event, generating a transaction, having it finalized in a block, and transmitting it to the relevant stakeholder, takes one second. I hope this answers the question. I don't have access to the Q&A; unfortunately, it's not shown to me. I see one more. Doesn't posting the alarms to the ledger represent a privacy violation?

So we are talking about a controlled network, not a public network. We are talking about a permissioned network, so we can define exactly who has access to it.
One of the reasons we used Fabric is that, out of the box, the framework gives us tooling that really defines who has access to which data, and this differs from situation to situation. We can create private data collections that only the relevant entities have access to, or we can use channels and have responsible organizations that, let's say, transfer the sequence of events from one channel to another, where the channels group different stakeholders that need to be involved and have different kinds of access. So we are really flexible. And obviously, don't expect that we are writing any pictures of faces or anything like that onto the ledger. It has to do with the event; it has to do with, for example, a thermal sensor detecting a high temperature that could indicate a fire.

Okay, we have one more question. Who is running the nodes of the Fabric network? Are we running out of time? We are almost running out of time. Again, this depends on the scenario we are addressing. We don't have a generalized answer; we are not trying to provide a global ledger where everybody, or all the alarms in the world, need to be recorded and handled. We try to provide a bespoke private network with the relevant entities that are involved, and they should be able to provide peers. If this is not possible, then because of how modular Fabric is, we can come up with alternative scenarios. But we are running out of time to discuss all this. Thank you for joining our session. Please feel free to get in touch with us and organize additional calls to address further questions, or if you have any ideas for such applications or want to provide feedback; we are happy to do that. Thank you very much. Thank you, everyone.