Yes, we're now live. Vipin and Arneema, if you want to kick it off and then introduce, please feel free to do so. OK, hello. I'm Nikos Kapsoulis, software engineer at INNOV-ACTS, Nicosia, Cyprus. And I'm here with my colleagues: Alexandros Psychas, research engineer at the National Technical University of Athens, Greece, and Ariana Bolivio, project manager at INNOV-ACTS and lecturer at the School of Business, University of Nicosia, Cyprus. Today we're going to talk about establishing service level agreement self-assessment on blockchain. We will first start with a project called Pledger and how during the project we are trying to approach matters like this on blockchain networks through Hyperledger. Ariana is going to give us more insights about the project and present this part. In the next part of the presentation, Alexandros, along with me, will provide further insights and details on the research work that we have been doing, called Reinforcing SLA Consensus on Blockchain, and how we approach the matter of self-assessment as a ledger function in a decentralized ecosystem, independent of the dominant providers that have their own centralized tools. So without further ado, I give the floor to Ariana to speak about the Pledger project. Ariana. Ariana, can you unmute yourself please? Yes, please. Sorry about that, I was speaking to myself earlier. So I was saying that it's a pleasure to be with you today. I'm going to give you a very brief overview of what the Pledger project is about, and then Nikos and Alexandros will guide you into the research part and provide more specific details. So Pledger is a project that started in December 2019. It's a project funded by the European Commission; we actually received 4.7 million euros in funding. The project aims to provide an architectural paradigm and a set of software tools that will guide the next generation of edge computing.
We're already working on this, as we're in the third year of the project now. If you move to the next slide, the project has partners from six different countries, including large software ventures such as Intrasoft, Atos, and Engineering. INNOV-ACTS, an SME IT consultancy, is also a partner, along with research organizations such as ICCS, and also some pilot sites that I'm going to talk about in the coming slides. So what are the vision and the objectives of this project? We have started to work on addressing how to resolve problems of the current and future massive use of edge computing, especially when it comes to large IoT infrastructures. We're working from different sides: the provider side and the customer side. From the provider side, we aim to enhance the stability, the performance, and the effectiveness of the provider's infrastructure. From the adopter's or customer's side, we're aiming to empower the customers to understand the nature of their apps and to better balance the cost and the performance of their infrastructure. If we look into the specific challenges that the project is addressing at the moment, starting from the very first, we are focusing on automated deployment and adaptation, monitoring, benchmarking, offloading, and trade-offs. We're also focusing on improving quality of service, quality of experience, smart contracts in SLAs, privacy, trust, security, and also network slicing when it comes to 5G networks. Today's discussion is going to be on the smart contracts in SLAs and on the privacy, trust, and security part of the project. A few words about the architecture: we have management subsystems, evaluation subsystems, and support subsystems. From this part of the architecture, today we're focusing on the subsystem relevant to SLAs and on the subsystem relevant to blockchain.
As I mentioned earlier, the project includes three use cases through which we're trying to test our implementations. The first use case is in the manufacturing industry, about data mining on the edge. The second one is about road infrastructure, and especially about how to use edge infrastructure in order to enhance the safety of vulnerable road users. And the third is about mixed reality apps on the edge and how to address issues associated with that industry. So this is more or less what the Pledger project is about. Let me now give the floor to Alexandros, who is going to guide you into the research part included in the research paper that has been published. Alexandros, you're muted. Yes, I know. Don't worry. So, getting into specifics now, as far as reinforcing SLA consensus through blockchain is concerned. For the proposed solution that we have, we have also published the corresponding article in the MDPI Computers journal. The area of interest is basically public cloud Infrastructure as a Service. We have to say public because in the private sector we have different solutions, and the negotiations between the provider and the adopter of the solution are much more flexible. So it's important to understand that the landscape we're going to describe is about public cloud offerings: for instance, Amazon, Azure, Google Compute, et cetera. The motivation behind our work is based on three main pillars. Firstly, there is a limited amount of SLA metrics. For those who are not familiar with the term SLA, though I doubt there are many here, this is a contract between the adopter and the provider of a service. SLAs are everywhere; an SLA is something that every type of service has. As far as cloud computing is concerned, we have a limited amount of SLA metrics that we can negotiate and through which we can understand the service.
Also, a very important thing is that there are no tools to evaluate the SLAs on the adopter side. Basically, every IaaS public cloud provider has a lot of tools. For instance, Amazon has CloudWatch, and Azure has its own tools that are very good at quality of service assessment and the monitoring of the infrastructure. There is nothing better than that, and no research development can even reach the level of maturity that these tools have. But as far as SLA assessment is concerned, which means assessing not the quality of service but the guarantees that the provider gives to the adopter of a cloud infrastructure, there is no formal or standardized way that these providers have incorporated in their solutions. And finally, the last part of the motivation is that even if there were a standardized way to measure the SLA's performance and the guarantees, there is no formal way for the parties' compensation. This means that even if you go now to the public SLA of Amazon, you will see that what they say is that if you provide the log files of unavailability of the specific machines, then you will be eligible for compensation. So there is no standard procedure, apart from the legal document that binds them, in order to compensate in case of SLA violations. So what we propose here is a solution with two pillars. It's basically an SLA intelligence solution with native operational transparency and privacy, leveraging the adoption of trusted execution environments in permissioned blockchains. The first part is standardizing the way that we can evaluate the SLA. And the second part is to adapt this solution into a mechanism that is trusted by both peers, which are the provider and the consumer of the service. So, moving forward, let's understand the problem better. As I said before, as far as quality of service and performance evaluation are concerned, everything is set there. We have a lot of tools.
We have everything. But if we want to, let's say, self-assess our service and see if the provider is true to its word and its guarantees, even though we can adapt and make our own measurements, there are some questions that still remain unanswered about the way we can assess it. As far as the document of the SLA is concerned, everything is described there, but of course it's a document, and it's not machine-readable and parameterized in a way that lets us create a tool to actually assess the SLA. These questions are: which SLA parameters are contained in an SLA? For instance, someone has a network problem; if the network is not guaranteed in the SLA, then there is no reason to monitor it as far as the SLA is concerned. Also, how are these parameters computed? Although we have a very formal way of understanding a lot of quality of service parameters, either network ones or compute-intensive ones like CPU utilization and so on, in order to measure them in a formal way you need to have a very specific description of how you are going to do it. And even if we do that, we have to understand whether these parameters are all aligned with the definitions that the infrastructure-as-a-service provider gives. In order to tackle this problem, let's move forward. What happened is that in previous European projects, apart from Pledger, we tried to standardize the way we can describe, in a machine-readable way, the SLAs, which are legal documents. As far as the parametrization of the SLA is concerned, we have to take into account three main things. The first is the metrics that are described in the SLA guarantees, which are more or less the essential information for achieving the appropriate monitoring and measurements that you need in order to assess the SLA. Furthermore, we have the parameters, which are basically a subclass of the metrics.
They are the way to describe a specific metric in more detail in order for it to be measured. And finally, we have a set of rules. Even though we can assess an SLA, there are specific rules that might get in the way of evaluating the SLA guarantees properly. For instance, take one of the most common SLA metrics, which is availability. Even though we can have a formal way of measuring availability using a ping service, meaning that we can ping a specific machine of our infrastructure and see that it is unavailable, what Amazon says in the SLA is that the machine counts as unavailable only if it is unavailable in two availability zones. This means that even though this machine might be unavailable, the way you should measure it is that you have to divert the traffic, and the ping method that you use has to measure against the different infrastructure they provide you. So there is a lot of fine print and a lot of small details that can render the self-assessment of the SLA worthless. So it's very important to be able to standardize the way in which the SLA is measured. In that effort, we have contributed to the ISO standard that you see on your screen, and taking that into account, we were able to move forward and provide not only a standardized way to monitor SLAs, but also a vessel for doing it in a blockchain environment, and more specifically in a trusted execution environment. So, to give you a little more detail on this procedure: in order to create the parametric SLA, we have three main steps. First of all, as I said before, we have the sampling methods. What the sampling method does, as part of the standardization, is define the rules for measuring these metrics. It is also important in this procedure to define the success or failure of a measurement.
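As an editorial illustration, the sampling idea just described could be sketched in Python as below; the sample fields and the rule that a probe only counts when the monitor itself had connectivity are illustrative assumptions, not the project's actual schema:

```python
# Hypothetical sketch of a sampling method: classify ping probes into
# valid samples (success/failure) and discarded ones. A probe is only a
# valid sample if the monitoring side itself had connectivity.

def classify_samples(probes):
    """Each probe is a dict: {'reachable': bool, 'monitor_online': bool}."""
    successes, failures, discarded = 0, 0, 0
    for p in probes:
        if not p["monitor_online"]:
            discarded += 1   # our own outage: not a valid sample
        elif p["reachable"]:
            successes += 1   # target answered the ping
        else:
            failures += 1    # valid sample, target was down
    return successes, failures, discarded

probes = [
    {"reachable": True,  "monitor_online": True},
    {"reachable": False, "monitor_online": False},  # discarded
    {"reachable": False, "monitor_online": True},   # real failure
]
print(classify_samples(probes))  # -> (1, 1, 1)
```

The point of the discard branch is exactly the fine print discussed above: a failed probe caused by the assessor's own connectivity must not count against the provider.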
Taking again as an example the operation of ping, as you can also see in the JSON schema that we have provided: even though we might get a ping response saying that something is unavailable, this message means something different if the provider is not accessible versus if we don't have internet in the infrastructure that we use to self-assess the SLA. A measurement might be bad due to a lot of different circumstances, so it's important to define when a sample is considered a failure or a success as far as the measurement is concerned. Another important part is the period and the interval of computation. This is a very basic parameter, which implies that there is a specific billing cycle: the SLA guarantees might be for a month or might be for a year. So in order to be able to compute the SLA, you have to take into account the specific billing cycle. Finally, we have the most important part, which is the metric calculation. Basically, what we do is outline the high-level metric. For instance, one of the most commonly used ones, as I said before, is availability. In this sense, as you can see here, we have the unit of measurement and the specific parameter, which is basically the limit beyond which the SLA is considered violated. This means that if the availability is less than 99.95% over one month, we have a violation. In this way, with this formal procedure, we were able to create the parametric SLA. It's important to mention that this effort arose from the SLALOM project and matured through the CloudPerfect project. Now we want to take it a step further and formalize the compensation methods through blockchain. So that is all from my side, and I give the floor to Nikos to explain to you exactly how the trusted execution environment and the blockchain solution are composed. OK, thank you, Alex. Now, for the proposed system here:
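The metric-calculation step just described can be sketched roughly as follows; the 99.95% monthly threshold and the sample counts are illustrative only, not taken from any specific provider's SLA:

```python
# Hypothetical sketch of the metric calculation: compute availability
# over a billing cycle from valid samples and compare it to the SLA limit.

def availability(successes, failures):
    total = successes + failures
    return 100.0 if total == 0 else 100.0 * successes / total

def is_violated(successes, failures, threshold_pct=99.95):
    """True if availability over the billing cycle falls below the limit."""
    return availability(successes, failures) < threshold_pct

# e.g. one ping per minute over a 30-day billing cycle = 43200 samples
print(is_violated(successes=43100, failures=100))  # -> True  (~99.77%)
print(is_violated(successes=43195, failures=5))    # -> False (~99.99%)
```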
We have this holistic architecture, and each and every component has its specific role in the whole workflow of the procedure. What we are trying to achieve here is the so-called SLA trusted monitoring. In this sense, the SLA monitoring procedures and computations happen inside isolated smart contracts, the so-called enclaved structures, and this happens with the use of Private Chaincode, which is an enabler in the Hyperledger Fabric blockchain. The reason we want to use this technology is that when we say we isolate these SLA computations and all the SLA intelligence, we mean that the computations are enclaved and only the inputs and the results are available to all the participant blockchain entities. So the computations happen in specific nodes, isolated from the view of all the other entities of the blockchain. As we see here in the architecture, what happens from the beginning is that a provider and a client, which here take the roles of infrastructure as a service and software as a service, come up with the agreement payment, a contract between them. This agreement is signed by both and contains the details of the SLA contract that is agreed between them. So the parametric SLA is signed by both, and it is offered to the trusted execution environment inside the whole permissioned network. I remind you here that all this happens on chain, of course. In the continuation of the workflow inside the trusted execution environment, there is the continuous procedure of the enclave computations, which is the monitoring and the comparison. As a new agreement comes in, in the form of a signed parametric SLA, it is added to the portfolio and is monitored along with the other agreements.
What this means is that the actual logs that come from the outside, from the cloud logger, and are stored on the external file system, on IPFS, the InterPlanetary File System, together with the rules of the SLA that is already signed by the two parties, contribute to the SLA monitoring, which happens here inside the trusted execution environment; it's called the Enclave Monitoring. In the continuation of this workflow we have the Enclave Comparison, where these metrics come into use and all the rules of the SLAs and the algorithm drivers are executed in the isolated environment. The decision of an SLA violation or not is made at this point. The next part is the refund and compensation mechanism, which continues and charges the provider and compensates the client for the violation. So this is a holistic view of the workflow, and we now go into further details on each part. First, we have the agreement payment and the parametric SLA. The scenario is that we have the two parties, the provider and the customer. Here, as we are talking about the public cloud SLA, we have the infrastructure-as-a-service provider, and we have their clientele, which can be a software-as-a-service client. This agreement payment is actually a blockchain transaction that happens between the involved parties, and it involves the signatures, which are the proof that the two parties consent to this agreement. The SLA contract itself is the proof on the ledger that this is an agreed contract between these two parties. And of course we have the wallet addresses, the SLA metrics details, and the algorithm drivers, which are, as mentioned already, the way the SLA computation happens in the end and the regulations through which it happens. All this is contained in this agreement between the provider and the client.
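A rough sketch of what such an agreement transaction could carry: both parties sign a digest of the parametric SLA, and the signed record is what goes on the ledger. The field names, and the HMAC over demo keys standing in for real wallet signatures, are illustrative assumptions:

```python
import hashlib, hmac, json

# Hypothetical sketch of the agreement payment record.

def sla_digest(parametric_sla: dict) -> str:
    blob = json.dumps(parametric_sla, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def sign(key: bytes, digest: str) -> str:
    # stand-in for a real wallet signature
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

parametric_sla = {
    "metric": "availability",
    "threshold_pct": 99.95,            # illustrative limit
    "billing_cycle": "monthly",
    "rules": ["two-availability-zones"],
}

digest = sla_digest(parametric_sla)
agreement = {
    "sla_digest": digest,
    "provider_sig": sign(b"provider-demo-key", digest),
    "client_sig": sign(b"client-demo-key", digest),
}

# anyone holding the keys can re-verify both consents against one digest
assert agreement["provider_sig"] == sign(b"provider-demo-key", digest)
```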
It is packaged in a parametric SLA, which is signed, and, as mentioned in the holistic architecture, this is forwarded to the trusted execution environment. OK. The Enclave Monitoring takes in the new signed parametric SLA. As mentioned, there is an enclave portfolio of the SLAs that are monitored at the moment, and the new one also enters the monitoring process. So the Enclave Monitoring combines the rules from the SLA that is already agreed with the metrics, the SLA logs, that come from the outside, from the cloud logger. The cloud logger stores these files outside the blockchain in order to move some storage off chain, rather than use on-chain storage. Through the integrator enabler, which does the tunneling, this integration happens between the trusted execution environment and the logger. The goal of this procedure is to collect everything and prepare it for the calculations in the Enclave Comparison. As we mentioned, the trusted execution environment is implemented with Fabric Private Chaincode, which is the dedicated enabler. And here we come to the Enclave Comparison, which, as mentioned, calculates the SLA. It is also a smart contract structure, like the Enclave Monitoring, that is coded inside the trusted execution environment in Fabric Private Chaincode. The calculations and computations that happen inside are isolated from the outside and from on-chain activity, from other providers or adopters that are on the chain. At the next step, the Enclave Comparison performs the SLA computation, and what happens is that we have an outcome about the SLA violations: whether we have a violation or we don't. The Enclave Comparison takes into consideration the agreement metrics and the real metrics, as they are handed over from the monitoring.
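Since IPFS addresses content by its hash, one plausible way for the enclave to trust the off-chain logs it ingests is to check each fetched log against the reference recorded on chain. This is an editorial sketch of that idea, with a plain SHA-256 digest standing in for a real IPFS CID:

```python
import hashlib, json

# Hypothetical sketch of feeding off-chain logs into the enclave monitoring:
# the fetched log is only used if it matches its on-chain content reference.

def content_address(log_bytes: bytes) -> str:
    return hashlib.sha256(log_bytes).hexdigest()

def ingest_log(fetched: bytes, onchain_ref: str) -> dict:
    """Return parsed metrics only if the log matches its reference."""
    if content_address(fetched) != onchain_ref:
        raise ValueError("log does not match its on-chain reference")
    return json.loads(fetched)

log = json.dumps({"successes": 43100, "failures": 100}).encode()
ref = content_address(log)
print(ingest_log(log, ref))  # -> {'successes': 43100, 'failures': 100}
```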
It also takes the other rules and regulations, together with the algorithm drivers, and this computation happens at this point; it is the second pillar of the trusted execution environment. At this point we mention again that, because this procedure happens isolated and private, with innate privacy, it is fair to say that we have a system where all participants are offered a fair service: the regulations and the rules are approved in the beginning, and afterwards the result is revealed to them without any intermediary in the computations. After the usage of the trusted execution environment and the isolated, enclaved computations, we have the outcome of a violation or not. And here we have a simple smart contract, a chaincode, that runs the respective code in order to compensate the violation. The compensation scheme is that the provider should be charged for the violation, and the client should have a kind of compensation or refund, which is the simpler approach here; as we will see in the future work, we want to add some more functionalities. OK. The whole system has been measured in terms of performance when computing a violation or not. So we have the SLA violation assertion when we have a violation. What is observed is that, in general, more computation is done, as expected, compared to the case of not having any violation. But the thing is that we have a high transaction throughput because of the underlying platform, and we also have a transaction pipeline, in the sense that we have the different stages of the transaction, and this continuous calculation and execution of the trusted execution environment chaincodes can be pipelined and made faster.
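The simple refund chaincode mentioned above could look roughly like this; the flat 10% service credit and the wallet balances are illustrative assumptions, not the project's actual compensation scheme:

```python
# Hypothetical sketch of the refund chaincode logic: on a violation,
# charge the provider and credit the client by a service-credit amount.

def settle(balances, provider, client, monthly_fee, violated, credit_rate=0.10):
    """Apply a simple compensation transfer when the SLA was violated."""
    if violated:
        refund = round(monthly_fee * credit_rate, 2)
        balances[provider] -= refund
        balances[client] += refund
    return balances

balances = {"iaas-provider": 1000.0, "saas-client": 50.0}
print(settle(balances, "iaas-provider", "saas-client",
             monthly_fee=200.0, violated=True))
# -> {'iaas-provider': 980.0, 'saas-client': 70.0}
```

Real SLAs often tier the credit by how far availability fell below the guarantee, which is one of the extra functionalities the future work could cover.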
Apart from this, we also have the smart contract code, which is immutable in the sense that it is programmed once and deployed. In measuring the system, we observe that the execution times do not show very big deviations from certain values in seconds. Because of all the aforementioned reasons, this means that the system can scale to more users. Again, because of the immutability of the code, we know that asserting a violation takes a certain execution time, and there won't be a large deviation from this time across different violations, as the code is on the blockchain and is immutable. The scalability also holds for the network entities, which means the number of users, but also for the transactions, as we're having thousands per second on Hyperledger Fabric. Apart from this, let's look a bit at the future work and conclude here. For our future work, as mentioned, we want to change some things in the refund chaincode. The SLA product scoring is the idea of having an SLA that has a history on the chain, so that clients can choose whether or not to buy it from a provider. Technically, we want to upgrade to the next Fabric Private Chaincode version, and we're having discussions about profiling chaincodes on this schema here as part of the future work. Here we also have the link for the research work that we presented. Thank you for your attention. You are muted. We're open to... I'm sorry. So thanks, Nikos, Ariana, and Alexandros. Participants can ask questions either in the Zoom chat or on YouTube. We have one question from Peter. He's asking: is the source code of the contract, the Fabric Private Chaincode, available online, or is it proprietary? Nikos, you are muted. Can you unmute please?
Yeah, sorry. I think you can hear me now. I was saying that, yes, for now it is not something that is provided open source, but it is in our plans to consider something like this, as we had previous discussions on this channel here. So I have one more question for you: can you explain this trusted execution environment? Can you elaborate more on what it does and what the objective of using it is? Yes, I can go to the slide, the one with the holistic architecture, in order to understand the purpose. The thing about the trusted execution environment is the following. Let's think that we have this ecosystem here, which is a permissioned network, as we see, with different providers and clients that participate. The way we want to do this SLA computation, and not only the computation but the whole SLA intelligence workflow, is to hide it, so that it has its own privacy inside the blockchain network. This means that we have a common, shared ledger that everyone submits to, but we want the SLA intelligence, whose two basic pillars are the SLA monitoring and the computation, to happen not on the common ledger but only in a specific hardware node, and only its result to be propagated. The reason behind this is the privacy, the removal of intermediaries from these business relations, and a fair result for the client. Consider an example where a provider is offering a mechanism, a resource, an asset, a product, and the same provider has its own rules for how to monitor this product. This is where this system comes in and establishes a fair mechanism for conducting these business relations. OK. And also, could you explain more about the public IaaS?
I mean, is it always for the public cloud? In every diagram I can see this IaaS; there's a contract or agreement between a SaaS client and IaaS. So is it a general SLA mechanism which can be done on blockchain, or is it always between an IaaS, some public cloud operator? Can you say a few words about that? Of course. Basically, as far as the SLA is concerned, almost all public cloud providers, as far as the public offerings go, have more or less the same type of SLAs. But of course there are differentiations. That's why you cannot have, let's say, one type of mechanism in the trusted execution environment in order to assess the SLA; you need more than one. Take the example that I gave before: even though the definition of availability is the same as far as AWS and Azure are concerned, they have different definitions of the availability zones. So we have one definition from AWS and one way of constructing their infrastructure, and we have another way of doing that in Azure. Also, we might have differentiations in the boundary period. This means that when you want to assess the SLA and the availability, for instance in Amazon, a machine is considered unavailable if you have 60 consecutive seconds of unavailability, while, as far as the SLA is strictly concerned, Google Compute might need 30 seconds of unavailability to consider it unavailable. So we can see that the different parameters create different ways of measuring. The code base with which we assess the SLA for a specific provider cannot usually be used for another one. I don't know if I've answered your question. Yeah, that's clear when you say, OK, let's compare AWS with GCP or, let's say, with Azure. So are you considering things in terms of hybrid cloud as well? For example, let's say in the future, once we have more interoperability between different clouds.
In that case, how does this SLA thing work? What do you think? This is just an opinion, right? Yeah, this is really difficult. If we're talking about hybrid cloud, about IoT and edge computing, if we take policies into consideration... computing something is one thing; having the business model and the actual compensation methods for the SLA assessment is another thing. Basically, if there were a formal way for hybrid cloud solutions to assess the SLA or give SLA guarantees, then it could happen. But a hybrid cloud means that you have more than one IaaS provider, so you have to assess the SLAs independently, based on the contract that you have signed with each of them. So it's more on the standards side as well, I think. And because I don't know what the standards related to SLAs currently are, I mean, how these cloud providers provide SLAs, maybe in the future we can have one standard for SLAs. I think the ISO one, that's what you showed, is one kind of SLA standard, right? Also, there's a term you used, SLALOM. What is that? I saw it in one of your slides; what is this SLALOM? Is it some kind of standard or what? No, no, OK, I talked very briefly about that. This is a European project, an initiative actually, whose purpose was to standardize the way of assessing SLAs: to create the standardization, the formal schemas, JSON schemas, and all the information needed in order to assess SLAs and also evaluate different SLA providers on the strictness of their SLAs. For instance, as I said before about the boundary period, Amazon considers a machine unavailable after 60 seconds and Google Compute after 30 seconds. This means that the SLA of Google Compute is more strict, let's say, in that sense. OK. And who wrote these standards? Is it the ITU or the IETF? What is the standards body behind it?
Is there any standards body behind this SLALOM? If you go to the previous slide, you will have the exact answer. OK. Previous slide, Nikos. So it is an ISO standard. Oh, it's ISO, I see. OK. Yes, this is the exact standard that we have contributed to. That's good, because we also started an SLA subgroup in the telecom SIG, and we got a lot of questions in terms of standards, because if we don't have standards regarding this, then it's really difficult to write smart contracts for SLAs. So thank you for sharing this. And I think, David, do we have more questions on YouTube? Can you please check? Yeah, let me log in, one second. Meanwhile, one quick question: any particular reason why you chose IPFS for distributed storage? You mean IPFS as a brand, or as decentralized storage outside the blockchain? Yeah, outside the blockchain; maybe you could use some side chain or something. Just asking: any particular reason for choosing it? Yes. The thing initially was to have a simple way to store the SLA logs and not have to think about the storage capacity they would occupy. That was the initial reason, but the side chain option could maybe be another way to do it, I don't know. The focus was mainly on the permissioned network, of course, as you saw. Thank you. I wanted to add, on the previous discussion that you were having about the SLAs, that it is true that we have a kind of vision of how this architecture that we have created could serve SLAs in general, as you proposed in your question before. So this could be an added value, I think. OK, thanks, guys, thanks for sharing this. And we hope to see you soon in our SLA subgroup; we have a special subgroup for these SLA matters.
We are planning to write a white paper as well regarding SLAs and blockchain with LFN, the Linux Foundation Networking. I hope you guys will join this. And for those who are new to this group, David already shared a link where you can join the wiki from Hyperledger, and we have bi-weekly calls every Thursday at the same time. And David, if there are no more questions... I see a couple of questions on YouTube, but I don't think they're necessarily related to SLAs. Somebody's asking how blockchain can be used to secure a 5G network, and somebody else is asking how to create a network using Hyperledger Fabric. I sent the last questioner the link to the Fabric Getting Started guide, but the other one about securing the 5G network seems like maybe it's not directly related, and maybe the 5G call would have an answer to that. We'll have another call about 5G. Yeah, that's the next call. Yeah, because in the slides you also mentioned 5G network slicing, right? Maybe in the future. Yes. Yes, the 5G networking is a different part of the project overall. It's something that regards the orchestration of cloud and edge computing. We don't particularly use it in the blockchain matters. So I can see the confusion, but as far as the blockchain is concerned, we don't use 5G yet. OK, so if there are no more questions, then we can leave. Thanks, guys, thanks for joining us. Thanks, everyone. And as Vipin said, hopefully we'll see you in the future. Thank you very much. Great presentation. Thank you, everyone. Thanks, everyone. Thank you. Bye-bye. Thank you, bye-bye.