All right, you're live. Vipin and Nima, if you want to kick it off, go ahead. Yes. Yeah, I think Nicos is getting set up. Yeah, go ahead, Nicos. OK, hello, and welcome to this presentation of the solution brief white paper that we have documented in the Telecom Special Interest Group in Hyperledger. I want to start with the white paper itself, just the beginning, and then I will jump into the slides to explain how we are actually doing what we are doing. What we are doing in this solution brief is designing and presenting a novel method of assessing SLAs and related agreements, and how we do this with Hyperledger Fabric, of course, and a dedicated trusted execution environment that runs on the chain. Briefly, the table of contents of the solution brief: in the beginning we look at the conventional SLAs, the standard SLA assessment, and how it is done today, and from a telecom perspective, the SLAs and how they are deployed for different vendors and services in the ecosystem. Later in the white paper, we present the architecture that we propose and then the unfolding of the operations. So let me jump to the slides. I believe I'm OK now, you can see. Briefly, how we're going to present the white paper: we're going to start with the standard SLA process and some details about it, first for general knowledge and then in more depth. And then we're going to jump to the solution and how it is implemented with Hyperledger Fabric and on-chain trusted execution environments. To note, we had a collaboration with LF Networking on this white paper, as we will mention at the close. So let me start by introducing what the problem actually is, what the issues with conventional SLAs are, and how we wanted to approach this and come up with a solution. First of all, this approach, of course, is relevant to the telecom industry.
It can be applied, or is meant to be applicable, to other kinds of service level agreements as well, and standardization could bring this to other industries through corresponding work. So, the motivation behind it: with state-of-the-art SLAs, how they are used, how they were born and are used in the industry, there is a lot of power, a monopoly really, on the provider side. There is not a lot of transparency around the agreement. This is an issue that stems from the centralization of the system, of course, and from the fact that the provider brings their own tools and methods for the whole process of SLA monitoring and computation. In this approach, we have identified certain pillars: the misunderstandings that happen around the terminology of service level agreements; the insufficient transparency, again coming from this kind of terminology, in how the parameters are defined in service level agreements and how users and adopters can actually understand them for their deployment case; and the credit system in state-of-the-art uses of SLAs, which is rather outdated. And of course we have the refunding and compensation processes that we will present in this design. So the solution brief is about an ecosystem set up on top of Hyperledger Fabric that utilizes trusted execution environments to bring privacy and transparency, which usually involve a trade-off: more transparency means less privacy, and more privacy means less transparency. That is the trade-off we handle here in the presentation of this solution brief. To note, we started the conceptualization of this work through an earlier publication with my team, and through that we decided to extend it and bring this idea to fruition through the white paper we are presenting today. So let's go a bit deeper into the SLA world.
What we have in SLA monitoring, in private and public cloud cases, is a few providers that hold a monopoly, like Google, Amazon, Microsoft, the big corporations of course. That is how it works in practice. But there also exist less famous providers, more like SMEs and similar smaller businesses. Each of these providers has its own methods and tools that it brings to an agreement, which is a contract between a provider and an adopter of a service or an infrastructure. And these tools that the providers bring actually control the whole process, which means less transparency for the user, the adopter. We could describe this as the gray area of the state of the art. Along with this, in this landscape, there are of course the metrics, which are the essence, the main thing everybody in this ecosystem is measuring, and which determine whether there is an SLA violation, a breach of the agreement. The definition of these metrics is what makes the SLAs as understandable as possible between the involved partners. However, monitoring an SLA and defining a metric is different from describing the individual parts of a metric, which are called parameters, and how they are combined into the defined metric. So the three questions we see here are the most important ones for properly monitoring an SLA: we want to know the parameters on the contract side, how they are computed, and whether the process actually matches what the agreement says from the provider's perspective, for example for infrastructure as a service. Going further into SLA assessment, the standard process for SLAs, we have here, for example, an ISO standardization document, which defines how these agreements are handled between the provider and the client, the infrastructure provider and the adopter.
And we're talking, of course, about specific kinds of descriptions. So, to define an SLA, in the context of this presentation, so that we understand what it is even if we have no idea about it: we have the metrics, which are the SLA guarantees. The SLA guarantees of an SLA are the measurements that the provider offers to us as promised. To construct these metrics, parameters are used; they actually define the metric and also specify how it should be computed. All of this, together with the rules I'm going to describe in a minute, is the agreement, the contract between the provider and the adopter. Coming to the rules very quickly: the rules are how the parameters of the metric are combined to compute the metric, and what kind of evaluation we have, for example failure or success of a metric depending on whether the rule fails or succeeds. So for this whole contract, the SLA, the service level agreement, to be deployed through the standard SLA process, we have, of course, the corresponding deployment of the provider's dedicated software. This kind of software is what we call the algorithm drivers, which have a layered configuration, as we're going to explain in a bit. The layers are the individual parts that sample, in the sense that they measure the metrics, and come up with a result. This, as we will see in the next explanations, is exactly what we want the chaincode to execute inside the Fabric network, on the blockchain. So this is the intelligence, if you will, the SLA intelligence that should happen for an agreement. This is how it happens in standard, conventional SLAs. The three layers are the sampling layer, the interval of computation, and the calculation.
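The guarantee, parameter, and rule structure just described can be sketched in code, purely as an illustration. None of these names come from the white paper; the `Metric` class and the availability example are assumptions for clarity.

```python
# Illustrative sketch (not from the talk's codebase): an SLA guarantee is a
# metric built from parameters plus a rule that decides success or failure.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Metric:
    name: str
    parameters: Dict[str, float]                   # raw inputs from monitoring
    compute: Callable[[Dict[str, float]], float]   # how the parameters combine
    rule: Callable[[float], bool]                  # evaluation: True = guarantee met

# Hypothetical guarantee: monthly availability must be at least 99.9%.
availability = Metric(
    name="availability",
    parameters={"uptime_minutes": 43170.0, "total_minutes": 43200.0},
    compute=lambda p: 100.0 * p["uptime_minutes"] / p["total_minutes"],
    rule=lambda value: value >= 99.9,
)

value = availability.compute(availability.parameters)
violated = not availability.rule(value)
```

With these numbers the computed availability is about 99.93%, so the rule succeeds and there is no violation; lowering `uptime_minutes` would flip the outcome.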
The sampling layer, OK, maybe it's fairly obvious what it does, so very quickly: for sampling, we have the rules and methods for gathering data from the machines and services the provider is offering, and collecting it in order to process it. The interval, as the word says, is about how often we do this process, and there are different kinds: the billing cycle, the SLA evaluation cycle. And lastly, the calculation, the metric calculation, is the actual computation: we have collected the data, we did it at a specific frequency, and now we do the computation and we have the SLA result. And here we have the parametric SLA, which is part of our design. Before going into that: the whole process is really about having the real data on one side and the contract data on the other, and deciding whether we end up with a violation or not. That is the logic here. And last but not least, as I said, the parametric SLA is a concept we have defined in our architecture, as we'll see in the next slide: an SLA that exists as a template, if you will, on the chain. In each agreement, we use this parametric SLA and the parameters are filled in. It is standardized in the sense that it follows the ISO standardization we saw in the previous slide, and it is deployed following the SLALOM specification model that you see on the right of the screen. This model defines the different kinds of SLAs with very specific terminology. For example, we have availability in terms of accessibility. This metric means not only that the client's service or infrastructure is available, but that the client can actually access it. That is accessibility. The difference from availability, for example, is that maybe there is access, but the service is not functional.
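The three-layer SLA intelligence just walked through, sampling, interval, and calculation, can be sketched as three small functions. The function names, the fixed log, and the cycle length are all illustrative assumptions, not the white paper's implementation.

```python
# Minimal sketch of the three-layer "SLA intelligence" described above.
def sampling_layer(raw_log):
    """Sampling: turn raw probe results into availability samples (1 = up, 0 = down)."""
    return [1 if entry == "up" else 0 for entry in raw_log]

def interval_layer(samples, cycle_length):
    """Interval: group samples into evaluation cycles (e.g. one billing cycle each)."""
    return [samples[i:i + cycle_length] for i in range(0, len(samples), cycle_length)]

def calculation_layer(cycle):
    """Calculation: compute the metric for one cycle as the percentage of 'up' samples."""
    return 100.0 * sum(cycle) / len(cycle)

log = ["up", "up", "down", "up", "up", "up", "up", "up"]
cycles = interval_layer(sampling_layer(log), cycle_length=4)
availability_per_cycle = [calculation_layer(c) for c in cycles]
# First cycle has 3 of 4 samples up; the second has all 4 up.
```

The point of separating the layers is the one made in the talk: the same collected data can be re-evaluated under different intervals and calculation rules without changing the sampling.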
So it doesn't work. These and other kinds of SLAs are defined in this model that we use, in line with the ISO standard, for the parametric SLA. So, jumping into the architecture, which we call trusted monitoring. In the center we see the parametric SLA, which is signed by the two involved parties. To go briefly over the involved parties: they agree on a contract. It comes online from the provider, the client pays the provider through the agreement payment, and they sign the contract, the parametric SLA, which is the SLA with its fields filled in and signed by both of them, and submit it to the ledger. The enclave operations in the trusted execution environment are the deployed, hardware-isolated environment, a system that exists on the chain. When we say hardware-isolated, it means that other processes cannot intrude on the computation that happens over a certain period on one specific processor. The hardware isolation brings computational privacy down to the hardware level, as the word says, and protects against eavesdropping, literally any kind of exposure that could let data leak outside of where we want to keep it. And of course we have dedicated chaincodes inside the trusted execution environment, as we will see. The tunneling chain is a connector through which we pass the computed measurements of an SLA from the outside world into the blockchain and the TEE. And the compensation is the refunding that, as we will see, happens automatically, precisely in contrast to contemporary methods. So again, a bit about the signed parametric SLA: as we said, the customer purchases the product, and this involves, of course, a blockchain transaction between the involved parties.
The agreement, which is the signed SLA, exists on the chain as proof of the contract between the parties. It contains, of course, all the information we have described: for example, that the contract follows the ISO standard, and the dedicated methods, functions, and rules that need to be used by the functions of the chaincode, that is, the algorithm drivers defined in the contract, elaborating on how to compute the SLA, and so on. Moving to the next step, the logic is that this is fed to the trusted execution environment, in the sense that it enters the TEE, which constantly audits for newly signed SLAs. In Hyperledger Fabric we have Fabric Private Chaincode, FPC, which is a specific deployment of a trusted execution environment on the Fabric network, where we have enclave chaincodes that execute with hardware isolation and whose data is guaranteed to stay private, even from the peers that run them. That is the logic of FPC. Let me rephrase: the chaincodes that run inside FPC, which is a deployment of a trusted execution environment on Hyperledger Fabric, isolate their data even from the peers that contain them. So we have total privacy of data collection, which is the monitoring, and of execution, as we will see in the enclave comparison, the next step. On the enclave monitoring: from one side we have the data from the SLA contract, which is what the two parties have signed and agreed, that is, again, as we described, the metrics, how the metrics are computed, which functions compute the metrics, which parameters the metrics have, et cetera. And from the other side, as you also see in the IPFS deployment, we use IPFS to store the logs of the machines, for example, which don't need to be on the chain anyway; we just use the hash to access them.
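As a rough illustration of the split just described, on-chain contract plus off-chain logs referenced by hash, here is a sketch of what such a signed parametric SLA record might hold. Every field name is hypothetical, and a real deployment would use Fabric transactions and IPFS content identifiers rather than a plain SHA-256 digest.

```python
# Sketch: an on-chain SLA record keeps only a content hash of the off-chain
# monitoring logs, so the bulky log data never has to live on the ledger.
import hashlib
import json

def content_address(payload: bytes) -> str:
    """Stand-in for an IPFS-style content address (real IPFS uses CIDs, not raw SHA-256)."""
    return hashlib.sha256(payload).hexdigest()

logs = b'{"2024-01-01T00:00Z": "up", "2024-01-01T00:05Z": "down"}'

sla_record = {
    "sla_id": "sla-001",                  # hypothetical identifiers
    "provider": "provider-A",
    "client": "client-B",
    "metric": "availability",
    "threshold": 99.9,                    # filled-in parameter of the template
    "log_hash": content_address(logs),    # only the hash goes on chain
    "signatures": ["sig-provider", "sig-client"],
}

serialized = json.dumps(sla_record, sort_keys=True)
```

Anyone holding the logs can recompute the hash and check it against the ledger, which is what makes the off-chain storage tamper-evident.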
This data, the logs, is the actual data, the real measurements of the metrics from the service or infrastructure, or whatever it is, that the provider has offered to the client and the client has purchased. So in this loop inside the monitoring, newly signed SLAs are monitored and then pushed to the enclave comparison for violation detection, as we're going to see now. The enclave monitoring feeds the data, which is the data from the contract and the data from actual real-world use, to the enclave comparison, which is another chaincode inside the FPC, the Fabric Private Chaincode, isolated from the peer where it lives, because the chaincode, as we know, is deployed inside a peer of the network. So it is again isolated from the peer, and it can validate state updates inside the FPC. The enclave comparison is nothing more than the actual running and execution of the logic of the contract's algorithm drivers, which are represented as functions inside an enclave chaincode, a chaincode inside the TEE. So again, we have the computation over the data, the SLA computation that happens in standard, conventional SLA setups, and now it happens inside the enclave chaincode in the on-chain TEE. The purpose, of course, is for this computation to be totally private and not able to be interrupted, dropped, or attacked. Actually, as a next step, we are in the process of launching state-of-the-art attacks against this, but that is out of the scope of this presentation; it is how we are trying to extend this white paper. So, I was talking about the enclave computation inside the trusted execution environment. The computation results are unbiased, in the sense that nobody can change what the result will be. So we have a fair computation of whether the contract was breached or not.
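The enclave comparison step can be pictured as a single function: the contract's promised guarantee on one side, the measured values from the real logs on the other, and a verdict as output. In the actual design this logic runs inside an FPC enclave chaincode; the plain-Python version below, with its made-up field names, is only a sketch of that logic.

```python
# Sketch of the enclave comparison: contract guarantee vs. measured reality.
def enclave_compare(contract, measurements):
    """Return a verdict: the measured metric value and whether the SLA was breached."""
    measured = 100.0 * sum(measurements) / len(measurements)
    return {
        "metric": contract["metric"],
        "promised": contract["threshold"],
        "measured": measured,
        "violated": measured < contract["threshold"],
    }

contract = {"metric": "availability", "threshold": 99.9}
# 990 "up" samples and 10 "down" samples, i.e. 99.0% measured availability.
verdict = enclave_compare(contract, [1] * 990 + [0] * 10)
```

Because 99.0% falls below the promised 99.9%, the verdict reports a violation; since the same code runs identically inside the enclave, neither party can bias this outcome.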
So the output of this step is a violation or not, which goes to the compensation. Sorry to interrupt, but your audio is a little choppy. From which point? The last minute or two. OK, I will just repeat the last thing. We have a fair result of the computation of the SLA metrics that happens inside the enclave comparison, because we are using the isolated hardware technology. And the important thing now is whether we have a violation or not, in order to continue the SLA intelligence workflow. I hope I wasn't choppy at the same point. So, at the end of the logic comes the compensation. What this means: at this point in the state of the art, there is no such thing as automatic compensation. Providers use credits that are added to the client's account, with various rules; for example, if the credit level goes above a certain point, then it makes sense to refund, these kinds of things. But here, since we're talking about tokens that can be minted within such an ecosystem, the refunding can happen at the point it should happen: when there is a violation, we have a scheme where we reward the client for the violation, in the sense that they didn't receive the promised service they are paying for. And of course, to keep the economic model valid, the provider is charged the same amount. This is the essence of the compensation. Here we come to the closing of the life cycle review. We call this self-assessment, in the sense that we have a contract, the service level agreement, between two equal parties, and we want to perform the assessment, the evaluation, of the SLA. In this sense, we deploy this scheme from the beginning of the architecture, and we have an ecosystem where the SLAs are self-assessed: when there is a violation, there is a refund, automatically.
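The automatic compensation step just described can be sketched in a few lines: on a violation verdict, a fixed amount moves from the provider's balance to the client's. The token names and the amount are illustrative assumptions; the talk does not fix a concrete token design.

```python
# Sketch of automatic compensation: on a violation, the client is refunded
# and the provider is charged the same amount, keeping the economic model balanced.
def compensate(balances, verdict, provider, client, amount):
    if verdict["violated"]:
        balances[provider] -= amount
        balances[client] += amount
    return balances

balances = {"provider-A": 1000, "client-B": 0}
verdict = {"violated": True}
balances = compensate(balances, verdict, "provider-A", "client-B", 50)
```

Contrast this with the credit systems mentioned above: here the transfer is triggered directly by the on-chain verdict, with no threshold of accumulated credits and no provider-controlled refund process.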
The second point: the provider cannot modify any data in their favor, and neither can the client, of course, because otherwise that would be possible. So we have a fair system with automatic compensation, where the client does not wait for the provider to either deliver the SLA evaluation or run the algorithms. In this sense we avoid the provider's monopoly. Early on in our work, we also ran some preliminary performance experiments. The thing is that since we are still developing this, we are also aiming to extract new results, so we have not included them in the white paper. But just to mention it for completeness: the most important finding is that the deviations are not critical when we deploy a specific contract that does this kind of calculation, and here we are talking about the enclave monitoring and the enclave comparison. This is because the code is standard, in the sense that it is deterministic and specific: a specific deployment of the code will end up with specific, consistent time measurements in performance. What I mean is that the deviation in running time between executions of these contracts will be negligible, since the code is submitted on the ledger and we are using smart contracts whose functions are standard. So this was the concept of the whole design we have described in this architecture. We had this collaboration between the Hyperledger Telecom Special Interest Group and LF Networking. And this is what I had for today, for the white paper solution brief presentation. Thank you for listening. Thanks, Nicos, for the great talk. This work on SLAs was really a great use case of smart contracts in general, and of Hyperledger especially, because the whole idea of SLAs fits very well with the automation that smart contracts can provide. So SLAs were really at the top of our agenda in the Telecom SIG as one of the main use cases, since the beginning, really.
And then Nicos came in; they had done some work on SLAs, and they completed this work with us in the Telecom SIG. So thanks, Nicos, for bringing in your use case. So, if there are any questions. Thank you for the cooperation and the help and the guidance as well. Thank you very much, Joel. So if there are any questions, I think both Nicos and I are available. Hi, this is Medalser. I think we would probably like to see a bit more of the front-end application that the service provider may have to create to join the blockchain, and how a service order from a customer moves through the MNO world, becomes an SLA, and moves onto the chain. Thank you. Yes, so far we have a Hyperledger lab that is built as a kind of add-on wallet. And now we are focusing on a new proposal that combines all the logic we have described with the interface for the user and the provider that you are mentioning. So from my side I would consider that future work, in order for it to be presentable. Thank you. I have a follow-on question. You mentioned a trusted execution environment. That is for the blockchain computation, right? It's not for any services that the MNO or service provider may be providing. Is that correct? Yes, as I understand it, the trusted execution environment is deployed on the ledger, on the Fabric network, and it is used to provide privacy at the node level. So we have hardware isolation of the transactions and the chaincode operations at the node level. So the trusted execution environment only validates the hardware itself, right? It does not provide any protection to the application workloads. Would you consider confidential computing in addition to the trusted execution environment? So, if I understood you correctly, you are comparing confidential computing to the trusted execution environment, right? Correct.
OK, from the perspective of confidential computing, I mean, the trusted execution environment could be described as such, if you don't mean something else I'm not aware of. So it definitely has a layer of confidentiality, in the sense of the privacy it brings to the table, if I understand correctly. There is a slight differentiation between a TEE and confidential computing: a TEE is just a validated, trusted hardware and firmware environment for your workloads, whereas confidential computing allows you to have encrypted data, encrypted data transfer within the computer system, and encrypted data execution within the CPU, which is different from a TEE. And are you considering...? So, we had some work from Xilinx; there used to be some people from Xilinx attending these calls. They had something like what you are describing: they are currently working on the verification step of smart contracts, to build them into their Xilinx FPGAs. I think that's mainly what you're talking about, right? Yeah, I was just curious whether the group has looked into not only validating the hardware platform where the blockchain will run, but also, since it's all encrypted, right, why not use confidential computing for your blockchain nodes to provide additional protection? Well, encryption is used. In that sense, if I'm not mistaken, it may qualify as confidential computing, because encryption is used in the operations we run in the trusted execution environment. There is encrypted data, for example, between the enclave chaincode, which transfers data into the enclave registry, and the validation component. So the communication between them is encrypted. In that sense, I would say there is a layer of confidentiality there, in the way you are putting it.
I mean, obviously there could always be more layers of security and encryption added, true hardware encryption and so on. But again, you have to trade off the application against the cost and so on that each of these additional layers of security will incur. And I assume here you were targeting around a thousand transactions per second, right? So adding more layers of security will probably have a significant impact on that as well. I agree. It's just that when you write a proposal for this kind of new technology adoption, you need to lay all the cards, costs and benefits, on the table, and then pick and suggest to your audience: based on our analysis, we think this is good enough. I'm not sure whether that has been done. Yeah, I mean, if you're interested, as an extension of this work we could start a new solution brief working specifically on that, which would be quite interesting as well: to see what can be done in terms of hardware while making sure it doesn't impact too much the other metrics, the other KPIs that we have to meet when we are talking about telecom, right? So yeah, that would definitely be interesting. I would personally be interested to see how that would actually impact, say, the latency of every transaction, or other factors, how much computing overhead it would add, and so on. So, do you expect that the solution will be implemented with the chain limited to one service provider and its customers, or do you see this solution running across many service providers and customers? For the deployment, yes, I would say the architecture is really about an ecosystem. So it is a Hyperledger Fabric network, and the involved users, that is, the provider, the adopter, and so on, log in to the network and use this functionality.
So we are proposing this from an ecosystem perspective, Web3 if you will, with all the providers participating along with their corresponding clientele. Just to add to that: at the end of the day, this is an effort to automate things, right? And automation shows its value at scale. So obviously, the more operators and the more parties involved in whatever agreement we are applying here, the higher the value of this automation, right? And the way the telecom ecosystem is moving, we are getting more open networks: you have the concept of Open RAN, you have programmable networks, and so on. These trends are all going toward systems and networks where you don't just have one vendor working with one operator; you have a much more complex ecosystem with multiple vendors working with multiple service providers, each connected to different network providers. So these trends are enlarging the scale and also adding to the complexity of these sorts of agreements. The agreements I think we are going to have in a couple of years are not going to be simple five-nines availability agreements; they are going to be far more complex than that, and more difficult to handle manually. So these kinds of automated, distributed, smart-contract-based SLAs are going to be more and more important. Thank you. I think something that needs to be considered is whether an MNO can implement the chain for itself and its customers, or whether the chain has to be open for all MNOs to join and use. Because so far, my experience with service providers has been that they like to control their responsibility and liability domain: their OAM and other applications manage all their resources, and they are not responsible for anything beyond that, with DNS being the one shared service.
There haven't been many shared services in a service provider environment. Yeah, I mean, for the implementation, I can see how it could be difficult to convince different MNOs to share their usage information, performance information, and so on with other MNOs, right? But at the end of the day, there are lots of technologies now being developed within Hyperledger and elsewhere for cross-chain interaction without having to reveal all the information. So these are hopeful initiatives that will hopefully improve the situation. Thank you. Appreciate it. Thanks, thanks. And we generally call this a private permissioned chain. So of course the whole network is private, and we are also adding permissions, like separate contracts for zero-knowledge kinds of things. One more answer: for example, if we have different MNOs, we could have a multi-domain orchestrator, where every MNO shares some contracts and we run this blockchain over that multi-domain orchestrator. So that might also be one solution for this. Yeah, so zero-knowledge proofs could be one of the solutions. But that's really a design question. This is something the operators should answer, to see what level of privacy they need when it comes to SLAs. Any more questions from the audience? So, I have shared the two papers that were mentioned here: I've shared the solution brief, the link is there, and obviously it's open access, and also the MDPI paper from Nicos that was mentioned in this talk, which is also open access, so you can just download and use it. So yeah, thanks everyone for joining, and thanks Nicos for bringing your use case to us and for giving this couple of talks about the work, which I hope is going to be very interesting for people. Again, our group is open to new ideas, open to new applications of Hyperledger.
So even if you have a working idea, or a basic idea that you've worked on a little in your own institution, you can bring it in; we have experts from different areas, mainly Hyperledger, and we can work on it together and produce collaborative works such as the solution brief we had with Nicos. So feel free to join the mailing list; you will get to know about our bi-weekly calls, and you can join and collaborate with us. So thanks everyone for attending, and have a nice day or evening, wherever you are. Thank you everyone, thank you everybody. Thanks, thanks everyone. Thank you. Thanks, great presentation, a lot of promise in there, talk to you soon. Thank you. Thanks Nicos, thanks everyone. Yeah, thanks, goodbye.