Kia ora tātou, nau mai, haere mai. Greetings to you all and welcome to this EHF live session with fellow Mayak Malik. This topic is about decentralized identity management systems designed for distributed energy resources, often referred to as DERs. In this session, Mayak will discuss the algorithmic underpinnings of KeyMaker and KeyChecker, a set of smart contracts, protocols and helper libraries that enable social verification of a DER's identity. Mayak has a software engineering and technology development background with experience in investment banking, wholesale finance, trading algorithms and financial model development. He's currently the chief data officer and blockchain lead for Grid Integration Systems and Mobility. A little bit about EHF, for those joining one of our conversations for the first time: the Edmund Hillary Fellowship is a collective of over 500 entrepreneurs, scientists, storytellers, creatives and investors, change makers who want to make an impact globally from Aotearoa, New Zealand. These live sessions are informal conversations between the fellows and the New Zealand ecosystem, so you can get to know them and what they bring to New Zealand, and they're often the start of an ongoing conversation through to action. We'll be having a 45 minute presentation, then moving into Q&A and discussion with you all during this next 60 minutes. Just a reminder that this conversation is recorded, and you can either put your question in the chat as we go or actually put your hand up. Mayak has said he doesn't mind taking questions during the presentation. So over to you. Thanks Michelle for that introduction. Thanks everyone for joining, or if you're watching this later as a recording, thanks for watching. Let me quickly share my screen here. Michelle, do you see the presentation mode? Perfect. Thank you. Sounds great. So we're going to talk about SecureID. It's a project that was funded by the US Department of Energy.
It's a project around decentralized identity management of grid assets. Before we get into the details of the project, a little bit about me. I work at SLAC National Accelerator Laboratory. It's one of the 10 Office of Science labs that are owned by the US Department of Energy, and it in particular is operated by Stanford University, so I have one leg in academia and one leg in energy research in the US. SLAC does a lot of things, a lot of research from pure and basic sciences to applied sciences, and we've won a few Nobel prizes over the last 60 years. My work in particular resides in the Energy Sciences directorate, in a group called Grid Integration Systems and Mobility. So our agenda today: we're going to look at the evolution of the grid and where the grid is headed right now. We're then going to talk about some background and the current state of identity management systems that are being deployed on grid assets, and look at the pros and cons of the approaches they are taking. We'll then get into the core of the presentation today, which is SecureID, and discuss the algorithmic underpinnings of what makes it work. And finally we'll look at some of the applications of the technology we've developed and what future improvements can be made. One other thing before I get into the slides: a lot of what I'm about to share has a very US-centric, or California-centric, view. The trends generally apply all over the world, but the numbers that I'm going to be showing here are very specific to California. So let's talk about the first thing, the evolution of the electricity grid. We are seeing an onslaught of DERs and we are seeing deep electrification of transportation. To see where the grid is headed, we first need to look at how the grid was designed in the past.
And in the past the grid was designed in a layered topology, where all of the generation was happening in a generation layer, so you see all your power plants: nuclear power plants, hydroelectric stations. Once the energy was generated, it was then transmitted through the transmission network, which is the second layer, and finally delivered to the distribution network, from where it reached the consumers. So when you and I switch on a light bulb and get the electrons delivered to us, those electrons were generated in the generation layer, transmitted through the transmission network, and passed into the distribution network to our homes. And that worked really well. The grid was designed with centralized production. It was designed for reliable power flow in one direction, from generation to transmission to distribution to us. It worked really well with predictable loads and it served its purpose: any time you hit the switch, you get the electrons delivered to you. And this is changing. The two big things that are happening right now are, one, there is growth of distributed generation. Distributed generation, or distributed energy resources, DERs, are things like photovoltaics; you see rooftop PV panels here. There's a huge growth in solar, and in and of itself that's not a problem, because it's accelerating our transition to clean energy. But when you start generating on the distribution grid, you now have bi-directional flow of electrons: electrons are being generated at the generation level and flowing down, which is how the grid was designed, and now you're also getting some generation at the end, and that causes imbalances in the power system. So let's take a look at how much growth we are seeing in DERs. This is a graph for the U.S., and by the way, this is only showing renewable energy sources. You can see the hydropower is somewhat linear.
There is some growth in some years, in some decades, but it kind of continues along a linear path. But when you look at solar and wind, we are seeing a hockey stick curve. There's an immense growth in the deployed solar and deployed wind on the distribution network. And as these assets come online, they need to be integrated with the grid. One of the challenges this creates from a cyber security standpoint is that these assets come with their own sets of protocols. They come with their own sets of communication mechanisms. They have their own hardware. And now we have to integrate at every level, data, software, as well as hardware, with the rest of the grid network. That increases the threat of cyber attacks on this network. There's another chart that shows the increase in installed solar capacity across the world, and you can see it's a global trend right now: more and more rooftop photovoltaics is being installed, and therefore we are seeing an increased threat surface area as we integrate these assets with the grid. The second big trend that I want to talk about is electrification of transportation. And the number that I use is this: if you have a Tesla Model S with an 85 kilowatt-hour battery, and let's say you're charging it four times a month, that is effectively the energy consumption of 60% of a home in California. That effectively means you now have 60% of a home in California moving around on the roads, plugging in wherever it wants to, and putting that load on different parts of the grid. And that creates an interesting problem for the grid operators and planners. They need to know where to deliver these electrons, and they need to have the infrastructure in place to be able to deliver those electrons to you.
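As a back-of-the-envelope check of that 60% figure: the 85 kWh pack size and four charges a month are from the talk, while the roughly 540 kWh per month average California household consumption is an assumed round number, not something stated in the talk.

```python
# Rough check of the "60% of a home" claim from the talk.
pack_kwh = 85             # Tesla Model S pack size (from the talk)
charges_per_month = 4     # charging frequency (from the talk)
home_kwh_per_month = 540  # ASSUMPTION: rough CA residential average

ev_share = pack_kwh * charges_per_month / home_kwh_per_month
print(f"EV load is about {ev_share:.0%} of a household")  # ~63%
```

At 340 kWh a month, the car lands right around the speaker's "60% of a home" figure.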
So if you have 60% of a house moving around, putting its load wherever it wants to, you need to have that information upfront and have that entire network of charging stations integrated into the grid assets. And when you have these charging stations, they again come with their own sets of protocols that they use for communication, they come with their own hardware, they come with their own software. So as we integrate these layers with grid assets, that again expands or widens the threat surface area for cyber attacks on the grid. So that's the underlying premise for why new security mechanisms, particularly identity management systems, are needed as we make our transition from a largely fossil-based electricity grid to a renewable electricity grid. So, background and current state. The first thing to talk about is that the reason we want to integrate these devices is because the data they bring to the table is incredibly useful for us to plan and operate the grid. So we must be able to do two things: we must be able to trust the device, to say it is who it claims to be, and we must be able to trust the data that's coming out of that device. We have to be able to do both. Currently, the way a device proves itself and proves the integrity of its data stream to other devices on the network is one of the following. We either use secure non-volatile memory plus hardware cryptographic operations; or a relatively newer technology called physically unclonable functions, also called PUFs; or a very special case of PUFs called physically obfuscated keys, POKs. I'm going to discuss each of these in brief detail before getting into the work we have done in SecureID. The current best practice is using non-volatile memory.
So this is the hardware component with hardware cryptography, and you've seen these commonly as TPM chips. If you work in a big corporation with a laptop given to you by the company, typically there's a smart card or USB stick that you put in the laptop to allow it to authenticate itself to the rest of the network, thereby giving some assurances to the rest of the network that the device can be trusted and the data coming from the device can be trusted. This is the current top-of-the-line solution being used in the world, including for devices that are mobile and move around a lot. Effectively, what this approach does is it takes a secret, identity information in this case, and puts it in non-volatile electrically erasable programmable read-only memory, an EEPROM. Or it can use another hardware component called static random access memory, SRAM, but SRAM always has to be powered on. So you have to have these hardware components on the device, or on the agent that represents the device, and be able to store the identity information in there so it's protected. On top of storing the identity in that hardware, you then have hardware cryptographic operations such as digital signatures or encryption, and you have a fair amount of guarantee that the key is now secure, and therefore when a device does present its identity, there is a high likelihood that it is the right device you're talking to. So, what is the limitation, what is the problem with this approach? Well, one, it's an expensive approach, because it's a hardware-based approach. And it's expensive both in terms of the design area of the hardware component itself, and in terms of power consumption, particularly for SRAM-based solutions, because they need to come with a power mechanism, a battery, that's always attached.
And digital signatures require expensive cryptographic hardware to implement the signing algorithm. Even with all of the protections that are around these devices, one of the things you see is that they are vulnerable to invasive attack mechanisms, meaning if you have physical access to the device, you are able to tamper with it, and if you can tamper with the device, you compromise its integrity. So we have built more hardware on top of the hardware we already placed, which is called active tamper detection or prevention circuitry, thereby making the hardware itself even more expensive than it needs to be. So we have a really expensive hardware solution, state of the art, that works really well for the most part, but it's expensive. And as we bring more and more devices, particularly more and more dumb devices, towards the edge of the network to integrate with the rest of the network, we can't afford the kind of cost that comes with having these hardware solutions on each of those devices. So that's EEPROM-based methods. The new and emerging technology, and I say new and emerging in air quotes here because the technology is more than a decade old but still fairly new in terms of its deployment, is called PUFs, Physically Unclonable Functions. PUFs are based on the idea that even though the mask and the manufacturing process are the same for the devices or ICs being produced, there is enough variability that each IC is a little bit different from the rest of the ICs produced on the same manufacturing line. PUFs leverage this variability to derive the secret information, to derive the identity of the device itself, and therefore a PUF is often called a silicon biometric. Now, typically when we model a PUF as an equation, the equation looks like R = f(C), where C is the challenge, so we send a challenge as an input to the PUF.
The PUF function itself responds with a response R, and that R is unique for every IC that is ever produced. That's the fundamental idea. So each device, because of the variability in the manufacturing process, is carrying with it its own identity. There's nothing that needs to be stored; there are no separate keys that need to be created. The device can identify itself simply if we model the device as a function R = f(C). And there are lots of advantages to this. One is that PUF hardware uses very simple digital circuits, so it's super cheap to fabricate. We don't have to spend the kind of time and dollars that we do on non-volatile EEPROM-based solutions. PUF applications do not require the use of digital signatures or expensive cryptography. And since the identity itself is derived from the IC characteristics that are on the chip already, no new hardware needs to be added to the device itself. And any physical attack on the device is meaningless, because the moment you power off the device, there is no key anywhere to be had. But PUFs are also not without their disadvantages. One of the biggest ones we are seeing, which is why this technology is not widely deployed, is that they are prone to degradation based on environmental conditions. So if you take a device and operate it in a low-pressure environment at high altitude, then its R = f(C) response may be different enough that we no longer recognize the device to be what it claims to be. And things like that are prevalent. There are many different types of PUFs, and each of those PUFs is prone to different types of degradation under different environmental conditions. The second thing is they are challenging to implement in untrusted networks. And the reason this is important for us, as we are talking about DER integration into the grid, is that most of the time DERs, whether it's home batteries, solar panels, etc.,
they are owned by the customer. So the grid operator does not own these devices and therefore does not have any kind of control it can exercise over these devices without extensive permission from their owners, effectively rendering the DERs to be operating in an untrusted network, not in the trusted grid communication networks. And then the third limitation of PUFs is that they are slow. The authentication speeds are inversely proportional to the number of CRPs, the challenge-response pairs. So if you think about a device that supports 10 challenges, and for each of those challenges responds with one unique response, you now have a map of 10 CRPs. And as you give a challenge to a device, you have to go through that map, look up the right response, and match it to the response you get from the device. The bigger the CRP table gets, the longer it takes for a PUF-based authentication to occur. So even though it's linear, for deployments of 100,000 devices or a million devices on the network, the latency is prohibitive enough that a PUF may not be the solution to go with. To solve this particular problem, we came up with a special type of PUF called a physically obfuscated key, a POK. It's still modeled as R = f(C), but it only accepts a single challenge. So there is no CRP map; there is only ever one challenge, and for a given challenge there's only one response. So you get faster response times, but you still have all the other limitations of a PUF. So even though POKs solved the latency issue and improved adoption in real-world environments, they're still subject to the same limitations we have with PUFs. Now to the fun part. So we have all of these technologies, and they have their pros and cons: some are expensive and work really well, some are cheaper but don't work well all the time.
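The CRP-table authentication described above can be sketched as a toy illustration. Note this is not a real PUF: the `fake_puf` function and the device names are hypothetical stand-ins, since a real PUF derives its response from silicon variability rather than from a hash.

```python
# Toy sketch of challenge-response-pair (CRP) authentication.
# A verifier enrolls CRPs from a trusted device, then authenticates
# later sessions by comparing responses against the stored table.
import hashlib

def fake_puf(device_id, challenge):
    """Stand-in for R = f(C); a real PUF answers from silicon variability."""
    return hashlib.sha256(device_id + challenge).hexdigest()

# Enrollment: record CRPs for a known-good device in a trusted setting.
device = b"der-inverter-42"  # hypothetical device name
crp_table = {c: fake_puf(device, c) for c in (b"c1", b"c2", b"c3")}

# Authentication: issue a challenge, compare against the stored response.
def authenticate(respond, challenge):
    return respond(challenge) == crp_table.get(challenge)

assert authenticate(lambda c: fake_puf(device, c), b"c2")        # genuine
assert not authenticate(lambda c: fake_puf(b"clone", c), b"c2")  # impostor
```

The latency problem the talk mentions is visible in the table itself: more CRPs means more enrollment and lookup work per device, whereas a POK collapses the table to a single pair.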
How do we solve the problem of making something that works all the time and is still cheap? That's what we have done here with SecureID. Our approach, fundamentally, was informed by the idea that we want to develop a software-based, information-theoretic approach that allows for the benefits of a hardware-based approach, but purely using software. So, with no silicon involved, can we get information-theoretic security to the level where hardware is today, and not have the cost associated with a hardware-based solution? At the core of SecureID is KeyMaker. KeyMaker is a set of smart contracts, protocols and library functions. I'll give a brief overview here, but I'll explain in quite a bit of detail how this works. Effectively, what we do with KeyMaker is we create a key, which is the device identity, and we then shard that identity into multiple smaller pieces. We distribute the shards to the rest of the network, to adjacent devices, to other nodes on the network. And when we need to do verification or authentication of that given device, we simply collect the shards back and we match the shards to what the key would have been. That's not quite accurate, but I'll explain how that matching works. We call it social verification: because there are a few devices in the network holding shards for the device in question, those devices are in a position to socially verify that the device talking to them can be trusted, that it is what it claims to be, and that we can trust the data coming from it. At the heart of SecureID there are two algorithms. One is called Shamir's secret sharing. It's an algorithm that was developed by Adi Shamir, I want to say in the 1970s, so it's a really old technique. For those of you who don't work in cybersecurity on a day-to-day basis, Adi Shamir is the S of RSA.
So RSA is a very popular algorithm, and Shamir is the S of RSA. He's done incredible work, particularly in information security. So we are using his algorithm, and we are combining it with symmetric encryption to produce the SecureID KeyMaker protocol. Before we get into how the protocol works, I want to talk about some definitions and some words that I tend to use interchangeably during the discussion. I'll use the words secret, or secret key, or private key, or device key to mean the device identifier, and I use them interchangeably depending on the context, but they all mean the same thing. When we are talking about DER integration into the grid, each of the DERs has an identity, and that identity is a secret or a private key or a device key. Any time I use these words, I'm referring to the identifier of the device that is known only to that device. The second thing is the secret owner. The secret owner is the device whose identity we are talking about. We talked about taking the device identity and splitting it into shards; a shard is a single encrypted share of the key, of that secret. A shard custodian, or shard holder, is a peer device that has access to that shard, that has received the shard from the originating device. And finally, a custodian is a device that holds a complete set of keys for itself or for its peers. So with those definitions, let's talk about sharding for a little bit, and we'll talk about simple sharding first, not computational sharding. Let's imagine that we have a device key that's 123456. Let's now split the key into three shards, so we'll say n equals three, where n is the number of shards, and we'll call each of these shards S1, S2 and S3. In our case S1 is 12, S2 is 34 and S3 is 56. And when you combine all three shards, you can reproduce the key.
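The simple sharding just described can be sketched in a few lines; this is an illustration of the talk's example, not project code.

```python
# Naive sharding: cut the key into n consecutive pieces.
# Every shard is needed to rebuild the key, and each shard
# directly leaks part of the key.
def naive_shards(key, n):
    size = len(key) // n
    return [key[i * size:(i + 1) * size] for i in range(n)]

shards = naive_shards("123456", 3)
assert shards == ["12", "34", "56"]
assert "".join(shards) == "123456"      # all n shards reproduce the key
assert "".join(shards[:2]) != "123456"  # lose one shard, lose the key
```

The two assertions at the end are exactly the weakness the talk goes on to address: verification fails if any single custodian is offline.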
And it's pretty evident that if you are to perform any kind of validation or verification or authentication in this simple sharding technique, you'll need all the shards; you'll need to put all the shards together to be able to perform that verification. So, going back for a moment, what happens if one of the shard holders is unavailable, or is switched off, or is undergoing some type of maintenance, or cannot be trusted? If any of the shard holders are unavailable, you cannot perform verification. We can solve that by improving our sharding scheme a little bit. We still have the same device key, 123456, and we still have the same number of shards, three, but we introduce a new variable called k, and in our case k equals two. What that reflects is: I don't want all the shards I've distributed to have to be available for me to perform the verification; just two out of the three should be sufficient. So how do I shard in a manner that two shards are sufficient, even though I've distributed three shards? In this case, I create three overlapping shards, each carrying four of the six digits of 123456, so that any two shards together cover the whole key. Now I only need two shards, so even if one of the devices is unavailable, I can still do verification with two shards. There's a little bit of a typo here, so please ignore that; the shards are represented right under the key here. So the question is, what is the problem with this scheme? I've now solved my previous problem of some devices being unavailable and can still perform verification. The problem with this scheme is it violates perfect secrecy, and perfect secrecy is the idea that, regardless of the size of the shard that I hold, I should not have any more information about the key.
And in this case, obviously, since each shard now carries more digits of the key itself, I can brute-force the rest: with a four-digit shard it only takes about 100 attempts to guess the two digits I'm missing, whereas with the original two-digit shards it would take two orders of magnitude more attempts. So because I've changed the size of the shard, I now have more information about the key, and that violates perfect secrecy, because perfect secrecy says that no matter what the size of the shard is, I should never have more information about the key itself, meaning I should still need the same amount of computational power to guess the key as I would if the shard were smaller. So we had to come up with some constraints that our identity verification protocol must satisfy for it to work well and still have the same degree of protection that a hardware solution would have. And this is the core of KeyMaker now. The first constraint is that only k shards are required to perform social verification of the key, where k is always less than n, and n is the total number of shards that we have distributed to the custodians. The second constraint is that any shard Sx must not be a subset of the key. The third constraint is that all shards S1 through Sn, when combined together, must not reveal the key. And this is really counterintuitive, right? You have n shards that you distributed; you only need fewer than n shards to be able to perform verification; none of the shards is a subset of the key; and all shards combined together still do not reveal the key. These seem like contradictory requirements, but let's apply them to the simple sharding example we talked about earlier, and see whether the simple sharding technique satisfies them. Our first constraint: only k shards are required to perform social verification of the key, k less than n.
Of course, in our case here, the key is 123456, we have three shards, and we require all the shards to be able to perform the verification. So we have failed that constraint. The second constraint is that any shard Sx must not be a subset of the key. Of course, each of the shards here is a subset of the key, so that doesn't work either. And finally, all shards S1 through Sn, when combined together, must not reveal the key. When we combine these we do get the key, and therefore we have failed that constraint too. So how do we produce shards in a way that not all shards are required to verify the key, no shard is a component of the key itself, and all shards together cannot produce the key? Here's the algorithm, and we'll talk through it with two different examples. This is the linear example. We take the key, here 123456, and we place the key on the y-intercept. Then we generate a random point in space, this one right here. Because we now have two points, we have a unique line with a slope, and we draw that line. On this line there are infinite points, and we take any n points and make them shards. The points that we distribute to the rest of the network are these points; let's say we distribute three of them. Now, when we do verification, all we need is any two of the shards that have been distributed. They produce the unique line that they sit on, and we match that line against the line through the random point and the key that is being held by the device. If those two lines match, we know that this device has the key, without the key ever being revealed. And if you look at whether this satisfies all our constraints: well, we have three shards and we need only two to do the verification, so it satisfies constraint number one. The second is that none of the shards is a subset of the key itself.
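The line-through-the-y-intercept construction just described is Shamir's scheme. Here is a minimal sketch, with the caveat that practical implementations work over a prime field rather than real-number lines, which is what preserves perfect secrecy; the prime and the shard x-coordinates below are illustrative choices, not the project's parameters.

```python
# Sketch of Shamir secret sharing over a prime field.
# The key sits at the y-intercept of a random degree-(k-1) polynomial;
# shards are points on that polynomial.
import random

PRIME = 2**31 - 1  # small Mersenne prime, fine for a demo

def make_shards(key, k, n):
    """Split `key` into n shards, any k of which recover it."""
    coeffs = [key] + [random.randrange(1, PRIME) for _ in range(k - 1)]
    shards = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shards.append((x, y))
    return shards

def recover_key(shards):
    """Lagrange interpolation at x = 0 recovers the y-intercept (the key)."""
    secret = 0
    for xi, yi in shards:
        num, den = 1, 1
        for xj, _ in shards:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shards = make_shards(123456, k=2, n=3)   # the talk's (2,3) line example
assert recover_key(shards[:2]) == 123456  # any two shards suffice
assert recover_key(shards[1:]) == 123456
```

With `k=2` this is exactly the line example: two points determine the line, and evaluating it at x = 0 lands on the key. Raising `k` gives the parabolic and cubic cases discussed next.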
Yeah, these are independent points in 2D space; they are not related to the key in any manner. And finally, if you get all the shards together, just the shards, you only produce the line, but you never know what the secret is, because that line has infinite points and none of the peer devices are aware that the device key resides on the y-intercept. So it satisfies all the constraints. This is what we call a (2,3) scheme, meaning three shards distributed with only two required. What does it look like when we go up to the next polynomial? We have a parabolic curve here, and we are going to do a (3,n) scheme, meaning we distribute n shards but three shards are required instead of two. We do the same thing: we put the key on the y-intercept, we generate two random points in this case, and then we generate the shards on the parabolic curve we have created out of these three points. We distribute these shards, and now, out of the n shards we have distributed, we require three to be able to reproduce this curve. We take the two random points and the key, we produce the same curve, and if the curves fit each other, then we know that the device in question has verified its identity. And we can keep increasing the degree of the polynomial: we can have a cubic curve and require four points, four shards, from the network. When k is large enough, it becomes incredibly challenging, especially in a peer-to-peer network like a blockchain network, to compromise a device key. But we didn't stop there. What we wanted to be able to do, in addition to hiding the key and creating a zero-knowledge proof as with Shamir secret sharing, was to be able to recover the key if the device in question was lost. So instead of applying Shamir secret sharing to the identity key, we did the following.
We created another random key, not the random points we created here, but yet another random key, and we pass that random key through Shamir secret sharing to produce the shards. We then take the actual key and pass it through a symmetric encryption algorithm to produce a cipher. We concatenate that cipher with each of the shards, and then we distribute each of these to the peer devices. The advantage of this is that now the shards contain absolutely no information, including derivative information, about the key, because the key was not used to produce the shards at all. And yet, if the key is lost due to any activity happening on the device in question, we have recoverability: a system operator can go in, take the cipher, apply the random key, and produce the identity again. In terms of the actual algorithmic implementation: one, we generate the random key; two, we split the random key into n plus one shards using SSS; three, we encrypt the identity key using the random key as the encryption key for symmetric encryption. We basically use a library called NaCl, specifically its SecretBox construction, which consists of the XSalsa20 cipher and a Poly1305 message authentication code. We use that to encrypt the identity of the device and produce the cipher. Four, we append the cipher to the shards produced by SSS, so the output of step two and the output of step three get combined together; and five, we distribute the shards using a gossip-based protocol. So that's it. I'm going to pause here to see if there are any questions; if not, we can quickly go into the applications and future improvements. I see, that was just Cheryl saying that she was leaving; she has another commitment. Now that's actually fascinating. It was really good to see the difference.
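Putting the KeyMaker steps just described together, the shape of the flow can be sketched as follows. This is an illustration, not the project's code: a fresh random key is sharded with Shamir secret sharing while the identity key is symmetrically encrypted under it, and each peer receives a shard concatenated with the cipher. The talk uses NaCl SecretBox (XSalsa20-Poly1305); a toy HMAC-SHA256 encrypt-then-MAC stands in here purely so the sketch runs on the standard library, and it must not be used for real encryption.

```python
# Sketch of the KeyMaker flow: shard a random key, encrypt the identity
# key under it, and hand each peer (shard_i, cipher).
import hashlib, hmac, random

PRIME = 2**61 - 1  # Mersenne prime for the demo field

def sss_split(secret, k, n):
    coeffs = [secret] + [random.randrange(1, PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def sss_join(shards):
    out = 0
    for xi, yi in shards:
        num = den = 1
        for xj, _ in shards:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        out = (out + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return out

def encrypt(key_bytes, msg):  # TOY stand-in for SecretBox (msg <= 32 bytes)
    stream = hashlib.sha256(key_bytes).digest()
    ct = bytes(m ^ s for m, s in zip(msg, stream))
    return ct + hmac.new(key_bytes, ct, hashlib.sha256).digest()

def decrypt(key_bytes, blob):
    ct, tag = blob[:-32], blob[-32:]
    assert hmac.compare_digest(tag, hmac.new(key_bytes, ct, hashlib.sha256).digest())
    stream = hashlib.sha256(key_bytes).digest()
    return bytes(c ^ s for c, s in zip(ct, stream))

identity_key = b"device-identity"        # the DER's secret identity
random_key = random.randrange(1, PRIME)  # sharded, never stored whole
shards = sss_split(random_key, k=3, n=5)
cipher = encrypt(random_key.to_bytes(8, "big"), identity_key)
packets = [(s, cipher) for s in shards]  # shard_i || cipher goes to peers

# Recovery: any k shards rebuild the random key, which decrypts the identity.
rk = sss_join([p[0] for p in packets[:3]])
assert decrypt(rk.to_bytes(8, "big"), cipher) == identity_key
```

Note how the recovery property follows directly from the construction: the shards carry zero information about the identity key, yet any k of them plus the cipher reconstruct it.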
You know, how you're solving it, and how you outlined those original problems about the environment first, I quite like that; that was really interesting. I think at the end it would be great to see how we can help you in New Zealand, what we can do to help get this into New Zealand and what's needed there. Are there any questions from the floor? If not, we will continue. Sounds good. So let's talk about applications and future improvements to the algorithms. The applications are many, and I've listed a few broad-stroke use case categories, if you will, for this algorithm to be applied. Anywhere we need identity verification and have the need to preserve the privacy of the participant, the algorithm is a really good fit. Grids obviously fit that scenario because, at least in California, the privacy laws are incredibly strict, just like GDPR in Europe. Grid operators need to integrate with behind-the-meter DERs that are owned by the consumer, not by the grid operator, and they need to do so while satisfying constraints around the privacy of the consumer that owns those devices. So we want the algorithm to be privacy preserving, as well as able to provide a zero-knowledge proof around identity, so we trust the device and trust the data coming from it. So grid asset communication security is huge. We are increasingly seeing autonomy in transportation, and also in DERs. So anywhere you have command-and-control scenarios with grid-to-behind-the-meter device integration, where we need to be able to trust the device and protect the privacy of the consumer, that's a great use case. And finally, in transportation we are seeing autonomous vehicles coming up, and in fact, even on the SLAC campus we have autonomous buses and cars being tested all the time.
And these vehicles typically carry a partial neural network with them that they use to process their surroundings and make driving decisions in real time based on the environment they're in. And the challenge we increasingly see is that these cars driving themselves on the road will need to share the results of their neural networks with other cars around them. The difficulty with that is the cars are not necessarily produced by the same vendor or manufacturer. We need to create an interoperable trusted layer where a Tesla can communicate with a Toyota and be able to inform it. So let's say there's a Tesla in front and a Toyota behind, going in the same lane on the road. For whatever reason, the Tesla sees an obstacle and decides to hit the brakes, and now transmits that decision to the Toyota behind it, so it can anticipate that the Tesla is going to be braking and can start slowing down, reducing its response time. There are a couple of things that can go wrong, but the two pertinent to our discussion are these. One, we have to be able to trust that the Tesla is who it claims to be, that there's no data injection happening in that network claiming the car must stop. And the second thing we need to be able to trust is that the data coming from the Tesla is actually correct, that there's no man-in-the-middle attack happening where falsified data is being put into the communication stream. For all of those scenarios, we have to have a decentralized, distributed protocol that can verify the identity of the device and the integrity of the communication stream, and be able to do it at low latency levels.
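The two trust checks in the braking scenario above, sender authenticity and message integrity, can be illustrated with a small sketch. Real V2V stacks would use certificates and digital signatures rather than the pre-shared key assumed here, and the function names (`send_brake_event`, `verify_event`) are illustrative, not part of SecureID.

```python
# Sketch: the receiving car verifies both who sent a message and that it
# was not tampered with in transit, using an HMAC over the event payload.
import hashlib
import hmac
import json
import time

SHARED_KEY = b"session-key-established-out-of-band"  # assumption for the sketch

def send_brake_event(sender_id, key):
    """Sender (e.g. the Tesla in front) authenticates its braking decision."""
    event = {"sender": sender_id, "action": "brake", "ts": time.time()}
    payload = json.dumps(event, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_event(payload, tag, key):
    """Receiver (the Toyota behind) rejects forged or altered messages."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = send_brake_event("tesla-123", SHARED_KEY)
assert verify_event(payload, tag, SHARED_KEY)        # genuine message accepted
tampered = payload.replace(b"brake", b"clear")       # man-in-the-middle edit
assert not verify_event(tampered, tag, SHARED_KEY)   # altered message rejected
```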
So we can't use really expensive systems, and we cannot use really latent systems, leaving us with one approach: information-theoretic approaches, which is what we have. So for low-latency, high-performance, privacy-preserving use cases, this algorithm is a tremendous fit. I have partners in the US, particularly Hitachi North America, whose research group is interested right now in deploying the algorithm on their devices in their research lab, and commercializing it if that ends up being a good fit for them. And to answer your question, Michelle, that could be one of the things to look at also in New Zealand: are there device manufacturers or vendors who are interested in this type of mechanism, to provide some sort of trust around the devices they are shipping? And what kind of environments are those devices running in; does the use case fit the advantages this algorithm brings to the table? I'd love to have some connections there. So your timeline on this is actually now, isn't it? You're ready to do this now. Yeah, so we have the core algorithm, we have tested it out, we have deployed it in two different test labs, one at Stanford University and one at SLAC, and we are in conversations not only with device manufacturers but also with the Department of Energy to further the work we have done so far. And that brings me to my final slide, which is the features we are hoping to add in the next two years to this protocol to make it a full-blown identity management system. We want to be able to do continuous authorization: we don't want to authorize the device just one time, but every single time a device is transmitting data.
We want to reauthorize the device to make sure that it is accessing the resources it should be. There should be role-based access control: we are looking to add roles and policies for different devices. And finally, risk-based authentication: based on the telemetry and the authentication logs from the devices over their history of operations, we want to be able to infer whether a device is showing some sort of anomalous behavior. If it's logging in from a different geography than the one it typically sits in, we want to see if that changes the risk profile of the device and, accordingly, change the authentication mechanism, maybe requiring a second factor if needed. We're looking into making that happen. We have some ideas on how we're probably going to do that, but that's still work that is not published, so I'm going to withhold those details. And I will pause here for any questions anybody has. That was amazing, thank you. You can end sharing your screen now, actually. Thank you for sharing; you put it in such a fantastic way to learn from, quite easily understandable even for a person that's not in security or blockchain. You have a fabulous way of presenting it. It's great. Thank you. Does anyone have any questions?
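The planned features just described, role-based access control plus risk-based authentication that escalates on anomalous geography, could look something like the following sketch. The roles, field names, and the single geography signal are illustrative assumptions, not part of the published SecureID protocol.

```python
# Sketch: an RBAC check per request, plus a simple risk decision that
# requires a second factor when a device logs in from an unusual region.

ROLE_PERMISSIONS = {
    "meter": {"report_telemetry"},
    "inverter": {"report_telemetry", "adjust_output"},
}

def authorize(role, action):
    """Role-based access control: is this action allowed for this role?"""
    return action in ROLE_PERMISSIONS.get(role, set())

def risk_decision(device_history, login_region):
    """Risk-based authentication: unusual geography triggers a 2nd factor."""
    usual_regions = set(device_history["regions"])
    if login_region not in usual_regions:
        return "require_second_factor"
    return "allow"

history = {"regions": ["california"]}
assert authorize("inverter", "adjust_output")                    # permitted
assert not authorize("meter", "adjust_output")                   # denied
assert risk_decision(history, "california") == "allow"
assert risk_decision(history, "oregon") == "require_second_factor"
```

In a continuous-authorization setting, both checks would run on every transmission rather than once at enrollment, which is the distinction drawn above.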