I'm going to go over two things. The first is to talk a little bit about privacy, and the second is to talk about one of our products that is architected to use SPIRE specifically, and some of the things we're doing with privacy there as well.

To begin, a little bit about me and doc.ai. doc.ai is a company that builds mobile health products, AI technology and algorithms, and privacy-preserving infrastructure. I'm the head of edge infrastructure at doc.ai. I also co-founded Network Service Mesh, and I work with groups like the CNCF to help with early adoption of cloud native technologies in the telecom and healthcare spaces. And I'm a co-author of Solving the Bottom Turtle, which is available at spiffe.io/book. Go get it: it's free to download, it's a wealth of information, and it had an amazing cast of writers.

So let's jump directly into the content. First, security versus privacy. You've heard a lot about security recently: confidentiality, integrity, and availability, and how those three pillars are integrated together. Privacy is not security. They are two separate concepts; they're related, but they're different. Security is freedom from danger. Privacy is the quality or state of being apart, free from observation by a given company or a given party. So the two of them are related, but they're not the same thing.

As an industry, we're building toward more secure systems. We're learning how to secure the Kubernetes clusters that will build and run AI models, or run applications that do a variety of things in banking and so on. But there are also issues we have to look at on the privacy side. To give you an example of one area, and it's not the sole area, I just want to use it as a basis to establish some of the issues around privacy: look at AI, like neural networks. Neural networks learn information about the properties of the features they're fed, and they tend to overfit on the information that's in there. Those overfitted models have basically learned too much about individual inputs. The same can be said about a lot of other products as well: they learn more than what they need in order to fulfill their purpose, and they have access to a lot more information than they should.

As a thought experiment, look at the visualization and take a guess where the circles and the squares are along the edges of these two simple models. Looking at the dashed line, it's easy to guess that there's probably a square where it dips beneath the line. Looking only at the solid line, it's much harder to tell: you know there's some shape there, but you can't tell what information is there. And if you compare the models against the actual information they were trained on, you can see how those spikes can be used to find information about individual data points.
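To make the thought experiment concrete, here's a minimal sketch with a toy polynomial model; the dataset, polynomial degrees, and the membership signal are all illustrative assumptions of mine, not anything from the talk. The overfit model hugs its training points far more tightly than unseen points, and that gap is exactly what an observer can exploit to guess who was in the training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small "sensitive" training set, plus unseen points from the same distribution.
x_train = np.linspace(-1, 1, 8)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.1, 8)
x_unseen = np.linspace(-0.9, 0.9, 8)
y_unseen = np.sin(3 * x_unseen) + rng.normal(0, 0.1, 8)

# An overfit, spiky model (high degree, like the dashed line) versus a
# smoother, lower-capacity model (like the solid line).
overfit = np.polyfit(x_train, y_train, deg=6)
smooth = np.polyfit(x_train, y_train, deg=2)

def mean_error(coeffs, x, y):
    return float(np.abs(np.polyval(coeffs, x) - y).mean())

# Membership signal: how much better the model fits points it was trained on.
for name, coeffs in (("overfit", overfit), ("smooth", smooth)):
    gap = mean_error(coeffs, x_unseen, y_unseen) - mean_error(coeffs, x_train, y_train)
    print(f"{name}: unseen-vs-train error gap = {gap:.3f}")
```

The overfit model shows a large gap between unseen and training error, so low error on a point is evidence that point was in the training set; the smooth model leaks far less.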
This is true not only for AI; it's true of any dataset you gather, any information you're collating or putting together. It's important to secure that information if you're making use of it, but you should also consider what information you're bringing in and what cost to privacy you're introducing. If the dataset were leaked and put onto the black market, there's no way to take it back off the black market, so the question becomes: what is the cost to the individuals in it?

There are a variety of techniques in AI to reduce this. We won't go into them because there's not enough time; just know that there are things you can do to help, but they're not enough on their own. One technique we're starting to use is to process data on the edge. And again, this goes above and beyond AI training; it's about what information you want to operate on at all. Traditionally, we try to centralize as much information as possible, perform our computation on that centralized dataset, and then produce some type of action. With the new patterns that are coming along, especially in edge computing, we're starting to see a central place where some model, some code, or some binary is kept; those get shipped down to the devices, and each device runs them on its own data and performs whatever work it needs to do. You may then update some aggregate information in a more privacy-preserving way in order to drive some action.

There are two reasons to do it this way. The first is to preserve privacy. The second is that the value of any given piece of information may not actually be worth sending up. To give you an example: suppose you're doing analysis on light bulbs and you want to know whether a given light bulb is likely to give out. If you look at the total amount of information being collected across the board, for every device, every light bulb in your organization, there's a tremendous amount of data there, and there may be very limited value in collecting every bit of it. So performing the computation at the edge, aggregating that information, and only sending the results back up is an economic decision in addition to a privacy-preserving one.

In the AI space, we do things like federated learning, which helps. Again, it's not enough, because you still have the potential for those spikes. So one of the things we do to help preserve privacy is add noise into the system.
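To make that edge pattern concrete, here's a minimal sketch, assuming a simple mean-per-device update and a Laplace noise scale chosen purely for illustration; the function names and the light-bulb numbers are hypothetical, not doc.ai's actual pipeline. Raw readings never leave the device; only aggregates, plus noise, travel upstream.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(device_data):
    """Runs on the device: compute an aggregate locally; raw data never leaves."""
    return device_data.mean()

def aggregate(updates, epsilon=1.0, sensitivity=1.0):
    """Runs centrally: average per-device results and add Laplace noise
    (scale = sensitivity / epsilon) so no single device's update stands out."""
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return float(np.mean(updates) + noise)

# e.g., hours of light-bulb runtime per device; only the per-device means
# are ever transmitted, and only the noisy fleet-wide estimate is stored.
devices = [rng.normal(1000, 50, size=200) for _ in range(10)]
updates = [local_update(d) for d in devices]
print("noisy fleet-wide estimate:", aggregate(updates))
```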
One technique for adding that noise is differential privacy. Suppose you have a sensitive question you want to ask. If you're in healthcare and running a survey, you might ask: have you tried heroin in the past year? From a survey perspective, you're going to get a bad outcome, because people won't want to respond, or if they do, there are incentives, both legal and social, not to tell the truth. One way around this is to add privacy into the question itself. You put the person in a private room with a fair coin, and they toss it. If it comes up heads, they toss the coin again, ignore the result, and answer the question truthfully; that second toss means a listener can't tell which branch they took. If the first toss was tails, they toss again and answer "yes" if it's heads and "no" if it's tails, regardless of the truth. What this does is give people plausible deniability: if someone says, "hey, you answered yes on this," they can say, "yeah, that was the coin toss." You don't know whether any given person's answer was the coin toss or the real value, but you still preserve the signal of the population. You can still reason about what the whole population is doing, because you know the probability distributions of those coin tosses.

So we have things going on in the industry that are designed to preserve that privacy. And one question you should ask yourself is: as engineers, how can we push to preserve privacy while still making sure we meet the mission of the company, that we're still able to get those positive outcomes?
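Here's a minimal simulation of that coin protocol. Since heads leads to a truthful answer and tails to a random one, P(yes) = 0.5·p + 0.25 for a true rate p, so the population rate can be recovered as 2·P(yes) − 0.5. The population size and the true rate below are made up for illustration.

```python
import random

def respond(truth: bool) -> bool:
    """One respondent, following the coin protocol described above."""
    if random.random() < 0.5:        # first toss: heads
        random.random()              # second toss, ignored: hides which branch
        return truth                 # answer truthfully
    return random.random() < 0.5     # first toss: tails -> answer by second toss

def estimate(answers):
    """P(yes) = 0.5*p + 0.25, so invert: p = 2*P(yes) - 0.5."""
    p_yes = sum(answers) / len(answers)
    return 2 * p_yes - 0.5

# Simulate a population where the true rate is 10%.
truths = [random.random() < 0.10 for _ in range(100_000)]
answers = [respond(t) for t in truths]
print(f"estimated population rate: {estimate(answers):.3f}")  # ~0.10
```

No individual answer can be trusted, yet the population estimate comes out close to the true rate.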
Driving a bit further into the security space, one of the other trends we're seeing is a push toward zero trust. I won't go too deeply into it; you've heard a lot about it here already. In a nutshell, think of zero trust as the next generation of infrastructure, moving away from perimeter defense. With perimeter defense, you have trusted networks, you create secure connections between those trusted networks, and then the workloads communicate. If an attacker enters the trusted network, you are at risk. With zero trust, you still have networks, but they're untrusted; they're not the core thing you trust. Instead, you trust the workloads and create secure connections between the workloads, so if an attacker enters your network, they don't immediately gain access. That doesn't mean you're immune to attack, but you are constantly identifying and continuously validating the things you're connected to and communicating with, at a very fine-grained, granular level.

One special call-out in this area is trusted execution environments. One of the reasons I'm calling these out specifically is that we're already starting to see them available not only in modern servers but also in phone hardware. They are encrypted sections of memory that a particular system can have, and they provide a level of security that prevents manipulation of a particular process from the host. In other words, the operating system cannot make changes without breaking the container. They also provide privacy, so you cannot peek into what's going on in that particular container. Again, these are available in modern hardware for both servers and phones. I don't think desktop computing has this to the same degree just yet, but we'll start to see it pretty soon. And one of the things I'm really excited about is the concept of SPIFFE identities being tied into these environments, so that you're able to reason about them.

So now that we have a primer on privacy and security, let me talk a little bit about some of the things we're doing with SPIFFE and SPIRE. We have a product called Passport. Passport is our return-to-work solution for helping companies get people back into their workplaces safely. Vaccines are starting to come around, and we want to make sure people are as safe as possible. So we're using some of the techniques I showed before to help people answer a set of questions that provide them with a cryptographic image, basically a QR code that is signed, which gains them admittance into the building. We ask them a variety of questions, and some of the questions can be added by the employer. What the employer is looking for is: is this employee safe to enter the workspace, and are we following the rules of the public health authority? But we do this on the edge: rather than aggregating most of that information, we leave the sensitive information on the edge and process it on the phone itself.

From an architectural perspective, it's a pretty simple architecture in terms of how we integrate things like SPIRE and Open Policy Agent. The system has been architected to support that infrastructure, so the front end and the back end communicate over it, and we can gather metrics and logging on the overall use of the system without having the private information added in. The user doesn't send that private information, so what we can log by default already preserves privacy to a wide degree. We built Passport with SPIRE in mind, and it's one of the areas we're using to work out some of the operational concerns that come up with SPIFFE and SPIRE. We'll then take that and continue to apply those learnings with other customers and groups we work with on a regular basis.

One last thing on the privacy side: those questions also have some differential privacy built into how they're set up. The results are aggregated together and sent off so that, as the employer, you only know that the user has positively attested to the questions in a way that allows them to enter. If there's a failure in the attestation, or they answered negatively, we don't report the negative response. The employer is never told that this person has COVID; instead, the employer just doesn't get the positive response saying the person is allowed in.
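As a rough illustration of the signed-pass idea only, here's a sketch assuming an Ed25519 key pair and a made-up token format; Passport's actual question flow, key management, and QR encoding aren't covered in this talk, so every name below is hypothetical. Note that a negative screening result produces no token at all, which mirrors the "only report the positive attestation" behavior just described.

```python
import base64
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # in practice: a managed, attested key
verify_key = signing_key.public_key()

def issue_pass(user_id, passed):
    """Only a positive attestation produces a token; a negative one produces nothing."""
    if not passed:
        return None  # never report "failed screening"
    payload = json.dumps({"sub": user_id, "iat": int(time.time())}).encode()
    sig = signing_key.sign(payload)
    b64 = base64.urlsafe_b64encode
    return b64(payload).decode() + "." + b64(sig).decode()  # -> QR code content

def check_pass(token):
    """Run at the door: admit only if the signature over the payload verifies."""
    payload_b64, _, sig_b64 = token.partition(".")
    try:
        verify_key.verify(base64.urlsafe_b64decode(sig_b64),
                          base64.urlsafe_b64decode(payload_b64))
        return True
    except InvalidSignature:
        return False

token = issue_pass("employee-42", passed=True)
print("admit:", token is not None and check_pass(token))
```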
So in short, I'm happy to see so many people here focusing on increasing security and moving toward zero trust. As we move into this new space, with edge computing, with SPIFFE and SPIRE being used for zero-trust-based systems, and as we gain more heterogeneous systems, do ask the question about privacy. Are we setting up good privacy structures? Don't assume that SPIFFE and SPIRE are going to save you from all attacks; instead ask the question: if we have had a breach, what then? What has been exposed, and how can we reduce that overall risk? Anyway, I want to thank everyone for your time. If you have any questions, I'll be on the chat. Thank you very much.

Frederick, one question, if you don't mind taking it out loud. An orthogonal question that often arises is compliance, and you have been part of the brain trust for the project around all compliance matters, and a sounding board for that matter. What pointers do you have, briefly or succinctly, to direct folks' minds toward the right framing for reasoning about SPIRE in the context of compliance?

Okay. When you talk about compliance, you have to ask: compliance with what? A lot of the environments I look at are HIPAA environments, or things that are treated as HIPAA environments even if they're not. So you have to ask what you're trying to comply with. Once you've identified that, you have to have the observability to tell whether you are complying. It's not enough to comply; you have to be able to prove that you're complying. From an observability perspective, one of the reasons I've been pushing for SPIRE is that I can get each workload to have its own cryptographic identity, and then I can reason about whether a particular system should have access to a particular piece of data, whether it should be able to decrypt it. If you have systems that are approved for HIPAA and other systems that are not approved for HIPAA compliance, you can create blanket rules that by default reject those kinds of access. And if someone tries to write a rule to override that, you can flag it in policy and get extra review in those spaces on an ongoing basis.

So in terms of compliance, my viewpoint is that SPIFFE, SPIRE, and the related ecosystem won't solve the problem directly for you, but they give you the tools necessary to work it out. To give you a really quick example: if you have a breach in the healthcare space and the attacker accessed a specific database, you may have to assume the entire database has been breached if they gained access to a system that has full access to that database. You can use observability to potentially narrow that down. By using SPIRE, we perform the attestation, we get the identity, and we then use something like OPA to say that a system can only ask for things related to a specific user. So there has to be some JWT token or something similar that is presented, which the receiving system can use to pare down those requests, and you then have observability into exactly what was called.
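To sketch that scoping idea in plain Python (a production system would express this as an OPA policy evaluated against a verified JWT-SVID; the SPIFFE IDs and claim names below are hypothetical): the receiving system denies by default, requires a HIPAA-approved workload identity, and pares the request down to the token's subject.

```python
# Hypothetical attested identities approved for HIPAA data; all else is denied.
HIPAA_APPROVED = {"spiffe://example.org/records-api"}

def authorize(caller_spiffe_id, token_claims, requested_user):
    """Default deny: require an approved workload AND a subject-scoped token."""
    if caller_spiffe_id not in HIPAA_APPROVED:
        return False  # blanket rule: workload not approved for HIPAA data
    if token_claims.get("sub") != requested_user:
        return False  # token only grants access to that user's own records
    return True

# The decision itself is logged, which is what later lets you scope a breach.
claims = {"sub": "user-123"}  # from an already-verified JWT(-SVID)
print(authorize("spiffe://example.org/records-api", claims, "user-123"))  # True
print(authorize("spiffe://example.org/batch-job", claims, "user-123"))    # False
```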
Then, if there is a breach, you can look at that observability and work out: yes, there was a breach, but rather than saying all of our customers were compromised, you can potentially identify the exact customers, and even the exact data, that was exfiltrated. That's a huge difference: instead of telling your customers that two billion records with a wide variety of data may have come out, you can say that 500 or 1,000 records were exposed, with the exact data known, and that those people have already been contacted. So it's not just compliance; from a trust-building perspective, that ends up helping tremendously as well.

Thank you. As with every conversation with you, this has been wildly informative and educational. Thank you very much.