Hey, Aloha. Welcome back to the Think Tech Hawaii studios for another episode of Security Matters Hawaii. Today we're digging into an area I don't think a lot of people pay enough attention to. We're going to talk about identity and privacy, and then we're going to get into consent. We've got one of the foremost gurus in this space with us today: Sal D'Agostino from Open Consent. Sal, welcome. Thanks for taking some time out of your schedule to join us, man. Appreciate it.

Aloha, Andrew. Always nice to be back on Security Matters. Thank you.

So I saw an article today, I think in Security Watch Magazine. Some folks in a district in California want to outlaw facial recognition technologies, at least until everybody gets a handle on how they're being used. What do you think the overriding problem we're having with identity in our country is? Did we forget how powerful it was? Is everybody too open with it? People share anything, but then they don't want anything shared. It's this dilemma I see.

Yeah, so I do think the pendulum's swinging a little bit. I also think the concept of privacy is something that's still evolving. It's not very well developed, as evidenced by the fact that both the International Organization for Standardization and our National Institute of Standards and Technology are right now working on putting together privacy frameworks. So it's not as if there really was one that people had before. That notwithstanding, a lot of people have done a lot of work around privacy for a long time. And I think what's happening right now is that you're beginning to see a change in what privacy is.
Before, if you look at both security and privacy, because they're very closely related, most of the effort and regulation was around the concept of personally identifiable information and sensitive information, and protecting that. That was considered the goal of privacy, and security was an important component in accomplishing it: doing things over a secure channel, encrypted, whatever the case may be. What's changed, in some ways due to GDPR, the General Data Protection Regulation, is the principle that users have privacy rights. So we've moved away from merely protecting people's personal information and then trying to make them whole after a breach, when they never really consented to have their information used the way it ended up being used anyway. The old model was very much about minimizing risk through an end-user license or an "I agree" button, the wrong kind of consent form.

The kind no one ever reads, right? That big, long, five-page, 10,000-word thing.

Right. So on one hand, that's your introduction, and then your only recourse comes when there's been a breach. It's too much been about that. What I think has changed is that there are now places where people are establishing privacy rights, and those rights are something that organizations actually have to respect and that are enforced. So if you look at what the goal of security and privacy becomes, rather than protecting personal information, it's ensuring people's privacy rights.
And if you begin to look at how you do security and privacy from that perspective, it has a profound impact on our industry. I say that as someone who has done this for a very long time; I've been reading people's license plates and doing facial recognition for probably 40 years. There's a very big difference between security surveillance and commercial surveillance. A security-based surveillance system should basically be public, because the purpose and justification for it is that you're there to provide public safety in general. So it's very holistic in that sense. When we were putting in license plate readers, for example, we would be open about saying: we're doing this so that you don't blow through the toll booth, so that the road gets funded and things work. There was a purpose for the surveillance there.

What's happened is that the deployment of surveillance cameras since 9/11 has gone through the roof. At the same time, Moore's law continues to march forward on multiple vectors: the resolution of the cameras, and the processing power of the cameras themselves. Alongside that there's been a proliferation of images that can be correlated, along with open source software to do things that used to be very complicated. When we built a license plate reader 40 years ago, it took a lot of hardware: a van full of equipment, hundreds of thousands of dollars of gear. Today you can do it with your smartphone and open source software. So in some ways it's been commercialized, and weaponized if someone wants to get hyperbolic about it. But it's generally available.
And now that it's generally available, it got used, I think with a lack of foresight about what people would think after the fact, for machine-intelligence identification and location. But there are also ways to do this intelligently. When you do machine vision, when you try to use AI with video, there are stages. You don't necessarily have to be processing every frame and trying to get every bit of information out of the picture. There's a concept of waiting until, say, an alarm condition occurs, and only then beginning to do the heavier work. So there are privacy-forward ways of doing things. A lot of that goes back to the fact that if you design from a user-rights perspective, you end up focusing on consent and notice and making them part and parcel of any activity, including surveillance itself.

Sure. In the municipality case I started with, San Francisco for example, do you think people could opt in or out of being surveilled in public, just as an example?

No, I don't think you can. I don't think people can get themselves removed from the facial database, right? You could say, OK, I've gone to Facebook and bought their graph and I've got all these images, or I've been scraping the internet forever and correlating people's pictures. It's not that hard, using a Google image search, to correlate a name to a face. Put your name into Google and look at the images; your face shows up, right? And the fact that it's so easy to do is exactly why you now need to be careful about whether or not it's OK for people to do it. So do I want that to be the case?
Well, at some point it's kind of fun, seeing what other Salvatore D'Agostinos show up when you go through all that. But then there's the other question of what it's being used for. Go back to the machine vision analogy: you have the raw data, then a general detection of things, then a classification of activity, and identification further downstream. What happens sometimes is that people throw all the switches on at once, just because they're there on the machine. In some ways, security people say: if I can get metadata, let me go get it. And since data is supposedly the oil of the new economy, people forget that a privacy breach is the toxic oil spill. There's no need for that kind of overkill in generating metadata about whatever you touch, online or anywhere. And I think that's exactly what that law is getting at. The other thing is that if people do want it, they can say so.

Look at what's going on in China. It's a surveillance state, and there is no opportunity for people to say: wait a minute, I don't want the government tracking me when I go in and out of public, or in some cases private, restrooms and counting the number of pieces of toilet paper I use, which is actually true. There's no chance in China for people to say no way. So I think it's really important and encouraging, and I'm not surprised it's San Francisco, that somebody says: wait a minute, we prefer not to have public identification and surveillance and tracking. And under GDPR, this is a special category, right?
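The staged approach Sal outlines (raw data, general detection, activity classification, identification downstream, with the later stages switched on only when justified) can be sketched as a small gate-keeping pipeline. This is an illustrative toy under assumed inputs, not Open Consent's or any vendor's implementation; the `Frame` fields and the `alarm` trigger are stand-ins for real video analytics.

```python
# Toy sketch of a privacy-forward video pipeline: each stage runs only
# when the previous one justifies it, so identification (the most
# privacy-invasive step) is the exception, not the default.
from dataclasses import dataclass, field


@dataclass
class Frame:
    motion: bool = False      # stand-in for cheap raw-pixel analysis
    activity: str = "none"    # e.g. "loitering", "entry"
    alarm: bool = False       # site-specific alarm condition


@dataclass
class PipelineLog:
    events: list = field(default_factory=list)


def process(frame: Frame, log: PipelineLog):
    # Stage 1: cheap, anonymous detection on every frame.
    if not frame.motion:
        return None  # quiet frame: generate no metadata at all
    log.events.append("detected")
    # Stage 2: classify the activity, still without identifying anyone.
    log.events.append("classified:" + frame.activity)
    # Stage 3: identification runs ONLY under an alarm condition,
    # which is where notice and consent obligations would attach.
    if frame.alarm:
        log.events.append("identification-attempted")
        return "identify"
    return "classified"
```

The point of the design is that a quiet frame produces no metadata at all, and identification is reached only when an alarm condition supplies a purpose for it.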
It's high-risk processing of personal information, called out in that regulation. So in some ways all San Francisco is doing is saying: wait a minute, this is high risk, do we need to do it? And that's a question you should always ask whenever you're doing high-risk processing of personal information. It's not just the risk to the data; the overhead of getting people to buy into it would not be trivial. So that's also a pretty clear indicator of whether or not it's a good idea, if you're looking at it through the kind of open consent protocol and design principles that we think can be very useful.

Okay, I agree. So let's go pay some bills. I want to get a little more into consent and what's possible with consent when we come back. We'll be back in just about one minute.

Hey, we're back with Security Matters Hawaii. Thanks for joining us. After the break, we've got Sal D'Agostino on the line and we are talking about privacy: we're at the intersection where identity, privacy, and security meet. Sal's been getting into consent, and he's a co-founder of Open Consent.
Sal, we talked a little bit about how people accept these end-user license agreements without reading them or understanding what's in them. Where can we go with consent? What's possible, and what's the vision for what you're doing with Open Consent? Talk a little bit about that for our audience.

A lot of it, Andrew, has to do with changing the balance in how that interaction with my information, or yours, or any individual's information, takes place online. Think about your current privacy experience: it's basically accepting cookies and consenting to downloads and operating-system agreements, which is the end-user perspective and a little different from consenting to being tracked. But in both cases, the usability is near zero. There's no opportunity for the individual to actually do anything. It's very one-way; it's not like you send something back. You're first sent off to other places to read other documents, which are themselves difficult experiences.

What we're looking to do is change what that looks like. In particular, a lot of it comes through the concept of notices: not you agreeing to things, but you getting notices back about what's happening with your information, where the notice is generated based on your consent. With that comes the ability to do certain things. In cybersecurity, Andrew, we promote the concept of having a cyber landing page: if you've got questions, there are places you can go, file with DHS, look up CVEs, places where you know you can do something about security. Now think about where you go on a website to do something about privacy. Picking up a phone and talking to somebody, I would say, is your only option, and often even that isn't available.
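The "notice generated based on your consent" idea can be illustrated with a minimal record and a derived notice, loosely inspired by the Kantara Initiative's consent receipt concept. The field names and structure here are illustrative assumptions, not the actual specification and not Open Consent's format.

```python
# Minimal sketch: a stored consent record, and notices derived from it.
# Field names are illustrative, not any real specification.
import json
import time


def make_consent_record(subject, controller, purposes):
    """Record what the data subject agreed to, and with whom."""
    return {
        "subject": subject,
        "controller": controller,
        "purposes": list(purposes),   # what the data may be used for
        "timestamp": int(time.time()),
    }


def make_notice(record, event_purpose):
    """Generate a notice for a processing event, flagging anything
    that falls outside what the subject consented to."""
    within_consent = event_purpose in record["purposes"]
    return {
        "to": record["subject"],
        "from": record["controller"],
        "event": event_purpose,
        "within_consent": within_consent,
        "action_required": not within_consent,
    }


record = make_consent_record("alice", "example-shop", ["order-fulfilment"])
# Processing for an unconsented purpose produces an actionable notice.
print(json.dumps(make_notice(record, "marketing")))
```

The inversion is the point: instead of the individual signing a one-way agreement, the organization emits notices checked against the consent on file, giving the individual something to act on.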
So the analog to the cybersecurity setup would be a privacy point of contact and some workflow that you can follow. Today, online, it's a dead end. What we're trying to do at Open Consent is change that: instead of a dead end, make it usable and operational. I've had the good fortune to work in different kinds of automation and machine intelligence over the years, not only surveillance and machine vision but robotics and other sorts of automation; the IDOLA platform I have with my other company, IDmachines, is tech automation for the physical security industry. A lot of what Open Consent is about is privacy automation and how to make it usable. Again, there are gaps in understanding, and there are certain things you can propagate automatically as a result of having rights, because they're just there and there's a legal basis for them. And that's the thing: Canada is very far along, Washington state is very far along, California is very far along, and in Europe it already exists. You're beginning to see a variety of enforcement of those rights, some of it very substantial. So there's going to be a need to provide users with the rights I was talking about before.

Security and privacy are both trying to manage risk. Many of us make a living putting in countermeasures, modeling, and changing risk equations for the better. And there are many, many benefits to having the user involved and actually put in control of managing their own risk. I mean, doesn't it make sense, right?
So think about where the risk sits when the enterprise suffers a breach of people's personal information. When OPM was hacked, my wife Sarah's information was taken, and then the attackers moved laterally.

Yeah, sure. Exactly.

That was a breach of personal information, and everyone then had to go through this whole process of doing all these things. The ability for you to have received a notice and been able to take action should just be built in, and it's more efficient. Even when that hack happens and you get the notification, if you knew ahead of time that that's how it worked, you'd be prepared for it and for whatever next step you needed. That's as opposed to this snake-swallowing-the-antelope gulp where suddenly everybody convulses at the time of the breach, which kind of maximizes the damage of the breach too. If you had automation in place, and the users already knew ahead of time what to do, that mitigates it. I'm not saying you design for breaches; I'm just using a breach as an example of how, if you bring the user in, there are layers of good things and benefit.

The same thing goes for deploying security systems. You want people to know that there's a security system in place, right? It's being public about security. So I think open, democratic surveillance, as opposed to the state-sponsored kind we were alluding to earlier where you don't have any say, is the only way to go. And the San Francisco article is a very healthy example of how you work that out.
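The "users already knew ahead of time what to do" idea could be sketched as a pre-registered mitigation playbook that a breach event fans out to affected users. Everything here (the playbook shape, step names, fallback behavior) is hypothetical, offered only to make the automation concrete.

```python
# Hypothetical sketch: users pre-register mitigation steps, so a breach
# event produces actionable per-user notices instead of a generic letter.
playbooks = {
    "alice": ["rotate-passwords", "freeze-credit"],
    "bob": ["rotate-passwords"],
}


def breach_notices(affected, data_classes):
    """Fan a breach event out into per-user, actionable notices."""
    notices = []
    for user in affected:
        notices.append({
            "user": user,
            "compromised": data_classes,
            # Fall back to a generic step if nothing was pre-registered.
            "next_steps": playbooks.get(user, ["contact-support"]),
        })
    return notices


for n in breach_notices(["alice", "carol"], ["ssn", "address"]):
    print(n["user"], "->", n["next_steps"])
```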
And I think that then we can become much better at it. Maybe it's no longer surveillance; maybe it's actually structured sharing. I don't want to be too optimistic, given the fact that most of the world economy, and I'm just picking a number out of the air, is based on this stuff. But who knows what might occur. It does provide a certain amount of lubrication to some international machinery.

For sure. Do you think that surface is going to expand with your identity being tied to biometric signatures that can be shared now? Like what I saw in the PLAA demo a few weeks ago, where they're making your biometric templates portable across different systems. Is the scope of your privacy surface expanding, and do we have a way to deal with that?

Well, it already happened with facial images, and now, unfortunately, with the commercialization of fingerprint and face. I've been doing some work recently to look at how face is used in modern devices, how you can restrict it to the trusted platform, and how to wipe it. It's not trivial to keep your biometrics secure, and I would not be a fan of having my biometrics on a server at any time unless I absolutely have to. That's just: no thank you. Explain to me why there's not another option.

Okay, fair enough.

And again: where's the consent network? That's the thing that concerns me. In no way is anybody building in the workflow to inform users about what might be, or might have been, a compromise of their biometric information. I'm just raising a flag for all of that, right?
And I think that's good, because it's not like you have to do it that way. Match-on-card in the smart card world is an example: your template gets captured at the device, sent to the secure element, matched on the secure element, and then an answer comes out. No one ever stored your template. It was captured, transmitted, matched, and an answer came out. Nothing had to hang around.

You don't need to keep it. After it's been sent, you got an answer, so you can decide, right?

Exactly. There are just inherently better ways of doing this stuff. And unfortunately, excuse me for going over the top there, people are not as often as they should be considerate of the implications of schemes that don't adhere to some of those design principles.

Yeah. That seems to me to be unfortunate, and I expect more stuff in the news over the next several years about bad biometric leakage. Hey, so we've got a couple minutes left. Why don't you give us the Open Consent pitch? How can people get engaged with some of this information? What should they be trying to do to protect themselves?

So again, we're going to provide people with the ability to have that sort of home page for privacy. We're going to open that up for free; Open Consent is going to do that in the relatively near future, operating out of the UK and Canada. To some extent this applies anywhere, but in those places there are stronger incentives, or stronger laws, I guess, would be the better phrasing. And then anybody can do that, and we're going to make it easy for people to do it.
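The match-on-card pattern Sal described a moment ago can be sketched as a small simulation: the secure element holds the enrolled template and answers only yes or no, and the freshly captured template is discarded after the comparison and never stored outside the element. This is a toy model of the data flow under assumed names, not any real card API, and the hash-based "template" is a deliberate simplification.

```python
# Toy model of match-on-card: the reader never stores a template, and
# the secure element exposes only a boolean answer.
import hashlib
import hmac


class SecureElement:
    """Holds the enrolled template; nothing outside can read it."""

    def __init__(self, enrolled_template: bytes):
        self._enrolled = enrolled_template  # never leaves this object

    def match(self, candidate: bytes) -> bool:
        # Constant-time comparison; only the yes/no answer comes out.
        return hmac.compare_digest(self._enrolled, candidate)


def capture_template(raw_scan: bytes) -> bytes:
    # Stand-in for real feature extraction: hash the raw scan.
    return hashlib.sha256(raw_scan).digest()


def authenticate(element: SecureElement, raw_scan: bytes) -> bool:
    candidate = capture_template(raw_scan)
    answer = element.match(candidate)
    del candidate  # the reader keeps nothing after the answer
    return answer
```

One caveat: real biometric matching is fuzzy, since two scans of the same finger never produce identical templates, so a production scheme scores similarity inside the element rather than comparing hashes. The point here is only the data flow: capture, transmit, match, answer, with no template retained.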
Then we're going to offer a service that keeps track of whether people keep that up to date, a few simple things. And there will be a paid service where we basically give you current information about the state of privacy. So rather than going to the privacy page and reading a privacy policy, there will actually be an icon-ish thing there that you can interact with, and it will tell you as much as that company is smart enough to tell you about what's going on with your personal information, how they're treating your privacy rights, and what they afford you. From there, on the paid business side, we can work with companies to make some of those rights operational and automated to the full extent. The free version has a few simple things made operational; the paid, live version isn't terribly much more, but it keeps a check on it.

Awesome.

And in general, it's a better signal, and it's easy to use. The goal is for it to be relevant to me, to individuals, and for us to be able to do something with it. It will be interesting to see how companies respond to that. In particular, I'm looking forward to working with the security industry, because we should have a very good understanding of this. In fact, rather than the commercial capitalism case, we have public safety and other bases for a justified purpose. In all of these cases, purpose is a very important thing. Right.
And just because we have a purpose doesn't mean there isn't still a requirement for notice and for being public. I think it will be to the industry's benefit to adopt that kind of privacy-forward approach. I'd like to commend the Security Industry Association, actually, Andrew. We did some work with them and developed some resources jointly, Open Consent and SIA. They were really out in front on this, and as a security trade association, really forward-thinking. We continue to collaborate with them, and there are some good references there. There's more and more available at Open Consent too, anytime people would like to take a look. We should be out from under the covers in the very near future.

Awesome. So folks, if you're watching out there, your privacy is something you need to take a good look at. Check out Open Consent, check out the information on SIA's website, and find out what's coming at you, because you shouldn't just be giving it away. Okay, thanks so much for joining us.

Yeah, securityindustry.org, Andrew, I think. SIA dot org.

All right. Thanks so much, and we will talk again soon.

Always, Andrew. Mahalo.