Good afternoon, everyone, and welcome back. This is a panel on self-sovereign identity, which is at all times a rich and very meaningful topic, probably even more so given the recent Equifax data breach that you're all aware of, which I'm sure will come up. But as part of setting the stage for this discussion, to the extent you're not aware of it, I also wanted to mention another recent data security issue, which came up in Estonia. For those of you who follow e-government-type initiatives, Estonia is generally regarded as a leader in that respect, and it recently identified a security issue that would affect 750,000 citizens, which in a small nation like Estonia is approximately half the population. Similar to Equifax, which is also an issue affecting approximately half of the population. So I just wanted to set the stage with that. And for those of you who don't know me, to introduce myself as well: I am a lawyer and a partner here at McCarthy Tétrault in the Financial Services Group, and I lead a fintech practice. I'm very happy to be here today moderating this panel on self-sovereign identity. The rest of our panel you probably know quite well. One of our members is Andreas Antonopoulos, who, as you all saw yesterday, gave a very inspiring keynote, and whom people in the crowd would know quite well. Joe Kappler has joined us from Seattle. He's a partner at Perkins Coie, which is a leading law firm in the blockchain space, and he's written a white paper on self-sovereign identity that we'll be talking about further. And Emma Scott, who you also probably know as well from this morning's session, if not from before, a very well-known person in the AML compliance space. So I thought it probably made sense for us to start by turning it over to you, Joe, to talk about the white paper that you worked on in this area, and I think we'll take it from there.
I think it'll be a pretty dynamic discussion, and I think there'll be enough food for thought from that for us to go from there. Thanks. So first and foremost, I really hope this is not just us talking at people. So we're gonna have a discussion, get some ideas out. But I'll just set the table. My background's in privacy and data security. So I'm technically a privacy lawyer who sits in the commercial litigation practice, and I spent a lot of my time dealing with identity theft and cybersecurity breaches, and also doing enforcement of terms of use for various social networks and other sites. So we became familiar with the concept of taking an identity, or hiding one's identity, and doing things on the internet. And when I moved and pivoted into the blockchain space, one of my roles at the firm was to bridge the gap between the fintech people and the privacy lawyers. And the first mistake we made, I think in 2014, was naming our blog virtualcurrencyreport.com, because just as soon as we did that, everyone started calling it digital assets and digital currency, not virtual currency, and of course the terminology has moved on again since. But if you want to see the white paper, it's on virtualcurrencyreport.com and you can download it. We also have some white papers on smart contracts, on the concept of property in Bitcoin, and this one on self-sovereign identity. So just real briefly, how many people know what self-sovereign identity even means? Okay, so I'll take just a second to talk about what that concept means, and then I won't talk anymore. What it means is this: today, the way that we know who you are, when you go to a bar, when you sign up for healthcare, when you get a job, when you get government services, when you get paid by your employer, the way that we verify that you are who you say you are, is a mishmash of truth claims about you.
For example, when I go to the Department of Licensing in Washington and I present myself, they ask for evidence that I am who I say I am, like my birth certificate. They take my picture, they put it on a card, they keep a record in a database. But those databases don't really talk to one another very much. I mean, they may have a loose connection, but your identity is fractured between your presence online, through profiles that you may create, like on Facebook or through some social networking site, maybe a LinkedIn profile, and databases kept by hospitals, educational institutions, the government. All those databases store your information, and there are two results of that. The first is you don't have any control over it. Once you give it up, it's out. And the second is that they create a huge focal point for attack. I mean, all of the data of a group of people is concentrated in one place. And so there are a number of entities that are working on this. You can name a few, like Civic, and there's a company called Sovrin. There's a bunch of other companies that are developing ways to leverage blockchains, decentralized ledgers, and this concept of being able to automate the process of creating a truth claim, putting it on a blockchain, verifying that it's yours, that you control it, and then being able to leverage that fact as a way of transacting. And so the concept of self-sovereign identity means that rather than relying on Facebook to tell people who you are, like, whenever I log in with Facebook, I'm literally telling Facebook, please vouch for me. I'm saying, here, I'm going to push a button, I'm going to push your record of me to some third party who's going to rely on you to tell them that I am who I'm saying I am. And that's really awkward. It's not really who I am.
Another scenario is when I go to a bank and I sign up for services. How much of my personal information does the bank really need to give me an account or conduct transactions? What do you guys think? Too much. Well, no, how much do they need? Do they need to know my name? And after I've created my account, do they really care that my name's Joe? Do they need to know where I live? Not really, right? But yet, when you sign up for an application, there's all this stuff you've got to give. And the reason they do that is to have some high degree of certainty that I'm not lying about who I am, and to verify certain facts, like that I make a certain amount of money, that I have a stable job, that I live in the right state for their accounts, that I'm not in some prohibited country. But once that verification's taken place, all they really care about is that I'm the guy who has the right to that account and that it has a certain amount of money in it. Well, a lot of that is, in my opinion, mostly security theater. Like, you go to a venue, for example, and they have a bouncer outside and they say, ID, please. They take your ID and they look at it, and they look at your face and they look at your card. They're not checking for age, right? I mean, sometimes they are. But let's say they're not checking for age. Let's say it's a venue that's open to anyone, and they'll check everybody's ID, and they're looking at the card and they're looking at your face. And so you have a card that has a name on it. What did they learn from that? Absolutely nothing. Are they waiting for that moment when, out of the 4,000 people that come into the venue, they're going to pick one of the cards and it's going to say, Osama bin Laden, we got him, we got him? Do they have a memorized list of suspicious people? Are they subscribing to the BOLOs and APBs of the local police department? No. Nothing. It means nothing. At best, it means nothing.
At worst, it means that they're going to use all of their implicit bias to profile people as suitable or not suitable based on that, and they're going to use that as a checkpoint. And even worse, they're going to take your ID, dip it into a card reader, and make a little honeypot so that they can lose that information and expose you. The biggest problem with identity is that identity is a very poor proxy for risk, a very poor proxy for trust. And it is assumed that it means something, as if human behavior doesn't change, or as if the past is a good predictor of your future behavior. And none of those things are true. My interest in self-sovereign identity is this: what if we take the idea that a blockchain allows us to conduct transactions without identity and with trust, and use that to drive a wedge between those two concepts and say, identity is not trust, and trust does not require identity? And once you split those apart, what do you need my identity for? You don't. And I think that gets into an interesting one, especially when we get into things like AML and KYC, because we have entities like banks, money services businesses, and all different sorts of entities that have to collect, under certain circumstances, your name, your address, your occupation, all of these other pieces of information that they are then required to be custodians of, in most cases for the next five years, because they've been deputized. As far as that information goes, this is fascinating to me as someone who practices in AML. Pragmatically, if you're regulated, you have to do that stuff. It's hard to try not to do it. But then practically, just as a risk manager and a practicing geek, that's a terrible idea. Because I don't think that anyone, be it the largest bank or the smallest single-person MSB, is properly equipped to be a custodian of information.
That's not their job, that's not the core of what they do, and not to pick on any particular bank, but you can have a million employees, and you can't tell me that every one of those employees is ready to be a custodian of all of the information they have access to. Well, and none of them is as interested in protecting your identity as you. And that's the whole point of self-sovereignty, is that you have control, that you have the right to dictate, that you have the right to present what you're gonna present, and not present what you don't want to present. And I wanna push back a little bit on the statement, well, banks have to collect this information. That's not true. They have to by law, as it's written today. But the only reason those laws exist is because there's no better way. There's no better way for a bank to satisfy the two concerns, which are: are we dealing with someone we don't or can't deal with, and do we know that you're not lying to us about who you are? And those two questions are important to a financial institution because they could either lose a lot of money or get into a lot of trouble. If there were a way for me to just tell a bank, I'm not on that list, and I do have this amount of money, or I do live in this place, or I am this type of person, I believe that if that state of technology existed and it was adopted, there would not be the need for the AML and KYC laws that we have today. What if I told you, Neil, that one day you will not need to tell a bank, because banks will no longer exist? What is this incredible fascination with proving who we are to everyone? To me, what this speaks to, and it's not against you, it's more general, is this slippery slope, almost, where we have some goals in mind, and these goals are good goals, lofty goals: consumer protection, prevention of crime, protecting against terrorism. Lofty goals.
Then we take the most obvious way to do that, ignore all the negative consequences, and say, okay, to do that, we have to identify everyone. And we don't for a moment question whether that is effective or what the negative consequences are, and then we completely forget the original goal, and the means becomes the goal. Now it's not about fraud protection or consumer protection or counter-terrorism; all that matters is KYC. All that matters is knowing the identity. And we forget why we wanted the identity in the first place, and then we don't even need to think about whether it works anymore. The problem with that is that four billion people are sacrificed to poverty so we can maintain that illusion, because identity in itself is privilege. The ability to have a properly verified identity, and the history to back that identity and make it worth something, is one of the most privileged positions we have in society, and more than four billion people don't have it. So that's the problem, because we're punishing people with that. So let me flip the question back to you: when is identity necessary? I think very rarely, if ever. That's not an answer. When is it necessary? I don't know. Okay. I can give you an idea, right? Anybody seen the movie Catfish? To the recipient of that ruse, identity was pretty important, right? If you're in a property dispute and someone's claiming to own your property, identity is important. If I present myself to a group of people and I claim to belong to the group, I'm an employee, I work here, identity is important. No, not identity. The attestation of the truths that you just said: ownership of the property, access to the building, being of age, having the qualification to drive a motor vehicle. None of which has anything to do with where I was born, where I live, what my name is, what my eye color is. All of those things are means to assert that truth.
And of course, that's what self-sovereign identity is: the ability to assert the truth without revealing your identity. I completely agree with that. So it's not identity that's needed. It's secure attestation. And that's a very different thing in security. I think we do need to drill down further here, because we're confusing different concepts. Identity is such a nebulous term, and there are different concepts of authentication, and when that's needed, and verification, and all sorts of subsets of identity. What we right now think of as identity is really many different things, and we need to partition them out for different uses and figure out how to do that. So I think that's what we need to focus on in terms of drilling down further: what is the future of identity, and how can we design something that better suits the end goals that we're trying to get at here? For those who don't know how self-sovereign identity applications are being designed today, it might be worth laying out how it works. Let's take a newborn. A person is born in a hospital, and you divide the world into three parts: people who are making an attestation, people who verify the attestation, and the people who need the attestation of some truth claim. So let's take a baby who's born. A baby's born in a hospital. The two proper people to make the attestation that that baby was born are probably the doctor who delivered it and the parents whose child it is. And that could be rolled up into maybe the hospital, which would be a trusted authority on that question. And then, without belaboring the point, imagine going through life, and every time you have an interaction like that with some trusted authority, whether it be, let's start small, the Social Security Administration, rather than them just logging it, they verify that you are a citizen of the United States.
You go get that attestation. Everybody trusts that attestor. And over time, you collect these attestations about yourself: that you're the child of these parents, that you live at this address, that you go to this school, that you have this kind of education, that I passed the bar exam, that I'm a bona fide lawyer, I have a license. And at some point you have this collection, almost like a wallet, right? Of all these attestations. And then, and this is your point, who I am no longer matters. It really doesn't. And identity kind of shifts into the background. It's not important who I am, certainly not my eye color, certainly not my race, certainly not my gender, certainly not my socioeconomic status, right? The things that matter are: what does the person I'm transacting with need from me? Like, I need you to verify that you're over 21. I need you to verify that you have a law degree to represent me. That, in a nutshell, is the future state that self-sovereign identity is supposed to represent. And I think, drilling down into that, we need to think about: what is the role of government? What is the role of corporations? What is your role as an individual? Because I think right now we've essentially surrendered identity to the government, the driver's license, and in many respects to banks, in terms of access to a lot of other services. So we need to think about what role there should be for the different parties in the future. And I started with the example of Estonia and what happened at Equifax, and I think that also shows the risks for both of those, the roles of government and the roles of corporate entities. But on the other hand, as private individuals, there are different levels of sophistication and interest, and not everybody has the capability or the desire to maintain a very complex identity structure. So what are the different roles of the various players going forward?
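To make the attestor / holder / verifier split described above concrete, here is a minimal sketch in Python. Everything specific in it is hypothetical: the `WA-DOL` issuer name, the key, and the use of an HMAC as a stand-in for a real digital signature. Actual self-sovereign identity systems use asymmetric (public-key) signatures so the verifier never holds the issuer's secret; the point here is only the shape of the data, narrow claims in a wallet rather than a full identity record.

```python
import hashlib
import hmac
import json

def issue_attestation(issuer_key: bytes, issuer_name: str, claim: dict) -> dict:
    """Issuer signs a narrow truth claim (e.g. 'over 21'), not a full identity."""
    payload = json.dumps({"issuer": issuer_name, "claim": claim}, sort_keys=True).encode()
    sig = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return {"issuer": issuer_name, "claim": claim, "sig": sig}

def verify_attestation(issuer_key: bytes, att: dict) -> bool:
    """Verifier checks the issuer's signature; it learns only the claim itself."""
    payload = json.dumps({"issuer": att["issuer"], "claim": att["claim"]}, sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"])

# The holder's "wallet" is just a collection of such attestations.
dol_key = b"wa-dol-demo-secret"  # hypothetical issuer key
wallet = [
    issue_attestation(dol_key, "WA-DOL", {"over_21": True}),
    issue_attestation(dol_key, "WA-DOL", {"licensed_driver": True}),
]

# A venue only needs the one claim; name, address, eye color never change hands.
assert verify_attestation(dol_key, wallet[0])
assert wallet[0]["claim"] == {"over_21": True}
```

Note the design choice: the relying party asks for exactly one claim and verifies the attestor's signature on it, which is the "slivers of identity" idea the panel returns to later.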
Can we also carve out some space for anonymity as not only a viable but a valuable expression of the fundamental human rights to privacy, association, and freedom of expression? Anonymity is enormously powerful, and yet culturally it's seen as more and more marginal. And I think we need to fight hard against that narrative. You can do it on a very personal basis, and I can challenge all of you to try to do that from now on. When somebody asks you, what is your name? Lie. Just try it. And see how often they actually need your name. You're buying a coffee at Starbucks. Your name, sir? Montezuma. Why not? Do I have to have that person shout out my actual name to a hundred strangers? Do that, and repeat it. And you'll notice that all of the people who say they need your ID, they need to see picture ID, they need your name, they need your date of birth, when you present them with the fact that I don't have ID, I don't remember my date of birth, I was adopted, I don't have a home, I'm homeless, see how quickly they suddenly discover they don't need that information and can provide you with services and access. Run that test. Privacy and anonymity are cultural, right? And there's a big contrast, for example, between Europe and the US especially, in this respect. So I think in many ways people have traded privacy and anonymity for convenience, especially with the internet, really. Transactions were much more anonymous before that. And so, are people making that choice? And to what extent should we be intervening in that? And should we be giving more choice, really? I don't know that all of it is a choice. And I say that because I've watched people just give up all of this information, information that is especially valuable, for trinkets, or for nothing. I'm checking out at the supermarket and they ask, man, what's your postal code? Pass. I just say pass. I don't even answer the questions. I just say pass.
And every now and again, someone has to call a manager to figure out what the button is for pass, but I don't need that to check out. It's much more fun to say 1600 Pennsylvania Avenue. Pass. Because then they get the junk mail, too. Like that. I mean, I think those are fascinating, and the question of becoming a honeypot for information is interesting. The question of trusted parties, I think, is interesting to me. You mentioned Equifax, and if anyone wants to see it, I have the Estonian ID card and what the little dongle looks like. But when there was a security vulnerability, I immediately got an email about my Estonian e-residency that said, hey, there was a security vulnerability. We found it. We don't think anything went wrong, but we want you to know that you probably can't do the things that you thought you would be able to do right now. Sorry. And I think that's vastly different from being told you lost all your personal information. Do you think that's different because it's the government, or do you think that's different because they're just handling it in a different way than everyone else? I'm going to say that there's a different culture. I'm going to say there are different economic interests in some respects. But I also just think that they were prepared and better equipped to handle it, and had a culture, in an organization, that was built on respect for privacy, and had a very different mission, as opposed to: come back and see us in a few days, give us all your information, and sign off on the idea that you'll never sue us because of this breach. I didn't have to do anything. First of all, they detected it before there was a breach, as far as we know. And then I didn't have to do anything to protect myself. That was done, and that was handled. And those are very different in terms of approach, but they're still both reliant on a trusted third party. I'm still not sovereign.
And I want to make that clear in either one of those situations. Just a follow-up experiment for you guys: let's look at how anonymity and those four billion people who don't have the right to an identity ram together. We've been working with a non-governmental organization in Africa that is working on a self-sovereign identity solution for refugees. And they have a keen need to be anonymous. Many of them are escaping persecution. Many of them have people who are looking for them individually. I represented a woman who had been raped in her hometown, and the people who had raped her followed her to the refugee camp and went tent to tent to find her, because they wanted to humiliate her further. And so not only did she have a need, A, to be anonymous as to them, but B, she had every need to be known to the UN and to the United States in applying for refugee status. And in addition to that, she really had a keen need to change her identity, to break it in half forever, right? So she could make a break with the past that would forever follow her if she held on to her original identity. And the concept that you could memorialize the facts needed to gain refugee status without disclosing the live, touchable, you-are-who-I-know-you-are aspect of who you are is really powerful for people in protection programs, who are refugees, who are in a position of powerlessness because of the system and the environment. And so it really reframes identity, right? That concept is not nearly as important as the need to be able to transact with slivers of it, based on your situation or risk. There's a counterpart to that. You talked about the three roles of verifiers, consumers, and producers of identity, or self-sovereign identity. There's a risk we're not talking about, apart from the honeypot and the hacks and the identity fraud that can occur when our information is stolen.
The other big issue is the fact that a system of identity that is total in its scope, that is comprehensive, that is efficient, that is effective, is a dystopian nightmare. Verifiers of identity at that point become all-powerful, able to control our ability to interact with the rest of society. And that is a very, very big risk. In most Western liberal democracies, we discount that risk because we cannot remember the last time we had a malicious, oppressive government. But it wasn't that long ago. And if you go and work and live and speak to people in countries where that memory is as recent as 30 years ago... I had some very revealing conversations talking about identity with Argentinians. Their government 35 years ago was throwing dissidents out of aircraft. That was their method of execution, and disappearing 20,000, 30,000 people, separating children from their parents. That's government. That's the verifier. That's the identity holder. Do you really want to give them the ability to be efficient at it, at collecting and controlling access to identity? So the self-sovereign part of identity is really important, because you want to remove that power from the verifier. How do you develop a system where self-sovereignty exists, yet you have a backup plan for if you lose something or forget something that attaches you to that self-sovereign identity? That's a very good question. I think it emphasizes another aspect of this, which is that for us as a species, identity is not a mechanistic function. It is an organic function. And identity as an organic function has constraints. Those constraints are the Dunbar number, the efficiency of organization, and the limits of human memory, which means that our identity is only designed to really function in a small society where we know each other. And it's also designed to be forgotten. You cannot remember all of the bad things that somebody might have done.
And the ability to forget gives you the possibility to forgive, gives you the capacity to change as a human being and not be tied to that one mistake you made when you were 14. If you turn that into a mechanistic, time-stamped, fingerprinted database, there is no forgetting. There is no forgiving. And as a result, there is no possibility for human change. You've turned that organic function into a mechanistic monster. And that's the other aspect of it: how do we introduce within identity the mechanism of forgetting? I mean, the other thing, and I agree with that, the other way to answer your question is that in a decentralized system where your identity comprises a lot of individual truth claims, it's hard to imagine a world where you lose all of it at once. Right? I mean, in blockchain terms, maybe you lose your private keys and you're screwed. I get that. But that's a problem that exists for everything blockchain. Key management is the Achilles' heel of blockchain. It is. And until someone truly solves that issue, blockchain is going to continue to be in incubation. That's just my own personal belief, that key management has the potential to be the thing that keeps blockchain from ever coming to pass in reality and doing everything the futurists say it's going to do. I want to react to your point about mechanization. I mean, isn't there a flip side to the ability to forget? How much does that discount the value of memory? You know, September 11th just happened in my country, and every monument I visited in New York has the word remember on it. And that serves a pretty valuable purpose. Maybe there's another value to that, and that's what you're trying to identify. But, you know, do I want to know that someone robbed a bank five years ago? What do we remember, though? First of all, I was there, because I lived in downtown Manhattan that day. Secondly, what do we remember?
Because on every one of those monuments I would like to take a Sharpie and write underneath: remember, the Saudis did it, right? Because that's the one thing we don't remember, the fact that we know exactly who did it, and they are our best-funded ally, while we've spent 17 years invading the wrong country. Memory, when written as the history of a manipulative system that is designed to profit from war, is not true. There is no true memory about those events. And if you take that to individual identity, you have the same problem, which is that the ability to capture truth as memory is in itself privilege. And we don't have the privilege of truth. None of us do, because that's not what we remember about 9/11. Yeah, I'm not sure what that means in relation to an individual truth right now. The whole point of truth claims is that you verify small enough facts that you can verify a truth. I mean, you're making large statements, philosophical statements, while I'm talking about narrow, provable facts, like, I can prove that I was there. Has anybody watched the movie Brazil, where a fly gets into the typewriter and they mistype Buttle for Tuttle and end up destroying the life of a completely innocent human being? Right? Look up Hamdi versus United States and learn about the Afghan peasant who had that happen to him in Guantanamo Bay. It's not a broad philosophical statement. The verifiers of identity tortured people for 10 years under mistaken identity. They only attested to a very narrow truth, that that person was someone who had done something, but he hadn't. How do you erase that mistake in a digital system? How do you fix that? And that's the problem, because we rely on identity to judge behavior, both past and future behavior. And when the system is given the power to do that, the mistakes it makes are atrocities. I thought that was a fascinating question, in the sense that we relish doing that as human beings. Right.
It's one of the things that I like most about Bitcoin, because code doesn't care who you are. Yeah. And I think that that's a really beautiful thing. But even just thinking about what you're saying in the context of you and me and everyday people: a lot of my closest friends work in compliance and law, and the funny conversation that us nerds have is, if you did something really bad, how much money would it have to be for, and what non-extradition jurisdiction would you take off to? And we have those... Friday night conversations. I know, the kind of conversations you only share with your lawyer friends. But that's a really weird, privileged conversation to be having. And I get that the other side of that coin is, how do you avoid, if you're not me, having done nothing wrong and going to jail, which happens to a ton more people than the people who get to sit in a hot tub and have that non-extradition conversation. And that's a real fact. I mean, the way that we deal with things is pretty screwed up in that regard. And so I question the idea about mechanization, in the sense that if we can have a system that cares less about some of those things, if we can have something that looks more like code and less like judgment, does it also have the potential to be freeing? Well, part of my concern with all of this is that, to me, it comes down to centralization versus decentralization. If you have centralized systems in whatever form, right, even if they're centralized just in terms of your control of your own self-sovereign identity and the truths that you're willing and able to attest to, one of the problems you have is that the data you collect and the principles you espouse diverge over time, meaning that today you say, well, speaking up about X isn't a crime. Let's collect data, because this isn't a crime.
And then 10 years from now it becomes a crime, but the data still exists and has persisted past the expiration of the principle that protected it, right? Take examples from history: apartheid. You get married to a person of another race today, you get married to a person of another gender, or your gender changes, or something like that. Today that principle is perhaps protected in your country. And 10 years from now your country changes, but the data is immutable, and that principle is no longer protected. That principle condemns you. What do you do then? A very recent example of that is the Dreamers. The Dreamers program in the United States became a honeypot, where in the process of applying for Dreamer status, they provided exactly the evidence needed, which, by the way, was protected under the Fifth Amendment; they weren't required to self-incriminate. They self-incriminated, so that now that the law has changed, the immigration services have the evidence, self-professed evidence, to deport them, right? So that's an example where the principle was great, collect the data; the principle changed; the data is forever. Touching on what Andreas was just speaking about, and you were alluding to the idea that more is better: I think there's a real bias in the collection of data that we need to be aware of, right? The more data you collect doesn't mean that you're getting something that is more accurate. And each time, you're making a decision: do I collect this data or do I not, and how do I formulate the way I collect it? So I don't think we can get away from the fact that there's always going to be bias, there are always going to be issues with the concept of identity. We just need to build the right controls, in terms of transparency and the ability to correct, in order to better maneuver going forward. Well, and for anyone who's developing applications in this realm, that's a really important question, right?
You think about the European right to be forgotten. How do you get out of a blockchain? I mean, it's tough, right? And one way to do it is you don't put the data that you want forgotten on a blockchain. You put some kind of sample of it that's divorced from what it really means. That's one simple way to play it, but it's a very good point, right? If you're a developer of that technology, you have a certain, and maybe you'd agree, almost moral obligation to think carefully about how it's designed to collect information, and to think through: okay, this makes sense now, when Dreamers can register for a good thing. What happens if that's taken away and we've collected the data? Have we planned for that? And the warnings were very clear. Sure. A lot of the people who understood the potential for abuse made very clear warnings that that's what would happen, and those warnings were dismissed as outrageous, right? I mean, so I think the idea of the right to be forgotten goes back to the universal right of human privacy. And maybe we should do a thought experiment and tackle some of these ideas. Like, what if you could reset back to factory defaults, wipe your identity clean, and start fresh? A witness protection program for individuals. Let's call it the universal witness protection program, that allows anyone to reinvent themselves, to protect themselves against a stalker, to protect themselves against their own past, to protect themselves against whatever. A universal witness protection program. Wouldn't it have been nice to have had that in 1932? How do you deal with the misuse of that, though? You deal with the misuse of that by penalizing action, and not identity or history. And by not assuming that you can predict the actions of someone based on what they did before or who they are.
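The off-chain tactic mentioned above, keeping the data itself off the blockchain and anchoring only "some kind of sample of it that's divorced from what it really means," is commonly implemented as a salted hash commitment. This is a minimal sketch of that pattern, not any particular project's design; the record format and field names are invented for illustration:

```python
import hashlib
import secrets

def commit(record: bytes) -> tuple[str, bytes]:
    """Return (commitment, salt); only the commitment goes on-chain."""
    salt = secrets.token_bytes(32)
    return hashlib.sha256(salt + record).hexdigest(), salt

def verify(record: bytes, salt: bytes, commitment: str) -> bool:
    """Whoever still holds the off-chain record and salt can prove it matches."""
    return hashlib.sha256(salt + record).hexdigest() == commitment

# Hypothetical personal record, kept off-chain under the user's control.
record = b"name=Jane Doe;dob=1990-01-01"
onchain, salt = commit(record)
assert verify(record, salt, onchain)
# To "forget": delete the off-chain record and salt. The salted hash left
# on the immutable chain is a fixed-size, random-looking value that no
# longer identifies anyone, and the salt prevents brute-force guessing.
```

The salt is what makes deletion meaningful: without it, a short record like a name or birth date could be recovered from the bare hash by exhaustive guessing.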
Because we see this constant shift, and it's a modern phenomenon, I think, in our societies, where the crime is not the crime; the crime is the means, either the money transfer or the conspiracy or the other thing that led to the crime. Why? Because the crime is too hard to prove and prosecute, so we're going to get you some other way. And the more we move away from that, the closer it gets to thought policing, the closer it gets to predictive policing, the closer it gets to presumption, profiling, and condemnation of innocent people. And I think that's the real issue. So yes, you address people's bad acts, not the means that they use to get to a point where they could commit those bad acts. I mean, I think part of the problem, though, is that recidivism complicates that. This is a legal concept that we've struggled with for centuries. The people who are deterred by a good, solid punishment aren't the ones you worry about; it's the ones who aren't. And that's the reason why there is this incentive, and I'm not arguing that it's a good one, I'm just suggesting there is an incentive that exists, to identify that person who did it twice or three times and say, no, the more times you do it, the more it is predictive of your future behavior. And I don't think that's right. They're the ones who are most adept at creating as many identities as they want, discarding those identities, rebuilding the profiles, stealing other people's identities. They're not getting caught up in the mesh. It's everybody else, who has a mortgage and a house and a fixed address and an Equifax profile. Your average non-recidivist sociopathic criminal, their Equifax profile is probably very thin. Now, to my mind, that's not a good argument for wiping identity. I think we're dealing with a different time and age. There was a time when people's whole history wasn't available with a search. Now it is.
And so the right to be forgotten has become more valuable as a result, because people make mistakes when they're teenagers, or at any moment in time, and you can't erase that mistake anymore. So, Andreas, the way you're characterizing the right to be forgotten is almost like a bankruptcy. Yes, it's an identity bankruptcy. Absolutely. And so maybe that is something that should be part of identity going forward. Yes, though it's a very different idea. And maybe you have to go through a process in order to declare identity bankruptcy, a process of acknowledgment, restitution, retribution, or whatever else. And it is restricted to people who are not heinous criminals, right? So Dick Cheney shouldn't be able to do it. But... Perhaps it has a tail, just like a bankruptcy notation lasts a while after you do it, but then eventually goes away. You know? Right. But we have to think about all of these organic components of identity that work at a small scale, because if you take them and operate them at global scale, with ruthless efficiency, with digital fingerprints, you completely change the nature of that social interaction. And you remove all of the protections of that social contract. And potentially you vest enormous power in some entity or some collection of entities. And the problem is that the entity that has the power, their identity can change too. Their principles, their powers, their targets. So I think that's the real risk, because we understand identity in a very visceral, personal way when we deal with it in a small social group. And we can't conceive of identity at the level of a global, machine-learning-driven database, which does not share our humanity, does not share our principles, and does not share our limitations. And that, to me, is always scary. Well, and one of the things that seems like such a travesty today is that the way we deal with the errors in how we do identity is just with insurance. Mm-hmm.
I mean, we're numb to breaches, right? How many of us have received word that yet another one of our services, one that frankly has a lot of information we really care about keeping private, Equifax is a good example, got breached? And it's like people don't even react anymore, which is exactly what they want. I mean, those entities, the companies that do it, the PR firms that support them, they want you to just gloss over the letter and keep on going. And why wouldn't they? They have a lot to lose, and so there's a good reason they do that. But at the same time, the whole system, it's just like the mortgage crisis in the U.S., right? Everybody banked on FDIC insurance. Everybody banks on, you know, look, I'm somehow indemnified for this. But that's a lie. It is a lie. Well, the company may be indemnified for it. You know, did Equifax really have to pull out of pocket the amount of money this breach cost them? No, they have insurance for that. And that insurance, I wish it extended to the individuals affected, not the entity that caused it. Because, you know, no one's offered me a policy for my identity. There's a fantastic bot online, as of two days ago, by a guy called Joshua Browder, I can't pronounce his name exactly. He's the guy who created a bot for contesting speeding and traffic tickets and things like that. He has now created a bot that allows you to sue Equifax in small claims court, for up to $25,000, in a single click. It just prints out the filings; you sign them, you put a check in, you send it to the small claims court. Class action? Forget class action. Tsunami action: 143 million small claims suits against them. And then we'll take the insurance money. Thank you so much. You know how much money that is? Equifax can't bring their lawyers to small claims court. Of course they can. I'd like to play with that a bit. There are two ways to deal with these big, monumental societal challenges.
One is the very serious, formal, methodical, process-based appeal to government and regulation and reforming the law. And the other one is what I'd like to call the Kokopelli approach. Kokopelli is an ancient Native American deity, the deity of mischief and playful chaos. And the Kokopelli approach is to bury the system in noise, to bury the system in false data. Each of us plays with the system and playfully destroys it from the inside. Don't just send your address change to your bank when you change addresses; send them 30 address changes a year. Use a different name with every service. Pollute it with invalid data. What is your date of birth? I give a different one at every service. Equifax leaked my record. My record didn't have a valid address for the last five years, doesn't know anything about half the things I do, and is absolutely full of junk, like 20 pages of completely false addresses and garbage. In there, somewhere, there's some signal. There are two ways to deal with security. One is to diminish the signal. And the other one is to crank up the noise, and we can all do this individually. I think that makes for a beautiful, cultural, grassroots movement, which is to jam the system with junk. We don't have to go on and on; I think this is a good time to pause and see if there are any questions to grab. I need some noise. Actually, I do have a question for everybody, because everybody's been talking about American records. They didn't just lose American records; they also lost Canadian records. And there's a Canadian process where you give your social insurance number and personal information, and they'll search for you and tell you whether you were compromised. And then you can go back on a later specified date and sign up for a monitoring program, where if you sign up for it, you are signing away your right to sue them, through an arbitration clause or otherwise. Which may not be enforceable. Which may not be enforceable.
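The noise-jamming idea above, crank up the noise instead of trying to diminish the signal, can be made concrete with a toy simulation. This is a playful sketch of the arithmetic, not a recommendation engine; the birth dates and decoy pool are entirely invented:

```python
import random

def pollute(true_value: str, decoy_pool: list[str], n_decoys: int) -> list[str]:
    """Bury the one real data point in a pile of plausible-looking fakes."""
    records = random.sample(decoy_pool, n_decoys) + [true_value]
    random.shuffle(records)
    return records

# Hypothetical: one real birth date, handed out alongside 19 invented ones
# to different services over the years.
true_dob = "1979-03-14"
decoy_pool = [f"19{y:02d}-{m:02d}-{d:02d}"
              for y in range(60, 99) for m in (1, 4, 7, 10) for d in (3, 12, 21)]

leaked = pollute(true_dob, decoy_pool, n_decoys=19)
# An attacker who obtains the leaked records now sees 20 equally plausible
# dates. The signal still exists, but a blind guess is right 1 time in 20,
# and every extra decoy dilutes it further.
assert true_dob in leaked and len(leaked) == 20
```

The point of the sketch is only the ratio: the cost of the attack scales with the number of decoys the attacker must rule out, which is exactly the "crank up the noise" side of the trade-off.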
Yesterday, the other big announcement was Apple's Face ID, which raises the whole question of passive identity: you walk around every day past security cameras 23 times a day. What about the passive identity that can now be assembled about you, the shadow profiles? Are we screwed? Yes? I didn't quite catch the question. Could you repeat it? So the question was about Face ID, and whether we now carry elements of identity in the form of biometrics that can be easily aggregated by machine-learning algorithms from background video and things like that. And it's completely unavoidable, whether or not we choose to share our whereabouts, our activities, and so on. And people are rightfully worried. But there's one great response to that. If you want to see reform of identity, if you want to see reform of machine learning and biometrics and things like that, you go to something the French call sousveillance, which is the opposite of surveillance. Sousveillance means watching from below. So what happens if every one of our smartphones uses its new AI chip and machine-learning algorithms to do facial recognition of every judge, federal official, congressperson, and police officer who has a public record, anywhere they appear in a background video or photo, and tracks that in a public database? They just walk past, and 2,500 smartphones taking selfies along that boardwalk log the fact that this congressman was walking next to this lady or that person or whatever. And you watch them like a hawk, using the very technology that they built to watch you. And guess what? There are seven billion of us. There are only a couple million of them. Asymmetric information warfare is the most effective way to turn the tables. If you create Big Brother and you make it stare, sometimes the abyss will stare back. We are the ones who will stare back.
Pull out your phone, film a cop today. So, Joe, I wanted to present this to you. I read your paper, which is great. The concept of self-sovereign identity is almost as important, or even more important, than the concept of self-sovereign financial freedom. And you made a very insightful point, and then you glossed over it, and I agree with you: the Achilles' heel of blockchain is key management. And you said, I hope we find solutions. And the question is for you, but it's also for Andreas, who has a company that provides solutions for key management. And the question is: so what are the solutions? We need practical solutions to bootstrap this. Is it going to be hardware? Is it going to be multisig? And talk about some of these issues with key management, for example recovery and revocation. So: the practical, logistical problems that we have, and the practical technologies and steps that you think may solve those problems. Since I don't have a solution, I'll talk about the problems, and then I'll let Andreas talk about the solutions. When I say that it's the Achilles' heel of blockchain, what I mean is that every solution I've seen to date relies on some centralization of the verification of the thing you're trying to hide. And so all you're doing is replacing trusting a central authority with the data, with trusting one with the key, or access to the key, one of the two. And so in the early days, we were helping set up a few exchanges. And they were like, OK, we've got to be super, super secure. Everything needs to be super secure. And you'd have a pooled wallet, and they'd be like, and because we're pooling funds, they're going to be really, really secure. And then every user's access to move the money was a username and password. And I almost fainted. I was like, guys, that's just the same old problem. And so the only thing I can say to that is, I mean, this requires completely creative thinking. It requires a revolution in the way that we do things.
And my own personal belief is that it'll have iterations. But the one right now that tends to make me personally feel secure is the ability, much like what the iPhone does. And I disagree with you that the iPhone, as it's currently designed, and as the service works, exposes that threat, because the biometric data is resident on the phone and it's not being sent back home. And that's an important fact. So my answer to you would be a piece of hardware that uses something that is genetically, biometrically unique to me to allow me access to my keys. It's the closest thing that my creative mind, which is not that creative, can imagine today that would work. And you add a layer on top of that of, say, multi-signature or some other layer of security that says even if you were faking it, you've got to trick other people too at the same time; that can add a level of security that people can get comfortable with. And before we move on: the topic of biometrics could be a whole panel by itself. But there's always a risk with biometrics: if it's lost, it's lost forever. So that's something you always have to keep in mind. That's inherent in the nature of it. It's quite similar to a social insurance number, or anything along the lines of single-factor authentication. What about something where you have a network of trust that, after a certain amount of time of non-interaction, must forget you? Well, that's one option, yeah. So let me address that, and answer the second part of your question, Francis, which was about key management. There's a tendency, especially among Silicon Valley geeks, to assume that every social problem has a neat technological solution that can be written in Node.js and ICO'd tomorrow. And the truth is that key management as a practice, and more generally the information security principles around key management, are a very careful balance between a number of competing interests, and they require the very careful deployment of a number of intersecting disciplines.
So let me explain what I mean. The key interests are achieving the perfect balance between security, against attack, theft, et cetera; prevention of loss, accidental, natural disaster, et cetera; continuity, in case you are incapacitated or die and you want something to be accessible to someone after your death; and at the same time, as we all know, convenience: ease of use, speed. And unless you balance all of those things, users will actively undermine the security to gain more convenience, or will abandon the system because it's not usable. That balance is almost impossible to achieve when it comes to identity, because people do not perceive the risk profile of their identity. Now, the good news is this. We now have an opportunity to combine the mathematical tokens, the keys that we use for controlling our money, with the keys that we use to control identity. And that allows us to persuade users that they need to secure this, because it's their money. And as a result, the identity gets the side glow of security, just because they're really just protecting their money. So we're seeing people now who are learning about information security and digital keys for the very first time in their lives. And they care, because that treasure has some money in it. And that money just went up 20% again yesterday. And, woo! So those are the issues. But actually protecting those keys requires three different disciplines. You have to look at technology: tools. You have to look at people: the social aspect. And you have to look at process. And you have to support tools, people, and process in equal measure. And if you fail in any one of those, it collapses. So it's a very complex problem to solve. The one answer I can give you is that it can't be solved by technology alone. We sometimes use very antique methods to do key management. For key storage, my one most enduring piece of advice is: use a pencil, made of good old graphite, on paper. Why?
Because paper is a recording medium that has historically lasted for 5,000 to 6,000 years without erosion. We can still read papyrus from Egypt. We can still read writings from Sumeria. So I'm not going to put my key on a USB stick. I'm not. I'm going to use old-style technology: handwritten notes on paper. And so you get situations where the obvious technological solution is not always the correct thing. And it takes time to learn that. Well, 25 years ago, none of us could understand where you put the slash-slash, the dot, the www, and whether the colon goes before or after. We learned. We're going to do the same now with keys. And I heard you mention technology, people, process. So, Amber, turning it over to you from that perspective, coming from compliance, you're very attuned to controls. Any perspective on that? I love the CCSS standard, from the CryptoCurrency Certification Consortium, which I actually think you were involved with back in the day. Well, I'm on the board of directors, and I'm on the steering committee for that. Big fan. Biased, but. Fair, fair. They have a fabulous standard. It's auditable. It deals with things like security and key management. And it's really good. But the other bit that I wanted to say is that I think we need a paradigm shift on a personal level. I used to think that key management, sort of figuring out ease of access, was really going to be one of the last-mile problems in Bitcoin. And I don't think that way anymore. I think that having a paradigm shift around responsibility is going to be one of the last-mile problems in Bitcoin. And I say that because I recently closed a bank account, and it took me two and a half hours of my time to get my own money back, and it was an extremely painful process. And I was thinking about that versus, you know, having your own keys and having your own control. It's a huge amount of responsibility. It's a huge amount of responsibility.
And the paradigm shift isn't that we have to make that easier. The paradigm shift is that we have to teach people how to manage it appropriately. We have to make infosec a personal responsibility. We have to make people understand that they need to put the right controls around their identity, around who they are. And I think that's the piece that comes back to self-sovereign identity. We need to empower people and educate people to be able to manage these things in their own lives, instead of designing things that make it easier but are giant security holes. It's got a built-in motivation, though, which I really like, which is, with great responsibility comes great power, to flip that one around. So you have that enormous responsibility, but what it gives you is the power of a bank in your back pocket. And for some people that is quite an exhilarating power. It is really freeing. It's a very interesting juxtaposition, though, against the way that, you know, if you think about the largest entities in the marketplace today, they run on exactly the opposite concept. The idea is: we'll do it for you, right? We will manage things for you. We will manage everything for you. I mean, you know, I do a lot of product counseling, right? And as a lawyer, every once in a while, you have to collect a click, right? You have to. It's just proper risk management. It's formation of a contract. I mean, we're stepping out of what I think is a much more fun conversation about philosophy and the future, but the reality is that if I want to bind my user not to sue me, or bind them to terms that are important for both of us to have a predictable relationship, I need them to sign a contract. But they're online. I can't get a signature. I can get a click. In the United States, that click is important. Guess what happens when I go to product people and I say, can we get one more click?
They freak out and they say, no, no, no, no, no, no, no. If you ask me to get another click, my rate of attachment of new customers drops in half. That, to me, is indicative of how much consumers have sloughed off their responsibility. And one thing I worry a lot about with self-sovereign identity is the fact that it's on you. I mean, at the end of the day, it's on you to be responsible for your own identity, and you don't get to be like, oh, I screwed up, so fix it. What was the metaphor the other day? Let's stop trying to tame the wolves and start... Armoring the sheep. Armoring the sheep? Yeah. Right, exactly. I like that theory better. I think that we don't give people enough credit, as humanity, as consumers. If we're not even trying to educate anyone, if we're not going out there and saying, we can do this better, we can build better systems, we can be more accountable, no one's ever going to. It's not a problem; it's self-correcting. If you give your digital currency to someone else to hold for you, they will lose it. The only question is when. And so there are two kinds of custodial accounts in Bitcoin: those that have been hacked, and those that will be hacked. That's it. So you decide how much longer you want to maintain that illusion, but it will be corrected by reality very, very quickly. If you do not control your own keys, you will lose your money, and you will learn that painful lesson, and then next time you will control your own keys. I've now had accounts at more than half a dozen exchanges that got hacked, shut down, pulled exit scams, or suffered some other disaster like that. I was a Mt. Gox customer, I was a Bitfinex customer, I was a Bitstamp customer, I was all of those. I haven't lost a penny, because my money does not stay on those platforms for more than the 15 minutes I need to do my trade and get my money out. And so that's the lesson.
And you watch that happen again and again and again, and if you don't learn the lesson, life will repeat it for you. But you're really getting at one of the fundamentals of regulation, which is: how paternalistic should we be in regulating? Some people just don't want to go that extra mile, and you're going to end up with a divide, the one percent who can and the ones who can't, right? Can the people as a whole, and should they, have the motivation to actually go through all this effort, as a population? Well, paternalism as a practice means essentially being a parent, right? That's where the word comes from. You always have a choice in your parenting style: you can either remove responsibility, or you can teach responsibility. And we can be paternalistic in either of those ways. And it's a balance, a careful balance, and you can never get it quite right. I'm not diminishing that. What I'm saying is: should our governments be making decisions about which investments are good or bad? Or should they be investing in teaching those things in school? Both of those are paternalistic functions. But one of them is a paternalistic function that assumes you have the ability to learn. And I'm always skeptical of the idea that the peasants simply don't have the skills to vote, to decide, to learn, to make their own choices, so we need some elites to make the choices for them. And sometimes it does smell a bit like that when I hear these arguments that the regulators really need to protect consumers because they're not equipped. Equip them. So, I agree with that. You have a question, right? Yeah, I had a question. Back to key management.
So what happens in the simple use case with private keys, where you write them down on paper and nobody knows, and then you suddenly die, and something happens, and then how do you pass that asset along to the next generation, to your kids, when you've hidden it from everybody else and now nobody knows? So, the vast majority of our practice is estate planning, which Pamela runs at Third Key Solutions. And if you are staying for the legal workshop, I'm going to touch on this. Yeah, so estate planning and continuity planning is this careful balance where you want to make sure that your descendants get your money after you die, the keyword there being after, right? It's a very careful balance of giving enough access credentials to achieve continuity, without giving so much access that you enable premature continuity. Let me give you an example. My wife makes fun of me for being this big technological guru, but I did what you did. I grabbed a piece of paper and a pencil, and I wrote down my private key twice, and I cut it in half, and I put one half with one set and the other half with the other set. One of them is in a lockbox in my office, and the other one is in a safe in my house, and I have a key and my father has a key. That's it. And he has instructions, he lives 200 miles from me, right, that if I pass away, the access to my cryptocurrency is located in one of those two locations. Right. And you have the key. He would have to, you know, go to my house and break in, or my office and break in, but... That's a very good example, a very practical example, of how I've ensured that if I die... I have a will, right? My father knows who the executor is, and he knows how to get that money to the right place, to be disseminated when it needs to be disseminated. People, process, technology. You need the actual legal will.
You need the means to be able to execute on that will, with the key-management technology, and you need to inform the people who are going to pull all of that together when you're not around to explain it. And that's really why it's complicated: most of the people who have expertise in this space, well, lawyers understand the process part, computer scientists and key-management and security experts understand the technology part, but putting it all together is difficult, and it's a multidisciplinary effort. That's exactly what we work on at Third Key Solutions. In fact, it's our biggest effort at the moment. So, I don't want to miss the opportunity before you ask a question. I want to ask a question that really pushes people's concept, because I agree with what you both said about, you know, people being better than that, right? Basically, people have the ability. We can be better; we can be. So I want to ask a question that pushes on that. You're a customer of an exchange, okay? And let's set aside for a second how unwise it is to leave all your money there. Your money is there in the exchange, and you mistakenly visit a phishing site that looks just like the exchange. You mistakenly log in, thinking you're logging in, and you give your credentials to the phishing site, which has a bot that immediately logs into the real exchange, conducts a bunch of trades into their account, and withdraws your money. Simple situation, right? You lost your money. Are you going to take responsibility for that? Or are you going to go demand that the exchange give it back? Yeah, well, I mean, the fundamental problem is that we are now in a new world where the circumstances have changed, but many of the users or customers or people who are trying to use these services have a mental model of what they think is happening that doesn't match what is actually happening.
And that mismatch, that cognitive dissonance, the gap between expectation and reality, to me is fundamentally a user-interface and user-experience design parameter. Design is all about creating expectations that will be fulfilled by the application. So in your design, we should just do a better job of telling them that they're screwed if they lose their key? Or should we not have to? We should not give them the illusion that using credentials to log in via an insecure web browser is a correct way of managing their security. It never will be; you can never stop that attack from happening. I think that, from my perspective, when I look at that example, that exchange is screwed anyway at that point in time, because they have allowed their users to continue without setting up any type of 2FA. I mean, any exchange that I'm going to use... Two-factor authentication, whitelisted addresses, back-off timers. No, the big question is: none of that works. If you have 2FA enabled, they get through it. Let me give you a really scary example. How many of you read the New York Times article, or the one in the Journal, about cryptocurrency gurus getting their phone numbers hijacked? Yes. That's not 2FA. Two-factor authentication means two factors that you control. Your Verizon SIM is not one of the factors you control. It's one Verizon controls. And Verizon can't be trusted to even pick up the phone, and they're a phone company. So no, that's not a factor. When we say two-factor authentication, we mean a cryptographically secured one-time-password application you run on your smartphone. And it has a PIN in front of it, so you don't just give it out at any moment. That's one. And of course, none of these measures works in isolation. Security is not about finding the one measure that defeats all attacks. It's about layering enough of these measures that the cost of breaching all of them exceeds the value that's at the end of the journey.
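The "cryptographically secured one-time password application" described above is, in practice, the HOTP/TOTP scheme from RFC 4226 and RFC 6238, which is what the common authenticator apps implement. A minimal sketch (the Base32 secret below is an arbitrary example value, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)

# The shared secret is provisioned once (e.g. via QR code) and held only by
# the user's device and the server. The phone carrier never sees it, which
# is exactly why this is a factor you control and an SMS code is not.
secret = base64.b32decode("JBSWY3DPEHPK3PXP")
code = totp(secret)        # six digits, rotates every 30 seconds
assert len(code) == 6 and code.isdigit()
```

Because the secret never transits the phone network, hijacking the SIM gains the attacker nothing; that is the distinction being drawn between a real second factor and a Verizon-controlled one.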
One of the problems we have with custodial accounts and honeypots and centralized exchanges is that they put so much value at the end of the journey that there are not enough barriers you can put in the way of a determined attacker to make it more expensive for them to complete that journey. Whereas my personal wallet, trust me, does not have that much value at the end of the journey, and I have put a lot of barriers in the way. So I know that I've got that security balance. Now, if you take a thousand wallets like mine and you put them in a custodial exchange, what does that exchange have to do? A thousand times better security, and that does not exist. So that model fails from the get-go, because you cannot achieve that simple balance of risk and reward. There is always a reward for the attacker. The trick is to make it so dangerous that they'll get caught, so costly for them to break through the barriers, so difficult for them to break through the barriers, that the reward is simply not worth it. Well, I mean, to do that, you have to sacrifice some convenience. And that's part of the answer to this. This is what I was trying to get at: if I can just log in and immediately change the address that my coins are sent to, that's super convenient, if I'm not a bad actor. I prefer the inconvenience of still having my bitcoin.
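The split-key estate plan Joe described earlier, one piece in an office lockbox and one in a home safe, has a more principled cousin in secret splitting. Literally cutting a written key in half leaks half the key at each location, whereas an XOR split leaves each share, on its own, indistinguishable from random noise. A minimal 2-of-2 sketch, as a toy illustration rather than production key management (for more flexible arrangements, a threshold scheme such as Shamir's secret sharing is the usual tool):

```python
import secrets

def split(key: bytes) -> tuple[bytes, bytes]:
    """2-of-2 XOR split: each share alone reveals nothing about the key."""
    share_a = secrets.token_bytes(len(key))            # uniformly random pad
    share_b = bytes(a ^ k for a, k in zip(share_a, key))
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the two shares back together to recover the original key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(32)             # stand-in for a wallet private key
office_share, home_share = split(key)     # e.g. office lockbox, home safe
assert recombine(office_share, home_share) == key
```

The trade-off matches the continuity discussion above: both locations are needed, so a burglar with one share gets nothing, but losing either share loses the key, which is why redundant copies of each share, and the people-and-process side (the will, the executor, the instructions), still matter.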