Good afternoon, good morning or good evening depending on where you are, and welcome to the session on security and privacy challenges for digital currencies. My name is Vijay Mauree and I lead the Digital Currency Global Initiative secretariat in the Standardization Bureau at the ITU. This is the second session of the security thematic track of the DC3 conference, From Cryptocurrencies to CBDCs, organized by the ITU in collaboration with the Future of Digital Currency Initiative of Stanford University as part of the activities of the Digital Currency Global Initiative. I'm joined today by our panelists: Daniel Benarroch, Director of Research at QEDIT and ZKProof.org; Mr. Paul Lloyd, cyber security strategist from HPE and also the Vice Team Leader of the Security and Assurance Working Group of the Digital Currency Global Initiative; and Mr. Scott Carlson, Director of Global Architecture and Strategic Development and Head of Digital Asset Security at Kudelski Security. The session today is about looking a bit ahead into the future, up to 2025, and discussing the security and privacy challenges that could be faced by the implementation of various types of digital currencies, from CBDCs to stablecoins and cryptocurrencies. The session will provide an overview of the current technology risk landscape, and will also explore countermeasures, including cryptographic measures, and progress towards quantum-safe digital currency systems. Each panelist will have 15 minutes for their presentation, followed by a Q&A session where the audience can also ask questions. Before we start, just to remind the audience: you have the Q&A window at the bottom of the screen, and we invite you to type your questions there; the panelists will respond to them during the session. I'd now like to invite our first speaker, Mr.
Daniel Benarroch from QEDIT, to deliver his presentation, which will look at security and privacy challenges for CBDCs and stablecoins. So Daniel, you have the floor.

Well, thank you very much, Vijay, and thank you to the whole ITU team for inviting me to be part of this panel. First of all, the whole world of privacy and CBDC is quite exciting. We're only seeing the beginning of it, but privacy is definitely important to maintain, and I'm going to talk a bit about some of those considerations and what they mean. As Vijay said, I'm Director of Research at QEDIT, a company building products based on privacy-enhancing techniques, and the lead organizer of ZKProof.org, an effort to standardize zero-knowledge proofs. So let's start by understanding the importance of privacy in blockchain. Very briefly, blockchains are not actually private by default. There is an understanding that blockchains are pseudonymous, but they are not anonymous: the fact that transactions can be read and linked by anyone, through their transaction chain, means that anybody can see what is going on and can de-anonymize the identities. You even have tools like Etherscan, for example, that help you see what is in different accounts and what the transactions are. For example, here I'm showing that one of the addresses of Coinbase, the US exchange, holds about $24 million in Ether and $33 million in 120 other tokens. So you can see very much all the details. Now, what is important here is not to highlight what happens in each blockchain, but rather the pros and cons of having this kind of transparency. First of all, the pros are that it is of course more difficult to hide criminal activity or fraudulent behavior, and those that have tried have many times ended up being caught.
And of course, governments can audit these blockchains to track the well-being of the system, if we think about it in the context of a CBDC, for example, based on blockchain. On the other hand, we have the cons, which basically mean that people can take advantage of public data to identify users or monetize data that should be private. And we have the issue that validators of these blockchains, as part of this decentralized consensus-based mechanism, can do what is called front-running of transactions, meaning that they can add their own transactions before the ones they see coming, perhaps taking a better deal on an exchange or leveraging some position against liquidity providers. I won't get much more into the details of this, but the question is: what if there existed a tool that provided the best of both worlds? This is what is needed for a CBDC, and I'll mention it explicitly in a second. On the one hand, we want users to have a system that they can trust, or rather that they don't need to trust, because the mathematics takes care of the integrity of the system, and that keeps their data and transactions completely private. On the other hand, we want the authorities, the government or the central banks, to be able to ensure, at least to a certain degree, and maybe even with lower cost and friction, that users are behaving properly and that there is no fraudulent behavior. Cryptography can actually deliver both of these things: it can unlock the value of applications while providing privacy and ensuring that trust is not an issue in interactions, and it can enable checks that prevent certain fraudulent behavior even before it gets to the point of detection, at a preemptive stage. So that would be amazing.
And in fact, this is what we call today privacy-enhancing techniques, and I'm going to review very briefly four of them, which I think are the most interesting for today's applications. The first one you may have heard of: zero-knowledge proofs. This is essentially a tool that enables verifying the integrity of a computation when someone has some data and wants to compute some function over it. It proves the outcome of this computation while keeping the data private. Today it is used mostly for blockchain scalability and transaction privacy, and it can also unlock auditability. If we think about a CBDC in the future, governments could audit financial behavior through zero-knowledge proofs by having users generate proofs of a specific behavior that is expected of them, for example that a transaction that was sent was not over $10,000, without necessarily showing what the transaction actually was. Or maybe we can prove that a transaction we just received comes from a chain of non-fraudulent tokens, or funds that were not money-laundered. Then there is multi-party computation, another tool, a kind of holy grail in cryptography, which enables computation on data that is distributed. If we have several parties and each one has different data that they want to keep private, multi-party computation allows, through some interaction, computing an outcome out of all the data without actually revealing that data. Today it is very useful because it allows proper private key management in the context of blockchains; as we know, this is one of the issues, since users are the owners of their funds because they own the private key behind their addresses. Specifically, it allows you to do multi-signature or key recovery. And in a CBDC it can enable further applications.
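To make the zero-knowledge idea above concrete, here is a minimal Python sketch of a Schnorr-style proof of knowledge of a secret exponent, made non-interactive with the Fiat-Shamir heuristic. The tiny group (p = 101) and all the code are purely illustrative assumptions, not any production scheme:

```python
import hashlib
import random

# Toy group: g = 2 generates a group of order q = 100 modulo the prime p = 101.
# Real systems use ~256-bit elliptic-curve groups; these numbers are for illustration only.
P, Q, G = 101, 100, 2

def prove(secret_x):
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(G, secret_x, P)
    r = random.randrange(Q)              # one-time random nonce
    t = pow(G, r, P)                     # commitment
    c = int.from_bytes(hashlib.sha256(f"{G}{y}{t}".encode()).digest(), "big") % Q
    s = (r + c * secret_x) % Q           # response; reveals nothing about x alone
    return y, t, s

def verify(y, t, s):
    """Check g^s == t * y^c without ever seeing the secret."""
    c = int.from_bytes(hashlib.sha256(f"{G}{y}{t}".encode()).digest(), "big") % Q
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

A real range proof, for instance that a transaction amount is under $10,000, uses the same commit-challenge-response pattern over far larger groups and richer statements.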
If we think about, for example, decentralized exchanges today, they work because they are mostly transparent: on the Ethereum blockchain, the market is made by verifying all the asks and bids. Whereas in a private scenario, you would essentially need something like multi-party computation to establish, with integrity, what the market price is based on all the supply and demand. Then we have homomorphic encryption, which basically enables computations in the cloud to happen on private data. The computations actually happen on encrypted data, and when the output is sent back to the user and decrypted, you get the same result you would have gotten if you had sent non-encrypted data to the cloud, which is quite amazing. Today it is used mostly for basic statistical aggregation, because it is still not fully efficient, but there are several companies in the space building libraries and products for doing machine learning on private data. If we think about, for example, the health industry: Google got into some trouble not long ago for trying to analyze the health data of patients who had not necessarily consented to it, or where the data was subject to different kinds of regulation. Such machine learning could instead be computed on encrypted data without revealing it. And in the context of a CBDC, it would be interesting to think about how to compute a credit score, for example, without revealing all the financial data, where the users hold the data and the credit scoring company or the insurance company has a risk scoring function that they do not want to reveal. This would be one way to solve that.
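As a concrete illustration of the homomorphic idea, here is a toy Paillier cryptosystem in Python: adding encrypted values is done by multiplying their ciphertexts. The tiny primes are an illustrative assumption; real deployments use moduli of 2048 bits or more:

```python
import math
import random

# Toy Paillier keypair from two small primes (illustration only).
p, q = 47, 59
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                     # valid shortcut because g = n + 1

def encrypt(m):
    """Randomized encryption of m (0 <= m < n)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """L(u) = (u - 1) // n recovers the plaintext from c^lambda mod n^2."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
total = decrypt((encrypt(12) * encrypt(30)) % n2)   # 42, computed on encrypted inputs
```

A credit-scoring function built from such additions could, in principle, run over encrypted financial data, with only the final score ever decrypted.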
And then we have differential privacy, which is essentially a tool that adds carefully calibrated randomness to a data set and then allows computing certain macro, overarching results on the data set without revealing any specific individual's data. Usually, if you have a data set, the outliers at the ends of the data set can reveal a lot of information about those individuals, because they heavily tilt the different parameters of the statistics. Today it is actually being used in the US census, for example, and also in Mozilla Firefox to collect information from users without understanding what any individual's usage is. And again, in a CBDC, it can be used by governments or authorities to keep a high-level overview of what the transaction usage is or what the financial system looks like, which could be very useful for the well-being of the system. Essentially, these tools are needed to unlock the full value of CBDCs and stablecoins. So as I was trying to say earlier, a CBDC needs certain properties in order to truly succeed. The first one is that it needs to be a censorship-resistant system, meaning that if we have an authority that built this system, we don't want the authority to be able to tamper with the system, and we don't want the authority or the users to change the rules of the system without going through the appropriate governance process, like lawmaking and so on. This also gets a little bit into what the security aspect of building these things means, because we're talking about code, and it needs to be audited. Users shouldn't just trust code, and that goes for the blockchain space in general as well.
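The differential-privacy mechanism described above, adding calibrated noise so that only aggregate answers are reliable, can be sketched with the textbook Laplace mechanism. The query shown is hypothetical; the injectable uniform draw `u` exists only to make the sketch testable:

```python
import math
import random

def laplace_noise(scale, u):
    """Inverse-CDF sample of Laplace(0, scale); u is a uniform draw in (0, 1)."""
    return -scale * math.copysign(1.0, u - 0.5) * math.log(1.0 - 2.0 * abs(u - 0.5))

def dp_count(values, predicate, epsilon, u=None):
    """Count matching records, plus noise calibrated to sensitivity 1 and budget epsilon."""
    true_count = sum(1 for v in values if predicate(v))
    u = random.random() if u is None else u
    return true_count + laplace_noise(1.0 / epsilon, u)

# e.g. an authority asks how many transactions exceeded 10,000 units,
# without learning any single record exactly
noisy = dp_count([500, 12_000, 9_000, 25_000], lambda v: v > 10_000, epsilon=0.5)
```

Smaller epsilon means larger noise and stronger privacy; repeated queries consume the privacy budget, which is why real systems track it centrally.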
Then we want it to have trustless interaction, meaning that if I'm going to be sending transactions to Vijay and to Scott and to Paul, I don't want to have to think about whether I trust the person on the other end. I want to only trust the fact that I reviewed the code, or that the code is safe, and that's enough for me to know that the math will do the job. Of course, for this it is really important to have standards in place, and this is one of the main jobs of the ITU; as I said earlier, ZKProof.org is also in this space. Then, as we said, we want the privacy of transactions and data. This is extremely important for a well-running system that has mass adoption. You may have heard the stories about Venmo, where people send money to their friends and write exactly what the reason for that payment was in the comment; but at the end of the day it is a kind of social platform, and then everybody can see what the purpose of those transactions was. We also want unique user identification. In this context, what we call this in the blockchain space is a federated system, instead of just an open system like, for example, Bitcoin or Ethereum, where you can have many identities that are unlinked to each other. Here we definitely need some kind of onboarding, and I will mention exactly why in the context of identity, because identity is going to be a core aspect of a proper, well-functioning private CBDC. And of course, from the perspective of the authorities, we need the system to preserve integrity and prevent fraud, and also to allow for auditability of behavior, maybe for certain asset types, or for regulation that is specific to, for example, stock markets and derivatives.
And of course, for the purpose of tax payments: we don't want people to start evading taxes just because there is a digital infrastructure that seems to make it easier. When I say identity here, there is a lot behind it. What identity means in the digital world is of course a whole presentation in itself, but essentially we want a government system to be able to onboard a single identity per user, in a way that transactions are still not linkable. So to anybody looking at the blockchain who is not, say, the authority or a regulator, the transactions are not linkable, and nobody can identify the specific addresses or the identity of the people in these transactions. This also extends to the fact that the content of the transaction, and not only the addresses, should be private, as well as all the data generated. If we are building applications on top of the CBDC, like an exchange or a stock market, we still don't want the data that is generated in those interactions to be public, and potentially not even to stay with the application provider, changing a little bit the way that systems work today. And still, we would like to give, say, an authority the ability to de-anonymize specific transactions based on behavior that is considered fraudulent. The way it would work is that the system would generate certain triggers based on, for example, a proof that did not verify correctly, or a homomorphically encrypted computation whose result did not pass a threshold, and so on. These are the main points to consider for what identity looks like in a federated CBDC system.
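One very simplified way to get unlinkable-but-auditable addresses of the kind just described is to derive a fresh pseudonym per transaction from a per-user key that is escrowed with the authority. Everything here, the key handling, the HMAC construction, and the escrow model, is a hypothetical sketch for illustration, not a deployed design:

```python
import hashlib
import hmac

def transaction_pseudonym(linking_key, user_id, counter):
    """Derive the address shown for a user's counter-th transaction.

    Outside observers see unrelated-looking addresses; only a party holding
    linking_key (imagined here as escrowed with the regulator) can regenerate
    them and link transactions back to the user."""
    msg = f"{user_id}:{counter}".encode()
    return hmac.new(linking_key, msg, hashlib.sha256).hexdigest()

key = b"escrowed-authority-key"               # hypothetical escrowed per-user key
a1 = transaction_pseudonym(key, "alice", 1)
a2 = transaction_pseudonym(key, "alice", 2)   # unlinkable to a1 without the key
```

Real designs would pair this with zero-knowledge proofs so that de-anonymization only fires on the fraud triggers described above, rather than being available at will.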
And again, all this is to say that some of the schemes in this privacy-enhancing techniques field are usable today for many applications, and I think for CBDC they are really ready to be used. There is still a lot of research and innovation going on, which also means that standards sit at this edge of the balance where you want to freeze some scheme in time for people to start using and adopting it widely, but at the same time you don't want to stifle innovation. All in all, it is important to understand that as much as the mathematics here is what counts and what is ensuring the security, we still need well-defined systems for reviewing the scheme security itself, like the proofs of security for the mathematics behind it, as well as several experts and auditors reviewing the code for these systems, because we know that the biggest vulnerabilities come from human-made systems and human mistakes. In that sense, one of the main aspects we need to consider is what the usability of these systems looks like: what kind of product or user experience perspective we can add and innovate on, because this is different from the more traditional systems that we have been used to. So I'm very excited to engage in the conversation with the rest of the panelists in the Q&A, and thank you very much for listening.

Okay, thank you very much, Daniel, for sharing your thoughts on privacy-enhancing techniques and how they would be applied to digital currencies like stablecoins and CBDCs. Let's move now to our next panelist. I'd like to invite Mr. Paul Lloyd from HPE, also Vice Team Leader of the Security and Assurance Working Group of the Digital Currency Global Initiative, to share a bit about the work that he is leading in the cryptography work stream of the Security and Assurance Working Group. So Paul, you have the floor. Thank you.
Next slide please. There is obviously nothing more important to cryptocurrency than cryptography, nothing more foundational, and especially when we look at the future of digital currencies, it is good for all of us in this space to ask ourselves three questions. I'm going to go through all three of them here relatively quickly and at a relatively high level, but I hope it gets the point across. The first question is simply this: what is the quantum computing threat to classical cryptography? If we go on to our next slide, let's make sure we really understand the importance of what we call classical cryptography, as opposed to quantum cryptography and anything quantum-related. Fundamentally, there are a handful of cryptographic primitives that are just foundational to digital currency systems; by definition we don't have digital currency without things like digital signatures, public key encryption, or key exchange and key agreement mechanisms. Those are the things upon which we build cryptocurrencies. They work just fine today in the classical world because we, as a community of mathematicians and cryptographers, identified a number of what are called hard problems that we can use for these algorithms. Integer factorization is one, the RSA kind of thing; discrete log in a finite field, classic Diffie-Hellman; or discrete log on an elliptic curve, elliptic curve cryptography. Mathematicians and computer scientists have called these hard problems because no one has been able to produce algorithms for classical computers that can effectively solve them, especially not in acceptable timeframes. So as computers have gotten faster, we've increased key sizes, a bit here, a bit there. The result is that the bad guys, with all the computing power they may be able to bring to bear, aren't going to break those keys.
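To see why the word "hard" carries so much weight, here is a brute-force factoring sketch in Python: it cracks a textbook toy RSA modulus instantly, but the same approach is hopeless against real 2048-bit moduli, which is exactly the classical security margin being described. The toy modulus is an illustrative assumption:

```python
def trial_factor(n):
    """Brute-force factoring by trial division: instant on toy moduli,
    computationally hopeless at real-world RSA key sizes."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return None  # n is prime

# The textbook toy modulus 3233 = 53 * 61 falls immediately; a 2048-bit modulus
# would take longer than the age of the universe by this method.
factors = trial_factor(3233)   # (53, 61)
```

Growing the key size pushes this search out of reach of any classical machine, which is why key-size increases have kept classical attackers at bay so far.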
If we go on to the next slide, we notice that quantum computers are a threat to that; they change it. Why? The reason is that algorithms have already been designed, since the 1990s timeframe, well in advance of quantum computers themselves, that demonstrate that these problems are no longer hard once we have viable quantum computers. Phrased a bit more directly and to the point: quantum computers will break some of the classic algorithms upon which cryptocurrencies have been built today. Furthermore, for other cryptographic algorithms that are widely used in cryptocurrencies and elsewhere, quantum computers will make attacks against them much easier. They're not broken, and we have ways to deal with it, but the problem gets worse. What does it all come down to? When we look at the future of digital currency, to me it's as simple as this last point here: we must prepare now for a post-quantum world. Next slide please. So let's talk about that viable quantum computer impact a bit more in our second question: what would a viable quantum computer mean today for existing cryptocurrencies, the stuff we know and love today? Next slide. Well, here's a high-level perspective to get our heads around it, from the top there: any currency today that is based upon traditional classical cryptographic primitives relying on these classically hard problems will be at some level of serious risk. It's kind of unavoidable. Details will matter, but the risk is going to be there at some level, in some sense. Ranking them, Bitcoin versus Ethereum versus whatever, would be an exhaustive technical deep dive. But there are a few things worth highlighting to help us understand how nuanced the situation can be, and how important it is that we prepare now, because that's the message I'm going to leave you with: let's just begin preparing. Next slide.
Let's look at digital signatures based on elliptic curves, because that's a really pervasive thing that we see today in cryptocurrencies. Bitcoin, you name it, there's probably an elliptic curve digital signature in there somewhere. An attacker today with a viable quantum computer could forge digital signatures if given an actual public key, because the algorithm is broken: a quantum computer can run an algorithm that knows how to do things like factorization or, in this case, its equivalent on elliptic curves. So that means that if a cryptocurrency truly exposes a public key, then it will face things like forged digital signatures. If that digital signature authorizes spending one's digital currency, then we have a catastrophic situation. Bad guys can now own your wallet. If owning the wallet means the ability to exercise a transaction on your public key, that public key is out there for the bad guy to bring his quantum computer to bear on. Boom. That bad guy is you. Your wallet is the bad guy's wallet. It's a bad situation. However, not all cryptocurrencies expose public keys. Some cryptocurrencies compute your wallet address, the thing that's publicly visible at least most of the time, by one-way hashing the public key. That's a tougher problem. A bad guy who has only a one-way hash of your public key is not going to bring one of these quantum algorithms to bear on it, and that's good. But at some point in time, if it's truly based upon a digital signature, a public key has to be publicly exposed, and that means that in the age of viable quantum computers, the exposure is there. There are things that we can do to limit that exposure: cycle key pairs after use, use them one time and throw them away. Those can all help, and there is work in progress in some of the various cryptocurrencies today to incorporate these sorts of approaches. Some cryptocurrencies have incorporated them from the start.
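The hash-the-public-key protection just mentioned can be sketched in a few lines. The encoding here is a deliberately simplified assumption; real chains like Bitcoin layer extra hashing, checksums, and Base58/Bech32 encoding on top:

```python
import hashlib

def address_from_pubkey(pubkey):
    """Publish only a one-way hash of the public key as the wallet address."""
    return hashlib.sha256(pubkey).hexdigest()

# Until the owner spends (and so reveals the public key alongside a signature),
# a quantum attacker running Shor's algorithm has nothing to attack but the hash.
pubkey = bytes.fromhex("04" + "ab" * 64)   # placeholder uncompressed key bytes
addr = address_from_pubkey(pubkey)
```

This is why single-use key pairs help: the public key is only ever exposed in the brief window between broadcasting a spend and its confirmation.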
That's a good thing. But long story short, it's a tough problem for today's cryptocurrencies when it comes to digital signatures based on elliptic curves, and some are much tougher than others. Next slide please. Next, consider proof of work based on a hash function, a very common thing today in our cryptocurrencies. Anyone with access to viable quantum computers would have an immediate advantage over those who do not: not because quantum computers have broken hash functions, but because they have made attacking them much, much more efficient, and that gives them advantages as miners. If a cryptocurrency is mined based upon proof of work, then obviously someone who can do that mining more efficiently has an advantage over others. Now, we could argue that in the real world everybody will just converge on quantum computers, and that's probably true over time, but it's at least very, very disruptive. It's a risk; it is a threat, in my estimation. In the worst case, at the far end of the spectrum, it could go as far as a 51% attack, where someone got there first: they had the resources, the funding, whatever you want to call it, to be well ahead of the rest of the ecosystem. Miners with access to viable quantum computers at scale make a 51% attack an arguable risk. Next slide. So what's the message here? Don't panic. Hopefully as I went through that, you didn't get the impression that I'm a sky-is-falling guy or a gloom-and-doom guy, or that these existing cryptocurrencies are all over and dead in the water in a couple of years. That's not the message, and I want to stress that because it's the important part: it's not panicking that we want to do, it's planning, and we want to do it now. The work that I do in the working group that Vijay mentioned is preparing for a post-quantum future.
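The mining advantage described above comes from Grover's algorithm, which gives a quadratic speedup for unstructured search and so effectively halves brute-force security. The arithmetic can be sketched as:

```python
import math

def grover_effective_bits(classical_bits):
    """Grover's algorithm searches N items in about sqrt(N) steps,
    halving the effective bit-security of brute-force problems."""
    return classical_bits // 2

# SHA-256 preimage search: ~2^256 classical guesses vs ~2^128 Grover iterations.
sha256_bits = grover_effective_bits(256)     # 128

# Proof-of-work: a puzzle needing ~2^40 classical hash attempts takes on the
# order of 2^20 Grover iterations -- a large (if hardware-limited) miner edge.
quantum_iterations = math.isqrt(2 ** 40)     # 2^20
```

This is the key contrast with signatures: Grover weakens hash-based security quadratically (manageable by doubling sizes), while Shor's algorithm breaks the elliptic-curve discrete log outright.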
We want to go into this digital currency thing from the start, as we're planning its future, and tell ourselves: great, we have things that exist today as cryptocurrencies; we know them, we love them. But when we think about central bank digital currencies and other emerging digital currencies, why don't we make them quantum safe from the start? That's the message here. Next slide. How far away are these viable quantum computers? That's probably a good question to ask at this point, since on the one hand we identified that they break things, and on the other hand we said, well, they break them kind of, sort of, depending on what you are, and maybe we just want to plan. Well, if nothing else, this tells us the timeframe for that planning. Next slide please. A good way to look forward on this, in my estimation and opinion, is to just look back at the past year, 2021. Some cool things happened in that timeframe. In February we had IBM's five-year roadmap, and while I'm not going to go into great detail quite yet, let's just say that it's very ambitious. We'll see some more details in a bit, but it's very aggressive work, and it's work on which they've made progress. It's cool stuff. In April, DARPA announced their quantum benchmarking program. I think that's important because it shows the maturity of the field in a sense; it shows the interest it is getting, and the need for people to say: hey, we need to start really treating this as real. I'll leave it at that; that's my interpretation, but I think it's a fair interpretation. We saw a big merger in June of Honeywell Quantum Solutions and Cambridge Quantum Computing. Again, I think it's important to mention, in the same spirit as DARPA, that it shows consolidation, it shows things happening. It's not just lab stuff and academic stuff. These are real-world things we're seeing as quantum computing becomes real and, to use that word again, viable.
These are the kinds of things that we see as a technology becomes viable, as it matures and enters the real world. In July, we had 66 qubits. If you're not a quantum computing geek, a qubit is the analog of a classic binary bit; it's how quantum computers operate, the bit that is not just a 0 or 1. That was the Chinese quantum computer called Zuchongzhi, and it achieved 66 qubits. Again, if you're not a quantum computing person: that is actually a very, very impressive achievement. It was topped in November when IBM's Eagle achieved 127 qubits. The point being that, compared to where the state of the art and real quantum computers have been, this is powerful stuff, impressive stuff. If we go on to our next slide, what does this tell us about the future? Because that's what we care about here. What does it look like? IBM's roadmap for 2022 is targeting 433 qubits with their Osprey platform, and a year later they get to 1,000 qubits with Condor. Looking a few years ahead, into the later part of the decade, IBM and others are targeting a million qubits. And a million qubits is scary to a classical crypto person like myself, and maybe to some of you. A million qubits is when you say to yourself: you know what, my classical stuff is assumed to be broken; bad guys can do bad things. So, next slide please. What is this all about again? Where's that message again? It's the same message. In the case of this timeframe for viable quantum computers, what I would want you to take away today is that the exact dates are not important. I'm not going to give you a date and say it's March of 2026 when the world ends, or August of 2032 when the world ends. That's missing the point entirely, in my estimation and in the work that we're doing in my working group. It's all about looking at very recent evolution, the roadmaps, the maturity of the industry, and realizing that the exact dates don't matter.
What matters is that we accept that viable quantum computers will likely be here within the next decade. And the fact that this will likely happen before, or in the same timeframe as, broad adoption of things like central bank digital currencies is all it takes. Reality bites, but we have to accept it. We need to shape our thinking today, assuming that in the timeframe of rolling out central bank digital currencies and similar future digital currencies, they need to be quantum safe. And so we need to plan, and we need to plan now. I notice there's a question about whether countries without quantum computers should adopt a central bank digital currency. I think any country should pursue digital currency as they think they need to. I would only amplify the message back to any such country: as they do it, assume that viable quantum computers will exist in the next few years, probably within the timeframe of their rollout, so they need to be worried enough to adopt a central bank digital currency in a quantum-safe manner. Abandon the algorithms that we know break, adopt quantum-safe algorithms from NIST or from other standards bodies, and move on to your digital currency future. That's what the question was trying to flesh out. And with that, thank you very much.

Okay, thank you very much, Paul, for sharing the different risks that quantum computing poses to digital currencies and also to blockchains, and thank you also for not predicting the quantum apocalypse and for saying we can still plan for it. I think that's what you're doing right now in the Security and Assurance Working Group, so hopefully at the next edition of the DC3 conference we can hear a bit more from you on what it is that we need to do. Now let's move on to our next panelist, Scott Carlson from Kudelski Security.
So Scott, I'd like to give you the floor for your presentation. Thank you.

Thank you so much. One thing that I specialize in here at Kudelski Security, and that we do as the Kudelski Group, is helping make sure systems are safe to use, that they're built right, that they're built with integrity, and that you can trust them. When you take a look at the whole foundation of a CBDC, of centralized digital money, if this system is going to be where people's livelihood is, wouldn't we want it to be roughly the safest thing on the planet? So when we think through that: how do I build it right, how do I design it right, and then ultimately, what kinds of things do I do as a security auditor or assessor to make sure that it is actually doing the things that it says, that it is actually safe to use, or at least as safe as the risk tolerance shows it needs to be? I want to be clear that what I'll be talking about today is mostly the technical assessment side of things, as well as some of the more traditional security flows around confidentiality, integrity, and availability. In a security assessment, when you bring in somebody who is looking at your system, they want to prove that it's safe. They want to make sure it keeps the data private, keeps the data available only to those who should see it; that's the confidentiality side. On the integrity side, nobody can change it: you shouldn't suddenly have a million more dollars, or a million less; the system should enforce integrity. And integrity, I think, is a big question around quantum: it could affect the integrity, or even the confidentiality and availability, of all of these systems. Availability is less of a traditional cyber thing, but some people consider things like denial-of-service attacks, and the availability of accessing your funds, a security problem.
And when you think about, you know, a centralized digital currency, it has to be available all the time when you need to transact, 24/7, 365; like, it cannot take a day off. But there are some other, maybe more squishy, areas around security assessments: is it safe to use, is it legal to use, is it accurate, does it enforce the local rules of the country you happen to be in? Sometimes these things can get coded wrong, which is maybe why it's a security problem, because it's an incorrect thing; maybe it's not just a, you know, quality assurance thing where it was a typo. But some of these rule-enforcing or legal questions could result in seizures, or inaccurate postings. And when something on a blockchain is inaccurate, you know, it takes some effort to fix it. And that's why, when we want to do a security assessment here, we certainly want to make sure that the pieces that humans built are actually safe to use and have integrity. And so I ask my question here, right? When you are looking at this world, let's pretend for a minute that everything is becoming digital, right? We're removing the human from the flow chart, removing the bank; we're starting to automate all of these flows from end to end. You need to deliver features that are safe to use. And I don't just mean they won't cause harm and things like that; I mean they will be accurate all of the time, because people need to trust these transactions. They might not get a choice whether to use a CBDC, because they'll be a resident of a country that adopts one, but they still want to have the emotional and practical trust of that system. And I think that's what this security assessment helps show. You know, but what does it actually, like, do? Right, when you look at a blockchain, a blockchain is many layers of technical stuff that makes it work. The foundational part of that technical system is the mathematics: you need to trust that the math will keep the information, the signature, practically safe. 
Often in the case of math, the question is: is the math strong enough to withstand attack for the duration of time you need it to stand? In, you know, in the case of quantum, it's making it faster for the computer to compromise what the math is trying to prove. So you want to check to make sure that the math is sound, because this whole system is built on foundational math. That math is implemented in code. People build this code; we have to make sure that the code does the math right. Especially in a world where we're inventing some of these new algorithms, putting these chains, these technologies, into production for the very first time, that math had better be implemented correctly in all of the use cases it's intended for. Quite often some of this math works in academic papers, but it does not work when implemented for real, because there are, you know, practical reasons: there are language problems, there are cloud problems, there are local problems, there are country problems, there are key escrow problems. There are, you know, all these things that the real world imposes upon academic math that you need to check with these sorts of assessments, and these two foundational things, I would argue, have to be done by everybody when they're deploying these kinds of systems. As we go through it, it gets more into the business logic there, right? Do I trust the sites that these technologies live in? Can I then trust the transactions going through them? It's money. So does the money go to the right place? Does it do the right thing? But then there are people, right? Should you ever trust that a person gets it right 100% of the time? You can't. You know, especially when you get to every possible person in your country. I don't know that I could trust every person to not lose their key or to, you know, not lose an important password that gives them access to their money. Today, in the traditional world, there are ways to do recovery operations. 
There are ways to, you know, get your money back if it is stolen, in many cases. When there are humans and the public internet in the middle of these sorts of security environments, you need to pull the human out and say: what if I can't trust any of the humans? Well, then I need the math to do it right. Okay, well, how do I handle, you know, a bunch of humans trying to break it, a bunch of humans misusing it, a bunch of humans not knowing what to do, across this full scope of technically challenging things? You know, a security assessment is going to help you look at that whole layer of things. And, you know, whether you're going to be a bank who's going to adopt and advise on a CBDC, or a country implementing this, you want to go all the way up the stack to make sure that you at least have an answer for these, or a plan, maybe, because you can't answer all of these questions all the time. You can't make this 100% safe. You can just make it usable to the point where you're willing to accept the risk of using it. So we do a lot of security assessments here at Kudelski globally. We audit CBDCs and we look at blockchains and we look at algorithms. We've been doing this for a lot of years now. But what's funny about it: we find the same stuff in almost every type of implementation. People use bad libraries, they trust bad dependencies, basic math errors. You know, add before you divide instead of divide before you add. These sorts of things are human error quite often. They're not technical problems; they're software build problems. When we look at the attacks that have happened over the last 10 years, the ones that compromise things the most compromise the person's password or secrets management, compromise the way it was implemented, or attack one tiny little flaw. If anybody saw the Log4j issue from a couple of weeks ago, that was a tiny little flaw in a system, a software library essentially, that everything depended upon. 
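The order-of-operations class of error Scott mentions is worth making concrete. Below is a minimal sketch with a hypothetical pro-rata payout (the function names are illustrative, not from any audited system); the same bug bites far harder in smart-contract languages, where all arithmetic is integer arithmetic and truncation is silent:

```python
def payout_wrong(balance: int, share: int, total_shares: int) -> int:
    # Dividing first truncates: whenever balance < total_shares,
    # the quotient is 0 and every holder is paid nothing.
    return (balance // total_shares) * share

def payout_right(balance: int, share: int, total_shares: int) -> int:
    # Multiply first to preserve precision, then divide once at the end.
    return (balance * share) // total_shares

# With 100 units split over 1000 shares, a 300-share holder
# is owed 30 units -- the "divide first" version pays 0.
assert payout_wrong(100, 300, 1000) == 0
assert payout_right(100, 300, 1000) == 30
```

Both lines compute "the same" formula on paper; only the evaluation order differs, which is exactly why assessors look for this pattern in code review rather than trusting the spec.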
These are common scenarios that will also exist in CBDCs and in the systems that are used for them, and we have to go through these basic things. Because I think we as security practitioners, we as designers and architects, need to at least think about these six things as we're building our systems. How do I keep it current? How do I trust what I have there? How do I manage these secrets, these passwords, these keys? Because those are the keys to the kingdom. How do I make sure that the cryptography is good, and that if something goes wrong, I can handle it? How do I handle that error when it does happen? Because it will happen. What is true in the blockchain world more than elsewhere is that there are brand new coders with brand new ideas who are crypto-native folks. They haven't been doing this for 30, 40, 50 years like some of us, and they don't know these lessons of the past. And I think what we need to do, especially in some of these standards, especially as practitioners, is make sure that we take some of the things we have learned about the common flaws of the past, as we've designed banking systems, tax systems and such, and make sure that those are also implemented in the theories and practices that we put on top of the blockchain system. People ask me a lot, right, when we do an assessment of these things: do they ask about privacy? Do they ask about quantum? And yeah, they do. What they often don't ask us to do is go break it, because I don't have a quantum computer, so I can't go break their system. But what I can do is help them think through the ways that privacy goes bad, the ways that their key handling, their key management, could go bad. Quantum cryptography, quantum attacks, are a very interesting thing because you have to plan for them. Maybe today a quantum computer cannot break the math; well, as far as I know, none of them can break the math. In 10 years, maybe one can break the math. 
So if I think I need a transaction to live for one second and it takes one year to break it, it's probably safe, because we'll already have moved on to the next transaction. But suppose I need to keep every transaction written safe even if a quantum computer exists, and I need to keep, for instance, this information, you know, for the history of time. At some point a quantum computer will be able to look back at that historical data and decrypt it. Maybe that's okay. Maybe that's not okay. But think through, you know, that encrypted information that's written on the ledger that you are keeping private or safe, you know, somebody's private information or key: you're not going to want a quantum computer in 10 years to decrypt information from the past and use it to attack in the future. So these interesting timing problems really need to be thought through by the designers. And when you bring in an assessor, you know, they have this skill, you know, as I heard from Daniel, as we have here at Kudelski; like, there are specialists here, you know, Paul's a specialist in this. This math is hard. We need to bring in the people who understand: if this happens, then that happens. What do I do? How do I upgrade this? How do I get ready to roll out new keys if suddenly we get attacked? These are really important things. And this will only intensify as we move forward into the future here. As these quantum computers get closer and closer, we're going to start asking these questions. You know, if a government is running my system, and it traditionally takes them 10 years to upgrade their whole stack, how do I start upgrading so that I can catch the upgrade cycle? You know, they have a 10-year upgrade cycle, but math moves at the speed of three years. They had better start upgrading now. Maybe they need to start designing a system that can be updated now, and then updated again in the future, when it's ready, when we standardize on something. 
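Scott's timing argument is often summarized in the post-quantum literature as Mosca's inequality: if the time to migrate your systems plus the required secrecy lifetime of the data exceeds the time until a cryptographically relevant quantum computer exists, the data is already at risk of harvest-now-decrypt-later attacks. A back-of-the-envelope sketch (every year figure here is an illustrative assumption, not a prediction):

```python
def at_risk(migration_years: float, shelf_life_years: float,
            years_to_quantum: float) -> bool:
    """Mosca's inequality: data is exposed when the time to migrate
    plus the time the data must stay secret exceeds the time until
    a capable quantum computer arrives."""
    return migration_years + shelf_life_years > years_to_quantum

# A one-second payment authorization: long gone before any attack matters.
short_lived = at_risk(migration_years=5, shelf_life_years=0.0001,
                      years_to_quantum=10)   # False under these assumptions

# A ledger entry that must stay confidential for 50 years: an adversary
# recording ciphertext today can decrypt it later, so it is at risk now.
long_lived = at_risk(migration_years=5, shelf_life_years=50,
                     years_to_quantum=10)    # True under these assumptions
```

The point of the inequality is that the decision to migrate is driven by the data's shelf life, not by when the quantum computer actually arrives.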
These are the sorts of things that we have to think about here. We do a lot of advisory sessions right now with companies, traditional key managers, mathematicians, and other academics to talk about the practical risks and what we do about these sorts of things. It's funny, because when I think about the seriousness of these issues, sometimes I wonder if we are taking this seriously enough. We are still having password problems. People are still getting phished. People are still losing their money to somebody sending them a tweet or a Telegram message or an email. Why does that still happen? Isn't it silly that one click wipes out your entire earnings? We can't let that happen as we go through this experiment, as we implement these CBDCs, as we implement this for real as a digital environment. I think we need to make sure that we take this seriously and go through all the scenarios and use cases. I think one of the important things for these workshops, and for the people who are looking at designing this: you need to ask, let's assume this happens; how do I render this safe? How do I avoid repeating the sins of the past? Bring in an assessor, bring in a team, bring in some experts, and really, truly think through this. One of the rules in cybersecurity is that you always lose to a nation state. There's always somebody with more money than you. There's always somebody with more resources than you. And if you're fighting against somebody with unlimited resources and unlimited money, they will find a way in. And in the case of CBDCs, they probably will compromise your key. They probably will take over an insider or a consumer. They probably will make a configuration change you don't expect, or they will find a flaw in that contract that was deployed. What is true here is that you need to be able to handle that scenario. You need to know that if a single person is compromised, it can't take over the whole system. 
If a single server or configuration or key is mismanaged, it can't drain everybody's wallet. You want to keep that attack surface, the impact of that, as small as possible, because what we don't want is for this to crash. I don't think a nation state can simply delete everybody's money and give it back if there's a flaw here. Maybe they could, but wouldn't that sow a huge amount of distrust, if they were to simply say, oh, sorry, we screwed up, delete, here you go, it's back, but it's safe this time? So we have to think about this from the beginning, because of these double-spending problems that people talk about: if I could spend my million dollars twice online, that would be a problem. And we need to make sure that, as we look at these unlimited-money, unlimited-resources attack scenarios, we work through the ones that are essentially business-ending for this kind of currency. When we think about what can be done to prepare against this, we think a lot about the algorithms, the safety of upgrades, the ability to manage. We also have to think about humans, human behavior, and, as this is adopted, the ability to track and control. Maybe it needs to be possible to shut this off for a minute to keep it safe. Maybe it needs to be open and audited and fully open source. We can't really future-proof anything. We can just make it such that, you know, you can safely manage it and upgrade it and roll forward. I think it's about bringing in a quality assessor, surrounding yourself with experts, making sure that you're, you know, appropriately compensating the teams you bring in to stay with you, to learn and to grow. This technology is really immature, right? Banking has been around for a long time, hundreds of years, depending on how you count, thousands of years. This technology is at most 10 years old, 20 years old in academia. We still have some work to do, so we need to make sure that it is, you know, going to be safe. You can adopt strategies. We are going to bring in all of these things. 
As I've said, make sure you implement preventative controls in these systems, because sometimes you need to react in zero seconds, because that's how fast digital money moves. It's gone, right? That's how fast these things move. You can't just say, hey, look, the money went, let's go get it back. That might not be a thing anymore. So we need to put these controls in the front. And the last thing I'll put in front of you is sort of this list of things you need to think about, right? There is a huge list of bullets that go into: how do I get advice? How do I test this? How do I build it right? How do I do R&D? These bullets should hopefully give your brain some thought areas: okay, if I need to build this, I'm going to need to be ready to do a smart contract review. I need to do tabletop exercises. I need to do a full-stack architecture assessment and build it with the right people. All of these strategy topics should be somewhere in your book of materials and your plan as you design this. You know, companies like the Kudelski Group can help there. Other researchers, and organizations like the ITU, should be your partners in designing this. And with that, I will turn it back for more Q&A here as we continue on. Thank you. Okay, thank you very much, Scott, for highlighting what a security assessment is all about: you know, what the different steps in a security assessment are, what the different dimensions are that companies should consider, and also the different attack scenarios and how to address them. So now we'll move to the question and answer session. Like I said, the audience is invited to type their questions in the Q&A window that you see at the bottom of the screen. So as our audience is providing their questions, let me maybe ask a few questions to the panelists. Let's go back to Daniel. And Paul, can you also turn on your video, please? 
So Daniel, you talked a bit about the security and privacy aspects of CBDCs and stablecoins. Specifically with regard to stablecoins and their applications, what would you say are the current challenges as far as privacy is concerned? When it comes to CBDCs, I think there is a big pushback from government entities, because privacy is usually a realm that authorities are very scared of going into when it comes to financial interactions. In terms of the technological advances, I think, you know, we can see what happens today in the blockchain space: more and more applications and blockchains are moving to a privacy model. There is kind of a de facto standard for how to implement those transactions privately, which comes from the amazing innovation the Zcash team has made by using zero-knowledge proofs for that. And one of the things they have shown as well is that you can create a protocol with, for example, what they call viewing keys, where you have specific permission-based keys that you can provide to, for example, an authority, for them to purely see what transactions you're doing, but not be able to act upon them, right? Or, for example, steal your funds, etc. So cryptography is very powerful here, and really, you know, the space is ready for this. It's just a matter of making sure that users are putting enough pressure so that any CBDC that comes out will have a privacy model inherently built into it. It's important to have standards for adoption as well, because it's tricky. It's tricky to get the privacy right. Okay, thank you. Let's go to Paul now. So Paul, there is a lot of talk about, you know, countries working on unhackable satellite quantum systems, and this is to protect against cyber warfare attacks, for example, on telecommunications networks or the electrical power grid. So how and why are they claiming that this is unhackable? 
What is the primary difference between satellite network communications and the traditional telecommunications networks that we use today in terms of security? Can you elaborate a bit on this? Sure. Thank you. So, as is always the case, whenever we hear terms like unhackable, we need to look carefully at the details and see what the claim applies to, what is or isn't being hacked or unhackable. The essence of what you just described for quantum satellites is something called quantum key distribution, or QKD; I'm going to call it QKD here. If you're familiar with key-based cryptography algorithms and protocols, then you understand the principle that the security ultimately comes down to knowledge of the key, and therefore protection of the key. If two parties wish to protect their communications using key-based cryptography, they first need a mechanism to securely obtain those keys. In order to minimize the damage done by an attacker who compromises any specific key, it's customary in secure communication protocols to generate keys dynamically, use them briefly, and then discard them. This means that two parties who are communicating over any non-trivial distance generally have to securely generate and distribute keys using infrastructure they must consider hostile and subject to being actively attacked. The usual approach to solving this problem is based on classical cryptography that we rely on all the time, in things like the TLS protocol. QKD, quantum key distribution, however, is a non-classical approach, and it takes advantage of the quantum properties of nature. More specifically, QKD takes advantage of something called the observer effect. The observer effect tells us that merely observing a quantum system can disturb the system. There's a lot more detail and complexity, obviously, than we're going to go into today. 
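The observer effect Paul describes is what lets the two parties notice an eavesdropper. A toy BB84-style sketch gives the intuition: when an eavesdropper measures a qubit in the wrong basis, she disturbs it, and errors show up in positions where the legitimate parties' bases matched. This is a classical simulation for intuition only, with illustrative parameters, not real quantum key distribution:

```python
import random

def bb84_error_rate(n_bits: int, eavesdrop: bool, seed: int = 7) -> float:
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bases   = [rng.choice("+x") for _ in range(n_bits)]

    # Qubits in flight: optionally intercepted and re-sent by Eve.
    in_flight = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            eve_basis = rng.choice("+x")
            if eve_basis != basis:
                bit = rng.randint(0, 1)  # wrong-basis measurement randomizes
            basis = eve_basis            # Eve re-sends in her own basis
        in_flight.append((bit, basis))

    # Bob measures; a wrong-basis measurement yields a random result.
    bob_bits = [bit if basis == b else rng.randint(0, 1)
                for (bit, basis), b in zip(in_flight, bob_bases)]

    # Sift: keep positions where Alice's and Bob's bases matched,
    # then publicly compare to estimate the error rate.
    matched = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in matched)
    return errors / max(len(matched), 1)

print(bb84_error_rate(2000, eavesdrop=False))  # 0.0: sifted bits agree
print(bb84_error_rate(2000, eavesdrop=True))   # roughly 0.25: attack detected
```

Without Eve, the sifted key agrees exactly; intercept-and-resend corrupts about a quarter of it, so Alice and Bob see the elevated error rate and simply discard the key before ever using it.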
But the important thing to take away is that the two parties using QKD can detect that they're being attacked while arriving at a key that they will use to encrypt communications. So if they detect that attack before they use the key, they don't use the key, and the bad guy doesn't have the key. It's a fascinating approach to the problem, but it brings problems of its own. One of those is serious technical challenges as the distance between the two parties increases. So, to get to the satellite part now: when people talk about unhackable satellite quantum systems, they are almost certainly referring to some recent work from China, which uses a satellite between two parties to enable quantum key distribution over much, much longer distances than we've seen with approaches up to now. The most fascinating thing to me about their work was that their approach did not even require trusting the satellite at all. As one of the scientists said in an interview, you could use your enemy's satellite. It's pretty cool. It's impressive stuff. Now, does all this mean that things are suddenly unhackable in some overall blanket sense and the world is great forever? No, not really. But I characterize it as a very important achievement, because for the pieces of the complex system that this new technology encompasses, it makes some very compelling security claims. Things only get better. It's cool stuff. Okay, thank you. Maybe now to Scott: how practical would it be to use QKD, for example, in the cryptocurrency industry? Well, right this second, not practical at all. At some point in the future, we are going to start making some decisions on the technical difficulty of implementation, versus the time it takes to upgrade once we get it installed, versus what the attacker will do if we make it unhackable. Right? A lot of these tricks with math, as I mentioned earlier, are about the time to break the algorithm versus the amount of time the data or transaction has to exist. 
Right: if the lifetime is less than the time to break, you're mostly safe. But if this data is around for 50 years, then, you know, it's going to matter. Today it's impractical to attack this stuff. But if it ever gets practical, we're going to have to be able to upgrade really quickly; you know, maybe we'll have to rotate your blockchain keys every second. Or maybe it never becomes practical, and the bad guys will simply go around it. You know, wouldn't the layman ask: well, if observing something quantum breaks it, let's just observe it all the time, and then it's just broken all the time? That's not what we mean, but as scientists we use words like that, and so we need to be able to handle questions like that. I'll just break the satellite. How do I break a satellite? Well, I'll mess with the communications. I'll go after the ground station that controls the satellite. I'll crash it if it's a low-earth one. I'll, you know, push bad firmware. I won't break the math anymore, because that's the hard part. I'll start breaking all the other little tiny things along the way. I'll compromise the human, because that might be the easiest thing. That might be the most practical attack in the future, because today it's impractical to attack the math, and it will still be impractical for a while. It will be a lot more practical to attack the weakest link, which is probably the person, or probably the password, or probably some system in between. Okay, thank you. So, will we ever be safe? A question for all the panelists. Well, we're, you know, mostly safe today, because there's generally an undo button in life for a lot of monetary things, depending on where you live in the world. That's important as we answer these questions around CBDCs: how to keep people safe, even if there is an event, and monetarily safe. I think that's what we're talking about here. 
So I think we're, you know, smart to keep it that way, and we will be able to protect the vast majority of situations in the vast majority of places. Of course, there are always edge cases where we have to accept the risk. You know, today you can get mugged in the street. Tomorrow you will maybe still get your password compromised. Thankfully, that's 10 people a day in a city of a million. Okay, let's still make it 10 passwords a day in a population of a million, right? Let's not make it any worse. Yeah, if I could add: one term that's come into its own over the last few years in the cyber industry is something called cyber resiliency. And I really like what Scott just said; to me, it's really a reflection of that. If you adopt cyber resiliency principles, the most important of which is that you assume bad things are going to happen, you don't fool yourself. You know, rather than being naive, if you're, to use this term, resiliency-native, you design it in; then, yeah, bad things will happen, and so you design in your undo button. It's as simple as planning for that resiliency to need to be there. I think it's interesting; there are sort of two points that I want to add when we talk here about an undo button. Well, it really depends, right? We've been shown in this new space of blockchain that undo doesn't necessarily make things better. I mean, when you need to, you know, roll the Ethereum blockchain back a few blocks because $150 million have been stolen from a smart contract, not everybody will be happy. And then you create two different chains. I mean, like, there are, you know, plenty of things here to consider. But I do agree, in that sense, that we need to make sure that there is a way to, you know, allow for human error. And I think that's the key point that these systems need to capture. 
And then there's the definition of what safe means, which is not only about the usability aspect but also the non-usability aspect: what happens if we don't have these systems, right? We keep thinking about these human errors and about money being stolen, but what about censorship? I guess most of us here are, but maybe some people in the audience are not necessarily lucky enough to be in, you know, developed or democratic countries, and, you know, this can have serious consequences on the financial freedom or freedom of speech of many people. And that's also why I think enabling privacy needs to take priority, because it will enable everyone to use these systems without feeling unsafe in using them, right? And so I think it's important to consider all aspects of what safety means. And of course, safety of the assets is imperative for these systems to be adopted. So, yeah, I think we are headed in the right direction, though. This is important. I mean, as long as we recognize that we are still in the early stages of these systems, we can allow ourselves to make a few more mistakes, at least in the innovation part of it. And I would say that a lot of the money that is being used in those systems today was created in those systems. I mean, it's not, you know, the usual money that governments have been issuing for years. There is, of course, some of that, because that's part of the industries and the institutions that are coming in. But when a new token is created and suddenly there is a lot more value, you know, it's more okay to be more risky with those funds. Okay, thank you. Thank you everyone for your responses. Let's maybe focus a bit on the privacy issues and also the auditability of digital assets and data. 
So, Daniel, in your opinion, what are some of the tradeoffs that exist between privacy and auditability of digital assets and data? And maybe also, how can cryptography optimize both at the same time? And feel free, Scott, to jump in on that as well. Great question, Vijay. So, as I was trying to explain earlier in my presentation, in the traditional sense, when we think about auditability of transactions and financial data, we usually think about pushing all of the transactions and all of this data to, let's say, a third party, right? Like specific auditing companies. And this inherently subverts the privacy, because you're quite literally giving all your information to a third party. And, you know, the excuse of, well, if you want to hide something, it may mean that you are doing something wrong, is not enough to justify subverting all the privacy of the data of an individual, let alone of an institution, right? And so, when we think about using cryptography, the power of cryptography is not only that it can provide privacy in basic applications or basic usage, even in decentralized systems, but that it can actually enable these authorities to get the kind of auditability that they need from the systems, just by getting a certified check, or a verification of a proof, or a result from a multi-party computation, which at the end of the day relies on mathematics to say that it's okay, right? So it's about finding a way, on the one hand, to appease these regulators and authorities so they are able to use these systems, to trust these systems legally speaking, because this is also an issue, the legality aspect, right? How can you use a zero-knowledge proof in court? Well, today you probably can't. Nobody has tried it, I don't think. 
There are also papers on this, by the way; it's a really interesting field. But, you know, it's about convincing authorities to use these techniques and accept them just the way they would accept, say, an OK from a third party that they trust; they would be able to accept an OK or a check from a cryptographic output. And on the other hand, it's about having the users be able to ensure their privacy, also trusting the math, right? Because at the end of the day it depends on the math, so that they can feel free and safe to use these systems. Maybe to add on top of that: the general perception people have today is, you know, if you're not doing anything bad, why do you care about being private? Well, there are just some things that should be private. You know, if I give my kid a dollar, nobody needs to know why I gave them a dollar, or that I gave them a dollar, right? That's just a benign activity. But, you know, there are actually some laws that most people agree are good to enforce. You know, if I gave somebody a dollar, nobody cares. If I gave them a dollar 500 million times, maybe they would suddenly say, huh, why did you do that? That's a little bit out of the norm. And so when you look at what people consider private, it really means, you know, private and less private, unless it hits the edge case where it might be suspected terrorism or money laundering or something that is considered illegal in your jurisdiction. Of course, we have to really think about the idea of a fully monitored society versus a fully closed society, and the practical implications of letting anything go versus letting nothing go. Maybe it is the case that my dollar digital transaction goes through a system which gives it a little sniff test, and at some point it decides whether I have crossed the bar of, maybe I shouldn't be private anymore, and they need to take a closer look. 
Maybe I've done enough transactions that it decides this should be taxable instead of not, right? These are computer systems that will have to make these decisions going forward. But in general, these technologies have to have the option to keep it fully private, maybe the option to keep it fully anonymous, but maybe completely undone in the case where something bad is happening. Right, we all around the world have different definitions of bad, but there are a couple of things that we generally all agree are really bad: you know, exploitation use cases, terrorism use cases, and such. So we can generally agree that it's a good idea to have a rule to enforce some of those things, but against the normal citizen we only turn those on if the law gets involved in the right places. And these things are really tricky if-then-else clauses to implement in programs: you know, if you live in this country, but then you're this and you're not that, and you're going to do this, but you're only doing this with that. Come on, that's just an M-by-M-by-M problem, as I call it, which has so many different permutations you can't possibly design them all. So design for the edge cases on some of these privacy questions, right? It's kind of a slippery slope. Okay, maybe one more question to Daniel on the privacy-enhancing techniques that you mentioned in your presentation, and also the need for digital identity. In your presentation, I think what we didn't see was, you know, how privacy-enhancing techniques can maybe reshape the field of self-sovereign identity, for example, enabling individuals to own their digital identities. So what's the role of privacy there? Digital identity, right: digital identity is a big, big topic. I mean, if you look at, for example, the Web3 Foundation, which has been taking on these DIDs, right, decentralized identifiers, we call them. 
And they've basically been thinking about what are called verifiable credentials. Credentials are the way we now imagine identity working in the digital space: you go to the DMV to get your driving license, and instead of getting a little card, you get a credential online that you can access from your application. Anybody who later needs to check your driving license can maybe just check a QR code, for example. But credentials can also take a step further, and zero-knowledge proofs can help you even more. Let's say that instead of showing a QR code of your ID, where some information may be displayed, you go into your ID in the application, tap once, and choose which property you want to prove out of your ID. You may want to prove your date of birth, maybe the expiration date of your license, maybe the type of license, and so on. And this can work for many, many types of credentials: university credentials, knowledge and skills, work history, government-issued identities, or even your actual interactions with certain businesses; it could be logging in, for example, as a specific identity. The biggest issue there is to make sure you have a way to allow all this information to be tied to a single identity, because when you think about self-sovereign identity, these identities are also a little bit siloed per system. If I think about my identity, for example, on the Ethereum blockchain, well, I have several addresses, I may have NFTs, I may use some DeFi platforms, et cetera.
But then if I go to the Bitcoin blockchain, that's a different identity. So how can I manage all these identities and all this data that is being created? And, just to throw in another hype word, if we talk about the metaverse, this is going to be imperative to whatever the metaverse looks like; we know that identity is going to be key there. So yes, doing this in a way that is private, like why do I need to show all my information when it's being asked, and how do I get these things issued, is a big field; it covers a lot. The other aspect is that on the internet there's that joke: nobody on the internet knows you're a dog. There are times when you need to be a legal persona, and times when you want to be the knight with the sword. Having these digital identifiers allows you to choose which persona to use, depending on whether the context is legal or not. And I think that's the most interesting thing about it. One of the practical security implications of having a digital ID that's a little too open, though, is: how do I know that Scott's digital ID is actually being used by Scott? We have to start combining some other signals in ways that make people a little emotionally uncomfortable. It's great if I use my digital ID at the liquor store, because they know I'm 21 when they scan my QR code, but how do they know that's me? Do they want a tracking chip? Do they watch my phone as I come from my house? Do they combine a bunch of other information about me, like, okay, he just flew on a plane and now he's in this place, so it's probably him? These sorts of things have to be decided in a computer, versus holding up the little card that says I'm old enough. Sometimes we are going to have a hard time actually implementing something that works in every use case. Hey, thanks. Thanks for these comments.
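The selective-disclosure flow described above, proving a single attribute such as date of birth without revealing the rest of the credential, can be sketched with salted hash commitments. This is a simplified stand-in for a real zero-knowledge credential scheme (a full ZK proof could additionally prove a predicate like "over 21" without revealing the date itself); the attribute names and the omission of the issuer's signature are simplifications for the example:

```python
import hashlib
import os

def commit(value: str, salt: bytes) -> str:
    """Salted hash commitment to a single credential attribute."""
    return hashlib.sha256(salt + value.encode()).hexdigest()

# --- Issuer: commits to each attribute of the credential separately ---
attributes = {
    "name": "Alice Example",
    "date_of_birth": "1990-05-01",
    "license_type": "B",
    "expiration": "2027-12-31",
}
salts = {k: os.urandom(16) for k in attributes}
credential = {k: commit(v, salts[k]) for k, v in attributes.items()}
# In a real system the issuer would sign `credential`; signature omitted here.

# --- Holder: taps once and reveals ONLY date_of_birth, plus its salt ---
disclosed = ("date_of_birth", attributes["date_of_birth"],
             salts["date_of_birth"])

# --- Verifier: checks the disclosed value against the signed commitment ---
key, value, salt = disclosed
assert commit(value, salt) == credential[key]
print(f"verified {key} = {value}; other attributes stay hidden")
```

Because each attribute is committed under its own random salt, the verifier learns nothing about the undisclosed fields, which is the core of the "prove just one property of your ID" idea from the discussion.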
So we're now coming to the end of the session. Very quickly, let me summarize the key takeaways. Today we've seen the different vulnerabilities of blockchains, the threat of quantum computing that was mentioned by Paul, and how privacy enhancing techniques can address the privacy issues we face with blockchains when using them for CBDCs and stable coins. We've also heard about the importance of doing security assessments to make sure the system is actually doing the right thing: respecting security best practices, protecting and preserving confidentiality and integrity, and remaining available all the time. So we've gone through a number of these issues during the session and also highlighted some of the mitigation actions that can be considered by organizations. Before we close, I'd like to inform you that the DC3 conference continues again this afternoon; we'll be live again in the next 30 minutes or so. The next session of the security track is the deep dive session on digital currency validation for protection and resilience, which will discuss a security validation model for assessing the security of different digital currency types. This is work being done under the security and assurance working group of the digital currency global initiative. We hope to see you all in the next session to discuss how the validation model being developed in the DCGI will help address some of the security threats we discussed in this session. I'd also like to mention that the recording of this session will be available later today on the website.
And now I'd like to take this opportunity to thank our panelists for their participation in today's session, and also for taking the time to share their insights and experience. We'd like to wish you all a very pleasant day and thank you all for your attention. And this session is now closed. So thanks everyone and bye bye. Thanks so much. Thank you. Thank you so much.