Okay. Hello everyone, and welcome to the Monero Research Lab office hour with me, the host, Justin, and the person you actually want to see, Dr. Sarang Noether. So to start, I'd like to have Dr. Noether introduce himself before we get started. I think it would be useful to set expectations for what this session is. It's very casual on purpose. We're here to answer your questions — mostly it'll be Dr. Noether answering your questions — and I'll be paying attention to YouTube and to Discord just to relay questions over. But otherwise, this is really your time to make it what you would like. So yeah, we're here to answer anything you have. So how about, Sarang, can you introduce yourself please? Sure. So I am Dr. Sarang Noether, a cryptographer and mathematician who is a research contributor to the Monero Research Lab. The Monero Research Lab is a research and development workgroup — not the only one — that conducts research and development for the Monero Project. That ends up being stuff involving protocol research, some of the math, prototyping, coding, all sorts of things, to push the Monero protocol, and privacy-preserving digital assets in general, forward from a technical perspective. And like Justin said, the purpose of this office hour is to give an opportunity to very informally provide a video forum to answer any questions that come up and any topics that people want to know more about. So this could be Justin and I sitting here in the quiet for an hour, like often happens in standard university in-person office hours, or it can be really whatever sort of technical discussion folks want to have. So I guess use whatever media you have access to. Justin, you said you're watching Discord and YouTube, is that right? 
Yeah, so for any questions that people have, go ahead and shoot them off to us there and we can talk about them. Otherwise, let's sit here and have a little coffee. Perfect. I brought my coffee. I haven't had a lot of coffee with my actual coffee chats recently, but I feel like I doubled down on the coffee the last few days. Were you the type of person in high school — or maybe in college, depending — who went to office hours? Is that something you would usually do, or did you not go that often? I did go to office hours when I had questions, and later as a TA, and even later as an instructor. I gained a new appreciation for the nature of office hours when I was the one running them. I discovered that oftentimes it was this really cool combination of folks who came because they really wanted to understand something they didn't understand before, but there were also a lot of students who really did know what they were doing and were just interested in getting as much face-to-face time going over new problems as they could. So one thing I liked is that going to office hours is not some kind of sign of weakness — it's a sign of being motivated and dedicated to what you're learning. But there were still many times when no one showed up to office hours, and then it was just reading books for an hour. What are you going to do? Yeah. Does anyone have any questions or topics of any kind? At the moment there are no questions and no topics of any kind, but of course it's open. Well, I mean, we can also just talk about something. Yeah, I was thinking one thing we could do is sort of troll the answers out of people. We can just start saying, well, what do you think about this obnoxious thing, just to get people all riled up about it? 
But before we do that, how about you tell us about what you've been doing with the Monero Research Lab the last month or so? So I would say probably the most interesting thing people might care about — or hopefully at least they'll care about the effects of it — is going to be CLSAG, which is a new linkable ring signature construction that was really intended to be a drop-in replacement for the linkable ring signature construction that the Monero protocol currently uses, called MLSAG. I will say that in hindsight, I really regret us not giving it a cooler name. What would have been a cooler name for that? I have no idea. If anyone has any ideas, you should tell us. I mean, we ended up getting a pretty cool CLSAG logo. Basically, the community came together and we had a few ideas that were pitched, one of which we ultimately went with in the blog post, and they're credited at the bottom. But it's not quite as easy to market as something like Halo or Arcturus. Because originally, prior to the confidential transaction model, the linkable ring signature scheme was one developed by some other authors, called LSAG — linkable spontaneous anonymous group signatures. Then, moving into the confidential transaction model, where we replaced in-the-clear amounts with commitments to amounts, we moved to one that was developed by Shen Noether and others, more of an in-house kind of thing. That was called MLSAG — multilayered linkable spontaneous anonymous group signatures. The idea there is that you had information in the signature that dealt with both signing keys and certain commitment keys. And by cleverly setting up and arranging how you do that signature, you could both get the signer-ambiguous signature model you're looking for and also throw in a proof of balance, which is very important, in a completely signer-ambiguous way. 
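To make the proof-of-balance idea concrete, here is a toy sketch of Pedersen-style commitments built from modular exponentiation. Every parameter here (`p`, `g`, `h`, the amounts) is a made-up illustration, not a secure choice and not what Monero actually uses (Monero works over the Ed25519 group):

```python
import secrets

p = 2**127 - 1   # a Mersenne prime used as a toy modulus (illustration only)
g, h = 3, 5      # toy bases standing in for independent group generators

def commit(amount: int, blind: int) -> int:
    """Pedersen-style commitment: C = g^amount * h^blind (mod p)."""
    return (pow(g, amount, p) * pow(h, blind, p)) % p

# One 10-unit input split into outputs of 7 and 3.
r_in = secrets.randbelow(p - 1)
c_in = commit(10, r_in)

r1 = secrets.randbelow(p - 1)
r2 = (r_in - r1) % (p - 1)            # blinding factors arranged to cancel
c_out = (commit(7, r1) * commit(3, r2)) % p

# Commitments are homomorphic: when the amounts balance (10 = 7 + 3) and the
# blinds cancel, inputs and outputs commit to the same value, so a verifier
# can check balance without ever learning the amounts.
assert c_out == c_in
print("balance check passed")
```

Because the check is purely homomorphic, the real protocol also needs range proofs on each output, so that negative amounts can't be hidden inside the modular arithmetic.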
The downside of MLSAG signatures, of course, is that you basically have two sets of data floating around in the signature itself: one set that deals with the signing keys, and a separate, parallel set that deals with the commitment keys. So the scaling on that is not very good. The signature grows as the anonymity set per transaction goes up, and every time you add a new ring member, you're actually adding two pieces of data — one for the signing keys and one for the commitment keys. So the new hotness is CLSAG, which is concise linkable spontaneous anonymous group signatures. It is what it is: the name was chosen, and we thought about changing it, but then we figured we'd already named it and couldn't really rename it — people already knew it by that, and we didn't want to make everything more confusing. The idea is to take this data for signing keys and this data for commitment keys, and it turns out you can combine them together in a weighted fashion involving some hash functions, to ensure that someone can't maliciously go through and run a forgery on the signature. It basically does the same thing MLSAG signatures do: it shows, in a signer-ambiguous way, that you're signing a message on behalf of one of a set of keys — without revealing which one — while also signing with the commitment key that you need to prove balance. So it's more or less a drop-in replacement, but now effectively you only have one set of data involved: one piece of information per ring member, plus some additional auxiliary information that's used just to make the algebra work. So the benefits to this are that it's basically a drop-in replacement. So that's great. 
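A toy sketch of the hash-weighted key aggregation just described: each (signing key, commitment key) pair gets folded into a single ring entry, with weights derived by hashing all the ring keys so a signer can't pick keys adversarially to forge across the two components. The group, the domain labels, and the key values are all invented for illustration; the real construction is defined over Ed25519 in the CLSAG preprint:

```python
import hashlib

p = 2**127 - 1   # toy prime modulus (illustration only, not real parameters)

def h_scalar(domain: bytes, *items) -> int:
    """Hash-to-scalar, modelling the aggregation hashes in the scheme."""
    h = hashlib.sha256(domain)
    for x in items:
        h.update(int(x).to_bytes(16, "big"))
    return int.from_bytes(h.digest(), "big") % (p - 1)

def aggregate_keys(signing_keys, commitment_keys):
    """Fold each (signing key, commitment key) pair into one ring member.

    The weights depend on *all* ring keys, which is what prevents a
    malicious signer from choosing keys that cancel in a forgery."""
    mu_k = h_scalar(b"toy_CLSAG_agg_0", *signing_keys, *commitment_keys)
    mu_c = h_scalar(b"toy_CLSAG_agg_1", *signing_keys, *commitment_keys)
    return [
        (pow(k, mu_k, p) * pow(c, mu_c, p)) % p
        for k, c in zip(signing_keys, commitment_keys)
    ]

ring = aggregate_keys([11, 12, 13], [21, 22, 23])
print(len(ring))   # one aggregated key per ring member, instead of two keys
```

The point of the sketch is the shape of the data: after aggregation the ring signature only has to cover one key per member, which is where the size savings below come from.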
You know, everything involving key images sticks around; everything involving the way these keys are structured gets to stick around. But the benefits you get are, first, that the signatures are much smaller. Effectively, the signatures you get for CLSAG are about half the size of those for MLSAG. That's just the signature alone — transactions include more than just the signature. But they're also faster to verify, because it turns out you can do some optimization in the way these operations take place. Before, you had to do cryptographic operations on the linkable side of the data and the commitment side of the data separately; now you can effectively do them at the same time and optimize that away a little bit. So you end up with about 20% faster signature verification. Most transactions spend between one and two inputs and generate some other outputs, and for every one of those spent inputs you need a separate signature. So it turns out that for the most common forms of transactions — like a two-input, two-output transaction, for example — you end up seeing overall about a 25% decrease in the transaction size and probably about a 10% speedup in the overall transaction verification. So that's pretty good. There are really no downsides to this. Of course, we want to make sure that the security model for this is very strong. The security model basically just says: what properties do we want this construction to have? For our particular purposes, we want properties involving unforgeability and non-slanderability and linkability — there's a list of them that you want a linkable ring signature construction to have. So what you do is basically build a hypothetical model of an imaginary attacker. 
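Back-of-the-envelope arithmetic for the size claims, assuming 32-byte scalars and points and simplified element counts (roughly: MLSAG carries two scalars per ring member, CLSAG one scalar per member plus a couple of fixed points; exact serialization in Monero differs slightly, so treat these as approximations):

```python
SCALAR = POINT = 32  # bytes, Ed25519-style encodings

def mlsag_size(n: int) -> int:
    # n ring members x 2 scalars (signing + commitment component),
    # plus one challenge scalar and one key image point.
    return (2 * n + 1) * SCALAR + POINT

def clsag_size(n: int) -> int:
    # n ring members x 1 scalar, one challenge scalar, one key image,
    # plus one auxiliary point to make the algebra work.
    return (n + 1) * SCALAR + 2 * POINT

n = 11  # an 11-member ring, for illustration
print(mlsag_size(n), clsag_size(n))  # 768 448 -- CLSAG is roughly half
```

That's the roughly-half figure for the signature alone; since a transaction also carries range proofs and other data, the whole transaction shrinks by less, consistent with the ~25% figure for a two-input, two-output transaction.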
And you give this attacker different powers to do things. So you might give this imaginary attacker the power to convince honest users to hand over their private keys, or the power to persuade honest users to build arbitrary transactions on the attacker's behalf. And then you show that even if the attacker had these powers, it still can't break these properties that we want to have without also breaking some computational problem — like the discrete logarithm problem — that we assume is computationally infeasible. So you basically say: this hypothetical attacker can't exist, therefore we're okay. We decided to beef up the security model that was used for CLSAG compared to, say, LSAG and MLSAG, and in doing so ended up with, I would say, a pretty good model for how we wanted it to work. So that's pretty good. We're pretty sure that the same security model would equally apply to MLSAG, but we didn't go back and retroactively do that; it's considered to be pretty straightforward. Do you know of anyone else actually using MLSAG in production the way Monero is? Is there any other widely used application of this? Oh, I mean, besides projects that use it for the same purpose? Yeah — code forks, or different digital assets that have at least the same or a similar protocol, presumably use it. But I'm not aware of any other direct applications. Originally, in the paper that introduced LSAG, one application that was listed was voting schemes, for example, where you want to ensure that someone can't vote twice on a particular issue while keeping ambiguity among the set of possible voters. I don't think that's actually been implemented yet — I mean, secure voting is really hard for other reasons. 
But it happens to be really good for the purposes that we use it for. And so, I guess to finish up: the Monero community commissioned an external audit of both the paper — I should be very careful and say preprint; this is still technically a preprint, which means it hasn't undergone any peer review aside from what I'm about to talk about — so we decided to have the preprint externally audited, as well as the implementation for the upcoming October network upgrade. So that was done. There were two auditors commissioned to do that, in consultation with the Open Source Technology Improvement Fund, which is a nonprofit that does support for these kinds of things, and it was supported by Monero community donations. The auditors had a lot of really good suggestions for how to improve the overall CLSAG security model and preprint. So I ended up changing a few of the proofs around and generally improving how the security model is structured and presented. Those didn't actually require any changes to the scheme itself, so the construction — and therefore the code — didn't change as a result. But we're now much more confident in the way the security model is arranged, some of the definitions, the cryptographic hardness assumptions, and things like that. So that's great. Those changes have been made, and that's now up on the IACR eprint archive. And then the implementation, surprisingly, didn't require any real changes for security — usually you get some in there. There were a few informational ideas for how to simplify the code, but we considered that those changes would have been fairly extensive in how we handle certain key structures and such, and it was thought that making these big, sweeping, more informational changes was probably more likely to introduce risk than just leaving it. So yeah, the report's available. 
There's a blog post up on getmonero.org about it. You can read the full report, take a look at the preprint, look at the code if that's your thing. Yeah. Awesome. So it's scheduled to be deployed in the October network upgrade, so all you have to do is keep your software updated. If you use a hardware wallet like the Trezor or Ledger, that's in process as well — getting their firmware and apps updated to make sure that's good to go on day one, when we're ready to go. So make sure you keep your firmware updated as well, and you'll be good to go. Very cool. We had a question come in from Andres. He asked whether there are any plans, in the future, to move past ring signatures to something that is not just hidden among decoys but has additional protections beyond decoy-level protection. So I guess there are quite a few things involved in this question. One is that we have the current decoy selection, which per transaction is a relatively small number of decoys, and that provides reasonable protection against mass surveillance, but less so against targeted surveillance if someone knows information about particular transactions. So given the current situation, what is the approximate path forward — not necessarily the timeline, but the set of potential improvements — for whether ring signatures are the right approach for the future, and how have you approached dealing with the core problem of the relatively small per-transaction decoy counts? Yeah, yeah. So really the question seems to be about per-transaction anonymity sets. I increasingly dislike using the term "ring signature" to mean purely a limited anonymity set, because while it has been the case so far that our ring signature construction does have a limited anonymity set, I don't think that's necessarily the correct term to use. 
So it's possible to build transaction protocols with limited anonymity sets or full anonymity sets — you could probably do all sorts of other things. I mean, we know there are transaction protocols that have no anonymity sets, but those are typically not the ones we're interested in. What we do right now is, in fact, use a linkable ring signature as a building block in a limited-anonymity-set transaction protocol for the Monero protocol. That's not the only way to do it, though. It's possible to build limited-anonymity-set transaction protocols that use, for example, specialized zero-knowledge proving systems. So I really don't like the idea of "zero knowledge" meaning full anonymity and "ring signature" meaning limited anonymity — that happens to describe certain implementations now, but it is not generally true. Those terms have much more technical meanings involving proof and signature constructions than the way we tend to use them today. So it's possible to migrate over to a still-limited anonymity set construction, but one that permits much larger anonymity sets for reasonable transaction sizes and times, in a way that ideally would help against certain other forms of analysis or attack. Because again, right now the transaction protocol signatures scale linearly with the size of the anonymity set, so you're really kind of stuck there. There have been some proposals for ways to use different kinds of specialized zero-knowledge proving systems. Some examples of that are Omniring, Lelantus, RingCT 3.0, Triptych, Arcturus — there are probably others I'm just not thinking of right now. All of these essentially allow a transaction proof, along with possibly some other auxiliary proofs, that scales much better in terms of size and a little bit better in terms of time. So verification time is unfortunately the sticking point for these. 
So if you want a trust-free — that is, non-centralized-trusted-setup — style of proving system and transaction protocol, you can make those proofs very small, and we know how to do that already. They're not as small as, for example, the transaction proofs in, say, Zcash, but they're still quite small for the size of the limited anonymity set you can get. But verification time is always a sticking point: with those particular kinds of protocols and proofs you still need almost-linear verification time, and that would be the sticking point. So that's kind of the limitation that exists. There are options, and they've all got some trade-offs in terms of what you can do with things like tracing and how the construction works. Some of them involve changes to multisignature operations, or changes to the way linking tags work, which would require almost a sort of migration — one that can still be done safely. And so which one of these, if any, should be the one going forward is kind of up in the air right now. A lot of it depends on what trade-offs people are willing to accept, mainly in terms of transaction verification times. Obviously, we'd like verification to be as fast as possible, because that means faster operations, but that has to be balanced against what kinds of analysis and attacks you want to be able to protect against. So ideally, what we'd love to do is move to something that has a full anonymity set. Kind of the classic example of that right now is something like the Zcash protocols, where the anonymity sets involved are basically enforced using proofs that do things involving Merkle tree proofs. And what that effectively gives you is a full anonymity set within that pool, absent external information. That would be ideal. 
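A minimal sketch of the Merkle-membership idea behind those full-anonymity-set designs. Here the leaf and path are revealed for clarity; in a protocol like Zcash's, the same check happens inside a zero-knowledge proof, so the verifier learns only that the spent output is *somewhere* in the tree of all outputs:

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [H(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate an odd tail node
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    level = [H(l) for l in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))  # (sibling, am-I-right?)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    node = H(leaf)
    for sibling, node_is_right in path:
        node = H(sibling + node) if node_is_right else H(node + sibling)
    return node == root

outputs = [f"output-{i}".encode() for i in range(8)]  # stand-in output set
root = merkle_root(outputs)
assert verify(outputs[5], merkle_path(outputs, 5), root)
print("membership verified; path length:", len(merkle_path(outputs, 5)))
```

The appeal is the scaling: the path is logarithmic in the number of outputs, so the "anonymity set" can be the whole chain. The hard part, as discussed, is proving this check in zero knowledge both trust-free and with fast verification.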
But right now, all the proposals that are on the table for doing that suffer a lot in terms of either centralized trust or, say, proof size or time. Right now you can't really have everything if you don't want to have centralized trust. And of course, there are also other issues with that. For example, in many of the protocols — hang on here, someone else joined the room. Yeah, I think he's probably one of the next speakers, I assume. But anyway, right now, that's kind of the limitation. So under the assumption that the project and its community are unlikely to move to something that would require centralized trust, right now there has to be a limitation of limited anonymity sets. And so there are still questions about how you end up choosing those anonymity sets. There are definitely ways you can do it that I think provide improvements over the way we do it now. As you get bigger anonymity sets, you can do certain kinds of binning with the outputs in larger anonymity sets, and that can mitigate certain kinds of heuristics involving common ownership, the source of where those outputs came from in terms of transactions earlier on the chain, as well as timing analysis. So there's a lot of interesting stuff you can play with. But I don't think there's a really good understanding right now of exactly what precise trade-offs people are willing to make. Honestly, we have code that can do this now, but I think it's a matter of the community and researchers and developers getting into agreement on what those trade-offs should be. And again, hopefully we eventually get to something that is efficient and trust-free, which would check all the boxes for everyone. 
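A toy sketch of binned decoy selection, assuming a hypothetical scheme in which whole aligned bins of consecutive outputs are selected and the real spend hides inside one of them. All parameters and the selection logic here are invented for illustration and are not an actual proposal:

```python
import random

def binned_decoys(num_outputs: int, real_index: int,
                  num_bins: int = 4, bin_size: int = 4):
    """Pick whole bins of consecutive outputs rather than isolated decoys,
    hiding the real spend inside one bin (toy parameters, not a proposal)."""
    bin_start = lambda i: i - (i % bin_size)   # align index to its bin
    starts = {bin_start(real_index)}            # bin containing the real spend
    while len(starts) < num_bins:               # add distinct random decoy bins
        starts.add(bin_start(random.randrange(num_outputs)))
    return sorted(i for s in starts for i in range(s, s + bin_size))

ring = binned_decoys(num_outputs=10_000, real_index=4242)
assert 4242 in ring and len(ring) == 16   # 4 bins x 4 outputs each
print(ring)
```

Selecting whole bins means ring members that landed on-chain close together appear together, which blunts timing-based guesses about which member is the real spend and some common-ownership heuristics.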
From there, you could very likely build a protocol that uses such a system and does, in fact, give you effectively full anonymity, enforced by the protocol. In my experience, when people talk about full anonymity or limited anonymity, it's often in an academic-only way, and then it's applied to the way things currently are — so limited anonymity looks like Monero's case. And in many cases, people think that just can't really change; they don't really think about the alternatives. Limited is just anything less than the full pool — and that could theoretically be a larger number than the entire pool of a different network. Yeah, and it also depends on what exactly the threat model is, right? Are you concerned about an adversary who, after making a bunch of controlled purchases, examines the possible transaction tree between you and, I don't know, an exchange with whom this person or entity is colluding? Given that, it's very difficult to get what I would call practical anonymity in that particular circumstance. And there are plenty of other circumstances you can dream up where, under certain threat models, limited anonymity just won't work for you. So it's one of those infuriating "it depends" kinds of answers: limited anonymity right now is a trade-off — an absolute trade-off — and we all look forward to the day when we can do something that's trust-free. And frankly, if your particular threat model is going to require a more complete anonymity scenario, then you might have to consider whether going to a trusted-setup sort of scenario is something you need to do. 
That may come with its own trade-offs, based on things like ecosystem availability and a host of other questions. But it's absolutely a trade-off, and I think it's important to acknowledge that. We have things like the Breaking Monero series, where we try to tease out some of the particular scenarios under which limited anonymity can be a problem, and those for which it might not be as much of a problem. I feel like it's not fun to sit down and try to enumerate and think about the specifics of a risk model, but it's something that, unfortunately, right now you kind of have to do. We haven't had a Breaking Monero episode in a while. So, given that we haven't, what episode ideas would you like to see? Oh, man. I think it'd be interesting to do one that talks about some more specifics on things like churn and self-send operations. Something involving output merging would also be very, very interesting — where outputs from different transactions end up being pulled, in different anonymity sets, into the same later transaction — and looking at ways to possibly mitigate that. Larger anonymity sets with good output selection can mitigate that, for example. The churn example is a bit tougher, because you could look at things like possible chain history sizes and distributions and see what happens with that. Yeah, there are a lot of just interesting things to talk about, but I think it's important to do them in a way that doesn't rapidly devolve into confusing graphs and irritating math. At the very least, I'm really happy to see that there is a lot of interesting research into this area. 
You know, a lot of folks want general zero-knowledge proving systems that are trust-free and efficient, and that would, I think, be the ideal — checking all the boxes. And it's interesting to see what different projects do, right? Things like Zcash and related projects get efficient proofs that are very fast to verify, and you can do things like effectively full-anonymity-set transaction protocols, in theory — Zcash kind of has its whole optionality part to it — but they are willing to sacrifice trust to a degree, to a multi-party computation. And there's always the question: for your personal use case, and how you tend to view multi-party computation, do you think that a multi-party computation setup diffuses the trust out enough that you're okay with it, or are you not? Because in theory, that does provide a well-defined "this is how the soundness of this operation could fail" scenario, if the multi-party computation were misused, for example. So many questions in this space, and so many trade-offs. I feel like a lot of cryptography is basically just the precise study of mathematical trade-offs. But no, I don't really have a good timeline for when we might be able to move to something that gives what you'd consider full anonymity. But there are options for increasing the anonymity set, which lets you do a lot of other cool things and hopefully mitigate some kinds of heuristics. It won't solve everything, but I think it'd be an improvement. So is that on tap for October? No, not for October. I mean, there are other interesting things on the horizon too. Ideas for making range proofs a little bit more efficient have come up. There was a preprint that came out on an improvement to Bulletproofs, called Bulletproofs+. 
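For a rough sense of what that improvement trims, here is back-of-the-envelope arithmetic using the proof-element counts stated in the Bulletproofs and Bulletproofs+ papers — 2·⌈log₂(64m)⌉+4 group elements plus 5 scalars, versus 2·⌈log₂(64m)⌉+3 group elements plus 3 scalars for an m-output aggregate range proof — assuming 32-byte encodings; treat the counts as approximate:

```python
import math

SCALAR = POINT = 32  # bytes, Ed25519-style encodings

def bulletproofs_size(m: int, n_bits: int = 64) -> int:
    # Bulletproofs aggregate range proof over m outputs of n_bits each:
    # 2*ceil(log2(n_bits*m)) + 4 group elements and 5 scalars.
    k = math.ceil(math.log2(n_bits * m))
    return (2 * k + 4) * POINT + 5 * SCALAR

def bulletproofs_plus_size(m: int, n_bits: int = 64) -> int:
    # Bulletproofs+ trims a few elements:
    # 2*ceil(log2(n_bits*m)) + 3 group elements and 3 scalars.
    k = math.ceil(math.log2(n_bits * m))
    return (2 * k + 3) * POINT + 3 * SCALAR

m = 2  # a typical two-output transaction
print(bulletproofs_size(m) - bulletproofs_plus_size(m))  # 96 bytes saved
```

Three fewer 32-byte elements, regardless of how many outputs are aggregated — which is the kind of few-dozen-byte saving in play.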
The plus actually means taking away a few proof elements, but "Bulletproofs minus" doesn't sound as good. So, I don't know. There was a proposal to do an implementation of that. It's really new — it's kind of an extension of the way the Bulletproofs inner-product protocol works. It's cool; there's a cool weighting operation in it. And it would let us take a few dozen bytes off of transactions. In theory, they'd be very, very marginally faster to verify, but in practice it would pretty much be a wash, I think, in the grand scheme of things. So the question of whether moving to that is worth the effort, and the possible risk of something that's a bit newer, has yet to be determined entirely. Were there any other questions that came up, besides the one that got us into a very long discussion about protocols? Yeah, there's one — it's still a protocol-related question, so it's going to be a follow-up. It says: so the ideal solution for Monero would be a zk-SNARK-like thing that doesn't require a trusted setup? And I believe you basically said yes, but it also needs to be efficient. Yeah, yeah. And specifically on zk-SNARKs: there are different building blocks you can use to construct protocols, and I think it's important to consider this at a protocol level. So for example, you could basically build something akin to the Zcash Sapling protocol, but without using a zk-SNARK construction — you could do that with Bulletproofs, for example. I don't think it's actually been done, but I know that there were some estimates made. Bulletproofs is a zero-knowledge proving system whose scaling in terms of verification is not as good — absolutely not. But what it lets you do is prove things about circuits in a trust-free way. 
So that is trust-free. And so in theory, you could take the circuit that's used in Zcash Sapling, and some of the other constructions, and you could build that in Bulletproofs, and you'd get rid of the trust. The size would still be pretty good — the estimates on size were competitive; not quite as good, but still pretty competitive. But the verification — no, it wouldn't work. The estimates I saw were still on the order of a second, and you might say, well, a second, that's pretty fast. It's like, yeah, but then you've got to do that a million times. You can amortize that down with batching and such, but it seemed like it was not quite doable at this point. So I would say something that would let you do the whole Merkle proof kind of thing — which effectively lets you do a full-anonymity-set-based protocol — would be ideal if we could make it mandatory, which other projects have shown you can do, to ensure optimal privacy. But also, ideally, making it trust-free, or at least diffusing the trust in the soundness out to the point where everyone is satisfied with it. And I think it's been the general view of a lot of folks who support the idea of Monero to just reduce that trust down to nothing, whereas other projects decided that they're okay with delegating that soundness risk off to a large enough set of participants in a multi-party computation that they're, I guess, okay with the security of it — it's not really my place to judge that. But that is risky. I mean, the Sprout multi-party computation that started Zcash, for example, did have a soundness problem, and there was a kind of a whole deal where they had to end up taking down the proof transcripts and hoping that no one was able to figure out and abuse that. 
So there's a real, non-trivial risk involved even in something like such a multi-party computation. It's not without risk either, and it makes the trust situation a little bit trickier. So I guess I am personally of the opinion that I like the idea of minimizing the possible soundness problems. The soundness problem for something that involves a trusted setup is the trusted setup and the multi-party computation. For anything that uses, for example, Pedersen commitments — which Monero does, other projects do too, Zcash does, Mimblewimble-based assets do — in theory, if you could break Pedersen commitments, that's a problem with something akin to soundness as well. So it's definitely a different threat profile, and ideally we want to minimize that. Got it. Something that was discussed quite a few years ago, but I don't think has really come up super recently, is the idea of second-layer networks on Monero. Like, what if you wanted to put the Lightning Network on Monero? What if you wanted to put this, that, or the other on Monero? What if you wanted to put a zk-SNARK mixer on Monero, as an optional thing? So I guess: what sort of building blocks need to happen to allow greater compatibility — knowing that right after this, we also have a talk about atomic swaps for Monero too? Yeah. Which is a really cool new thing that's still in progress. 
But yeah, so there are some other researchers who were looking at the possibility: what would it take to do, say, atomic swaps between something like Monero and something like Bitcoin? On the Bitcoin side there's a lot you can do, because Bitcoin has scripting capability, but also some different setups in how its protocol works and how it runs things like signatures. And it gets really tricky, because Monero does not have inherent scripting capability, and adding it would be pretty awful for fungibility — so I tend to view that as probably a non-starter on the Monero side. Given that, the question is: what could you do for something like atomic swaps? And one idea, which will maybe be talked about in the next talk — so we'll try not to give too much away — involved a particular zero-knowledge proof that was left unspecified in the write-up at the time. In theory, you could have done it with something like Bulletproofs, but it would have involved hash functions, and that gets messy to do in circuits, so it was not ideal. But then they came up with another idea that uses this clever cross-group proof, where you basically prove that across two different groups — which are otherwise basically algebraically incompatible — you can show equality of an unknown discrete log. And it turns out that if you can do this, you can very cleverly build it into a protocol involving some Monero transactions and some Bitcoin-style transactions in a way that lets you do atomic swaps. Pretty darn interesting.
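To give a feel for "proving equality of an unknown discrete log": the real cross-group construction proves the same secret sits behind keys on two algebraically incompatible curves (roughly, ed25519 on the Monero side and secp256k1 on the Bitcoin side) using bit-decomposition commitments, which is well beyond a short sketch. What follows is only the classic single-group version of the idea — a Chaum-Pedersen-style proof that two public values share the same secret exponent — in a toy group, with all names and parameters invented for illustration:

```python
import hashlib
import random

# Toy subgroup of prime order q inside Z_p^* (not a real curve).
p, q = 1019, 509
g, h = 4, 9  # two independent generators of the order-q subgroup

def H(*args):
    data = "|".join(str(a) for a in args).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def dleq_prove(x):
    """Prove that A = g^x and B = h^x hide the SAME secret x,
    without revealing x."""
    A, B = pow(g, x, p), pow(h, x, p)
    k = random.randrange(1, q)
    R1, R2 = pow(g, k, p), pow(h, k, p)   # same nonce under both generators
    c = H(A, B, R1, R2)                   # Fiat-Shamir challenge
    s = (k + c * x) % q
    return A, B, (R1, R2, s)

def dleq_verify(A, B, proof):
    R1, R2, s = proof
    c = H(A, B, R1, R2)
    # Both equations must hold with the same response s.
    return (pow(g, s, p) == R1 * pow(A, c, p) % p and
            pow(h, s, p) == R2 * pow(B, c, p) % p)

x = random.randrange(1, q)
A, B, proof = dleq_prove(x)
print(dleq_verify(A, B, proof))  # True: same x behind both A and B
```

The cross-group version does the same job when g and h live in two entirely different groups, which is exactly what lets one secret simultaneously control a Monero key and a Bitcoin key during the swap.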
There are still some open, ongoing questions about how you need to structure those transactions, just to make sure that if one or more of the parties involved doesn't follow the protocol, you know what the risks to funds are — what's the worst-case scenario that could occur? And even then, suppose the protocol does work exactly as you'd expect and the swap does happen: what information, if any, leaks across one or more of those chains? For example, if you use a centralized service to do swaps right now, that can obviously carry some risk, because that entity presumably knows where assets came from and where they're being sent — so it's a kind of central-storage risk. But you can also do different kinds of timing analysis and amount analysis across different chains to try to determine what's going on and what funds are moving where. And if that's a risk for you, then you don't really want that. So the question is what, if anything, about that kind of analysis would transfer from the current setup — using a centralized service to do exchanges — to doing it atomically without a central service. What kind of analysis could still happen? It's not nothing, because metadata always exists. But the questions of to what extent it happens, and whether that's something you want to accept, are, I think, still open. Very interesting nonetheless. There were some other ideas for things like payment channel networks even within Monero. DLSAG was another signature construction that some other researchers came up with. We looked at it, and unfortunately it had this tracing problem that would have required this whole self-spend thing.
And it kind of got messy, and there wasn't really a great solution for it, unfortunately. So that ended up being dead in the water, for that purpose at least, which was really unfortunate — DLSAG would have opened the door to some interesting stuff. Got it, got it. I can never keep the -LSAG names straight; every letter ends up in some LSAG construction, just to make it even more confusing. So, I know this isn't your work directly, but Isthmus recently opened a community crowdfunding system proposal to look into potential limitations in Monero related to quantum computing. He was going to scope out what challenges Monero would basically need to address going forward. Can you speak a little bit about what this work is doing, and then about Monero and quantum computing generally? Yeah, so this is work that's ongoing. I believe they're planning to work on it for three months, and they're about done with the first month, I think. Again, I'm not directly working on this, so I don't want to try to speak for them. But their idea was: okay, under the assumption that someday quantum computers will exist — and there's definitely debate on the extent to which people think this will be a problem in the future, and if so, how far into the future — what actions would we want to take now in the protocol to try to mitigate the possible future effects of quantum computers? And they want to look at different parts of the protocol — range proofs, ring signatures, the one-time addressing construction, all of this stuff — and ask to what extent different parts of the chain would be considered at risk. You know, the way Monero keys work, there's a private key and a public key, and they're related by an algebraic group operation.
And it's pretty well understood that the one-way map from private key to public key — where we assume it's computationally infeasible to determine the signing key just from seeing a public key on chain — is very hard to invert right now; it's a one-way operation. It's also pretty well understood that, given a sufficiently advanced quantum computer, that map would be efficiently reversible. So at that point you could spend anyone's funds, which — I don't know, that would be a problem. But to some extent it's not just a problem for Monero; this is a broadly applicable problem. The entire Internet runs on these one-way maps right now, to a large extent. So it's one of those situations where your house is on fire, but the rest of the world is on fire too. It doesn't make your fire any less bad, but it does mean there are a lot of problems to worry about. But even beyond that, they're looking at things like: what would that allow you to do to ring signatures? As one example, because of the way that key images, or linking tags, work within the Monero protocol, a quantum computer would allow you to look at the different outputs that are part of a ring and determine which of them is the signer. The key images involve private keys, so you could basically do a kind of guess-and-check testing operation. So again, under the assumption of a hypothetical quantum computer — this is impossible today, as far as we know — you could figure that out. There are some other questions right now about what you could determine about stealth, or one-time, addressing operations, and that's still a little bit ambiguous, I think. One question that I have, which I know is not unique to Monero, is: well, okay, suppose I have a transaction here.
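The guess-and-check attack on ring signatures can be made concrete in a toy model. This sketch assumes a tiny multiplicative group instead of Monero's Ed25519 curve, and the `shor_dlog` function is a brute-force stand-in for a quantum discrete-log solver (Shor's algorithm); all names here are invented for the illustration. The point is the mechanism: once every ring member's private key is recoverable, testing each against the on-chain key image exposes the true signer:

```python
import hashlib
import random

# Toy group standing in for Monero's curve: order-q subgroup of Z_p^*.
p, q, g = 1019, 509, 4

def Hp(P):
    """Toy hash-to-group-element, as used in key image construction."""
    e = int.from_bytes(hashlib.sha256(str(P).encode()).digest(), "big") % q
    return pow(g, e if e else 1, p)

def key_image(x, P):
    # Additively this is I = x * Hp(P); multiplicatively: Hp(P)^x.
    return pow(Hp(P), x, p)

def shor_dlog(P):
    """Stand-in for Shor's algorithm: recover x from P = g^x.
    Brute force only works because this toy group is tiny."""
    for x in range(q):
        if pow(g, x, p) == P:
            return x

# Build a ring of decoy keys plus one real signer.
ring = [pow(g, random.randrange(1, q), p) for _ in range(10)]
signer_index = random.randrange(10)
x_signer = shor_dlog(ring[signer_index])
I = key_image(x_signer, ring[signer_index])  # published on chain

# The hypothetical quantum attack: recover every ring member's
# private key, then test which one reproduces the key image.
recovered = [shor_dlog(P) for P in ring]
guesses = [i for i, x in enumerate(recovered)
           if key_image(x, ring[i]) == I]
print(signer_index in guesses)  # True: the real signer is exposed
```

No such recovery is feasible against the real group today; the sketch only shows why the key image's dependence on the private key makes ring anonymity collapse once discrete logs fall.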
Maybe I can't just use that transaction to, for example, figure out what the wallet address of the recipient was. Because remember, in Monero, wallet addresses never appear directly on chain; they're used to derive these one-time addresses. So right now, without external information, you can't use on-chain information alone to link those one-time addresses back to wallet addresses. So the question might be: even with a quantum computer, could you take a transaction and determine its recipient's wallet address with no external information? Or, if you had a candidate list of possible recipients that you think it might be — because maybe you have a hunch or some other external information — are there ways you could basically check those to determine which one it is? And again, the possible wallet address space in Monero is unfathomably huge, and a quantum computer doesn't just mean everything can be done fast automatically; there are particular algorithms that we know can be used against things like the discrete log problem. So even if you had a hypothetical quantum computer, it couldn't just go through all possible addresses, see which one it is, and figure out which recipient is linked to which transaction — that's likely not going to be directly possible. But what could such a computer do if it had a small known list of possible addresses? Questions like that, I think, are still up in the air and are part of the subject of the research they're working on. And I do know that they're also looking at what current directions in protocol development could more efficiently, I guess, work toward mitigating the effects of a future quantum computer.
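The candidate-list scenario can also be sketched in the same toy model. In Monero, a one-time address is derived roughly as P = Hs(r·A)·G + B from the transaction secret r and the recipient's wallet keys (A, B), with R = r·G published alongside it. Below, `shor_dlog` again stands in for a hypothetical quantum solver, and the group and helper names are invented for illustration: recovering r from R lets an attacker re-derive the one-time address for each candidate wallet and see which one matches.

```python
import hashlib
import random

# Toy group (not Ed25519): order-q subgroup of Z_p^*, generator g.
p, q, g = 1019, 509, 4

def Hs(shared):
    """Toy hash-to-scalar, as in one-time address derivation."""
    return int.from_bytes(hashlib.sha256(str(shared).encode()).digest(),
                          "big") % q

def one_time_address(r, A, B):
    # Additively: P = Hs(r*A)*G + B; multiplicatively: g^Hs(A^r) * B.
    return pow(g, Hs(pow(A, r, p)), p) * B % p

def shor_dlog(P):
    """Stand-in for a quantum discrete-log solver (brute force here)."""
    for x in range(q):
        if pow(g, x, p) == P:
            return x

# A few candidate wallets, each with view key a (A = g^a) and
# spend key b (B = g^b).
wallets = []
for _ in range(5):
    a, b = random.randrange(1, q), random.randrange(1, q)
    wallets.append((pow(g, a, p), pow(g, b, p)))

# A sender pays wallet #3: picks tx secret r, publishes (R, P) on chain.
r = random.randrange(1, q)
A3, B3 = wallets[3]
R, P = pow(g, r, p), one_time_address(r, A3, B3)

# Hypothetical quantum attacker: recover r from R, then test each
# candidate wallet against the on-chain one-time address.
r_recovered = shor_dlog(R)
matches = [i for i, (A, B) in enumerate(wallets)
           if one_time_address(r_recovered, A, B) == P]
print(3 in matches)  # True: recipient found among the candidates
```

This is exactly the asymmetry Sarang describes: enumerating the full address space stays out of reach even quantumly, but a short candidate list can be checked directly once the transaction secret is recoverable.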
I guess one problem with a lot of the algorithms and constructions that are conjectured to be post-quantum secure is efficiency. They often tend to be much more inefficient than would be reasonable to have on chain. Post-quantum signatures, for example, can get pretty large, and there have even been some ideas for constructions like post-quantum linkable ring signatures, but they're very large. And even if you were to integrate them into the protocol, you're still dealing with today's computers, which still have to store things and do a lot of processing. And if a transaction gets too large or too slow, no one's going to use it. So, ideally, we'd like to migrate the protocol immediately to something that's conjectured to be post-quantum secure, but for a lot of parts of the protocol it's really uncertain whether anything at this point is efficient enough. Research in this area is always ongoing, though, so the state of the art today is almost certainly not going to be even close to the state of the art in 10 or 20 years, further down the line, when maybe we'll have a better understanding of the likelihood of seeing a practical quantum computer. So I think one thing they're trying to do — and again, I don't want to speak for them; this is just my understanding of the work — is to look at which directions in research might at least give us an indication of what could be efficient down the road for protocol improvements. I mean, I guess the general thing is that, to some extent, in the age of quantum computers, a lot of the Internet is kind of screwed in terms of security.
So Monero would, at least to some degree, not be immune to this, and the extent to which different parts of the chain's previous history could be exposed is, I think, exactly what they're trying to determine now. It's really interesting work. But whether you think it's going to be applicable in your lifetime, or the lifetime of people you care about — I think that's also very much a contentious issue right now. I don't think there's universal agreement on when, if ever, folks might end up seeing a practical quantum computer that would affect the projects they use. And I'm not nearly enough of an expert in the area to even hypothesize. Yeah, got it. So, switching gears: what research have you seen, outside of what you've specifically done for Monero, that has been — not necessarily useful, but just really interesting and really surprising? I would say the extensive work that's been done on general zero-knowledge proving systems is really cool. Right now we use very specific zero-knowledge proving systems in different projects for different purposes. Bulletproofs, for example, is a zero-knowledge proving system that does one particular thing involving Pedersen commitments really well. But that's just one application. There's a variant form of Bulletproofs that you can use to take different algebraic statements, massage them into a particular circuit format, and then build a proof over the circuit arrangement in order to prove that you know a witness that satisfies the circuit, without revealing what that witness is. And that's the general form of a general zero-knowledge proving system.
And so if you have a protocol that you really want to implement, and you can put it into this form and build this kind of representation, then there are all sorts of tools you can use right now to prove things about statements in the language you built. Bulletproofs can do that; the proving systems used in the Zcash protocols can do that; there's a lot of work involving zk-SNARKs and STARKs, and a whole host of really interesting research on how to do this and what the different trade-offs are. And like I said before, the holy grail for this is trust-free, small proofs that are efficient to generate and verify — and ideally also support for things like batching, which lets you amortize the cost of verification over multiple proof verifications. Just the fact that there's so much fascinating work going on relating to this is really interesting. Unfortunately, none of it that I've seen applies directly to the Monero protocol, based on what folks want from it right now — right now the whole trust thing seems to be a pretty big sticking point. Like I said, for users who decide that they need a protocol that does more than what Monero can offer, you're probably, at this point, going to have to make a sacrifice on that whole where-does-the-trust-lie question. But at the same time, the fact that the research is ongoing and has undergone so much improvement over such a short period of time speaks really highly to where that area of research is going to go in the future. So I think it would be great to be able to move to a general, idealized, future zero-knowledge proving system that lets us build really cool protocols that give us everything we want. It's not here today; hopefully it's not too far off. There has been a lot of work in that area — it's been shocking. Yeah.
And to be clear, I'm really glad that other projects are doing research on that and putting that kind of stuff into practice. As with all projects, including Monero, I think it's important to talk about limitations and trade-offs — trying to drop things like this into Monero today would break what folks expect from it. But it's still good that those different kinds of applications are also furthering the research. Sorry — I'm also helping some later speakers get set up. Okay, so we have another five minutes. There aren't any other questions that have come in, sadly — just the two from earlier today. So thank you, Andreas, for asking those questions; we appreciate them. No one else came to the office hours. True office hours. No, that's okay. It's good to just recap how things have gone and hypothesize about where they might go. Research is interesting; you never know what's going to come up, what's going to work and what isn't. I think the upgrade in October is going to be exciting, because Bulletproofs was, I think, a cool example of something where transactions got smaller and faster and there were really no big trade-offs that folks had to consider. And I think CLSAG is another example where transactions get smaller and faster — not to the extent that Bulletproofs achieved — and there aren't really any security trade-offs. If anything, the work on CLSAG helped improve the way we understand linkable ring signature security models, so I feel like there's more confidence now about the security of the setup we're going to have. So it's not really a trade-off; it's just kind of an increasing stack of benefits.
You were saying that it would be a little bit different with whatever comes next, because with RingCT, for example, it was an absolutely necessary change to hide the amounts of transactions, and all the other benefits came with it. And I think the issues with denominated Monero were not — they still continue to not be well documented. We know they're bad, but I don't think we've had many research papers yet showing how bad they were. Oh, I mean, the early Monero chain is a fantastic source of material for analysis. It's well understood that the effect of a lot of optionality in a protocol has always been problematic. A huge number of early transactions were effectively, I would say, deanonymized in the sense of ring-based signer ambiguity — again, not in terms of wallet addresses or anything. And I think that still influences work that goes on today. There are still parts of the protocol that have some optionality in them, and you can argue that it's good to have some flexibility in the protocol to allow for alternative use cases that you might not anticipate. But at the same time, a lot of the work that folks like Isthmus and his group have done has shown that optionality can lead to fingerprinting if you're using non-standard software or a service that does something in a strange way. And I think a lot of folks are coming around to a better appreciation of the fact that limiting the protocol to improve uniformity and decrease fingerprinting is an increasingly lopsided argument in favor of limiting the protocol. And there are still things to do in terms of that.
And I think the more we understand about some of the early protocol decisions and how that optionality was bad, the more that can influence better decisions going forward. The protocol will always have the old chain to deal with, and I think the best thing we can do is make decisions going forward that improve uniformity, decrease risk, and improve safety. Understood. I'm very sad to ask the last question here: are there any projects working on voting based on Monero? I know that one Italian political party wrote up that one thing about how they were maybe considering it. Yeah, that was a political party, right? Yeah. I don't know if that ever actually got implemented. I don't think it did either. Voting — I mean, I'm by no means an expert on voting, but everything I know — and the folks who run the Voting Village at DEF CON probably know way more than I do — says that real-world voting is tricky, and if you think you've found all the ways it can go wrong, it'll always surprise you. So I don't know. I thought it was cool to see, in the original LSAG paper, an application to voting, just as a fun academic thought experiment. And who knows — something involving good signer-ambiguous proofs might presumably be beneficial for voting going forward. But I don't think that LSAG or CLSAG would be the thing to use. There are enough efficiency problems that trying to scale it out — because, again, the whole idea was that you basically have all the different possible voting entities as part of the ring, or as the entirety of the ring — just isn't going to scale reasonably well.
And the trust model is so much different for voting that you could probably get away with something that ends up trading certain kinds of trust for — yeah, I don't know, maybe trading that kind of trust for better efficiency. I haven't thought about this nearly enough; other people could speak to it much better, so I'll stop rambling. Yeah, understood. Okay, well, that's the time we have. Thank you so much, Sarang Noether. If people want to follow the Monero Research Lab, there's always the #monero-research-lab channel — Sarang's always there, of course, manning it 24/7. Yeah, there's the IRC channel, and for folks who don't want to be on IRC, you can post questions in r/Monero; there are a lot of folks there who are good at research and development who can answer. On the monero-project meta repository, in the issues there, you can see when all the meetings are, and logs from those meetings are always posted after the fact, so folks can see what people have been talking about. Yeah, and everyone's welcome to walk into the meetings if they happen to have anything they think is interesting or useful research the group might like to see. Yeah — a lot of great contributors, not just me by any means; a lot of really great researchers and folks who are interested in protocol improvements and in improving privacy in the digital asset space. So thanks to all the other contributors. Well, thank you so much, Sarang Noether, for joining us again.