So I'm going to talk about the standardization process in ZK proofs, which we've been hearing about today and yesterday, and I'm sure you've heard about before. I debated whether to include this, since I thought people would have talked about it before today, but I'm actually glad I did, because we are in the age of standardization.

So what does it mean to standardize? To bring into conformity with a standard, especially in order to ensure consistency and regularity. So in some sense it's about benchmarking and conformity. And what is a standard? If you go to Webster: okay, the first definition is a banner, a rallying point, et cetera, but maybe that's not what we want. It is also something established by authority, custom, or general consent as a model or example. So it's really a benchmark: we know what we're doing, and we know that what we're doing is good enough. This is in some sense the goal of a standard, but we know there's more.

So what are the goals of a standardization process in general? Really, you want to create agreement on an object in order to make the world more efficient. This is different from setting a benchmark: creating agreement is for interoperability, for modular design, to help us build a better world more efficiently. But there's also the issue of benchmarking, knowing that what we have is good enough, with different levels of goodness, et cetera, and this is also important. So we are playing between those two things. There are other motivations too, such as protecting business interests; for much of the world's standards bodies this is actually the main goal, but we don't want that, so let's put it aside. Another good thing about standards, and this was said earlier, is that the process of standardization gets people together, and specifically gets people from different walks of life and different backgrounds to agree on something. This process itself, as I'll tell you from experience, is a great thing by itself.

Anyway, a few words about standardizing cryptography. We heard a talk earlier about this, and some great points were made, but let me make some of these points again, at the risk of repetition. When we standardize crypto, the first, obvious thing is to standardize the APIs, like for any software: the interface formats, whatever interfaces there are. And of course we want to standardize operational correctness, again like any software; for instance, if you decrypt with the same key you encrypted with, you get the same message back. Great, but most standards stop here, and for crypto there's much more to do. How do you standardize security? Think about it; it's a hard question. In fact, even how to specify a security requirement is not always agreed upon: there are many different ways to specify security requirements for a given object, even simple ones, let alone complex ones. And even convincing people that we actually need security requirements is not trivial. I can tell war stories from '95, '96. I was a fresh PhD graduate working at IBM, following Hugo, and
he was involved in the standardization process of IPsec. People remember what IPsec is; everybody knows IPsec. These were still the old days of the internet; it was still fledgling, and we were all optimistic. We were sure we were going to design it right: IPsec was going to be the protocol of choice, all of the traffic was going to be encrypted at the IP layer, no traffic hijacking, no nothing. It didn't work out that way. These were also the days when Section 230 had just passed and was still a good thing; we didn't know what it would bring us today, but never mind, that's another story.

Anyway, we were in the IETF standardizing IPsec, and it was really hard to convince people there of the point that when you standardize a security protocol, you should actually have notions of security, and actually make sure that whatever is standardized meets those notions, under some assumptions, in some model. I remember there were some very vocal people there really making Hugo's life hard: this self-appointed cryptographer is trying to make us do extra work. And Hugo, in his calm Argentinian way, with his charm, answered everybody gently and in a friendly way, with a smile, and kind of defused all the antagonism. I learned a lot from that process, not just about crypto, but about how to deal with people. Eventually IPsec was standardized. This story was specifically about HMAC, and I think maybe the great thing about HMAC is that it is perhaps the first cryptographic scheme that was standardized, not exactly but almost exactly, because it had a security analysis. There were other schemes that were slightly more efficient but did not have any analysis, and for good reason. So it was really a great win for Hugo. So, just the fact that there should be a security requirement at all: to the people sitting here this needs no convincing, but it's not so obvious to many other people.

And it doesn't end there. Once we agree on a security requirement, whatever it is, then what assurance do we require? We want a proof, usually a reduction, but a reduction to what assumption, with what parameters? How do you weigh the assumption against the parameters? How do you compare a reduction to one assumption with some parameters versus a reduction to another assumption with different parameters? What's better, what's worse? And then, what kind of cryptographic proof do we want? Do we allow heuristics such as random oracles or not? How far do we go: programmable or non-programmable? How do we talk about all those things and understand them? It's hard. Once you actually want to standardize and compare things, it's not easy; it's not trivial at all. And it doesn't end there either: at what level do you specify the code? It's not enough to specify an API when you're standardizing a security mechanism. Do you specify it down to a specific programming language, and in which programming language? Do you specify everything down to the machine code and the specific hardware? Do you do PL-style, automatic analysis?
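(As an aside, the HMAC construction at the center of that story is simple enough to state in a few lines. Here is a sketch in Python: the manual function below follows the published two-pass construction of Bellare, Canetti, and Krawczyk, and is checked against the standard library's implementation.)

```python
import hashlib
import hmac

def hmac_sha256_manual(key: bytes, msg: bytes) -> bytes:
    """HMAC(K, m) = H((K' xor opad) || H((K' xor ipad) || m))."""
    block = 64                              # SHA-256 block size in bytes
    if len(key) > block:                    # overlong keys are hashed first
        key = hashlib.sha256(key).digest()
    key = key.ljust(block, b"\x00")         # then padded up to the block size
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

key, msg = b"some key", b"attack at dawn"
assert hmac_sha256_manual(key, msg) == hmac.new(key, msg, hashlib.sha256).digest()
```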
So all of these things: APIs, security notions, assurance, the level of specification; it's not clear where to stop. There are obviously many different aspects, but where do you draw the lines? So standardizing crypto is really far from trivial, even for very simple things, if you want to do it right. And standardizing protocols is even more so, because protocols are really complex objects, not just simple schemes. Encryption and signatures are not simple in any way, but protocols are even more complex: there are several parties and different concerns, and it's hard to capture security. Again, it's not clear how to do that right, and how to standardize, compare, and benchmark protocols. And then a protocol also depends on other mechanisms. You can say that you always depend on things lower than you in the stack, but since protocols sit higher in the stack, they depend on more: on the networking stack, and also on the actual network you run in, because it's distributed. So how do you think about all those things?

So you can imagine that when I was approached last January, or even a year and a half ago, by Shafi, saying: how about doing some standardization for zero knowledge? We just started the standardization effort for FHE and it's great; let's do that. My first reaction was: what? We are so far away from there. Let's start crawling before we run marathons. But then she insisted: people are actually using it, and we're going to have to live with whatever they do, no matter what. So why don't we just try to make it better in whatever way we can? Okay. But where do we even start? How do we even think about this thing? And what really can be standardized? So then she said: why not just take one step at a time; let's just have a meeting with whoever is involved, have a committee and a meeting, and then we'll see what happens. Okay. And this is what we did. In retrospect, and we don't know where it's going yet, we're a year and a half into it, we've made a lot of progress but we're still in the first stages, I'm now sure this was the right thing at the right time, the thing to do even if it looked crazy. Because, as we said, the process is sometimes even more important than the end result, and the process has already proved itself, even without anything else.

So here's the steering committee. It started with Shafi, and Daniel here put it together; I would say maybe even more so toward the end. Daniel has been amazing for this process; without him nothing would have happened. I see Daniel is not even listed here. Why? Okay, never mind, we'll discuss it later. Anyway, then we had a first meeting. We just publicized it to whoever we knew; we tried to reach everybody we could. And this was the first meeting: I don't know how many, 60-something people, right? And it was nice, and there were lots of nice pictures. And the agenda: what do you do in such a meeting, right? You get some people together,
and maybe a couple of plenary talks in the beginning, to make everybody feel together and feel happy and believe. So what we did, again taking our example from the FHE effort, was to run three tracks, splitting up into three groups. The tracks were: "security", which was essentially the theory people, definitions and algorithms; "applications", where people tried to map out what applications we have; and "implementations", the third group. Each breakout group met for quite a number of hours over two days to hash things out: people explained what they were doing, tried to understand what everybody else was doing, and tried to come up with a document summarizing what happened. And we did that, and it was very rewarding.

The takeaways; this is maybe my personal take, from the documents and from being there, so correct me if you think differently. First, there was immense interest in putting zero knowledge into practice, with a number of different applications and many different people, which surprised me. And there were three main application areas that we came up with; I think this was an important first step.

One is publicly verifiable non-interactive succinct zero knowledge, i.e., zk-SNARKs. This is for things like anonymous transactions on blockchains, Zcash for instance. What's important for those applications is succinctness and verifier efficiency, because the verifier is everybody, all the miners; and transparency: in particular the reference string, et cetera, should be transparent. And it's supposed to be non-interactive; it's very important that it's non-interactive.

The second setting is designated-verifier non-interactive zero knowledge, again non-interactive, but for a different sort of application: anonymous credentials, supply chains, provenance of data with secrecy. Here it's actually the prover that's supposed to be efficient, because often the prover is the weak party. And you want succinctness, but maybe not as succinct, depending on the application. And then composability of the proofs, because if you want to do things over and over again, or put a credential together, et cetera, you need composability.

And the third setting is interactive zero knowledge, which is mostly for running within other protocols, MPC protocols, et cetera. There it was more application-specific, but overall efficiency and amortization matter, because you usually want many proofs at the same time within some context.

Another thing that came out of this is that we need a common language and common APIs, because otherwise we cannot work together. And when I say APIs, I really mean APIs for formatting, which fields are what, but also security APIs for zero knowledge in this context; I'll sketch what such an interface might look like below. And it seemed clear that most of the interest, at least among the people in the room, was in the first application area, zk-SNARKs; second was the second one; and the third had relatively few people. This impression was reinforced in the second meeting, which I'll tell you about in a minute.
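(To make the common-APIs point concrete, here is one hypothetical shape such an interface could take, in Python. Every name and type here is illustrative only; it is my sketch, not the working group's actual proposal.)

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Statement:
    relation_id: str   # identifies which NP relation this instance refers to
    x: bytes           # the public instance

@dataclass
class Witness:
    w: bytes           # the secret witness, held by the prover only

class NonInteractiveZK(Protocol):
    """Covers the first two settings: publicly verifiable and designated verifier."""

    def setup(self, relation_id: str) -> bytes:
        """Generate the (common or structured) reference string."""

    def prove(self, crs: bytes, statement: Statement, witness: Witness) -> bytes:
        """Return a proof string; succinctness requirements differ per setting."""

    def verify(self, crs: bytes, statement: Statement, proof: bytes) -> bool:
        """Public in the first setting; may need a verifier key in the second."""
```

Pinning down such formats is the easy half; the security contract behind prove and verify is the harder half, which is what the rest of the talk is about.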
So then, some interim work that we did. First, it took a while to actually twist the arms of the people who were supposed to write things up after the first meeting, but we got documents from the three breakout groups, and they were actually very useful. And we saw that this was really interesting, and it was clear that it was going to continue, that it was not going to be a one-off thing. So we should take ourselves a bit seriously: we set a charter, and a code of conduct, and a standardization process, which we adapted partly from NIST and partly from the IETF. So we set a process for ourselves. And then we compiled a document, I think Daniel compiled it, overviewing the technology and setting some terminology. And I think it's a great document. Even if you don't want to participate in standards, if you are interested in zero knowledge, it's really worth reading. It's broad, it's extensive, it provides an overview of what's out there and what the interest is, and it sets some common language and terminology, which is very important.

And then we set a second meeting, which happened last April. For some reason there's no picture of everybody together, but there were lots of small pictures. Okay, too bad; sorry, I took it from the wrong page. Anyway, the second meeting was much bigger: over 150 participants, something like that, really more than twice the first one. And it was three days rather than two days. And we had many presentations, around 30 of them, from applications to theory; it was really interesting. We had two panels and several breakout sessions on different topics, and most importantly, lots of open discussion; really a lot of discussion there. And one thing that was clear is that the community converges, at least the people there, mostly, again, on zk-SNARKs, the first bullet from before, but not exclusively; there is of course interest in the other settings. But most of the interest seemed to be in publicly verifiable, verifier-efficient succinct proofs, in the context of payments, blockchains, et cetera. And of course there is a lot of innovation there. In fact, we currently have three standards proposals under discussion. There is a mailing list where they are discussed, so feel free to look at the agenda and the mailing list, to contribute, and to contribute your own proposals. It's really a vibrant community and many people participate.
So with the rest of the time, I want to talk more about the API, the zero-knowledge API, and a little about the way we are thinking of doing it, which is via the UC framework. There was a great talk about it yesterday, with great motivation, by Andrew; I could not have motivated it better. The more we deal with this, the more it seems that if we want a reasonably good, useful standard for cryptographic primitives that go beyond the basics of encryption and signatures, for protocols and primitives that allow the construction of sound software and applications that don't just interoperate but actually are secure, then this is in some sense the only way to go. I mean, it's not perfect, it's far from perfect and there are issues here, but we just don't have anything else.

Let me say one more thing about this versus the IETF standardization process; maybe I'll say it clearly now. In some sense, the IETF standardization process is inherently orthogonal to security, because what they standardize is the API in terms of what goes on the wire. They have a philosophy of not standardizing things beyond that. They do now realize that sometimes you have to standardize how things operate inside the endpoints too, not just on the wire, specifically for crypto, but it is very limited. The IETF is really in the mindset of standardizing for interoperability, just what goes on the wire: we don't want to mess with the internal implementation, because that's none of our business. And this is inherently incompatible with security, because if you want to talk about security, you have to talk about security at the APIs; not at the link level, but at the APIs inside your computer, between the application that calls you and the protocol. It's impossible to talk about security without that. And the IETF is, in some sense, committing some crimes there. For instance, they talk about TLS security. It's really meaningless to look at the TLS standard and say it's secure or not secure, because the relevant things are not specified there. The API between TLS and the application is not even written down, because it doesn't need to be; it shouldn't be, if you're only standardizing the communication API. However, you can't talk about security without talking about the inter-process APIs within the endpoints. In fact, I've been talking to people who implement TLS perfectly by the book, but at the endpoint all the keys sit in some database that is accessible to everybody on the machine. That's compliant with the standard, nobody says otherwise, but you cannot call it secure. So if you want to standardize security protocols, you have to do it differently. You cannot do it IETF-style, in the sense of pure interoperability; you have to go deeper into the guts of the protocol and talk about the API, and also about the implementation. Okay, this was just a side comment.

That said, let me talk about how we're thinking about doing the API for zero knowledge, using the UC framework. Again, this is a work in progress. I stole these slides from Muthu, who gave this talk at the April workshop; I'll follow them up to a point, but then I'm going to do something different, because I think we've learned more since April.
And this is still a work in progress; it is indeed a work in progress. Okay, so let's go back to the roots: zero knowledge, GMR 1985. What's the idea of zero knowledge? For any polynomial-time adversarial verifier, there is a polynomial-time simulator such that whatever the verifier learns from the interaction with the prover can be simulated by the simulator all by itself, knowing only the input x, without talking to the prover at all and without anything to do with the witness. Okay, so this is the basic idea of zero knowledge that I think we all know and love.

But now we want to actually use it in real life, and we know that this definition doesn't compose. Let me not tire you with examples and details; half the people here know them, and for the other half this is not the place to learn. The point is that zero knowledge doesn't compose naturally, let's say: you can have protocols that satisfy the definition, but when you run two of those protocols alongside each other, or one protocol alongside something else, all security breaks; the witness comes out, there's no security whatsoever. This doesn't mean that zero-knowledge protocols cannot be used in protocol settings in general. It means that the natural way to compose them, the way we think about it in programming languages, where you have a subroutine and you call it, and then another subroutine and you call it, doesn't work. What this means is that the definition is really unfriendly to building secure protocols; you have to do something else. This is the problem that UC is trying to resolve.

So the UC approach is to make something that you can use as an off-the-shelf subroutine, as you would in any programming language with any library of subroutines, just with security. So what is the idea? The idea is that the security of the system you build, the protocol you build, is reflected only in the way it affects the external system around it: both the honest parties that use it and the side effects, the side channels, whatever the adversarial effects are. Therefore, in order to capture the security and functionality of some system, your protocol P, you do it in two steps. First, you write an ideal functionality that captures the desired effects. It's important that the desired effects here are both the functionality, in the sense that if you decrypt a message encrypted with a given key then you have to get the same message back, or, in zero knowledge, that if x is in the language, if x and w really satisfy the relation, then the verifier should say one; and the security, which means that you don't learn anything else: nobody learns more than what they are supposed to by the functionality. That means the functionality should define both what every party should learn from the interaction, and also what every party, and what combinations of parties, should not learn. So you have to write a functionality that does that; we'll get to a bit of how to do that.
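(In rough symbols, my notation rather than the talk's slides, the classical GMR guarantee recalled above reads:

\[
\forall\ \text{PPT } V^{*}\ \ \exists\ \text{PPT } S\ \ \text{such that}\ \ \forall (x,w)\in R:\quad \mathrm{View}_{V^{*}}\big[\,P(x,w)\leftrightarrow V^{*}(x)\,\big]\ \approx\ S(x).
\]

Note that this quantifier structure speaks only about a standalone execution; it promises nothing about what happens when the same prover or verifier also participates in other executions, which is exactly the composition problem just described.)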
And then, once you have that, you say that a system securely realizes this functionality if it looks the same to any external environment; how exactly this is formalized doesn't really matter right now. But the point is that F is there only in order to specify the effects; you don't really care about the specific efficiency of how F is written, although for composition we do need F to still be polynomial time. The point is that this is the core of our exercise, because this F is going to be the API, and the API captures within it both the functionality and the security. And we need an API that, on the one hand, we can realize, and on the other hand, we can use; so it's usable.

So what does secure composition mean? The composability of this framework comes into effect in the following statement: if you run the protocol, even in a concurrent setting where it runs concurrently with many other protocols or with many instances of itself, then running the protocol is as correct and as private as if you replaced every instance of the protocol with the ideal functionality. So here is a system with two instances of the protocol: there's a man in the middle, or a woman in the middle, between two endpoints, running two instances of the protocol. And this should look the same to everybody as if this woman in the middle were actually interacting with two instances of the ideal functionality. The way this is formalized in the definition is that you take all these components: for any adversary here, there is an adversary there, which is the simulator; and the rest of the interaction, the calling protocol if you want to call it that, is replaced by an environment, which is just a machine that tries to distinguish between the real and the ideal. Okay, so this is how the definition works and how the logic works. And both the adversary and the simulator, and also the environment, are polynomial time; never mind the details.

Okay, so what do we know about realizing things with UC security? In general, you cannot do it for arbitrary protocols. In particular, for zero knowledge, you cannot do zero knowledge from scratch with UC security, as opposed to the classical definition, which looks like a bummer; but you also cannot do non-interactive zero knowledge from scratch, you need some setup assumption. And in fact, the same setup assumptions that work for non-interactive zero knowledge also work for UC security. So with such a setup, we can actually realize it: either with an honest majority, which is not our case here, because for zero knowledge we have only two parties; or with a common reference string, or a public-key infrastructure, or some timing model; there are a bunch of different options. For our case, zk-SNARKs, we're talking about a reference string, whether privately or publicly generated. There are relaxations, but never mind that for now.
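(In the same rough notation: a protocol π UC-realizes an ideal functionality F if for every adversary there is a simulator such that no environment can tell the real and ideal worlds apart:

\[
\forall\, \mathcal{A}\ \ \exists\, \mathcal{S}\ \ \forall\, \mathcal{Z}:\quad \mathrm{EXEC}_{\pi,\mathcal{A},\mathcal{Z}}\ \approx\ \mathrm{EXEC}_{\mathcal{F},\mathcal{S},\mathcal{Z}},
\]

and the composition theorem then says that any larger protocol that uses F as a subroutine keeps its security when F is replaced by π, even across many concurrent instances.)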
So let's talk about the vanilla zero-knowledge functionality, the way it was thought about early on. What is the API? There is this environment, which sits above, and it gives the prover an ID, an identifier for the transaction or whatever, and an x and a w, a witness. And it gives just x to the verifier. In the real execution, the prover and the verifier interact, and then the verifier just gives out its output at the end. In the ideal model, there is no protocol. Instead, the prover and the verifier interact with the ideal zero-knowledge functionality, which works like this. The prover gives x and w to the functionality; the functionality records x and w, and just gives out to the verifier the x and the one-bit answer. And that's it. So this means two things. It means that the verifier learns the right bit, which is what it's supposed to learn; and it means it doesn't learn anything else, so everything else is completely hidden from it. So in the ideal world you have perfect soundness and perfect zero knowledge.

This is great, and it's a nice functionality, but it's not really good for us, specifically for the application of zero-knowledge SNARKs, because in this interaction there is never a proof string. The verifier just gets the one-bit answer; there is never a proof string. However, sometimes you actually need a proof string. In particular, in our applications we need a proof string because you need to prove things on top of it, right? You need to compose things. We need a proof string in order to maybe sign it, or to show it later. So the proof string has meaning and has value in the context of zk-SNARKs. So this functionality is too ideal in some sense: it hides too much of the implementation, and a part of the implementation, namely the proof string, actually is important. Therefore, we need to write a different ideal functionality that captures that. And that's actually a lesson in general: if you provide security requirements in this style, the way you write the ideal functionality is very important, because for the same primitive you can define security one way and it works, and another way and it doesn't work. So it's really important to do it right.

So here's another way to write a zero-knowledge functionality, for non-interactive zero knowledge. This way is from a paper by, I think, Groth, Ostrovsky, and Sahai. What they do is split the proving and the verification into two different parts. So first the prover: let's think about how it works in the real world. The prover gets an x and a w from whoever calls it, and it returns a proof. Well, not exactly, because there's a CRS: the prover actually first gets a CRS, and then it goes and sends back the proof. So that's the prover side; the verifier will then verify. But let's stay with the prover. The ideal NIZK functionality will have to look like this: it gets the x and w from the prover, it records x and w, and it gives x to the simulator, not w; the witness stays secret. It gets back some proof string and reference string, whatever, and sends that back.
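(Putting the prover side just described together with the verifier side described next, here is a toy sketch of such a proof-string functionality. This is my own illustrative pseudocode, not the formal Groth-Ostrovsky-Sahai definition; session IDs and the reference-string bookkeeping are elided.)

```python
class FNizk:
    """Toy NIZK ideal functionality with an explicit proof string."""

    def __init__(self, relation, simulator):
        self.R = relation        # R(x, w) -> bool, the NP relation
        self.sim = simulator     # the ideal-world adversary (simulator)
        self.accepted = set()    # (x, proof) pairs known to be backed by a witness

    def prove(self, x, w):
        """Prover side: record (x, w); the simulator chooses the proof string."""
        if not self.R(x, w):
            return None                      # no proofs of false statements
        proof = self.sim.simulate_proof(x)   # simulator sees x only, never w
        self.accepted.add((x, proof))
        return proof

    def verify(self, x, proof):
        """Verifier side: accept recorded pairs; otherwise ask for a witness."""
        if (x, proof) in self.accepted:
            return True
        w = self.sim.extract_witness(x, proof)   # second chance, for dishonest provers
        if w is not None and self.R(x, w):
            self.accepted.add((x, proof))
            return True
        return False
```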
So again, here the proof string is part of the functionality. And then the verify side is the same kind of thing: the verifier gets the reference string, the proof, and the x, and is supposed to answer zero or one in the real protocol. In the ideal model, the verifier gives these to the functionality, and the functionality just looks up its database to see if the pair is there. If it's there, it says okay. If it's not there, it actually goes back to the simulator and gives it another chance to supply a proof, to handle the case where the prover was dishonest; never mind the details. Either way, the functionality doesn't say yes unless it actually sees a witness and verifies that the witness works. So again we get perfect soundness and perfect zero knowledge, and now we also have a proof string. So far so good.

But in the twenty seconds I have left, let me get to the issue of succinctness, because SNARKs have to be succinct, right? And there is an issue with succinct proofs: we know they cannot be proven secure under standard assumptions. Unless one day we are smart enough to do non-black-box reductions, in which case everything will be fine and we'll be able to prove SNARKs from one-way functions; but until then, we don't know how to do that. So the only thing we can do is use knowledge assumptions, and those assumptions only work for specific types of adversaries and simulators. In particular, in the context of UC security, the type of extraction that we need, if we go for UC security directly on the protocol, is actually impossible, assuming indistinguishability obfuscation (iO) exists. And iO exists, so it's impossible. So we actually have no way to realize NIZK in the right way, so that we can use it composably in applications and prove it. So what are we doing? We climbed up a tall tree and now we don't know how to get down, which looks like a problem.

But luckily there is a way out, and it can be done in restricted settings. This is again ongoing work, but the restricted setting is the following; let me just finish in a second. Suppose the witness is of a restricted form. There is an x and there is a w, the witness; but in addition, we assume the witness is structured: there is a seed witness, which is short, and the rest of the witness, which may be long, is actually computed in polynomial time from the short seed. And the point is that if this is the case, and this is the criterion we want, then what we show is that you can actually have UC-secure SNARKs such that the proof size and the verifier complexity depend only on the length of s, the seed of the witness, and of course the prover complexity is polynomial in the statement, as before. And the observation is that for the actual applications of SNARKs, in particular Zcash, et cetera, this is enough; we don't need more than that. So we are going to be fine. This, again, is work in progress; it's being written up, hopefully out soon. But anyway, I'm out of time.
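(In rough symbols, with my notation and with Expand standing for the assumed polynomial-time map from seed to full witness, the restricted form and the efficiency claim are:

\[
w = (s, t),\qquad t = \mathsf{Expand}(s),\qquad |s| \ll |w|,
\]
\[
|\pi|,\ T_{\mathrm{Verify}} = \mathrm{poly}(\lambda, |s|),\qquad T_{\mathrm{Prove}} = \mathrm{poly}(\lambda, |x|, |w|).
\]
)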
[Audience question] What about constructions analyzed in these algebraic group models, that don't rely on concrete assumptions, that don't have proofs under concrete assumptions? What do you think about those? Are they up for standardization? Should we ignore them? What's your...

Okay, so that's a great question, because I don't know; I'm looking at that. So, those constructions: you can show that they realize UC zero knowledge, and you don't need this crazy machinery to do it in general, right, et cetera. I don't know. That's again an issue of community decision, right? And the thing is, what will happen if something goes wrong? People in that community are telling VCs, and people who put money into those things, that we have zero-knowledge proofs without assumptions: no setup and no assumptions, kind of forgetting that there is a random oracle in there. Then what do you do? One day somebody will break this, because the hash function doesn't work, and then what do we do? Then all of us look crazy. I don't know. That's my take, but... Yeah.