Someone's got to come up with a hacker handle for him before the end of the session, so think on that while he talks. All right, thanks. Can you hear me? Cool, I have to stop moving. All right, welcome. So this talk describes a vulnerability in Azure B2C, as the title sort of implies. It lets you impersonate basically any user in any tenant that uses Azure B2C. It has been fixed in a couple different ways. There are some lingering remediations that I'll talk about in a little bit. So my name is John Novak, as he was saying. I'm a technical director at Praetorian. Most of the time I do security assessments for IoT, mobile, cloud, different product security assessments. I do have a background in math and crypto, and I'd say I have a knack for identifying what good crypto looks like in common implementations. And this one definitely falls in that category. So to get this out of the way at the beginning, I believe that demonstration of security vulnerabilities is really vital for actually getting somebody to fix something. Many CVEs are theoretical, but demonstrating them is key for getting remediation. The neat thing about a crypto vulnerability is that I can prove to you that this is a vulnerability without actually telling you how I did it. In this case, I can provide a token, which is in this QR code here, which is signed by Microsoft with a key that I don't control. But the contents are all values that I do control. And in particular, the email address field in the token is one that's not owned by me. A token like this one could be used to enumerate all bug bounty submissions to Microsoft through their web portal, using only the email address of a victim. All right, so before I get into the weeds of this vulnerability, I wanted to quickly cover some background on crypto, JWTs, and Azure B2C in particular.
So when most people think of cryptography or encryption, they think of AES, or symmetric encryption. Basically, in this mode you have a secret key that is used to encrypt plaintext, and the same secret key is used to decrypt the ciphertext. The con is that you need to negotiate the secret key in advance; both parties need to have it. This does provide confidentiality. It can also provide some integrity, although with some block cipher modes the ciphertext can be modified in transit without you being able to detect that it's been tampered with. Conversely, for this talk, asymmetric encryption is done with asymmetric algorithms like RSA or elliptic curves. In this setup, you encrypt with a public key and then decrypt with a private key. And there are reasons you'd want to do this, right? Say, for example, you have email with GPG or something set up. You want anyone with the public key to be able to send you a message that only you, with the private key, can decrypt. Because it's constructed this way, there is essentially no integrity in the encryption, right? You don't know who sent you the message unless it's been signed out of band of this method. So that plays a big piece in this vulnerability. So building on encryption types: when most people think of JWTs, or JSON Web Tokens, they think of this construct, a signed blob, a JWS. Essentially there's a header, some plaintext in there, and then a signature on the end. These definitely have integrity, you can't modify the contents without invalidating the signature, but there's no confidentiality, right? I can just read the contents. Another variant of JWTs is what's called a JSON Web Encryption, or JWE. Essentially this is an encrypted version of the same token.
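The integrity-without-confidentiality property of a signed JWT can be sketched in a few lines. This is a minimal illustration using an HMAC-signed (HS256) token with a made-up key; B2C's actual ID tokens are RSA-signed, but the point, that the payload is readable by anyone while modification breaks the signature, is the same.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url for all three segments
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token: str, key: bytes) -> bool:
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

key = b"demo-secret"  # made-up key for illustration
token = sign({"email": "alice@example.com"}, key)

# Anyone can READ the payload without the key (no confidentiality)...
claims = json.loads(b64url_decode(token.split(".")[1]))

# ...but swapping the payload without re-signing breaks verification (integrity)
h, _, s = token.split(".")
tampered = f'{h}.{b64url(json.dumps({"email": "mallory@example.com"}).encode())}.{s}'
```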
The first block there is just the header values, which tell you which algorithms are used, and the rest is all pieces of the encryption. Based on which encryption algorithm you use, this will or will not have integrity. And in particular, if you use asymmetric encryption, there's essentially no integrity, because anyone with the public key could construct an encrypted token using it. All right. So Azure B2C is the basis of this talk. It is a service developed by Microsoft; B2C stands for business-to-consumer. Essentially, it lets you offload all of the authentication and session management, everything else, from whatever you're developing onto this Microsoft service. You can set up user flows to do things like login, password resets, that sort of thing. You can integrate with Twitter, Facebook, and Google logins. There's a lot of documentation out there I'm not gonna repeat, but if you're interested, go take a look. So when you're configuring your B2C tenant, you set up an application. An application is really something like a single-page app or a native app, like a mobile or desktop app. There are several fields you have to fill in, and on the right here you can see what this looks like in the Azure portal: you specify your OAuth flows, your redirect URIs, that sort of thing. To set up the user flows that go along with applications, you use something called the Identity Experience Framework. Essentially, you write a bunch of user flows in an XML syntax, upload them to your Azure portal instance, and they're used to configure how users log in, right? You enter your email, you enter your password, your MFA, whatever you have. There are various starter packs and community-developed samples on GitHub that you can download and install.
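The "no integrity" point above can be made concrete with textbook RSA on toy parameters. This is purely illustrative (tiny primes, no OAEP padding, nothing like a real 2048-bit key): anyone holding only the public key can produce a ciphertext that decrypts cleanly, so a decryptable ciphertext proves nothing about who created it.

```python
# Textbook RSA with tiny primes, purely to illustrate the integrity gap.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (server-side only)

attacker_message = 1234             # arbitrary content the attacker chooses
ciphertext = pow(attacker_message, e, n)   # needs ONLY the public key (n, e)
recovered = pow(ciphertext, d, n)          # the server decrypts it happily
```

Nothing in the decryption step distinguishes an attacker-crafted ciphertext from a legitimate one; that distinction has to come from a signature, which is exactly what a nested (signed-then-encrypted) token would add.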
And in particular, there's a tutorial, which I'm going to get to in a minute, which talks about signing and encryption keys and how to set up these policies. The image here is just the setup using the basic starter pack, which has things like profile editing, password reset, sign-up, sign-in, that sort of thing. All right, so who uses B2C? Once you configure it, you set your tenant name, and every B2C tenant is a subdomain of b2clogin.com. You don't get a unique certificate, so you can't look at certificate transparency logs, but you can just Google online and find services which will enumerate all subdomains of that one domain. It turns out that Microsoft also has, publicly on their website, these customer success stories, which announce which popular organizations are actually using their service. So just from their website, there's a healthcare provider in the UK, a university in Japan, a manufacturer in the US, and a government ID service for a country of five million. And even in my own personal life, I noticed that my own power company actually uses Azure B2C. So it seems to be all over the place. All right, so part one of this vulnerability is in this next section. And it all started basically when I started reading the documentation, right? As I mentioned, they have this getting-started guide, and what you see here on the right is just a screenshot of their documentation. It says to set up a signing key with type RSA and usage signature, and then an encryption key with type RSA and usage encryption. Knowing a bit about asymmetric encryption, this struck me as a little strange, and so I began to dig deeper. On the left here is a screenshot of what it looks like in the Azure portal when you're setting it up. It's just a click box, auto-generate-your-keys sort of thing.
So once you set it up, your environment gets configured with this OAuth login flow, and I'll step through it briefly here. You get your login page, then you post your credentials, like your email, password, MFA, that sort of thing. Once you've completed your authentication, you get a code, you submit that to the token endpoint there, and once it's validated, it'll provide you with an ID token and a refresh token. And then at some point later in your session, say when your ID token has expired, you can present your refresh token again and get a new ID token and a new refresh token. The format of the ID token is a JWS, signed with that signing key you configured before. And the format of the refresh token is a JWE, encrypted with that encryption key you set up. So when you generate keys automatically using their getting-started documentation, what you find is that it looks like this in the Azure portal. If you're the admin for your portal and you log in and look at all the keys, you'll see the two keys on the right here, the token signing key and the token encryption key. You can't actually export these, and what you're actually viewing is just the public content, not the private-key part. If you instead try to generate an AES key, which is symmetric instead of asymmetric, you'd get something that looks like this screenshot on the bottom, very similar, but again, the actual key portion is not listed in the portal and you can't export it. So it turns out that running through the default setup, if you chose secret keys instead, then once everything was set up and you logged in with a new user, you would get to this error page. The error message there says, "Encryption key must be a 256-bit key."
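The refresh leg of the flow described above boils down to a single POST to the tenant's token endpoint. This is a hedged sketch only: the tenant name (contoso), policy name (B2C_1A_signin), and client_id below are all placeholders I made up; the real values for a given tenant show up in a proxy capture of its login flow.

```shell
# Hypothetical tenant "contoso", policy "B2C_1A_signin", placeholder client_id.
# Exchanging a refresh token for a fresh ID token and refresh token:
curl -s "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1A_signin/oauth2/v2.0/token" \
  --data-urlencode grant_type=refresh_token \
  --data-urlencode client_id=00000000-0000-0000-0000-000000000000 \
  --data-urlencode "scope=openid offline_access" \
  --data-urlencode "refresh_token=${REFRESH_TOKEN}"
# On success, the JSON response carries a new id_token (JWS) and refresh_token (JWE)
```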
So it seems like what happens is that on the backend, the auto-generation function is generating a key of, say, 128 bits or something like that, but then when it's generating your token, it's expecting a key of a different size. So basically, if a user on the street wanted to just use this out of the box, they would hit this error message and say, what the heck's happening? So it seems unlikely that anyone actually uses symmetric keys in practice. However, instead of the auto-generation function, they do have an option for generating a key locally and uploading it to your environment. In this way, you can know what your key is for your environment, and this is great for vulnerability research in particular, because I can decrypt my own tokens with keys that I know. All right, so let's dig a little bit deeper here. When you go through a login flow with the openid scope, you will get an ID token returned to you, and as I mentioned before, this is signed with the signing key you configured. When you go through a login flow with offline_access, you will get a refresh token, similarly encrypted with the token encryption key that you configured. Putting the ID token aside, if you look at the refresh token, you'll notice that it has some headers in the first block there, but it doesn't have all of them. In particular, it doesn't have the algorithms that are used to encrypt the token, but it does have the key ID and the DEF (deflate) compression method in there. So in my own test setup, I can upload my own key, go through my login flow, get my tokens, decrypt them with the key that I know, and do a little trial and error to figure out what the algorithms were. It turns out it's what you might expect: RSA-OAEP for the outer layer of the JWE encryption, and then 256-bit AES-GCM. And once you've done all that decryption, the actual contents are just compressed with zlib.
One thing you do notice is that once you've gone through all those layers, it's not actually a nested JWT. That's a construct where essentially you have a signed token, and then you encrypt that signed token, so that you have a signed blob inside of your encrypted blob. Without being nested, there's essentially no integrity, right? I could craft my own token. And so this is roughly speaking what the two tokens look like. If I go ahead and decrypt my refresh token, it looks something like the field on the left; I've formatted it nicely so you can see it. Essentially, when you have a refresh token of this type and submit it to the token endpoint, you'll get an ID token corresponding to the values in your refresh token. In bold here, I have a couple of fields which are modified, showing that if I modify some fields in my refresh token, they're actually reflected in my ID token. These values don't have to match whatever's on my account; they could be anything I wanted. You can see I changed my name there and added some extra parameters that may or may not mean something to whatever environment I'm working in. So as I've said, with the known format and a public RSA key, I can generate and encrypt a refresh token with any contents I want, submit it to the token endpoint, and get an ID token. It should be noted that this public key is exposed in the Azure portal only to these three fairly privileged admin user roles. And so in the first step of this attack chain, you essentially lop off the whole authentication flow: if you have some, say, unknown means to recover this public key, then using what I just showed, you can generate your refresh token, submit it, and get a new ID token. So I went ahead and did a whole bunch of this research about two and a half years ago and submitted it to the Microsoft Security Response Center.
My submission at the time mentioned that you need this public key, and you can get it with this read-only role, but it's not exposed in any other way; it's not like there's some secret endpoint I could just query. And so after a little bit of back and forth, they closed this issue with essentially no action taken in April. Personally, I'd argue that if your security depends on hiding a secret key, or a public key, then it really isn't secure, right? You're just hiding something that should be public. And going a step further: on security assessments, oftentimes we'll get source code from our clients, and we wanna make sure that even with source code access, there shouldn't be anything like hard-coded keys or backdoors. Similarly, in cloud environments, I'd argue that with read-only access to a cloud environment, you shouldn't be able to do things like read your database, read keys, elevate privileges, or anything else. So that's my two cents here. If that was the end of the story, I probably wouldn't be here talking at DEF CON, but there is a part two to this story, and in particular, it's a side-channel attack. So the objective here is to recover the public key, right? We have all the other pieces of this attack chain; all we need is this public key. For RSA, the public components are N, a modulus of 2048 bits, and then E, your exponent. For all intents and purposes, 99% of RSA implementations have a fixed E value, so we'll just say that's known. But the modulus is really what we want to figure out and recover. Another thing we know is that any time you encrypt something, you get a ciphertext, and that ciphertext is necessarily gonna be less than N, based on the math that's involved in RSA. So you can get a rough lower bound for what N is. This, you know, is not very practical.
It would take an inordinate number of samples to actually recover something. But it's a stepping stone here. So what do I have here? When I have a refresh token and I'm submitting it to this B2C endpoint, effectively what's happening at some level is it's gonna call decrypt_rsa with a ciphertext that I get to control, and then the other components of my RSA key, the N and the D exponent. So essentially I can feed in whatever ciphertext I want. Looking at what decrypt_rsa does at a 10,000-foot view, it effectively does these math operations. It computes your plaintext as your ciphertext raised to the power D, mod N. It then verifies the OAEP padding, which involves doing a SHA hash on your plaintext. If that is verified, then it'll return your plaintext. Otherwise it's just gonna chuck out an error and say you provided some ciphertext that didn't match. It turns out that this is somewhat computationally expensive; the modular exponentiation and the hash both take non-negligible CPU time to compute. So suppose there is a crypto library that does this. Since we know the ciphertext is gonna be less than our value N, why not just add an if statement at the top that says, if your ciphertext is too big, just error out? Don't do the rest of the decryption, save yourself some time, and return the error. This shouldn't actually expose any information, because, again, N is supposed to be a public value. And so there's no crypto bug in any library that's out there just based on this. However, what you do get is a time differential: if I provide a ciphertext that is greater or less than N, this decrypt function is gonna return in one of two different times. And so if I can observe the timing difference, I can glean some information about what N actually is. All right, so I did this.
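The hypothesized server-side logic can be sketched with the same toy RSA parameters as before. To be clear about assumptions: the early range check is my guess at the library optimization, and a single SHA-256 call stands in for the full OAEP verification; real keys use a 2048-bit modulus.

```python
import hashlib

# Toy parameters (61 * 53, e = 17); a real key has a 2048-bit modulus
N, D = 3233, 2753

def decrypt_rsa(ciphertext: int) -> int:
    # Hypothesized optimization: out-of-range ciphertexts fail fast,
    # skipping the expensive math below. That shortcut is the timing leak.
    if ciphertext >= N:
        raise ValueError("ciphertext out of range")
    plaintext = pow(ciphertext, D, N)                       # modular exponentiation
    hashlib.sha256(plaintext.to_bytes(2, "big")).digest()   # stand-in for OAEP verify
    return plaintext

# A querier comparing response times for ciphertexts just below and just
# above a guess learns one comparison against N per request.
```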
I took a JWE with an encrypted-key ciphertext of 2^2047, and then one with 2^2048. I knew these values would be, respectively, less than and greater than N. I submitted both of these tokens to my B2C token endpoint, then observed the response times and compared them. One thing I will note is that you're trying to observe a time difference that is very small here. So doing this from my home laptop, over my Wi-Fi network, all the way across the internet to the endpoint is gonna introduce a lot of jitter in your timing. I tried to reduce this as much as possible, doing things like running on Azure cloud infrastructure to shorten the distance between where my code is and where the endpoint is. I also tried to avoid some load balancing by fixing IP addresses, so that I wasn't hitting different B2C token endpoints every time I submitted a request. And then other things like pipelining your TLS session and stuff like that. So I submitted 2,000 requests of each type and plotted them, and essentially what you see is that the smaller ciphertext is in blue here and the larger one is in red. You'll note that the curve for the blue one is just ever so slightly to the right of the other one. So this really means that there is a timing differential, and it is observable even across the internet to these token endpoints. These graphs are very close together, right? They're kind of noisy. But it is there, and it is within reach for a timing attack. In this instance, I recorded the averages at 28.1 and 26.8 milliseconds. If you ran this at home, it might differ based on where you're running from. So let's generalize: we have a timing attack that tells you if you're greater or less than a certain value. As any undergraduate computer science major will tell you, why not use a binary search, right?
So you take this timing differential, you submit a bunch of samples starting with an upper and lower bound and a midpoint, and then you make a judgment call on whether the time of your midpoint more closely matches your upper or your lower bound. Then you shorten your search space by half and repeat. So essentially this timing attack lets you recover one bit of the public key for each round. Again, I did this on an environment that I did not control. Because you're running a timing attack, each of these rounds is not fully reliable; there are gonna be instances where you do your timing attack and you actually guess incorrectly about where the midpoint was. So I had to build in some logic for backtracking, backing up one step in my binary search, so that I didn't just go down a rabbit hole that was incorrect. Additionally, the Azure token endpoint has rate limiting, so I couldn't just spam it with requests as fast as I wanted to. But all in all, I implemented this attack, ran it on an environment that I did not control, and it recovered about 50 to 55 bits an hour. This may seem slow. However, refresh tokens are probably valid for something on the order of 90 days, and so the keys associated with them are likely valid for something like years. Even if your attack takes a day and a half for a key that lasts a year, I think that's perfectly reasonable. This graph here is the attack in action as I implemented it. You see, in the zoomed-in part there, the jitter is essentially the backtracking, and the little gaps in there are the rate limiting that I was hitting. So there we go, the attack is complete now. I have this timing attack with which I can recover the public key, I can generate a refresh token with any user details, and then I can post that refresh token and get back an ID token with whatever I want.
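Stripped of the timing statistics, the search above reduces to: given an oracle that answers "is this ciphertext below N?", halve the candidate interval with each query. A sketch with a direct comparison standing in for the timing measurement (the real attack wraps each query in repeated sampling, backtracking, and rate-limit handling, and runs for 2048 bits rather than 64):

```python
SECRET_N = 0xF00DFACE12345677   # pretend modulus: the oracle knows it, the attacker doesn't

def below_n(c: int) -> bool:
    # Real attack: submit a JWE whose RSA ciphertext encodes c and compare
    # response-time distributions. Here, a direct comparison for clarity.
    return c < SECRET_N

def recover_modulus(bits: int) -> int:
    lo, hi = 0, 1 << bits        # invariant: lo < N <= hi
    while hi - lo > 1:           # each oracle query recovers one bit
        mid = (lo + hi) // 2
        if below_n(mid):
            lo = mid             # mid < N: raise the lower bound
        else:
            hi = mid             # mid >= N: lower the upper bound
    return hi                    # interval has shrunk to exactly N
```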
So essentially, again, I can use this to compromise any B2C tenant that I like. Going back to the very beginning here: demonstration, right? I'm sure some of you in the audience have submitted vulnerabilities to Microsoft, and when you did, you likely went through the Microsoft Security Response Center. When you go to the login page, you'll notice that it is msrcweb.b2clogin.com. So essentially Microsoft is using B2C as their authentication service for submitting vulnerabilities to their platform. It seems like a perfect target: it's a Microsoft property, so it's not gonna violate their terms of service, but it's gonna demonstrate that I can do something on a live environment that isn't my own. So as I mentioned, you just go through this login flow. It's basically as easy as running your session through a proxy like Burp and observing all the back and forth. You'll see basically all the components of the login flow you need, all of the artifacts that are part of it, and how you get the refresh token at the end. So there's really no guesswork in how to set up the OAuth login flow in a custom environment; you can just replay requests as necessary. So, right, I ran the attack. I recovered the public key for this MSRC key ID. I used the known format to make a refresh token for a victim that I did not control. I encrypted the contents, sent it to their token endpoint, and then got back an ID token for that fake victim user. And so here we are, back at the beginning again. This is one of a couple ID tokens I was playing around with. If you look at the actual contents in there, the email address is alice.bob@example.com. I don't own that domain, don't have access to that account, nor do I actually think it's a bug bounty researcher.
And I also, for kicks, added an additional parameter, defcon31, in there, just to show that you can inject whatever claims you want into your ID token. So if you're sitting in the audience and want to verify this: I haven't checked today, but I believe that the public key associated with these ID tokens is still the same. You can just view it online, decode it, and validate that this is a real token. So what do you do with these tokens? Well, on this domain, probably one of the most interesting things you'd want to do is list vulnerability reports. So I went through the legwork of constructing an ID token with the smallest set of claims that I needed to list vulnerabilities for a user. I can't zoom in here, but if you look at the decoded section, a bunch of the fields are actually zeroed out, essentially meaning that I can craft the token with dummy values for a couple of them, and it'll still validate; you can use it for any account. There are a couple of non-random values in there, I'll grant. However, if you just look at your Burp history as you're running this attack, you'll see that the audience and tenant ID values are public information: they're in client-side JavaScript, URL parameters, or HTTP headers, stuff like that. So they're not hidden values to find; they're actually just public. So I could essentially use this to list any vulnerability submissions, knowing only a user's email address. Likely, these are things like Windows, Azure, and Exchange zero-days that people have submitted that are not yet patched. I'm sure that, as with any bug bounty program, there's a bunch of junk submissions in there that you'd have to wade through. But if you wanted to use this and knew a security researcher's email, you could have used it at the time. For this environment, you could have also done things like change their payment processor ID.
So essentially, steal all the monies they would have gotten. But I'm sure that would have been picked up, as they wouldn't have gotten paid. OK. So how does this story end? Really, the crux of this issue is the crypto vulnerability. It does seem a bit silly to me to devise a side-channel attack to recover a public value that should have been public in the first place. But I think in this case it was very critical to do, to get the issue understood and taken seriously. And in general, I believe that crypto vulnerabilities and misuse can often be misunderstood. People will see it and say, oh, that's just a theoretical crypto bug; nobody will ever be able to do it. I wrote a blog post which covers essentially part one of this talk, not the side-channel attack; it's on our company blog. I'd say at this point, my recommendation to users of Azure B2C would be, instead of using that RSA asymmetric encryption, switch to symmetric encryption with AES. I believe Microsoft is still working on it, but I think they're going to put out a change that should make it straightforward to make that switch. If you do that, the lack of integrity that you get with asymmetric encryption is replaced with the integrity you do get with symmetric encryption, which would mitigate this altogether. Long term, I think my suggestion to Microsoft was to use nested JWTs, as I mentioned earlier. I recognize that it's a hard engineering problem, and changing the structure of something that underpins your whole authentication flow is not an easy change. At the very least, remediating this bug that way would essentially invalidate all keys, for all sessions, for all users, and so the usability impact alone would be horrendous. But I think long term, that's what you need to do.
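Why does the symmetric switch close the forgery hole? Because producing a valid authenticated token then requires the secret key, which never leaves the server. A sketch of that property, with an HMAC-SHA256 tag standing in for AES-GCM's authentication tag (an illustrative stand-in and a made-up key, not B2C's actual construction):

```python
import hashlib, hmac, json

SECRET = b"server-side-only-key"   # with symmetric crypto, this never leaves the server

def seal(claims: dict) -> bytes:
    # AES-GCM's auth tag plays this role in practice; HMAC-SHA256 stands in here
    body = json.dumps(claims).encode()
    return body + hmac.new(SECRET, body, hashlib.sha256).digest()

def unseal(blob: bytes) -> dict:
    body, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(SECRET, body, hashlib.sha256).digest()):
        raise ValueError("forged or corrupted token")
    return json.loads(body)

token = seal({"email": "alice@example.com"})
# An attacker cannot compute a valid tag for modified claims without SECRET;
# even flipping one bit of an existing token fails verification:
forged = token[:-1] + bytes([token[-1] ^ 1])
```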
And I believe Microsoft is working toward doing something analogous to nested JWTs for their platform. So, disclosure, part two. In July 2022, about a year ago, I discovered the side-channel attack and disclosed it to Microsoft. There was a period of not-great responses for a little while, but I did manage to talk to the team and get them to validate the issue. The first of two fixes was put in place in December 2022. Essentially what that fix does is that now, if you submit a token that is invalid in any way, your HTTP request will essentially just get dropped; the connection will hang and won't return. So you get no timing information, and there's no timing attack to recover that key in the first place. This is a good stopgap measure, because it negates the attack as I described it, but it doesn't go as far as putting all the nested stuff in place. That second part was planned for February 2023, but there were some engineering complications; as I understand it, I really don't know a lot of details about those. But that fix wasn't quite put into place in February. This is the disclosure timeline from July to February, when I published that blog post. You can look at the slides in your own time on the DEF CON media server, but essentially it shows the back and forth about what was happening with different parties at Microsoft. The risk rating for this vulnerability was assigned Important, not Critical. I'd personally argue that if all you need is tenant details and an email address, and what you get out of it is full compromise of a victim, that's gonna be a critical vulnerability, right? It doesn't seem like just something important, but I digress. It was also categorized as information disclosure, not elevation of privilege. Sure, yes, a public key was disclosed, so maybe by definition that is the right way to categorize it.
But again, I'd argue that you're recovering a key that you can use to grant yourself privileges to anything you want, so maybe it falls into the latter category. Unfortunately, at disclosure time, Microsoft had two bug bounty programs that might have applied: the Microsoft Azure Bounty Program and the Microsoft Identity Bounty Program. It turns out this bug was not eligible for the Azure one, because it is an identity service. It also turns out it's not eligible for the identity program, because b2clogin.com was not in scope for that program. So despite being a somewhat important bug for a service that they're using internally, it was ineligible for any bounty program, so it was not rewarded. I will say, to Microsoft's credit, after some lengthy back and forth I've had with them since, they have added Azure B2C to their identity program in the intervening time, in June or July, a couple months ago. But bounties are not retroactive, so it's not gonna get paid out. So, the last slide here on Microsoft: there are some lingering remediations, as I mentioned. The first fix that I mentioned in December was kind of narrow; it was cutting off just the timing attack itself. So it's unclear if there are other artifacts in that session that you could use to still recover a key, right? If they're terminating a connection or whatever. I was sort of inspired by James Kettle's talk this morning about his tools to do similar things. So it may be possible to get around this mitigation, but I haven't really found any way. I also haven't checked quite yet today, but I think they are actually going to encourage users to use the secret key generation instead of the public key, or RSA, key generation. And as I mentioned, the new signed element in the refresh tokens isn't quite implemented yet, but I think they're still working toward that. Okay, so, just for the DEF CON audience here.
So most of this talk focused on an identity service from Microsoft, but looking at other vendors and other OAuth implementations, you might find similar things. So AWS has AWS Cognito. It's a service very similar to Azure B2C: it is an identity service, it has a similar login flow, and you get a refresh token. And if you decode the headers, you'll actually notice that the encryption algorithm is, again, RSA-OAEP. I wouldn't put this up here if I thought this was really a bug, but it is interesting that they use this. I don't have the same introspection into AWS that I do in Azure; in particular, I can't specify my own keys, upload them, and use them, so I can't actually see what the contents are. But external indicators suggest that they're actually using nested JWTs. So it's likely not an issue for AWS, but it is interesting that they have designed a very similar flow to what you see in Azure. That's all I got. Yeah, I have a little time, so I'll take some questions, or if you wanna find me after, that's fine as well. So we have some mics here; if you have questions, please line up at the mics. (There's a brief pause while the audience microphones get turned on.) Go ahead. Have you tested this functionality against anything else besides B2C? So, Microsoft, for other things like microsoftonline.com, they have a different OAuth flow that is completely segmented. It looks like a different code base from B2C, so it seems different in that respect. I haven't scoured the internet for every service, but I've looked at a couple, yeah. Take a look at the Azure shared tenants. All right. Hello.
I saw a couple of references to the MSRC kid, and I was wondering if you managed to meet them as a result of the disclosure? If I what? The MSRC kid? Yeah, I wondered if you managed to meet them as a result of the disclosure. Oh yes, yeah. He had a funny and interesting name there. This is a fantastic vulnerability. I've done a lot of research into OpenID Connect-based vulnerabilities, and the cryptography is especially interesting. You know, we've moved over the past five years into a world where people are no longer entirely home-rolling their crypto, but taking it from off-the-shelf JWK configurations. And as a result, we now have generally applicable crypto vulnerabilities, without having to meet an engineer and ask them, you know, how did you think to put this crypto system together? Yeah, exactly. I just thought it was super interesting. Thanks, thanks. Yeah, right. Going after the one implementation that rules a bunch of different applications is gonna get you a bigger bang for your buck, for sure. I did have one question that wasn't a joke or a compliment there. Okay. So, I've done some timing-based attacks, and they seem very rudimentary by comparison. You mentioned the backtracking; could you elaborate on what backtracking means in the context of the attack? Yeah, I sort of made it up as I went along, but essentially, as I was running the attack, if I noticed that there was no actual timing differential in the current round, it seemed likely that the lack of any timing differential meant that my high, mid, and low points were all either greater than or all less than N. So basically I used a crude measurement to detect that I was no longer progressing and needed to go backwards. Yeah, the confidence mathematics is very annoying to have to deal with when you're doing a timing attack. Thank you. Yeah, thank you. All right, thank you all.