Hello, hi — uh, wait, you can't hear me. Oh, hi Stella. I was unmuting myself on the wrong voice chat. Sorry, I'm on that Zoom. I can hear you. I know there are people here. I can stream this thing easily to Twitch chat. Okay, I don't care if we do this or Zoom. Yeah, whichever works.

It's funny, I was talking about the Fawkes paper with a coworker of mine on Friday. We were wondering about reverse engineering it. Is the author currently on? No. Do you know the author? Do you want to ping him quickly? No — I mean, I think I've met him once. I did my undergrad at Chicago, but I don't really know any of them.

By the way, we're also live on Twitch, with my ugly mug as the only visual reference. Gotcha. Hello, Twitch. So, if you want to participate in this journal club, the link is in both — this is the experimental side of the village. The link is in Twitch chat, and it's also in the Discord. If you are interested in using adversarial examples productively and want to talk about that, join the AI Village general voice and just ask away. It'll be nice if you've read the paper, but don't worry about that. If you want to talk about this principle — adversarial examples for good, that sort of thing — just come; we'd love to hear from you. So there are multiple ways for you to get on here.

Oh, Rich! Hey. Yeah, welcome. I'm in the process of moving, and I discovered my COVID panic beans. When the whole thing was starting, I bought a bunch of beans, because I like making bean curries, and I was worried that beans would be a little bit hard to find. And as I was moving, I found my giant bag of COVID panic beans. Isn't that exactly what we were told not to do?
I mean, I only use about two cups of that, so if I was eating bean curries two or three times a week, that's about three weeks of beans. Some people bought like six months of beans. Yeah, there was a while where we actually had to get beans, flour, and yeast essentially off of, basically, the black market. It was a local company that was buying it in bulk and then repackaging it and reselling it under the table. Nothing like people exploiting a pandemic to make a quick buck. That's true American ingenuity right there. Yeah, it's capitalism. This is the part where we say "yay capitalism," right? Capitalism is the problem. Sometimes, right? Sometimes I get my talking points confused.

So, for everybody on Twitch that is new to journal club: basically, we run this once a week, on Wednesdays usually, at five o'clock Pacific Standard Time or 8 p.m. East Coast time. We have a paper that we're discussing — we usually get way off topic very quickly — and oftentimes we have the authors on, and we basically talk about the paper and the surrounding topics. It's really interesting; we usually have really interesting discussions, with some recurring motifs because of our perspectives. Like threat models: whenever we have an academic paper with a threat model in it, we usually rag on it for a bit — unless the threat model is really good. Because if you've never actually worked in security, building a threat model from a purely academic perspective is hard, and there are a lot of people who mess that up. But they at least try to figure it out, and when we have authors on, we'll talk about all that. If you want to join us today, it's free, easy, open — you can come jump on the Discord voice chat.
So we have the AI Village general voice. If you've got any questions about using adversarial examples to defend against facial recognition systems, please come jump on. Stella, since you're invested in this paper a bit, do you want to describe what it goes over and give us an overview of what's going on?

Okay, sure. So, is the author going to join us, or does it look like not? No, it doesn't look like the author's going to join us. Someone had a note that they were still trying to get a hold of them — that's the last update I heard. I don't think we're going to get them, unfortunately. Okay.

All right. So the paper we're discussing is called "Fawkes: Protecting Privacy against Unauthorized Deep Learning Models" — I hope I'm in the right talk. Right, yeah, that's the one.

So the background concept of this paper is that there are a lot of ways people upload photos to the internet, and companies like Facebook and Google process your photo through a machine learning algorithm, use it to identify you, and can then use this to identify you in future photos. A couple years ago, Facebook rolled out an automatic tagging process where, when you upload a new photo, it tags your friends in it automatically. And this creeped out a lot of people — they were surprised that Facebook was able to immediately identify who they were. So the conceit of this paper is to give people a defense — a way to opt out of this kind of analysis without necessarily opting out of existing on the internet as a person.

Another important connection the authors make specifically is Clearview AI, which is a now-infamous company that did analytics drawing on what was, technically speaking, public data, to draw all sorts of conclusions about people — where the users were not particularly comfortable with the amount of information that Clearview AI was able to obtain about them.
And this is kind of a repeating motif in public use of technology: nobody reads terms of service, nobody has any idea what data agreements they're accepting, and so on. And even if you did — Facebook has a monopoly on being Facebook, Twitter has a monopoly on being Twitter. You either go along with whatever they demand of you and your data, or you just don't exist on Twitter. It's not like there's a Twitter Two that you can go to and have the same sort of experience, just not with the people whose data policy you don't like.

So the idea that's been incubating over the past year or so in the adversarial AI community is: hey, defeating algorithms like this is exactly what adversarial examples are supposed to do. They're supposed to take photos and transform them so that they look normal to humans, but they can't be processed correctly by AI algorithms. These things are usually conceived in the literature as a negative, as a bad thing — they're used to fool AI algorithms and do harm. But in terms of data privacy, you can view them as a very positive thing. You can take an image of yourself and transform it so that AIs that look at it don't think you're a person, or think you're a person who you're not. And then, when you upload that transformed photo onto the internet, it looks like you to humans — that's kind of the whole point — but it doesn't look like you to AI algorithms.

That's the core idea at the center of this paper, and that's ultimately what they try to achieve — and what they do achieve — with their approach. In the paper they refer to it as a "cloak." Yeah, that sounds about right. It's going to cloak you, but cloak you in a way that makes you look like an image from a reference dataset.
So they're going to make you look like another person, or another object, and the reason they give for this in the paper is that they find it helps get past screening algorithms — ones that try to notice and reject adversarial examples. Typically, when you use adversarial examples, you're inserting a lot of noise that isn't anything in particular; it just happens to make you look different to the AI. What they're doing here is trying to inject some kind of pattern — something that makes me look like Gwyneth Paltrow — and then, once it's made me look like Gwyneth Paltrow, the idea is that the algorithms that are supposed to detect when this is going on aren't going to notice, because they're going to say, "this looks like a non-adversarial photo of Gwyneth Paltrow."

I'm kind of salty about this, because I spent a huge amount of time last year, before the talk I did on facial recognition, trying to come up with a single universal adversarial vector that would make everyone look like John Malkovich. I've literally written two-thirds of a paper on this exact topic. Yeah, definitely an idea that's in the air.

Yeah, this has been in the air for a long time. From an ethics point of view — because a lot of the framework for this is ethically oriented, you know, giving users control over their data — there's a really good paper from either earlier this year or last year called, I think, "The Politics of Adversarial Examples," that discusses the fact that the AI literature almost universally treats adversarial examples as a bad thing, and how that's not really fair, and discusses different ways one could use them to achieve good. Though it's an ethics-and-policy-oriented paper, and it doesn't actually do any of them. So yeah, this is definitely an idea that's been around the block a couple of times, for sure.
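The cloaking objective being described — perturb an image so its features land near a target identity's features, while keeping the pixel change small — can be sketched in a few lines. Everything below is a stand-in assumption for the sketch: a fixed random linear map plays the role of the real face-embedding network, and the sizes, step count, and budget are arbitrary.

```python
import numpy as np

# Toy sketch of a Fawkes-style cloaking objective: nudge an image x so that
# a feature extractor F maps it close to a *decoy* identity's features,
# while keeping the pixel change within a small budget. The linear F here
# is a stand-in assumption for a real face-embedding network.
rng = np.random.default_rng(0)
F = rng.normal(size=(8, 64))      # stand-in feature extractor
x = rng.normal(size=64)           # "our" image, flattened
target = rng.normal(size=64)      # image of the decoy identity

delta = np.zeros(64)
budget, lr = 2.0, 0.003
for _ in range(500):
    # gradient of ||F(x + delta) - F(target)||^2 with respect to delta
    grad = 2 * F.T @ (F @ (x + delta) - F @ target)
    delta -= lr * grad
    # project back onto the perturbation budget (the "imperceptible" edit)
    norm = np.linalg.norm(delta)
    if norm > budget:
        delta *= budget / norm

before = np.linalg.norm(F @ x - F @ target)
after = np.linalg.norm(F @ (x + delta) - F @ target)
print(after < before)  # True: features moved toward the decoy identity
```

The real system optimizes against a deep network's feature space and a perceptual similarity constraint rather than a plain norm ball, but the shape of the problem — minimize feature distance to a decoy subject to a perturbation budget — is the same.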
I know a lot of people who have had thoughts along these lines. Yeah — I want to say there was actually a talk at the village last year. I'm drawing a complete blank on what they called it. I think they were specifically targeting Facebook's model, and they were doing something a bit different: they were basically breaking the facial detection step. So, skipping to the point where the system would actually put the bounding box and clip out the tile around the face — so that nothing looked like a face at all, and it wouldn't even go to the next step of finding the vector for the face, doing the embedding, and trying to match it.

Yeah, because a lot of these algorithms use some kind of bounding boxes, or try to identify — you know, images are complex, so you kind of have to first pick out where there's something you're going to look at. I don't think I've read that paper, but I assume you're talking about disrupting its ability to find bounding boxes, or what faces are.

Yeah. So most of these facial recognition systems go through a three-step process, right? You first draw bounding boxes around the face, and a lot of the time — most of them, I think — still use landmark-based approaches.
Then you use the landmarks to rotate and realign the face, so it's dead-on and in a standard orientation. And then you put it through the actual facial recognition model, which takes the face and embeds it into a vector space, and then you find nearest matches within that vector space — that's the actual recognition step.

So this is what these guys are going after. They're saying: now take this face — we'll let you find the face, but we're going to make it still all look like Gwyneth Paltrow. And what the folks at AI Village last year did was say: we're not going to even let you find the face. It's going to look like a landscape with no faces in it, or something like that. But it was the same idea, right? It's trying to find the minimal, least-disturbing perturbation you can use such that your picture still looks nice and you can share it on the internet, but either no faces can be found in it, or faces are found but tagged as the wrong people.

Yeah, that is very similar, and I think you just hit on something that's important to call out specifically. There are a lot of different types of data that might benefit from this sort of masking. Here they're using faces, and they're doing that for an important reason: adversarial examples for faces — their performance is not better than on other kinds of data, but for facial analytics, you can make them very difficult to perceive. An adversarial example for images, and especially for facial analytics, looks to a human like you haven't edited the image at all. When you talk about audio adversarial examples — Carlini has kind of the first major paper on that —
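The three-step pipeline just described — detect, align, then embed and match — bottoms out in a nearest-neighbor search over embedding vectors. A minimal sketch of that last step, with random vectors and made-up names standing in (purely as assumptions) for the output of a real embedding network like FaceNet or ArcFace:

```python
import numpy as np

# Sketch of the recognition step: once a face is detected and aligned, it is
# embedded into a vector space and matched to the nearest enrolled identity.
# The random embeddings and names below are stand-ins for a real system.
rng = np.random.default_rng(1)

def unit(v):
    return v / np.linalg.norm(v)

# "gallery" of enrolled identities -> unit-length embedding vectors
gallery = {name: unit(rng.normal(size=128)) for name in ("alice", "bob", "carol")}

def identify(embedding, gallery, threshold=0.5):
    """Return the closest enrolled identity by cosine similarity,
    or None if nothing is similar enough."""
    probe = unit(embedding)
    best, best_sim = None, -1.0
    for name, emb in gallery.items():
        sim = float(probe @ emb)
        if sim > best_sim:
            best, best_sim = name, sim
    return best if best_sim >= threshold else None

# a slightly perturbed view of "bob" should still match bob
noisy_bob = gallery["bob"] + 0.05 * rng.normal(size=128)
print(identify(noisy_bob, gallery))             # bob
# an unrelated embedding falls below the threshold
print(identify(rng.normal(size=128), gallery))  # None
```

Both attacks discussed here target a different stage of this pipeline: the AI Village talk broke the detection step so no embedding is ever computed, while Fawkes lets detection succeed and instead moves the embedding toward the wrong gallery entry.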
— you can hear in the background that there's a hum, and as the strength of the adversarial example increases, the hum increases. I've played around with these, and even at the loudest, with that kind of noise in the background, you can still clearly hear what's going on — but it is definitely perceptibly modified to a human. So when you're talking about these being imperceptible changes — because they're talking about editing your own data and then uploading it to Facebook as if it were an original photo — that's something that's really important here. The ideal is for people to be able to upload things to their followers or their friends on whatever their favorite social media platform is, and for it to be as close to the original content as possible for your desired audience — the people you want to be seeing this — while being very different for your undesired audience of Facebook's analytics algorithms. Correct.

And it doesn't help much with the, however, ten or fifteen years that social media has been a thing, where it already has tons of pictures of you. Yeah — as much as I love this line of research, from a practical point of view, it really does feel like the work is technically great, but in terms of making it accessible to a wide range of people... A lot of people just don't care about this sort of privacy thing, right? So they'll upload photos of themselves, or of you, and kind of not give a shit. And then we've also got the issue that there are tons of places where you just don't have any control over the photo before it goes up, right?
So, if you're getting an ID photo taken of yourself, that goes into the system as it goes into the system, and we know that's part of what was being collected by systems like Clearview, right? They're getting state ID cards and passport photos and stuff like that, and those are all getting dragged into this dragnet. And it doesn't take very many photos of a single person to get a good registration of them for a facial recognition system — in a lot of cases, you can get away with just one photo.

So, I understand that you can't go back and change the pre-existing photos, but couldn't you give the application permission to contact friends who have posted photos of you, to ask them to use the application to blur you from them? By the application. What are you referring to — because obviously Facebook isn't going to give you that option. Do you mean the program you download to use this? Yeah, the one you were talking about — blurring the image so Facebook can't identify you. So can you send a message to your friends to use that?

I mean, you could, but it's going to be another hoop for them to jump through. People are lazy, right? This is one of the biggest problems with doing any sort of security: people are lazy, and they don't want to jump through extra hoops. I understand that people are lazy, but it's easier if you can just invite someone to use a product than having to find the link and send it to them. They're far more likely to use it if you just send them an invite that's really easy to use. Yes, that's definitely true.
I believe the authors are currently working on — oh, right, they have something. One of the authors has a note on their website that they're currently developing a download-an-exe, plug-and-play, pre-trained model for people to use, which will be compatible with Windows and so on. So I believe the best answer is that they are aware of the distribution and accessibility problems, and that's something they're actively working on. I don't know if invite links are something they're building into their program or not. It's going to be very interesting to see what Shawn, Emily, and the rest include. But I don't know if that's something they're developing, though I think you're right that—

The question I have, if you don't mind me asking, is: does this cause problems for the visually impaired, for reading from alt text? Well, yeah — when Facebook generates alt text, it's using a vision model, so if you're breaking the vision model, it is definitely going to do something to the alt text. The hope would be that this would only directly impact faces, so it might be like, "hey, my buddy Bob was hanging out with Gwyneth Paltrow." But there's always a chance it could do other things to non-facial-recognition systems, like object recognition or whatever. I don't think this is something these authors — I mean, it's a valid question, and it's definitely something that should be considered — but I don't think it's quite what these authors were focusing on in this paper. I understand the focus, but it's absolutely—
Yeah, no — usability and accessibility concerns are definitely major things. My thing is, currently the interpretation of the ADA has been changed by judges, and now, if you can't read the image, whoever created it has the problem and can be sued. Well — I think it would be on the person who's making it impossible for Facebook to do their job of making it readable.

The ADA doesn't apply to private individuals, though. There's no obligation — I, as myself, not as a company or a non-profit or something, if I just upload a home video to YouTube, there's no requirement that that video be ADA-compliant in any way. That was true until last year, when that changed. Really? Yes. Do you know the court case or something? I can bring that up — do we have a text chat? I can put it in a text chat. Yeah, put it in the journal club text channel, then.

This is a topic I'm very interested in — I have disabilities myself, and I care a lot about accessibility. Honestly, that wasn't something I'd thought about in this context. I'm right now on Facebook trying to find out what the alt text for a photo of me is, and I'm having trouble doing so. I don't know if the alt text says "this is a photo of two people," or "this is a photo of Stella and Madeline," or "a photo of Stella and her girlfriend Madeline," or just "a user-uploaded photo." But that is definitely a very interesting question — how this affects people who rely on assistive computer devices.

Yeah. I mean, I think the first step in any of these things is: okay, can we solve the immediate problem that we're worried about? Can I actually successfully convince—
—you know, Facebook to reliably retag photos of me as photos of John Malkovich? If you can't do that, then worrying about the accessibility stuff is kind of the cart before the horse. But yeah, before you roll this into a widespread thing, it's definitely worth doing a careful look at accessibility issues. And I think there are also other things that are probably worth considering. Like, if you deploy this tool so that it can make everyone look like John Malkovich or whatever — I'm using John Malkovich as a silly example — has John Malkovich been consulted about this, about whether he wants to be tagged in a million photos on Facebook all of a sudden? Again, John Malkovich isn't a made-up example, but if you're going to do a targeted attack like this paper is proposing, who's the target, and do they really want to be the target?

So it's interesting, because the paper kind of talks around that. The paper specifically draws the distinction between "private individuals" — which is the phrase it uses to refer to the people they imagine using their software — and celebrities and public individuals, who are the people they imagine being in the database. The reason I use Gwyneth Paltrow as an example is that this is an example used in the paper — her image is in the paper several times — along with, I'm blanking on his name, but the actor who plays Derek Shepherd in Grey's Anatomy.
Um, there's another example in the paper and so it shows you transforming Derrick shepherd into into gwyneth paltrow and vice versa so But the kind the the idea that that these famous people are also private individuals I think is really important is something that the paper kind of Had a swing and a miss on I think and because they they certainly didn't ignore it But I don't I think you're right to question the way that they approach that See there is a way there's a way to do both. Sorry. I kind of want to can I jump in really quick? We've got someone on we got someone on the stream who is asking The tech critter wants to jump in with a question. I don't know tech critter if you're following on the stream Type it in the stream otherwise type it In discord I want to make sure that we get like other voices in here because we've got I think it's been like three people talking most of the time Definitely Okay, or or journal club dash text if you want to just drop a note and Yeah, but we've got a we've got a couple other people in the in the discord voice channel I don't know if they want to like chime in with any comments or questions Isla is robles versus domino's pizza Yeah, so I just um if anyone's following on discord It just got posted in the aiv dash off topic dash dash text I'm gonna post a newer one there This is the newest one and there's been following cases after this was rolled So the the domino's pizza case says that the ada applies to online venues such as web sites Which was not necessarily obviously true before This actually reminds me there was Definitely something that the the university of california ran into trouble where they had to take down a bunch of their videos I assume this must be related a bunch of their like, uh Online education videos that were publicly viewable Because not all of them were ad compliant. Some of them didn't have Some of them didn't had have transcripts. 
Some of them didn't have colorblind-friendly options. And they took down the videos instead of fixing them, and that was a whole brouhaha if you're in the UCLA world. I assume that's probably due to this court case or a related one.

So, when it comes to your home videos uploaded to YouTube: YouTube automatically creates a transcription, unless it sees that it's having trouble with it. It creates a transcription, and a lot of the time it's really messed up — you can go in and fix it — but the transcription is enough to comply. Right — I've actually used that a lot, because I have trouble following large conversations. That's the automated — YouTube first says "automatically generated." That's right. I've used that a lot, and, you know, it's quite bad. And if you're using a version of this for audio, where it's supposed to mess up what you're saying, it could absolutely trash the transcription.

There's a way to comply with the ADA while protecting people from AI. The way you do that is you just put something in the alt field that gives a generic idea, like "a picture," and then put the name by it. That might be enough to comply.
I'm not sure. I mean, that definitely doesn't comply with the spirit of the ADA — if the idea is that the alt text is supposed to describe what's happening in a photo, and you just say "this is a photo posted by Stella Biderman" — it may not comply with the spirit, but it's better than nothing. Yeah, that's a very complicated question, because the ADA has very strong limits on it. Like, if a company argues that complying with the ADA is too expensive, they don't have to. And, well, that isn't actually what it was supposed to be, but it's there. And that is a pretty contentious argument to make in the disability activism community, because it has been used — I'm not saying you're doing this — but it has been used a lot as an excuse by companies who don't want to be compliant. That's not — I'm just trying to come up with ideas for how we can be both privacy-protecting and ADA-compliant at the same time. Oh yeah, no, I understand that. I just wanted to share that.

So, a blind employee at Facebook was the one who created their feature that tells you what colors and everything are in the images. So I'm wondering, maybe there's a way to obscure the face but leave in the details, like "green tree, blue water." Well, that's sort of the open question, right? Because, as far as I know, one thing that hasn't seen a lot of work is how cross-model attacks work. So if we apply Fawkes to a photo, is it going to completely break object detection as well as facial recognition?
Yeah — I mean, it's a totally valid question. We don't know one way or the other whether applying Fawkes to a photo would actually break the other object recognition systems. It might very well, but that's an area for future research, as they say.

Are the people who wrote the paper on the DEF CON Discord? Yeah, we tried to get Ben Zhao, I think, to join us. I think he's the group lead — he's one of the authors on the paper, at any rate. Unfortunately, he couldn't make it today. Okay, maybe tomorrow. Well, we're talking about a different paper tomorrow. Okay — do you have a schedule? Like, do you talk about a different paper every hour, or how does it work? We've got a schedule posted: if you go to the AIV general text channel, we've got a schedule that tells which journal clubs we're doing when. And you can also find a lot of information on our website, aivillage.org — I think aivillage.org/events is our schedule for the immediate future. I've never seen this village before, but I really like it. Thank you.

So, I see there are some other people in the Discord — do other people have questions or things they want to discuss? Couldn't you just remove the glasses with a program? You could just automate it. I mean, a lot of the problem with adversarial examples in general is that they don't tend to transfer terribly well between systems. I can't remember the group that did the funny glasses thing off the top of my head, but I actually played around with that last year, and if you use it on one model, it works amazingly well; if you try to put it on another model, like Amazon's Rekognition, they're just utterly useless — they do absolutely nothing to the detection. So, yeah — oh, sorry, no, go ahead. Okay. Yes.
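The transferability gap being described can be illustrated with a toy experiment: craft a perturbation against a "surrogate" model you can inspect, then test it against an independent "target" model you cannot. The two random linear classifiers below are stand-in assumptions for the sketch, not real face systems:

```python
import numpy as np

# Toy FGSM-style transferability experiment: the attack is guaranteed to
# fool the surrogate it was crafted against, but an independently built
# target model sees only an unrelated small shift, which usually does not
# flip its decision.
rng = np.random.default_rng(2)
dim = 100
w_surrogate = rng.normal(size=dim)   # model the attacker can see
w_target = rng.normal(size=dim)      # independent model the attacker cannot

x = rng.normal(size=dim)
if w_surrogate @ x < 0:
    x = -x                           # start from the surrogate's "positive" class

# one signed-gradient step against the surrogate, with epsilon chosen just
# large enough to cross its decision boundary
margin = w_surrogate @ x
eps = 1.5 * margin / np.abs(w_surrogate).sum()
x_adv = x - eps * np.sign(w_surrogate)

print(np.sign(w_surrogate @ x_adv))                        # -1.0: surrogate fooled
print(np.sign(w_target @ x_adv) == np.sign(w_target @ x))  # target often unaffected
```

Even in this linear toy, the perturbation is aligned with the surrogate's weights, not the target's — which is one intuition for why attacks tuned against one face model can do nothing against Rekognition.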
So, that's actually something the Fawkes paper discusses. At the end — let me scroll down — anyway, they have a table where they attack several open APIs. There it is — it's on page nine, since I'm not the one controlling the screen that's on stream. If you scroll down to page nine, they have a nice table showing their attacks against the Microsoft Azure Face API, against Amazon Rekognition, and against Facebook's search API.

And you're definitely right about the adversarial sunglasses paper, which has the extremely innocuous name "A General Framework for Adversarial Examples with Objectives" — but fortunately, you can find it on Google if you search for "adversarial sunglasses." I can also send a link to that paper to the Discord, because it's a cool paper and worth reading. Basically, what they do is 3D print sunglasses that can defeat some facial recognition algorithms. But that problem is something there's at least been some progress on — as this paper shows, where they're able to successfully attack multiple open APIs.

Yeah, the one thing that always struck me about that: if you go back to the original adversarial examples paper, one of the things they were really big on was transferability and proxy models. You can train a model off of this data, and between any two models, an adversarial example that evades one model frequently will evade another. But all of a sudden you get into the facial recognition domain, and at least based on my experience, that tends to be — you don't get it for free.
You actually have to work for it, unlike a lot of other adversarial example situations. And I can't remember if it's because these object classifiers are looking at a softmax, whereas facial recognition tends to be much more embedding-oriented, or if there's some other difference about it that I'm not seeing.

My impression — as someone who has played around with these things, but mostly does more theoretical research than actually trying to use these against real models in the real world — is that this is a problem of complexity. The more complex the classifier's training is, the more peculiar its decision boundary is going to be, and basically what most adversarial attacks do is exploit peculiarities — strangenesses — in the decision boundary of neural networks, rather than, quote-unquote, actually attacking the task. My understanding is that the more complex the problem is, the more different ways there are for an algorithm to be uniquely peculiar, you might say. So the peculiarities of a neural network solving a very complex problem are going to line up less and less across more and more complex fields that use larger and larger datasets — and especially private ones.

Yeah — especially for facial recognition, because those models tend to be looking at hundreds or even thousands of classes. Yeah, and on the face of it, the transferability of adversarial attacks is not something you should expect: different models should have different strangenesses. It makes sense to me that the more complex the model, and the bigger the dataset, and the more training time required —
— kind of all of that is making the decision boundaries more and more convoluted, in a sense. Yeah, and it makes a lot of sense to me that that would decrease the transferability of adversarial examples, though I haven't seen any papers that specifically investigate that. Yeah. When you ran into this problem, how did you decide whether it was a methodological problem — that their method doesn't generalize — versus a reproducibility problem, where what you implemented wasn't what they really had, and their model might generalize and yours doesn't? Mm-hmm.

Real quick — is it okay that an eight-year-old joins? I am Symphony. I'm the eight-year-old. I hacked my uncle's tablet, and a website once when I was young. She hacked one of my clients — she did SQL injection on them. I didn't even know what I was doing, I just did it. She was three. Yet another reason to be scared.

Yeah, so — one thing that's really interesting about this paper is the way they do their targeted attack, and I'm not going to try to explain that over voice, because I think it's just better to read it. But something you have to be careful about: they look at a lot of different ways that Facebook, or whoever, can counter systems like this. We've talked about the fact that they found it was hard to detect their cloak, but an interesting and important question is: even if you can't detect the cloak in general — if you're told an image is cloaked, can you take that cloak off? And you need to be careful about considering a lot of different options when deploying systems like this, because there are a lot of different attacks they can suffer.
I actually strongly suspect that if you're told which images are cloaked, you can invert them, because the optimization problem they solve, you can basically solve in reverse — which would be unfortunate. I'm actually currently trying to do that. Well, it depends — it depends on how the cloak — here they're assuming that you're solving a certain optimization problem against a set of reference images that's publicly known to everybody. I'm certainly not going to say any cloak can be undone. I certainly hope that's not the case, because I think technology like this is — and I hope technology like this becomes — very useful in the future. I believe that's an open question.

This is something we've actually been discussing in the AI Village recently, though: there isn't a good notion of "hard for AI" the way there is "hard" for deterministic computing. But if you take the image recognition problem with an adversarial example, can you use that as a hard-for-AI benchmark? Can you design a system where detecting the cloak is exactly as hard as recognizing the image in the first place? There have been some interesting conversations along those lines that I'm sure we'll continue to have.

Yeah. But Rich did mention a specific paper — is he still on the line, or did he drop? Yeah — so Rich definitely talked about a paper that does exactly what you're curious about, and we can pull him up and get that paper sent out to the journal club text channel for you.
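The detection question raised here — if the reference set is publicly known, can a platform at least tell that an image has been cloaked? — can be sketched with the same toy setup as before. One plausible detection statistic (an assumption of this sketch, not something the paper or the speakers propose concretely) is the feature distance to the nearest public reference image, which cloaking drives anomalously low:

```python
import numpy as np

# Toy sketch of cloak detection against a publicly known reference set.
# The linear F, the sizes, and the detection statistic are all stand-in
# assumptions, not the paper's actual setup.
rng = np.random.default_rng(3)
F = rng.normal(size=(8, 64))       # stand-in feature extractor
refs = rng.normal(size=(5, 64))    # public reference ("decoy") images

def cloak(x, target, steps=500, lr=0.003, budget=2.0):
    """Projected gradient descent pushing F(x + delta) toward F(target)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = 2 * F.T @ (F @ (x + delta) - F @ target)
        delta -= lr * grad
        norm = np.linalg.norm(delta)
        if norm > budget:
            delta *= budget / norm
    return x + delta

clean = rng.normal(size=64)
cloaked = cloak(clean, refs[0])

# detection statistic: feature distance to the nearest public reference
def nearest_ref_dist(img):
    return min(np.linalg.norm(F @ img - F @ r) for r in refs)

print(nearest_ref_dist(clean), nearest_ref_dist(cloaked))
```

Whether this kind of statistic (or a full inversion of the cloak) works against the real system, where the attacker's feature extractor and reference set are only partially known, is exactly the open question discussed above.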