I'm just waving to folks. Are we live? OK, I guess we are live. Hi, folks. Welcome to another code of conduct talk. I mentioned it in the keynote. Some of you were in the room a couple of hours ago when Joanna gave a talk. Our perspectives on this are somewhat different: she's got an amazing background as a lawyer, and I've got a background more in community work. And I'm going to try and sort of expand on this talk on the fly. I've given it a few times unrecorded, so far at almost 20 minutes; this slot is a bit longer, so I'll fill it in a bit. And it's a bit interactive in a few spots, so a show of hands or a quick shout-out is fine at the spots where I'm obviously asking a question. There'll hopefully be time for actual Q&A towards the end.

"The Code of Conduct is written so vaguely that really anything could be a violation." This is an actual quote from somebody that I got once during an incident response. I'm not going to say which one, but it wasn't just a random person. This was actually an elected community leader. So does that surprise you? I see a couple heads shaking sideways like no, and a few bobbles. So yeah, that's the opening. This talk is also going to be a bit more emotional, and I'll go through some examples.

Quick little bit about me. A bunch of you here know me. Name's Ava Black, pronouns they/them. Currently work at Microsoft. I also serve on several boards and advisory committees right now. And most interestingly, I used to be a director of the Consent Academy, which is where a bunch of this work comes from, and on the Kubernetes Code of Conduct Committee, which is where the hands-on application of this work that I'm going to talk about has sort of been battle-tested in the tech industry. The Code of Conduct Committee for Kubernetes serves a pretty large community. I joined that body in 2019. You may not know that I was also volunteering with the Consent Academy for years before that.
And that body, it's a 501(c)(3) non-profit in Seattle, in the Pacific Northwest. They also do education and outreach across the country, and actually began branching out during the pandemic to teach in Europe, taking a consent-focused framework and applying it to communities of practice, everything from Burning Man to local communities to, now, the tech industry. They've also done consulting for several companies and foundations. They can also act as a neutral body outside of any group when necessary, sort of an escalation point. And it's staffed by folks who are licensed therapists working in that scope. So we brought them in to do training for the Kubernetes Code of Conduct Committee when I joined that body, and it's since become more of a regular thing, both for Kubernetes and for a couple other groups in the LF. Oh, I forgot, this deck has some transitions. I meant to transition that to show both earlier. OK, moving on.

So, their framework: consent is a voluntary agreement made without coercion between persons who have decision-making capacity, knowledge, understanding, and autonomy. Those four attributes are crucial to the way the Consent Academy's framework operates. It has to be voluntary. It has to be made without coercion: no one is applying force or leverage or threats. One needs to be mindful of power differentials, which do affect one's ability to consent. If your manager walks up and says something to you, there's a power differential. You might not feel like you can say no, because your job could be at risk. That's also important to understand when we're in communities of practice, where some people have huge platforms, reputations, jobs, board positions. Those power differentials affect people's ability to have consent in any interaction. And of course, when we're all out on the road, many people are stressed, tired, hungry, drunk; those affect our capacity, our knowledge, our understanding in any given moment.
So they teach these four pillars of consent. I'll break them down a little bit. I'm actually going to rotate, because it's too small to read on my screen here. Autonomy: does someone have the power, privilege, or agency to express their free will in a given interaction? Capacity: the mental, emotional, financial, or legal capacity. There are situations where someone legally might not be able to say no, or say yes, to something. Information: does someone have enough information to really understand what they're consenting to? What they're getting into when they, say, sign a nondisclosure agreement or a nondisparagement agreement, or agree to join a committee. Sometimes we join committees, like, yeah, I want to volunteer for that; we get there and we're like, oh, you didn't tell me what I was actually signing up for. Language and cultural differences also play a huge part in information, especially when we're talking about events, and things that happen at an event, where we are a global community. People are coming together. There might be a little sign somewhere with some domain-specific language on it about what a code of conduct is or means, that someone on their first visit to this country, learning this language, doesn't really understand. If they then make a mistake, are they really at fault or not? And the last one, agreement, is really the agreements and boundaries. Are they explicit, and do we understand when they are implicit? Was coercion used to get a compromise on a boundary someone stated? And if so, how does that play in when we later look back at the incident?

So based on those four pillars of consent, the Code of Conduct Committee applied them and developed a process. Link down there; that is the result of a couple of years' worth of work testing it, evolving it privately, and then publishing. And I would highly recommend every community work towards this as a goal if you don't have it already.
Define the process by which you take in an incident, how you protect confidentiality, how you handle potential conflicts of interest among the members of a code of conduct committee, how you're going to respond to and reach out to people who are named in that incident report, whether they're the reporter or the reported or a witness or a friend.

There are some key things in building one of these. One is to understand if an event was traumatic, and treat that in a particular way based on a trauma-informed therapeutic response. People respond differently if they have been traumatized. Hopefully no one has to deal with that, but the reality is it happens. Set clear expectations whenever you are communicating with someone who's involved in an incident report. So, timelines: even if it's just to say, hey, I got your email, or the committee got your email, we'll investigate, we'll get back to you in X time. Acknowledge it so they don't think it's just going into a black hole. That helps to build trust. If you're taking an incident report verbally, say on site at an event, don't ask leading questions. There's so much risk there in how you take the report. I can't stress this enough: go get training in crisis response and incident response before doing that. It really helps. It's too easy to accidentally bias the reporter. Follow a well-documented process so that it's consistent. That also helps to reduce the bias of any individual taking a report or working through a process. Make clear publicly how you're going to protect the confidentiality and safety of people who report. Folks very often don't report because they fear retaliation, either from the body at large or from the person that harmed them or grieved them in some way. And this goes back to consent. Make sure you're telling, you're sharing with, a person who is reporting how they retain control over the process, especially when someone's consent feels like it has been taken away due to an egregious harm.
Restoring their autonomy is part of the restorative work. So often, as we all probably see in judicial cases, the autonomy of witnesses or of reporters is taken away as they are retraumatized or forced to confront someone that has already hurt them. So think about how you model that in your incident response process, and try to understand how past traumas might be affecting people's responses.

So I'll share some of the specific tools we developed out of this framework. A code of conduct is itself not a legal document, but it is an agreement. And thus, the violation of that agreement can be used to take some action. They're not laws. However, like a law, it's just a piece of paper; it's kind of inert. You need a body of practice backing it up, people who know what to do, who know how to handle this. I'd like to give this little example. We've all sort of evolved long codes of conduct over the past decade and a half. A friend of mine runs a Discord server for their friends to play some games, and "don't be a jerk" is their entire code of conduct. I'm not saying this is a good one, but it illustrates the two most important functions of a code of conduct. First, to communicate and make explicit the cultural norms. Hopefully "don't be a jerk" isn't the entirety of your cultural norms, but in this case, that's all it is, right? There's a huge assumption about what those words mean. And secondly, the code of conduct makes visible the power structure in a community. It defines the social boundaries. Where does this apply? Is it just during an event? Is it just on a Slack instance? Is it anywhere that project is talked about? If three of us meet in a coffee shop in Seattle and talk about Kubernetes, does the Kubernetes code of conduct apply? Great question. Think about where your community is, and where its code of conduct should apply. And who can decide that? Who can adjudicate that? That's what your code of conduct should make clear to your community members. Your code of conduct should not center punishment.
This is part of the criticism levied against codes of conduct, like the quote I mentioned at the very beginning: they're often considered too vague. This is actually out of necessity. When we build a code of conduct that specifies exact punishments for specific actions, it binds the conduct committee's response, often forcing them to react in ways that are harmful to community health when an incident was minor or accidental and a restorative outcome could be achieved. And so I actually have a problem with the Contributor Covenant 2.0 and thereafter because of this; my recommendation is to stick to 1.4 for now. And there's work ongoing in the Contributor Covenant's evolution to make something after 2.0 more inclusive here.

Why am I suggesting this? Because people are messy, and people deserve a second chance, especially when accidents happen, when there's a cultural boundary and people want to do better. My experience with both the Consent Academy and Kubernetes is that most of the time it's not intentional. Maybe 5% of the time, in what the Consent Academy saw, it is intentional. The challenge is that it's really difficult to tell which is which. But when it is intentional, you have to be able to respond. That harm needs to be identified and addressed. And even when it isn't intentional, the harm still needs to be addressed, and an apology or some sort of restorative outcome worked towards. There are, of course, also trolls in the world. I'm sorry to say, they don't just live under bridges anymore. There are people who will just test the boundaries of things for no other reason. And so if you codify those boundaries too rigidly, people will just test them. They will walk up to that edge just to spite you. Don't make the edge so clear; it doesn't need to be.

The committee supports community health and individual safety. And really, individual safety is the key thing here. Your job is fostering an inclusive and safe environment. That's the hard part. That's the tricky part.
It's hardest to do this when a project has a BDFL, a benevolent dictator for life, because that's unaccountable power. It's easiest to do this when there's transparency into the process. And it's important to prioritize safety when safety becomes a concern, especially for members from marginalized backgrounds, because they're often the ones who notice, or who receive, discriminatory behavior first. One of my mentors said this; I also said it in the keynote on Tuesday: the culture of an organization is determined by the worst behavior its leaders tolerate. What I mean by that is, if there is someone from a marginalized background in your community being mistreated, even if it's unintentional, and it goes unaddressed, that defines the culture for the whole group. It perpetuates.

This leads to the fourth point: a code of conduct committee must enforce these norms. It can't just write them down and talk about them on a stage. It has to actually enforce them, and respond in ways that are commensurate with the circumstances. If it's an accident and someone wants to do better, you have a private conversation. You say, hey, did you realize you stepped on that person's toes? They were afraid to tell you because they're a new member and you're the executive or whatever, but I wanted to pass that on. Did you realize it? And maybe the person goes, oh my gosh, I had no idea I'm wearing these clown shoes. Well, maybe you shouldn't wear clown shoes to a crowded conference event. So you can have a conversation like that with people and act as a mediator. Maybe they respond in other ways, and then you can address that if it happens. When you do this, though, when you step into a mediator role, you have to be mindful of ways that you might increase risk or harm to minorities, to people who have that power differential working against them. Retaliation is a real concern. And so the committee must be empowered with both the authority and the autonomy to act in private.
And this seems to run counter to the principles of open source communities, where we all do our work in public, but I will say that in Kubernetes, and I think this model applies everywhere, there are three types of bodies in larger open source communities that do need to have a private function. One: occasionally your executive council or steering council or your board of directors has to have a closed-door meeting. Two: your security and vulnerability team; there are reasons they have to handle things in private. And the third is your code of conduct or incident response team; your crisis response team also has to be able to have private meetings.

Okay, I think we're about 15 minutes in. I want to check in, because I'm about to jump into actual anonymized incident reports. So please prioritize your own wellbeing. Some of this might get heavy depending on your own sensitivities or background. If it's too much, raise your hand and I will stop, or duck out now. Also, I know some folks have been asking to see real examples of things. This is as close as I can get to real. These are anonymized. If you think it's about you, I almost guarantee you it's not. And if you think you know who it's about: A, you probably don't, and B, if you do, don't say a word. The purpose of this is to demonstrate, in the constrained environment of a talk with you all as the audience, the emotional impact some of these kinds of things can have, which I think, and my experience says, helps people understand and get prepared to actually step into these spaces, because it is really emotionally taxing. This is the invisible work that is the stewardship of open source. Without it, communities just fall apart. But this work is so rarely rewarded, and it takes, when there's a real crisis, so much effort.

So, getting started. Nice and simple. You've got a Slack instance. A chatbot joins and starts to spam a bunch of crap. Do you know what to do?
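As a concrete sketch of what "knowing what to do" can look like, a community might keep a small runbook script next to its escalation docs. This is a minimal, illustrative example, not something from the talk: the spam heuristic and function names are my assumptions, and the removal call assumes a slack_sdk WebClient with a suitably scoped token calling Slack's conversations.kick Web API method.

```python
# Illustrative runbook sketch: spot a spamming bot and eject it from a channel.
# The heuristic and names here are assumptions for demonstration only.
from collections import Counter


def looks_like_spam(messages, threshold=5):
    """Flag a user as a likely spam bot when they post `threshold` or more
    near-identical messages (a deliberately simple heuristic)."""
    repeats = Counter(m["text"] for m in messages)
    return any(count >= threshold for count in repeats.values())


def eject_bot(client, channel_id, user_id):
    """Remove the offending user from a channel via the Slack Web API's
    conversations.kick method; `client` is assumed to be a slack_sdk
    WebClient authenticated with a token that can manage the channel."""
    client.conversations_kick(channel=channel_id, user=user_id)
```

The script itself is the easy part; the point in the talk stands: someone has to hold the token, the procedure has to be written down, and community members have to know how to reach that someone quickly.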
Do you have the power to block that bot, or do you know who does? How quickly can you respond? If that isn't documented somewhere and you're not there, and you're the community lead or community manager, do members of your community know how to escalate quickly? What's your SLA to stop this drive-by harassment?

Okay, now imagine, I'm gonna do a little role play here, imagine that someone just says this: "Buying from that vendor is like buying a drink in Tokyo. You never know if there's something extra in it." What's wrong with this? Like, they mean it as a joke. Let's take a moment to unpack it. If you don't notice it right away, it is a reference to a roofie in someone's drink, to drugging them. And that does happen at events, unfortunately. Hopefully none of our events, but it certainly happens at some tech industry events. Among men, sometimes they joke about this, and that's where this joke came from: executives in Tokyo being drugged and their stuff stolen. It's so much worse for women. So this joke automatically has a different impact for different people, and we could talk about the intent versus the impact. It's also incredibly culturally insensitive to name a country like this and say something disparaging about it. Now, the next question is: what do you do if this happens in a very public forum? It's not just a joke in a bar between some folks. What if it happens on stage? How do you respond if it's an executive saying it, or a member of staff, or someone who's got a huge reputation and platform saying it on Twitter? All of these differences affect how you have to respond to an incident. The final twist here: what if the speaker was themselves Japanese, making a joke about their own culture? Would that change your response if you were on that committee?

Okay. Now imagine, and each of these gets a little heavier, now imagine you overhear: "Well, sometimes a movement just needs a few more martyrs."
Could mean anything, right? But given the cultural context of today, imagine the preceding sentence, imagine that a person says that. Yeah. Pretty bad, huh? Now imagine that's in a dark bar and everyone's traveling, away from home. Is it a safety issue? Is that a threat or not? Super hard to figure this stuff out. When you receive a report, it's really important to listen to the reporter's interpretation of events, not just the facts on the ground, not just what words were said, but where it was said, the context, who said it, the body language.

Let's say you get this report in an email. I don't even know what to say about this one. Yeah. How do you respond? Quick poll, if anyone's feeling up to it: give me a shout-out for what you think you might do, or what kind of question you might ask next. I'm seeing a lot of heads nod like folks are awake here, but no one wants to speak up on this one. Okay, I'll move on.

Let's say you're on the code of conduct body and you hear secondhand that that group over there is just really unpleasant to work with. But no one actually files a report. Are you empowered to go proactively investigate? Are you empowered to go look at recordings and then make a determination even without an incident report? And if so, what can you do? That also needs to be captured in whatever procedural documents you have: whether the code of conduct body can proactively do something to protect the community.

Last one. Why is it not here? Well, that's exciting. There we go. Okay. Let's say you hear about someone in your foundation or in your leadership body overstepping boundaries consistently. Is your code of conduct committee in a position to respond to a leader, someone with power over them, over the community?

Okay, I'm done. Sorry for all that; we're done. How are we all feeling?
I see hearts, I see people saying shaky. I'm sorry, this is the hard stuff, and I'm done with the hard stuff. We're gonna talk now about how to deal with all of this.

So let's recap. A code of conduct is not a legal document, but there are often legal issues at play. There is liability, there is safety. Responses to these things can trigger defamation or loss-of-livelihood lawsuits, or safety issues. There's so much potential legal risk for the more serious incidents. We need to not center punishment, because most of the time these are accidents, and nuance is crucial to understanding and to responding with care and empathy. We have to support community health, and a key element of community health is individual safety, because if as individuals we're not safe in a community, then the community is not healthy. And the committee must be empowered to act, and even though it works in private, it must be seen to act; otherwise rumors of bad things spread, and if the community does not know and trust that there are people working to keep them as individuals safe, it doesn't matter. Weird color on that font. There we go.

Okay, so this matters so much. This is so important because the community in open source projects is the value, to all of our employers, to the companies funding all of this work. If the community evaporates, the project is useless. It's just dead code. It's stale in six months or less. So the community is the real value here, and a cultural risk in your project community is therefore a product risk for your company. If you need to justify why you're working on this, that's a great way to frame it. Why can't HR take care of this? Because HR is designed to manage internal risk. It relies on employment agreements. It relies on confidentiality. And that doesn't work here: the community is an externalized risk. One of the awkwardnesses of all of us working together in open source is that we are working outside of our corporate bounds.
Corporate HR cannot protect us, and corporate HR doesn't want to step into this space, because it's outside of their reach legally. So we have an unmanaged community risk, which, I hate to say it with all the talk about supply chains right now, leads to an unmanaged supply chain risk for all of our companies. This is the power of open collaboration and also the risk of open collaboration. We build stuff better when we work together, but the risk is also there. And when a community fails in public or invisible ways, or is unable to provide a safe environment for members, especially members of minority backgrounds, to collaborate safely, the product that depends on that code really suffers. In the best of cases, companies can fork, because it's open source. We all want our code to be bug-free and vulnerability-free, and there's a lot of concern right now in the whole cybersecurity movement around open source. One of the reasons I believe it is so important we focus right now on building and maintaining inclusive communities is that when people don't feel safe, they can be coerced more easily by bad actors, and that is a concern a lot of people in the security field have right now about open source: that people are going to try to push hypocrite commits in. There was a wonderful talk seven years ago; Christopher Soghoian, I want to say with the ACLU, gave it at the USENIX LISA conference in DC, telling a room full of sysadmins that they were the primary threat vector to national infrastructure. Not their code, but them as people: social leverage, social threats. People are rightfully concerned about that in open source right now too. So we have to be inclusive to help protect against that.

So what can you do? A totally incomplete guide. I'm sorry the talk has been doom and gloom so far; this is where I get to be constructive, helpful. Here's what you can do to make all this better. Shift from policing to supporting.
If you're working in an incident response body, focus on support for members of your community. We have to build out, as it were, a service org for community health in large projects and from our companies. Some companies have been staffing community managers for years, but not enough, especially not at the scale of open source today.

Establish norms of practice across communities. We've started doing a little bit of this; I wanna see so much more, because the things that work well in, say, the LF probably also map more or less to some other foundation, and vice versa. We should be sharing, not siloed. And there are some norms here that we know work; we've also learned what doesn't work. I want to establish a body across orgs to share that knowledge.

And I'd like people to also begin publishing transparency reports. I led that work in Kubernetes and brought it up to the LF; the LF just published one earlier this year. While everything a code of conduct committee does is shrouded in secrecy to protect confidentiality and mitigate risk and harm, what I've seen work, and what the research shows, is talking about it in the aggregate when that's possible, and it isn't always. It helps to instill a sense of trust from the community that, hey, people are actually enforcing these norms. And it can be as simple as: in the last six months we kicked out 25 spam bots from Slack. Like, okay, that's generic, not a big deal. But it also tells people that you're doing it, you work on it, you care about it. The community has that support. And that's really important to build the trust of people who are thinking, gosh, someone hurt me, but I don't know who to tell; I don't know if they're gonna listen to me when I do tell them. This is one way to slowly build that trust. The irony is, what we've seen is that once you start publishing these, the numbers typically go up real fast.
Because in the first couple of iterations, people realize that there's a body there to support them, to help them when there's a problem. So they report more often instead of just leaving, which is what happens otherwise.

This is the weird one. Because there is legal risk to the folks who are doing this work, sometimes it's real good to give them actual liability coverage against it. And it's hard. I don't have any recommendations on, like, what insurance company to go to or what to tell them. Just that you might want to. I'm aware of an unfortunate number of communities, both outside and inside the tech industry, where folks have been sued over doing this kind of work. And it is terrifying. Much like we see people suing victims of domestic violence for speaking up, or victims of abuse for speaking up about it. I'm so frustrated by our legal system empowering people to silence problems like this. It's not an easy balance.

Lastly, most importantly, fund training and create staffed roles. The Consent Academy and Otter Tech both have training courses. I recommend both of them; they're different, they're complementary. Can't recommend this enough. And have an escalation path to a third-party contractor or a trained mediator or something else, in case your code of conduct committee has conflicts of interest, can't handle something, is overwhelmed, or for whatever reason needs to recuse itself. Having a documented escalation path to some other neutral body is crucial, and we've seen the need for that in a couple of communities over the past few years.

To recap: establish norms of practice, publish transparency reports, provide liability coverage, and fund training, staffed roles, or both. This is stewardship. That's a word I used in my keynote several times. This isn't management; this is stewardship. We are caring for people and letting them do good work.
Leaders in open source are at their best, in my opinion, when they are simply stewarding a community: not applying authority, but supporting others.

A couple of quick closing thoughts. Consent, when applied as a lens to both social and business interactions, is really transformative. Not just here, but also when we're talking about building machine learning and artificial intelligence products, gathering customers' data and deciding what to do with it. Consent makes a huge difference in the tech we build. And all of the concepts I was just sharing with you are not originally mine. I've adapted them a bunch to this industry, but I learned them from the Consent Academy, I learned them from the Occupy movement, I learned them from Indigenous communities in the Northwest. These practices about how we build community go back a long way. And in fact, that's part of the roots of open source. Like, we go back 25 years, and that's where folks were doing this stuff. It's all connected. So that's it for my talk, thank you all. I can take questions if we have time.

I wanna thank you for this particular talk. I'm trying to formulate... I have a lot of questions, and I just wanna stick to one in particular. My community is the InfoSec community, and there's one case where there has been a really...

Yes, I wanna pause. I don't wanna talk about any specific cases, because we are on record.

Okay, so, big picture. What happens, or what is your advice or suggestion, when people have come forward, they have made their reports, and actions have been taken, and the person that the actions have been taken against is consistently showing up and causing more harm to the community? What do we do, those of us in the community who want to move on, who want to continue building community? How do we deal with people who just want to continue to traumatize those in the community?

If your community doesn't have a means to ban someone from a shared space, whether it's a Slack instance or a Discord or an event, make one.
I don't know what to say in the generic sense, other than that building community safety is so important. And when a person shows an unwillingness to learn, to work with others, to work towards that safety, when they just want to antagonize, you have to exclude them, unfortunately. As the saying goes, I'm gonna miss the quote now, I said it in my keynote: the paradox of tolerance applies. We cannot tolerate the intolerant, especially if they're coming just to be intolerant and hurt others. But there are so many edges and gray lines there that I can't, and you shouldn't, consider this any specific advice of what to do. I don't know the situation, but lots of empathy, because it's so difficult to deal with folks like that.

Thank you for this talk. Where do you see innovation coming into this space?

I mean, right here. The intersection of bodies of practice like the Consent Academy and the restorative justice movements with the tech industry. We've only really been talking about codes of conduct in the tech industry for about 12 years, and so much of that has been mired in debates about whether women are on equal footing with men and have a right to participate. Completely inappropriate questions on their face, and yet those keep getting brought up. So where can we innovate? The tech industry already has a huge disparity here. Let's fix that.

Two questions up here, I think. And how much time do I have left? Five minutes, okay, cool.

So I was wondering, do you have any frameworks or resources that you could point someone to, whether it's an existing community that needs to build out a code of conduct and a committee, or a new committee looking to form and establish those best practices?

I think your question was: are there existing resources or already-documented practices for open source communities?

Well, generally speaking, is there a body of work that you would recommend to somebody that's new to community building?
I mean, the Kubernetes community has documented our process; it's on k8s.dev, under the code of conduct committee's process documentation. That's what we built that works for us, given the scope and size of Kubernetes. Last time I looked, the Drupal community also had a good one, pretty thorough documentation. I think PyCon North America has also had some pretty good documentation on how they manage their community stuff, in the broader sense of how to build community safety. Yeah, I wish I had a great single resource for that. I have a bunch of mediocre resources and a bunch of books that I've read. I haven't compiled a list. It's a good idea, though; I should be able to point to, like, here's my website with 12 books on this topic. I should do that. Oh, and the Consent Academy has actually written a book on consent frameworks. That's one place to start. I saw a hand or two on this side of the room at some point.

Thank you, Ava, for not just blowing my mind, but sort of expanding it in a very careful but intricate way. And this question is maybe in that vein. You said something early on, talking about autonomy, and the word "authority" kept coming into my mind in retrospect as a counterbalance, but it didn't come up; you said "authority" a little bit later. And I've got a sense that there's a balance going on. Part of it is that many of us are existing in a space here that's controlled by authority, where our bodily autonomy is subject to the authority, as opposed to the other way around. So, are there communities or places people are choosing to be in... there's already the consent of the governed, but how are we getting away from the authority trying to be on top of the autonomy, when autonomy is so key? Does that make sense?

I don't know that I follow your question, but I'll try to repeat it back and see if I got the gist of it right. How do power differentials and authority affect autonomy? And is that different if it's, say, online or in person?
Yes, and then the flip side being: if we want to center autonomy, how do we look at those dynamics and keep being able to center autonomy in those situations, when the authority tries to say, no, no, no, we're the ones in charge of X, Y, and Z?

Okay, so... before the pitchforks and torches. Yeah, yeah. So there's a fun demonstration of power differentials that I sometimes give; I didn't want to do it today. Imagine me and somebody else standing up here on stage. We each start with five tokens, and we describe an attribute of the other, like: you are six foot four and really muscular, here's a power token. They might say: oh, you're a board member, here's a power token. Right? And we can sort of model the physical, cultural, and organizational power differences between people by thinking of it in this way. It's sort of an arbitrary model; it's more of an analogy than a reality. But it's a metaphor to talk about, or think about, how power differentials exist in our world, invisibly, all the time. As far as bodily autonomy goes, gosh, if in this professional setting any of our bodily autonomy is ever not consented to, I don't think I need to talk about that; that's a violation right there. So I don't think that's what you were asking. Oh, we're in a space with people? Yeah, we've consented in some way to be in this building. So if the fire marshal comes in and says, hey, there are too many people in this room, like, that's part of society. I don't know that that really reads on this. Maybe I've misunderstood. Any other questions? Going once, going twice? Okay, thanks for bearing with me. I know this was heavy. Thank you all.