Alright, I think we're about ready to start here, so thank you everyone for coming to my presentation. I think the room is a bit big for what we've got here, but it'll do. So, one of the things that I think is great about our open source communities is the way they allow a broad range of different kinds of people, representatives from large corporations, free software ideologues, anarchists, to work together towards common goals. And most of the time these goals are aligned, or at least they don't conflict. But in any project there are inevitably going to be times where values clash, where you have to make a decision, and there's no one option which will make everybody on the project happy. And when that happens, what do you do? In the Xen project we had such a decision several years ago, when we knew we weren't going to be able to get consensus. We made the decision, and things turned out actually pretty well. So I'm here today to share some of that experience with you. Now, I'm convinced that very little of the so-called intelligence that we have as human beings is actually carefully reasoning things out from first principles and thinking ahead. The majority of our intelligence is actually pattern matching based on our experience. You make a mistake, you learn from your mistake, and you do better next time. But one of the biggest advantages we have as humans is that we can share our experiences through language. So most of the time we can learn the easy way from someone else's experience rather than the hard way from our own. Now, I could structure this talk as principles, advice, a checklist to follow. But the reality is that every situation is different. The particular issue that you face will be different from the issue that we faced, and the people and the relationships involved will be different as well.
So, mainly what I want to do is tell you our story, so that you can learn what you can from our experience, take some patterns with you, and adapt the ideas to your situation. Right. So let's start with the actual issue we were facing. Xen is, on the whole, a pretty secure piece of software, but like any large piece of software, it has security vulnerabilities. Unlike many large pieces of software, we have what we think is a pretty mature security response process. We have a well-known place to report vulnerabilities. We have a structured way of announcing vulnerabilities to users. And most pertinent to the purposes of this talk, we have a pre-disclosure list, which allows us to inform a subset of people about a vulnerability and a fix before the public disclosure date. Many of you may be familiar with the concept of pre-disclosure. The basic idea is that users have a right to know that the software they're running is vulnerable, but simply having a patch that fixes the bug doesn't help most users, because most users don't build the software they use themselves. They get binaries from software providers like distributions. So the idea is to minimize the risk users are exposed to by giving software providers the fix ahead of time, so that as soon as we make the public announcement, there are already binaries that have been built, tested, and made available for download. But Xen, as a virtualization technology, is a bit different from many other software projects because of the public cloud. In the public cloud, users are relying on the virtualization software to protect them from other users. Neither source patches nor binary patches will help those users; only the cloud provider can apply the fix. So when the pre-disclosure list was created many years ago, it was decided to include public cloud providers on the list.
But, of course, the more people have access to some piece of information, the more likely it is that somebody will misuse it. So it was decided at that time that, in order to keep the pre-disclosure list small, only large public cloud providers would be on the list, where "large" was somewhat arbitrarily defined as having Xen deployed on more than 300,000 nodes. Now, I say "it was decided" as though there was some thoughtful, measured decision where lots of things were carefully considered. In reality, what happened was that a new security issue came up, and it was bad enough that the security team thought we should probably tell some people ahead of time. The initial list was made by the security team in secret, because the issue itself was secret, and it was basically "people we think are important", which included some large public cloud providers. Now, obviously that's not a very good way to run an open project. So after the security issue went public, we defined an official policy with objective criteria for who could be on the list. But the criteria for that policy were defined more or less by the list which had already been made. That's where the size limitation got built in. The security policy was sent to our development list for discussion, but the size restriction didn't get much scrutiny. Remember what I said about people not carefully thinking things through from first principles, but reacting to experience: people didn't think too much about the implications of allowing only large cloud providers on the list. So with a few moderate tweaks, the policy achieved consensus and became official. The implication of that choice, of having only large cloud providers on the list, wasn't really appreciated until about a year later, around the pre-disclosure of a vulnerability called XSA-7. This incident exposed a number of weaknesses in the Xen security policy, which I've talked about elsewhere.
I'll summarize the situation here. XSA-7 was the Intel SYSRET vulnerability. It had to do with slight differences in the error handling of the SYSRET instruction between AMD boxes and Intel boxes, such that if you wrote your operating system according to the AMD spec and ran it on an Intel box, it was fairly easy to break into the operating system. Most operating systems have been vulnerable at some point, including Linux, all the BSDs, and Windows; and because Xen uses SYSCALL and SYSRET for 64-bit paravirtualized (PV) hypercalls, Xen was vulnerable as well. The vulnerability was pretty bad. It allowed anyone running a 64-bit PV guest to break into the hypervisor and assume total control of the system. But the fix was also pretty easy. So we mailed the pre-disclosure list with the vulnerability and the fix, and we set the embargo period to two weeks. Unfortunately, one of the companies on that list, it seems, didn't get the message. Now, the company in question is still involved in our community, and this is all water under the bridge at this point, so I'm just going to call this company Phoenix Corporation. We didn't know exactly what happened, whether the person at Phoenix Corporation who was on the list had left the company or whether they were on holiday, but the upshot was that for most of the two-week pre-disclosure period, Phoenix Corporation was blissfully unaware of the vulnerability, and they only learned about their impending doom three days before the planned public disclosure. And at that point, they panicked. They said, we can't possibly test and deploy a fix to hundreds of thousands of nodes with only three days' lead time. So their security team demanded that we push back the public disclosure. Now, for reasons I won't get into, the security team of the Xen project was reluctant to do so. And at that point, Phoenix Corporation pulled out all the stops.
They called and harassed the members of the Xen security team. The CEO of Phoenix Corporation called the CEOs of the companies that the security team members worked for, to try and pressure them to change the date. They called everyone else on the pre-disclosure list. And finally, they called the CEO of the company where the person who had discovered the vulnerability in the first place worked. And eventually, with all this pressure, they managed to have the date pushed back by about six weeks. Now, I've covered a number of aspects of the situation, and the lessons learned for designing a security response process, in another talk. For the purposes of this talk, there were basically two conclusions. First of all, many of the core developers were very unhappy with Phoenix Corporation. We felt that they had been given a privilege by being on this pre-disclosure list, and they had abused that privilege. But more importantly, I think for the first time we realized how unfair the policy was: how unfair it was to allow a large cloud provider like Phoenix Corporation on the list and exclude all the other medium and small cloud providers. We saw how panicked Phoenix Corporation was when they learned of the vulnerability and realized they had only three days before it would be known to the entire world. So imagine the feeling of all those medium and small cloud providers when they learned of the vulnerability and realized it was already known to the entire world. Furthermore, allowing large cloud providers on the list only because of their size gives them yet another advantage. They already have an advantage because of their size; now this gives them another advantage in the marketplace too, because, after all, the members of the pre-disclosure list are public. You might as well say: if you care about security, use one of these companies, because they will be able to apply their patches before anyone else.
So a number of people wanted to change the rule that said large cloud providers, and only large cloud providers, could be on the list. We knew we had to have this discussion in public, but we also knew that, whatever happened, we were very unlikely to reach the same kind of consensus that we had when the rules were first made. Whatever decision was made, some group of people was going to be quite unhappy. So that was our situation. And we also knew that how we ran the discussion was going to be just as important as what the final conclusion was, because of trust. Trust is a major factor in the success of an open-source project. Even just to use our software or contribute to our communities, people need to trust that the project is going to be around, and they need to trust that their needs and preferences are going to be taken into account when making future decisions about the direction of the project. And to develop a product based on your software, or to contribute significant amounts of code, people need to trust it even more. They need to trust that they'll be able to influence the direction of the project if they do a good job and work hard, and that their contributions will not be rejected or obstructed due to bias or cronyism. So we knew that even if we made a decision that was best for the community as a whole, if that decision was made in a way that seemed arbitrary or capricious or biased towards some set of insiders, it would not only undermine the trust of people who didn't get what they wanted, it would also undermine the trust of people who agreed with the decision. Because even though they agreed with this particular decision, they could see that they might be the next people whose needs got ignored. So we actually had two goals when it came to this decision.
The first was to find a solution at the center of gravity of the community: the best solution for everybody, the one which best balanced the needs and desires of all the members of the community. The second was to run the discussion in such a way that everyone felt that their voice was heard and their needs considered, regardless of whether they got their preferred option or not. So here's what we did. First, we made sure that we had a fallback in case consensus couldn't be reached. So, this is a cottage that my dad's family used to own. It's on an island on the Canadian side of Lake Huron, actually about a four-hour drive north of here. My grandparents and their children bought it in the 1960s for the princely sum of 15,000 Canadian dollars. And it was really awesome. Growing up, I spent at least one week every summer there. It was an immense source of joy to me and my family. But it was sold about 10 years ago, and for the 10 years before that, the last 10 years that my family owned it, it was actually a source of a lot of pain and heartache to my family. You see, when they bought it in the 60s, they didn't set up any kind of official process for how to make decisions about the property. They thought: come on, we're family, right? We're brothers and sisters. We'll just talk things through and sort things out. In other words, they operated by consensus. And that did work fine for many years, particularly while my grandparents were still alive and could act as de facto arbiters. But as time went on, people's ideas about how to run and how to develop the property grew further apart. And if discussion is the only way to make decisions, and friendly discussion fails, then unfriendly discussion is the only way forward. Crank up the rhetoric. Crank up the nastiness.
And by the time the cottage was sold, my aunts and uncles were really just sick of the place. If there had been any kind of decision-making process, even a really basic one, like "if you can't agree, then the oldest person gets to decide", then what happened to my family would not have happened. Not everyone would have gotten what they wanted, but everyone would have been a lot happier. And a lot of open-source projects are run basically the same way. When you start out, you talk things out and things work. Consensus works just great, so official decision-making processes seem heavyweight and unnecessary. But process isn't necessary until it is. Consensus will work just great until it doesn't. And at that point, having a way of making a decision without consensus is vital. So make sure that there's a clear fallback in case consensus isn't reached. This could be something simple, like having a benevolent dictator, or a majority vote of the committers. Obviously, you want to make sure that the decision-making process itself is something you can trust. If you're going to have a dictator, make sure he's actually benevolent and not capricious or malevolent. And make sure that everyone knows what the process is going to be, so that everyone knows what will happen if consensus isn't reached. In the case of the Xen project, many of the project's founders had, by the time of this discussion, departed from the scene, and we had just recently been writing up our governance documents. So about a month before we opened the discussion about the pre-disclosure list, we proposed adding text to the governance document saying that if consensus couldn't be reached, the decision would fall back to a majority vote of the committers. That change itself achieved consensus, and it was made official policy. Right. So secondly, we had an online discussion, but we didn't stop there.
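A committer-majority fallback like the one just described can be sketched in a few lines. This is purely illustrative; the Xen project's actual votes happen on the mailing list, and the option names and function here are made up:

```python
from collections import Counter

def committer_vote(votes):
    """Plurality vote among committers, used only as a fallback
    when consensus fails. `votes` maps committer name -> option.
    Returns the winning option, or None on a tie (a real governance
    document would also need to spell out a tie-breaking rule)."""
    tally = Counter(votes.values())
    if not tally:
        return None
    ranked = tally.most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None  # tie: no clear winner
    return ranked[0][0]
```

For example, committer_vote({"alice": "open-list", "bob": "open-list", "carol": "status-quo"}) picks "open-list". The point is not the code itself but that the rule is written down in advance, so everyone knows what happens when consensus fails.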
Whether it's on mailing lists or forums or issue trackers or something else, online discussion is the lifeblood of our open-source communities. And for many kinds of things, it's really great. It's great for identifying all the important factors in a decision. It's good for clarifying thinking, for exploring all the different possible solutions, and for understanding the implications, the pros and cons, of all the different options. You definitely need all of these things if you're going to make a good decision, so you definitely need to have an online discussion. But when what you also want to do is gauge the sentiment and the values of the community as a whole, online discussions have several pretty severe weaknesses. First of all, they favor people who like to argue. Now, I love to argue, so this is great for me. But not everyone likes to argue, and just because you don't like to argue doesn't mean that your opinion is any less valuable. They also favor people who find it easier to express themselves, whether because they're more articulate, because they have better English, or because they type faster. And there's a social aspect to online discussions too. They tend to favor people who feel like they're in the in-crowd, or people who feel like their opinion will be popular. If someone doesn't like to argue, but they feel that they themselves are popular, or that their opinion is going to be popular, they're much more likely to share it. And finally, online discussions are bad because they hide silent agreement. If a discussion is mostly happening between a handful of people, with a large number of onlookers, it's hard to tell whether half the onlookers agree with one side and half with the other, or whether basically everyone agrees with one side and the people arguing the other side are all by themselves.
I think a good example of this, where both the sociological factors and the silent-agreement issue came into play, was the discussion about the Debian init system. Sorry if you're not familiar with this, but a lot of people were pretty annoyed with Ian Jackson for calling for a general resolution of the Debian community about package dependencies. But what the vote revealed was that 25% of the Debian maintainers preferred a resolution that would forbid depending on systemd over one that allowed depending on systemd. Now, 25% is not a majority by any stretch, but it is still a pretty significant chunk of people, whose thoughts and preferences were obscured, or at least not revealed, by the online discussions that had happened so far. So in the Xen project, we kicked off the discussion online, and it was actually really useful. I went back and read through it as part of preparing for this talk. It went on for about four weeks, and it brought out good perspectives, clarified a lot of our thinking, and highlighted a lot of the advantages and disadvantages of the different options. But what it didn't do was produce a consensus. So at that point, we moved on to the next step. We summarized the major positions from the discussion and held what I'm going to call a five-point survey. At this point, the goal is not to nail down the fine details; the point is to get a broad sense of the direction in which the community would like to go. Once you have a general idea of the direction, then you can nail down the details. So first, identify the high-level options that people have been advocating. In our case, everything boiled down to basically four general options. The first was no pre-disclosure: just get rid of the pre-disclosure list entirely. When you have a vulnerability and you have a fix, publish both to the public at large immediately.
Then you don't have this complication about who's on the list and who's not. You don't have a secret cartel that knows something before everyone else does. Everybody's on the same level. The next group were people who said, well, we definitely have to have a pre-disclosure list, but limited to software providers only: no cloud providers at all. This group said, look, with no pre-disclosure, everyone is not actually on the same level, because most people get their software from software providers, and they will be vulnerable until their distribution, or whoever it is, can compile, test, and give them binaries. But cloud providers aren't anything special; they're just a different kind of user. So there was never any reason to have them on the list. Limiting pre-disclosure to software providers keeps things fair and also keeps the list small. The third option was the status quo: the pre-disclosure list consisting of software providers and a small number of public cloud providers, basically large ones. This group said it's not true that cloud providers are just another kind of user; the real users in this case are their customers, and they are just as vulnerable until the cloud provider applies the patch as private users are until the software providers can build and test binaries for them. So cloud providers definitely need to be on the list, these people said, but it's also really important that we keep the list small, to make sure the information doesn't fall into the wrong hands. What that means is we're going to have a small pre-disclosure list; it's unfortunate, it's unfair, but sometimes life just isn't fair. The fourth option was to have a pre-disclosure list that included software providers and basically all public cloud providers: just open it up to all public cloud providers.
This group said, look, we agree that cloud providers should be on the list, and we realize that making the list larger increases the risks slightly, but that slight increase in risk just isn't worth the unfairness generated by favoring large clouds over small ones. If we're going to have any cloud providers on the list, it's only right that we allow all public cloud providers on the list. Notice that we didn't try to define here exactly what we meant by "all cloud providers" or what a "public cloud provider" is. You can hammer out the details later; for now, focus on the big picture. So once we had our options, we ran what I'm going to call a five-point survey. It was actually adapted from a technique called Identify the Champion, which was developed for selecting papers for academic conferences. The idea is that for each of the options you've identified, you rate it with one of five responses: this is a great idea and I would argue for it; I'm happy with this idea but I would not argue for it; I'm not happy with this idea but I wouldn't argue against it; this is a terrible idea and I would argue against it; or, of course, I just don't have any opinion on this one. Now, there are a couple of nice properties that this gives us. First, unlike a poll where you can only choose one of the options, it collects information about what people think about each option. Second, using text instead of numbers makes the ratings comparable across people. If someone rates an option "I'm happy with this idea but would not argue for it", that means pretty much the same thing across different kinds of people, in a way that a numeric score might not. And when it comes down to it, all of our opinions can pretty much be distilled down to one of these five responses. So having the five ratings gives you a good balance between simplicity and richness of information. A couple of other details about the survey that we ran.
We did our best to summarize the options, but we didn't want people to feel that they were being railroaded into an artificially constrained set of choices. So in addition to the four options we had identified, we added a field where people taking the survey could write in their own option if they wanted to. And this turned out to be pretty good, because many people used it to give explanations or additional comments as well. Another question we faced was whether to make the survey anonymous or to make people put down their names. On the one hand, we wanted people to be free to express whatever opinions they actually held, without fear of consequences. On the other hand, we wanted to make sure that there wasn't a risk of ballot stuffing; and even if there wasn't actually a risk, we wanted people to be able to trust that there wasn't any ballot stuffing. So in the end, what we decided was that people could choose to be anonymous, but if they put their name down, their votes would be given more weight, particularly if it looked like there might have been ballot stuffing. In the end, having people's names turned out to be quite useful in doing the analysis, because it allowed us to identify clusters of voters. I'll cover that a bit more in the analysis section. So we ran the survey. We left it open for two weeks, to make sure that we didn't catch anybody on vacation, and we publicized it on our blog and on whatever social media channels we had. In the end we got 33 responses, of which only four were anonymous. And among the 29 people who put their names down there was a pretty good mix: we had representatives from all over the community, developers, distributions, large cloud providers, small cloud providers; we even had Alan Cox. So the next thing we did was analyze the data, looking for the center of gravity of the community. So remember, this is not a vote.
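The mechanics just described (a write-in field aside) are easy to make concrete: each response carries an optional name plus a rating per option, anonymous responses can be discounted if ballot stuffing is suspected, and the rest is tallying. The short labels and the filtering rule here are my own illustrative assumptions, not the Xen project's actual implementation:

```python
from collections import Counter

# The five ratings, with short labels standing in for the full sentences.
RATINGS = {"argue-for", "happy", "unhappy", "argue-against", "no-opinion"}

def effective_responses(responses, suspect_stuffing=False):
    """Named responses always count; anonymous ones are dropped only
    if ballot stuffing is suspected (the policy described above)."""
    if not suspect_stuffing:
        return list(responses)
    return [r for r in responses if r.get("name")]

def tally(responses):
    """Per option, count how many respondents gave each rating."""
    counts = {}
    for response in responses:
        for option, rating in response["ratings"].items():
            assert rating in RATINGS
            counts.setdefault(option, Counter())[rating] += 1
    return counts
```

Keeping the responses as structured records like this also makes the later clustering step possible: with names attached, you can group voters by affiliation and see who favored which option.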
The purpose is to find something that best balances what everybody thinks, and that may mean a bit of digging. Some things to look for. Obviously, an option that has higher total approval than total opposition is better than one that goes the other way. Look out for polarization: if you have an option that people have strong feelings about, where some people really love it, some people really hate it, and there aren't very many people in the middle, that's not really a great option on the whole. Particularly if, when you look at the details, it turns out that the people who really love it are one kind of group and the people who really hate it are a different kind of group; that kind of division in the community is not very attractive. But if you have an option where the people who are opposed to it seem opposed for opposite reasons, so some people said "this is terrible, I want to go this way" and other people said "this is terrible, I want to go that way", that's an indicator that the option sits somewhere near the middle of the community. All right, so here are the results; we'll go step by step. First, no pre-disclosure. There was a handful of people who liked this one, but it had a lot of people who said "this is a terrible idea". And actually it's not 15; the chart here is clipped, so it was 17 people who said this is a terrible idea. We crossed this one off the list pretty quickly. Next we have the software-providers-only option. This one is a bit of a mix: some people really liked it, some people hated it, and there were a lot of people in the middle who said, "I don't really like this idea, but I could live with it". So it's a possible option, but not really the best if we can find something else. Next we have what at the time was the status quo: software providers and large cloud providers only. This one, as you can see, is quite polarizing.
A lot more people said "this is a great idea" or "this is a terrible idea", and many fewer people were in the middle. On the whole there was more negative than positive, and when you dig into the data, it turns out that most of the people who said "this is a great idea" were large cloud providers, not surprisingly, and most of the people who said "this is a terrible idea" were developers. Having that kind of division, with one group of people over here and another group over there, made this seem like not a great option. Finally, we have the idea of opening up the pre-disclosure list to all public cloud providers. This one had the highest number of "this is great" responses of all the options, and it had far more positive votes than negative votes. In addition, when we dug into the people who said "this is a terrible idea", about half of them wanted no pre-disclosure at all, and the other half wanted just large public cloud providers. So the fact that it had a lot of support in the middle, and that the people who opposed it did so for opposite reasons, seemed to us to indicate that this was the best option. We wrote up a blog post with the analysis, and we proceeded to the next step, which was to write up a concrete proposal, propose it on the list, and, if necessary, fall back to the committer vote. In this case, the proposed change was to modify the security policy to allow any public cloud provider to join. This is the point where you have to drill down to the exact wording and try to define the objective criteria that you're going to use to determine whether someone is a public cloud provider or not. On the list, we had just the normal discussions about tweaks of wording and that sort of thing, but the proposal itself, once all the tweaking was done, passed without any objection.
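The two heuristics used in the analysis above, net approval and polarization as a warning sign, can be made concrete. The numeric treatment here is an illustrative assumption of mine; the actual analysis was done by inspecting the charts and reading people's comments:

```python
def net_approval(counts):
    """Total approval minus total opposition for one option.
    `counts` maps a rating label to the number of respondents."""
    approval = counts.get("argue-for", 0) + counts.get("happy", 0)
    opposition = counts.get("argue-against", 0) + counts.get("unhappy", 0)
    return approval - opposition

def polarization(counts):
    """Fraction of expressed opinions sitting at the two extremes.
    A high value flags an option that splits the community, like
    the status-quo option in the results above."""
    expressed = sum(n for rating, n in counts.items() if rating != "no-opinion")
    if expressed == 0:
        return 0.0
    extremes = counts.get("argue-for", 0) + counts.get("argue-against", 0)
    return extremes / expressed
```

With made-up counts, an option rated mostly at the extremes scores high on polarization even when its raw vote totals look similar to a broadly tolerated option, which is exactly the distinction the analysis needed to draw.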
In other words, we never actually had to fall back to the formal committer vote, because nobody objected. Probably part of this was because everyone could see that if they did complain, it would fall back to the committers, who would pass it anyway, so there wasn't much point in complaining. But hopefully a lot of people didn't complain because they saw that, even if this wasn't the option they wanted, it was the one that was going to be best for the community as a whole. Either way, we made the change, and we got back to writing code, and that's where we are today. So if your VMs are running on a Xen public cloud, and your cloud provider is at all competent, then by the time you hear about a security vulnerability, your data is already safe. Okay, we've covered a lot of ground now, so let's recap. Make sure that you have a fallback in case consensus is not reached. Have an online discussion, but don't stop there. Summarize the options and run a five-point survey. Analyze the data to find the center of gravity, make a concrete proposal based on the findings, and move forward. And with that, I will take any questions. Yes. Yeah, so the Xen community, the core developer community, is a lot smaller than Linux's. Let me repeat the question first. He asked: we had 33 responses; was that a lot? Was it more or less than we expected? How did that figure into things? I think it was probably slightly more than we expected, because the number of people actively discussing things on the list, I would have to go back and count, but it might be around 15 or so. And of course, again, there are people who don't like to argue, people who don't feel comfortable expressing their opinions on the list. So having 33 was, I think, pretty decent. Do you have anything to add to that, Lars? Okay.
Does that answer your question? Yes. Thanks. Yeah, I would have to go back and look. So we proposed the change to have a fallback about a month or two before the actual discussion, right? The discussion took about four weeks, I think I mentioned. Then I planned the survey, and that maybe took a month; then the survey ran for two weeks, and then I collected the results and made a proposal. So I think probably five or six months, maybe? Yeah. I think, basically, when we had the discussion, everyone was engaged online; there was a huge amount of interest. And then we had the survey, and lots of people responded, and it was really interesting going back and looking at the results and seeing all the comments that people made. Even when they took the survey, they still felt passionate about what they wanted to argue for. And then when we wrote up the blog post saying, this is the analysis, at that point everyone kind of felt like, okay, this is the solution, this is where it's going, and it doesn't matter too much exactly when we get there. Does that answer your question? Yeah. There was someone back here. All right. So, I have my PhD, and I was invited to be on an academic program committee, and that's how I was exposed to the technique. So again, this is pattern matching based on experience: I said, ooh, this is useful, and I applied it to my situation. All right, the guy who wrote it, yeah. So I'm not sure, because one of the things about the survey is that you have fairly objective things that you're filling out, right? So during the survey itself, individuals can't really throw their weight around. Does that make sense? Right. So no, we didn't. And part of it is because I think, overall, we do have a pretty good community, even if there are people with strong opinions on either side.
And again, I think part of it was that, because we had that fallback, everyone could see what was going to happen. We ran the survey and wrote up the results, saying this is what we found. I can't speak for everyone, I didn't ask them, but my guess is that everyone could see where things were heading: these were the survey results, this is probably what the community is going to go with; I can argue, but it's just going to waste my time and make people upset. So I hope that answers it.

One other minor point: I didn't publish all the raw data, because it had people's email addresses and that kind of thing in it, but I did say that if you wanted the data, I would send you a slightly sanitized version privately. I think one or two people asked for it. So again, the process was open, and I think everyone could tell this was probably going to be the best thing going forward. Does that answer your question?

There was someone over here. Yes, it was just kind of fuzzy. So the main reason we wanted the survey not to be anonymous was to avoid ballot stuffing, where one person goes around and votes ten times. Basically, what we wanted to be able to say was: if it looks like there might have been ballot stuffing and you voted anonymously, your vote might get ignored. Does that make sense? And most people were totally fine with putting their name to it; they put their name down and said, this is what I think. Oh, I see, so you mean... right, right.
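One way to act on that non-anonymity can be sketched as code. This is purely illustrative; the field names and addresses are invented, and the real survey tool worked differently. The idea is just that tying each submission to an identity lets you collapse duplicates instead of counting them.

```python
# Invented raw submissions as (email, timestamp, vote) tuples; a real
# survey export would have a different shape.
raw_submissions = [
    ("a@example.org", 1, "keep-current-policy"),
    ("b@example.org", 2, "open-list-to-all-clouds"),
    ("a@example.org", 3, "open-list-to-all-clouds"),  # same person, voting again
]

def deduplicate(submissions):
    """Keep only the most recent submission per email address."""
    latest = {}
    for email, timestamp, vote in submissions:
        if email not in latest or timestamp > latest[email][0]:
            latest[email] = (timestamp, vote)
    return {email: vote for email, (_, vote) in latest.items()}

print(deduplicate(raw_submissions))
```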
Well, I think your question is: could we have said, for instance, that you vote non-anonymously, so that the person collating the results knows who you are, but he simply gives a guarantee, Scout's honour, that these were all different people and that nobody voted ten times? Yes, that's actually something we could have offered.

So, this exact process, the survey, we've used exactly once. The project started as a university project back in 2002 or 2003, and it's been a public project for over ten years now, and the total number of times we've had to do this is one.

Sorry, did you want to add something? Okay, right. In general, actually, instead of just the plus-one, minus-one thing, which is sort of useful, having a way for people to say, "I'm not happy with this, but I don't want to block it," is really valuable. One of the things we've been considering adding to our governance document is officially saying that if you're going to run a straw poll, use this five-level scale: this is great, I'm happy; I'm happy, but I wouldn't argue for it; I really don't know; I'm not really happy, but I don't want to argue against it; and, this is a terrible idea. Having those levels is really useful, because people can say "I don't really like this" without feeling like they're blocking the whole thing. And then it may come out that most people didn't actually want it, and only one person really liked it, and you can say, well, okay, let's not do it. So, to answer your question.
So the question is whether people took it seriously. Well, some people might have felt that way about the recent British referendum: yes, let's leave... oh, right, we didn't actually want to leave, did we? So, first of all, we held the online discussion first, and one thing about online discussions is that the kinds of people who like to argue really get into them, so that draws out the strong opinions. And ultimately, if people don't take it seriously, you can say: you had your chance, we gave you the opportunity. Beyond that, one or two people not voting what they really think shouldn't shift the center of gravity too much. And remember, it's not a vote. The goal is to find a good solution, but also to do it in a way that people can trust the process. So even if you, as an individual, felt you filled out the survey wrong and wished you'd said something else, the next time an issue like this comes up, your opinion will still be listened to. You've still accomplished half of the purpose. Does that make sense?

Sorry, how are we doing for time here? We still have a bit of time. Yeah? Right, as he said, the survey tool came from the academic community. It was designed exactly for an academic program committee choosing papers, so it also works well for conference committees choosing talks.

Okay, any other questions? Yeah? What's that? Oh, so I'm a developer. I'm a committer on the Zen project, and I'm on the team at Citrix that mainly does the open source work. And Lars is the community manager. Well, I think we have a lot of experienced people.
One thing is, I'm a bit older than I look; I'm actually 40. How many minutes is that? Five minutes? Okay, I'm sure we'll be done. Thanks. We also have other people who have been around the block. Ian Jackson is in our community, and he helped write the Debian voting system, which is much more heavyweight than any of this. So again, there was a lot of pattern matching from experience. At the time XSA-7 came out, we were doing everything by consensus, and because of that experience with my family's cottage, I thought, okay, this is going to be a disaster; we have to have something to fall back on. So we worked with the community manager and with other people who had been around the block a few times, who had come up with ideas and been exposed to the survey method and so on. So, to answer your question. Okay. Well, thanks. You guys have been a great audience. And I'll be around.