Well, good afternoon, and welcome back. So we're talking about why it's important that free and open source licenses are neutral on the question of what purpose the software is used for. The reason for doing this is that, late last year, there was yet another resurgence of conversation about the idea that free and open source licenses should be improved by rolling in terms about the ethical use of the software: constraining the user, conditioning the license on an obligation undertaken by the user to operate under certain ethical principles. There are various variants, but that's the broad picture. So it occurred to me that it might be appropriate to address this, because it doesn't get taught about much, and most of it has been forgotten; these were discussions that happened twenty years ago. And so part of it is also just our origins, remembering who we are and what we're about. The grandfather of all of this is the Free Software Foundation's Freedom Zero, so named, actually, after the zeroth law of thermodynamics: the idea that the most fundamental freedom embedded in software is the freedom to run the program. There are actually four software freedoms that the FSF propagates, and I won't go into the details, but the baseline is the ability to run the program. If you can't, then what's the point? The rationale for this, and this is Stallman at the FSF, is that every non-free program has a lord, a master, and if you use the program, he is your master. This is the rationale for the GNU project, the Free Software Foundation, copyleft licensing, the F in the name of FOSSASIA. This is what it's about: the idea that unless licenses are constructed, unless software is constructed, so that developers do not control users, then inevitably a power imbalance arises and developers will control users.
So it's in fact the point of Free Software to say it is not okay for a developer to decide what a user can do; the only decision left is the user's, whether to use the software or not. But the point was to create a body of software licensed in this way. The Free Software Foundation is fairly unapologetically about human rights and isn't shy about saying so. This puts a lot of people off. And in the 90s, Bruce Perens and others realized that there was scope for the use of open source software, or free software as it was then, in a for-profit context, but that maybe it could be branded a bit better, to stop scaring corporate types away with talk about human rights and domination and lords. And so Perens wrote the Debian Free Software Guidelines. This was the definition that the Debian organization would adhere to in deciding what licenses it would permit on software that made up part of the Debian distribution. So this is where this thing really starts to become embedded in the systems that we now use. The Debian group established the ten-point definition and then set about rooting out of the Debian distribution anything which had a license that didn't meet these rules. Almost the same rules made their way into the Open Source Initiative's definition. They said, okay, let's do this, but do it with the corporate branding. The wording and explanation are almost identical. The way this was addressed was not to say positively that the user is free to do anything, but rather to state explicitly that the license must not discriminate against a person or group, and separately that it must not discriminate against any field of endeavor. And the sort of hot-button example that was offered at the time was genetic research, particularly the genetic modification of food.
So these were ideas that were present not as part of a program to enshrine human freedom, which is what the Free Software Foundation was about, but as part of a campaign to make software whose source code is available to everyone part of how corporations do business. And that meant providing the groundwork for a very open method of collaboration. I'll come back to that later when I talk about Coase's Penguin. But that is critical: first and foremost, while free software is about the freedom of individuals, open source is about an open collaboration technique. So okay, you might have heard all that and thought, well, that's great, but I want to decide who gets to use my work, or what they use it for. I really don't like, to take the motivating example, ICE putting children in cages. There are various other things you might object to, but you're left with: hey, I'm contributing, and I should have a say in that. Well, okay, it's your code, you make the rules. Torvalds famously pointed this out about the Linux kernel: do what you like, but he wrote the initial kernel and put the GNU GPL on it, and if you don't like that, then don't use it. So fine, let's look at how this plays out. All right, this sounds like good stuff. These two are great documents. The stuff in them is good. We all think they're excellent. Why shouldn't a license include them as use limits? Forget the enforcement mechanism, that's more complicated; just take the principle of having these two documents embedded as use limits for software. Why is that a problem? Can't we do that? Obviously we can all agree that these are good things, can't we? Okay, let's look at it. The UDHR is thirty articles; here are two of them. And it dates from 1948, which is the next problem, part of the argument I'll get to in a moment.
So this is immediately after World War II, when a particular approach to the relationship between the individual and the state had gotten a bit out of control, attempted to destroy an entire ethnic group, literally physically kill them, and as a side effect killed tens of millions of people. So yeah, everyone was like, this hasn't worked, let's think differently. I'll get to why that context also makes this inappropriate a bit later. So, here are two articles. I'm drawing these out because this is an area I work in. No one shall be subjected to arbitrary interference with his privacy or his correspondence, nor to attacks upon his honour and reputation, and everyone is entitled to the protection of the law against such interference; and everyone has the right to freedom of expression, to hold opinions without interference, and to seek, receive and impart information through any media, and so forth. So okay, so far so good. Hopefully we're all okay with these. These seem like good things, so how could it possibly be a bad idea to embed them? Because this is the actual text that's being incorporated by reference into, for example, the Hippocratic License. How could that possibly be a bad idea? Well, the first thing is in fact the point I've already noted: this was written in 1948. The first enforceable laws didn't appear for more than twenty years. The gap between agreeing a set of principles and drafting a law that's enforceable is enormous. It's not a simple thing at all. But worse than that, those first laws were single-state laws within Germany, and they dealt with specific pieces of processing. To be fair, automated data processing wasn't very advanced at that stage, so perhaps the limited form of the risk was part of the reason for the delay. But it's not as though the rest of the UDHR got itself rolled into law worldwide a great deal faster. Another 25 years goes by, and finally the European Parliament and Council issued a directive in 1995 which runs to about 12,000 words.
And all it does is turn those two paragraphs into actionable law. It's a 12,000-word document which took almost half a century to appear. Twenty years of experience with that gave rise to the GDPR, which we all know and love, and which is more than four times the size. Determining whether a particular action complies with the GDPR is difficult. It's worthwhile, it's a fantastic piece of law, and I'm a major proponent of it. But to say yes to the terms of a license which embeds the UDHR, you are necessarily taking on some embodiment of it in workable law, and we're now talking about stuff that runs to dozens of pages and needs professionals to interpret. So it's not feasible. An individual programmer, or even in some cases a mid-sized organization, cannot look at a bit of software and say yes we do, or no we don't, comply with everything in the UDHR under every conceivable interpretation. And for this reason it's completely the wrong tool for the job. Human rights principles, enforceable laws and regulations, and contract or license terms are quite different things, intended for use in quite different situations. Most importantly, human rights principles were intended as a starting point for creating enforceable laws and regulations. They're intended for situations where human lives are being seriously harmed, where they're being harmed by states, who have the right to use force to enforce their will, and where the stakes are high enough that courts are involved, case by case, to determine what is and isn't lawful. It just isn't simple to determine. It's not that one or both parties are being dishonest; it's simply difficult to determine whether a particular act is or is not compatible with the intention of the regulation, let alone the underlying laws and principles.
And so these things are extraordinarily inappropriate to move from the law relating to states, where the use of force is involved and therefore court review is available, into the law of private contract, and particularly into open source communities, where decisions are made at a very low level and at very low stakes, and nobody can afford a panel of lawyers to determine the consequences of incorporating the entire gamut of human rights into a license agreement. So okay, you're not convinced. You still think these are such good things that they should be put in, and that in cases where we can definitely prove someone has breached them, it'd be a good thing to have, because they're all good; we can't possibly object to them, can we? Well, pretty well every person in this room draws some or all of their income from a company whose business is automating something that used to take more human beings. Under one interpretation, every single one of us makes a living out of breaching Article 23: we are impeding the ability of other human beings to earn a living. Now, hopefully no one in this room thinks that's a bad thing; over time, we're all reasonably convinced, the available labor adapts to the available demand, people learn new things, and the problem goes away. But certainly under one set of interpretations, what most of us earn our livings doing breaches this article. So if we start to embed this particular set of ideas as principles in licenses, we're creating minefields for ourselves, for our employers, and for our customers. So what I'm suggesting is that these are well-intentioned attempts to impede bad people. The reasoning goes: here's a thing that is being made use of to advance some interest (the recent one was ICE putting babies in cages); here's a thing where we could exert some control; therefore, we must. That conclusion doesn't hold.
Yes, sticking babies in cages is appalling. But the use of human rights principles in open source licensing is just not a good tool; it's a bad fit for licensing. And quite apart from everything I've said so far, it's not compatible with FOSS communities. I'll touch on this again in the Coase's Penguin section, but FOSS communities in particular are diverse people who have conflicting interests and often don't like each other very much. We succeed in spite of that. We succeed because we're able to facilitate collaborative work by extremely diverse groups of people, groups who in many cases are direct commercial competitors. Think of the phone manufacturers and the GPU manufacturers who contribute to the Linux kernel: they work directly with their competitors, and yet they cooperate. And groups who in some cases despise each other; there have been a number of places where some subset of the developers or users of an open source tool doesn't much like the others. But perhaps most critically, it's unlikely to stop the bad guys, for a number of reasons. Some of the bad guys are human rights abusers in the Middle East, where human rights law has no effect at all, and in any event the licensors who are upset about it are unlikely to have standing to bring an action in front of a court. Perhaps closer to home, certainly in the US, such licenses could, under one interpretation, be voided completely by simply outsourcing. Okay, ICE can't be the licensee, but they tend to outsource their IT functions anyway; the IT guys can be the licensees. Boom, you've achieved literally nothing. But okay, suppose neither of those escapes is available: we have an actual malefactor, whom we all agree is bad, who is inside a jurisdiction, and who is subject to being sued. This is not impossible.
The Free Software Foundation has repeatedly and successfully gotten license abusers to come clean, famously Cisco over the Linksys firmware, and there are apparently dozens of others. Okay, so let's say we're at the point where we really could haul them in front of a judge and get the problem solved. What's the remedy? Fine, they stop using the open source component and go and pay for a closed one. At which point we've royally shot ourselves in the foot. The last thing we want to do is divert money out of the pockets of developers working for companies that develop and share software into the pockets of developers working for companies that don't. This completely fails to understand that the organization you're seeking to control isn't going to be controlled: if they can't cooperate with you, then they will compete with you. Some success. So the other part of the argument is that we all benefit from the work of people we profoundly disagree with, and in some cases even despise. This might be familiar to those who've studied political science, because it is of course exactly what goes on in representative politics. In a parliament or a congress, this is precisely the deal. In a civilized society, no one says (well, this is perhaps not entirely true), hey, we don't like you, so we want to make sure you don't get to participate: you don't get to work, you don't get to own, you don't get to vote, you don't get to do anything, you don't get to eat. So it's not as though there is a lack of precedent in civilized society for people who disagree profoundly, or even who despise each other, to nonetheless cooperate. And I'd suggest that this is an area we profit in. Indeed, it is pro-social, although one of the arguments is, hey, they're doing something anti-social, so we should exclude that in the license.
No: the ability to cooperate with people that we don't like, that we don't know, that aren't our family, is in fact the basis of civilized behavior. But, you say, I don't like those people. Okay, that's the next big button: I don't like that bad people will do bad things with software I contributed to. This is a broad social control problem, at least if you're honest about it. If you're not, if all you're saying is, I don't care that they're doing those things, I just don't want anything I worked on to be seen to be helping them, then you're admitting that you don't actually care about the things you claim to care about; you only care about your own reputation. That's the virtue-signaling half. Let's assume that's not the case. This then becomes a social control question: how does a society ensure that a variety of bad things don't happen? And laws and regulations are by no means the only way of doing it. They're not even the major way of doing it. Custom, particularly civility, the fact that we don't routinely beat each other in the streets, is a much, much larger piece of the puzzle, and part of that is remaining civil around people we don't like. Religion is a surprisingly important one. And although this might seem remote from what we do for a living, the Vatican recently published a specific call for ethics in the use of AI. Religious organizations are getting directly involved, and although I'm not a religious man and have all kinds of problems with what religion does and has done, I would argue that religions do have a direct part to play in helping to shape the norms of a society, hopefully mostly for the better. Professional ethics is hugely important. Doctors, lawyers, and a bunch of other professionals take on a responsibility where the customer has to trust them, and so you establish a professional obligation for the professional to be more loyal to the customer than to themselves.
This is not necessarily a solution to the problem, but it's one of the ways society solves the problem of "I don't like the fact that that guy over there is doing a bad thing": the profession establishes, effectively, a guild, and kicks out anyone who abuses the trust. Similarly labor unions; they're not quite the same thing, and the analysis is different, but again, there are other ways of establishing behavioral norms in a society than embedding everything in contracts. And although you might argue that it should be done as well, I hope I've already made clear that there are real problems with trying to embed human rights principles into contracts generally and into software licenses in particular. There's also a process abuse risk. We see this occasionally with codes of conduct. And I'll say up front that I'm actually perfectly happy with codes of conduct. The fact that the level of civility in the Linux kernel environment, for example, has gone up drastically in the last few years is a major achievement, and one that I think should be seen that way. But some of the same people who are pushing for ethical source are pushing for codes of conduct which seem to be used primarily to exclude people they don't like. That's problematic at the level of a project team, but I'd suggest that a code of conduct does make sense at that level, and the team just has to keep an eye on whether the code has been constructed in a way that facilitates abuse, and whether it is being abused. At the much broader level of software supply chains, which I'll get to in a moment, I'd suggest that risk can't be contained, so there it is again a harmful thing to do. The other thing I suspect might be going on in the "we want to stop ICE putting babies in cages" case is that, at least in the US, there's a significantly lower standard of proof in civil law than in criminal law.
So by embedding all of this woolly stuff into license terms, which is harmful for all the reasons I've just described and which will achieve essentially nothing for the reasons I've just described, what you will do is enable vexatious litigation. It'll allow someone to simply use the courts as a weapon: with hardly any basis, they can drag in the argument that what you're doing breaches human rights, which, as I've just pointed out, covers just about everybody. So if you want to hurt someone, you just haul them in front of a court, and either they step away, in which case the court takes money from them, or they engage, in which case the lawyers take money from them. It's a means of bullying people, and I suspect that's part of what's going on in the recent pressure for ethical licensing. So okay, look at the long supply chain. Maybe you construct the license in such a way as to escape the outsourcing risk I mentioned, since one of the ways the bad guys can get out of reach of your theoretical ethical license is to outsource. So you construct license terms that survive outsourcing relationships; I won't bore you with the details, but you can, at least in theory, do such a thing. So right, we work in a project team on some library. We have a code of conduct that's appropriate for use inside the team and is being used in a hopefully inclusive rather than exclusive way. However, the component is a dependency of a bunch of other things, possibly entire trees of dependencies, but ultimately of an application. The application is a monitoring system, a ticketing system, any variety of things, embedded somewhere in an infrastructure- or platform-as-a-service cloud. Amazon Web Services, Rackspace, all those guys use a bunch of open source software internally.
A random small-to-mid-size organization or startup produces a software-as-a-service offering, which they of course host in this environment and make available to their customers, who subscribe and whose employees then use it. But an employee uses it to do something that is unethical. And we have constructed our license terms, theoretically, to survive all of these indirect steps in the relationship. So now, for this solution to work, what has to happen is that at every one of these steps, but in particular at the two subscription steps, the IaaS/PaaS guys and the SaaS guys have to include the transitive closure of the ethical use limitations. So if there are 100,000 components and they have only 500 different licenses, and of course every one of these things is going to be specific, then you end up with an enormous list of use limits. So when you go to sign up for G Suite, Salesforce, or the random SaaS of the month: here are the 10,000 things you're not allowed to do. No one is going to do that. And so again, even if you solve the problem of the outsourcing loophole, what you do is destroy the economic driver for collaboration for anyone who lives in any of these chains; they have to immediately exclude what you're doing. It's the same as the problem with the viral GNU licenses in some cases, and the Affero licenses in some cases, but you're now talking about a much larger and much more complicated problem. So it can't ultimately solve the problem: it either doesn't work, or it does so much harm that it prevents use. Now we're getting into a sort of grab bag of ideas that I didn't quite get into sequence. Another way of seeing why this is ridiculous would be this kind of license agreement: I imagine some people in this room own a screwdriver. Would it make any sense at all for the purchase agreement for the screwdriver to include an obligation not to breach the Universal Declaration of Human Rights?
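The transitive-closure point can be made concrete with a small sketch. This is purely illustrative: the package names, dependency graph, and restriction strings are all invented, and real license metadata is of course far messier. The point it demonstrates is that the end-user agreement has to carry the union of every restriction attached to every component anywhere in the dependency tree:

```python
def transitive_restrictions(package, deps, restrictions):
    """Collect the union of use restrictions over a package and all
    of its transitive dependencies (iterative depth-first walk)."""
    seen = set()
    result = set()
    stack = [package]
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        result |= set(restrictions.get(pkg, ()))
        stack.extend(deps.get(pkg, ()))
    return result


# A toy SaaS application with a small (hypothetical) dependency tree.
deps = {
    "saas-app": ["web-framework", "ticketing-lib"],
    "web-framework": ["http-lib"],
    "ticketing-lib": ["http-lib", "crypto-lib"],
}

# Hypothetical ethical-use clauses attached to individual components.
restrictions = {
    "http-lib": ["no use by surveillance agencies"],
    "crypto-lib": ["no military use"],
    "ticketing-lib": ["no fossil-fuel industry use"],
}

# Every restriction anywhere in the tree surfaces in the app's terms.
for term in sorted(transitive_restrictions("saas-app", deps, restrictions)):
    print(term)
```

With three components carrying one clause each, the subscriber already faces three unrelated use limits; scale that to the hundreds of distinctly licensed components in a real stack and you get the unusable 10,000-item list described above.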
Hopefully it's really obvious that that makes no sense. I mentioned Coase's Penguin. This is a paper, Yochai Benkler's, from the early 2000s, noticing that open source communities were doing something that isn't correctly predicted by the preexisting models of collaborative production, either in the firm or in the marketplace. The firm model is: you grab a bunch of resources and you hire employees. The marketplace model is: you grab a bunch of resources to work on a thing and you contract with others to do some other part of what you're doing. And of course any real organization does some mixture of the two. The economic theory had been that these are the two ways everything gets done, and neither of them sensibly explains what's going on in free and open source communities. In those two older and better-studied approaches, division helps; it's really about competition in both cases. Inside firms it means that you simply exclude people you don't like, and you get a firm of people working with one worldview, the sort of bro culture we keep seeing in Silicon Valley. It has side effects, but certainly internally it's very efficient. In the marketplace, it promotes competition, which promotes more efficient and better products. So choosing a mechanism which tends to divide people is in fact valuable in those two cases. But in an open collaboration it is not. The last thing you want is for people who are collaborating across organizational boundaries, across geographic and cultural boundaries, to start drawing lines that say, no, you can't talk to each other. So the point I'm making here is simply that inserting ethical use limits is unusually harmful in free and open source licensing, in a way that it would not be in certain kinds of commercial licensing. A bit snarky perhaps, but a lot of this is puritanical, and by and large you should ignore puritans, because they don't deliver.
If you allow a project to get bogged down in virtue signalling, then what you do is take away its capacity to deliver. So again it's harmful, and not just to open source; this applies to everything. Finally, of course, it raises the question: suppose you're still hell-bent on doing this thing. What would it look like? How would you do it? I raise this because there is an ongoing attempt to change the open source definition maintained by the Open Source Initiative, to remove or modify the two clauses I showed you earlier. Part of the answer is that the GNU project is too hard to repeat. What Stallman did is absolutely amazing, and very few developers ever have done or ever will do what he did. He sat down and wrote a compiler, a shell, a text editor, and a build system himself, and got to the point where they were self-hosting on a commercial UNIX. That was the germ from which free systems evolved. But interestingly, because the intent of the Open Source Initiative was to make this method of producing and using software more attractive to corporations, in addition to removing the human-freedom framing they also removed the requirement, or rather allowed licenses not to have the requirement, for the transitive or viral nature of the license: if you take a work under the GPL and make a derived work, you must make it available under the same terms. That's not true for most of the other open source licenses. So interestingly, if you were to create an ethical source program, a project to create a sort of ethical desktop with a license that had, let's say, the UDHR embedded in it, you could in fact pick up thousands of existing packages under open source licenses and simply add restrictions. Most of the open source licenses permit that.
And this is really the answer to why the Open Source Initiative should not change its definition, for the same reason that the Open Source Initiative was a separate project from the Free Software Foundation: if someone wants to do this, the sensible way to do it is to start a new program. Because there is now this huge body of open source software under licenses that allow you to put further restrictions on derived works, you've got the starting point. I merely point out that it's interesting that, of course, you can't do that with free software: you cannot add restrictions. It seems interesting that the subset of software that is targeted specifically at human rights and freedoms is exactly the software that would not be compatible with this strategy for creating an ethical source program. So let me just close with a reminder of these two ideas, although they're very rarely actually stated this way. These are the two things that FOSS is about. Free software is about protecting the rights of users of software, so that developers don't gain domination or control over them. And open source is about sustaining the ability to cooperate across enormous differences: in time zone, in geography, in culture, in ethics, in commercial competition, the whole lot. Thank you.