Hi, welcome to We Won, Now What? This is being presented for DevConf.US 2020. I'm Ben Cotton. I'm here to offer some opinions to you for the next 45 minutes. So I'll start off with some boilerplate, as I usually do. Let's talk about it. I want to make this a thing that we can discuss. If you have constructive comments, you can at me on Twitter, or you can hang around for the Q&A afterwards. If you have non-constructive comments, you can keep them to yourself.

Now, usually when somebody gives a talk, they have some disclaimers, right? These are not my employer's opinions, these are not my project's opinions, I'm speaking for myself, blah, blah, blah. And that very much applies here. You'll see that I'm not wearing any branded apparel, and that's for a good reason. Just pretend like you don't know anything about me or what I do for a living. Cool. But I also have an unusual disclaimer for you: this might not even be my opinion. There are things in here where I'm not entirely sure I agree with what I'm saying, and you're more than welcome to disagree with them too. This talk is really about presenting some thoughts and ideas as a way to think about the future, not necessarily to prescribe a future. To that end, I have another unusual disclaimer, which is that this talk, I think, will ask more questions than it answers. There is a question mark in the title of the talk. That should be your first indication.

So, what's this all about? The idea for this talk came to me in the before times. This was October of 2019. I was in Raleigh, North Carolina for opensource.com's annual community moderator (now correspondent) summit, and we were talking with Red Hat's chief people officer, DeLisa Alexander, about the history of opensource.com. It was celebrating its 10th anniversary, and we were just talking about how far we've come in 10 years, because 10 years ago, people didn't really understand open source, especially in enterprises.
It was scary to them, and now open is the default. We won, isn't that great? So I asked her a question: where do you see us 10 years from now? What will this conversation be like then? What's the big difference? And my friends, I really wish that I had been taking notes or something, because she gave an answer that I thought was really insightful, and I have no idea what it is now. Sorry about that. But it did get me thinking over the course of the next few days: well, if we've won, what does that look like? Where do we go from here? Once you've won, what else is there to do?

So open is the default choice. Let me present some evidence that we've won, in case you don't agree. You might remember that in 2001, Steve Ballmer was at Microsoft and he said Linux is a cancer. And cancer is generally a thing people don't like. They don't think kindly of it. It's bad, right? Ballmer and others thought that the GPL was like a virus that infected other software, and that it would just destroy the way they viewed the world if it were allowed to continue. And then in 2015: Microsoft loves Linux. Now, you might not agree that Microsoft loves Linux. You might question their motivations. That's fair. Do they love Linux because they realize Linux can't be stopped? Maybe. Do they love Linux because they realize they can make a bunch of money being supportive of Linux? Probably. Maybe it's not a profound philosophical realization. Maybe it's strictly commercial. But either way, they're a huge contributor to Linux and open source in general now, and that's good. And I can tell you from when I worked at Microsoft a couple of years ago that they're sincere about it. They may not have the same motivations that others do, but they legitimately love Linux. This is not an embrace, extend, extinguish kind of situation.
When I worked on the Azure marketing team, the fact that Linux virtual machine usage was outpacing Windows was a cause for celebration. We were graded on that. It was an important part of what we were trying to do. Now, granted, Microsoft is a company of 150,000 people. I almost said country, which, I mean, it could be. Anyway, the love for Linux isn't evenly distributed across business units, or even across individuals within the units that do love it, but it's there and it's sincere.

Now, you may remember that in 2004, our friends at Canonical opened Ubuntu bug number one: "Microsoft has a majority market share." That was a bug that they needed to solve. In 2013, Mark Shuttleworth left a very lengthy comment, which you can't read, but it's just there for visual prettiness, basically saying: fine, this isn't what we need to focus on anymore. We're good. They closed it. Now, Microsoft still has a majority market share, but maybe we don't care as much. And for years, the year of Linux on the desktop was a big joke, and the year of the Linux desktop is here. It's called Android and Chrome OS. More people are using Linux today than we probably ever would have thought 10 years ago. They're just maybe not using a typical Linux distribution like we imagined.

And open standards are being embraced too. Jack Dorsey at Twitter was talking about putting together a team of engineers to develop an open standard for social media so we can decentralize it. Now, he got a lot of flak about this because, I mean, there are already open standards that are very Twitter-like that Twitter could just participate in. But his reasoning is that email, for as much as we hate it, has worked well-ish for the last few decades because it's built on open standards, and you don't need to be on Gmail to email another Gmail user, right? You can email whoever you want, which is both a blessing and a curse. But Jack Dorsey says, hey, open.
This is good for my platform, right? All right, so now what do we do, now that we've won? Well, unfortunately, we can't all retire to the beach, as much as I would really love to just sit there with a bottle of whiskey and a stack of good detective novels and a big umbrella, because sunburn. We can't do that yet. We're not there. But it got me thinking: does the fact that we've won explain some of the tumult that we've seen in open source lately? So I developed a hypothesis that we're not actually going to test, but I'm just going to put it out here. My hypothesis is that open source contributors agreed to an unspoken truce in order to defeat proprietary software, and now that we've won, we're able to resume our infighting.

So what am I talking about here? Remember back in the day when people would get into fierce battles over which text editor was better? And the right answer is: the one you're using is a good text editor, that's fine. But we spent a lot of energy fighting over this. We fought about which Linux distribution to use, fought about which desktop environment to run, which license to use. And none of this really matters, because people can use what they want. That's one of the great things about open source: there are lots of choices, and you can go participate in whatever community you want and help make it better. And we kind of stopped doing that after a while, maybe because we were focused on the outside, and now we're turning it back to the inside.

So what am I talking about here? Well, for one, maybe we're writing new licenses to solve old problems. It seems like more licenses are being submitted to the Open Source Initiative lately. This is the body that defines what open source is. They have the capital-letters Open Source Definition. And these licenses aren't just people sitting there going, I'm going to write a license because it's fun, which it is.
These are people getting actual lawyers to write licenses that people can actually use, and we're seeing an uptick in that. And maybe it's because they're saying: all right, now it's time to solve some other problems. One that was recently approved went through a very long process and got Bruce Perens to quit the Open Source Initiative because he thought it was heading in the wrong direction. That was not minor news. But one of the things that we'll come back to a little later is this observation that data and software are merging. We used to think of software as this thing that was just sort of contained and off to the side, and that's maybe not the case anymore.

One of the other things we're starting to see is typified by the Hippocratic License. If you're familiar with the Hippocratic oath, it's what doctors promise to do when they doctor, and it says: first, do no harm. If you can't help the patient, at least don't make them worse. That seems like a pretty good thing. And the idea behind this license and others like it is: if I put a lot of energy into making this software, I don't want it to be used by the bad guys. Now, you can see where the difficulty is there: who defines what the bad guys are? I think most people who make tools of any kind, whether they be automobiles or software or firearms or kitchen knives or any other tool, don't want them to be used for bad purposes. You're making this thing because you want to make the world better somehow. But it's really hard to define that in a way that people can agree on. And frankly, it's not open source. Now, that doesn't mean it's good or bad. It just means that because it restricts fields of endeavor, it cannot be considered open source.

Another problem that we've started to run into is the "oh no, this isn't a business model" problem. It turns out you can't just say, this software is open source, and make a bunch of money off of it.
Now, Red Hat has been extremely successful in producing and selling support for open source software, to the tune of two-plus billion dollars a year, and then being the largest software company acquisition ever. That's good. Frankly, not a lot of companies have had that level of success, and I'm not here to say whether Red Hat has been good or lucky or both, but there's probably some of each. But it turns out that it takes more than just being open to make money. And it turns out that if Amazon Web Services wants to take the code you wrote and make an as-a-service offering based on it, they're going to make a whole boatload of money, and the license you chose doesn't obligate them to give any of that money to you. Now, you can make the case that ethically they are strongly encouraged to do that, that they have an obligation there. You could certainly say, from a business perspective, that if they don't want to have to keep developing it themselves, it's in their interest to make sure the upstream project is well funded and viable. But there's no real obligation, no legal obligation. And so we had a few projects that just wrote their own licenses, which my interpretation of is basically: you can use it, just don't make an Amazon Web Services out of it. And people were rightly like, okay, but that's not open source. You can do whatever you want, but don't call it open source. MongoDB, for example, had to be removed from the Fedora repos because the Server Side Public License is not open source, so Fedora can't ship it.

So we've won, but that's not enough, because there are still problems for us to solve. But then maybe the question is actually: have we really won? What does it mean to win? What does winning look like? The state of technology these days is a little bit of a garbage fire. We talk about the algorithm like it's going to save us.
Like we can take this math and shove it onto some thinking rocks and save so much human effort and be more reliable, and not have to subject underpaid contractors to viewing all of the terrible things that Facebook makes them view to decide whether something should be removed from the platform. And we can do all these things and make everything so much better and more automated, and driverless cars. Yeah. The algorithm won't save us, because it turns out the algorithm is only as good as the people who make it and the data that goes into it.

So when there was an algorithm that helped determine levels of healthcare and what kinds of treatments people receive, it turned out Black patients got less from that algorithm, because historically in the US there have been some racial injustices, and perhaps there are some biases in the system that we're perpetuating with it. And Amazon said: you know what, it's really a problem for us that we don't have the diversity in our company that represents the population at large. Studies have shown that more diverse companies produce better results. So they had a secret AI recruiting tool that they were going to use to help screen resumes and get rid of bias, except it showed bias against women. Why? Because in technology we have largely had a problem with bias against women over the decades, and so the algorithm just learned from what we gave it.

And then there's face recognition software. Now, let's set aside for a moment the argument that facial recognition software not working is actually a good thing. Facial recognition is used for a lot of bad purposes by a lot of regimes around the world. It's also kind of useful. You pull out your iPhone, you hold it up like you were going to anyway, you put it in front of your face, and it just says: oh, there you are, and it unlocks for you. That's kind of nice.
But it turns out that facial recognition software works really well for white men and less well for others. So let's talk about faces a little bit. Recently on Twitter, I saw a project that would take a pixelated version of a face and try to depixelate it with AI. Former US president Barack Obama is famously a Black man, and when his picture was fed through that algorithm, it turned into some vaguely white dude. But it's not just him. Somebody posted a picture of their wife, who is of Asian descent, and she turned into a vaguely white person. And so did actress Lucy Liu. Huh, this seems problematic. You know what's even more problematic? When a facial recognition algorithm says: yep, that's the guy, that's the one who did the crime. And it turns out, no, it very much is not the one who did the crime. Reporters did some reporting, as reporters are wont to do, and they found out that in Detroit, the facial recognition error rate was 96%. 96.

Now, I want to take a moment for a detour, to make this a little bit personal. My undergraduate degree is in meteorology, and weather forecasting is notoriously difficult. Meteorologists have a reputation for being wrong all the time, and people say: you can be wrong 50% of the time and keep your job? That's awful. 50% of the time. A 96% failure rate. 96. You could just guess and be better than that.

So garbage in, garbage out became garbage in, garbage fire, but not necessarily on purpose. I'm not here to say that the people who wrote those algorithms or provided the training data sets are doing anything with malicious intent. I don't believe that's the case. I think they don't think about these things because they don't have to. That's what we call privilege. So maybe the idea is that we just need to be more aware of what we're doing and think about the ramifications of it, because it turns out there are ramifications.
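The "garbage in, garbage out" point is easy to make concrete. Here is a toy sketch, with entirely made-up data and names of my own invention, not any real system, of how a model that simply scores resume keywords against historical hiring decisions will happily reproduce whatever bias those decisions contained, without anyone programming the bias in explicitly:

```python
# Toy illustration of learning bias from historical data.
# The records below are hypothetical, invented for this example.
from collections import Counter

# (keyword appearing in a resume, was the candidate hired?)
history = [
    ("chess club", True), ("chess club", True),
    ("women's chess club", False), ("women's chess club", False),
    ("women's chess club", True),
]

# "Train" by computing each keyword's historical hire rate.
totals, hires = Counter(), Counter()
for keyword, hired in history:
    totals[keyword] += 1
    hires[keyword] += hired

def score(keyword):
    """Score a keyword by how often candidates with it were hired."""
    return hires[keyword] / totals[keyword]

# The model now penalizes the word "women's" purely because past
# decisions did -- no rule about gender was ever written down.
print(score("chess club"))          # -> 1.0
print(score("women's chess club"))  # -> roughly 0.33
```

Replace the keyword counts with a neural network and the five records with millions, and the failure mode is the same: the model faithfully learns the past, biases included.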
So maybe what we need to do is not worry about winning, but just get better every day, right? Like, wake up today and say: I'm going to be slightly less bad than I was before. Woo, progress. I want to go back to Pam's comment about the OSI mailing list and looking at the new crop of licenses, because I think it turns out that while we were focused on software, the data was actually what's important. And this is the part where I'm going to get a little spicy. This is the number one slide where I'm not sure I agree with myself, but here it is: free software is less important than reliable and protected data. Do you agree with that? I don't know if I do. Maybe a more agreeable way to say it is that free software is a necessary but insufficient condition, because all the things I've talked about with the algorithm and so on, these are mostly done with proprietary algorithms or proprietary data sets, but being open source wouldn't necessarily fix that. It's not a magic balm that fixes everything. Facebook could open source their entire platform. They could have been open source from day one, and the data would still be the problem, not the software.

So let's talk about some of the ways that data is the problem these days. For example, you have police databases that have all kinds of information about people, and I think most people would agree that there are at least certain cases where it is desirable for law enforcement to have access to information. I think we could spend a lot of time debating where the boundary is and under what circumstances that's appropriate. That's another talk. But I think most people would agree that police officers stalking exes or crushes or romantic rivals or people who beat them up in high school or whatever, that is a misuse of it. And there's a word for it, because it happens often enough. Here's something that's even more serious, or perhaps serious in a different way.
There was an app that was designed to help people in abusive relationships get the resources they need to get to a safe place, to escape the relationship, or do whatever it is they need to do. This is very sensitive information, because if an abuser knows that the person they're abusing is trying to get help, that can make the abuse worse. It puts lives in danger. The app had a little bit of a data breach, and a whole bunch of information got out. This is life-or-death stuff, friends. This is not some hypothetical situation. This is putting real people's real lives at real risk.

So apart from protecting data, there are some other things we could be doing in the future, like fixing our people problems. "Open source is growing, but not how it should." That's the title of a summary of the 2019 Stack Overflow Developer Survey. I'm going to show you a few charts from the 2020 version, and we'll think a little about what this means for ourselves and for our community. Race and ethnicity: almost 69% of respondents said they were white. That doesn't sound like it represents the population very well. 4.5% said they were Black. That's certainly less than the US population percentage-wise, and well less than the global population. Now, this seems bad, yeah? How about gender? Roughly 90% of respondents were men. I am very sure that men do not make up 90% of the world population. Physical disabilities: I really don't like this graph, because it doesn't have a "no physical disabilities" option to give it scale, but approximately 1.1% of respondents said that they are blind or have difficulty seeing. And it turns out that can be a problem, because when you design your fancy new webpage and it's got, well, not flashing anymore, but JavaScript-y stuff doing things everywhere, boop, boop, boop, screen readers have a hard time with that.
If you have a bunch of emoji in your Twitter handle, screen readers are going to sit there and read out every emoji. And if I were blind, I would probably want to show up at your door and yell at you for having 20 emoji, so that every time a tweet from you is read, I had to sit there and listen to each and every one. It's things like not including alt text on images, too. These are things where we're excluding people for no good reason. And I'll admit, I'm very bad about adding alt text to images, especially on social media. I just don't think about it a lot of the time, and that's because I personally don't have to. But that's bad, and I should fix that, and you should fix that. And if we had more people with physical disabilities represented in our communities, we would think about that more.

And age. I am 37 years old, and I am apparently ancient. Slightly more than half of the respondents are 30 or younger. Old people have generally made a lot more mistakes in their careers than young people. When you're younger, you can make many mistakes more quickly, but when you're older, you get to learn how to make bigger mistakes. And this isn't to say that it's better to be old or better to be young. There are advantages to both, and frankly, if you're young, you're going to end up old, hopefully. But what kinds of mistakes are we making that we could avoid if we had people who had made mistakes like that before?

So we have people problems. We don't have enough contributors. Most projects, by and large, would always like to have more people helping out in some fashion, because there's always more work to do than there is time to do it. And as we've seen, our communities are not representative enough. This is a problem, full stop.
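Coming back to the alt text point: this is one accessibility gap that is easy to check for mechanically. Here is a minimal sketch using only Python's standard library; the class name and the sample HTML are mine, invented for illustration, not from any real accessibility tool:

```python
# Find <img> tags whose alt attribute is missing or empty --
# exactly the images a screen reader can only announce as "image".
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> with no usable alt text."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Treat both a missing attribute and alt="" as a problem.
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "<no src>"))

sample_html = """
<img src="chart.png">
<img src="logo.png" alt="Project logo">
<img src="photo.jpg" alt="">
"""

checker = MissingAltChecker()
checker.feed(sample_html)
print(checker.missing)  # -> ['chart.png', 'photo.jpg']
```

A check like this could run in CI against generated documentation or a static site, which turns "I just don't think about it" into something the build reminds you about. (Note that purely decorative images are a legitimate use of an empty `alt`; a real tool would need a way to mark those.)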
Heartbleed taught us that there are a lot of projects out there that power trillions of dollars of economic transactions every year but don't really get the financial support they need. They've got one or two people making pretty low-end wages for technology, basically keeping the entire world's economy running. That seems like a problem.

And there's not enough user education. What I mean here isn't just training manuals and stuff, and it very much is not "don't use this proprietary software, use the open source or free software version." I mean, yes, but also: nobody cares. Why? It doesn't matter that you have a strong philosophical argument in favor of free software if you can't express it in a way that the person you're talking to is going to care about. There are so many things in this world for people to care about, as we have learned in the past six months. There are so many things, and nobody can care about all of them. If something just works for people and they can just do it, then they will. So we have to be able to explain to people, in a way that matters to them, why it's important to have software where they, or somebody they trust, can see what it's doing, and why the data collection and data retention and all of that matters.

So where do we go from here? I don't have answers. I've given you some ideas of things we should think about, things that maybe we can improve upon, but there's a lot between here and there. And so I invite you to spend some time thinking about this. Ten years from now, what do you want this talk to be about? What do you hope we will have fixed? What will you improve? What definition of winning can we set up, so we can meet that goal and then move on to the next, and iteratively improve ourselves as people, our society, and our technology? So with that, I thank you for your rapt attention. Nobody in the audience even blinked. That was great.
If you have constructive comments, there's my Twitter handle. Again, non-constructive comments you are cordially invited to keep to yourself, and I will be available in the Q&A for further questions and discussion. Thank you.

Hi, everybody. Questions, comments? When I gave this talk at DevConf.CZ back in January, I very cleverly filled the entire 25 minutes with talk, so I didn't have time for feedback, which was really great, but now I've left myself open to you. Sally asked where I'm broadcasting from: I'm in Lafayette, Indiana. Yes, Jason, I did put on a different shirt. I thought about getting the black t-shirt and the sport coat on again, but I was like, eh, it's kind of warm. I believe the shirt is cotton, as am I.

Oh, Jen asked if I can talk about my vision for the next 10 years. You know, maybe I should have thought about that a little. I'd like to see a world where we're using our technology to make life better for people. I think a lot of times we think we're doing that, but we're very narrow in what we think about, and this kind of goes back to the privilege thing I was talking about in the talk: we're not very good at thinking about how our work affects people who aren't like us. If we can change one thing in the next 10 years, I hope we as an industry, and as a species, can get better at that.

Langdon asked: what will make that happen? Boy, I tell you what, if I knew the answer to that, you would be paying for my talk instead of hearing it for free. That's a really good question. I think in some ways bad things happening can help make good things happen. At some point the negative effects become too loud and too hard to avoid, right? We can no longer pretend that we don't know the bad effects of what we're doing, or what other people around us are doing. And it really stinks for the people who have to face the negatives in order for the positives to happen, but I think that could be a good driver.
The question is: since open source has been around for a while, do I see signs of what I said about inviting diversity? I feel like corporate backers of open source have done a really good job of talking about diversity for a while. I don't necessarily see communities, by and large, doing a good job. I think there's more acceptance that we need to be more diverse and inclusive, but not necessarily a lot of large-scale improvement. There are certainly programs doing good work. Outreachy is one that comes to mind that I'm at least partially familiar with. But there's still a long way to go, very clearly.

Do I find codes of conduct helpful, and are communities using them more diverse? I think they're helpful. A code of conduct, at its core, defines the bounds of acceptable behavior in a community. If you were in Denise's keynote this morning, she talked about "we don't do that here." I think a good code of conduct says what "we don't do that here" means for that community and how the community enforces those norms. Are communities using them more diverse? I don't know. Anecdotally, I have seen people refuse to participate in communities or conferences or other events that don't have a code of conduct, but I don't know that there's been any real study of relative diversity, on any axis, between communities with and without a code of conduct. And that's partly because codes of conduct can vary pretty wildly, so you'd have to start with a taxonomy of codes of conduct.

What program did I use to record? I recorded with a handheld video camera on a tripod, and I used Kdenlive to edit. That's how I was able to composite the video and the slides: I exported the slides to PNG files, imported them, and then was able to do the sometimes-slide-big, sometimes-slide-small video and all of that. I also used Audacity to do some noise reduction on the audio.
I tried to be as noiseless as possible. I had the windows closed and the air conditioning off, so it got pretty warm in there while I was recording. There was just traffic noise and stuff, so I tried to edit that out.

Another question: do I know of any work going on to bring down the huge error rates, like the 96% failure rate for the cameras in Detroit? I'm not specifically aware of any, but oh my gosh, I hope somebody is working very hard on that, because we can't claim to have a justice system when we're misidentifying people 96% of the time. And like I said, it'd be much better to just scrap that entirely. The question is, what are the incentives? I think it was Jen who made a comment about Microsoft embracing open source in the interest of money. I think for a lot of the vendors of this kind of product, as long as somebody's willing to write the check for it, they don't necessarily need to worry about how good it is. So I think the pressure really needs to come from the public onto their officials, elected or otherwise, to really step that up. I don't think the industry itself has much motivation to fix it beyond the ethical work of certain individuals.

Langdon asks: have I ever heard of a Hippocratic Oath for software devs? Apart from the Hippocratic License, no. There are a lot of organizations that have a code of ethics. I know the Association for Computing Machinery, the ACM, has one; the League of Professional System Administrators, LOPSA, has one. There are a lot of organizations that have something like that. I think part of the problem is that even when they exist, we don't necessarily know how to apply them. There are some good books that have been published in the last few years about including ethics as an active part of the curriculum, whether in coding boot camps or computer science programs or things like that.
You can get four years, or even a PhD, in computer science and never have taken an ethics class. So I think a lot of times we just lack the framework, and people who have come into technology from other fields, or just sort of started dabbling and fell into it by accident, don't have the theoretical framework to apply some of these ethical principles. Yes, I am also not sure why you can get out of high school without an ethics class, but nobody has asked me to design state graduation requirements yet.

This is more of an awkward silence than a pregnant pause, I think. One of the interesting things about the virtual conference idea, and having the chat here, is that usually at conferences during the Q&A you get the comment disguised as a question, and here that's okay, because the comments can happen off in the chat and the presenter can let the conversation happen, but people aren't going to get up and monologue during the Q&A. Sally said, "awesome talk, Ben," and I will point out that that is not a question. But thank you, Sally.

I think I have five-ish minutes left, so if there are other questions in the next few minutes, feel free to pop up. Otherwise, I had my Twitter handle; I'll drop that in the chat in a minute, or I'll be around through most of the rest of the day, so you can find me in the Hopin chat. It seems like we've exhausted the questions, so I do want to thank everyone for coming and for having a great conversation in the chat, both during the talk and in the Q&A. It's been really fun to watch the feedback in real time; you don't usually get that degree of richness in an in-person presentation. So I think I will go ahead and shut off the video, and I will hopefully see all of you around for the rest of the day. Thank you.