Good morning, everybody. How's everybody doing? All right, OK. You people are hungover, that's fine. I want to start by thanking the DevOps Days organizers for trusting me with the first keynote, because talking about ethics at 9 o'clock in the morning is a pretty gutsy move for a conference, right? So hopefully we don't screw it up too much. I don't have any answers in this talk, unfortunately. I've got a bunch of questions. And the whole point of this is really to start a conversation, to start a dialogue, about ethics and our responsibility as engineers for the technology that we create and the things that we enable through the things that we create. For the seasoned DevOps Days attendees, you're probably going to sit back and think, damn, this is the longest open space pitch I've ever seen in my life. That's pretty much what it is. So with that, I'll start with a story. I got a phone call from a friend. And we're all in tech, so we all have that friend who thinks that because you're in technology, you can do anything, right? So I get the phone call, and I'm like, OK, something's wrong with her printer, I'm sure. Jeff? What? I need to know how to hack an iCloud account, OK? Well, it's a better problem than a printer. What's going on? I just need to hack an iCloud account. Can you help me? Can you teach me? Can I teach you? Yeah, all right, sure, I can teach you. Get a pad of paper and a pencil, write this down. You're going to need to find one of your friends, one of the fastest typists, but make sure they have small hands, because both of you are going to need to put your hands on the keyboard at the same time, because we're going to be doing some serious NCIS shit here, right? There's going to be stuff flying all over the screen and everything. She's not amused at all. Jeff, seriously, I need to know how to hack an iCloud account. I'm like, what's wrong? What's going on? Talk to me.
She said, my 16-year-old nephew left for the grocery store 20 hours ago. We haven't seen him. We haven't heard from him. He's not answering his phone. We want to get into his iCloud account so that we can use the Find My iPhone feature to see where he's at. So I start thinking about this, and I say, you know, with two-factor authentication, when you don't have another one of his devices, hacking an iCloud account is probably going to be a pretty long endeavor. Maybe we can go another route. Maybe you can contact the telecom company and have them ping his phone. She says, OK, that's probably a better path. Now, something you guys don't know about me: I watch a lot of Law & Order. Like, a lot. And while it's not technically legal advice, I know enough from Law & Order to know that the telecom company is not going to do it without a court order. You're not going to get the court order until you've got a missing persons report. And you're not going to get a missing persons report until the person's been missing for 24 hours. And it's a kid, right? He's a 16-year-old kid. He's probably run off with friends, phone died, something to that effect. But everyone's really freaked out. So they start the process, and true to form, the cops drag their feet for 24 hours. The telecom company drags its feet for another 12 hours. But then finally, after 36 hours, they ping the phone. Unfortunately, they were two hours too late. A jogger had found his car. He had had an accident, fallen into a ravine, and was trapped inside. So you have to think: could it have been different if they had gotten to him faster? How long was he trapped in the car? We don't know. And those are questions the family really doesn't want to ask, right? Because you just don't want to know that the company you're sending $130 for shitty service was complicit in your child's death. But at the same time, we have to be careful about how we look at these things.
Because it's so easy for us to look from the perspective of the consequence. We do this in a lot of life, right? We take an action and we evaluate it based on the consequences that came out of it. In this case it was a terrible, terrible end, a terrible tragedy, but this could just as easily have been a jealous spouse calling to find a loved one to harm them. Or a stalker, one of your garden-variety types who's a little technically savvy. So how do companies go about creating rules that don't get beaten up when the consequences don't go our way? And how do we evaluate right and wrong when we can only evaluate it against the outcome of a particular action? It's becoming more and more important with all of the technology that's coming out, all of the interesting things it enables us to do, but all of the terrible accidents that can happen too. My favorite was when the Amazon Echo recorded a family's conversation and then sent it to a random person in their contact list. I love my Amazon Echo. Love it. I got my first one at a DevOps Days, actually. But that's a scary thought, right? Because you're probably sitting at home, talking trash, and the next thing you know, your boss has an email of your conversation. Or think about the euthanasia expert who just unveiled his suicide machine. While it sounds great when you're thinking about someone with a terminal illness who wants to die with dignity, you can imagine some of the negative ramifications that could come from that. So how do we evaluate? How do we keep a ledger of what's good, what's bad, and who's really the arbiter of those decisions? So, on that depressing note: good morning. My name is Jeff Smith. I'm the director of production operations at a company called Centro. Centro is a digital media software company. We're in the ad tech space. Don't worry, the irony of an ad tech guy talking to you about ethics is not lost on me. We'll talk a little bit about that as well.
Centro's also one of the scholarship sponsors of the conference. I think we've got some Centro people here too, right? Shout it out. Okay. Back me up, guys. Yeah, so Centro is a digital media company in Chicago. We help people manage their ad campaigns. It's a great company, great culture. Like everyone else, we're hiring. So if you're interested in making a career change, feel free to reach out to me. I'll be here both days of the conference. I'm also working on a book. It's tentatively called Real World DevOps. The title is probably gonna change 15 more times. After I get Emily Freeman's book, I might cancel the project altogether. Like, well, she pretty much nailed it, so I guess I'll just bow out of this conversation. So, another story. How many people have gotten on the Get Rid of Facebook campaign? Anybody? Not as many as I thought, but still, that's hopeful. So one day Facebook has one of their many breaches, or data leaks, or whatever. I don't even remember which one it was, but I was finally like, I'm done. I'm tired of it. I'm leaving Facebook. So I'm having lunch with a friend and she's like, well, you know, I sent you this thing and you didn't respond. I'm like, oh, you know, 'cause you gotta get a little hoity-toity about it, I'm off Facebook. I don't do Facebook anymore. Why? I'm like, don't you see what they're doing? Don't you hear about all the things that are happening, all the terrible pieces of data they're leaking, all the ethical violations they're committing? She says, I like cat photos. I like news feeds. I know that they're taking my data, but it's an even exchange for me. I get all of this fun stuff and it doesn't cost me anything. Is her perspective any different than mine? Is it any more wrong or more right? Probably not. She just has a different value proposition for it. But when you think about all of the things that Facebook has done, it really enrages me, right? Facebook was doing emotional studies.
They were manipulating people's news feeds to see if they could alter their emotions. They were basically running social behavior experiments on you without your consent, without your permission. And that violates all of the ethical guidelines we have around human test subjects. They ignored staff warnings about the sketchy Cambridge Analytica work. If you're at Facebook, what do you do if you bring up this bad behavior to your leadership and they don't do anything about it? What's your recourse? Where do you go? But then you have to weigh that against the good that Facebook does. Over a billion dollars for causes has been raised on Facebook. And that's great in aggregate, right? If someone asked you a question like, hey man, I will donate $20,000 to cancer research if you tell me what your favorite fruit snack flavor is? Okay, yeah, well, that's just great. Cool, I'll give you that bit of information. Is it worth the trade-off? I don't know. But a billion dollars for causes is a lot for something where you're not actually physically giving anything up, even though emotionally you are. And I get a lot of benefit out of it too, on a micro level. My family, we're the only ones in Chicago. The rest of my family is in New York and Georgia. So we get to share moments with them on Facebook. My daughter got a hamster this summer when she was with my mother. That's all she could talk about. I'm gonna get a hamster, I'm gonna get a hamster, I'm gonna get a little hamster. And she was so excited. My mom so badly wanted to see that moment when we got it. So we did it on Facebook, and she was able to see that. And that's huge, that's powerful. But again, how do we add up the ledger? How do we know that these things that are happening are worth the cost we're paying? And the other interesting thing is, we're always thinking about this from the perspective of a Western philosophy, right? But the internet is global.
What do we do when we have an Eastern set of philosophies? How do we take their viewpoints into account? What happens when they clash? Urban and rural. We see that here in America. There's a huge disparity between what urban people feel is right and wrong and what rural people feel is right and wrong. So how do we balance that? How do we figure it out? Is it ethical, is it moral, is it fair? So I started to think about this, and a group of us were actually having a conversation about it: four of us, organizers of DevOps Days, got together and said, what can we do about ethics? What is it that we should tackle? When we started reading about it, we were like, man, this is complicated. You wouldn't believe it, but people have been studying this for 1,300 years. It's weird. I thought we were gonna be able to apply an algorithm to it and just be done with it. But then we started focusing on this idea of consent. And I started noodling on that to figure out, okay, maybe that's the one thing we might be able to crack. And in my world, you can't talk about consent without talking about James T. Kirk. Weird segue, I know, just stick with me for a second. So I'm a huge Star Trek fan. There's an episode called A Taste of Armageddon. In this episode, the Enterprise is dispatched to a planet to open a trade negotiation, right? But the planet has been at war for 500 years. 500 years they've been fighting each other. It's probably a wasteland. But still, you know, the Federation, like America, they need to get that dilithium crystal, which is basically space oil. So they show up at the solar system, and when they get there, the solar system has a beacon out there that says: do not enter, enter at your own risk, no visitors wanted. But this is Kirk and the Enterprise. They're like, fuck that, we're going anyways. So they get there and they're welcomed. Hey guys, how you doing? Good to see you. Yeah, come on down to the planet, visit.
So they get down there and it's a paradise. Everybody's looking around like, man, the lawns are tight, everything's clean. This doesn't look like a planet that's been at war for 500 years. What the hell is going on? So they start to ask about it, and Kirk finds out that the war is actually computer-simulated. Instead of actual attacks, the two sides have agreed: we're never gonna have peace, so let's try to make sure our cultures survive. So they both built networked computers that simulate their attacks. When a casualty is recorded, the computer spits out who died, and that person is commanded to report to a disintegration chamber to be killed, to record the death. Yikes. But the interesting thing is that everyone on the planet consents to this. They agree. They say, you know what, it's a lot better than having dinner and then having your roof cave in on you because some bomb attack happened. At least now I get a sticky note that says, hey man, you just died, wrap shit up with your family and then report to the disintegration chamber in 24 hours. It's weird, but it works for them compared to the alternative, where they had the death and the destruction of their culture. How many people think that this is a right, a moral thing, in their space? Show of hands. How many people... okay, a few. So I guess the rest of you think it's immoral. Oh, it's terrible, right? It's a tough line. So, the original series really didn't do the whole Prime Directive thing that much. So Kirk goes in there and starts kicking ass like Kirk does, right? He's gonna go in there and blow up the disintegration chambers, because this is wrong. Also, there was a woman involved that he liked who had been recorded as a casualty. Classic Kirk. But if everyone has consented to it, why would Kirk impose his moral philosophy on these people? And that's something that we have to think about in technology too.
As we create technology, a lot of that technology is inherently neutral. There's probably some that's pretty bad, right? But it's the application of the thing. So how do we apply our moral code to a group of people who might feel differently? Is it legal? Is it ethical? Is it moral? I don't know. We've gotta get to this crossroads where we figure out how we deal with it. How many of you have ever encountered a situation at work that you thought was a little shady, but weren't 100% sure what to do about it? Man, some of you guys are real lucky. Don't leave, don't leave. There's a lot of stuff, though, that's just a little dicey, and you just don't know where to go with it. You don't know what to do with it, and you don't even know if maybe it's just you. Maybe it's just me acting weird, right? So one thing that always comes to mind is end user license agreements. Everyone has seen those, right? And how many of you have actually read them? Yeah, this guy. I need to see you in an open space, sir. There's too many of them. I don't read them. In fact, the whole GDPR thing made me think about it again, where now you get prompted about cookies on every site you go to. I don't read that shit anymore. Yeah, yeah, yeah, yeah, accept, accept. Firstborn, yeah, whatever. Give me the news. I need the news. So it has traditionally been accepted that when we agree, when we click that button, we're consenting to all the terrible things that they're gonna do. But is that consent actually informed? And does that matter? Of course it matters. It matters in all types of areas of our society, right? People can't consent when they're drunk. Children can't consent to a lot of things, because they don't understand the implications of what it is they're consenting to. So informed consent is a thing. So how do we think about that with technology as well?
So, as I said, I work in ad tech, and our CEO, Shawn Riegsecker (oh, I finally pronounced that right), is a very thoughtful guy. We were having a conversation about ethics and things, and he had this quote, which I absolutely had to share. He says: government and industries have been adopting a very libertarian, Randian viewpoint, pretty much stating that it's the consumer's responsibility to be smarter and know what they're accepting relative to policies and terms and conditions. This is ridiculous thinking. It's ludicrous to think that, A, most consumers are technically literate enough to understand what any of this technical jargon means, and B, that for every site they visit or app they install, they're gonna take the time to read 10 pages of legalese. And that's dead on, in my opinion. Every time I install an app, I wanna scan real quick just to make sure they're not inviting themselves to my house, but as long as they don't ask for a spare bedroom, I usually click okay. In line with this, I feel we need to make it easy for the customer, in fifth-grade English, to understand what they're accepting, with a small, well-designed screen versus 10 pages of legalese. And this stuff works. If you've ever installed an app from the Android app store (I don't know if they still do it, it's been a while since I've been on there), you get a list of all this terrible shit that this app is gonna be doing, and you're like, Jesus, it's a note-taking app. Why does it need access to all of this other stuff? And you say, no, I'll find another app. So the idea works, because we're using it and evaluating things like that today. So maybe consent is the lever that we need to toggle. Take the Nest, for example. When the Nest had a microphone built into it, there were tons of people who knew about that, right? A microphone doesn't just show up in a device.
And it's not the fact that the microphone was there; it's that people didn't know about it. We put microphones in our houses all the time, and we've accepted that, we've consented to that. But the idea that this is no different just because they didn't ask is ludicrous. So, going back to Kirk and this whole manifest destiny thing: there was one small piece of information I left out, and since most of you thought it was immoral already, it'll probably continue to be immoral. What I left out was that when the Enterprise entered orbit and disregarded that message, they entered the conflict. They became legal combatants. And when they arrived, there was an attack, and the simulated attack recorded the Enterprise as destroyed and all of its crew as casualties. They needed to report to a disintegration chamber within 24 hours. They consented, right? We gave them the warning. They ignored the warning and they showed up. Was it informed consent? Does that matter? I don't know. But when that part happens, it sort of changes your view on whether this is okay or not, because consent has to be informed. And when we talk about this, it's easy to keep pushing the responsibility up the chain. As an engineer, you know what this code is doing; you just wrote it. But you're gonna push that off to the product manager, because that's what the product manager said to do. The product manager is gonna push it up the tree, because that's what the strategy said to do. And then the strategist is gonna push it up to the C-level, and one day there's gonna be a breach, and everyone's gonna be like, I have no idea how this happened. The code just showed up. I clicked a button and it generated a bunch of privacy-violating shit. I thought it was scaffolding. Chances are it probably was, right? But at some point we have to ask: when do I take responsibility? When do I insert myself into the process? But then, even if you did do that, what do you do? Where do you go?
Your bosses know that you're violating people's privacy. It's what they asked you to do, in not-so-certain terms. So with DevOps, a lot of what we do is look at other industries. How do other places handle it? What are other industries doing? Well, the answer is not great, for me: licensing. That's where we always come back to, right? The bar association, the medical association, the Federal Aviation Administration. These are organizations that basically say, you are licensed to do what you do. And it works for them, because they have some sort of body to go to. They have some concrete set of guidelines to follow that says, this is what we should be doing. But in technology, that scares me, because I think technology is an escape for a lot of underserved people, and licensing and regulation have often served as artificial hurdles to getting into something, right? Women used to dominate programming as an industry. Then what happened? Well, now it's a professional thing. You've got to have a degree. That's a barrier for women. Now we're struggling to get women back into a field that they owned not that long ago. So licensing isn't a great solution, but you have to recognize the appeal of it. These are the licensed professions in Illinois. So you're telling me my barber has to be licensed to make sure he doesn't screw up my fade, but this guy can program a 747's landing algorithm without one? It's kind of scary, right? If you were to ask the populace, people that aren't in technology, they'd probably be like, that sounds kind of crazy, man. Every time I get in an elevator I think, oh, who programmed this thing? And were they licensed? A lot of these things start to intersect, right? Because if you look at the medical profession, like we said before, you have this possibility of losing your license. If you do something wrong, they can prevent you from doing it again.
But not only that, there's a body, there's a document that says these are the things that are right, these are the things that are wrong, here are the ethical hazards. And there's a separate body that adjudicates that. So we said, well, what can we do that's similar to that, but doesn't involve the whole licensing piece? Because the other thing that we've discovered, especially in advertising, is that self-regulation can be a little difficult if it doesn't have teeth. In advertising there's this group called the IAB. They're so powerful I can't even remember what it stands for. But basically they put out guidelines for what's right and what's wrong, and if you violate them, they say, no, and that's your lashing. So it's gotta have teeth. How do we give it teeth without having an actual licensing body? I don't know. The only thing I could come up with: you're a developer, an ops person, you see something shady going on, you say, hmm, something suspect is going on. So what if we had a separate body, sort of like the IAB, whose job was to both amplify your complaints and do a little public shaming, right? You go to this ethics body and you say, hey, I have a situation that seems suspect, wondering what I should do about it. The ethics body goes to the company and says, hey, I heard you guys are on some bullshit. The company's like, sorry, our fault, we'll fix it. Right? Could happen. Probably not. But then the problem's solved, right? We'd have this ethical body that we've all agreed on and empowered, and they're approaching these companies, and because of the way the ethical body could be structured, it could be large enough that it wielded some sort of market influence. And if the companies don't respond, if they don't fix it, then we go the public shaming route, right? We go to popular sites like LiveJournal and GeoCities and put it out there. All 50 users will know that you guys are doing some unethical stuff. Will it work? I don't know.
I don't know, but it's the only thing I can think of that we can do ourselves that these companies might respond to. For all the ill that Facebook is doing, they are trying to respond in some way to all of the negative pressure. So maybe we can try to leverage that somehow, some way. I don't know if it works, but I would love, love, love to talk about it more in an open space. What are some of your ideas? What are you guys thinking? Is this something that people even care about? Is it something that only I worry about? I think consent is a great starting point, but as technology evolves, I get worried about things like AI. Who's providing the AI's data? Are we just programmatically encoding our biases? Women, you're cold in the office. Why? Because they only studied men. That's why you're cold in the office. So how do we make sure that these things are being represented? Those are larger conversations, but I think we've gotta start small. We've gotta start with something, and it feels like consent is probably an easy first step, because we can all agree that if you're gonna do something with my information, it'd be nice if you told me you were gonna do it. Because the other problem is, the companies aren't gonna do it themselves as long as there's a bad actor still doing the bad stuff. Because it's unfair if company A is like, well, we can target Jeff because we know he's in his 40s, we know how much he makes, we know he's got two kids, we know he's got a wife, we know he loves Star Trek, while these other guys only know that he likes Welch's grape fruit snacks. That's all they've got. So of course companies are gonna continue to violate until there's a reason to bring all of the players into line together. So let's talk about these things at the open space. That's my time. Whoa, wow, I've got three minutes left. I've never been early. That's my time, though, for today. Thanks for listening.
Hopefully I didn't bore you guys too much with the ethics talk and hopefully we can continue the conversation in open spaces later this afternoon. Thank you.