Things is this idea of connecting devices with sensors, with big data, everywhere in the world, and then doing things with that data to effect change. And there are sort of two sides to the internet of things. There's the industrial internet, which is where we hook up the internet to big control systems, turbines, power generation, machinery. I'm not gonna talk about the ethics of that, because that would be like a four-part lecture series and not a 40-minute talk. Instead, I'm gonna talk about the consumer side of the internet of things, which is the devices you might encounter or buy for your home, your workplace, retail establishments, your car, that sort of thing.

There are a few bookkeeping things I want to cover first. The first one being: please feel free to tweet, take pictures, video, whatever you'd like to do. I want to give my enthusiastic and explicit consent for that. And the reason I'm saying that is because the idea of consent is the bedrock principle upon which our modern system of ethics is built. The other reason I wanna say it is because the talks that happened yesterday and this morning have been so amazing and so inspiring that as I've wandered around the conference, I've heard a lot of people talking about wanting to submit a talk for this conference next year, or for another one. We're all aware of how the internet can be, and we're all aware of the risks you take when you talk about challenging concepts in tech spaces. So I just want to say: if you have ideas that you want to share with an audience or with people, you need not consent to being videotaped, being photographed, being tweeted about. It's okay to say no. Please don't let that be a barrier to sharing your ideas with people some other time. Your ideas are valuable, and I hope you get to share them at some point.

I do have some content warnings for this. It is nearly impossible to have a meaningful discussion of ethics without getting into some heavy stuff, and I'm gonna get into some heavy stuff right away. This is Open Source & Feelings, and "and" is a commutative operator, so I'm gonna go with the feelings first and the open source later. Some of these things: we'll be talking about racism. We'll talk about illness and death. We'll talk about terrorism and warfare. There will be some images, though no dead bodies. We're also gonna talk pretty frankly about an incident of sexual assault as it pertains to the internet of things. I'm gonna get right into this, so if anyone has any issues with it, please feel free to get up and grab a coffee or take a break. One other thing I wanna add is that I have two videos. One of them has a piece of raw meat in it. It's about 15 seconds long, and I'll give you a heads-up before it plays. It kinda squicks me out, even though I'm the one that made it, so feel free to shield your eyes during that; it's also a little bit blurry. I'll let you know when it's done.

So if you have followed my Twitter rants, or if you've talked to me before, you know that I'm not actually a software engineer by training. My degree's in mathematics, but I studied aeronautical engineering, I'm trained as an aeronautical engineer, and I worked in that space for eight years. And we have this saying in the aerospace industry that's kind of hashtag horrible.
And the saying goes that the difference between an aeronautical engineer and a civil engineer is that aeronautical engineers build weapons and civil engineers build targets. This image kind of captures that perfectly. It was taken during the Korean War; it's a US Navy jet bombing a bridge in Korea. And the problem with this saying is that not only is it horrible, it's also not strictly true. Because civil engineers build weapons too. And so do biomedical engineers, and so do software engineers.

Let's go back a little ways. This gentleman is Robert Moses. Robert Moses was the park commissioner for the city and state of New York from 1929 to around the 1960s. He did some amazing things, and if you ever get a chance to travel around New York City, if you're in an Uber or a taxi or something, try to pay attention to the roads and the bridges, especially out on Long Island. It's a fascinating, fascinating system that they built, and they did it very quickly, with a lot of federal funding under the New Deal. At one point, every federal dollar that went to public infrastructure through the state of New York first crossed Robert Moses' desk. He did many great things. He built the Verrazano-Narrows Bridge. He built the Triborough Bridge, I believe it was. And he built the parkways of Long Island.

If you know the New York highway system, you'll know that parkways are for cars only. Buses and trucks cannot drive on parkways. He helped write that legislation back in the late 1920s and early 1930s. But he also said that laws can change with the whims of the legislature, and that wasn't good enough for his agenda. So on Long Island, he built some 200 bridges with low clearance that still stand today. This is one of them, in North Massapequa, New York, with a 10-foot-2-inch clearance. It's on a road that's legally passable only by cars, but the effect is to bar buses from ever driving on that road. Bridges are more expensive to rebuild than laws are to change. And he used an engineering justification for this: he said that the footers of the bridges would require less land and be easier to build if the bridges were shorter. That's a pretty good engineering justification, if you extract it from the political consequences it has.

This road is one of the main roads you would take to get from New York City to Jones Beach State Park. And when he built it in the 1930s and 1940s, the black communities of New York were mostly limited to public transportation, because they didn't have access to cars. They weren't financially privileged enough to have independent transportation. Robert Moses used federal funding for ostensibly public infrastructure to gate access to a resource away from people he deemed undesirable. Robert Moses used this bridge as a weapon.

Public infrastructure is an interesting thing, because it is federally funded or community funded. And we're living in a time right now where data, not connectivity, has become public infrastructure. Wi-Fi is everywhere, and smartphone ownership, even among low-income populations, is nearly ubiquitous. But connectivity is not enough. Data is what you need to make decisions. Cities, companies, governments are increasingly reliant on data to make decisions; they cannot act if they do not have data. And the data sets have become increasingly sophisticated. We're entering a time where sensors and information are what we're building our infrastructure around.
And infrastructure can lift populations, or it can oppress them. It can do amazing things. The internet has done amazing things for healthcare, for education. We're all here because of it. It can buoy people. It can create communities. But it can also oppress people, just like Robert Moses did with those bridges. The tech industry has been very deliberate about focusing on the positive, focusing on lifting communities, and they sell this image pretty hard.

I'm gonna ask you to watch a very short video. It's an advertisement. I'm really sorry. Is anyone here from Microsoft? Okay, I'm really, really sorry, because I feel like I'm about to be dog-fooding you: this is one of y'all's ads. It's from four years ago, and it's called the Kinect Effect. I picked it because I have a lot of experience working with the Microsoft Kinect in my professional life, but also because there's a thing that came up recently, and as you watch the video you might catch it, that makes this extremely prescient even though the video was made almost five years ago.

[The ad plays:] "We started with a sensor that turned voice and movement into magic. We thought this would be fun to play with. And it was. But something amazing is happening. The world is starting to imagine things we never expected. Unexpected things. Wonderful things. Now the world keeps asking us what we'll do next. We're just as excited to ask the world the same thing."

First, the choice of the Pixies' "Where Is My Mind?" as the soundtrack for a bright future runs kind of the opposite direction of Fight Club. Curious choice there. The second thing, and maybe you noticed it, it is really hard to see because they put white text on a light background: the disclaimer text says that the depictions are visionary. The wording there is really interesting, because the ad wants you to believe that we live in that future. But we don't. That's their vision for the future; that's the tech company's aspiration of what they want to create. The third thing about the ad was this: did you notice the bomb disposal robot that was hooked up to the Kinect? I submitted this talk proposal back in March or April, long before the events of Dallas, which happened this month. I was using weaponization as a metaphor. But the reality is, those are the real consequences of designing technology with sensors, with fast connections, with big data: people will use it to cause harm, even if that's not what you intend.

The key takeaway from all this, the thing that we have to remember as we design this technology, is that algorithms are not perfect. Algorithms are not a black box from which emits pure truth. They are biased. They are flawed. I'm not going to talk a lot about them; if you want to see a great talk about that, Carina C. Zona's talk, Consequences of an Insightful Algorithm, is an amazing talk. It's the inspiration for this one. I want to talk about sensors and logs, however, because sensors are not exact. They have noise. They have bias. And logs are not a perfect record of history. What you see in a log file is not indicative of what actually happened.

A perfect example of this is that hardware fails. Hardware fails all the time. We buy new coffee makers. We buy new microwaves. We go to the shop to fix our cars. We have plumbers come in to repair our plumbing. We have construction crews repairing bridges. But software fails even more. Turn it off and turn it back on again has become a trope in our lives.
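To make "sensors have noise" concrete, here is a minimal sketch, in Python, of the kind of naive optical heart-rate pipeline you might imagine inside a cheap wearable: smooth the photodiode signal, count peaks, enforce a minimum gap between beats. This is not any vendor's actual firmware, and the sampling rate, smoothing window, and refractory period are invented but plausible numbers. The point is that the algorithm has no concept of "there is no wrist here," so it will happily find a heart rate in pure noise.

```python
import random

# Invented but plausible parameters; not taken from any real device.
SAMPLE_RATE_HZ = 50      # PPG sampling rate
WINDOW_SECONDS = 15      # about the length of the chicken video
random.seed(7)

# "Sensor" input: no pulse at all, just noise on the photodiode.
signal = [random.gauss(0.0, 1.0) for _ in range(SAMPLE_RATE_HZ * WINDOW_SECONDS)]

# Crude band-pass: smooth with a short moving average.
K = 10
smoothed = [sum(signal[i:i + K]) / K for i in range(len(signal) - K)]

# Count local maxima as heartbeats, ignoring any peak that comes
# within 350 ms of the previous one (a physiological refractory period).
REFRACTORY = int(0.35 * SAMPLE_RATE_HZ)
beats, last_beat = 0, -REFRACTORY
for i in range(1, len(smoothed) - 1):
    is_peak = (smoothed[i] > 0.0
               and smoothed[i - 1] < smoothed[i] >= smoothed[i + 1])
    if is_peak and i - last_beat >= REFRACTORY:
        beats, last_beat = beats + 1, i

bpm = beats * 60 / WINDOW_SECONDS
# On a typical run this prints a heart rate in the neighborhood of 120 bpm,
# indistinguishable in a log from a real measurement.
print(f"{beats} 'beats' in {WINDOW_SECONDS}s -> {bpm:.0f} bpm")
```

The log line at the end is the whole problem: once a number like that is written down, nothing about it records that the "patient" was a piece of chicken.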
And this is a problem, because we want to install software in all of these devices that we use in our everyday lives. So there's an example I want to show of a particular kind of combined hardware-software failure. This video, I'll pull it up in a second, does have an image of raw meat, and it is a little bit blurry. I made this video a few months ago; you might have seen it if you follow the Internet of Shit account. So yeah, this is a Microsoft Band. Again, Microsoft, I'm really sorry. What's happening in this video is that this smart watch is picking up a positive signal off this piece of chicken, which is not connected to a live chicken, and it is reading a heart rate of about 119 beats per minute. This is real. This is not faked. And I can replicate this not just with my Microsoft Band. You are excused. I could do it with an Apple Watch, a Fitbit, Samsung, Motorola, anything with a photoplethysmography sensor. Apologies to the captioner for busting out photoplethysmography with no warning.

This is really interesting, except, I'm sorry, I left that up too long. There's something disturbing related to this that happened last year. Last year, a woman visiting a coworker in Lancaster, Pennsylvania, called the police and reported that she had been assaulted and raped. In her statement, she reported that she lost her Fitbit; police found it in a hallway. With her consent, the police accessed the data on the device and found that when she said she was sleeping, the device showed that she was active. In April, she was charged with, and convicted of, misdemeanor charges of making false statements to the police, and put on probation. Lest you think there were other facts in the case that may have led to this, the assistant district attorney prosecuting the case said that the data from the device sealed the deal. I can pull 120 beats per minute off a piece of raw chicken, and a woman's life is ruined because of that data.

Sexual assault in particular is extremely difficult to report. The overwhelming majority of sexual assaults do not get reported. And this is an experience that I have lived, like one in five women and one in two transgender people. This technology does not affect the people who use it proportionately. There is a political bias, even if it is unintentional. We didn't give away our due process rights when we agreed to the terms of use of these devices.

The problem is that as developers, we have to ask some really challenging ethical questions. One of them being: who gets put on probation when a device makes false statements to the police? Who's responsible when an algorithm causes an accident? Is it the principal engineer? Is it the person that pushed the code? Is it the company that made the device? We don't have answers, and we're gonna go through some examples of what has happened just in the past few months, literally in the time between when I submitted this talk and now.

This is the first known incident of a self-driving car being found at fault for an accident. The white SUV here is a Google self-driving car, and this view is taken from a Mountain View municipal bus. The car is going to pull out, and it's going to hit the side of the bus. Nobody was injured; everyone was fine. But the car was found to be at fault. Google acknowledged this, and they said that the car "predicted that the bus would yield to us because we were ahead of it." Have y'all ever seen a bus yield? And this is a challenge, right?
Because the laws that we have, the systems that we have, you can encode. You can take a driver's manual and program it; you might have to be a very good programmer to do it, but you could do it. But the rest of the world is analog and doesn't work that way.

This has happened more times recently. There is a model of vehicle out there that likes to self-drive and likes to be advertised as self-driving: the Tesla. There have been numerous claims where people have said, the autopilot was on, I didn't drive into that building. And Tesla almost universally comes back and says, hey, we've got the logs. Well, remember what I said about logs? They're not infallible. How do we know, other than Tesla's word, that those logs are an accurate representation of history?

It came out last month that the first fatality under Autopilot had happened. A tractor-trailer truck was turning left across two lanes of traffic, the Tesla was coming down the road, and neither the Autopilot nor the driver managed to apply the brakes. The car went at high speed underneath the trailer, and the driver died. Tesla's statement regarding this incident says it is important to note that Tesla requires explicit acknowledgement that the system is new technology and still in a public beta phase. It's a great sentiment, but the driver is dead. In what other industry do you get to say, eh, it's a beta? Ikea doesn't get to say, oh, sorry, those dressers were our beta. Sorry about your kid.

Tesla went on to say that, contrasted against worldwide accident data, customers using Autopilot are statistically safer than those not using it at all. And this is true. This is very true. This is actually why, contrary to all this, I am a proponent of self-driving cars. The problem is, lives saved is not the only term in the ethical calculus of our society. We also have to worry about the lives that were lost and the quality of life of their families. We have to worry about who is responsible. We have to worry about the lives that are saved: are those people injured? Are they disabled? Who's going to pay their medical bills? Who's gonna pay the funeral costs? These are challenging topics. When a human gets in a car, they can make ethical decisions. If you drive, every time you get behind the wheel you're making an ethical decision that can lead to the death of a human, or multiple humans, including yourself. We've accepted that responsibility, and we've accepted a legal framework in which we know that if we screw up, we will go to jail, we will owe somebody money, we will make somebody's life as whole as we can.

Somebody has looked at this from a legal standpoint. The Honorable Curtis Karnow is a judge in San Francisco County, and a few years ago he studied, as a sort of academic exercise, the implications of tort law for autonomous embodied systems. His conclusion was that the actions of autonomous robots devising their own means to attain a task may not be subject to liability under any theory of tort. If you've studied any law, you know there are three primary ways in which a person can recover damages from another entity: contracts, torts, and statutes. What that conclusion means is that torts are out of the question. If I buy a coffee maker at the store, the company that makes the coffee maker has product liability; they would be responsible for paying restitution. But if that doesn't work for autonomous systems, how do we recover damages?
And this is an unanswered question. The truth is, we need a statutory solution, but IoT technology is currently almost completely unregulated, with a couple of exceptions. There is no law preventing Tesla from shipping a beta autopilot with their vehicles, so long as there is a driver behind the wheel. There are no laws, there are no standards, for your internet of things coffee maker or door lock or smoke detector. If you buy a smoke detector from a hardware store, you have some confidence that it has been evaluated by an independent standards agency such as Underwriters Laboratories. That is still true for an internet of things smoke detector, but they don't have any way of evaluating the software. And software fails a lot. When you put software, connected software, into a device, you introduce a whole suite of failure modes and failure pathways.

Internet of things technology can even bypass regulations in some cases and expose data and vulnerabilities that are otherwise protected by law. A perfect example is the Samsung Family Hub. I actually think this is an amazing idea, for a lot of reasons I don't have time to go into right now. But one of the features of this device is that every time you close the doors, it takes a photograph of the inside of the fridge. That way, if you're at the grocery store and you're like, oh crap, did I buy eggs or do I need to? Am I running out of milk? Anything like that, you can just pull it up on your phone.

But there's a problem. Not everything in a refrigerator is food. Not all people's food is the same. And sometimes the things we put in the refrigerator are sensitive. This is Copaxone, a multiple sclerosis drug. It's used to treat the symptoms of multiple sclerosis, or at least slow their progression. Copaxone requires refrigeration. It comes in boxes with text that is very easy to read, and therefore very easy to identify with OCR. Samsung would be taking pictures of your Copaxone if you had multiple sclerosis and a Family Hub. The fact that you are on this drug is data that is protected in the United States by HIPAA, if it is stored or generated by a health insurance company, a healthcare provider, or a pharmacy. But Samsung is not a covered entity in this case, so they have no statutory responsibility to protect this data or treat it with sensitivity. They can sell it to a third party. They can leak it over the cloud. They can store it unencrypted and unsecured. And this is the kind of data that can be used to deny people jobs, to affect lives in ways that we did not agree to when we said, yes, I consent to this device sending pictures of how much milk I have to the internet.

The reality is that the tech industry hates regulation. Regulation slows the pace of innovation. It introduces challenges and hurdles that are crappy because, as technologists and developers, we like solving hard math problems and hard code problems, and regulation is about paperwork, and paperwork sucks. So tech fights regulation at every possible juncture. We saw this this past spring in Austin, when Uber, and Lyft, I should say Lyft is implicated in this too, went toe to toe with the people of Austin. The vote went against the companies, and the city said, well, if you're gonna operate a rideshare here, we need you to fingerprint your drivers. And Uber said no. Their announcement in response was seething with disdain: "We hope to resume operations under modern ridesharing regulations in the near future."
Notice the "modern ridesharing regulations," as if to say your laws are not good enough. I'm not really sure what their definition of modern is when the law was literally just passed. And Uber has some really interesting ways of interfacing with laws. This is a tweet, republished with permission, from Amy Britton, who's in DC. Her driver went down the wrong side of the road, and she complained. The response she received said, among other things: "What really happened here is sometimes the driver partner has their own way of getting around the city."

Y'all, the tech industry needs to learn how to deal with regulation. It needs to learn how to deal with communities, to interface with governments and legislatures, to constructively build up regulations that work for both them and the people. A perfect example of this is medical devices. This is one of the exceptional cases. Medical device software is heavily regulated, and has been since the Therac-25 incidents in the 1980s, when it was found that software errors in an electron beam radiation therapy machine led to the deaths of four people. If you build medical devices, you take on statutory liability for the functionality and the efficacy of the algorithms, the software, that you put on them.

And this is kind of interesting, because if you're a software engineer, you have probably heard the statement: don't roll your own. Don't roll your own XML parser, don't roll your own cryptography; use somebody else's that's already been tested. And open source is the perfect venue for getting software packages and libraries and tools that have been tested by the community at large. But if you're gonna take third party software into your medical device, you're now taking on all that responsibility for code written by people who don't work for you. And that's a challenge for many companies. That's a difficult pill for a lot of companies to swallow.

Since this is Open Source & Feelings, I wanna talk about how, as open source developers, we can make it easier for our stuff to get into medical devices, so that we can contribute to the actual bright future from that Microsoft video, so that our code can be used to build those games where a child is rehabbing their knee after an injury using a Kinect. In the medical device world, third party software has a fun name: it's called SOUP, or Software of Unknown Pedigree. And basically the way the law works is that you're free to use whatever third party software you like, but you take on all the responsibility for it. That introduces some challenges with the update and distribution model, because every time an update gets pushed, other things have to happen once you have a registered medical device: you have to notify the FDA, you have to maintain device history files. It's a very burdensome process, a huge pain, and SemVer alone doesn't solve it.

One of the ways we can make life better for companies, for people who want to build the internet of things in regulated spaces, is to have codes of conduct in our open source projects and communities. And to make this clear, this is my personal belief: if you do not have a code of conduct in your open source community, you are not mature enough to build safety-critical software. I apologize for the bad contrast with the orange; it comes up a little bit cleaner on this display. If people don't feel comfortable contributing issues and bug fixes and reporting problems, they're not going to.
And that means that software is out there that's broken, and if it's in safety-critical systems, that means there's a bug out there that can cause a buffer overflow, that causes a fire, that displaces a family, or worse. The reality is that free speech and free beer are wonderful concepts, but they alone do not constitute open development. There is no openness without both accountability and transparency. If you don't have those, then you cannot claim to be open. If you don't have a process by which to evaluate bugs and incident reports, then you don't have an open process. And if you don't have an open process, then all of the arguments you have made about community-driven development being more secure and more safe go out the window.

In the real world, software regulations don't tell you, okay, you need to use SHA-256. They don't say you need to use object-oriented programming. They're all about accountability. The talk earlier about failure analysis, I loved it, because it's perfect. This is what happens in the medical device space. If there's an incident, you have to do an investigation, you have to report it to the FDA, and then you have to solve the problem. It's not about blaming somebody; it's about being accountable.

And so, I put this in my abstract, I have to talk about it. NPM folks, where are you at? Sorry. Sorry. I know y'all get a lot of flak for this in places. For those who don't know, left-pad happened when, and talk to the NPM folks if you want the specific details, but basically, an open source developer had a package name conflict with a corporation's trademark and got all free-speech on us. He got angry and pulled all of his packages from NPM's registry, which caused a lot of builds to break and a lot of unpleasantness to happen. That's fine in a world where we're just using it to develop websites and web apps, where there might be some financial risk when these failures occur. But in the internet of things, a broken build doesn't just mean downtime. A broken build can mean devices stop working. If your refrigerator stops and your food spoils, that can be very detrimental, to say the least, if you're somebody who cannot afford to replace food. Or if it happens to an over-the-air update for the software in your car, that would be pretty bad, especially if you're driving. And these things have happened. These are not hypotheticals. I don't have the time to go over the case studies, but if you wanna talk to me later, I have about five case studies from just the past year that are mind-blowing.

Open source, generally speaking, has a security and oversight problem in how it's distributed and used. What happens is that companies decide to roll their own, because they can't solve this problem. So what we need to do, the takeaway from this talk, I guess, is that as open source developers, we need to fix this problem to make the internet of things great. We need to understand that as we go into a world where devices and sensors are all around us, we're entering a world where we can't walk away from them. I can shut this computer and walk away. I can hang up my phone, put it in my purse, and ignore the internet. But when the internet is in my car, my fridge, my coffee maker, when there are sensors in the roadway, when there are sensors in the ATMs and the retail establishments, I can't get away from the internet anymore.
And so if we break things, we don't just have a world anymore where you can say, crap, I gotta take my phone into the shop. We have a world where material harm can happen. The stakes that we have as developers have gotten so much higher.

So how much time do I have? I don't have the timer running right now. About as long as I, oh, great. There's more in the slides, but I wanna tell one quick story, to tie back in that aerospace angle, about something that has affected me. I worked in the aerospace industry for eight years, and I built weapons. I've analyzed data and looked at things that are uncomfortable. I've left that world, and I've kind of lit that career on fire, if you follow me on Twitter.

When I was studying aeronautical engineering, at the school I went to, there's a class that every aero engineer had to take. It was called Aerospace Fundamentals. It's taken sophomore year, in the fall, and every year it's held on Tuesdays and Thursdays. It was taught by this tall Norwegian man, an older guy who had taught the class for 30 years and was a former NTSB investigator. He had this policy that you could bring whatever you wanted to an exam: computer, textbooks, notes, past exams, homework assignments, solutions, whatever you wanted, as long as it didn't breathe. But there would be no partial credit. And his problems were brutal. An exam would be like two questions, parts A through G, and if you screwed up part A, it fed into part B. If you wrote two plus two equals five, you might as well just take out a loan for the next semester, because your class was done. His rationale was that when you build devices in the real world, you don't get partial credit. If the airplane crashes and people die, there's no partial credit for that.

So I took this class my sophomore year, which was in 2001. September 11th was a Tuesday. We had the class at 10 a.m. This was in upstate New York. And he walked into the class, and after dismissing everyone who was from D.C. or New York, he went through a quick set of calculations. We didn't know what kind of airplane it was, so he had all of the specs for a 737, and we computed the impact on the building: the moment at the base, the heat at which the fuel would burn, the energy of the impact. He ran through all these calculations right there in front of the class. And he said that by the time we were done with that class, both of those buildings would have collapsed. We didn't know at the time that it had already happened. This was 2001. We didn't have Wi-Fi. We had the internet, but not in the classroom.

And the message he gave us at the end of it was that this wasn't an academic exercise for fun. It was to let us know that as engineers, the things that we build, whether they are used for benevolence or malfeasance, or whether they're used with negligence, no matter what, we are responsible for the devices that we create. And if we take the title of engineer, we must accept that responsibility, because there is no partial credit. Thank you very much.