Saturday, first talk of the day, let's give him a great big DEF CON welcome. Come on! Yeah! All right. Hey, welcome everybody. We're really appreciative. You know, I think it's wild just to think about how far DEF CON's come. We're at DEF CON 30. How insane is that? You look around, walking through the halls, there's marble, and it's just so interesting to see where we've come. I think it's worth a second to just look around, see who's here, see how much we've grown, and appreciate that. We have just begun to burn down all the shit that's wrong in this world and fix it with our hacker minds. So guys, give yourselves a round of applause. That is important, and you being here is part of that, and we really appreciate that. Thank you so much for being with us, especially at 10 a.m. on a Saturday, which in DEF CON time is like 3 in the morning in the real world. So really appreciate it. My name is Jeff. My friends call me Replicant. I'm a pediatrician and an anesthesiologist, and I do some security research about clinical medicine and technology with this guy. I'm Christian Dameff. Quaddi is my handle. I'm an emergency medicine physician and security researcher. And I'm Cory Doctorow. I've worked with the Electronic Frontier Foundation for 20 years in different capacities. I'm currently a special advisor. I write science fiction novels, and I'm on the computer science faculty at the Open University and the library science faculty at the University of North Carolina. So a little-known fact: Quaddi actually brought me to my first DEF CON, which I think was DEF CON 18 or 19, back when we were baby medical students. And I was thinking the other day about what has happened in medicine since then. I mean, it seems like those 10-plus years have gone by quickly, but in medicine that's a lifetime. And there have been some incredible advances in the ways we use our technology to treat patients, advances we can really look at with two lenses.
A lens of promise for the future and the incredible achievements we can make, but also one of peril, a future we want to avoid, with really significant concerns and tensions around privacy and autonomy. And so just to give you a little bit of a sense of that: you know, we have gene therapies now that treat diseases that were fatal to children when I was in my pediatric residency. That's incredible. That's incredible promise. Looking at it the other way, though, if you don't have insurance, you've got to pay half a million dollars a year to keep your kid alive, right? We're going to talk about artificial pancreases, incredible technology that hackers have pioneered that can help improve your diabetic control and really increase your quality of life. But we can see a future where some people may be forced to use a black box that they don't fully understand, that collects data they can't access, and that depends on DRM-shackled consumables they need just to keep it functioning. Or a future in which telemedicine is available to everybody, people can see a primary care provider, and personalized medicine can customize treatments for them. You can see the flip side of that coin, where private equity is buying up your primary care provider, Amazon is acquiring your doctor's office, and you're starting to get targeted ads based on the data they're mining from you. So I really want to explore those tensions today as we talk. And we thought there was no better person to help us talk about this than the activist Cory Doctorow. So we're going to let Cory take it over for now. Give it up for Cory. Thank you all. So I want to talk about a group of people who rely on medtech and also rely on modifying medtech, some of the ways that their own safety has been weaponized against them, and some of the stuff that's come out that's made life better for people who rely on it.
So I'm talking about people who use power wheelchairs, which are a significant part of the $50 billion durable medical equipment market. There are about 3 million Americans who use powered wheelchairs. It's the complex rehab tech area of Medicare. And Medicare is pretty dysfunctional in this regard. They very narrowly interpret their mandate, and so if you use a power wheelchair and you rely on Medicare to provide it, Medicare will only give you an indoor powered wheelchair, although many of us like to leave our homes, and they will also refuse to cover any preventative maintenance. So this is a recipe for disaster: you have a chair that's being used in ways it's not supposed to be used, and you can't perform preventative maintenance on it. One problem with the way Medicare procures these chairs is that they go to the lowest bidders, and the way to generate a low bid is to have economies of scale. And so two private equity rollups, a company called Numotion and another company called National Seating and Mobility, have bought virtually all the other companies that supply powered wheelchairs. So people who use powered wheelchairs typically buy from one of those two companies. And private equity firms have a common playbook, which is to load up their acquisitions with a lot of debt and then squeeze them to service that debt. They pay themselves a special dividend on acquisition, and then to make good on that debt they have to squeeze the company. One of the ways they've squeezed is by cutting service. And so parts are billed at very high prices, and it takes a very long time to get service. All of this was examined in detail in a report that came out this spring called Stranded, from the Public Interest Research Group, or PIRG. Stranded found that 93% of the powered wheelchair users they surveyed had needed service in the last year; 62% of them had waited four weeks for that service, and 40% of them had waited seven or more weeks.
And yes, seven or more weeks. Sorry, I thought that was months. No, seven or more weeks. And you have to understand that in some instances this meant not only that you couldn't leave your home, but possibly that you couldn't leave your bed. So it makes it very hard to have a family life, have a personal life, do shopping, maintain your job, and do all of these things. So the question is, why can't people who are literally stuck in bed for seven weeks waiting for a part just fix their own chairs? Partly that's because the parts stream itself has been starved by the duopoly. So in Stranded, PIRG collected stories from people who use powered wheelchairs about their problems getting parts and service. They found multiple people reporting that the $6 inner tube their chair used cost $300 as a Medicare-billed part, and that it would take six to eight weeks to procure. So you would have a flat for six to eight weeks while you waited for your chair to get fixed. There was a $20 power button, the part that literally turns the chair on, that cost $500 and took four months. But even where people can get parts, they do: they source them from eBay and they source them from Amazon. And there's one great story about a couple who were having trouble sourcing a stability wheel, and then their son looked at it and was like, that's just a skateboard wheel. And he showed them how to buy that wheel, with cool orange glitter and whatever, and they just replaced it. So sometimes you can just fix your chair by going around these companies, treating them as damage and routing around them. But sometimes you get blocked by digital rights management. These chairs use digital rights management to restrict access to their management consoles. That means you can't get diagnostic information out of them. It also means you can't make routine adjustments. So for example, there's often a delay built into the steering mechanisms.
As you get more proficient with your chair, you might want to reduce that delay. You can't do that on your own. Also, if you change the pressure in your tires for different terrain and you want to adjust the torque in the motor, that's also an alteration you need a security dongle for. So the good news is that Colorado in June passed HB22-1031, the Consumer Right to Repair Powered Wheelchairs Act, which substantially fixes a lot of these things. They did run up against a really important problem, though, which is that removing DRM is a felony under federal law. Section 1201 of the Digital Millennium Copyright Act provides for a five-year prison sentence and a $500,000 fine for providing a tool to bypass DRM. And so they couldn't authorize people who fix wheelchairs, or who use wheelchairs and want to fix them, to make or provision each other with tools that would allow them to effect these repairs. So instead, they did an end run around it, and they ordered the wheelchair companies to just provide the tools that would allow people to read out the diagnostics and so on. This is a good solution, but really it's not enough. And so I just want to finish by saying that at the Electronic Frontier Foundation, we're representing Matthew Green and Andrew "bunnie" Huang in a lawsuit to overturn Section 1201 of the DMCA. And finally to say that, as I noted, there are some deep structural problems that make it hard for people to use powered wheelchairs, right? There's the duopoly, there's Medicare only paying for indoor chairs and not supporting preventative maintenance, and better repair doesn't solve any of those problems. But it does fix wheelchairs, right? And that in itself is something worth doing, and we can walk and chew gum. We need to do both. So, that story: how many people out here were surprised by that story? How many of you have experienced modern healthcare? Raise your hand.
Raise your hand if you've been frustrated with the inefficiencies, the lack of communication, the broken insurance system. Yeah, okay. We can all relate to that, absolutely. As clinicians ourselves, we relate to that too. We're going to transition a little bit to another very strong theme in modern medicine, which is that we often are unaware of how the tools we use actually function. So, Replicant, take us away. So in my day job, I'm an anesthesiologist, and I somewhat facetiously say that I hack people's brains: I turn you off, somebody pokes you with a hot knife, and I turn you back on again. If it's a neurosurgery, we blow on it before we put it back in. But it's widely accepted in our profession that it's less than ideal to wake up during surgery, right? And so one of the things we use is a monitor that helps us, in addition to a couple of other variables, keep track of how deep a patient is under anesthesia. It's called the BIS monitor. We've used it for about the last 20 years. It has been the topic of thousands of academic papers that investigate how anesthetics even work. So it's something that we're all very familiar with. And without getting too far into the weeds: this is a monitor whose name, BIS, is derived from how it works. It takes the electrical signals of the brain, the EEG, and processes them to produce a unitless, dimensionless number that people can trend. So at 20, the patient is nice and deep under anesthesia; at 80, they're about to wake up and sue you out of practice. For a long time, that name BIS made sense because most people understood this as something that looks at what's called the bispectral index, which is just a particular way you can analyze that EEG. Last year, a really awesome doc at Harvard, Christopher Connor, reverse engineered these previously proprietary algorithms. This was a black box nobody had ever really seen under the hood of.
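To make the "EEG in, dimensionless trendable number out" idea concrete, here is a purely hypothetical toy, not the BIS algorithm or anything close to it. It just maps the ratio of fast-wave to slow-wave spectral power in a signal onto a 0-100 scale, which is the general shape of a processed-EEG depth index:

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi) Hz via a naive DFT -- fine for a short toy signal."""
    n = len(samples)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            total += (re * re + im * im) / n
    return total

def toy_depth_index(samples, fs):
    """Hypothetical 0-100 index: fast-wave (13-30 Hz) dominated signal reads
    high ("light, about to wake up"), slow-wave (0.5-4 Hz) dominated reads
    low ("nice and deep")."""
    fast = band_power(samples, fs, 13.0, 30.0)
    slow = band_power(samples, fs, 0.5, 4.0)
    return round(100 * fast / (fast + slow + 1e-12))

fs = 128                                # samples per second
t = [i / fs for i in range(2 * fs)]     # two seconds of synthetic "EEG"
awake = [math.sin(2 * math.pi * 20 * x) for x in t]   # 20 Hz beta-band sine
deep = [math.sin(2 * math.pi * 2 * x) for x in t]     # 2 Hz delta-band sine

print(toy_depth_index(awake, fs))   # high: fast-wave dominated
print(toy_depth_index(deep, fs))    # low: slow-wave dominated
```

The point of the sketch is only that the whole pipeline is an ordinary signal-processing computation, exactly the kind of thing that could be published and inspected rather than locked in a proprietary black box.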
He reversed those algorithms and showed that actually this device isn't producing a bispectral index at all. It's looking at a completely different aspect of the EEG. Which is a little unusual, because a lot of the research we've conducted using this monitor has operated under a base assumption that the manufacturer never bothered to correct. It's unfortunate, because I think in situations like this, when we're using clinical devices without fully understanding where they're getting their information, how they're producing it, and how we should use it, we miss opportunities to innovate. We miss opportunities to use these devices in different situations, or to say that they may be less than ideal for a particular use context. And I just don't really understand why we have to live in a paradigm where these things are so locked down and proprietary. And if this is a concern for clinical devices that are relatively reliable, that we've used for the last 20 years, I'm even more worried about the coming tsunami of clinical AI and ML algorithms. You know, if I had a Dogecoin for every Silicon Valley guy at a medical conference who said, I've got this AI algorithm that's going to revolutionize the way you practice medicine and save you billions, I'd have like four dollars. So, Cory, why should we not lift our hands and welcome our new clinical AI overlords? So, I mean, I think this is probably an audience that is well up on all the different ways that ML can go very wrong. We have a whole village here at DEF CON where you can see people giving ML all kinds of hallucinations and tricking it in lots of ways, deliberately and accidentally. It sometimes has some weird failure modes. And of course, that's true of people too, right? People make mistakes. People have biases and so on.
But there's one thing about a number given to you by software that is, I think, more dangerous than a number given to you by a human, which is the degree of trust we put into it. Take a process that would normally make someone step back and say, wait a second, that can't be right, and have a computer emit the result as a precise number instead of as a kind of squishy judgment. You can empiricism-wash your weird ideas, and people go, oh yeah, I guess algorithms can't be racist. There's no such thing as racist math. To which I say: meet my friend the phrenologist, he'd like to measure your skull with his calipers. So, you know, I think that when you combine the already difficult situation in which people often defer to medical professionals about things that they are uniquely situated to describe, because it's part of their subjective response to their pathologies, and then you add a computer to the mix that says no, everything is fine, it becomes very hard to imagine how patients are going to be able to exert autonomy over their bodies and their care. And I did want to add something about that awesome BIS monitor paper that this audience will appreciate. One of the ways the guy who reversed the BIS monitor was able to do it was by building an emulator, which turned out to be really easy, because the core DSP in the BIS monitor is a TI DSP that's used widely in video game systems, so he could use MAME. Which is just great. That's rad. So, we've talked about algorithmic bias and empiricism washing, and how we're all aware that these algorithms can have bias, unintentionally, sometimes simply based on the data you put into them: the demographic composition of the individuals in the training set. But it becomes even more concerning, and a lot of this talk is about dystopian futures, when you consider adversarial machine learning.
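The flavor of attack in question can be shown with a toy. Below is a hypothetical sketch (invented weights, not any real clinical model) of the classic fast-gradient-sign idea: for a linear classifier, the gradient of the score with respect to the input is just the weight vector, so a small per-feature nudge in the right direction can flip the model's output:

```python
import math

# Toy linear classifier standing in for a clinical ML model (weights invented):
# score > 0 -> "flag patient for intervention", otherwise "normal".
weights = [0.9, -0.4, 0.6]
bias = -0.5

def score(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def adversarial_nudge(x, eps):
    """Fast-gradient-sign-style perturbation: step each feature by eps in the
    direction that pushes the score toward the opposite label. For a linear
    model, the gradient of the score w.r.t. the input is exactly the weights."""
    direction = -1.0 if score(x) > 0 else 1.0
    return [xi + direction * eps * math.copysign(1.0, w)
            for xi, w in zip(x, weights)]

patient = [0.8, 0.1, 0.2]
print(score(patient) > 0)              # True: flagged for intervention
evil = adversarial_nudge(patient, eps=0.2)
print(score(evil) > 0)                 # False: a small nudge flips the call
```

Real models are nonlinear and real attacks are subtler, but the core asymmetry is the same: the attacker only needs a tiny, targeted perturbation, while the defender has to be robust everywhere.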
Many of you are familiar with that. Think about ways in which an intelligent adversary could attack machine learning algorithms to manipulate the outcome, right? They could attack classifications. They could attack training sets, change what the ground truth is, and design an attack that manipulates the outcome of the algorithm. That could be done in a variety of really scary ways for a lot of really scary purposes. If it's to manipulate you into buying something, you can see a financial motivation. In the healthcare space, you can imagine organizations, companies, entities doing it to compete with one another, to make a rival's AI algorithm less effective, for example. There are even some papers out there that are quite concerning, where it's not the entire population that's impacted by adversarial machine learning: you can craft attacks whose outcome only impacts a certain group of people. Terrifying implications there. And that brings me to a bit of a call to action. This whole talk is kind of a call to action, but of all the people in the world best suited to understand the perils of this and be equipped to help defend the future of humanity here, I think hackers are right at the top, right? So two things. One, continue the transparency that we've talked about, with this monitor as an example, because we as hackers are generally in support of far more transparency, especially with these algorithms that touch every aspect of our lives. And two, we possess a unique skill set.
One that can understand how malicious adversaries can attack these systems, how we can defend against them, and how we can better secure the infrastructure, so that the promise of a lot of this technology can potentially give us huge insights into clinical care, right? Improved treatments, new medications that can do away with pathologies we never thought it would be possible to treat. And so, with that kind of peril and promise, we need you out there to make sure that the things that would thwart that promising future don't come to fruition. So we're hopeful there. Let's switch a little bit from the doom and gloom, flip the script, and talk about what happens when hackers pwn their own gear and are actually able to take the initiative and innovate on some of this stuff. Yeah, absolutely. So as you probably know, one in ten Americans has diabetes, 92 million Americans are pre-diabetic, and diabetes, while anyone can get it, disproportionately falls on marginalized people. It's a disease of poverty. And so people who have diabetes structurally find it difficult to demand high-quality care and to push back against abusive practices by medtech firms. So in 2013, some people with diabetes decided to do something about this. Two hackers, Dana Lewis and John Costik, took a continuous glucose monitor, figured out a way to hook it up directly to an insulin pump, and wrote an algorithm that monitored your blood sugar, tried to predict where it was going, and dosed you with insulin as you went along. And the closed-loop artificial pancreas was born. They call themselves loopers, and they gather on a platform called openAPS.org. A lot of the people who built these tools early on were parents of young children. So my friend, Sol Cajaro, is a video game developer. He worked on a bunch of Salarkey games in the old days.
His young son, who was two years old at the time, had just been diagnosed with type 1 diabetes and was in daycare. And the people who worked at the daycare were very diligent and caring, but they weren't experts in managing diabetes. And so he wanted to be able to oversee, partially automate, correct, and get alerts on his son's insulin and blood sugar levels. And so he became a core developer on the looping tools. And, you know, there's a lot of hacker overlap with this looping stuff. And it's one of these great examples of hackers helping normies, where the stuff that we build for ourselves ends up sort of leaking out into the rest of the world. And there's a reason hackers want to build looping software, and it's not because they're too lazy to manage their blood sugar. It's because doing a routine task perfectly, all the time, is why we have computers. There's a reason we replace all of our routine tasks as hackers with shell scripts. Like, do you remember when Unix systems used to ship without a pre-built cron job that rotated the log files, and they would just crash every three days because no one could remember to rotate their log files? And, you know, replacing the routine things in your life with a shell script is especially important if, when you screw it up, it's hard for you to think straight. And if you screw up your blood sugar, it can impair your cognition. So we lost a dear friend last year. Excuse me, I always get choked up at this point. But Dan Kaminsky died last year. He had diabetes. He had management problems with it. He was in lockdown. He was isolated. Other people couldn't see what was going on. And you can see how even someone as brilliant as our pal Dan couldn't manage a routine task perfectly all the time and could experience a literally fatal cascading failure. Which is why we love this stuff. So you have these hackers who are hacking hardware, hacking software, making their own algorithms.
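The core of the looping idea, read the CGM, extrapolate where glucose is heading, dose toward a target, fits in a few lines. This is a hypothetical sketch with made-up numbers, not clinical advice and not the OpenAPS algorithm, but it shows the shape of the control loop, including the kind of safety guards (never dose toward a predicted low, cap any single bolus) that real loopers build in:

```python
# Hypothetical constants for illustration only -- NOT clinical parameters.
TARGET = 110      # mg/dL glucose target
ISF = 50          # insulin sensitivity factor: 1 unit drops glucose ~50 mg/dL
MAX_BOLUS = 2.0   # hard safety cap on any single dose, in units

def recommend_dose(readings, horizon_min=30, interval_min=5):
    """readings: recent CGM values in mg/dL, oldest first, taken every
    interval_min minutes. Returns a correction dose in units of insulin."""
    trend = (readings[-1] - readings[-2]) / interval_min   # mg/dL per minute
    predicted = readings[-1] + trend * horizon_min         # naive extrapolation
    if predicted <= TARGET:
        return 0.0                                         # never dose toward a low
    dose = (predicted - TARGET) / ISF
    return min(round(dose, 2), MAX_BOLUS)                  # respect the cap

print(recommend_dose([150, 160]))   # rising fast: capped correction dose
print(recommend_dose([120, 112]))   # falling toward target: no insulin
```

A real system runs this loop continuously, folds in insulin-on-board, carbs, and basal rates, and alerts a caregiver when readings go out of bounds, which is exactly the oversight-plus-automation that the daycare story describes.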
And to do this stuff, they need to rely entirely on jailbroken hardware so they can effect these changes. So I'll bring it back to you guys. Yeah, I mean, a question that we very commonly get is: this sounds awesome, why aren't more doctors recommending these systems to their patients? Why aren't more patients coming in and asking for this type of care? And we want to hit on some of the reasons why we need to do more work. The first is education. Again, these were not tools and technologies that existed even 10 years ago when we were training, let alone when the endocrinologist who's been out in practice for 30 years was training, right? And even as widely adopted as these are in the hacker community, I think Loop, which is one of the biggest platforms, has about 9,000 people using it. And there are 1.9 million type 1 diabetics. So we really have a lot of work to do to raise awareness there. Doctors who learn about this are going to inherently worry about its clinical efficacy and safety. Put aside the fact that we give diabetics a vial of 100 units of insulin and tell them to go figure it out on their own; people are still going to say, oh, can't they screw it up if they set this up themselves? And we're just now starting to get some really interesting data to support the efficacy of these devices. There's a really interesting paper published last year by some folks at Stanford and Loop in Miami that was really unique in its design and demonstrated the promise of decentralized clinical trials. They basically just found people who were signing up for Loop, one of these systems, and said, hey, we're going to pull some data from you if that's okay with you. Give us your baseline data; we're going to see how you do over the next six months. But we're not going to tell you how to use this tech.
All of the patients in this study had to work with the community troubleshooting guidelines on their own, so it was really not a paternalistic platform saying you need to follow this protocol exactly. But the results were pretty incredible. After using this closed-loop system, patients were able to spend more time in a normal blood sugar range. They were able to avoid really significant low blood sugar episodes, and there were no episodes of the more feared complication, DKA. So, really impressive technology that we're starting to see is efficacious and pretty low risk for people to use. One thing I do want to comment on about some of these studies, and the looping population in general, is that it does tend to reflect some of the inequities we previously discussed. Of the 500 or so patients in this study, 90% were white, 85% were college educated or higher, 70% made more than $100,000 a year, and 95% had private insurance. So there's work for us, as we continue to push some of these open source platforms, to make sure that the inequity that pervades modern medicine and formal clinical trials doesn't persist in these spaces. Lastly, we'll talk really quickly about this idea of risk. In order to give something to a patient, you need to have a discussion with them. It's called informed consent. We talk about the risks. We talk about the benefits. We make sure that you understand those. Quaddi and I are working on developing this concept of a cyber-informed consent that we'd be happy to talk about with people if they're interested. But if your doctor is going to be giving you these connected technologies, we'd like to think that the potential risks, and there are risks that arise from the connected nature of the technology, would be part of that discussion. Yeah, I just wanted to quickly poll the audience here.
Like, who here has gone into a hospital, had a procedure done, and had a doctor or someone else come to them and talk to them about the risks? Okay, you might have an infection, or you might lose blood, or anything like that. Raise your hand. Okay. How many people out there have ever had someone talk to them about the risks to their privacy or security from connected medical devices when they get them? Anyone? Yeah. That's a problem, right? But it's a problem in so many really interesting ways. First of all, it's a problem because it's not the right thing; we should be informed of these things. Two, often you do not have a choice, right? What dictates which insulin pump you get when you're a diabetic isn't necessarily a free market thing where you can look at all of them and decide, this is the one I want because it's the one that protects my privacy the most. You don't have those choices in our modern healthcare system. It's an insurance thing. And the last thing I would say about cybersecurity informed consent, and I know, I said "cyber," I owe a whiskey, I'm not supposed to say that word at DEF CON, but the other interesting thing is that the people telling you about the risks have no idea what the hell they're talking about. How many of your doctors could even articulate basic security and privacy concepts? So the people tasked with asking you to consent to this don't themselves understand it and don't talk to you about it. And there are structural reasons why that's a really hard nut to crack. So changing that requires education, awareness, and a lot of other things. We're going to also talk about the elephant in the room now, which is that a lot of the features the looper community was able to accomplish with this technology were possible because the devices themselves were vulnerable. That is an interesting prospect. And I have a hypothesis here.
I want to do a little thought experiment with the audience, because I think this is a unique crowd. Who here would wear a vulnerable insulin pump, infusing insulin into their body, with Bluetooth connectivity and no authentication, so that you can change the settings just by connecting to it? Who here would wear that device and rely on it to take care of them every single day? Please raise your hand high. Yeah. I'm sorry? If it allowed you to have those feature sets? Forgive me, let me put a little caveat on it. You can wear this device, it's vulnerable, but it allows you to have all those cool features that the looper community has made. Please raise your hand. For those who didn't hear, the question was about who would wear a vulnerable pump if it allowed you to have all these cool features that the looper community is making. Raise your hand. And who here would never wear a vulnerable pump that had no authentication? Right. Well, so this is the elephant in the room. This is the seeming conflict between the looper community and the security space. Right? We've had multiple security researchers over the years disclose real vulnerabilities in these connected medical devices, vulnerabilities that pose really scary safety risks, not just to your privacy but to your physiology, to your well-being. The FDA, in my opinion, has done a fantastic job over the last 10 years of pushing device manufacturers to do the right thing about security. They've issued post-market guidance. They have a pre-market guidance document that says, if you want to market a device in the United States, you have to be this tall on the security side; you have to do these basic practices. And I will say this: the FDA is also the first to the podium to defend security researchers when device manufacturers try to take a shit on them. When device manufacturers try to intimidate security researchers or do less than desirable things to them, the FDA is at the podium defending this work.
But they're tasked with the safety of medical devices, right? And so if there's a vulnerability in a medical device, they might issue a recall. Then those devices are no longer going to be in the community for loopers to innovate on, right? This is the seeming conflict. And one of the points of this talk is to say we want our cake and we want to eat it too. Instead of a fight between the looper community and the FDA, we should be using this as an example of a market failure: consumers, patients, parents of type 1 diabetics want access to the data. They want better control over the technology that keeps them alive, and that's something they demand and they deserve. At the same time, we want secure medical devices that don't have hard-coded passwords. What we should be doing is pushing device manufacturers to do the right thing. And what we also don't want, in this dystopian future, is to allow documents like the pre-market guidance, recommendations that devices should be secure, to become a reason medical device manufacturers cite as to why they can't be open, why you can't have access to your own data, because they'll say, oh, we can't expand the attack surface. Why don't we just challenge that paradigm and say: why don't you deploy better secure development practices? Retool your security infrastructure to be both open and transparent, allowing patients to have access to their data, and far more defensible from a security posture. Is that fair to say? Is that the best thing we could do? Yeah? We need you out there telling people that it should happen. So I want to close this out by talking about how this all interacts with cyber law, the stuff EFF does, and competition. Medtech companies don't like that patients are jailbreaking their devices, and they use laws like the Digital Millennium Copyright Act to take down software that allows you to change your firmware.
It might allow you to change your firmware to make it more vulnerable. It also might allow you to change your firmware to make it more secure. So Abbott got GitHub to take down LibreLink, which was software for the Libre2 glucose monitor, in 2019, citing the Digital Millennium Copyright Act. And they argued that this was about patient safety. But there is a common motif in medtech research, especially around medical implants, and especially around medical implants for people with diabetes: firms do not respond in a timely fashion, or sometimes at all, to really significant vulnerability disclosures about their products when those vulnerabilities aren't being used by loopers to give themselves a better healthcare experience. So for example, in 2019 a couple of hackers, Billy Rios and Jonathan Butts, did a responsible disclosure to Medtronic to tell them that their MiniMed Paradigm pump was super vulnerable, and Medtronic sat on that for two years. Then, at Black Hat, the researchers who found the vulnerability released a proof of concept called a universal remote for killing people, and that's what actually got Medtronic to take action. So this is a pattern that's repeated across all the major hardware vendors, including companies like Johnson & Johnson: this track record of foot-dragging when there are issues that are live threats to patients, but leaping to action when there's a live threat to profits.
In the shareholder communications that these firms make, they're pretty blatant about what they want to do. They want to use closed ecosystems for closed loops, where you have a single vendor providing the algorithm, the glucose monitor, and the pump, and sometimes they talk about proprietary consumables, either proprietary formulations or proprietary packaging for the formulations. Basically, they want to turn your artificial pancreas into an inkjet printer and use all the tactics that we have historically seen in the printer world. There's an absolute inferno of mergers and acquisitions activity as private equity companies see a lot of potential upside from building these closed ecosystems, and the closed ecosystems beget more closed ecosystems. So there's a great advocate for this stuff, a woman who calls herself The Savvy Diabetic. Her name is Joanne Milo, and in June she sent a letter to the FDA objecting to the merger of a glucose monitor company called Dexcom with a pump company called Insulet. Although this was super anticompetitive, there's a reason Dexcom was doing it, which is that Medtronic had bought another insulin pump company and locked out Dexcom. They did that with a company called Companion, and Tandem also blocked them. So here you have this company that makes the glucose monitor and is watching all the insulin pumps get bought up by other glucose monitor companies and locked down, so they're like, we have to have an insulin pump company too. But for all that they're the victim in this, they're also a terrible company with a long history of threatening their patients when the patients take action to try to extract their data. So none of these firms have their patients' backs all the time, and if we allow them to merge and create these closed ecosystems, it comes at the expense of patients who have idiosyncratic problems with their health that they want to resolve by mixing and matching pumps, algorithms, consumables, and monitors. It also harms them in that it makes the supply chain brittle, because if your pump only works with one glucose monitor, and that glucose monitor can't be found because of a supply chain problem, then your whole pump breaks down. We saw what single-sourcing vendors did during the pandemic, and after the pandemic with things like the baby formula shortage. So it is true when these firms say patients might harm themselves by modifying their devices. It is true when security staff say we are only locking these down because we want to help our patients; it's true that that's a thing that they do. And it's something that comes up a lot when I speak at DEF CON and in other hacker forums about competition more broadly. I work on the competition team at EFF, and we're talking about dismantling Big Tech and letting smaller firms enter the market. Oftentimes I'll speak at an event like this and someone will come up and tell me about the eye-wateringly terrible stuff they block every day in their job at Apple or Facebook or Google, and they're absolutely telling the truth. But the thing that they need to recognize, that we all need to recognize, is: your boss will pay you to defend me from his enemies, but your boss is never going to pay you to defend me from your boss. And this is why, ultimately, if we're going to have an arbiter that decides which mods are safe and which ones aren't, we want the FDA to show up and say, no, don't do that, you'll kill yourself. Because if we rely on the manufacturers to do it, sometimes they'll be sincere, and sometimes they'll be talking out of their ass, and what they really mean is, don't do that or you're going to spook our shareholders. And we shouldn't have to figure out which one they mean. We should have access to a democratically accountable system that tells us what the truth is. Can we just clap on that? On that note, we have a little less than 10 minutes to get a couple of questions out of the way. We've got a mic back there, folks are able to get in line, and thanks so much for coming out. You talk
about these acquisitions, and why isn't the FTC getting involved in antitrust? Yeah, you know what, I've got good news for you about that. So the FTC took a 40-year nap. The official doctrine on antitrust enforcement in America, and most of the world, for 40 years has been something called consumer welfare, which basically ignored all monopoly problems and allowed, for example, Microsoft to corner 95% of the US market, and lots and lots of mergers: two companies make all the beer in the world, there's one professional wrestling league, all the glasses are made by the company that makes all the frames and owns all the retailers. It's terrible. But five years ago, a law student named Lina Khan published an astoundingly good Yale Law Journal paper called Amazon's Antitrust Paradox that demolished the arguments for antitrust forbearance. Today, that law student of five years ago is the chairwoman of the FTC. She has promulgated amazing new guidelines to block future mergers, and she's just announced antitrust scrutiny of privacy practices by firms, bypassing the deadlock in Congress and promising to regulate firms on privacy directly through the administrative branch. She is a hero. She needs our support. There is a public listening session on September the 8th that the FTC is holding on privacy; you can go and intervene. The FTC is really going to make a difference. We are in a moment in which we have better news on antitrust than we have had since I was ten years old. I cannot overstate how fucking great the antitrust picture is right now. It is amazeballs. I have a question about trust, as somebody who lives and breathes this as a BiPAP user. Closer to the mic, please. As a BiPAP user, what you've been describing, the dystopian future, is already a reality for me. Philips Respironics has been knowingly killing people for years, and it took them three years to actually recall their products. Similarly, ResMed: if you don't pay them enough, they're not going to tell you that you have Cheyne-Stokes breathing, even though it's just a bit flip that you need to do. How can we actually have trust in the medical institutions today? That's a really good question. Sunshine is the best disinfectant. Corey, I mean, why is this black-box, proprietary aspect of some of these things seen as a competitive business practice? Why is there this notion that, oh, if we can't keep these things secret, we can't innovate? So I think the problem is, again, antitrust. I think that when you have an industry dominated by, like, five firms, if they all settle on the same convenient bullshit, like, if I told you about it I'd have to kill you, then no one who's credible, which is to say no one who works for a comparably sized firm, steps up and says, no, no, wait, that's nonsense, we absolutely can share this information with people. Not only that, but when firms are very concentrated, they have a lot of money to spend on lobbying. So the way I think you get good regulation is by making sure that firms are neither too big to fail nor too big to jail. People looked at that photo of Donald Trump at the top of Trump Tower with all the people who run all the tech companies around a table in 2016 and said, isn't that terrible that they're meeting with Donald Trump? You know what's really terrible? That they fit around one goddamn table. Right? Because if they can fit around a table, they will sit around a table, and when they sit around the table, they're going to figure out how to screw us. And so this is why, as a prerequisite, it's not enough, but it is an absolute prerequisite for good regulation: sectors have to be diverse, with firms who will blow the whistle on each other when they're telling convenient commercial lies. So, quick, I saw the five-minute warning, so I'll try to make this quick. I have one of the Abbott FreeStyle, whatever the hell it's called. It's gen one, it's NFC only. And to your question about trust: do I trust this? I don't need insulin, thank god, yet. But there's a gen two with Bluetooth. I'm not getting that, so I have
to upload my data to the cloud as a necessity to inform my doctor. Also, I had to advocate for myself to get this, and I was going to have to pay out of pocket if not for my insurance, so I'm very lucky, I'm very privileged to have insurance. That's kind of where we're at with diabetics in general. Yeah, I had another question, but I don't think there's enough time, so I'll let other people go. I think that one of the things alt firmware can do is let you disable the Bluetooth on your device, and that's one of the reasons that we should support alt firmware. Real quick addendum: the other thing that I could have done with hacked firmware is not have to replace this every two weeks. I'm fortunate that I have good health insurance; however, my insurance dictates what devices I can use. So I'm wondering if you have any comments on where the insurance companies fit into all of this, because if you follow the money, there's quite a bit of money in the medical insurance business. That's a really great question and comment. I mean, I'm sorry that you're in the position of so many other people, which is, you may want to select a device for a particular reason but you can't because of your insurance. You know, one of the interesting arguments I've heard is to talk about how these devices pose risks to some of those payers. What am I talking about? Health insurers are going to care if they have to spend more money on you. They also care how well the devices are going to treat your illness. And so if we can talk about risks to privacy, risks to security, and what the ultimate outcome of those would be for their bottom line, it may be a persuasive argument. They also are very commonly looking for reasons to save a buck, and as these devices become more and more connected, and more and more vulnerabilities are found in the wild, their recalls can be quite costly. That will eventually hit insurance companies' bottom lines, and I could see them in the future doing a risk calculation to say, well, this pump might cost 50 bucks more than this other one, but it's more durable, it's more secure, we're having fewer headaches with it. And so more and more, as they themselves realize that, I think we'll move in a better direction. But also, you as a patient should let them know. That is really important: being a voice and an advocate for choice, with privacy and security being part of that choice, is very persuasive to the people that regulate insurance companies as well. It's going to be a long haul. Keep going, but raise your voice so that we can change that dynamic. And then also just try to advocate broadly, so that it isn't so stark a contrast between device manufacturers; really, we want to raise the entire device ecosystem in its security and privacy. We want all people, across all types of insurance. Yeah, we don't want to create a market where, like, if you're lucky, your insurance lets you eat in the restaurant where they're forced to wash their hands after using the toilet, but otherwise they don't. We just want to eliminate the restaurants where they cook your food without washing their hands. That's what we actually want: we just want to abolish the bad devices. What an appetizing metaphor to end on. We've got 90 seconds before we turn into a pumpkin, so I don't want someone to start a great question we can't finish. Thank you guys so much for coming. We're going to head over here; anybody in line who wants to come and find us, we'd be happy to talk outside. Really appreciate everyone coming. Have a great con. I'll be at the EFF booth later as well.