So this morning we're talking about how to hack government: technologists as policy makers. I'm Ashkan Soltani, currently the Chief Technologist at the FTC. You might know me from my work with the Wall Street Journal or the Washington Post; I was one of the reporters that worked on the Snowden documents that brought you that smiley face. And I'm Terrell McSweeny. I'm a commissioner with the Federal Trade Commission, and I'm an attorney, not a technologist, and I've been in the policy space for a long time at the FTC, the White House, and DOJ. As you can probably tell by the difference between these pictures, I'm the type of person that really relies on technologists like Ashkan to help me do my job, which is a little bit of what we're gonna be talking about today. Just a word about the disclaimer at the bottom of this slide: both Ashkan and I work for the Federal Trade Commission. So do a lot of other people, and I actually have four other colleagues who are also commissioners, and they don't always agree with everything we say. So we're here today talking about our experiences, and we're speaking individually and not on behalf of the entire Trade Commission. Or the government. Or the government. So in this talk we're gonna cover: what is tech policy? What are the big debates right now? And why do technologists, you in the audience, why do you guys matter? And what this talk isn't: it's not a tech talk. We're not dropping any 0-days. We're talking about some vulnerabilities, but they're the human kind. They're people and process. Just to start, does that work? "The internet is not something that you just dump something on. It's not a big truck. It's a series of tubes. And if you don't understand, those tubes can be filled, and if they're filled, when you put your message in and it gets in line, it's gonna be delayed by anyone that puts into that tube an enormous amount of material."
So this is Senator Ted Stevens, and that's his relatively famous quote from the 2006 net neutrality debate, where he's trying to describe the infrastructure of the internet. And we included it just because it underscores our message, which is that people who make laws and policy need to have technical experts who help them understand the technology they're impacting. And to be fair, he wasn't totally wrong, right? It is kind of tubes. It's more tubes than trucks, for example. So, not bad, right? So there are a number of policy issues that are hotly debated in DC right now; this is kind of a word cloud of many of them. Privacy, data security, cross-border data flows, a right to be forgotten, Wassenaar, the Computer Fraud and Abuse Act, which you guys are probably intimately familiar with. There's cybersecurity legislation currently on the floor of the Senate, data breach and data security. There's a ton of debates. "But I want to start here at the FTC, because every day you take the lead in making sure that Americans, their hard-earned money and their privacy, are protected, especially when they go online. And these days, that's pretty much for everything: managing our bank accounts, paying our bills, handling everything from medical records to movie tickets, controlling our homes, smart houses from smartphones. Secret Service does not let me do that. But I know other people do." So this is President Obama, actually, at the Federal Trade Commission earlier this year, talking about a lot of the tech policy issues that we at the Trade Commission spend a lot of time on: consumer data privacy, data security, the internet of things. And this was actually really memorable for the FTC; President Obama was the first president since President Roosevelt to visit the Trade Commission.
But I think it really underscores how important a lot of these technology policy debates are right now, and how they've risen up literally to the highest levels of government. Just to note, I think it's important to explain that at the Trade Commission we protect consumers, and we're focused on practices in the private sector that are impacting them. So that's what we're gonna focus on in this talk today. So when I'm talking about consumer privacy, I'm talking about privacy as it relates to the commercial space. We're not the part of the government that gathers information about people and things like that. That's a different debate, a very important one, but we're gonna stay out of it for now because it's not in the FTC's jurisdiction. And he makes an important point that you all know, which is that nearly every part of society now has some technological component. Nearly everything we do is online or technically mediated by an app or a computer of some sort. And that's where I think we come in as technologists. So just to level set, there's a kind of alphabet soup of laws that protect consumer data and privacy in the United States. Sometimes we refer to this as a sector-based approach, and this is just a slide that runs through many of those laws. We protect children under 13 with COPPA. We have special protections for financial information and health information, and for certain kinds of student records, that's FERPA. We obviously have privacy protections in the Telecommunications Act as well, that's the jurisdiction of the FCC, and several state laws too. So that's the landscape we're operating in and the kinds of protections that are out there. And then we also have the FTC. And one of the things to think about is that, for the most part, there aren't a lot of restrictions in law on what information companies can gather, or requirements for how they handle consumers' data.
We have a patchwork, as Terrell said, of laws and regulation that essentially achieve that effect, but there's nothing specifically prohibiting companies from gathering your information, for example. So the FTC, how does it fit into this space? The FTC was created in 1914 by President Woodrow Wilson, that's him on the left. It was actually part of a policy debate that was really focused on trying to combat the economic power of the trusts. That's a cartoon about Standard Oil, you know, all that stuff. So it was created because there was a lot of concern about the power these trusts had in our economy. And that was 100 years ago, but it was given this relatively broad authority to protect consumers from unfair or deceptive acts and practices, and that's the next slide. And as Terrell said, we're, well, I'll go ahead. You go ahead. So you could think of us as the white hats of government, right? We're here to protect consumers. We're here to promote good practices by companies, get them to fix their shit when it's broken. We're kind of, as far as government goes, on the consumer side of things. And so the authority that we have, we use in a number of ways, and we're gonna spend a little time talking about that today: mostly to check that privacy promises being made to consumers are actually being adhered to, and that consumers are protected from unfair practices, especially when it comes to securing their data. And "unfair", it took me a while to get this, doesn't just mean "hey, that's unfair." There's a specific legal definition of unfairness under FTC law, and it's really important to understand as we try to highlight practices that we think might be problematic. There are requirements: the practice has to be likely to cause substantial injury or harm, the injury can't be reasonably avoidable by consumers, and it can't be outweighed by countervailing benefits. And so I'll let Terrell explain more about the law. I don't know anything about this.
Maybe you know something about this. It has to do with the Federal Trade Commission, and apparently there was a product being marketed in the United States: caffeine-laced undergarments. The idea was that with the caffeine in the undergarment, the wearer could lose weight and have less cellulite. By putting caffeine in your underpants. That seems like a really cool idea. It turns out this product doesn't actually work, which is a bummer, literally, right? We at the FTC protect consumers from a bunch of products and marketing claims for products that don't do what they claim to do. That's a really low-tech example, and that's like our bread and butter, but we also do a lot of high-tech products as well. So it wasn't just undergarments, underpants. Consumers in 1914 were adopting technologies like the phone and rail. In fact, one of our first cases was against a company that produced a calculating machine, and we brought a case alleging that it did not perform as claimed. And there we had to understand what a calculating machine, an early computer, was capable of, what claims they were making, and actually understand the technology underlying those claims. It's always been a part of our work. In the, I think, 60s or 70s, for example, we would build systems to automatically test tar content in cigarettes, right? This was a parallel smoking machine that would just inhale cigarettes and measure the ingredients. It became the FTC method. And there, again, we're building technology to measure things. If folks remember the Droid Army project that would measure vulnerable libraries across multiple Android devices in parallel, these are the types of things we've always been engaged in, to help measure and contrast and understand practices in particular spaces. So the FTC at 100 is increasingly engaged in looking at technology and high-tech products.
This gives you a sense of some of the cases that we've had involving huge tech companies in the last five years. And sometimes the FTC is even referred to now as the Federal Technology Commission, because of our increasing role in looking at the practices of these companies as they relate to the impact on consumers, especially their privacy and data security. And we're gonna step through a couple of these cases. So to start, one important thing, and we've been talking about it a little bit, is we have the ability to try to make sure people are marketing their products fairly to consumers and not deceiving them. This case is actually currently in litigation. And here the issue is: is an unlimited plan that is throttled actually an unlimited plan? Our allegation in this case is that it is deceptive to tell consumers that they are getting an unlimited plan if in fact they're gonna be throttled over certain thresholds. Right, and here we have to understand things like how network routing works, what congestion-based throttling is versus just network management, and what, for example, SS7 and the weird networking and GSM protocols are, things some people in this audience know, but obscure technical underpinnings of old telco systems, in order to bring this case. So companies would make arguments that this is how the network needs to be managed. And we would say, for example, if you're capping people at five gigabytes regardless of network congestion, that might not be congestion-based throttling. And so that becomes a core element of our case. So, privacy promises. This is a big piece of what we do on the enforcement side. And the reason we're spending a little time talking about a bunch of the FTC enforcement cases is that enforcement is actually one of the strongest tools we have in our toolbox to help shape policy and practices in the private sector.
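That cap-versus-congestion distinction can be sketched in a few lines. Everything here is hypothetical: the log format and field names are made up for illustration, not drawn from the actual case.

```python
# Hypothetical throttling log: each event records the subscriber's usage and
# whether the local cell was actually congested when throttling kicked in.
throttle_events = [
    {"usage_gb": 5.1, "cell_congested": False},  # throttled on a cap, no congestion
    {"usage_gb": 7.3, "cell_congested": True},
    {"usage_gb": 5.0, "cell_congested": False},
]

def looks_congestion_based(events):
    """True only if every throttling event coincided with real congestion."""
    return all(e["cell_congested"] for e in events)

def looks_cap_based(events, cap_gb=5.0):
    """True if throttling tracks a fixed usage threshold regardless of congestion."""
    return all(e["usage_gb"] >= cap_gb for e in events)

print(looks_congestion_based(throttle_events))  # False: a cap, not congestion
print(looks_cap_based(throttle_events))         # True
```

The evidentiary point is just that: if throttling fires deterministically at a threshold even when the network is idle, "network management" is a hard argument to sustain.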
Google Buzz, for example. This case involves a broken privacy promise that Google made to Gmail users about how they would use Gmail information, and whether they would use it in a way that was different than Gmail users expected. It's relatively important because we alleged that, in fact, Gmail users couldn't have anticipated having their information used for Buzz, and didn't have adequate notice to be able to opt out. And once a settlement is reached, the company is often put under order. For example, Google is under order for 20 years to maintain their privacy promises and in fact have regular privacy assessments. And we then use that authority to later bring other enforcement actions. So you might recall Google was also found circumventing Safari browser settings, basically respawning Safari cookies. And we later brought an enforcement action with monetary penalties once they were under order. So similarly, Facebook. In the Facebook case, again, we took a hard look at whether the representations Facebook had made to users about how they could restrict their information were actually true, determined that they weren't, and brought a case there. This is also pretty important because we looked at how a retroactive change to how users' information was handled was deceptive and unfair to consumers. So we looked at a change in policy and tried to hold Facebook accountable for the promises they had made. And this was one of my first cases. I was a staff technologist at the time, in 2010, and there was a lot of technical work that needed to be done. For example, we alleged that Facebook apps would have access to more information than what users were told. So you could restrict your information sharing to private, for example, via Facebook settings. But as folks in this audience know, via the Facebook API you could pull a lot more information, regardless of what the user had set.
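A toy model of the mismatch we had to demonstrate might look like this: the user-facing setting says private, but an API hands apps the fields anyway. All the names and fields here are invented for illustration; this is not Facebook's actual API.

```python
# Toy user record: profile fields plus a user-facing visibility setting.
user = {
    "name": "Alice",
    "birthday": "1990-01-01",
    "photos": ["beach.jpg"],
    "visibility": "private",   # what the user believes is enforced
}

def ui_view(user):
    """What the settings page implies: private means nothing is shared."""
    return {} if user["visibility"] == "private" else {"name": user["name"]}

def api_view(user, requested_fields):
    """Buggy API: returns whatever fields an app asks for, ignoring visibility."""
    return {f: user[f] for f in requested_fields if f in user}

# The UI promises nothing is shared...
print(ui_view(user))                                   # {}
# ...but an app can still pull it all through the API.
print(api_view(user, ["name", "birthday", "photos"]))
```

Demonstrating a gap like that, settings on one side, actual API behavior on the other, is exactly the kind of technical verification the case required.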
So we had to do things like understand the API, and run apps, or write apps, to demonstrate and verify our claims. There was a claim regarding deletion of photos, and so there was the need to understand how a CDN works. When you deleted your photo on Facebook, if you deep-linked to that photo, it was still available on the caching network, because they did not delete the photo off their CDN. There was another claim with regards to sharing information. Folks here know how advertisers get information via Referer headers. So Facebook claimed that they would not share information with advertisers, but in fact your Facebook ID, and perhaps the page you were on, would be sent to advertisers embedded on the Facebook page, because they didn't properly iframe the advertisement, et cetera. So the case was ultimately a legal finding, but we had to technically demonstrate what our claims were with regards to information sharing. If I could just underscore that for a second: I'm a lawyer, not a computer scientist. Half the time I might not even understand some of the terminology Ashkan was just using, but when I can work with a technologist who does, and who explains it clearly, it helps inform our mission. So that's a core part of why this partnership is so important. Nomi. So this is a relatively recent case. Again, privacy promises. This is a company whose technology allows its clients, retailers, to track users as they're coming in and out of their retail locations. And again, we brought a case here because the company said that consumers would have the option to opt out in retail locations that were using the technology, and that they would have notice in the retail locations using the technology, neither of which was happening. So we brought a case saying that was deceptive. And do folks know what this is, the retail tracking? So this is basically promiscuous-mode Wi-Fi sniffing, right?
So retailers and malls will install essentially access points that are passively monitoring for Wi-Fi or GSM beacons, and they will track you: whether you return to the store, or where you walk through the mall. And again, there were a lot of technical claims. For example, a lot of the companies in this space will argue that the information they collect is anonymous. They might collect MAC addresses, but they use cryptographic functions to anonymize them. In fact, as you guys well know, they're hashing the MAC addresses with known hash functions. It's a six-byte MAC address, and the first part is essentially the manufacturer code, so the space required to actually brute-force a hash is quite small, something like two to the thirtieth, and you can do that on a regular laptop, or you can download a rainbow table. So when companies claim that this information is anonymous, we would demonstrate that in fact it's not, right? The current status quo is that you can reverse the hash and go back to the MAC address. Similarly, with regards to claims about notice, companies would claim that, for example, the tracking only occurred within the mall. But as you all know, Wi-Fi signals can pass through walls, right? So visitors to the store next door would also have their information captured by this location tracking technology. So informing the pros and cons and the pitfalls was critical to actually bringing this case. I think this is where we as a community can really contribute. Finally, Snapchat. This is a case that's part of our ongoing effort to make sure that apps are being marketed truthfully to consumers, and in this case we alleged it was deceptive when the company claimed that messages would disappear forever when in fact it was relatively easy to capture them.
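Going back to the hashed-MAC point for a second, the brute-force is simple enough to sketch. This assumes an unsalted SHA-256 over the raw MAC string, which is a simplification; real deployments varied, but the search-space argument is the same.

```python
import hashlib

def hash_mac(mac):
    # Unsalted hash of the MAC -- the kind of "anonymization" at issue.
    return hashlib.sha256(mac.encode()).hexdigest()

def brute_force(target_hash, oui, limit=2**24):
    """Recover a MAC from its hash by enumerating the three device bytes
    under a known manufacturer prefix (OUI): at most 2**24 candidates."""
    for n in range(limit):
        suffix = f"{n:06x}"
        mac = f"{oui}:{suffix[0:2]}:{suffix[2:4]}:{suffix[4:6]}"
        if hash_mac(mac) == target_hash:
            return mac
    return None

# An "anonymized" record captured by a retail tracker:
observed = hash_mac("a4:5e:60:00:12:ab")
# With the OUI known, the hash reverses on a laptop:
print(brute_force(observed, "a4:5e:60"))  # a4:5e:60:00:12:ab
```

Multiply by a handful of common phone OUIs and you get roughly the two-to-the-thirtieth figure mentioned above, still trivial for commodity hardware or a precomputed rainbow table.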
We also had to look carefully at data security practices and some of the other practices, and found that the claims being made were similarly misleading. And this is another area where folks in this community contributed; there was a talk, I think in 2013, about ephemeral apps and how they weren't in fact ephemeral, and this community has contributed quite a lot of research demonstrating that when companies make privacy and security claims, which is something we want, right, we want privacy-preserving apps, but when those claims aren't true, it actually harms consumers and harms trust in the industry. So if you Snapchat some sensitive photos to someone and they can easily scrape that information, consumers are effectively harmed, because they were sending those photos under an assumption of trust. And so, in addition to the deceptive claims, we also bring enforcement actions against companies that fail to meet reasonable security practices. So one of the cases that I worked on was HTC. If folks remember, there was this hubbub around Carrier IQ, which was kind of a telemetry app inside multiple smartphones. The carrier would essentially get the OEM to load this kind of logging feature to help understand how people use the phones, or to monitor the devices. And in this case, HTC integrated the Carrier IQ system, as well as their own logger, in a way that, for example, broke the Android permissions model. They allowed unsigned code. The daemon was using an inetd-style listener bound to essentially all interfaces, so any other app could also pull information from the daemon. They enabled debug options in the production code, so it was logging all sorts of things, like key presses and SMS. And so here we made the case that through this poor integration, by not having proper procedures to review code, test, and do code signing, they were not maintaining reasonable practices, right?
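The all-interfaces listener problem is easy to demonstrate with a few lines of generic socket code. This is a sketch of the class of bug, not HTC's actual implementation.

```python
import socket
import threading

# Toy "logger daemon": binds to ALL interfaces and hands its data to any
# client that connects -- no authentication, no permission check.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))   # the bug: all interfaces (port 0 just lets the OS pick)
srv.listen(1)
port = srv.getsockname()[1]

def serve_one():
    conn, _ = srv.accept()
    conn.sendall(b"keypress log: h e l l o")   # sensitive telemetry
    conn.close()

t = threading.Thread(target=serve_one)
t.start()

# Any other app on the device (or host on the network) can read the log:
client = socket.create_connection(("127.0.0.1", port))
data = b""
while True:
    chunk = client.recv(1024)
    if not chunk:
        break
    data += chunk
client.close()
t.join()
srv.close()
print(data.decode())   # keypress log: h e l l o
```

The fix is equally small: bind to a loopback or abstract socket and authenticate the caller, which is exactly the kind of "reasonable practice" at issue.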
They broke the Android permission model and would effectively leak information to other apps. And just a word about these security cases. We've been talking so far about privacy cases, which usually involve some sort of deception, a promise that's made that isn't kept. For these security cases, we also use our unfairness authority, which means essentially, and we talked about this a little bit at the beginning, that we are looking at whether consumers can avoid the harm to them, and whether they are in fact harmed. And that's the way we look at it. In the security context, that leads us to take a look at whether the practices being used by the company are reasonable. We don't believe that there's such a thing as perfect security, and perfect security isn't what we're expecting here. But what we do enforce is having reasonable security measures in place to protect the consumer data that the company has. And that's where the technologists are absolutely vital to our mission, because we have to have the expertise to understand the security practices and procedures that are in place. And we have a number of these app cases. This is another one where multiple apps, in this instance Fandango, the ticketing app, and Credit Karma, an app that lets you pull your credit score, were essentially not validating certificates, allowing SSL man-in-the-middle attacks and defeating the whole purpose of SSL. And so we brought a case against them: by breaking cert validation, they were not engaging in reasonable security practices. And these cases also involve a deception element as well. So promising you're securing something when you're not can be deceptive. Then one of the recent cases is TRENDnet, which is an IoT camera; this was one of our first IoT cases. Folks are probably familiar with this.
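The cert-validation failure is worth seeing in code. Using Python's ssl module as a stand-in for the mobile TLS stacks involved: the secure default verifies the certificate chain and the hostname, and what the broken apps effectively shipped is the equivalent of turning both checks off, so any man-in-the-middle certificate is accepted.

```python
import ssl

# What a correct TLS client does: verify the chain and the hostname.
good = ssl.create_default_context()
print(good.verify_mode == ssl.CERT_REQUIRED)  # True
print(good.check_hostname)                    # True

# What the broken apps effectively did: accept ANY certificate, so an
# attacker on the network can MITM the "encrypted" connection.
bad = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
bad.check_hostname = False
bad.verify_mode = ssl.CERT_NONE   # the bug underlying the cases
print(bad.verify_mode == ssl.CERT_NONE)       # True
```

The point of the cases was that the second configuration, whatever library it's expressed in, isn't a reasonable security practice for an app handling financial data.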
It was kind of a webcam, and they advertised that it could be used to monitor your baby, or be used in banking environments. It was a "secure" camera, except the SecurView functionality allowed any user who could pull the IP address of the camera to pull the video feed, even if the user had marked the video as private. And this is one of those cases where, again, understanding how to connect and how the network worked mattered. This was a really easy one, you could connect directly to the camera, but some of the newer cams, for example, will have either bad defaults, or the way they do port negotiation will allow an attacker to pull video from a number of cameras without even needing to go to the admin interface. And so these IoT cases are really interesting for us, because we're trying to understand this space. There have been a number of talks here informing some of the problems in IoT, and I think this is a critical area for us. And just a word about the harm in this case. It's obvious what the harm is if it's credit card information, financial information, that kind of thing. Here, though, the harm is exposing video feeds from people's homes to the public internet. And we think that's a violation of privacy that is deeply harmful. Yeah, there were examples of people in various states of undress and engaged in sexual activity, which we would argue is somewhat harmful if an attacker can review it, or not even an attacker in this case, someone who can just punch an IP address into their browser and watch that feed. And then the last case we'll talk about is a case called DesignerWare. This is an older case; I didn't actually work on this case. I'm sorry, I got them mixed up, the nudity was in this case, not the earlier one. This was essentially a RAT, right? DesignerWare made software that allowed rent-to-own companies to monitor devices that they rented to consumers.
And so the software could be enabled to monitor keystrokes, take screenshots every two minutes, take webcam shots every two minutes, and there was no notice to consumers. And so we argued that this was an unreasonable practice, especially since, as I said, in some cases pictures of children, individuals not fully clothed, and couples engaged in sexual activity were captured by the software. And this was for people that were late on their payments, right? The rent-to-own companies would enable this for people that were late on payments and would capture this data without the users knowing. So again, unfair and deceptive, harmful to consumers, because it's exposing them to things that they can't avoid, that they're not aware of, and they're really very vulnerable if their computer is turning on and taking screenshots, sort of like Mr. Robot, of their home or personal life. And I just want to highlight, we've only covered a few of our cases, but there are a ton that are very technical and informed by this community. We've brought cases against companies using Flash cookies to circumvent browser settings and privacy controls. We recently brought a case against a mobile app that had a Bitcoin miner embedded in it that, kind of like SETI@home, would use your phone to mine bitcoins when you were not using it. We've brought cases against companies using CSS history sniffing, which again came from an academic paper that informed us of the practice, but we then verified and demonstrated the problem ourselves, again a very technical concept. And then we have a number of cases in data security. But in addition to enforcement... Yeah, so we've been talking about our big stick, which is our enforcement cases, and there are a bunch more of them, as Ashkan just said, so we can provide that information for you, and it's on our website. But we also shape policy and the public policy debates in a variety of other ways.
We convene workshops, for example. We've been very focused on the internet of things. We think the innovation and the potential in this space is absolutely terrific for consumers, but as a bunch of the folks in this room know, and as has been demonstrated throughout DEF CON this year and last year, there are a lot of potential pitfalls and vulnerabilities in these products, so we're looking carefully at that. We're looking at data discrimination. We're looking at health and fitness wearables and some of the practices there that are impacting consumers. Again, if you're generating your own health information and sharing it, it's not HIPAA-protected, and a lot of people may not totally understand that, so we're trying to understand how that information is being collected and shared. And cross-device tracking: we're doing a workshop this November on cross-device tracking. Do folks know what this is? Cross-device tracking is an industry term. Essentially, advertisers want to know, for example, when you see an ad on your mobile device and later purchase the product on your tablet or your home computer, that you're the same user, so they can attribute the impression or the conversion correctly, right? But the technology behind it is quite interesting, and so we're having a workshop to discuss some of the concerns and some of the consumer notice and choice functionality. For example, the technology works either through logged-in services, so you could be logged into one of the big companies, Google, Facebook, Twitter, and they're able to know you're the same user. But for a good portion of the practice, a lot of companies will try to behaviorally model that two devices are related.
So you might have a burner phone and you might have your regular phone, and you try to keep those separate, but in fact the technology will try to fingerprint and then identify that the same phone is connecting from the same location, or that the two phones are connecting from the same location, or maybe they visit the same websites, and it's using browser fingerprinting or some sort of behavioral or statistical correlation. One of the more interesting technologies, for example, is something that might resonate with the badBIOS crowd: one company might use audio beacons, near-ultrasonic audio beacons that we can't hear. The ad library in one phone will emit this pulse, and an ad library in the other phone will pick up that these two phones are in proximity to each other, and then they will link the two together. That's a current practice, and something we wanna learn more about. So we also put out material. We're really excited right now about our Start with Security initiative. We're going to be convening workshops around the country starting in the fall, but we've already released a report called Start with Security, it's on our website. If this is your field, I encourage you to take a look at it. It's based on the more than 50 cases that we've brought in this space, and it really gets into lessons learned and best practices that we're drawing from our enforcement efforts. So yeah, one's happening September 9th and the other's happening September 5th in Austin. We're trying to bring start-ups, researchers like yourselves, VCs, and others together to say: how can we think about security from the get-go, how can we prioritize, and what are those best practices? I also have a kind of tech FTC blog, and there I try to highlight some of the more technical best practices, or some of the technical concerns in the space, and again, it's very much informed by this community.
And the posts tend to be written by me or other technical folks at the FTC; there are actually quite a few. This is a blog post on the principle of least privilege and how to access APIs securely. So there's a bunch of work that we're trying to do in addition to enforcement to inform the community on what should be best practices, and again, it's very much informed by the work you all do as well. We've also been really excited to use the America COMPETES Act to run contests to harness the technical know-how of this community and others. So this week we've actually been running our Humanity Strikes Back contest, which is helping us bring new tools to consumers to block robocalls and then report them into a crowdsourced honeypot. We're pretty excited about this contest. It builds on our effort here last year, which was called Zapping Rachel. Rachel is that annoying robocaller: "Hi, I'm Rachel from Cardholder Services." This is really important to us because we operate the Do Not Call list, but we get about 170,000 complaints a month about robocalls from people who are on the Do Not Call list. It's really hard for us to enforce our laws and protect people from these really annoying calls. So we're trying to develop new tools and harness new technology that will not only help us with our enforcement effort but also give consumers new tools. So contests are awesome. And the contest winners will be announced today at two, here, at the award ceremony. And this is a great example of the harmony between technology and law, right? A lot of what happens in policy is the lawyers are like, "technology will fix it," and us techs are like, "someone should make a law against that," right? But in fact, you need the two working together. So we have the Do Not Call list, but as we know, robocallers can jump on PBXs and make calls from any number, and it's hard to do sender reputation, right?
So what are some other tools we can employ to protect consumers, to carry out the Do Not Call mission, which is to protect consumers from robocalls? We also have, I'm happy to announce, we're kicking off an Office of Technology Research and Investigation. This is essentially an in-house research group that I'm helping put together. I actually have some interns in the audience here that managed to wake up at 10, nice work. And they're doing ongoing, proactive research into emerging technologies, right? So we're black-boxing things ourselves, we're poking at things, we're looking for vulnerabilities, we're looking at data issues, we're looking at data discrimination. Some of the topics of interest for me personally are the internet of things, obviously; connected cars, which is a hot issue; and this idea of algorithmic transparency. By that I mean: in some of my past work I've helped highlight companies charging different prices to people based on your zip code, or based on Referer headers, or what cookies you might have. One way I like to drive the message home: how many folks here in the audience use some sort of mapping software, Google Maps or Bing or whatever else, right? Almost everyone. But how many actually know whether you're being served the best routes, right? We assume the system is routing us based on shortest distance or traffic or congestion, but how do you know it's not routing you in front of a storefront or a billboard the company receives kickbacks for? We don't, right? We don't currently have a way to look into algorithms and know what biases are inherent in them. Driving directions are one example, but there are issues with regards to discrimination and gender biases, and a lot of problematic areas in society where algorithms can either directly or inadvertently contribute to bad or biased practices. And so that's an area that I'm very interested in black-boxing.
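Black-boxing a pricing algorithm can start as simply as probing the same product under different simulated contexts and diffing the results. This is a schematic sketch; the fetch function here is a made-up stand-in you'd replace with real instrumented requests.

```python
def detect_steering(fetch_price, product_id, contexts, control="control"):
    """Probe one product under several browsing contexts (zip code, referer,
    cookies, ...) and report any context whose price deviates from control."""
    prices = {name: fetch_price(product_id, ctx) for name, ctx in contexts.items()}
    base = prices[control]
    return {name: p for name, p in prices.items() if p != base}

# Stand-in for an instrumented HTTP client; this fake steers on zip code.
def fake_fetch(product_id, ctx):
    return 119.99 if ctx.get("zip", "").startswith("9") else 99.99

contexts = {
    "control":    {"zip": "10001"},
    "west_coast": {"zip": "94110"},
    "no_cookies": {"zip": "10002"},
}
print(detect_steering(fake_fetch, "sku-42", contexts))
# {'west_coast': 119.99}
```

The research problem, of course, is everything this sketch hides: building realistic personas, controlling for time and inventory, and doing it at scale, but the basic differential-probing idea is the same.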
And can I just — a word about this OTRI office, which I think is really, really important. In fact, Ashkan's role, and the role of technologists at the FTC: what we see is that to protect consumers in an increasingly wired and connected world, we need our own technologists helping inform our law enforcement and policy mission. So we're expanding the role that technologists are playing, and I think it is vitally important so that we can understand exactly what's happening in the marketplace and keep up with a very dynamic, exciting, innovative marketplace. And we shouldn't hog the stage with just the FTC's work, right? This has all been about what we're doing now at the FTC, but there are a ton of— It's that the FTC rocks. That's true too. But there are a ton of other technical debates currently going on in DC that a lot of us are not engaged in, and they're in fact critically important to technology, or require a deep understanding of technology to get right. So data security, export controls — we talked about a bunch of them — drones, student privacy, patents. I think this was your slide. Yeah, that's right, I think you're doing a really good job with it. Student privacy, patents, facial recognition, trolls — these are all really important tech policy debates. The FTC plays a role in them, and so do a lot of other government agencies, and so does Congress. And we are here to make a plug for you to bring your technical know-how into that debate and have a voice in it as well, because we think having technologists at the table is absolutely vital to getting these policies right. That's right. And so this is Call of Duty — it's really fun — but this is another call of duty, right? This one is actually a lot more fun, right? Telling a bunch of high-powered policymakers that their understanding of the world is completely wrong is really great, actually. It's quite fun — I've done it, I've testified a few times — but it's also critical, right?
Otherwise — if we're not engaged in these debates, if we're not trying to inform how things work and what the policy impact of proposed laws will be — then people who don't have any technical background, who brag about the fact that they don't use email or that they have a flip phone (in Congress there's a flip phone caucus, and I don't want to pick on particular members), people who say "we don't need to know about technology to make the right law, we think the internet should work like this, or technology should work like this" — if we don't actually engage and inform this debate, those people will make laws that affect us, affect you, without any understanding of the underlying technical impact. And so... So the main takeaways, in case you missed it: we need your help. Your work has a lot of impact on what we do, it matters, and we want you to come help us. We've been talking a little bit about some of the more formal ways we engage, but I also want to put in a plug for just coming in, having a brown bag, and talking to us about your research. We do that regularly at the FTC and it's really, really helpful. Yeah, so if you want to contact us: ftc.gov/tech is my blog, and tech@ftc.gov is also how you can reach me. If you're coming to DC and you want to give a presentation on something we might be interested in, shoot me a line. Check out the Office of Technology Research and Investigation, OTRI. In fact, we are hiring, right? We're currently looking for white hat researchers, research coordinators, people to do research in-house in our Office of Technology Research and Investigation. So I urge you to either email me or check out the USAJobs posting under FTC. It's a horrible website, so you might want to actually just email me. Yeah, and you can come work— I won't argue it's the most glamorous job. There are a bunch of constraints, but again, it's a tour of duty, right?
So if you come for just one or two years and help work on some of these issues and help do research, you get to work on really fun stuff, right? You can poke apart any new technology and then make policy recommendations. So in addition to poking apart stuff, you can say, well, you know what? This is how things should work, right? You can make suggestions for policy and for how companies might want to implement things, or just ideas — you can come up with creative solutions to these problems. So it's pretty fun, even though, you know, it's government and your friends will make fun of you and play Spot the Fed. So that's our talk. We have a few minutes for questions. I'm happy to take questions. I'm also happy to talk about a few tips and tricks on what I've found with regard to talking to policymakers. A lot of it is being able to communicate things effectively, and — not that this audience doesn't do a good job of that — there's a particular way you need to engage policymakers that I've found to be particularly effective, so I'm happy to talk about some of those tips and tricks, whatever you guys want. There's going to be a mic going around, so if anyone has a question, raise your hand. Otherwise I can ramble on for another 10 minutes. All right, right here, fourth row. Good morning, thank you for coming. One of the big stumbling blocks in consumer privacy right now is how to demonstrate cognizable privacy harms, and that's been a stumbling block for the private plaintiffs' bar. Whereas in the enforcement actions you mentioned, the Commission doesn't have to show actual harm to consumers. I was wondering if you could speak to the Commission's efforts, if any, to bridge that gap or provide any sort of guidance or assistance to private plaintiffs who do have to show actual harm to themselves.
Yeah, I mean, this is a really important question, and as you point out, we continue to use our authority to try to make sure that privacy promises that are being made are being adhered to. But again, I would underscore that we really look at the kinds of information that are exposed, for example in data security cases, and look very carefully at the promises and commitments that were made and whether they were adhered to, or whether people were misled. And it's important to stress that there is currently no baseline privacy law in the US, right? There are some bills that would provide one, but right now a lot of the issue is this: if companies say clearly that they want to, you know, pwn you in some way, and consumers for the most part understand that, then — with the exception of cases where you can demonstrate harm, or where there are other concerns — they get to do that, right? And that's part of the issue, the way the policy is written. That might change over time as people's understanding of privacy and its impact changes. But right now a lot of our authority is based on this "unfair or deceptive" standard, which we've been trying to use to protect consumers. But the laws are different than they are in Europe, for example. Fifth row? Yeah, my question is — and you guys have done a great job of explaining the cases and why it matters — what happens to the companies when they're found to be in violation? So generally, in these consent orders we've been talking about, we have the company under a 20-year order. The orders contain independent audits every two years and requirements to have privacy and security policies. As part of that process, we can hold them in contempt if they're in violation, and we have actually done that. Right, and we can also say: stop doing that, that's bad. Or we can fine them as well, depending on the type of issue.
So if there are particular practices that are problematic — and a lot of our other cases are fly-by-night scammers, people who scam your grandmother out of her savings — we go after those companies and shut them down as well. So we have a bunch of different authorities, and it's pretty fun, I will say. One of our other authorities, which comes up more in the scamming space, is that we can get consumers money back. So in our mobile cramming cases, for example, and in-app purchases for kids that were misleading, we got hundreds of millions of dollars for consumers, which goes directly back to consumers in the form of redress — which is like a check in the mail. There are some questions over here too. Maybe speak loudly and I'll repeat your question — how about that? Actually, there's one with the mic right there. Oh, there's one with the mic right there, sorry. Hi, can you detail the process of what happens when a tech policy law comes in — like SOPA, TPP, CISA — when a policymaker is more or less out of their depth? How are they getting that technical information in order to make an informed decision? Sure. So it's definitely not the Schoolhouse Rock "how a bill becomes a law," right? I will say some policymakers have very technically adept staff — there are folks in Congress who really are CS majors, or whose staff are — but some don't, right? And this goes to some of the points I want to make. Oftentimes they don't know what they don't know, and in fact they don't care, right? Because even though these are important issues, so are starving children and nuclear holocaust and whatever else, right? There are a lot of issues in Congress, and so to make this stuff tractable or important, you have to highlight one of those critical points and make sure that in the talking points, in the debates, the point is being made, right?
So if it's car security, people have to demonstrate that it is in fact an issue, and that people should consider automobile safety in addition to, say, increasing jobs, right? And so— can I jump in as a non-technical policymaker? I think when you're talking to us, metaphors matter, images are good, pictures are good, acronyms are bad — we don't understand them anyway, you know? So you have to really back out a little bit, speak plain English, use pictures and diagrams, and try to make it real right at the beginning, and then get technical. That's good. Right, and I'm going to jump through a couple of these points. Don't assume your audience knows what the hell you're talking about and will just nod — in the same way that you guys saw all those acronyms, CFAA and COPPA, and you just nodded, don't assume that the people you're talking to even know what TCP/IP is, or what HTTP is, or any of these ports. You always have to back up and then go forward. But also, don't make us feel dumb for not knowing. Right, definitely don't condescend. Metaphors are critical, and metaphor is this really powerful tool, right? Because you can help someone understand the concept — but the metaphor has to stay intact, right? So say you tell them it's more like a phone call than postal mail. If the thing is in fact store-and-forward, then maybe it's not like a phone call, right? Maybe it's cached, right? So you have to make sure you pick a metaphor that holds up. But it also helps a lot to do a little homework on what the law is around the metaphor you chose, right? Law often works on precedent, so there are laws regarding phone calls and wiretapping. If you understand those laws, then you can use the metaphor and build on it in a way that resonates with the person who's working on the law.
Don't — so we have a tendency, I have a tendency, to make sure you know every fricking detail about the thing and all my findings, right? Realize that, again, people have limited time, and you want to start with the crux. I got this a little bit from my work as a journalist as well: what resonates with people, and what is the turning point that this decision hinges on? So if it's policy regarding where data is stored at international borders, first focus on just that piece — not the transit and not all the other details — and then expand, right? Because you need to hit home with the one piece and not go into too much detail, or you'll lose your audience. And then finally, just be patient, right? So that video we started with, the Ted Stevens one, was kind of funny, but a lot of what happened there — and I feel bad because we contributed to it — is that now the technical debate doesn't happen on the Senate floor. People are not willing to engage in technical conversation because they don't want to feel like noobs or be made fun of for saying something wrong. So it's a lot of this community's work to say: these are important issues; here, I'll help you understand, I'll bring you along; you're not dumb for not understanding it. In the same way that you guys aren't dumb for not understanding the law when your lawyer starts giving you a bunch of acronyms and your eyes glaze over — at least mine do, right? So we have to have that same level of respect and patience as well. Hi. I don't want you to take this as a personal attack. Sure. I think it's important for full disclosure. Ashkan, I assume that you went through procurement when you received your position at the FTC. However, Terrell, if I understand correctly— Terrell. Terrell. That's okay.
As a commissioner, you are an appointee, right? Yes. So at least there's a perceived allegiance — people would think you would make decisions about what to prioritize and investigate based on whoever appointed you, and maybe even fire Ashkan if he voiced opinions you don't fully agree with. Can you address that a little bit? Whether I'm going to fire Ashkan? No — just to be clear. So I think the point is actually a really good one, because you're talking a little bit about how our government works, which is that people who are political — hacks, well, that's the majority of it — people who are political are deeply engaged in making very important policy choices, and hopefully we are using experts to help us do that. I am definitely political. I'm appointed by President Obama; I'm actually filling a Democratic seat on the Commission. The way we're set up is two Republicans, two Democrats, and a chair who's appointed by the president. So there's no question that there's a divide there along some sort of political and partisan philosophy, right? And that does show a little bit — we don't always agree; sometimes three of us agree and two of us don't, or something like that. And we try to explain where the differences are. But all of us are really committed to using experts. I would say we use economic experts, we use technical experts, and it's a really, really important part of our work. And we're out of time, but just to jump in: there are political agendas, for sure, and this is how government works, but you can't underestimate the power of truth, of information. If you can demonstrate that you can pop a system, or demonstrate that the information is accessible, or essentially use science to prove a point, then the politicians or the people with an agenda have to either ignore you or try to silence you.
But basically, highlighting the realities of the world — at least in the security world — is very powerful and rises above the politics, right? Security is kind of a bipartisan issue, which is great: nobody in Congress wants to get popped, right? That's the worst thing for them — they do all sorts of sketchy stuff. So you might find that by highlighting things, just trying to speak the truth, and then working with the press or with people to shine a light on these issues, you can cut through a little bit of that political bullshit. Sweet, I think that's our talk. I think we're out of time. Thank you very much for getting up early and coming. Yeah. Woo!