We probably all agree that online banking, digital health records, and being able to speak in public are all good things. And we can keep doing all those things as long as we can get the promise of privacy that we need from our software. The question becomes, then, how we get this privacy. Right now, mechanisms for privacy are largely non-technical. We rely on individual programmers to be responsible people. And because there aren't great ways to check beforehand, we trust the government to punish bad actors after a leak occurs. Can we do better?

One problem is that, using current programming languages, someone like me would have to go through and inspect potentially millions of lines of code for security and privacy. My collaborators and I show how to instead make the trusted programming language responsible for security and privacy, reducing the need for human inspection.

At this point, let's define data, software, and programming language. Data is information: credit card numbers, voices, images of eyes. Software uses data to do things like perform online retail services, make medical diagnoses, and identify people. Developers use programming languages to create software. A language can require painstaking detail or be so high-level that children can use it without much education. The less detail a language requires, the more child-proof it is, and the harder it is to make dangerous mistakes.

In recent years, there's been growing evidence that we need to child-proof our languages for security and privacy. Programmers have had a hard time even without worrying about sophisticated AI algorithms like RIDAS and Mariosas. Let's take a look at a couple of examples.

Exhibit A: Gizmodo journalist Ashley Feinberg recently uncovered former FBI director James Comey's secret Twitter account. Hoping to find Comey, Feinberg followed Comey's son on Instagram, and a leak in Instagram led her straight to Comey. Here's what happened. Comey's son's Instagram account is protected.
So who he follows should also be protected. But if we look at the suggested people for Feinberg to follow, we see Comey family members and a mysterious "Reinhold Niebuhr." Turns out, Reinhold Niebuhr is James Comey, and he uses the same handle on Twitter.

Let's look at another example, for those of us slightly less high-profile than James Comey. I've been working with people at the University of Pittsburgh Medical Center who are interested in searching over digital health records. Here, a programmer mistake could leak medical diagnoses and all kinds of other information. For instance, consider a search for all patients with blue eyes and a positive HIV diagnosis. You can easily imagine a name ending up in that list that someone shouldn't be allowed to see. Functionalities such as recommendation algorithms and search interfaces already pose grand challenges for information security.

From a programming language perspective, the problem is that we're still using 1970s technology, from before security and privacy were major issues. Modern software is then a spaghetti of code for functionality intertwined with checks for security and privacy.

As a solution to this problem, my collaborators and I have developed a language for what we call policy-agnostic programming. Using this language, programmers can attach privacy rules directly to sensitive values, instead of as a spaghetti of checks. Automated enforcement makes the language child-proof. Using policy-agnostic programming, we now need to trust only the language, and not each and every individual programmer. We're currently working on two main directions: making this approach practical, and supporting privacy rules that talk about potentially complex inferences, for instance, rules about what people can learn from my biometric data.

Despite all that's scary out there, I remain optimistic about a world in which we can share all kinds of information to better connect with each other and to make people safer and healthier.
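To make the idea of policy-agnostic programming concrete, here is a minimal sketch in Python of what attaching a privacy rule directly to a sensitive value might look like. This is an illustration only, not the speaker's actual language: the `Sensitive` class, the `policy` function signature, and the viewer names are invented for this example, loosely following faceted-value designs in which every sensitive value carries a public view alongside the real one and the runtime decides which view each reader sees.

```python
# Illustrative sketch of policy-agnostic programming (hypothetical API).
# The privacy rule travels with the value; no scattered permission checks.

class Sensitive:
    """A value that carries its own privacy policy.

    secret: the real, high-confidentiality value
    public: what unauthorized viewers see instead
    policy: a function viewer -> bool deciding who may see the secret
    """
    def __init__(self, secret, public, policy):
        self.secret = secret
        self.public = public
        self.policy = policy

    def reveal(self, viewer):
        # Enforcement happens here, in one place, for every read:
        # the language runtime, not each programmer, picks the view.
        return self.secret if self.policy(viewer) else self.public

# Example: a patient's diagnosis, visible only to the treating physician.
# The viewer names are made up for illustration.
diagnosis = Sensitive(
    secret="HIV positive",
    public="[restricted]",
    policy=lambda viewer: viewer == "dr_jones",
)

print(diagnosis.reveal("dr_jones"))    # authorized viewer: real value
print(diagnosis.reveal("journalist"))  # everyone else: redacted facet
```

The point of the design is that the search or recommendation code never mentions privacy at all; it just passes `Sensitive` values around, and the checks cannot be forgotten.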
Yes, the problem of privacy stands in our way, but using the right languages could get us closer to the assurance that we need.