There are real techniques to ensure that privacy isn't violated when you are gaining insights from data. You can write a formula that will produce something called an epsilon, without getting too far into the details, that will tell you: okay, if you query this data set this many times with this degree of randomization in the queries, the first time the likelihood of implicating someone in the result is this small, the second time it's this small, the third time it's getting a little bigger, and so on. So there are mathematical ways you can actually put a number on that. Another interesting way of getting around it is a cryptographic technique called multi-party computation, where you can take a data set, break it into pieces that are themselves encrypted or randomized, farm those pieces out to a group of computers in such a way that no one of the parties has any insight into what the data actually is, and still retrieve a result. Right now this stuff is exotic. What we're going to see in the next few years, I believe, is standards emerging that make it much more common, and sort of normal, to do this kind of stuff.
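The epsilon the speaker describes is the core of differential privacy. Here is a minimal sketch, assuming a simple counting query and the classic Laplace mechanism; the `PrivacyAccountant` class is a hypothetical name of my own for the bookkeeping that tracks how much epsilon each successive query spends under basic composition:

```python
import math
import random

def laplace_sample(scale):
    """Draw from Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """Counting query with Laplace noise; a count has sensitivity 1,
    so noise with scale 1/epsilon gives epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

class PrivacyAccountant:
    """Hypothetical helper: sums the epsilons of repeated queries
    (basic composition) and refuses queries past a total budget."""

    def __init__(self, budget):
        self.budget = budget
        self.spent = 0.0

    def query(self, records, predicate, epsilon):
        if self.spent + epsilon > self.budget:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon
        return private_count(records, predicate, epsilon)
```

With a total budget of 1.0, two queries at epsilon 0.5 each are allowed and a third is rejected, which is exactly the "each query makes the risk a little bigger" behavior described above. Real systems use tighter composition theorems than this simple sum, so treat it as an illustration only.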
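The "break it into pieces that are themselves randomized" step of multi-party computation can be sketched with additive secret sharing over a prime modulus. The names `share` and `reconstruct` are illustrative, not from any particular library, and real protocols add far more machinery (authentication, multiplication, malicious-security checks):

```python
import random

PRIME = 2**61 - 1  # field modulus; all share arithmetic is mod PRIME

def share(secret, n_parties):
    """Split `secret` into additive shares that sum to it mod PRIME.

    Any n_parties - 1 of the shares are uniformly random, so no single
    party holding one piece learns anything about the secret.
    """
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine all shares into the original value."""
    return sum(shares) % PRIME

# Two private inputs, three parties: each party adds its share of `a`
# to its share of `b` locally, and only the recombined total is ever
# revealed -- no party sees either input in the clear.
a_shares = share(25, 3)
b_shares = share(17, 3)
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
result = reconstruct(sum_shares)  # 25 + 17
```

Addition works share-by-share because the shares of each input already sum to that input mod PRIME, so the per-party sums recombine to the sum of the inputs; that is the sense in which the parties compute a result without any one of them seeing the data.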