All right, folks, we're going to get started. Anyone want to move closer up? Or does this work for everybody? All right. Come on up, come on. Thank you to those of you who are moving up. So welcome to this session about big data and transparency. And what I would argue, by the way, is that everyone at this conference should be in this session, because this is where it's at. This is the future. I would also argue that big data, transparency, principles for responsible data, this is a governance issue. It's one of the biggest governance issues of our time. By the way, I'm Erika Karp, the founder and CEO of Cornerstone Capital Group. We are an impact investment advisor with investment research at our core. And I'll tell you, we actually just did a piece on data, data transparency, and what data means to business models. So we can talk about investments, but more importantly, we're going to talk about the fundamental issues here. As I mentioned, when you are systematically integrating environmental, social, and governance factors into your investment process, it's the G, the governance, that comes first. And the issue we have here, big data, is the G. It's the big G.

So let me give you a little bit of color on a study. This is probably about a year and a half old, but you can imagine the numbers have gone in one particular direction. This is a piece of research from the Pew Center at Penn. The numbers they shared are that 74% of the population feel that it's important who controls their data, and they feel that they are not in control of it. That's 74% of the population. At least 50% of the population has no idea how their data is used at all. And again, I think that number is probably going higher. The Pew Center says that 92% of their study participants agree that consumers have completely lost control of their data and don't even know how their data is collected or deployed. That's 92% of us. And 68% agree that the laws just are not there, that they're not enough to protect their data, and that the government should do more. So in terms of what's going on, this is huge. What I would also tell you is that transparency is huge to sustainable and impact investing. That G, the big governance, covers the issue of transparency. So big data is arguably a huge opportunity to give transparency to what's going on in the investment world and the economy. The question is, is the data quality any good? And we're going to talk about that. And is the transparency equal or unequal for people in different categories? So there's a lot to talk about here, and I wanted to give you that background. What we're going to do with our two very accomplished, esteemed speakers is ask them to introduce themselves, Sonia and Tim. And then we're going to start going into the risks and opportunities of big data, transparency, and data privacy. So Sonia, why don't you go first? Tell us about yourself.

OK, great. Hi, everyone. It's a pleasure to be here. My name is Sonia Katyal, and I teach at the law school at Berkeley. The best way to introduce myself is to talk a little bit about what drew me to law school. I went to law school because I wanted to be a civil rights lawyer. I thought that I wanted to do work on anti-discrimination.
But I went to law school so many years ago that when I got there, there was this exciting thing happening called the internet. And so I wound up switching course pretty dramatically, and realized that so many of the things I cared about through the lens of civil rights were actually playing out in the tech space. After I graduated, I clerked for a couple of years in California, where I was able to work on some of the big foundational cases about the interface between law and technology and these emerging civil rights issues. And then I decided I wanted to be enthusiastic about technology and the ways in which it could solve social problems, but also mindful of the ways in which it can deleteriously affect our civil and human rights. And so I went into teaching. My scholarship, writing, and teaching mostly focus on the interface between intellectual property principles, that is, copyright, trademark, patent law, and trade secrets, and the way those principles interface with our traditional concerns about civil rights: anti-discrimination, privacy, due process, those kinds of considerations. I do work in a bunch of different areas, but in general it's pretty much around that main interface. It's great to be here.

Tim Morey. I work for a company called Frog Design. We're a design and strategy firm working mostly for Fortune 500 companies, but also some startups, and we tend to help them with products and services that are outside of their core. So we'd work with a car company to design a post-self-driving-car experience, or we'll work with a drug company to design products and services that go beyond pills and powders. Through this work, particularly in the last decade, we've been drawn into designing products and services that leverage data, personal data. In the decade that followed the iPhone, we were helping a lot of these companies mobile-enable their businesses, and that got us thinking about what the appropriate use of this data is. What's the value exchange? How do we make it clear and transparent to the users of a product? What are the boundaries and trade-offs we need to think about? I captured our thinking around this in a framework and published it three or four years ago in an HBR article, and we still use that framework internally at Frog when we're designing data products. So my role on the panel today is sort of representing the side of making products and services that leverage people's personal data, pushing forward as we get more sources of data to build into these products, and thinking about the boundaries of what's ethical and what's value-adding to people as we build them.

So I would assume a lot of us are not technical experts, and are more on the investing side or the entrepreneurship side. So let's start with some definitions, all right? Sonia, give me the definition of big data and AI. And then Tim, why don't you do machine learning and quantum computing? You good? OK, Sonia.

OK, well, big data, and this is, again, just super high level.
What big data involves, very differently from traditional sources of data, is massive, massive quantities of data culled from consumer practices, mainly online, perhaps on mobile phones: everything from buying habits to preferences to anything we could capture about a click stream, or the way people respond to particular questions on a survey. All of that is culled into these massive data sets. And Tim, you should totally jump in if you can add to this definition. But the basic idea is that big data is significant because, rather than talking about a small sample of consumers, we're actually studying patterns that are demographic and population-level. The kinds of issues we would imagine in, say, a census are the kinds of issues we capture with big data. The thing that is really significant about big data is how it interfaces with AI, otherwise known as artificial intelligence. And again, this is at a very high level; I'm sure there are others who could explain it in more technical detail. But in general, when we think about automated processing of big data, there are two big frameworks for responding to data sets. One way we process big data is by designing algorithms that then process the data. Those algorithms are designed by humans, and with them we are able to detect certain patterns in data and figure out relationships between variables that we might not have anticipated. A lot of the big issues that come up with designed algorithms have to do with who designs them and the kinds of patterns we program them to look for. But the more interesting set of foundational problems comes from what we think of as self-learning. These are algorithms where the computer itself is trying to detect patterns and pair variables. There's a tremendous amount of opportunity here, in terms of using data about consumers to discover new and unanticipated associations, personalize people's online experiences, and understand their practices. But it also raises really significant problems, because sometimes these algorithms can mistake correlation for causation, or lead to automated decisions that really implicate issues of civil rights.
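To make that distinction concrete, here is a minimal sketch, with invented clickstream numbers, of the difference between a rule a human designs and a pattern a program derives from data on its own. The data, the threshold, and the function names are all hypothetical, purely for illustration.

```python
# Toy clickstream records: (pages_viewed, bought_something). Invented data.
clickstream = [
    (2, False), (3, False), (4, False), (5, True),
    (6, True), (7, True), (8, True), (9, True),
]

# 1) A human-designed algorithm: an analyst hard-codes the pattern to look for.
def designed_rule(pages_viewed: int) -> bool:
    return pages_viewed >= 5  # a human chose this threshold

# 2) A "self-learning" version: the program derives the threshold from the data,
#    picking whichever cutoff best separates buyers from non-buyers.
def learn_threshold(records):
    pages = [p for p, _ in records]
    def accuracy(t):
        return sum((p >= t) == bought for p, bought in records) / len(records)
    return max(range(min(pages), max(pages) + 1), key=accuracy)

learned = learn_threshold(clickstream)
print("learned threshold:", learned)  # 5 here, found without a human choosing it
```

Both paths land on the same pattern only because this toy data is clean; the civil rights concern above is that on real data the learned cutoff can silently encode correlations, for instance proxies for income or race, that no human ever chose.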
Tim, do you want to say something?

I was just going to build a bit on how we think about the layers of that big data. When we're building products and services on data, the least sensitive is that self-reported, created data: things you put on your social network, or things that we gather traditionally.

When you say sensitive, I'm sorry to interrupt, what do you mean? Do you mean sensitive as in personal?

Yes, I'll be precise: how touchy consumers, people in their daily lives, feel about that data being used in some way. The data we're most happy to give up, and to have used in the services we use every day, is this self-reported, created data. The next level of sensitivity is what I think of as digital exhaust. I took a Lyft to get here from my studio in the Mission. I didn't mean to let Google know where I was, and I didn't necessarily mean to let Lyft know, but through the process of coming here I've created a trail of where Tim has been this morning. And we do that through our cars, through our IoT devices at home, through our mobile phones. So that's the next level, the digital exhaust we create as we navigate the modern world. And then I think the most sensitive is when companies take all of that and make inferences and assumptions about us. We'll talk a little more about this later, but there's another layer of that digital exhaust being created by sensors in our environment, cameras and smart-city-type sensors, and that takes us into a whole new area I hope we get to talk about.

Oh, we're going to talk about it.

So when I think about big data and building products and services, those are the data pools we're pulling from when we ask what we can build.
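Tim's three tiers can be written down as a simple ordered type. The sketch below is a hypothetical paraphrase of his framework; the tier names and the toy consent rule are illustrative assumptions, not an actual Frog product rule.

```python
from enum import IntEnum

class DataTier(IntEnum):
    SELF_REPORTED = 1    # volunteered: profiles, posts, survey answers
    DIGITAL_EXHAUST = 2  # trails left in passing: location pings, IoT logs
    INFERRED = 3         # assumptions a company derives from the tiers above

def consent_bar(tier: DataTier) -> str:
    """Toy rule: the more sensitive the tier, the more explicit the ask."""
    return {
        DataTier.SELF_REPORTED: "implicit consent at signup may suffice",
        DataTier.DIGITAL_EXHAUST: "needs clear notice and an easy opt-out",
        DataTier.INFERRED: "needs explicit opt-in and a way to contest",
    }[tier]

for tier in DataTier:
    print(tier.name, "->", consent_bar(tier))
```

Ordering the tiers numerically captures his point that sensitivity rises as data moves from what people knowingly give, to what they shed in passing, to what is inferred about them.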
So again, back to definitions: machine learning and quantum computing.

Sure. Quantum computing I know almost nothing about, so let's start there. I've done a lot of work in the semiconductor industry. The wafers are getting thinner and thinner; they're getting to a point where you really can't make them much thinner. And so people are looking for a new paradigm, and quantum computing, I believe, is one of those. It's not something that at Frog, as far as I know, we've touched or worked on yet. And machine learning: rather than going to that data source with a hypothesis and saying, find me this pattern, you just let the systems run, find what they will, and see what interesting patterns come back. So it's a kind of AI as well.

All right. By the way, here's an analogy I got from, I think, the CEO of Intel on quantum computing, and you'll tell me if you think it's reasonable. Say you're in a maze, a corn maze, and you're working your way out to escape it. With AI and machine learning, you can test each of the options really, really quickly, and more and more quickly; that's what big data gives you. With quantum computing, you are simultaneously and rapidly testing every option at the same time. So we're talking about exponential increases in processing. Understanding these terms is really important to knowing where we're going. Now, digital exhaust, and the fact that there are going to be sensors everywhere, can be good and can be bad. But let me just ask a basic question, Sonia. Who owns the data?

That's a great question. I think the answer probably depends on who you're asking. Consumers sometimes operate under the mistaken belief that they own data about themselves, that they have control over it. The reality is actually very complex, because of the absence of law to critically and protectively govern data for the consumer. As a result, the answer to who owns data often depends on who is gathering the data. This creates lots of issues with respect to intellectual property, but it also touches privacy, because once data becomes owned by an external entity, the consumer's ability to control what happens with it can be compromised. And oftentimes we're at the mercy of companies in terms of protecting and anonymizing our data, which they often govern through a moral lens as opposed to a legal one. So one of the big factors that we as lawyers think about is the absence of strong legal protections in the United States. Part of the reason for that is that in the US we tend to think about data and privacy through the lens of the marketplace. Compare that to Europe, where data and privacy are viewed through a much more foundational principle of human rights and civil rights, which is something we don't see as much of in the United States because of the absence of these laws.

Do you want to add to that, Tim?

Yeah. I don't know if folks in the audience saw Tim Cook's fairly definitive statements on this yesterday in Europe; he was giving a talk to, I think, the European Parliament. But as a product builder, my hypothesis, my starting point, has been that the person using the product or service owns their data, and they trade it for access to the service or product we're designing. That worked really well in the web days. It worked kind of okay in the mobile days. It's totally breaking down in the ubiquitous environmental sensing days, simply because you don't even know that you've exchanged data as you've walked or driven here. And so this is where I think we have to move to legal protections in order to allow users to keep control of their data.

So let me ask you something: where do you think we are? By the way, somebody said to me, when I was searching for something online and I wasn't sure what product I was looking for, I went to a few websites and said, I can't find the product that I'm looking for. And he said, well, actually, you are the product. I was giving up so much information; I was the product for the provider. And it seems like that's how some big tech platforms make these unbelievably scalable businesses: I'm giving them myself as a product. Anyway, from both of your perspectives, where are we on the continuum of an individual saying, okay, I'm resigned to it, have all my data and I get all my products and services, complete transparency, you know everything about me; versus, I am outraged that my privacy is being violated and I'm going to demand some kind of principles for responsible data? On that continuum from resignation to outrage, where are we as a society?

So in the studies we've done, people seem to care deeply, or they say they do, but it doesn't really change their behavior very much. The last time I ran a survey on this was just after the Snowden disclosures, and I really thought maybe this would be the privacy event that makes people think twice about the services they're using. We didn't see any real drop-off in usage of these kinds of products. And then more recently, with the Cambridge Analytica and Facebook disclosures, as well as the data losses, again, each time we think we're on the cusp of a privacy breaking point where people will say, okay, this is not a fair trade, I'm not going to participate. The outrage is there, but it hasn't translated into behavior change. And until it does, the economic incentives on the company side are such that we'll keep going.
I had one more thought on this, which is that, on the flip side, and I don't want to bad-mouth my clients, but I'm going to: the Fortune 500 companies that we think of as these super powerful, have-their-act-together, know-everything-about-you companies, most of them are not. Most of them are very confused. They make terrible use of the data they have. They have almost no clue about this, or very little. There are pockets of data scientists within them trying to figure stuff out, but it is so far from ubiquitous, know-everything-about-you capability that it's almost embarrassing and laughable. So, yeah, just to put that out there. Over time, I'm sure they'll get better, right? These are learning organizations. But we're not there today.

I think there was a question from the audience.

Hi, I'm Christy Mansfield. I have a data platform startup and also work in tech and AI. I was really interested in what you said around behavior change, because I think that's only the case because we are not giving people the option to change their behavior online. And I'm starting to see differentiation from startups and others that give people the option to erase their data or to have ownership of that data. So when that becomes easy, like the push of a button in an app, for example, what are your views on how behaviors will change?

Yeah, I think it may change things. I'd love to see it. I look at search engines like DuckDuckGo, which has been around for a while; I think it was actually the brother of one of my colleagues who built that product. And it hasn't gotten the traction, compared to the giant companies that have all the investment and mind share. So I think over time, if people are given decent choices, some people may switch. But changing human behavior is incredibly hard. I know this from trying to launch products and services: many of them have failed, and people are in general very resistant to innovations. So I think it's great that we have more options out there, and I support them, but getting mass usage is challenging. That shouldn't discourage any entrepreneurs from pushing and building these things, though, because I would love to see that world compete with GAFAM.

So, Sonia, let's stay with that question of the continuum from resignation to outrage. Thoughts?

Yeah. I think that to answer this question, you have to think about maybe three different sources of potential change. One is behavioral, which starts with the micro question of the consumer, and consumer practices changing in response to entitlements that consumers might demand. The second layer of change you could see in the marketplace, where companies become increasingly aware that consumers are upset about their personal data being used, and then create options like the ones you've described. I think the big obstacle there is whether these companies are listening, and whether they can actually design options that allow individuals to protect their data. And I suspect that to the extent those options conflict with their business model, which may be about collecting and selling data, that's where we'll see avenues of conflict. Which brings me to the third avenue of potential change: law. Without a comprehensive system that governs these large companies and really understands privacy as a fundamental consumer right,
we are beholden to market practices, and that puts us in a very vulnerable place. Now, that sounds really pessimistic, so let me be a little more optimistic. One thing that I think Tim and I both agree about, and we can talk more about this, is that Europe is actually a little farther along in thinking about how to build fundamental entitlements for the consumer. To the extent that these multinational companies now have to make sure their practices are in line with GDPR, which provides this amazing system of entitlements for the consumer, we might see entitlements ramping up in the United States too. But in the absence of congressional and state-level commitments, it's really hard to make that happen, because the marketplace is simply hard to rely on.

With regard to the human rights we talked about: this is SOCAP, so we are thinking about business and technology and society. And when we think about human rights and freedoms, we can think of FDR and the four basic freedoms: freedom of worship, freedom of speech, freedom from want, and freedom from fear. I would argue that right now, where we are with data, there is a lot of fear, because we don't know exactly what's being done. At Cornerstone we think about business models and investing, so you have to put a framework together. The way we think of it is the usage of the data, the breadth of the data, and the business's dependency on the data. On the last one: does the company have low or high dependency on the data for its business? On the breadth: is it data that is given, or data that is being profiled and, in effect, taken? And in terms of the usage: is it used internally for the business model, or is it sold externally? That kind of framework for business models can really help investors think about freedoms and human rights, and combine the issues of big data, transparency, and society.
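As a rough illustration only, that three-axis screen could be coded up as a checklist. The field names, labels, and scoring below are hypothetical, a paraphrase of the framework as described, not Cornerstone's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class DataProfile:
    usage: str       # "internal" (powers the product) or "external" (sold on)
    breadth: str     # "given" (volunteered by users) or "taken" (profiled/inferred)
    dependency: str  # "low" or "high" reliance of the business model on data

def transparency_risk(profile: DataProfile) -> int:
    """Crude 0-3 count: one point per red flag on the three axes."""
    return ((profile.usage == "external")
            + (profile.breadth == "taken")
            + (profile.dependency == "high"))

print(transparency_risk(DataProfile("external", "taken", "high")))  # 3: most exposed
print(transparency_risk(DataProfile("internal", "given", "low")))   # 0: least exposed
```

Even a crude count like this makes the investor's question concrete: the more a business model depends on taken data that is sold on, the more transparency and governance scrutiny it warrants.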
So this is big stuff, right? All right, so let's keep going with the idea of regulation and ethics and morals. GDPR, I understand, is very, very difficult to actually implement. And in terms of the US, hopefully we go to principles rather than rules for a regulatory environment. But can you give us a sense of where we are now with the societal discussion on principles for responsible data? Where are we?

Are you asking at the consumer level or the company level?

Company.

Okay. Well, where are we? I think GDPR in Europe has done a lot to encourage the conversation in the US. I think companies are really looking back at their policies in the United States and working out how to comply, and I'm sure Tim has a lot of thoughts about this, watching companies try to ramp up their practices to be mindful of the commitments made in Europe. But in general, the things that many companies aim for fall into two big clusters of issues. One has to do with the quality of the data that is used for automated processing decisions. If the data is incomplete in some way, or if the data reflects patterns that are structurally illustrative of discrimination, then the automated decisions that flow from this poor-quality data will potentially be discriminatory, potentially impact people's privacy, and perhaps also affect their due process. That's the cluster of problems that stems from bad data; as computer scientists say, garbage in, garbage out. The other cluster of issues has everything to do with who is being processed by these automated decision-making programs, and whether these individual consumers have an opportunity to contest a decision, or even a right of explanation to understand why the decision was made.

I don't know how many of you have seen the movie Gattaca, but I often think about that film, and if this sounds reminiscent of it, I encourage you to see it if you're interested in AI. It really is a world that symbolizes a growing divide in the United States: individuals with a high degree of economic security have the right to contest certain decisions, and the means to pay to discover why decisions were made about them. The poor are increasingly being processed by machines. Welfare, government entitlements, who gets put on a no-fly list and banned from flying, how benefits are meted out: all of those issues affect low-income communities. And we see a lot of this in the criminal justice system, where algorithms are used to predict the likelihood of individuals committing future crimes. ProPublica, which has been doing wonderful work around this issue, basically revealed that these algorithms lead to tremendous racial disparities among the people being scored for the likelihood of committing future crimes. So what we're seeing is a world we all have to be mindful about in terms of the risks of AI. And the people particularly targeted by these decisions, and by the inability to contest them, are not the wealthy. They are individuals who typically rely on the government for entitlements, or who are caught up in the criminal justice system.

Yeah, let me tackle the same question more from the commercial end, and there is a link with the wealthy that I'll come back to. What I've tried to convince my Fortune 500 clients of is that having people trust them is a business benefit: you can build better products and services, and your consumers are going to be more willing to give up data to you, if you've demonstrated that you're trustworthy and that you give value back in the form of tailoring a product or service. So they should really be doing things to build that trust. Whether that is forced on them through legislation, which I think it will be, or whether they proactively build it, this is a business advantage, and it comes up time and time again. I recently did a study around financial services and health, and we asked people, of these companies, which would you trust with your 401(k), and which would you trust with your health data?
And nobody trusts Facebook with any of that type of personal data, whereas companies like Apple and other tech companies have a somewhat higher degree of trust. That gives those companies an advantage, because they can offer products and services in those more sensitive categories, and they can put leaner, less feature-rich, minimum viable products out in the market and expect consumers to use them, because they have that degree of trust built up already. Now, the link to the wealthy part of the consumer market is that I fear privacy becomes something for the rich. Apple has clearly gone down this business-model path that says, we will only use your data to enhance your product or service experience, and I think that's great. It allows them to compete against business models that require selling that data. But I worry that we move into a world where, if you can afford an iPhone, you have a degree of privacy, and if you're on Android, you're out of luck: privacy becomes a luxury good.

So you two do product design and product development. I'm going to ask you a question, and I can ask Sonia a question after, so you don't have to answer it immediately; you'll tell me. If you were a big, bad monster tech company and you were going to design a really evil, abusive, offensive, but effective product that uses data in the most egregious ways, what would that product look like? Do you want to answer that or hold it?

Let me think about it.

Okay, so you think about that. Channel your inner evil while you hold it. Okay, so, Sonia: case law. Can you give us some examples of the case law associated with data privacy?

Do I need to go back to torts? No. The case law, unfortunately, is really, really limited. We have a couple of Supreme Court decisions; there's one called Whalen that involved the notion of data as something that is privately held. In general, these kinds of issues do not come up through case law. They're dealt with through statutory protections, protections we might see at the federal level: everything from fair housing protections, to protections on credit reporting, to FTC actions against abusive business practices. These are the kinds of things we look to as sources of inspiration for the protection of privacy. And I should say that the FTC, the Federal Trade Commission, has become more and more concerned with issues of privacy protection, and has really tried to use some of the existing statutory protections around health, credit reporting, and housing to rein companies in. In terms of case law, we actually don't see that much. There is one Houston case that I know of that's really interesting, involving automated decision processing of teacher ratings. It turns out that teachers are subject to an enormously complex data processing operation that decides what factors go into determining whether or not you're a successful teacher, and then issues of tenure and other entitlements are decided based on the results of that automated processing, run by an external company. Some teachers in Houston challenged this under due process principles, and they won. So we do have some room for holding government use of automated decision-making accountable.
That decision is one of a very small number of decisions. Another area deals with the right of researchers to design programs that scrape data off of different websites to understand different practices. So, for example, when you're challenging a realtor's decision to show certain types of ads to certain demographics, and whether that might implicate anti-discrimination law, that's another area where we're seeing some case law. But really, most of this is done through statutory protections. And I don't think the federal government is terribly responsive at this point in time. In the Obama administration there was a lot of talk in the various offices about figuring out ways to protect consumers. We're not seeing as much now, and it's really incumbent on certain states. California is a great leader in a lot of these kinds of movements.

That's really helpful. Whenever you want to know where to invest, generally speaking, you want to invest in places where there's not a lot of case law, because what that means is that there is opportunity. We're going to hold questions just once more, because I want to see if Dr. Evil over here has figured something out for us. You got something, or should we take a question?

Let's go to the question; it gives me more time.

Okay, hold on, wait, we want to get you a mic. Okay, where's our mic out there? All right, just talk loud.

...where they didn't want to regulate. So they created an entity called IDESG, which was supposed to be a public-private partnership to create recommendations instead of regulations. I think it could be generously called something that didn't work out. So I'm wondering, are there regulations that you feel should exist? If you could wave a magic wand, or if you were in Congress and Congress were functional and would pass a regulation or a law, what do you think would be most helpful for both consumers and companies right now that we're not doing?

That's a really great question. One of the things I would really think about, on a foundational level, is a consumer's bill of rights: the right to privacy, the right to fair treatment, the right to due process, and all of the things that flow from those three big clusters of entitlements. Those would be things I would want to see. As a source of inspiration, even though it's subject to a tremendous number of very well-founded critiques, I think the GDPR in Europe is a really promising example of the way in which governments can entitle everyday citizens to take their data under their own control, and to demand a right to an explanation if they're processed by automated systems. We're going to see a lot of complexity flow from that set of entitlements, but I actually think that complexity is something we should ask of tech companies, something we should hold them accountable to. So I guess that's what I would look for. The other thing that I think is really important to keep in mind, and this goes back to Tim's really insightful points about consumer behavior: without some accountability in the legislature, without citizens demanding that their legislators enact responsive policies, we're not going to see a tremendous amount of traction.
We do see it in California, because the electorate here is pretty smart and, I think, pretty thoughtful about the kinds of things they want to see, and we're at a level where the state government is responsive. So that's a great example for the rest of the country to follow.

Let's hold that one, because we're going to keep going. So, back to the question: a product that could really do evil, that maybe people wouldn't even know about. What would it look like, from a product development standpoint?

Sure. Let me preface my answer by saying our internal motto at Frog is to advance the human experience through design. So this is very much against my nature.

We won't say anything. All right, we'll scrap the video too.

So I think the principles of this evil product: first of all, it has to be bigger than just advertising spend. I would be looking for a product that lets me go after things like healthcare or accommodation or food; assuming I want to take as much money as I can, it can't just be mind share and advertising. And I think the way to get there would be a totally ubiquitous sensing environment, so I know exactly what this person is doing. Think of a Truman Show type of life, but rather than having a whole cast of people creating that fake environment, you would have sensors and computers doing it. And then I would use behavioral techniques to nudge people, to move them in the direction I want. So that's another design principle: what can I do through the interactions and the design of this product to nudge them toward the behavior I want? This is something a lot of designers have ethical quandaries over even today, as we design mobile apps. Sometimes you can use those techniques to make people do things that are good and healthy for them, and governments try to get people to behave in ways that are good for society, but you can use the same techniques to nudge people in nefarious directions. So I think those would be the two or three principles of my product.

It was not superfluous that I was asking this question, by the way. For someone like me who's been in the capital markets for 30 years, you see what is bad, what shouldn't be done, and based on that, you figure out what should be done. So let's go to what should be done.

Yeah, one more on the evil side before we move on. Ideally, the victim of this is a happy prisoner. They want to be there. So it's probably got some sort of gaming element, something where they're participating in a very addictive, I-want-to-do-more-of-this way, rather than being forced to do it or having negative emotions about doing it.

By the way, we are kind of there with what you just described, right? We are kind of there.

Yeah, it reminds me of certain social media products and gaming products.

Right, we're there. Okay, let's talk about a better world, all right? You have so many examples from your clients, whether it's Disney or anyone else. Could you talk about some encouraging examples? And then we're going to get to how, across different industries, we can use big data and transparency to do wonderful things, okay? But give us some examples.

Sure. I think the danger in these kinds of discussions is that we go down a path that says we can't use any kind of data, and that monetization that's indirect is all bad. I don't think that's the case, and I have lots of examples from our work.
The most recent is a company called Charlie. It's a financial services company that uses your smartphone to give you tips about saving; they describe it as a CFO in your pocket. What they do is look at your spending patterns and behaviors and say, you know, Tim, you're spending a lot on Lyft or Uber; you should get this Capital One credit card that will give you money back. Their business model is making money from referrals. But as a consumer, I'm saving money, and they're helping me manage my money. So to me that feels like a fair trade-off: I enter into a relationship with this company as a willing participant, they help me manage my finances, and they make referrals. Even though, going back to my data framework, they're using this data not to enhance the product per se but to make money from a third party, it feels like a good use. And that's the kind of balance we try to maintain. Whatever the business model: are you transparent and clear about it? Are you doing it in a trust-building way? And are you giving value back to the consumer that's equivalent to or better than the value you're taking?

Yeah, I don't have much to add; I think that's a great example. I do think it's really important to emphasize, and I completely agree, that it's very easy to walk down a dystopian trail when you talk about the challenges of big data and transparency. There are enormous possibilities, particularly in the context of health: personalized medicine, AI learning things about the human body and people's preferences. All of that can be enormously enlightening and really, really helpful for the average consumer. Then think about climate, energy efficiency, transportation, identity, remittances. There's so much good.

So I'm going to ask one more question of the two of you, and then we'll open it up. Let's talk for a minute about what are considered trustless systems. Let's talk for a minute about blockchain, all right? In terms of the applications of blockchain in society, does either of you want to share thoughts?

I don't have a lot there, so.

I have to say, lawyers tend to stay far away from blockchain because of the absence of a centralized system of accountability. So I think, again, that challenge really depends on the trust-building opportunities of the discrete units. And I think the most important thing that can empower a system of trust is a lot of what Tim talked about: consumers demanding protections, agencies and entities that are responsive to that demand, and that demand playing a primary role over other consumer choices. We might say that we're concerned about privacy on Facebook, but we're all on it; I mean, not all of us, but significant numbers of people are on it despite those concerns. So figuring out how to leverage or balance those concerns is important. Beyond that, I don't know about blockchain.

Yeah, there may be ways it's applicable to people keeping control of their data. I haven't worked on anything that's commercial, or more than theoretical, yet.

We'll do that panel next time around. So, questions, everybody? Yeah.

I'm curious about the language that we use when we talk about AI and automation and all those sorts of things.
We often say things like, the robots are coming to get our jobs, or, when Zuckerberg was at Congress last year, he said, don't worry, AI will solve it. But what AI is, is just a computer program written by people who have biases. It seems like maybe part of the solution is to recognize that simple fact: it's just another computer program, written by humans, with a whole host of biases.

Yeah, I could not agree with that more. Although I think what we often see as the issue is not the problems that result from algorithms designed by humans, but the patterns that are detected by these machines. If we discover, for example, that people who use mobile phones to make certain purchases pay more than people who purchase the same things on desktop computers, and we know that poorer, lower-income individuals rely more heavily on mobile phones, that amounts to a level of price discrimination that is going to impact the poor more than the wealthy. So that's the kind of thing. One story that I often think about: there are all these tests you take when you apply for jobs, and there was an individual with bipolar disorder, and the questions were framed to assess someone's sociability and preferences. This person kept applying for various jobs and kept being turned down, and could not figure out why he could not get an entry-level job at a supermarket. It turned out that the surveys he was filling out were screening for traits in a way that raised exactly the kinds of concerns the Americans with Disabilities Act is supposed to protect against. And there are lots of examples like that. These are examples that, frankly, for all of you who care about socially conscious investing, are really important to write about and think about and talk about, because companies will inevitably look at the promises of big data and say, this is such a great way of figuring out whom to hire. We should use AI in our hiring, Amazon says. And then they look at the results, which we just saw in the news, and the results only preference male applicants. And why is that? It's because the data they used to train the algorithm was completely disparate in terms of its representation of women. This is the kind of thing where it doesn't take a computer science degree to understand the risks. So I think your point is really right on.
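Sonia's Amazon point can be reproduced in a few lines. The sketch below uses an invented "historical hires" record in which a gendered resume keyword happens to correlate with past hiring decisions; any scorer trained on that record rewards the proxy rather than ability. The data, keywords, and numbers are made up purely for illustration.

```python
from collections import Counter

# Invented training history: (resume_keyword, outcome). Past hiring skewed male,
# so a keyword that merely marks gender ends up correlated with "hired".
history = ([("mens_chess_club", "hired")] * 9 + [("mens_chess_club", "rejected")] * 1
           + [("womens_chess_club", "hired")] * 1 + [("womens_chess_club", "rejected")] * 9)

# "Training": score each keyword by how often it co-occurred with a hire.
hires, totals = Counter(), Counter()
for keyword, outcome in history:
    totals[keyword] += 1
    if outcome == "hired":
        hires[keyword] += 1

for keyword in totals:
    print(keyword, "learned hire rate:", hires[keyword] / totals[keyword])
# mens_chess_club: 0.9, womens_chess_club: 0.1. The scorer has "learned" to
# penalize a word that marks gender, not ability: garbage in, garbage out.
```

Nothing in the code mentions gender, which is exactly the trap: the bias arrives through the historical record, so fixing the pipeline means fixing the data, which is where the next question picks up.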
On that same topic, I'm actually curious about how we fill in the gap in the case of Amazon looking at the resumes. We know that the reason the machine learning algorithm is picking more male candidates is the workforce that we already see at Amazon, right? So how do we address some of those more systemic issues and allow our data to capture more than what Amazon was using?

Okay, so I think there are two big avenues, maybe three. One is that we have to get better data. We have to get data that is truly representative of a population, and we have to be mindful that our data is going to reveal the cognitive biases of the people who answer questions. There are all sorts of issues in figuring out how to clean up data to account for these structural problems. That's the first thing. The second thing is that when we hire the people who design the algorithms that will eventually train on this data, we need to hire people who are acutely mindful of the ethical risks and implications of not thinking about diversity, or not thinking about due process, in the things they design. One thing I think about more and more as an educator is the importance of having our engineers and future computer scientists and data scientists make a commitment that looks a lot like a Hippocratic Oath. Britain actually does this: there is a code of ethics among engineers that requires you to try to avoid creating situations that might systemically encourage discrimination. And the third thing I would think about is laws. So again, the entitlements I talked about before: really treating data and privacy as fundamental entitlements, the right to privacy, the right to due process, the right to fair and equal treatment, as things that are meaningful not just against the government, but that we can actually hold against private companies. Someone mentioned earlier the issue of public-private partnerships. This is a huge area of law, because increasingly what we're finding is that governments have figured out that when they send something out to a private company, they're immunized from their constitutional obligations, and that is a growing area of legal concern. The only way to fix it is by creating better laws.

If I step back from that and think about the arc of all technologies: technology reflects the society we're in. I came to the Valley in the late 90s, and the internet was this really optimistic thing. I loved being here; it was so exciting, so full of promise. And in less than 20 years it's become this sort of commercial bazaar, with all the challenges and problems we have today. Really, it's a maturing; it's just a reflection of our society. And I think what you're describing in AI is the same: none of these technologies are in themselves solutions. They're simply going to reflect the society we're in. So we have to keep creating, and battling for, the society we want, and then our technologies should match and reflect that.

By the way, as long as we're talking about data quality and systemic issues: in sustainable and impact investing we have a massive data problem too. The data that comes out of our corporations regarding ESG factors ends up going into indices and ETFs and everything else, and there's a massive data quality problem built into those systems. So we have to think about that, and about standards for disclosure and everything else. There are loads of analogies here. I think we have time for another question. Just yell it out.

You said that when companies try to build trust, they stand to gain a competitive advantage, right? But you also spoke about how consumers are outraged but don't actually show it in their purchase behavior. So is it actually true that companies stand to gain a competitive advantage? And if so, wouldn't the cost-benefit mean that all the companies would have done it already? Or maybe some companies tried it and then saw that users were dropping, I mean, the number of users adopting the product.

Great question. I think about this a lot.
I think it depends on what domain you want to play in as a company. Take a company like Netflix. If their ambition is to remain in entertainment and be an entertainment company, they can probably get away with a lot of data abuses, if you like, because they're not in sensitive parts of our lives; it's just recommending entertainment options to me. So they could make that calculation and trade-off. But if they wanted to get into the connected home and start to expand from the set-top box into other aspects of our lives, then we might be less willing to have them there if they were seen as an untrustworthy company, and I'm not saying they are; I think they have quite a nice reputation. But being seen as untrustworthy limits their maneuverability. So if I look at the social network companies, or I think about Google or Amazon, and the varying levels of trust they have with consumers, largely due to their business models and our implicit understanding of what they're going to do with our data, then an Amazon that is highly trusted has a much greater degree of freedom to move into other areas of our lives. And I think that's where the competitive advantage comes into play. So it's not just doing this because it's the right thing to do; it's not really a moral or ethical argument. It's a business-options argument that I'm making with my clients. But you're right: if you're playing within a very constrained space, and you're never going to go beyond it, and it's not particularly sensitive, then yes, you can get away with a lot.

So we're going to wrap up with one last comment on trust. In all the empirical research on business and management, trust inside an organization is the single best determinant of innovation. Organizations with internal trust not only deliver a better client experience; they are more innovative. So on that note, I just want to thank you both. You're awesome. Thank you all, and we can talk another time. Thank you. Thank you.