So thank you very much. Laura is Senior Policy Counsel for New America's Open Technology Institute. She previously held positions on staff at Public Knowledge and as a clinical teaching fellow at Georgetown Law. And David is also a New America fellow, currently writing a book on the impact of algorithmic and computational methods on public policy and social life. He writes the weekly Bitwise column on Slate about technology and used to work at Google and Microsoft as a software engineer. So thank you both very much for coming. So, algorithms and the law. Laura, I don't know if you want to start on this: why should we be worried about the ethics and the law of algorithms?

Great. Thank you. Thanks so much for having this panel. I think a good place to start might be what this panel is called, which is the legal do's and don'ts, or I guess algorithms and the law, the legal do's and don'ts of big data. And a good starting point is to say that when we're thinking about problematic outcomes from innovative uses of algorithms, a lot of times there aren't really clear legal don'ts. I am a policy lawyer here, and I approach this issue from a policy perspective, which is why this discussion will bleed a lot into ethics more broadly and what we think some policy approaches might be.

So to answer your question: one of the issue areas I work on a lot here at New America and the Open Technology Institute is consumer privacy. I approach this in part from the perspective of someone who cares a lot about ensuring that those who are collecting data and using it are doing so in a way that's consistent with consumers' expectations when they share that information, and that it is clear to consumers that they are sharing information and what purposes it will be used for. But I also think, and this is a topic that came up a little bit in the last panel, that it's really important for us to talk about the idea that we could be using algorithms to generate outcomes that perpetuate human biases or otherwise have some sort of disparate impact on different communities of people in a way that we would find problematic, and to come up with ways to address that.

Just to give one example: in the last panel, on quantifying ourselves, there was some discussion of Fitbits, or some other measurement of information about the body, being used by health insurance companies to calculate discounts that some insured might qualify for based on their healthy activities, or lack thereof. Car insurance companies are doing this too. Some car insurance companies now offer a program where the insured can agree to have a device installed in their car that measures information about how they drive, which the insurance company might then use to calculate a score for that individual based on their driving habits, the things they associate with safety. So they'll look at things like acceleration and deceleration, how the car takes turns, the average speed the driver drives and how that speed compares to other drivers on the road. But one of the categories that many of them also consider is the time of day when the car is in use.
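[To make the kind of scoring described here concrete, here is a minimal sketch, in Python, of how a telematics-style risk score with a time-of-day factor might be computed. The factor names, weights, and thresholds are entirely hypothetical and do not reflect any actual insurer's model; the point is only that a facially neutral "night driving" penalty falls on anyone whose schedule requires driving at night.]

```python
# Hypothetical telematics risk score -- illustrative only, not any real insurer's model.
from dataclasses import dataclass

@dataclass
class Trip:
    hard_brakes: int        # count of hard braking events
    hard_accels: int        # count of rapid acceleration events
    avg_speed_ratio: float  # driver's average speed / average speed of surrounding traffic
    start_hour: int         # local hour of day the trip began (0-23)

def trip_risk_score(trip: Trip) -> float:
    """Return a hypothetical per-trip risk score; higher implies a higher premium."""
    score = 0.0
    score += 2.0 * trip.hard_brakes
    score += 2.0 * trip.hard_accels
    score += 5.0 * max(0.0, trip.avg_speed_ratio - 1.0)  # penalize driving faster than traffic
    # Facially neutral "late night" factor: it applies equally to a drunk driver
    # at 2 a.m. and to a night-shift worker commuting home at 2 a.m.
    if trip.start_hour >= 23 or trip.start_hour < 5:
        score += 10.0
    return score

# A cautious night-shift commuter still picks up the full night penalty.
night_commute = Trip(hard_brakes=0, hard_accels=0, avg_speed_ratio=0.95, start_hour=2)
print(trip_risk_score(night_commute))  # 10.0, driven entirely by time of day
```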
And one might assume this makes sense, because in the wee hours of the night many drivers on the road could be drunk or tired, and on a per capita basis accidents are more likely to occur in the middle of the night than during the day. But if we're providing car insurance discounts to individuals based on the hours of day when they're driving, and charging higher premiums to people who are driving in the middle of the night, then that's going to have a disparate impact on folks who are working the night shift. And folks who are working the night shift are disproportionately poor and of color. So this is just one example; there are a slew of examples of this nature. But as we're thinking about ways to take data streams and innovate new uses that might make a lot of sense to, for example, a car insurance company, I think from a policy perspective it's important for us to evaluate when the outcomes might be having a disparate impact across communities that we as a society would find problematic.

Right, so just to tie this back to the discussion we were having earlier: a few minutes ago we were talking about privacy and the question, can you safeguard your privacy? Can you keep people who you don't want to have your data from having your data? And what we're talking about now, and this is something that I believe to be the case, is that the discussion about privacy is in a sense lost. We've kind of lost privacy. Lots of the people I know who are thinking about this stuff essentially say, not that we should give up on privacy altogether, but that so much of our data is now beyond our control that rather than worrying about whether or not we can get it back, what we need to worry about is how it is used. And in particular, is it being used in ways that inadvertently discriminate against certain groups and cause social inequities? And that's essentially the point you're raising.

Yeah, I agree with that in part. I still think it's a yes-and, right? Yes, we need to think about possible ways to restrict use, or at least to monitor uses of data, and think about problematic outcomes and try to prevent them. But I also think that doesn't negate the importance of ensuring that, to some extent, information collection is at least consistent with consumers' expectations, ideally that they have real notice and consent about it, and that we try to limit aggregations of data from disparate information sources where consumers might not expect that the downstream aggregation will take place.

You have an example that you wanted to talk about as well in relation to this?

Yeah, so I think Laura's done a great job of covering some of the legal and policy implications with regard to disparate impact. I think the issue of the sheer opacity of these algorithms, not just to their users but sometimes even to their designers, is worth dwelling on. We think of algorithms as having easily accessible results, and yet we often aren't even sure by what criteria the outputs of algorithms are evaluated, or whether they are evaluated at all. Take Google, for example: who is it that checks whether Google search results are accurate?
Well, to some extent the user does, by clicking on them, but that doesn't refine them to any precise degree, because you're only going to click on one, and we're accustomed to seeing irrelevant Google search results, sometimes even on the first page. So the feedback mechanism, which is required as algorithms get more complex and non-deterministic, is a very imperfect model, and in some cases it seems that people aren't even trying to set up that sort of feedback model. One of the big examples I'd cite is in ed tech, where there's a big push toward automated grading of student papers. We've seen a lot of results-driven teacher performance testing over the last 10 or 15 years or so, and it's been met with very mixed success, because the question is: are these the right metrics? Are they being measured correctly? Is it, in fact, forcing teachers to teach in a suboptimal way just to meet a seemingly reasonable but in fact arbitrary and misleading set of metrics?

Not only does the use of algorithmic grading of papers reinforce a notion of false objectivity, but you also have this issue with the claims that they can grade as well as teachers. There are two components to that. First, do we even know what it is for a teacher to grade a paper well, given how bad teacher assessment has been over the last 15 years? Second, are we in fact making the correct comparison? And certainly from what I've looked at, I don't think the claims that algorithms can grade or assess papers as well as teachers really stack up beyond a fairly simple, syntactic level. And yet I don't see a lot of questioning of them. Ed tech companies are pushing for this stuff, and institutions of education are signing off on it, without having the literacy to look at whether that assessment is being made correctly.

The other half of that, though, is: let's say for the sake of argument that the computers actually are doing a reasonable job. I don't think they are, but even if they were, would we understand the criteria that are being used? It's not enough to say, oh, well, they're grading it and they're doing as good a job as a teacher, if there's no feedback mechanism to put a check on that, to say, look, are these algorithms being trained to continue to assess student papers in the way that we think is right? The algorithms aren't going to regulate themselves. Algorithms don't exist in a vacuum. Just as teachers get feedback and exist in an ecosystem of learning certain methods of pedagogy, algorithms need that ongoing pedagogy, and standards, to refine themselves. But collectively, I think, we have a false notion, in some ways a legacy notion, of algorithms as static, deterministic, set-in-stone, platonic entities. And going forward that's only going to become less and less true, because we're seeing an increase in machine learning algorithms, and I think we have a machine learning expert coming up next who can speak to this better than me.
But it shouldn't be static, because in fact the only way for machine learning algorithms to improve is if they are getting a well-regimented and explicitly defined set of feedback saying whether or not the results they're producing are accurate, and in which cases they are accurate. And I don't think enough attention is being paid to that, certainly not in the case of ed tech and grading.

Right, and I think you're getting at what is essentially the worry about entrusting anything to algorithms, and especially to machine learning algorithms, which is: sure, this algorithm may, on average, or for a large proportion of the population, do the job better, whether it's grading papers or setting insurance premiums or setting prices or whatever. But in the cases where you feel like it's done a bad job, how do you appeal? Because you can't ask the algorithm why it reached that decision. As you said, often even the creators of the algorithm don't know why it reached that decision. And so that process of the human in the loop, who lets you say, hold on, there's something wrong with this decision, has been eroded.

I would go further than that and say that these decisions are not purely descriptive, not just objects to be examined; they have a certain prescriptive impact. That is to say, once these decisions make their way into the wild, they start to have a prescriptive impact in the sense that we see them as correct. Because if we tend to take their outputs as correct, we aren't going to be seeing every individual case. So if the assumption is that this generally produces good results, that will actually be prescriptive, and we may indeed change our standards to be closer to whatever emergent results it has been producing. And those won't be based on values that we would necessarily appeal to, because the computers aren't aware of them.

Right. Okay, so what do we learn from examples like this?

So I think there are a few things. One was actually touched on in an earlier panel and is worth repeating: a lot of the time algorithmic insights are statistical insights and not logical ones. This is the distinction between correlation and causation. We might want to design an algorithm that predicts what category to place a measured individual into, for something like car insurance. But the pieces of information it uses to link an individual to a category might be based on a statistical correlation and not on causation. Sorry, let me provide an example of this.

The car insurance example provides that, right?

Yes, yes.

And it's a case of: you are a "higher risk" because you drive at night; but you're not driving at night because you're a risk, you're driving at night because you work the night shift.

Right, exactly. So the car insurance example provides that, because you have a person driving during the wee hours, and the reason we're worried about someone driving during the wee hours might be that we're concerned they could be intoxicated or overtired.
And so yes, of course there is a statistical correlation between driving at night and being more accident-prone, but it doesn't necessarily mean that this particular individual is more accident-prone, or intoxicated or overtired, during the hours they characteristically drive. And there's a related thing to think about here, which is that accuracy is not necessarily the same as fairness. There are situations where we might come up with a model that gives us statistically accurate outcomes that we as a society would not think of as fair.

An example of this might come up if we're trying to design a way to sentence convicted criminals in an automated way. Let's say we take a bunch of information about a lot of people who've been convicted of crimes, look also at recidivism rates, and try to draw associations between pieces of information about convicted individuals and recidivism rates. And let's say we find an association between recidivism rate and county of birth. Would it then be okay to come up with an automated sentencing algorithm where the sentence for a convicted individual is calculated, in part, based on their county of birth? I think we would probably say no. I think most of us would say, well, that seems awfully unfair. That's something an individual can't change about themselves. It's not the fact that they were physically born in the geographic location of that county that makes them more likely to commit another crime; it's probably some associated factors. So the fact that there is a statistically significant association between these two factors might make us think of this algorithm as accurate, but not necessarily fair.

Yeah. Do you have something else to weigh in on there?

Yeah. Not only do I agree with that point, but I think we also need to look at exactly how we are defining accuracy. Again, as in the Google case, to hold these things up and say, okay, this is an accurate set of results belies the fact that these results are often configured based on a test set of information and extrapolated from that. And without an ongoing feedback process, there will be a one-way prescriptive direction. So when we talk about data being out there, there are the issues of collection, there are the issues of promiscuity of the data flowing between entities without us having any control over it, and there's the issue of use: when can certain data sets be used to make certain decisions? And with regard to that, if you have a certain input, say your medical data, going into an assessment of your health in some regard, we would say that is reasonable when we're looking at making a medical diagnosis. And in that case you get feedback, in the sense that, in general, one can look back on diagnostics and say, okay, was that diagnosis accurate or not?
And maybe you don't get to assess the diagnosis to a completely fine degree, but in general medicine has these feedback mechanisms set up, so that if the patient dies, you say, okay, where did we go wrong? And if you think of medical diagnosis and treatment as that sort of algorithm, you can see how it becomes an iterative process. And if you think of the need for iterative processes rather than simply static algorithms, that affects what it is to be accurate here. It's not reaching a state of accuracy and freezing it in amber, but rather, as you analyze something like a person's recidivism or a person's job skill, taking that measure and having a way to say, okay, given that metric, how has evidence after the fact impacted our assessment, and was that assessment correct? Not just should we update the assessment, but to what extent was the decision made incorrectly in the first place?

And these decisions are being made so much en masse that we won't be able to examine each one individually. My own background is in systems; at Google I worked on the web crawler, and systems people have a bias against machine learning, because if something goes wrong in machine learning it's very hard to fix it in isolation. You just nudge it in one direction, in the Cass Sunstein sense. So that's one way in which these systems are more like humans. Unfortunately, most systems people go into computer science because they want to work with things that aren't like humans, so it's very frustrating to suddenly have to nudge things instead of just fixing them. You want things to go from 0% to 100%. But increasingly, when you have systems that have an ongoing evolution, that's not possible. You need systems with automated feedback mechanisms that, in a probabilistic framework, can say, okay, the system produced the wrong diagnosis in some way; here is the feedback, and it should weight certain factors less or more heavily in the future because this was a wrong outcome. And the challenges there are, A, to establish that it was a wrong outcome in the first place, and B, to figure out what changes should be made in response to that wrong outcome. And that twofold problem raises issues with respect to algorithmic refinement that people are not thinking about, especially in regard to mission-critical usages, where I think the systems model is predominant but is starting to fail to scale.

Can you give a more concrete example of that?

Of where it's failing to scale? I think you see it; the classic example at this point is probably the financial collapse in 2008, where you had these trading algorithms that were supposed to maintain a certain homeostasis and not let us get beyond a certain point of risk. And yet when situations were set up in a certain way, they basically zipped past what we thought of as the limits they were respecting and went right through them. And yes, we got a bit of feedback that allowed us to refine those algorithms when all hell broke loose.
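[A minimal sketch of the kind of incremental, feedback-driven adjustment being described: predict, observe the actual outcome, and nudge the weights rather than "fixing" anything outright. The feature names, weights, and data below are invented for illustration; real scoring systems are far more involved, but the shape of the loop is the point.]

```python
# Sketch of an incremental feedback loop: predict, observe the true outcome,
# and nudge feature weights up or down accordingly. Feature names and data
# are invented for illustration; this is not any production system.
import math

weights = {"hard_brakes": 0.5, "night_driving": 0.5, "speed_ratio": 0.5}

def predict(features: dict) -> float:
    """Probability of a claim under the current weights (simple logistic model)."""
    z = sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def feedback(features: dict, actual_claim: bool, learning_rate: float = 0.1) -> None:
    """Nudge each weight toward the observed outcome rather than fixing it outright."""
    error = (1.0 if actual_claim else 0.0) - predict(features)
    for name, value in features.items():
        weights[name] += learning_rate * error * value

# A careful night-shift driver who never files a claim gradually pulls the
# night_driving weight down instead of being permanently penalized for it.
for _ in range(200):
    feedback({"hard_brakes": 0.0, "night_driving": 1.0, "speed_ratio": 0.0}, actual_claim=False)
print(round(weights["night_driving"], 2))
```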
It would have been much nicer if we'd gotten more incremental feedback and had been able to modify them in an ongoing, incremental fashion, rather than having all of our illusions torn up before our eyes.

Right.

And I think it's in the financial sector that you see these algorithms in their most advanced form, and they provide a glimpse of what's coming in the future.

Right. I want to ask, and I don't even really have a sense of what this would look like: you talked about how machine learning algorithms, especially, are kind of more like humans in a certain limited sense. But is there the possibility for them to make decisions that actually correct for biases? Here's the background to my question. As we all know, in the '70s there were mandatory sentencing laws in the US which were intended as a way to introduce more fairness, to remove judges' ability to bring bias into their sentencing; they had to give certain sentences. And of course the end result was a massive rise in the black prison population, an unintended consequence. So there was an example of an attempt at fairness in theory that backfired. You've both talked about how accuracy doesn't equal fairness, and you've said even the question of accuracy is dubious. But is there a way in which algorithms can be used so that they correct for those biases and push towards fairness?

I would say yes. One of the things I hold up as sort of a gold standard of data studies is a study that was done by Andrew Gelman on NYPD data, in which he showed that even having corrected for a lot of variables, racial profiling still existed and certain minorities were stopped disproportionately on the streets of New York as a consequence of the stop-and-frisk policy. I contrast this with a lot of the big data studies that get bandied around, which I would say are suggestive but should never be taken as conclusive. In one, for example, Google search queries revealed that there is a very high proportion of searches for racial epithets in the Utica media market of upstate New York. That's a very interesting, surprising piece of data, but all it can do is point you towards further, more detailed investigation. So it really becomes a question of whether you can constrain the variables to the point that you know what the biases are that you are trying to correct for. If we are aware of the biases in a situation that we wish to correct for, then it becomes possible to take those variables and quantify them in a more formal manner, to look at, okay, is stop-and-frisk being applied fairly or unfairly? But if all you have are variables such as Google searches, then what is it to be fair or unfair? What is it to be biased or unbiased? Is what you take as a suggestive measure of bias genuinely a measure of bias? So I would say the answer to your question is yes, but that potential can only be realized with a great deal of care and attention, and that is one of the reasons why we need an increase in algorithmic and data literacy.

Right.
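[The kind of check described here, looking for disparities that persist after correcting for other variables, can be sketched very simply. The data below are invented and the stratification is far cruder than the hierarchical models used in the actual NYPD analysis; the point is only the shape of the question: compare rates within strata of a control variable rather than in the aggregate.]

```python
# Toy audit: do stop rates differ by group even after controlling for a covariate?
# All numbers are invented; real analyses use far richer statistical models,
# but the basic question has this shape.
from collections import defaultdict

# Each record: (group, local_crime_band, was_stopped)
records = [
    ("A", "high", True), ("A", "high", True), ("A", "high", False),
    ("B", "high", True), ("B", "high", False), ("B", "high", False),
    ("A", "low", True), ("A", "low", False), ("A", "low", False),
    ("B", "low", False), ("B", "low", False), ("B", "low", False),
]

counts = defaultdict(lambda: [0, 0])  # (group, band) -> [stops, total]
for group, band, stopped in records:
    counts[(group, band)][0] += int(stopped)
    counts[(group, band)][1] += 1

for band in ("high", "low"):
    rates = {g: counts[(g, band)][0] / counts[(g, band)][1] for g in ("A", "B")}
    print(f"{band}-crime areas: stop rate A={rates['A']:.2f}, B={rates['B']:.2f}")
# If group A is stopped more often *within each band*, the disparity is not
# explained away by the control variable.
```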
So yeah, I was just going to talk for a moment about a study that probably a lot of people in this room are aware of, and that's Latanya Sweeney's research, where she found that searching on Google for black-identified names was much more likely to turn up search results saying something like "Find arrest records for Latanya Sweeney" than searching for white-identified names. And I believe she found this to be the case pretty universally, regardless of the name; she tried many, many characteristically white names and many, many characteristically black names (by which I just mean names statistically more likely to belong to individuals who identify as black versus white). It's still not entirely clear why this is the case, but I believe the standing hypothesis is that because the search algorithm promotes results that individuals are more likely to click on, if searchers are racially biased and are more likely to click on arrest-record results when they're searching for black-identified names, then over time the algorithm will learn to promote arrest-record links for black-identified names versus white-identified names.

This is very difficult to correct for. David and I talked about this a little bit the other day, and it's just, what do you do about that? The way the algorithm learns to promote links probably makes a lot of sense. It would be very difficult to identify all of the places where human bias is entering into search results, and I don't even know if you would want to eliminate the circumstances where there is bias; maybe it makes the search algorithm more useful that it does promote certain results even when bias is present. But one thing I think we can take away from this is that algorithms can also illuminate some of our human biases. The fact that this was taking place, and the fact that she found it to be very clearly statistically significant and consistent, can help us think about what biases we are bringing to our searches online.

And you could come up with hypothetical examples of this in other contexts too. Let's say you have an employment screening platform: candidates for an open position you've posted upload their resumes and applications to the platform, and based on which ones you look at, which ones you mark as candidates you're interested in interviewing, and whether or not you ultimately follow through and hire them, maybe over time it learns to promote candidates to the top of your inbox, as it were, that you're more likely to want to interview. That might be a useful mechanism, but let's say that over time it learns that you are less likely to interview individuals who belonged to an American Law Students Association in law school. Well, I don't know how many lawyers are in the room, but most of the American Law Students Associations in law school are groups like the African American Law Students Association and the Asian American Law Students Association. So let's say it actually identifies that you're less likely to interview candidates who belong to an American Law Students Association, and over time it is no longer promoting candidates who have that in their resume.
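[One way to surface the kind of learned bias described here is the after-the-fact check discussed in the next exchange: compare selection rates across groups of applicants and flag large gaps. The "four-fifths" threshold used as a rough screen in US employment contexts is one common, if crude, benchmark. The sketch below uses invented data and a hypothetical resume flag; it illustrates the check, not any real platform.]

```python
# Toy disparate-impact check: compare how often candidates with and without a
# resume attribute end up in the "interview" pile. All data are invented.

def selection_rate(candidates, has_attribute):
    group = [c for c in candidates if has_attribute(c)]
    return sum(c["interviewed"] for c in group) / len(group)

candidates = [
    {"member_of_am_law_students_assoc": True,  "interviewed": False},
    {"member_of_am_law_students_assoc": True,  "interviewed": False},
    {"member_of_am_law_students_assoc": True,  "interviewed": True},
    {"member_of_am_law_students_assoc": False, "interviewed": True},
    {"member_of_am_law_students_assoc": False, "interviewed": True},
    {"member_of_am_law_students_assoc": False, "interviewed": False},
]

rate_members = selection_rate(candidates, lambda c: c["member_of_am_law_students_assoc"])
rate_others = selection_rate(candidates, lambda c: not c["member_of_am_law_students_assoc"])

ratio = rate_members / rate_others
print(f"selection rate ratio: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths screening threshold
    print("Flag: members are selected at well under 80% of the rate of non-members.")
```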
If you could analyze the data afterwards, you might actually find out that there is racial bias on the part of the person making hiring decisions. It would be worth thinking about how we could build in ways to check for bias like that, to identify it and to reveal it to the employer. Maybe I'm doing that and I don't realize it. And there are two things that could happen. One, this employment platform that I'm using could perpetuate that bias, never tell me about it, and in fact obscure it from me by making it seem as though it is objectively evaluating candidates based on cold hard facts about them. Or it could be designed in such a way that it identifies those biases and shows them to me, so that I can try to correct for them.

Right. Okay, so that's a somewhat hopeful slice of the future, where we can use algorithms to illuminate, as you said, our human biases. But we started by talking about the ways in which algorithms may perpetuate biases inadvertently, because of the way they're structured or because of the data they are fed, which may not in fact be specifically human bias, but just data going in that ends up discriminating against certain groups. And as you said, the reason this is problematic is that just because there's a correlation doesn't mean there's causation, so the bias may not reflect anything in particular. And even where the algorithm is accurate, and the accuracy itself is questionable, accuracy doesn't equal fairness. So that's what we've got so far. The question that's left, then, is: what do we do about this? Because it seems like there are three areas for action. One is people, consumers, demanding their rights or demanding fairer treatment. One is the law. And one is the companies that provide the services or run the algorithms changing their practices, and there's a question of what the pressures on those companies are: is it the law, is it consumer demand, is it a combination of both? So in short, when we encounter problems of algorithms doing things that seem to be unfair, A, how do we know about it, and B, what do we do about it?

So the "how do we know about it" part is not simple, but one could say, very simply, transparency. I think it's really difficult to know exactly what that means in this context, though: transparency about what, and what degree of transparency to whom? Your average consumer is not really well equipped to evaluate how algorithms work, or even necessarily what data inputs are going in. But I think that at a basic level, transparency to the individual about what information is going in and how it might be used to make decisions that could impact them, that very rough level of transparency, is important. And I think from the regulator's perspective, full transparency, full insight into what all of the inputs might possibly be and into how it works, is important. But David probably has more thoughts on how transparency could be actualized, because I don't know.
I can offer a slightly more optimistic take, perhaps, which is that I do think more transparency in the relationship with users can be good, if not necessarily in a verifiability sense, then just in terms of creating more of a conversation. If Facebook shows you, okay, here are 30 keywords that we associate with your interests, people start looking at them and may start seeing horrible things. That can start a conversation that might get Facebook to react and set up those sorts of feedback loops. So I think it is important not to let the perfect be the enemy of the good, and to at least experiment with setting up more transparent structures so that we can at least see what's going on, both to raise awareness in the average consumer and in the companies themselves. Because as we've seen in the past, there are cases in which the companies do not realize how awful something has been until it accidentally pops up. There was a horror story with Google a couple of months ago involving certain suggested tags for a person's Google Photos pictures; I won't go more into it, let's just say the tags were not good. But because the person saw those tags, that allowed a feedback loop to be set up. Had those tags not been exposed to the customer, that particular process might just have sat there latent and had unknown other effects. So will it fix everything? No. But will it at least be a step in the right direction? Yes, I do think it can be a good thing. And because the path toward regulation is pretty slow going, and the policy apparatus is really behind in terms of getting to grips with these issues, I think that's something we can do at least in the short term that will ameliorate some of these dangers.

Yeah, thank you. I think some due diligence in the design stage matters: thinking about how these outputs are likely to be used. Are they likely to be used to make decisions about an individual that could impact their livelihood, education, healthcare, employment, sentencing? If so, I think it's really important for companies, or whoever it is that's processing the data, to take care to think about how bias might enter in and to take steps to correct for possibly biased outcomes from the beginning of the design process. And I think this is something regulators are looking into. I know that a couple of the agencies here in DC have been thinking about it. They've been thinking about this as a fairness issue for individuals, for consumers, and they've been thinking about it under a few existing legal frameworks that we have to promote fairness, where maybe we just haven't quite figured out how this bucket of work fits so far.

I think the problem with transparency and fairness is in part that if you are being discriminated against by, let's say, an algorithm that sets credit scores, you won't see it. You'll see that your credit score is however much, but you won't be able to see that, on average, other people who live in your district or other people of the same racial or ethnic group as you are being given lower scores than other people in similar situations. It's only when somebody somehow manages to get a look at the aggregate picture, and that somebody might be a journalist, let's say.
Only then does the unfairness come out, and then it becomes a problem of proving that there is actually algorithmic disparate impact. There was an important Supreme Court ruling this summer, I forget the details now, but it was about housing policy, and it was one of the first times, or possibly the first time, that the Supreme Court had allowed disparate impact to be used as evidence of racial bias. Before that, it had been insisted that to show racial bias you actually had to show the intent of racial bias. So that's a step forward, but those cases are hard to prove, right?

Yes, absolutely. And for a lot of good policy reasons, a lot of our legal framework is based on the idea that we're most concerned about discrimination when it happens based on some immutable characteristic of an individual, something they can't change about themselves, and when the party engaging in discrimination is doing so in an intentional, deliberate way. That said, I also think we do have a legal framework to require due diligence, reasonable steps that companies might take to prevent the most unfair outcomes. The Federal Trade Commission has pretty broad authority under the FTC Act to prohibit unfair and deceptive acts in trade, which is, again, very broad, and the Commission has been looking into this issue and thinking about where this fits in, what might constitute an unfair practice when you are taking information about individuals and using it to make decisions about their lives.

All right, David, since you're the computer guy, I'm going to leave you the last word and ask: are you basically optimistic that the law and consumer associations and society can move fast enough to keep up with the kinds of changes in computer science and development that create these biases? Because it seems to me that the law and government policy tend to move slowly, and computing moves quickly.

I think things are going to change very quickly, especially once we move towards more analysis of video rather than text and images. My hope is that the issues provoked by video will put this much more on people's radar than it is now; specifically, what data is incorporated from videos, people uploading videos taken of other people without their permission, things like that. I think these issues will come into much sharper relief than they are now, when you just have an online profile.

Okay, well, thank you both very much.