Hi, everybody. Good morning. Thrilled to be here. This is my first WordCamp, but I'm a longtime user and a longtime friend of Matt Mullenweg and much of the team, so it's great to be here. I'm a design professor, a professor of ethics and computational technologies, and I'm also the senior associate dean for research at Carnegie Mellon University. Questions about ethics are very, very big there. In fact, I think in the last year, ethics has been a booming topic in tech. It might be something that you've heard about; perhaps it's one reason you're here.

But I wanted to talk a little bit about what the trolley problem is. The trolley problem is this. Is anyone familiar with it? Who here has heard of the trolley problem? OK. For those of you not familiar, it goes like this. You are the person operating a switch, and there's a trolley heading down the track. If you do nothing, the trolley is going to run into five people and kill them. But if you throw the switch, you'll move the trolley onto a different spur. You will not kill the five people; you'll kill one person instead. So what's the right thing to do? What do you do? Who lets the trolley go and, by doing nothing, kills five people? Is there anyone who would do that? It's kind of hard, right? Who's going to throw the switch and kill the one person? Who feels good about that choice? Nobody.

So again, the question of what the right thing to do is, is a big concern. This particular question was originally formulated by two female philosophers: first by Philippa Foot in 1967, and then it was augmented by Judith Jarvis Thomson, a professor of philosophy at MIT. Judith Jarvis Thomson made it more difficult. She made a version of the trolley problem where you can stop the trolley by pushing a large man off of a bridge. She does other versions of it, too. For example: a brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. But a healthy young traveler, just passing through the city the doctor works in, comes in for a routine checkup. In the course of doing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that were the young man to disappear, no one would suspect the doctor. Would it be moral for the doctor to kill the traveler and provide his healthy organs to the five dying people, saving their lives? What do you think? No.

So we use these questions to talk about the intractability of some of the decisions that we're making around technology, around trade-offs, and around life and death. Is anyone here a fan of The Good Place? And those of you who know the trolley problem: do you know it because of The Good Place? Yeah. It's a particularly funny, great episode. If you're not familiar with it, it's a wonderful TV show about ethics starring Ted Danson as the Architect.

It also turns out that there are trolley problem memes. They started out on 4chan, and there's a Facebook group full of them. You see memes like this: there's nothing you can do to save the people. However, the lever you stand next to controls music playing from a boom box attached to the trolley. If you pull it to your left, it plays "All Star" by Smash Mouth. But if you pull it to your right, it plays any random song.
You can only pull it once; if you don't pull it, it defaults to "All Star." And you cannot kill yourself. There's also a trolley problem mug, a trolley-problem-guy mug. If I could find this, I would be super delighted.

Actually, these slides are slightly out of order, so I'm going to come back to that in a second. This is one of my favorite things in the world. It's a two-year-old's answer to the trolley problem. Ready? "Uh-oh, Nicholas. This train is going to crash into these five people. Should we move the train to go this way, or should we let it go that way? Which way should the train go?" You may not have heard it because you were laughing, but he's saying, "uh-oh."

So the trolley problem is a way to talk about these ethical conundrums. And we use another one, too: Isaac Asimov's Laws of Robotics. He first wrote these in 1942, a really long time ago. There are three laws. Law number one: a robot may not injure a human being or, through inaction, allow a human being to come to harm. Law number two: a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law. And law number three: a robot must protect its own existence as long as such protection does not conflict with the first or second laws.

So why do we talk about the trolley problem so much these days, and why do we think about the robots? It's because the trolley problem is a question of a number of things. It's a way that we can think about trade-offs and accountability and control and design when we're talking about artificial intelligence and digital and computational systems. So we could talk about the trolley problem, or we could talk about the Uber-hitting-a-pedestrian problem. You're probably familiar with this: back in March in Tempe, Arizona, a self-driving Uber hit a pedestrian and killed her. The human operator of that vehicle, sitting in the driver's seat, had apparently been watching The Voice when the crash happened and the pedestrian died. It's a really problematic situation, because in some ways she's being held responsible for circumstances that were beyond her control. It's a self-driving vehicle. She wasn't involved in the design of that vehicle. She wasn't involved in Uber's design decisions. And in that setting, it's a complicated set of factors. One of the things Uber had done was disable the Volvo's built-in automatic braking technology in favor of its own. So it's a complicated situation.

Just yesterday, Waymo's self-driving taxis launched in Phoenix, Arizona. Phoenix is making a lot of trade-offs and putting a lot of self-driving vehicles on the street. My city, Pittsburgh, has lots of self-driving vehicles too, because we are a center of autonomous vehicle development, just up the street from my house. M.C. Elish (Madeleine Clare Elish) refers to situations like the Uber crash as "moral crumple zones," where humans are held responsible and accountable for decisions they didn't make. She describes it as where people get caught within ambiguity, within systems of distributed control. So it's a question of who gets blamed: the human operator, for design decisions beyond her control.

Now, when we talk about self-driving vehicles and the trade-offs, there are a lot of things that the executives of these companies will say.
They'll point out things like: 40,000 people were killed in car accidents last year; accidents cost $1 trillion a year. A trillion dollars. And congestion costs $305 billion a year. These are numbers that refer to the lost productivity of workers, people sitting in traffic, the cost of transporting goods. I found this kind of interesting. It's a little bit hard to read, but if you take a look, you can see the number of hours you might be spending in congestion and what that cost would be to the driver and to the city. So it's an expensive thing, it's a costly thing, and it costs human lives. These are major questions, and self-driving vehicles will certainly be a part of our future.

But there are questions that come up, and there are reasons that we turn to trolley problems to try and figure them out. They're questions of ethics and risk and liability and, for that matter, legal code. A young lawyer named Bryan Casey, who is at Stanford right now, basically as a postdoc, asks whether these robots (and by this he means self-driving cars) will maximize morality or minimize liability. It's a complicated set of factors. So it's a question for lawyers. It might get sorted out by insurance people. And I'm a design professor, so I think it's a question of design as well as a question of algorithms.

When we're talking about algorithms, we're not talking about something quite so random. We're talking about questions of fairness and accountability and law enforcement and, as we've seen, matters of life and death. We also see different ways these play out in what we could call embedded governance, or code as law. Have you heard about GPS starter interrupt devices? If you are someone who has maybe less-than-optimal credit and you buy a car, it might have a starter interrupt device, which makes it so that if you don't submit your payments on time, the car won't start, or the car will shut down, as it has done for people who are driving down the highway and find their car stopping because their payments are late. These devices are a way to keep people paying on time, but what they also do is embed law in the objects around us, with matters of potentially life or death at stake. There's a CBS story about a woman trying to get to her dialysis appointment: she's in kidney failure, and her car won't start. Another is about a car gliding to a stop in Las Vegas. So we call this embedded governance.
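To make "code as law" concrete, here's a minimal sketch of the kind of rule a starter interrupt device embeds. It's purely hypothetical, not any vendor's actual firmware; the function name and the grace period are made up for illustration:

```python
from datetime import date, timedelta

# Hypothetical sketch of a starter-interrupt rule. The point is that a
# lending policy becomes executable law the moment it is written into
# the vehicle itself.
GRACE_PERIOD = timedelta(days=3)

def starter_enabled(payment_due: date, today: date) -> bool:
    """Allow the engine to start only while payment is within the grace period."""
    return today <= payment_due + GRACE_PERIOD

# Note what the rule cannot see: a medical emergency, a dialysis
# appointment, a car already moving. It executes regardless of context.
if __name__ == "__main__":
    print(starter_enabled(date(2018, 12, 1), date(2018, 12, 9)))  # False: no start
```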
And there are other things, too. You might consider this question, for instance: Palantir worked with the city of New Orleans, secretly, to unleash a predictive policing technology that racially profiled people in an effort to predict arrests. I also want to point out that questions of facial recognition are a really, really big issue. If you're familiar with the group AI Now, I'd recommend taking a look at their report; they say some really interesting things about this. But sometimes it plays out in ways that are a little more everyday, like this example. I'll point out that the audio is not going to be tracked exactly right, but I want you to see this nonetheless. It's about how HP computers are racist.

"This is Desi, using the video tracking software. All right. Explain. My coworker Wanda and I are sitting in front of an HP MediaSmart computer. State-of-the-art computer, wouldn't you say? We're using the face tracking software, so it's supposed to follow me as I move. I'm black. I think my blackness is interfering with the computer's ability to follow me. As you can see, I do this: no following. Not really, not really following me. I back up, I get really, really close to try to let the camera recognize me. Not happening. Now, my white coworker Wanda is about to slide into frame. You will immediately see what I'm talking about. Wanda, if you would, please. Sure. Now, as you can see, the camera is panning to show Wanda's face. It's following her around. But as soon as my blackness enters the frame, which I will sneak into. I'm sneaking in, I'm sneaking in. I'm in there. That's it, it's over. And there we go, it stopped. My hands are here. Wanda, please get back in the frame. Get back in. As soon as white Wanda appears, the camera moves. Black Desi gets in there. Oh, nope. No face recognition anymore, buddy. I'm going on record and I'm saying it: Hewlett-Packard computers are racist. I said it."

So this is maybe humorous, but there are also soap dispensers that don't notice dark skin. There are fitness trackers that don't read through dark skin. And there are cameras with a blink detector that, when an Asian person is photographed, say "you're blinking." These are all questions of design and questions of algorithms and questions of choice that get built into the objects around us and actually do have a material effect on our lives.

And if we want to look more closely at facial recognition, here's something to consider. There are various facial recognition technologies in place, and not only do they read faces, they read emotions and opinions. They also misread. In this case, these are three members of Congress. When the ACLU fed a set of mugshots to Amazon's facial recognition technology, it falsely matched 28 members of Congress, including the three you see here. And so they wrote a letter to Amazon, to Jeff Bezos, saying that this really needs to be considered. But do consider the questions of life and death. Consider what happens when people are stopped, and the disproportionate number of people of color who are stopped in policing incidents, and you begin to see why things are really problematic.
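To see how a system like that can "misread," it helps to know the generic shape of the technique: face systems reduce each photo to an embedding vector and declare a match when two vectors are close enough. This sketch is only an illustration of that idea; it is not Amazon's API, and the vectors here are synthetic stand-ins:

```python
import numpy as np

# Generic shape of face matching: compare embeddings against a threshold.
def is_match(a: np.ndarray, b: np.ndarray, threshold: float) -> bool:
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cosine >= threshold

rng = np.random.default_rng(0)
lawmaker = rng.normal(size=128)                        # one person's embedding
stranger = lawmaker + rng.normal(scale=0.4, size=128)  # a lookalike's embedding

# At a permissive threshold the lookalike "matches"; at a stricter one
# it doesn't. The threshold you pick decides how many false matches a
# deployment produces, and who bears the consequences.
print(is_match(lawmaker, stranger, threshold=0.80))  # True
print(is_match(lawmaker, stranger, threshold=0.95))  # False
```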
There's also the question of the data that you don't collect. This is Mimi Onuoha, an artist, designer, and writer who writes about uncollected data sets. She points out that what you don't collect is just as important as what you do, because you can't analyze, you can't parse, you can't subject to machine learning what you don't collect in the first place. What you see here in this file cabinet are her uncollected data sets. When she puts this in a gallery, she makes it available for you to open the file drawers and look at the different file folders inside, all of which, of course, are empty. One of the uncollected data sets she notes is shootings of unarmed African-American men, data that actually did begin to be collected, and as a result, change began to be made. She also points to a project with Asian actors on Broadway, who noted that there were almost no roles for Asians. They began to collect the data, and once the data had been collected, there were newer roles and more roles. They were able to argue for themselves and for more equitable casting on Broadway.

A lot of these things take place within what we call the black box. We don't get to see what's there. We can't see inside. We can't see how these decisions are made. In fact, the algorithms can't even explain to us how the decisions get made. And even if they could, when we say "maybe we just need to make them transparent," that doesn't really work. Full transparency about what's going on might be harmful. It might make things more confusing. It might create false binaries, comparing apples to oranges, things that shouldn't be compared. And there are technical and temporal limitations: limitations on what our technology does and on how long we can use these technologies in time and space. So being transparent isn't really the answer. Mike Ananny and Kate Crawford (Mike is a professor at USC Annenberg, and Kate Crawford is a co-founder of AI Now) point out that what you really need to do is not make things transparent, but make it so people can interpret what algorithms are doing. Interpretation is different from transparency.

And I want to point out that these are questions of culture. They're questions about the companies we all work at, the institutions we all work at. What Kate Crawford, Meredith Whittaker, and the staff at AI Now write is that just as many AI technologies are black boxes, so are the industrial cultures that create them. So is Amazon; so is Google.

And you start seeing ways that this might actually change. Here are just a couple of examples of recent news stories. This story that asks "Can an algorithm tell when kids are in danger?" is about something taking place in my county, Allegheny County, in Pittsburgh. It's the Allegheny Family Screening Tool, an algorithm that tries to determine whether a child is likely to be removed from their family within the next two years when a complaint is called in to the Allegheny County Department of Human Services. Now, the data this is based on comes from 100 different public sources, but those public sources may be biased: they may disproportionately collect the data of lower-income families and people of color. The problem is that biased data will make for biased output. But on the other hand, something that protects children is better than nothing, right? So the head of the department is having the algorithm audited, working together with AI Now and a number of other organizations, to make sure that what they're doing is okay, or at least somewhat better. And it's hard. There isn't a good answer.

Finally, the Defense Innovation Board is apparently going to look at the ethics of AI in war. If you look at the history of artificial intelligence, it has been funded all along by the Department of Defense; that has always been the case. So it's really good that ethics might make a start there. "When Your Boss Is an Algorithm" is an excerpt of a book called Uberland that just came out this fall. MIT is launching a college that's going to do, in a sense, digital humanities at scale: AI and the humanities across the campus as a way to understand things. And Amazon has scrapped its resume-screening tool because it was biased against women. It had become so biased against women that even resumes from traditional women's colleges were penalized. Tools like that are meant to move people through the pipeline and deal with resumes at scale.
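Auditing an algorithm like the Allegheny example sounds abstract, but one of the simplest first steps is just comparing error rates across groups. A minimal sketch with entirely made-up data, standing in for whatever decision log a real audit would pull:

```python
import pandas as pd

# Toy decision log: whom the model flagged vs. what actually happened.
log = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged": [1,   0,   0,   0,   1,   1,   1,   0],
    "outcome": [1,   0,   0,   0,   1,   0,   0,   0],
})

def false_positive_rate(df: pd.DataFrame) -> float:
    """Share of people with no bad outcome whom the model still flagged."""
    negatives = df[df["outcome"] == 0]
    return float((negatives["flagged"] == 1).mean())

# If one group's false-positive rate is far higher, the "neutral" model
# is reproducing the bias baked into the data it was built on.
for name, members in log.groupby("group"):
    print(name, false_positive_rate(members))  # A 0.0, B ~0.67
```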
So I like a quote that Bryan Casey has about trolley problems. He says that trolley-like problems are not mere philosophical curiosities: those designing the decision-making systems cannot simply shrug their shoulders. Rather, they must design in advance how their systems will respond when life and limb are on the line. And I want to point out that this involves everybody in this room.

So what do we do? Paul Virilio pointed out in 1999 that all of these questions are part and parcel of the technologies themselves. When you invent the ship, you also invent the shipwreck. When you invent the plane, you also invent the plane crash. And when you invent electricity, you invent electrocution. Every technology carries its own negativity, which is invented at the same time as technical progress.

And I want to point out that these ethical problems are not new. We know the trolley problem was coined in about 1967, but AI isn't new either, although we act like it is. I collect these headlines: "AI is the new black." "AI is the new UI." "AI: the next digital frontier." "AI is the new space race." "AI is the new electricity." "AI is dreaming up all kinds of new video games." I highlighted the word "new" here: it appears six times on one screen alone, although one of those is New York, so it doesn't count. "Google's AI is a new paradigm that unites humans and machines." And in this particularly sexist example, I'm apparently the new AI; it says something about how she'll never ghost you, never charge you more for a logo, et cetera, et cetera.

But AI isn't the new anything, because it's not new. The term was actually coined in 1955 by John McCarthy: making machines do things that would require intelligence if done by man. If you've ever heard of human-computer symbiosis, the idea of humans and computers working together in close partnership, that idea is from 1960, from a man named J.C.R. Licklider, who put in place a lot of the agenda for artificial intelligence research between 1960 and his death in 1990. And then, of course, Marvin Minsky said in 1961 that we are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines. Which is why it gives me pause when I see someone like this describing "a new era of artificial intelligence" in 2016. It isn't that new.

And we still don't have good ways of talking about it. This is why we resort to trolley problems, or to cliches, things like this. I like calling this woman Cyborg Lady. Am I right that that's jQuery? Is this the future? Here, ready? Here she is: "Brits fear the AI future." Here she is again for IBM Watson; she works for IBM, "the ingredient brand helping inform the purposeful business of tomorrow." And she works at IKEA as well: "artificial intelligence embedded in furniture, IKEA is considering." So we turn to these cliches and these examples to talk about AI because we don't understand it. If you do a Google image search, this is what comes up. You see images like that: the grid over the face; cyborg man beholding his hand. And one of my students did an Adobe Color pull and extracted the color palette, and he pointed out that these colors and shapes are quite familiar when he thinks of AI. These are the colors that come up.
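If you're curious to try the same kind of palette pull yourself, clustering an image's pixels with k-means is a common way to do it. A rough sketch, assuming Pillow and scikit-learn are installed; the filename is just a placeholder:

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

# Flatten the image into a list of RGB pixels, cluster them, and read
# the cluster centers back out as the dominant palette.
pixels = np.asarray(Image.open("ai_collage.jpg").convert("RGB"))
pixels = pixels.reshape(-1, 3).astype(float)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
for r, g, b in kmeans.cluster_centers_:
    print(f"#{int(r):02x}{int(g):02x}{int(b):02x}")  # the palette, as hex
```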
And what do those colors tell us? They tell us: cold, logical, rational. Not friendly, not curious, not open. Not interpretable. So again, it's that question of the black box. And it's hard to communicate about AI, because communicating about it means understanding it. This is why we turn to cliches. This is why we turn to trolley problems.

I like what Eric Johnson had to say about the movie Minority Report, one of these cliches that we turn to. It's really, really hard to talk about digital reality tech, right? He says these fields are full of jargon, inconsistent in practice, and difficult to grok, to understand, if you haven't seen all the latest demos. Pop culture is a shortcut to a common ideal, a shared vision. But it still gives me pause when I see images like this come up when I type "artificial intelligence" into Google, and this is what comes up on the Google page. I don't think that picture really helps. So it's the black box, and that's why we resort to metaphors and cliches and use the trolley problem. I'm not sure this picture helps either, but it makes me smile.

In 1945, Kenneth Burke said this about metaphors: metaphor is a device for seeing something in terms of something else. It brings out the thisness of a that, and the thatness of a this. It's nice, right? The thisness of a that. There's another way to talk about this, too. It's a little bit wordier, but stick with me here for a minute: along the philosophical fringes of science, we may find reasons to question basic conceptual structures and to grope for ways to refashion them. Old idioms are bound to fail us here, and only metaphor can begin to limn the new order. If the venture succeeds, the old metaphor may die and be embalmed in a newly literalistic idiom accommodating the changed perspective. We need to find ways to somehow encapsulate what these changes are, and it isn't very straightforward. By definition, metaphor is the thing your English teacher told you was good to use, and cliches are what we're told are bad to use, right? A cliche is a phrase or expression regarded as unoriginal or trite. And that's kind of where these different stories and problems come into play: they're almost trite in how we use them. But then again, cliches are cliches for a reason. They help us make sense of things, and we use them to explain things quickly. I don't know if you've ever stopped to realize how many sports metaphors you use in conversation. I'm not a sports person, but they're everywhere; I probably have baseball metaphors all over the place, and that's because we all understand them and can use them to say things quickly.

But I want to suggest a couple of different options for those of us who are designers, front-end people, writers. I want to point out that humor and uncanniness might be a way forward, that they could help us to understand artificial intelligence better. Is anyone familiar with the uncanny valley? The uncanny valley is basically why we get freaked out when robots are too close to human. In 1970, Masahiro Mori coined the term in an article he wrote, mapping out the weird eeriness we feel about robots and people. He said we should begin to build an accurate map of the uncanny valley, so that through robotics research we can come to understand what makes us human. What you see here is his mapping of different kinds of technologies, a toy robot, a prosthetic hand, and there's a point at which it's good, it's good, it's good, whoa, it's really bad. The eeriness is really creepy, but this eeriness, he says, is perhaps essential for us as human beings. Maybe it's a survival technique.

I'm a fan of Janelle Shane. Is anyone here familiar with her work, AI Weirdness? She trains neural nets to do all kinds of things, and I think that not only are the results deeply funny, they're important and maybe subversive. She trained a neural net on guinea pig names and came up with a bunch of new guinea pig names. So this is Hanger Dan and Princess Powell. As my husband says, any five-year-old could name a guinea pig Princess Powell, but it takes artificial intelligence to name it Hanger Dan.
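You can get a feel for this kind of name generation with something far simpler than the neural nets Shane uses. Here's a toy character-level Markov chain, the same spirit but much dumber, trained on a handful of made-up guinea pig names:

```python
import random

# Learn which letter tends to follow which, then sample new names.
names = ["snickers", "peanut", "buttercup", "nibbles", "patches",
         "hazel", "oreo", "waffles", "biscuit", "pumpkin"]

chains: dict[str, list[str]] = {}
for name in names:
    padded = "^" + name + "$"              # start and end markers
    for a, b in zip(padded, padded[1:]):
        chains.setdefault(a, []).append(b)

def invent_name(rng: random.Random) -> str:
    out, ch = [], "^"
    while True:
        ch = rng.choice(chains[ch])
        if ch == "$":                      # hit the end marker: name done
            return "".join(out)
        out.append(ch)

rng = random.Random(3)
print([invent_name(rng) for _ in range(5)])  # plausible-ish nonsense names
```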
She has also played with different image-recognition technologies to see what the algorithms are doing. She realized that when she fed an algorithm a lot of pictures of rocky hillsides with grass, the algorithm started hallucinating sheep. It described sheep as being there when they weren't. So then she took some pictures of sheep and colored them orange; that's what you see in the middle. The algorithm decided those were flowers in a field. And then she decided to really punk the algorithm and uploaded a picture of goats in trees. This, it said, is a flock of birds flying in the air, or perhaps a group of giraffes. What she says about this, I think, is really neat: if life plays by the rules, image recognition works well, but as soon as people or sheep do something unexpected, the algorithms show their weaknesses. This is a fun, amusing setting in which to look at these things, but you could just as easily imagine darker settings where algorithms fail us.
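Probing an image classifier the way Shane does is easy to try yourself. A minimal sketch, assuming torchvision is installed; the filename is a placeholder for whatever out-of-context photo you want to test:

```python
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

# Load a pretrained classifier and its matching preprocessing pipeline.
weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("goats_in_trees.jpg").convert("RGB")
with torch.no_grad():
    probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]

# Print the top five guesses; out-of-context photos are where the
# labels start to go strange, exactly as Shane observed.
top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {p.item():.2f}")
```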
In a second, I'm going to show you the work of Madeline Gannon, who just got her PhD from Carnegie Mellon in architecture. She's a robot whisperer: she has tamed an industrial robot to move and play with people. And she says something I think is really important about this. You'll see her here: "When everything comes together and you're in the space with the robot, and you just have a very raw experience with this animal-like machine responding to your every move, all the technical aspects sort of melt away into the background. It's incredibly important to have opportunities and spaces to come in and experiment with and misuse these existing technologies." This was at the Design Museum in London, and Mimus, the robot, was there for six months. Very quickly, kids learned how to play with and entertain the robot. The robot could get bored of people, and then you'd have to try to capture its attention again. But what she says is really important: she talks about having the opportunity to learn, play with, and misuse technologies as a way to change what we might think about them.

I think this is a question of reframing the problem. So let's revisit the trolley problem. This is my favorite meme. We could toss the trolley problem out, but I want to point out that even people who run autonomous vehicle companies understand the use of the trolley problem. As Chris Urmson said, "I don't think the trolley problem is a bad ethical quandary; I just don't think it's solvable. But it forces you to think about it." He said this to my class. I Skyped him into a class I taught this semester called AI and Culture, and one of my students asked: what about the trolley problem? What should we do? He thinks it's important as a way to think through things, even if there isn't a solution.

I want to point out that one of the recommendations AI Now makes is that we need to include fields other than computer science and engineering, and that this is vital. I would argue that design is vital. I would argue that front-end technology is vital, and writing, and philosophy. Indeed, design is necessary. This isn't just about the ethics of design practice; the way we encounter algorithms and AI is itself a question of design. And there are a couple of things that designers (and probably a lot of people in this room who aren't designers) might want to do. As we think about working with AI, there are things we bring to the table. We understand people, human needs and human considerations, and translate them for systems. We frame problems. We determine how data can be collected and how it will be used. We can help people interpret what AI is doing, and we can communicate and visualize that. And we can support accountability.

And we might say: well, is ethics going to slow us down in a world of "move fast and break things"? I want to point out what my colleague David Danks says: ethical and efficacious are not antonyms. They don't need to be a trade-off. David asks: do we design for the world we have, or the world we want to have? So design is where the rubber meets the road.

And I realize that trolley problems are always going to be a conundrum. This one is called Paradox. And that's Oprah. And this one is by my house; I go running by Carnegie Robotics, so I'm not totally sure what to think about that. But I put all of this out here to say that trolley problems are problems, they're quandaries, they're conundrums. And there's a role that we have in framing and solving them. Thank you very much.

Thank you, thank you. Do we have any questions for Molly?

So I'm not 100% sure how to ask this question, but when you showed the colors, the blue scale, to me it brings up not only those ideas of technology, but also of business. So when you're talking about these kinds of developments in technology for business, you can't discount the fact that there's a profit model that has to be adhered to, and also that technology is treated like property: it's patentable and hidden away. So I guess my thought, in the context of this WordCamp, is: what do you think, or fear, about the idea of this becoming open source? Obviously it's going to happen, self-driving cars where we're tinkering around and we've created our own self-driving car in our basement, which is what we're all kind of doing.

So, a couple of things. First, you're right, and I hadn't thought about it: it would be really interesting to compare the AI colors to IBM's, for instance, or Google's. Indeed those blues: you know exactly what color I mean when I say "that blue," so there's probably a correlation there as well. I wonder what it might look like if AI were rosy, if it looked like the color palette of the movie Her as opposed to Minority Report, for instance. And in terms of self-driving vehicles, Bryan Casey argues that it's going to be sorted out among insurance people and lawyers, that through those settings of risk mitigation it'll make its way into the public sphere, and that that might ultimately be an optimistic and good thing. I don't know if it really is going to be open sourced, but there are all kinds of things that people do to make things autonomous. People have been doing garage robotics work for a long time. Yikes.

Is there a way to use AI to improve the web experience? Right now you can configure chatbots to communicate with somebody; if something lands on your website, there's a response. But is there anything beyond that, like serving content?

So, chatbots have actually been around since the 1960s, and they're a really hard problem to solve. I have a master's student right now who's deep in it, and he's banging his head against it. We've been trying to get it right for a long time, but conversation is one of those things that's really easy to get wrong.
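For a sense of how those early systems worked: ELIZA, from the 1960s, was little more than pattern matching and canned responses. A tiny sketch of that idea (not Weizenbaum's actual script, just the flavor):

```python
import re

# A few ELIZA-style rules: match a pattern, echo part of it back.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
FALLBACK = "Please, go on."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return FALLBACK

print(respond("I need a better chatbot"))    # Why do you need a better chatbot?
print(respond("It keeps getting it wrong"))  # Please, go on.
```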
There are all kinds of different focuses, though, for artificial intelligence research, especially for those of us who build systems on the front end and on the web. Again, I've shown examples of Janelle Shane's work; her blog, AI Weirdness, talks about playing with neural nets, and she provides links to examples. There are also countless machine-learning-oriented classes out there. And I'd recommend Rebecca Fiebrink's work on AI and music, and her tool, the Wekinator, as a way to play with machine learning and music. You're welcome. One more?

Hi, thank you for your talk. As I think about AI, it seems to me that AI would continually work better the more information it has, but I also can't help but think that you're going to bump up against privacy issues. I don't know if you've heard about this, but there was this big law in Europe, the GDPR. There was a good talk yesterday about it, I heard. Parts of the different laws butt up against each other: you can't collect certain data, and yet to ensure privacy you have to collect data to verify who people are, so you can't do one or the other. It seems that for AI to know what it needs to know, to do what it needs to do, you're going to violate somebody's privacy.

Absolutely, and it's a big, big question. If you've been reading the news this week about more Facebook revelations and questions, it seems to be an ongoing question with that company. I think one of the biggest questions around Silicon Valley right now is whether the government will have to regulate these companies because they can't be trusted to regulate themselves. I might argue that the government may need to step in, although they'll probably do it in some ham-fisted way that shuts off future possibilities. But in terms of privacy, it's really, really vital. My colleague Lorrie Cranor at Carnegie Mellon is a privacy expert, and her work in this area over the last 25 years is really, really useful if you want to dig in and learn a little more.

I think that's all the time we have. Thank you so much, and have a great WordCamp. Thank you.