Hi everyone. We are going to go ahead and get started. I wanted to say a quick thank you again to everybody for coming out. I think Sophie's birthday party analogy from several hours ago was particularly apt. It's just really great to see that people care so much and want to talk about the issues that we find so interesting here at Mittler. It's also particularly fitting given that this is our journal's 25th anniversary. So thank you all for coming out to celebrate our journal's birthday with us.

I have the task of introducing our keynote speaker tonight. She has the difficult but enviable job of balancing brand loyalty, consumer experience, and trust for one of the world's largest payment providers while trying to move fast, innovate, and remain competitive in an increasingly cutthroat market for financial services firms. She came all the way from San Francisco to our frozen corner of the world. So please join me in giving a warm welcome to our keynote speaker, Vice President and Chief Privacy Officer of PayPal, Christi Chan.

Thank you. It's so exciting to be here with you, and thank you for the warm welcome. To be able to actually talk, learn, and just have a good conversation about the topics I care so much about is incredible. It's awesome, and I geek out at this. So thank you for indulging me and for setting aside a day to do this together.

As I was getting ready to come here, just like before any of my other business trips, I was preparing dinner with my family. As part of our normal routine, on Thursday night we made salmon or something like that. I usually start my cooking sessions, whenever I put something in the oven, with a command to Alexa: "Alexa, 15 minutes on the clock." Actually, my five-year-old corrected me the other day: "Mom, you have to say please." So: "15 minutes on the clock, please."

As the timer ticks, my five-year-old daughter Leila continues to engage with Alexa: "Alexa, count from one to ten." "Alexa, sing me the Sesame Street song." "Alexa, what is the temperature in Chicago, where Nana lives?" On and on, engaging. And this is fascinating, because even a year ago Alexa didn't really understand what Leila was saying, so it would come back with irrelevant answers. But the way my five-year-old has learned to engage with our digital assistant has improved dramatically, and with that, Alexa's responses have improved dramatically too. She knows to firmly state "Alexa" to wake the assistant if it's not already awake. She knows exactly how to enunciate each command and string together search terms. She even knows to slightly over-enunciate, because if she doesn't, Alexa sometimes comes back with a wrong answer. It's incredible: in just the course of a year, a five-year-old has taught an artificial intelligence assistant to adapt to its owner's style of speaking and the kinds of commands she gives. And in order to adapt to its owners' voices, commands, and styles, Alexa listens to and stores our voices and our commands on Amazon's servers.
And this is to enhance the quality of the responses, to train the assistant and the model to be more effective and more accurate, right? So this is all happening within the four corners of our home. It's a normal day, we're going about our routine, and I don't think about it. In fact, until a while ago I had not thought about the possibility that this information would be shared with somebody else. It almost feels like a stranger, or even somebody you know, is standing outside the house listening to our conversation. That's a little bit creepy, right? So I don't think about the fact that this could potentially be shared outside the four corners of our home.

Well, a couple in Portland having a private conversation in their home thought the same. It turns out that a woman in Portland, Oregon, found out that her Alexa had recorded a conversation between her and her husband, without their permission or awareness, and sent the audio recording to a random person on her contact list. It was a major glitch. Now, Alexa doesn't record everything in the house, right? Alexa has to be woken up; a command has to start. What happened was that Alexa woke up to background conversation noise that sounded like "Alexa," and the conversation that followed was interpreted as a "send the message" request. At that point Alexa, as Alexa does, checked "to whom?", and the background noise was again interpreted as a name, somebody on their contact list, let's say John. Alexa then asked, "John, right?", and the background noise was once more interpreted as "right." And the message was sent. So this is very, very unlikely, but it happened.

We don't always think about the way we engage with a lot of these technologies, the way information is gathered, and how it is then utilized and transmitted. Enhanced data technology, including trained AI, brings significant benefits and convenience to individuals and to society. But protecting this data from misuse and unauthorized disclosure is increasingly challenged by the massive amounts of data being collected, and by technology that keeps getting smarter at combining and analyzing that data. More than ever, trust in how data is processed is top of mind.

So let's talk about the massive amounts of data and the footprint we're leaving around. Who here has a Facebook account? Not anymore? A lot. An Instagram account? LinkedIn? Venmo? Yeah. The vast amount of personal information we post on social media sites, combined with the ability of social media companies to link our online behavior to physical locations and other information, is creating a shift in how companies are able to utilize data, including, of course, for marketing. For example, micro-targeting is a marketing strategy that uses consumer data and demographics to identify the interests of individuals, or of very small groups of like-minded individuals, and potentially to influence their thoughts or actions through advertising.
To achieve this personalization on such a massive scale, you need massive quantities of data about individuals, continuously updated, to create predictive analytics about the sentiment of a group or a population. If we apply this to a voting-population scenario, which is not unfamiliar to us, understanding the population at an individual level can enable campaign leaders and directors to go beyond standard party-oriented messages, communicate with voters about specific topics, and potentially influence voters' decisions.

Then there's online advertising. I think we heard from the panelists a little bit about online advertising, where your behavior on websites, what you click, what you visit, what you visit before and after, is tracked through cookies and informs what types of products you're potentially looking for. I consider that "back in the day," right? Now that online behavior is merged with other, offline data about a person's whereabouts, what they did, and what they purchased, companies can create a holistic, omni-channel experience.

For example, I'm actually in the middle of renovating my house. Last weekend I was at a park with my daughter, looking for a rug on my phone. I got distracted chasing after the five-year-old, so I stopped looking. A few days later, on a different device, I think my work computer, rugs flashed up from various companies, not the one whose rug I had looked at, and I started to get coupons for 20% off rugs from all these places. Not only that, I also started to get advertising about toilets. Somehow they had figured out I was renovating. Toilets and bathtubs and random home-renovation advertisements. To be fair, I hadn't looked for toilets and I hadn't planned on buying one, but when shiny new toilets came across my screen, I thought, oh, maybe with my bathroom renovation I could use one, right? So it's starting to influence the way I think.

The point is, a lot of data is generated about our activities, and we fuel that one way or another. Sometimes we do it consciously, as with personal finance management sites. I don't know if you're members of Mint.com or other financial aggregation sites that help you see a holistic picture of your finances. The information gets aggregated there by your choice; you decide to provide it, because it is nice and convenient to see all of it in one place. Now companies are able to segment the types of spending and experiences: who you are, what you like to do, what you like to spend money on, how much debt you have. Did you recently buy a house? How much was your mortgage? What is your credit score? All of that information is aggregated and added, and it becomes bigger and bigger and bigger.

And then there are apps. Before, it was the computer, and before that, offline shopping. Now, with the explosion of mobile devices, we put and share so much very intimate information about ourselves and record it on these apps. Because, again, it helps us keep track, it is convenient, and it is incredibly beneficial, to some extent.
We are all busy, and these tools help us track exactly what we want to do. So there is a lot of data, and then there is ever-changing data technology with the capability to crunch, combine, and analyze massive amounts of data in ways we could not before. The evolving technology is constantly increasing in what it can do and how much data it can consume.

So let's talk about your face. We touched on biometrics earlier, but facial recognition software is technology capable of identifying or verifying a person from a digital image. It generally works by comparing selected facial features from a given image against faces in a database. Mobile device users are familiar with this, because one use of the technology is security authentication: we have Face ID on iPhones, and I'm sure there is something similar on Android, though I don't use it so I'm not familiar, to validate that the person trying to access the device is who they are supposed to be. You may also know this technology from tagging friends in Facebook or Instagram photos, where it automatically suggests who is in the picture, or from Snapchat's animated filters, like the dog face. It is really fun. But other use cases include law enforcement and related activities: there are real-time facial recognition systems that let surveillance cameras scan the faces of pedestrians on the street and store them in databases along with images from driver's licenses, visa applications, passports, and so on. Many of these use cases raise concerns about privacy.

And while facial recognition technology is improving, studies have demonstrated that it can be imperfect, inaccurate, and sometimes ineffective. In fact, facial recognition has been shown to work less accurately on people of color. A study by Joy Buolamwini of the MIT Media Lab found that the error rate for gender recognition for women of color across three facial recognition software packages ranged from 23% to 36%, whereas for lighter-skinned men it was between 0% and 1.6%. That is a dramatic difference. Data technology, AI, and machine learning are only as smart as the data used to train them, and studies like this show that biases in the world can seep into the technology.
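To make that kind of finding concrete, here is a minimal sketch, in Python, of a per-group error-rate audit, the sort of test that surfaces exactly this disparity. The group labels, sample counts, and the 0.05 tolerance are invented for illustration; they are not the study's actual data or any production test.

```python
from collections import defaultdict

# Hypothetical per-sample audit results for a gender classifier:
# (demographic group, was the prediction correct?). In practice these
# would come from running the model on a labeled benchmark set.
results = (
    [("lighter_male", True)] * 99 + [("lighter_male", False)] * 1
    + [("darker_female", True)] * 70 + [("darker_female", False)] * 30
)

def error_rate_by_group(samples):
    """Error rate computed separately per demographic group; a single
    overall accuracy number would hide the disparity entirely."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in samples:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

rates = error_rate_by_group(results)
print(rates)  # {'lighter_male': 0.01, 'darker_female': 0.3}

# Flag the model if the worst-served group's error rate is far above
# the best-served group's (0.05 is an illustrative tolerance).
if max(rates.values()) - min(rates.values()) > 0.05:
    print("disparity too large: review training data and model")
```

The point of splitting the metric by group, rather than reporting one aggregate accuracy, is precisely the lesson of the study: a model can look excellent on average while failing badly for a subpopulation.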
So we have massive amounts of data and cool, ever-improving technologies. Let's talk about our reliance on these technologies. Today, we're no longer trusting technology just to do something, but to decide what to do and when to do it. Simple artificial intelligence organizes and filters my email as it comes in, sorting it into three tabs: primary, social, and promotions. It does this by learning models and preferences based on my own behavior, because what may be spam to me may not be spam to you. And Alexa, you know by now that I love Alexa. Alexa knows which fish oil to order when I say, "Alexa, order me fish oil." It knows exactly the brand and exactly where to send it, and two days later the brand I want is at my house. Seamless, incredibly seamless.

And then there are smart cars. I live in Northern California, and I kid you not, the other day as I was driving down to my office on the 101, I saw a car next to me where a person literally had their laptop propped open. I moved to California from the East Coast, and that totally scared me. Of course, some of us have heard about some of the accidents, but we trust smart cars to drive us to a location, to stay within the lane and the allowed speed. This person clearly trusted theirs, laptop propped open, going about their day as if they were on a bus with a driver. There was a driver, just not a person, right? But imagine your digital assistant plugging into the smart city to direct your smart car to the nearest free parking space, or your smart refrigerator ordering supplies when it recognizes that the food stock is getting low. The interconnected devices and the trained data behind the smart technology make my experience so seamless that I become more and more reliant on, and even dependent on, these technologies.

So we've talked about the massive amounts of data, the ever-improving technologies that combine and train on that data, and the fact that we rely on them heavily. And we train these technologies, by the way. I think someone said technology is neutral; well, I don't know. We train these technologies, so what if we're inadvertently training in biases? Technologies do precisely what they're taught to do and are only as good as their construction and the data they're trained on. Algorithms that are biased will end up doing things that reflect that bias. If the data used to train a model doesn't accurately reflect the environment the model is going to operate in, the results won't accurately reflect reality either, and potentially flawed algorithms can amplify and nudge biased practices and behaviors.

The boundaries differentiating organizations in different sectors, different types of consumers, different verticals, are all blurring as data and technology become enablers of almost everything, and that is really converging privacy, security, and technology. Organizationally, I see this changing a lot as well. Looking back on my own career: I started out at a law firm advising clients, usually in-house attorneys, who would call and say, hey, we need to figure out this problem, can you please write a memo telling us what actions to take. Very seldom at that time did I deal directly with technologists, security folks, or product developers. Privacy as a function used to sit within the legal organization, sometimes within risk, sometimes within compliance, but almost always separate and apart from the security and technology organizations. Now I see more and more companies organizing these under the same leadership to ensure the topics are thought of collectively. There are now coders and developers with privacy experience; we're thinking about coding privacy into products; and the secure development life cycle considers and takes input on these topics. And we need translators, right? Largely these verticals stayed separate because people didn't really understand each other.
Very seldom did privacy professionals or lawyers go knee-deep into technology development; where that was happening, it was way, way ahead of its time. But you're seeing it a lot more often now. Wearing the technology hat: how do we consider and build policy implications into the technology we develop? Our developers sometimes come up with even more creative solutions, I think Melissa said this earlier, than what a policy maker or legal team could prescribe by saying "can you please do this." Instead it becomes: we have this problem we want to solve; how can we do it right?

And so, as this converged group thinks about the problem, because it can't be one function alone, how do we think about using data responsibly and ethically? We need to think not only about the principles of data ethics but about how to operationalize those principles and really instill them into the culture of the organization. How do we make this almost like muscle memory, a habit? In other words, we need to incorporate ethical safeguards and considerations into the design and architecture, building them in from the beginning as we conceive the product, the tool, the technology, so that they are not an afterthought. Sound familiar? Many of us have talked about GDPR, and the last panel had a really rich discussion about it. This concept of privacy by design is not new, and in the last couple of years many companies, including mine, really thought it through in preparation for GDPR and tried to build the concept out. But you can think of a layered approach: on top of the privacy principles, on top of privacy by design, one layer above, there is something like data by design. We now have to think about it holistically, with the technology considerations, the security considerations, and the privacy considerations together. That is data by design.

So let's walk through some of the principles and, as a practitioner who thinks about this every day, how we start to build that muscle memory.

Maintain data hygiene. Remember, there's a lot of data, and AI and machine learning enhanced technology thrives on data; it learns to make better decisions using vast amounts of historical data. So keeping data hygiene is critical. It includes questions like: what do you have, where does it come from, how old is it, is it current, is it relevant, is it correct? Rita, I think you mentioned how hard this is for companies. If you stop at the compliance layer, a lot of companies did it this way: bring in lots of people, consultants, lawyers, to map the data in your systems, put it in an Excel spreadsheet, figure out what data you have, when it was collected, how it's used, and file it away. It can't be a one-time exercise. That won't work. With the vast amount of data generated every day, you have to think about how to maintain data hygiene continuously. And you have to establish it first, which is already hard. It's really hard.
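As one illustration of those "what do you have, where does it come from, how old is it" questions, here is a minimal sketch of a data-inventory record with a staleness check, the opposite of the filed-away spreadsheet. The field names and the 365-day re-verification window are hypothetical choices for illustration, not any company's actual catalog schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DataInventoryRecord:
    """One entry in a hypothetical data catalog: what the data set is,
    where it came from, why it is held, who it is shared with, and
    when a human last verified that all of this is still true."""
    dataset: str
    source_system: str
    contains_personal_data: bool
    purposes: list[str]
    shared_with: list[str]
    last_verified: date

    def is_stale(self, max_age: timedelta = timedelta(days=365)) -> bool:
        # Hygiene is continuous: a record nobody has re-verified within
        # the allowed window gets flagged instead of filed away.
        return date.today() - self.last_verified > max_age

record = DataInventoryRecord(
    dataset="checkout_transactions",
    source_system="payments_db",
    contains_personal_data=True,
    purposes=["fraud detection", "receipts"],
    shared_with=["analytics team"],
    last_verified=date(2018, 1, 15),
)
if record.is_stale():
    print(f"Re-verify '{record.dataset}', last checked {record.last_verified}")
```

The design point is the staleness check: the record is only useful if something forces it to be revisited, which is what distinguishes maintained hygiene from a one-time mapping exercise.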
And think about large companies, which sometimes grow by acquisition or by other means. In my experience, companies largely don't have a unified way to store data in one beautiful place with a catalog of everything they have. It's lots of systems, lots of different IT infrastructures, which sometimes don't even talk to each other, right? So how do we think about building and maintaining data hygiene? I see a lot of startup technologies coming out in this space touting: hey, I can understand your data, I can go in, scan it, and tell you exactly what you have and how it moves, and all that jazz. But personal data is unique to each company. Because, as we learned in the first panel, it is any data that is about an individual. However that takes shape in a company, of course there are the basics, name and email address, that are spelled out, but largely this is proprietary, right? It takes time to figure out: okay, what data do we have that links back to an individual, and how do we keep tabs on it? How do we ensure we have a record of something that comes in through a system, track its movement, understand who that information is shared with, understand how stale or old it is, and understand the processing activities associated with it?

We have to maintain data hygiene and make it part of the company culture. And I will be the first to say that it is very, very, very hard, and it is not a once-and-done-forever exercise. It is constantly reinventing the way you do it, the way you keep up, and the way you can improve. When I have conversations with regulators on this topic, I'm very open and honest: hey, this is how we're thinking about it, and we're not there, and we won't be there for a few years, at least to get the baseline. Just understanding your data landscape and being able to refresh it in real time, to get alerts if something changes, to understand whether the product you're building is potentially oversharing or undersharing, getting that built out is incredibly hard. But the way to build it into the culture and maintain it is to bring in our technologists, the developers, the security engineers, and ask: how are we going to solve this? How do we build data hygiene, and how do we maintain it?

Be transparent. We talked a lot this morning about transparent notice to customers about the data collected about them, through privacy notices or through our terms. And I do think we, as an industry, have to get better about providing more relevant and meaningful transparency. But transparency here goes beyond the customer. I think we need transparent functionalities. Whatever we build, the AI technology: what is it meant to do? What does a successful result look like? Was the algorithm built for one use but then put to other purposes? What do the results look like there? How do models, analytics, and algorithms arrive at specific decisions? How can we quantify the accuracy of the models? And how can we communicate this to customers? In some ways, you may not be able to, right?
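Those "transparent functionality" questions, what a model is meant to do, what success looks like, how accurate it is, are the kind of thing teams sometimes capture in a structured record kept alongside the model. Here is a minimal sketch of such a record, loosely in the spirit of the "model card" idea; every field and value is hypothetical, not a real product or a required format.

```python
# A hypothetical transparency record kept alongside a deployed model,
# answering the questions a reviewer would ask; none of these fields
# or values reflect a real product.
model_record = {
    "name": "rug_recommender_v2",
    "intended_use": "suggest home-goods products to opted-in users",
    "out_of_scope_uses": ["credit decisions", "employment screening"],
    "training_data": "2017-2018 purchase history of opted-in users",
    "success_metric": "click-through rate, reviewed weekly",
    "measured_accuracy": {"overall": 0.91, "new_users": 0.74},
    "known_limitations": "weaker for users with little purchase history",
    "last_reviewed": "2019-02-01",
}

def repurposing_allowed(record: dict, proposed_use: str) -> bool:
    """An algorithm built for one use should not quietly drift into
    another: any use outside the stated intent fails this check and
    needs a fresh review."""
    return (proposed_use == record["intended_use"]
            and proposed_use not in record["out_of_scope_uses"])

print(repurposing_allowed(model_record, "credit decisions"))  # False
```

The repurposing check captures the "built for one use but then used for other purposes" concern: the record makes drift visible instead of silent.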
We talked about how information can flow from one place and somehow migrate to another. Yesterday, I don't know if you read the article in the Wall Street Journal, but I imagine some of you did. For those who haven't seen it: it was about apps sharing very, very intimate information with Facebook even though there is no integration whatsoever. You know how some apps say, hey, you can sign up by logging in with Facebook? It's super easy, so easy. I know a lot of people do it; a lot of my friends do, because you don't have to enter your email address, your street address, and the other things apps ask about you. But yesterday's article said that even if you didn't go through that route, which at least sets the expectation, in the back of your mind, that since I'm signing up with Facebook, I guess some data is going there, even then the apps were nonetheless sending data about things you thought you were putting only into the app on your own device. Things as sensitive as whether you were trying to get pregnant, when your cycles are, the body weight you want to get to, your body fat percentage, your exercise schedule. This is incredibly sensitive information, and I would imagine the users of those apps did not expect it to be sent to Facebook, which can then combine it with the information it already has and use it for whatever purposes.

So how can we be transparent, more transparent in real time, and transparent where it's really relevant? Because we all know, as noted before, that people don't necessarily read privacy statements. And in the case of yesterday's article, it seems the privacy statements didn't really mention this either. So how do we elevate that, so that people can take ownership of their data? I'm sure those people, had they known that type of information was going to be shared with Facebook, may have decided: you know what, I don't feel comfortable sharing this, so I'm not going to use this app. It gets very, very blurred. Before, who we told our sensitive information to, among our closed circles of friends, was very deliberate. With these apps, where it is not transparent what you share and with whom, the lines are blurred. Sometimes you just don't know.

So we need to be accountable, right? As we build these types of models and algorithms, assign responsibility for all parts of the process. In the traditional days, and I've been there, trust me, in many companies it was: okay, we've got to do the PIA, fill out this Excel spreadsheet form, what data is it collecting, how is it flowing, what is it doing, and then file it away. No. This is about: what is the product we're designing? What is the capability? What are we doing? And who is working on what? Is there somebody looking at it from the beginning, wearing the data-ethics hat? Just as we have a legal colleague on a lot of projects, there almost needs to be another level of review and engagement, so that products are built the right way and clear boundaries are set for development.
That could, of course, include monitoring and testing algorithms regularly to ensure bias is not creeping in and the models are still operating as designed. In yesterday's article, I believe, there was one company that said: no, no, we only share de-identified information, so it's not about you. But when testing was done, their information turned out to be linked to a unique advertising identification number, and Facebook could then work out that the unique identification number actually belongs to Christi Chan. So guess what: now they know Christi's body fat, or whatever it may be. True de-identification has to be tested for, right?

Test for potential biases. As we know and as we heard, unwanted bias in training data can result in unfair results, and we need to think about fairness. And fairness is going to look different depending on what you're building. We have to think about building and establishing tests for identifying, curating, and minimizing bias in training data sets. I noted one type of bias before, but here's another: say you're building a smart car, something that obviously has to operate 24 hours a day; you don't know when people are going to drive. If you only feed in data collected while driving during the day, when the sun is out, the functionality at night is probably not going to work, right? So how do we ensure the data set is wide enough? A minimal sketch of such a coverage test follows this passage. And this sometimes cuts against the principle of data minimization. When these topics converge, we have to work through the complexities and ask: okay, what is the end goal here? It is not simple; it is not black and white. That's the fun of it. You have to test for biases and make sure the data set is broad enough; bigger and more diverse data sets will help reduce bias risk. But that has to be balanced with privacy and security, the other principles we really, really care about.

Put on the customer-centric hat. A lot of the time, what I tell my team is: put on your customer hat. What would your experience look like if this happened to you, or if you were the one using this? Is it consistent with your expectations of what it's doing and how it's performing? Is it seamless? We briefly mentioned the whole scraping concept, right? If I sign up for a personal finance management website and I want my financial information ported into that single place, and it doesn't happen, then my experience is broken. I would wonder: okay, why did four of these financial institutions port my data but this one didn't? It may be that the institution thought the PFM tool didn't have the right level of security. But how do we bridge the customer's expectation, desire, and consent with all the other competing and very important considerations, security, meeting legal obligations, which are also very important to companies? We have to think about it holistically and make sure we apply the various filters and lenses.
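Here is that coverage test, made concrete: before training, verify that every operating condition the model must handle is represented in the training set above some floor. The condition labels, sample counts, and the 10% floor are invented for illustration; real coverage requirements would be set per product.

```python
from collections import Counter

# Hypothetical driving-scene training samples labeled by condition.
training_conditions = (
    ["day"] * 9_000 + ["night"] * 400 + ["rain"] * 600
)

REQUIRED = {"day", "night", "rain", "snow"}  # conditions the car must handle
MIN_SHARE = 0.10  # illustrative floor: each condition >= 10% of the data

def coverage_gaps(conditions, required=REQUIRED, min_share=MIN_SHARE):
    """Return required conditions that are missing or under-represented,
    so gaps are fixed before the model ships, not after it fails at night."""
    counts = Counter(conditions)
    total = len(conditions)
    return {
        cond: counts.get(cond, 0) / total
        for cond in required
        if counts.get(cond, 0) / total < min_share
    }

print(coverage_gaps(training_conditions))
# e.g. {'night': 0.04, 'rain': 0.06, 'snow': 0.0} (order may vary)
```

Note the tension the talk describes: fixing these gaps means collecting more data, which has to be weighed against data minimization, privacy, and security.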
Maintain and test for anonymity. I mentioned this briefly before, but suppose training models are being developed, for example for KYC, Know Your Customer, or for fraud protection. These are incredibly hard to build, but they are also very, very important for protecting the customer. Imagine a world where signing up for a mortgage were easy. I think the process of buying a home, or selling one for that matter, is so painful: you have to provide so much information that I find myself thinking, my goodness, what else do you want from me? But imagine if we could build algorithms, smart ways, to better understand and streamline the customer's experience of going through these processes. That would be really important. Utilizing that data is one thing, but you don't want that data to trace back to a particular person. I call it the creepy factor: avoid the creepy factor. So make sure you test for reverse engineering, de-anonymization, and other potential exposures of confidential information. If a string of data points reveals something about somebody without my name attached, that's a whole different story than if the data set says it's about Christi, or is specific enough that somebody can easily identify that it's Christi.
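One simple, concrete way to test for de-anonymization along these lines is a k-anonymity check: does every combination of quasi-identifiers in a released data set match at least k people? This is a minimal sketch with made-up records and k=3; real re-identification testing goes well beyond this.

```python
from collections import Counter

# Hypothetical "de-identified" records: no names, but ZIP code,
# birth year, and gender together can still single someone out.
records = [
    {"zip": "94103", "birth_year": 1980, "gender": "F"},
    {"zip": "94103", "birth_year": 1980, "gender": "F"},
    {"zip": "94103", "birth_year": 1980, "gender": "F"},
    {"zip": "10027", "birth_year": 1975, "gender": "M"},  # unique!
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

def violates_k_anonymity(records, k=3):
    """Return the quasi-identifier combinations shared by fewer than
    k records; each one is a group that could be re-identified."""
    counts = Counter(
        tuple(r[q] for q in QUASI_IDENTIFIERS) for r in records
    )
    return [combo for combo, n in counts.items() if n < k]

risky = violates_k_anonymity(records, k=3)
print(risky)  # [('10027', 1975, 'M')] -> generalize or suppress it
```

This is exactly the "string of data points without my name on it" problem: the unique record carries no name, yet it identifies one person as surely as a name would.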
And underlying all these principles, there is protection. We need to think about appropriate protections and security measures, a topic you've heard about several times today. With the evolution of data and technology, the concept of safeguarding is evolving as well. Before, we worried about intruders and hackers stealing our identity, our credentials, our financial data to create fake accounts. And if you look at our state breach notification requirements, in the majority of cases the trigger is name plus a financial instrument, or name plus something sensitive. But think about what we all spoke about today; think about the large scandals of this year. It's less about name plus financial instrument, and more a tangible concern about our digital identity as a whole, because it's not just piecemeal anymore: there is so much ability to combine all this information. Whereas before, security presented more tangible concerns than privacy and we thought about them largely separately, a privacy issue here, a data breach or cybersecurity breach there, the concerns are broader now. They bring the concepts of privacy and security together into thinking about digital identity as a whole, and they require responsible use and protection of our data.

These principles, by the way, are not new. You've heard them all before. But they are evolving with more data, with enhanced technologies, and with greater reliance and dependency on technologies that can influence us. We have to create a culture of responsible data use. And it continues with these conversations: not companies alone, not regulators alone, not policy makers alone, not students and academics alone. This has to happen collectively. We have to be in the same room and talk about it together, as technologists, as customers, as regulators, as people who care not only about our data but about the impact it can have on our society. And we need to continue to innovate.

But we also have to have ethical and responsible data practices and keep figuring out what that means and how we do it. We've spoken a lot about the challenges, and it's harder to talk about the solutions, but we need to continue these conversations in order to think about the "how" of ethical and responsible data practices. Thank you. I think we have time for questions, correct?

Hi, I'm Sharon O'Hellern from Columbia University. One of the ways in which very large quantities of complex, realistic data from multiple disparate sources have been tracked over decades is through SIC codes and customs codes. Those same types of schemes give you unique identifiers that are traceable throughout global supply chains, which is absolutely necessary, and that's very similar to the types of data flows you have here. So it's not a cultural problem; it's actually a logistical processing problem: you need a common source identifier. Once you had the SIC code or the customs unique identifier, the regulation that standardized it, tracking across customs and global supply chains and all those other flows became, not simple, but tractable. Using the same types of identifiers, a stamp, if you will, on types of data, you could classify data at two digits, something very broad, somebody's financial data, somebody's personal data, and then go all the way down to twenty digits describing the types of data, and a combination of those could provide the unique identifiers. That's how goods and services from particular places, times, and products are classified in customs. And that matters because they're given different tariff treatment, so they're handled in different ways. So this is a standard problem that can be solved in a similar way. I just wonder why we can't apply these types of standardized schemes here. It's a known approach. Throwing that one out.

Yeah, no, thank you for that. In terms of solutions, what we are finding is that different situations are truly unique, and, I would love to hear from other experts here, but there doesn't seem to be a one-size-fits-all solution in data management. One solution may work for one purpose and not in other situations. Contextual understanding, contextual information, is what makes this problem even more complex to solve. A lot of the time, when I talk to developers, they say: this is not personal data. And then you ask: okay, but what about this, and this right next to it, in the adjacent cell? If we combine them, do you know who that is? Oh, yeah, of course. So it's really hard to organize around and utilize a single, one-size-fits-all solution.

So what you just said is that each line would be treated differently by regulation, and who has access to that data would be treated differently. Once you have it in that format, with the identifier, it could then be subject to a different type of intervention.
That's a different question from the encoding itself. Whether the encoding is textual or numeric, it can be done. So that, again, is a known problem; those questions are independent of this one. Maybe one more question.

What does it say that people were adopting Venmo and using emojis? How is that useful as a data source?

That's a great question. When we tie the way emojis are captured to how people use them, right, it's tied to the transaction, it's tied to the person. Exactly. So what does it say? Is it personal data? It's personal data in that it can identify who said it. It could also feed analysis of what people are spending their money on; you can infer that from the emoji usage, right? That's proprietary information. But I can tell you that people love the emojis. I think there are probably contests, people deliberately not using any words and going entirely through emojis. It's a very popular feature.

For everyone in the world to view, though. So what's the responsibility of PayPal?

The emojis, that's a great point you raise. The emojis are available for people to view if you make them public. Right, and I understand that the default is public, which is a bit of a pain for me. I'm not a very public person; I always try to make mine private. But a lot of people just don't care, or they actively want to share. And you and me both, right? So I think this is where relevant notice, transparency, comes into play. We need to make sure that when a person signs up, even if public is the default feature, we bring it up. When you first sign up for the product, or when you first open the app, we can say: hey, these are generally public by default; do you want it this way, or do you want to keep it between the transacting users? Those are the kinds of changes we're adopting to create more transparency. Because you're right, it is public by default, but we feel we still need to tell you, since people may not know. So how do we elevate that and say: hey, this is generally public; if you don't like it, you can change it?

Thank you so much. Thank you. Thank you.
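To illustrate the customs-style classification idea raised in the Q&A: a minimal sketch of hierarchical data-classification codes, where the first two digits give a broad category and further digits refine it, so a rule written against the broad prefix automatically covers every more specific code beneath it. The codes and categories here are invented for illustration; no such standard exists for personal data.

```python
# Hypothetical hierarchical data-classification codes, in the spirit of
# customs/HS codes: the first two digits are broad, later digits refine.
CODES = {
    "10":     "personal data",
    "1020":   "personal data / financial",
    "102030": "personal data / financial / credit score",
    "20":     "non-personal operational data",
}

def classify(code: str) -> list[str]:
    """Resolve a code to every matching level of the hierarchy,
    from broadest to most specific, by prefix."""
    return [label for prefix, label in sorted(CODES.items())
            if code.startswith(prefix)]

# A field stamped "102030" is automatically also "personal data", so a
# regulation or access rule targeting the "10" prefix catches it too.
print(classify("102030"))
# ['personal data', 'personal data / financial',
#  'personal data / financial / credit score']
```

As the speaker's reply suggests, the hard part is not the encoding but the context: the same field can be personal or not depending on what sits in the cell next to it, which is exactly what a static code struggles to capture.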