Okay, so my research is in privacy, mostly the privacy of consumers of different technologies: trying to understand what risks different technologies pose to people when they use them, and then coming up with scalable and usable solutions. Usable is the keyword, so that people can actually use these privacy-enhancing technologies in their daily lives. So, let me start with a definition of privacy. What do we mean by privacy? There are many historical definitions and many definitions focusing on the modern technological space, but the first definition I'm going to talk about is a metaphorical one: privacy as boundary management. That means we have our private space and there is a boundary surrounding us, and we can choose how much we want to reveal, and when, and which people we want to let inside this boundary. Another definition says that privacy is the ability to control our information, what data we want to reveal to other people. And this extends to groups as well: we are a group, we have some collective private information, and we want the ability to decide collectively whether we reveal some information to other people or other groups. The third definition doesn't focus on the data itself but rather on the context in which the data is generated. This is privacy as contextual integrity. It says that data by itself is not private or sensitive; rather, a privacy violation occurs when the context in which the data was generated and the context in which the data is used don't comply with each other, when they violate our norms or expectations. Let me give you an example. If I go to a doctor and fill out a questionnaire about my health history, it doesn't matter how sensitive those data are; I'm okay as long as the data I gave them is used in the same context, for the purpose of health diagnostics, for example, and not for selling me drugs or advertising. The context in which data is generated includes the data itself, the sender, the receiver, the purpose of data generation, and the transmission principle, that is, how the data is transmitted from sender to receiver. As long as this context remains the same, it's okay. So there are all these definitions of privacy, and we are still not sure how to capture all the use cases, but at least scholars agree that privacy is important. Why? Because privacy allows us to be free. Consider when we are not under surveillance, when no one is watching or listening to us: that's when we are our true selves. But as a social species we mix with other people, and we modify our behavior in different social contexts. Being in touch with other people influences our thought process, how we present ourselves, how we behave, in order to fulfill social criteria and other people's expectations.
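To make the contextual integrity idea concrete, here is a minimal sketch of how its parameters could be represented in code. The field names and the simple equality check are illustrative simplifications I am adding, not part of any standard library or of the framework's formal definition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InformationFlow:
    """One data flow, described by the parameters of contextual integrity."""
    sender: str                  # who shares the data
    receiver: str                # who receives it
    subject: str                 # whom the data is about
    data_type: str               # e.g., "health history"
    transmission_principle: str  # e.g., "for diagnosis only"

def violates_contextual_integrity(actual: InformationFlow,
                                  norm: InformationFlow) -> bool:
    # A privacy violation occurs when the actual flow deviates from the
    # contextual norm under which the data was originally shared.
    return actual != norm

norm = InformationFlow("patient", "doctor", "patient",
                       "health history", "for diagnosis only")
actual = InformationFlow("patient", "ad network", "patient",
                         "health history", "for drug marketing")
print(violates_contextual_integrity(actual, norm))  # True: context changed
```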
So, that's okay as long as the other people have our best interests in mind, for example our friends, our family members, our parents; they know us. The more information other people know about us, the more power they have over us, but that's fine for people who are close to us because they have our best interests in mind. That's not the same for algorithms that learn about us over time. And these private moments are very important for having creative thoughts. When we are constantly surrounded by surveillance technologies, it creates a chilling effect: we are no longer our true selves, we cannot think freely. Which is why in European countries privacy has been recognized as a fundamental human right. Unfortunately, that's not the case in the US, but some states are coming up with new regulations, like the CCPA in California, that recognize that privacy is a human right or should be protected by default by technology. I should also mention that privacy is not equivalent to secrecy; it's not only about trying to keep some data secret. One example: suppose you are sitting for an exam, and you are writing, but you see that the examiner is looking at you. That same person will also grade your paper, so nothing you write is secret to them, but still, in the moment, you don't feel comfortable. And often I have found that I keep writing even though I don't have anything to write, or I pretend to write, to fulfill that person's expectation that I should be writing, instead of thinking about what I should be writing. The second most important aspect of privacy is that it helps us evade manipulation. Again, the more knowledge others have about us, the more power they have over us. Take the Cambridge Analytica scandal. What happened was, some people were able to get information about Facebook users and their friends, and then they created massive datasets of behavioral traits, and then they tried to manipulate these people through advertisements and other means to influence their voting preferences or political views. It was possible because they could learn about certain behavioral and psychological traits of these people. When we use map applications, for example, we reveal our home, our office, and the other places where we frequently reside; when we share photos, we reveal where we went for vacation. And wearable devices like smartwatches or eye trackers can learn much more intimate details about us, for example when we are excited, when we feel angry, how many times we wake up at night. These are very intimate details and can be used for malicious purposes. So what can we do? It's not that we can just get rid of these technologies, because they also have good use cases. So here is my research: I try to understand these problems from a human perspective and come up with solutions that are usable, practical, and can be deployed at scale. I mostly work in two different domains: one is social media privacy, or visual data privacy, and the other is education. I will first start with social media privacy, so what's going on there?
We share images, right? According to one estimate, we share almost 2 billion images every day on different social media platforms, and what happens is that a large portion of these images end up in public domains where they can be accessed by anyone and scraped from the websites. They are being used to create machine-learning-based tools and technologies to identify people, recognize them, and track them online, for visual surveillance and advertising. One particular case came up; it's not very recent now, it has been two years. There is a tech startup called Clearview AI. What it did was scrape millions of websites and collect, I think, 15 million images of people at first. Then it trained a model and provided this model as a service, so anyone could use this model or their service to look up anyone else online, for surveillance, for advertising, for whatever purpose. And it was so aggressive that companies like Google and Facebook came out and told them to stop, or at least they prevented Clearview from scraping Facebook and Google sites. But that was not the end of the story. Clearview's dataset, when it contained three billion photos, was hacked, so those three billion images are now public, maybe on the dark web, and can be used for any purpose by anyone. So now it's not only people who use these different social media technologies and upload their images who are at risk, but also people who are cautious about sharing data about themselves, even people who do not have any social media account; they are also at risk, because photos are a very rich data source. When we take photos, especially in semi-public or public spaces, we capture a lot of other people who didn't give us consent to take their photo or upload it to social media. So privacy is very interdependent: we can violate other people's privacy, and the same can happen to us as well. Recently there have also been incidents around meme sharing. We check memes, we share memes; it has become a major source of entertainment online, but there have also been severe incidents where people suffered consequences in their professional, personal, or social lives because their memes went viral on social media. So this is the research question I focused on here: how can we protect our privacy when other people are in control of sharing our information? When we share a meme, or when we share images that capture other people, those people don't have any control over their data, which is a severe violation of privacy. So what can we do? The first solution we came up with was trying to identify the people who should be in an image and those who shouldn't be. In other words, we try to classify each person in an image as either a subject or a bystander, and once we do that, we can remove the bystanders or obfuscate them using image filters and so on. That's what we wanted to do. But the problem was that this classification is very subjective and context dependent. For example, in this image there are people with very different visual appearances, and we would probably classify all of them as subjects because they are performing some action as a team.
So that means that in classifying them, we are using our background knowledge, our memory, and knowledge from other contexts, and making inferences based on all that background knowledge. Here is the opposite scenario: in this image, the people look very similar to each other, but when we asked study participants to classify them, they came up with different labels for these people. Again, they used their background knowledge, their memory, and so forth, but to do this classification automatically, we don't have that source of rich inferential knowledge. So how can we do this? Here is our approach. We first try to understand how humans conceptualize subjects and bystanders, then identify the high-level features, that is, the factors they use to make these decisions, then map these high-level features onto low-level image features, and then we build a classifier. So we have these high-level concepts, and we can extract many low-level details and try to map those details into this high-level concept space. This is what we did here. We showed images to people, like the ones you saw earlier, and asked them to classify each person as either a subject or a bystander. Then we asked them why they labeled someone as a subject or bystander. They gave us many reasons, including whether the person was intentionally captured by the photographer, or whether removing the person would change the meaning of the photo. These are still very high level: how can a machine understand from an image whether someone is comfortable or not, or whether the photographer intentionally captured them or not? So we went one step lower, and we used several machine learning and deep learning models that gave us raw information about the image and the people in it. For example, we used one model to identify each person's body pose, and from the body pose we collected data about body joints and the angles between different limbs, and from this we inferred the overall orientation of the body. We used another model to detect faces, and then we fed these faces into a third model that produced scores across different emotional states, like angry, happy, or surprised. We had many other such models; we applied them to the images and built a low-level feature set about the people in those images, and then we computed statistical dependencies, mapping these low-level features to the intermediate, higher-level features. In the second step, we mapped these intermediate-level features to the classification question: based on these features, tell me whether this person is a subject or a bystander. And here are the results. We also experimented with many other models, different model architectures, and different feature sets, but you can see that the last row, this two-step process, has much higher accuracy compared to all these other models, including large deep learning models like ResNet. And here you can see several examples: the red boxes show someone classified as a bystander, the green boxes show someone classified as a subject by this model, and all of these classifications are correct. Here you can also see some wrong classifications: again, green means subject and red means bystander, but now these classifications are not correct.
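To make the two-step idea concrete, here is a minimal sketch: low-level image features are first mapped to intermediate, human-interpretable concepts, and a second model maps those concepts to the subject/bystander label. The feature names, the synthetic data, and the choice of logistic regression are illustrative assumptions on my part, not the exact pipeline from the talk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Step 0 (assumed upstream): per-person low-level features extracted by
# off-the-shelf models, e.g. body-pose joint angles and facial-emotion scores.
X_low = rng.normal(size=(200, 12))           # 200 people, 12 low-level features

# Hypothetical intermediate concepts, e.g. "posing for the camera",
# "aware of being photographed" (here: synthetic binary training labels).
C = (rng.random((200, 2)) > 0.5).astype(int)
y = (C.sum(axis=1) > 0).astype(int)          # subject=1 if any concept holds

# Step 1: one classifier per intermediate concept, from low-level features.
concept_models = [LogisticRegression().fit(X_low, C[:, j]) for j in range(2)]

# Step 2: classify subject vs. bystander from the predicted concept scores.
C_pred = np.column_stack([m.predict_proba(X_low)[:, 1] for m in concept_models])
final_model = LogisticRegression().fit(C_pred, y)

print(final_model.predict(C_pred[:5]))       # 1 = subject, 0 = bystander
```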
Okay, so now we have a model to classify people, and there is also other computer-vision-based work that can detect other kinds of sensitive content, like text and so on. Once we detect these objects, the next step is: how can we obfuscate them, hide them, or remove them from the image? There are several methods. With encryption, you can encrypt part of the image, and only people who hold the decryption keys can recover the full image. But such images are not very usable in practice, at least not in the context of social media, because here the goal is not to allow only some people to see the image; distributing keys to some people so that they can recover the full image is very cumbersome, and in some cases the goal is to share as widely as possible. So those solutions are not usable in practice. We proposed other solutions based on obfuscation: how can you filter out some portion of these images but still retain their visual aesthetics? We studied a bunch of filters, but unfortunately most of them didn't work very well in terms of both properly hiding the information and retaining the visual aesthetics of the images. Here you can see two examples, just blacking out versus using pixelation; again, they didn't work very well. So we tried to enhance the visual aesthetics of the images with style transfer. If you use Instagram or Facebook, you have probably seen this: you have a picture and a painting, and you can transfer the style of the painting to the picture. For example, the top left shows a city in Germany, I think, and in the small box you can see a famous painting, and the picture was modified to fit the style of this painting. This is what we did in another study, and you can see an example here: the person on the right was obfuscated, but the whole image also went through visual alterations to change its style according to different famous paintings.
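As a rough illustration of the obfuscation filters just mentioned, here is a minimal sketch using Pillow. The region coordinates, block size, and file names are hypothetical; in the full pipeline the region would come from the detector described above.

```python
from PIL import Image, ImageFilter

def pixelate_region(img: Image.Image, box, block: int = 12) -> Image.Image:
    """Pixelate the region `box` = (left, top, right, bottom) in place."""
    region = img.crop(box)
    w, h = region.size
    # Downscale then upscale with nearest-neighbor to create pixel blocks.
    small = region.resize((max(1, w // block), max(1, h // block)),
                          Image.NEAREST)
    img.paste(small.resize((w, h), Image.NEAREST), box)
    return img

def blur_region(img: Image.Image, box, radius: int = 8) -> Image.Image:
    """Gaussian-blur the region instead of pixelating it."""
    region = img.crop(box).filter(ImageFilter.GaussianBlur(radius))
    img.paste(region, box)
    return img

img = Image.open("photo.jpg")                 # hypothetical input image
img = pixelate_region(img, (100, 50, 220, 190))
img.save("photo_obfuscated.jpg")
```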
Okay. So now we have some ways to detect sensitive content in images and some ways to remove or obfuscate that content, but they don't really help in the case of meme sharing and the privacy of the people depicted in memes, because the problem there is not the content but rather how and why the content is being shared. This is very much a social issue, not a technical one; the problem is with people's behavior. So we tried an intervention approach in this case. This was a close collaboration with psychologists and communication scientists, and we came up with different priming methods to discourage people from sharing these memes. I think you have seen this kind of visual priming. On the left is a warning sign; you see it in email clients, for example, when the client thinks something is a phishing email or has malicious content, so it warns you with these kinds of signs. And the lock sign on the right means it's safe, like when you visit a website over the HTTPS protocol: that sign means the website encrypts data, so it's safe to use, and if the lock were red, it would mean the site is not safe, so don't enter your financial information, for example. So we are used to seeing this kind of visual priming that makes us take some time to think before we do something. This concept actually came from psychology, where they call it nudging or priming, and it can be used for both good and bad purposes. For good purposes, it has been used to nudge people to eat healthier, for example, or to improve their consumption behaviors, and after COVID arrived it was used to motivate people to get vaccinated. So we tried to prime people to respect others' privacy with textual priming; we didn't use any visual priming yet. In one experiment, we showed participants memes, like the ones you saw earlier, and asked them how likely they were to share these memes on their social media accounts. In the control condition, we simply asked how likely they were to share the image, and they told us this much. In a second condition, we tried to prime them: taking into account the privacy of the person in this image, now tell us how likely you are to share this image on social media. The idea was that if we directly warned them about privacy violations, sharing would go down. But in reality it went up, which was a very paradoxical finding for us: when we warned people about possible privacy violations, they wanted to share more. And this was a very robust finding, because we replicated the study three times before we published the paper, and every time we got the same result. So the next question is: why is this happening? What other factors affect people's photo sharing decisions here? There have been many studies understanding people from a psychological perspective, looking at their psychological traits, so we looked at a specific trait: humor style. Humor style describes how people use humor either to entertain themselves or to advance social connections by entertaining other people. Humor style has been found to correlate with narcissistic behavior, with empathy, and with aggressive behaviors like online trolling, so it seemed very relevant to our purpose of studying the sharing of memes that potentially violate others' privacy. Humor style can vary along two dimensions. One is whether the humor is positive or negative: positive means it's harmless, negative means the humor is demeaning to some other person. It can also vary by the purpose of using humor: it could be for self-entertainment, or it could be for building social connections. So there are four types of humor styles. What we did was collect data about people's humor styles and then try to understand whether different types differ in their photo sharing habits, and whether people with different humor types react differently when we prime them with our interventions or nudges. We collected data and then clustered people according to their humor style; you can see here a two-dimensional projection of the clusters. We call one cluster humor endorsers, meaning the people in this cluster use humor a lot, all kinds of humor, for all purposes. Humor deniers don't use any kind of humor for any purpose, and self-enhancers only use positive humor. So we have three clusters.
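Here is a minimal sketch of how such a clustering could be computed: people are grouped by their four humor-style scores and projected to 2D for plotting. The synthetic data, k-means with k=3 (mirroring the three clusters above), and PCA are illustrative assumptions; the talk does not specify the exact algorithm used.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Columns: affiliative, self-enhancing, aggressive, self-defeating humor
# (the four standard humor styles), one row per participant.
scores = rng.normal(size=(300, 4))

labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(scores)
xy = PCA(n_components=2).fit_transform(scores)  # 2D projection for plotting

for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} participants")
```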
We looked at the results; here is just one. When we asked people whether they had ever shared any photo that might have violated a stranger's privacy, you can see that the humor endorsers, the group that shares all kinds of humor for all types of purposes, told us yes much more frequently than they told us no, and also compared to the other two groups. And when we looked at what happened when we primed them with our intervention, you see that we again have this paradoxical finding, but only for the humor deniers, the people who do not use humor as frequently as the average person: paradoxically, when we primed them about potential privacy violations, they wanted to share more. So that means we should customize these interventions according to personal characteristics, and this is an active area of research: how can we learn those personal details, and how can we adapt our interventions, whether visual, textual, or audio priming, to make them targeted and more effective in practice? Okay, so that research was about how people's internal factors, like personality traits, affect their data sharing behavior. We also looked at how external factors affect data sharing behavior. One specific external factor was: when someone suddenly goes viral on social media, what happens then? How does others' engagement with that person change, and how does their own sharing behavior change? We looked at Twitter and collected data on 20,000 scholars, researchers and scientists, over three years. Within this timeline, some of these scholars went viral for the first time since they created their Twitter profile and others didn't. So now we have two sets of people, viral users and non-viral users, and what we did was map these people into a high-dimensional space based on their profile characteristics, their tweeting history, and so on. Then we identified pairs across these groups who were similar in terms of their behaviors and profile characteristics, so in each pair there was one person from the viral group and one from the non-viral group. The idea was that these two groups behaved the same until the viral group went viral and then changed their behavior, while the non-viral group didn't change behavior; that means the change in behavior happened because of the viral event. That was the purpose of this matching, and it is one form of causal inference analysis: identifying the cause of some effect. So the viral events were identified as the reason for the change in their sharing behaviors.
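To illustrate the matching step, here is a minimal sketch in which each viral user is paired with the most similar non-viral user in a shared feature space of profile and pre-event behavior. The features are synthetic and the use of one-nearest-neighbor matching is an illustrative assumption, not necessarily the study's exact procedure (real matched designs also avoid reusing the same control twice).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
viral = rng.normal(size=(50, 8))       # feature vectors of viral users
nonviral = rng.normal(size=(500, 8))   # feature vectors of non-viral users

# For each viral user, find the closest non-viral user in feature space.
nn = NearestNeighbors(n_neighbors=1).fit(nonviral)
dist, idx = nn.kneighbors(viral)

pairs = list(zip(range(len(viral)), idx.ravel()))
print(pairs[:5])   # (viral_user_index, matched_nonviral_user_index)
```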
When we compared the subsequent behaviors of these two groups, we found some interesting results. This plot shows how their tweeting frequency changed over time before and after the viral event: the zero point is when the viral event happened, minus 15 means 15 days before the viral event, and positive 15 means 15 days after. You can see that right after the viral event, at the zero point, there was a sharp increase in tweeting, which is expected. We also found that subsequent tweets from the viral group were more factual, more objective and less subjective, more information rather than emotion. We also found that the viral group had more positive sentiment in their tweets after they went viral, and that they posted tweets more similar to the tweet that went viral, which is also probably pretty reasonable: when people go viral, maybe they want more viral events to gain more visibility or more social capital. Okay, so that's past research. In the future, I want to continue this line of research on visual data privacy, but I want to expand how we conduct experiments. The next plan is to build mobile applications that implement all these computer-vision-based algorithms to detect and modify image content. The reason we want to do it in a mobile application is so that we can collect real data on how effective these algorithms actually are in real settings, how many privacy violations we can prevent, measuring the effectiveness of these algorithms and interventions in real settings. I also want to extend this research to other contexts, like smart home surveillance systems: smart home systems have cameras that are continuously recording images and videos of both the owners of the technology and other people. There are also assistive technologies, for example for people with visual impairments, to help them with shopping: they can take images, and their applications send these images to other people with questions like, what am I looking at right now, is this the thing I wanted to buy, or to help them navigate. Again, there are privacy issues for these users, because they don't know what the content of these images is before sending them to other people, and there are also privacy issues for surrounding people, because when users wear these devices, like AR and VR headsets, the technologies also collect data about the people around them. So how can we mitigate these privacy issues? If you are interested in mobile app development or IoT devices and interested in doing research in this area, shoot me an email. Okay, so now let's move into the second research domain I'm looking at, which is educational technology. By educational technology I mean whatever technologies are being used for educational purposes, like Canvas and Discord, maybe also Piazza and Zoom. There are also remote proctoring applications, I don't know if they're being used here or not, and there are mobile applications that track school children, monitor their behaviors, and collect data about them to see if there is something to worry about. Anyway, the edtech market is experiencing rapid growth due to the pandemic, so these technologies are becoming ubiquitous, and you can see in this chart that the global edtech market is expected to be more than $400 billion in 2025, which is much more than the IoT market or, I think, maybe even the mobile phone market.
Unfortunately, with this rapid growth we are also seeing these kinds of headlines in newspapers more and more, like massive data breaches in K-12 schools and even higher-educational institutes. Beyond hacking, the other issue is intentional abuse of the data these technologies collect, for advertising purposes for example: Google faced a lawsuit about scanning students' emails, and Proctorio also faced a lawsuit, I think it was in Canada. These data breaches and these intentional abuses of data both have a common root cause, the collection of massive amounts of data, and this is what we want to prevent. The problem is that this data collection is often justified by the need for learning analytics. Learning analytics means measuring, collecting, and analyzing data about students and their context in order to improve their learning. But we want to understand whether we actually need all this data to build learning analytics models. To give you some examples of what kinds of data these technologies collect and how the data are used to build learning analytics models: they collect demographic information, gender and so on; they collect historical data like past grades or the names of past institutions; they collect socioeconomic status like parents' income or household zip code; they collect behavioral data about how students interact with different educational technologies, for example Canvas records each and every mouse click. They collect mobility data, for example from campus Wi-Fi or from GPS; mobile-based educational technologies also record how people move around. And they collect audio and video data from remote proctoring and remote class technologies. These data are used to model students' course performance, for example the probability that a student will drop out of a course, their engagement with course material, or to recommend them books and courses, and also their well-being and social connections. For example, from mobility data it has been shown that we can track people down, like which people frequently get together, based on their location data or Wi-Fi data, and then infer the social ties among different groups of people. And different emotional states. Some of this is just pseudoscience: well-being is such a vague and abstract concept that there is no way it can be measured based on just how people click through different applications. Some of it is useful, but again the question is whether we actually need all this data, or whether we can do with less data to lessen the security and privacy risks for students. We looked at causal machine learning for this purpose. To give you some intuition: the way machine learning models work is that we feed the model past historical data and then use the model to make predictions about the future. For example, if I feed a model my location history, I can then ask: where will I be at this time? If you see me at a market every Friday, then next time someone asks where I'll be next Friday, you can say with pretty high confidence that I will be at that market. So there is a pretty high correlation between the day of the week and my location at a particular time.
But the day of the week may not be the reason I am at the market. Maybe the reason is that on Friday I get my paycheck, which is why I go shopping. The reason is not that it's Friday; the reason is that I get paid on Friday. If I got paid on Tuesday, I would be at the market on Tuesday, not on Friday. This is causal reasoning: what is the reason, versus what other things are merely correlated with some observation. Whether it's a causal model or a correlational model, machine learning is always about learning some function, and a function has an input and an output. Once we learn the function, we can use it to predict future outcomes. A very simple case you know is a straight line: y = ax + b is a function, where y is a function of x, x and y are variables, and a and b are the parameters of the function. Once we learn a and b, then for any x we can find the y. In other words, if we can learn some function capturing the correlation between time and my location, then for any given future time I can predict my probable location. But again, this is correlation, not causation: this function doesn't tell us whether x was the reason for y. So we used this kind of reasoning to understand whether demographic data, which are very privacy sensitive, like gender, political affiliation, or religion, are actually useful, whether they have any causal relevance to learning analytics models. And we found no causal effect of students' gender or age group on learning analytics outcomes like course performance. But we found that gender and age group can be inferred with high accuracy from behavioral data, for example from students' click-through data in an online portal. That means whoever has access to this data can infer these attributes about people and then perhaps do targeted advertising, tracking, or profiling. We tried to prevent this by combining adversarial censoring and constrained optimization to block this kind of inference attack. Very briefly: we have these behavioral features of students, we have a neural network model with some hidden layers, we feed in the features, and we try to predict students' performance. The way machine learning models learn is that they predict something, compare the prediction with the ground truth, and then update their parameters based on how wrong they were; they do this many times, and ultimately the parameters become good enough to predict with high accuracy. We built a model that could predict performance, but then we also predicted gender using the same model, and we updated the parameters so that over time the model forgets information about students' gender. In other words, at the end of training, the model could still predict performance, but it couldn't predict gender anymore, so we censored the gender information out of the feature set. We also used constrained optimization; if you took discrete math or some other course, I forget which, you may remember that constrained optimization means you constrain the parameter space with some cost function.
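Here is a minimal sketch of the adversarial censoring idea using a gradient-reversal layer in PyTorch: a shared encoder is trained so the performance head stays accurate while the reversed gradients from the gender head push the encoder to "forget" gender. The architecture, dimensions, and synthetic data are my illustrative assumptions, not the exact model from the talk.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -grad          # reverse gradients flowing into the encoder

encoder = nn.Sequential(nn.Linear(20, 16), nn.ReLU())
perf_head = nn.Linear(16, 1)      # predicts course performance
gender_head = nn.Linear(16, 2)    # adversary tries to predict gender

opt = torch.optim.Adam([*encoder.parameters(), *perf_head.parameters(),
                        *gender_head.parameters()], lr=1e-3)
X = torch.randn(256, 20)                   # behavioral features (synthetic)
y_perf = torch.randn(256, 1)               # performance labels
y_gender = torch.randint(0, 2, (256,))     # sensitive attribute

for _ in range(200):
    h = encoder(X)
    loss_perf = nn.functional.mse_loss(perf_head(h), y_perf)
    loss_adv = nn.functional.cross_entropy(
        gender_head(GradReverse.apply(h)), y_gender)
    # The adversary head improves at predicting gender, but the reversed
    # gradient trains the encoder to remove gender information from h.
    (loss_perf + loss_adv).backward()
    opt.step(); opt.zero_grad()
```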
So in the plot you can see the cost function we applied. The end result was that we identified some behavioral features that could be released and used to build learning analytics models but could not be used to predict students' gender or age. So, in summary, we showed that this demographic information is not required to build learning analytics, and it shouldn't be collected; we can do with less data. Okay, so now, along this line, I have planned future research following the internet measurement methodology: understanding, at a large scale, the security and privacy issues perceived by different user groups, like students, parents, teachers, and system administrators in different educational institutes. That means we need skills like web scraping, data mining, and natural language processing. Another thread would be understanding the different APIs, because large platforms like Google Classroom, Blackboard, and Canvas have these third-party APIs, and with these APIs anyone can build different applications that have access to the same datasets. We want to understand whether these applications are abusing the APIs, or whether there are third-party app ecosystems using this data for unintended purposes, for example. That needs skills like systems security and data taint tracking, for example instrumenting mobile devices to track what data they collect and how the data are shared with different third-party APIs, and finding vulnerabilities in these applications. So if you are also into hacking and interested in this direction of research, let me know. And then there is also user-centric research: as I told you, understanding how people perceive the risks, how we can inform them about the true risks, and how we can nudge them to properly use different technologies with different interventions, understanding their psychological profiles or mental models. That would require skills from HCI or psychology, so again, if you are interested in any of these research directions, let me know. There is also, if you don't know it, FURI, the Fulton Undergraduate Research Initiative, so you can get paid when you work under a research program; if you are interested, let me know. Oh, sorry, I forgot the fourth thread of this research, which is mostly ML-based attacks and defenses: for example, adversarial machine learning, like the adversarial censoring of a feature set I showed you, causal machine learning, and federated machine learning. Federated learning is very suitable in this context, because it trains machine learning models with data that resides in different institutes. Think about Canvas: Canvas provides its service to a lot of institutions and trains its models using data from all these institutes. Federated learning is a paradigm that can train models without sharing this data across institutes. How it works is that each institute updates the model locally and then sends this update to a global server, so that the global model can update its parameters without ever seeing the whole dataset. This is a very promising research paradigm in this domain, and a small sketch of the idea follows below.
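Here is a minimal sketch of federated averaging (FedAvg) along the lines just described: each institution takes gradient steps on its own data, and only the parameter updates leave the site, which the server then averages. Linear regression stands in for the real model to keep the sketch short; all names and numbers are illustrative, not any platform's actual API.

```python
import numpy as np

rng = np.random.default_rng(3)
global_w = np.zeros(5)                     # shared global model parameters

def local_update(w, X, y, lr=0.1, steps=10):
    """One client's local gradient steps; raw data X, y never leaves here."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three institutions, each holding its own private dataset.
clients = [(rng.normal(size=(40, 5)), rng.normal(size=40)) for _ in range(3)]

for round_ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)   # server averages the local models

print(global_w)
```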
So here is the summary; I will keep this slide up so that you can look at it and let me know what you think. I think we have 15 minutes left, right? So now I'll take questions, comments, suggestions, feedback, whatever; if you have anything, raise your hand and talk to me. So you were asking what happens if you have the uncensored image. Okay, that's a good question, because there are two different ways you can censor an image. If you use a cryptographic method, for example, and encrypt the full image or part of the image, it's reversible: someone who has the decryption keys can just reverse it and see the full original version. If you do pixel manipulation with generative models, then it's not fully recoverable, although deep learning models can undo some alterations; a blurred or pixelated image can sometimes be recovered. Again, the purpose is different. In the social media context... I see. Okay. Yeah, so I guess the purpose is different: if you want to share an image with only specific people and you want an absolute guarantee that no one else can recover the original image, then the cryptographic methods are more suitable. But that is probably not very applicable in the social media context, where the purpose is: I want to show this image, but I also want to prevent unintentional leaking of information about myself or other people. In that case the filtering methods are more suitable. Yes, so that's where the mobile application research area comes in. We want to give people the ability to quickly and automatically detect and obfuscate content, but they should also have control: if they don't like a specific operation, or they think they don't actually want to hide something, they can just share the image as is. Yeah, I can read this question from Zoom; thank you for this question. The question is: what led you to explore this research, it is so valuable, thank you; and, if there is time, do you have any concerns about the Mozilla-Facebook partnership? So, the first question, why I did this research: it was actually for very personal reasons, because I felt very annoyed when other people took images of me, especially my friends, that I didn't want shared. There is also a large area of research on how people negotiate when there is group ownership of images: how do they negotiate ownership and sharing preferences? Because if there are multiple people in an image, they often do not have the same privacy concerns or the same privacy preferences, so how do they collectively decide how the image should be shared? For me it was very personal; I felt this is an important area of research because at a personal level we have these social and professional problems, and at a higher level we have the collective issues of mass surveillance, tracking, and so on. So that motivated me to explore this research area. And then the partnership: this is unfortunate, and of course, in general, I worry about how much we are controlled by corporations who donate, for example, to our research, or who build these kinds of partnerships with non-profit organizations or research institutes and then dictate how we should do research. So yes, this is concerning. A recent story I can tell you: there is a very famous researcher who does research in this educational technology space on students' privacy, ethics, surveillance, and so on. And there are two conference venues, LAK and EDM, Learning Analytics and Knowledge, and Educational Data Mining.
And I don't remember which one, but this person was invited to give a keynote talk at one of these conferences, and he declined because Proctorio was one of the sponsors of the conference, and Proctorio is the most invasive, according to this researcher and also many other people. So yes, this is unfortunate, but we can just do our best to maintain our own independence in how we do research and what we focus on. Any other questions? Yes. I think so. I mean, privacy is regarded as a human right in many countries, most of the European countries, and they have it in their constitutions: you cannot collect people's data without their consent. You have probably heard about GDPR, which dictates how different online platforms should collect people's data and share it, and how they should obtain consent. Here in California, the CCPA is somewhat similar. So if you visit a website while you are in California, you will see different notices about your privacy choices than in any other state, and if you are in a European country you will see many more warnings and many more options to choose from, like what data you do or do not want to share. That's not universally true in the US, but yes, I do think privacy should be regarded as a fundamental human right. Because, again, these technologies learn about us, and then we also turn to these technologies: for example, when you go to YouTube, Facebook, or Goodreads, the algorithms decide what you should read, what news you should see, what book or movie you should watch. That means they not only dictate what we should think about but also how we should think, right? That's scary, and we don't know much about how these huge, complex, recently developed algorithms work. There is a huge area of research on the ethics and fairness of algorithms, because these algorithms decide many important issues in our lives, like whether someone should be given a loan, or whether someone should be granted parole in the criminal justice system. So yes, these things are as important as our food, clothes, or medical care, which are fundamental, so privacy should be regarded as a fundamental human right. Questions, comments? I think we are almost out of time, so again, if you have any questions or comments, or you are interested in research, shoot me an email. I think we should pack up, right?