So we can start. Welcome, friends, to this fresh lecture series. We're starting off with a very pertinent topic, and it's extremely important for a number of reasons. There's a lot of research going on in this area, and yet, for whatever reason, there isn't as much debate and discussion as there should be in our field and in allied fields. So it's a pleasure to talk about this. We've done some research on this topic ourselves, so hopefully this will start off some debate, or at least give you some new things to think about as we go along. If there are questions, you can just ask me in between, because I'm sure there are a lot of talking points along the way. So without any further ado, I'll start the presentation. I've used the title "How Google, Facebook, and Other Surveillance Giants Control Our Lives." It might seem very ironic that we're doing this on Google Meet: we'll be talking a lot about what Google, Facebook, and others have been doing, at times surreptitiously, yet we have to use a platform like this, and maybe even Facebook and YouTube, to put our voices forth. There are three books I've consulted for this presentation. The one on your left, The Age of Surveillance Capitalism by Shoshana Zuboff, is a remarkable, huge book, and the first two sections are what I'll be drawing on today. The one on the extreme right is by the organization Amnesty International, and as you can see, even the title of my presentation today is inspired by this report, which came out recently: Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights. My presentation takes a somewhat different perspective: it's about how these companies control our lives, how they manipulate us into doing things we otherwise wouldn't have done.
The one in the middle is by Nick Couldry and Ulises Mejias. It's been out for almost a year, and it's about how data is colonizing human life and appropriating it for capitalism. So, as you can understand, it's largely about political economy: about how the surveillance infrastructure is used for a very new kind of economics. As we'll see, the paradigm is so different from anything we know that we often try to use familiar metaphors for this new situation, but in a moment we'll see that it's very different from what we know, or even from what we can imagine. So let's first define surveillance capitalism. This is Shoshana Zuboff's definition; her book has been highly recommended by many people, and I'm sure it will become compulsory reading for many of us in advanced media and communication classes. What surveillance capitalism does is claim human experience as free raw material. Whatever we do, or don't do, is taken as raw material: whatever I experience on the internet, or even how I behave in my real life in ways not directly related to my internet behavior, is free raw material for them, and they translate that experience into behavioral data. As we know, some of this data is important for improving services, and that is how Google and Facebook have managed to become the behemoths they are: they continuously improve the services they provide us. But the problem is that the behavioral data they draw from us isn't used only to improve those services; a large chunk of it is declared a proprietary behavioral surplus, something that belongs to these companies.
This surplus is then fed into advanced machine-learning processes and fabricated into prediction products that anticipate what you'll do now, in the next few moments, or in the near future. So basically, it extracts human behavioral data, and as you can imagine, that data amounts to trillions of data points, massive to an extent we can hardly imagine. It then uses this behavioral data to fabricate prediction products, so that they can predict very precisely what we will or won't do. As we go on, we'll find out how this prediction works, and how they can't just predict what we do but can even modify our behavior. That is where the entire problem lies, and that is why this topic deserves far more attention than we give it at the moment. As I said at the beginning, our existing frames of reference, our existing frames of thought, focus on the familiar, and when we're in unfamiliar territory we still try to use the familiar framework to define it. What that does is contribute to normalizing the abnormal. We're not even willing to accept it as something so abnormal, so unprecedented, that we aren't prepared to fight it; we don't even know the impact it has upon us. So that's one of the problems: it's so unprecedented, and most of the time we talk about the new surveillance dynamics in terms of the familiar metaphors known to us. These dynamics are dangerous precisely because they can't be reduced to the known harms. Most of the time when we talk about surveillance, we talk in terms of either monopoly or privacy, and that's where most of the responses come from. You might have heard a lot of people say: privacy doesn't matter to me.
There's nothing private about me, everything is public, so it doesn't matter. Or: okay, it's a monopoly, so what? Since we deal with these new surveillance paradigms within the monopoly or privacy frameworks, we aren't aware of the dangers this surveillance economics presents, and so our responses are limited to enhancing privacy or bringing down monopolies. But as we'll see, it goes much beyond that; it's much bigger and much more dangerous than we assume it to be. So, as background before we start: the surveillance reaches well beyond the information users provide when engaging with Google and Facebook. It's not just the information you provide, and often that information isn't even exact: you might give an email address you don't use, or a date of birth that isn't right. Phone numbers you have to give correctly most of the time, because there may be times when you need the phone to unlock your Google or Facebook account; that's how they ensure you give at least the right phone number. But there's a lot of other information: location, searches, app use, and so on. We'll see the humongous amount of information they gather as this surveillance process goes on. Google, according to Zuboff and many others, is the pioneer of surveillance capitalism, both in thought and in practice. We'll see that when it started off in 1998, and I have a document by the Google founders, their ideology was very different from what it is now. They're the ones who did all this experimentation and implementation, but they aren't the only actor on this path.
At the beginning of the 21st century they were the ones, and we'll find out what they pioneered in terms of thought and practice. They had the wherewithal to do all this experimentation and development, to fine-tune surveillance products that not just precisely predict human behavior but also manipulate it. And that is where this presentation will lead. This is what I was talking about; I'm not sure whether you can see it, so I'll zoom in. This is the journal article by Sergey Brin and Lawrence Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," way back, as you can see here, in 1998. This is where the Google project started, and the most important thing Brin and Page said at that point was that they weren't going to use it for advertisements. This is exactly what they say, further down in the paper. I'm not going to discuss the paper in detail; you can just Google Brin and Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," and you'll get it. It's freely available and very highly cited. This is what they suggested, from the article itself: "we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers." So they were apprehensive about the power of advertisers. And: "we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm." This is how Google started off.
The goal was just to provide a search engine that was extremely user-friendly, without going down the advertising path and without charging websites to put their results at the top. Initially, Google made money by licensing its product to various corporates: anybody who wanted to use the search engine would be licensed the product, and that's how it carried on for some time. But, as you understand, an idea like this had lots of venture capitalists funding it, and it reached a point where those investors wanted Google to become, as they would say, financially self-sufficient. Search created a feedback loop: search needed people to learn from, and people needed search to learn from. Google's search engine needed lots and lots of people searching so that it could fine-tune its algorithms, make them ever more relevant, and provide comprehensive search results. There are discussions about the algorithms themselves that I don't want to get into right now. What it meant was a symbiotic arrangement: the more people came to Google, the better Google became, and the more value people could get from it. More queries meant more learning, more learning produced more relevance, and more relevance meant more searches and more users. This cycle was very important, and anybody who used Google in those early days would know that you could fine-tune a search to find anything and everything across the world. So it was very easy for Google to get almost everybody on the internet onto its platform, largely because of its initial ideology of providing value to consumers without getting too bothered about advertising and revenue.
As I said, in late 2000, pressure from their backers led Google to decide on something that seems innocuous: providing targeted advertisements to individuals. Rather than matching advertisements to the keyword an individual typed in, Google would find out which people a particular advertisement should be shown to. The behavioral data Google had earlier used to improve the quality of search results would now be used to build a profile of each user, and according to your behavioral data, targeted advertising would be served to you. AdWords was a revolution in many senses, because it let advertisers reach exactly the people they wanted to reach. As we know, the advertising industry involved a lot of wastage, and if that wastage could be taken care of by putting advertisements precisely in front of the people who mattered, advertisers would be extremely happy. And that's how it started: Google using data, for the first time, not to improve the service but to know more about the people on it, so that advertisements could be targeted at them. Then came April 2002, a remarkable eureka moment for the Google data team in many senses. In those days, the data team would look back at search logs to see what people were searching for; at regular intervals they would go through the data to see the top searches in each area. What they saw was a lot of searches for a peculiar phrase: Carol Brady's maiden name. They wanted to dig deeper: why this, suddenly? Why not some well-known celebrity?
They found that the exact times people were searching for Carol Brady's maiden name were when that question was asked on Who Wants to Be a Millionaire. As you know, the United States spans several time zones, from Hawaii to the East Coast, and these programs are telecast at different times in different zones. As soon as the program was broadcast in a particular time zone, the search queries immediately flooded in. So Google could actually predict that a little while later, in the next time zone, people were going to search for Carol Brady's maiden name: something happening in the real world was reflected on Google's servers. That is when it dawned on them that they could mine this for huge things, because they could now find a reflection of what was happening on the ground in their servers. That is what led Google to mine data to read users' minds: not only behavioral data, but what people were thinking, what they were feeling, and what they were doing. We'll look at all these things in more detail. Their access to behavioral data was unique, first because the number of people using Google was huge, and because by now they had perfected the algorithms that build user profiles to the most precise possible extent. Anybody who was on Google for any length of time would be known to Google's engine; it would know what that person was like. So instead of us searching Google, the paradigm turned around: Google was searching us. As I keep talking about Google, I'll talk about Facebook as well, but we must understand that this isn't only about these two companies; they're just indicators, since this is how it all started.
That's why we keep referring to Google at the beginning. So that is when they found there was a surplus, and that this surplus could be a game-changing, zero-cost asset, because Google wasn't spending any money for this exclusive raw material. The raw material was exclusive to them because no one came close: almost everybody on the planet with an internet connection was there. And they would now secure more and more behavioral data than was needed to serve their users. So there was this shift once again: instead of using the data to serve its users, and we'll see this in a diagram in a moment, they stumbled upon this surplus, and that surplus was the game changer. Zuboff suggests it was a game-changing, zero-cost asset that was diverted from service improvement to a genuine and highly lucrative market exchange; we'll see how lucrative. Initially, and I'll zoom in again to make sense of this figure from Zuboff's book, it worked like this: whenever a user came to Google, they provided behavioral data, and through analytical engines that data was used for service improvement. Whatever behavioral data wasn't required was simply useless; Google had no use for it. This was the initial behavioral value reinvestment cycle: whatever behavior could be rendered as data would be used by their machine-intelligence algorithms for service improvement, and that's how it carried on. Later, we'll see, this gave way to Google's extraction imperative.
Now, there are parallels with Ford's inventions, which revolutionized production; we know about the Ford model, economies of scale, and so on. What Google's inventions brought was the extraction imperative: the extraction of raw materials, and these raw materials are our behavioral data, at an ever-expanding scale. It demanded economies of scale, because the more data Google had, and the more granular that data was, the more precisely it could analyze and predict. So that is a very important imperative of surveillance economics: the extraction imperative, extracting as much data as possible. And that explains why companies like Twitter or Facebook are very loath to throw people off their platforms. You'll find that even when a lot of people are doing things they shouldn't be doing on a platform, they aren't summarily removed, because if you start removing people, that is a loss for you: you're removing a class of people whose data you would need to make your predictions and perform all your other activities. That's why they want to extract as much data as possible from whoever is available. This next figure is very important, and it follows from what we just saw; again, I'll zoom in. Earlier, as you can see, users' rendered behavior yielded behavioral data that went through analytics into service improvement. Now, instead of being treated as exhaust, the surplus data is used as a new means of production: it can be used to create prediction products, predictions of how these people will behave in the future.
All those kinds of predictions, plus enormous surveillance revenues: it's almost a gold mine of data. Instead of the behavioral data being used only for service improvement, the data now feeds a much larger cycle, one that keeps adding continuously to what they already know. This cumulative cycle gives them an advantage: say a new company comes in and starts to think about surveillance; it would be starting off with a massive disadvantage. Even governments can't do the things these companies have been doing, and we'll find out that especially after 9/11, even governments were following the Google example. So instead of data being regularly used and thrown away, this cycle is much bigger than the earlier one, and this is where the new paradigm lies. One way this could have been addressed, as a lot of people suggest, is by some kind of government regulation: rules about what should and shouldn't be done, about how much data may or may not be extracted. Here I'm taking a slight detour from surveillance capitalism to talk about regulation, and then I'll come back. The problem is that, over the years, regulation itself has become a bad word. And when you have these heroes, these software giants, people at the helm of affairs treated as public heroes, they could get away with saying things like: do not regulate us, or else. There were arguments like: this is a very new kind of company, these are new logics, so there should be no question of government regulation. So any government regulation was out of the question.
Again, this relates to our own research. As I said, it isn't directly related to this presentation, but it shows that over a period of time there has been a deep-seated anxiety about the coercive nature of administrative government: we do not want government interference. That is one check that could have been applied, and it was very skillfully avoided by these giants, because any government regulation would have forced them to scale down their surveillance activities in a huge way. Another thing that, accidentally or otherwise, led to a lot of public acceptance of the utility of surveillance was the September 11 attacks. That is when everybody more or less agreed that surveillance was necessary for national security. All those exclusionary and intrusive surveillance practices were accepted by people at that point because they were important for national security, and that is how we got the concept of surveillance exceptionalism: a lot of people think we need these kinds of techniques to take care of terrorist, or future terrorist, activities. So this was one reason Google and all the other companies could manage to do all the other kinds of surveillance they do. It's also about the soft power of Google and companies like it: their influence over academic work and the larger cultural conversation is so extensive that public opinion can hardly turn against their activities. They also run a lot of corporate social responsibility activities. So it's very difficult for governments to rein in their power, because the moment any government starts to act, there will be public opinion against it. Google's influence has grown over the years because of its influence over academic work and even the cultural domain.
This is another question people keep asking: if it's a search company, why is it investing in smart home devices, in wearables, in self-driving cars? If Facebook is a social network, why is it developing drones and augmented reality? Is it only to provide us things we probably don't require? Do we need all these devices for our everyday work? We'll find out how it goes on. Another terrain of behavioral surplus, another area where Google ended up looking for it, was the Android platform. The moment the internet shifted to mobile phones, Google search and Google services moved through Android, and Android would sustain the effort of harvesting behavioral surplus, getting all the possible data about everybody on the internet. How does the surveillance extend? It extends through all these activities, without us realizing it. Through the digital books we read: it records that he or she is that kind of person reading that kind of book. Through the collection of personal information by the Street View cars: as they drove along taking pictures for Google Earth and the like, they also captured information about the Wi-Fi networks along the way. Through the capture of voice communications, which I'll talk about in a moment: Alexa, Siri, Google Assistant and what have you. All these voice communications are used in the same laboratory to find out more about you and predict more about you, and, as we'll see, not just to predict but to modulate, to change the way you would behave in particular settings. And then there is the bypassing of privacy settings.
Even if you say you don't want to give certain information, there are ways in which they bypass that; it isn't always a matter of your agreeing. And in many cases, if you don't agree to their terms and conditions, you can't even go to the next page. Whether it's your Wi-Fi-enabled air conditioner, your refrigerator, or your smart TV, all these devices, and if I started on all of them I would take hours, are used as surveillance devices, as extraction devices, to gather as much information as possible and fine-tune their intelligence about human beings in general. There's also the manipulation of search results and the extensive retention of search data: whatever we search stays on their servers for a very long time. The tracking of smartphone location data: even if you turn off your GPS, they keep tracking your location. Wearable technologies: whether it's health data or any other kind of data, the surveillance extends to a lot more about you than you can imagine. Then there are facial recognition capabilities and the collection of student data for commercial purposes, all highly documented, and the consolidation of user profiles across all of Google's services and devices; nowadays, as you can see, Google's various products all resemble one another. Through drones, through body sensors: you can put on something like a band-aid and it will transmit data to wherever. The Internet of Things itself is one big setup from which all this data can be extracted, or extrapolated, to give more precise information about us. Neurotransmitters, digital assistants.
You probably don't even require a digital assistant, but the idea is that if I have one, I can just tell Alexa or Siri to do something. And every time we ask them to do something, we're providing them even more data, not just in the content but in the intonation of our voice, because, as we'll see, emotions are a very important indicator of when you might buy and when you might not, when you might make a voting decision and when you might not. It extends to that level. All these voice assistants yield very important data, because when you're talking to one of them you're talking in private, and that is where your data is, as they would put it, very clean. So the surveillance capabilities of Google and all these giants extend even beyond the realm of things we can imagine. This is what happened in Australia; I'll talk a bit more about that. The location tracking stayed on: even if you logged out of your location, even if you switched your location off, Google would keep on tracking it. The extraction dynamic includes your searches, your emails, texts, and photos; your location, communication patterns, attitudes, preferences, interests, emotions, and illnesses; the social networks you move through; the purchases you make; and a lot more. This is just an indicator of the kind of information that is taken.
As I said, this information is taken to know more about us, so that they can make predictions about us for the advertisers' products; that's one end. And then also to manipulate our behavior, and that is where the problem is: these surveillance giants now have the capacity to manipulate our behavior. We'll find out how that happens. Again, I'll zoom in; this is the third figure of the same cycle. The behavioral surplus, the extraction imperative, operates in the online world: they gather information about us there. But the prediction imperative operates in the real world, and the more precise the prediction, the better for them. That is why they require those millions, billions, trillions of data points, so that they can make predictions with great precision. And then it can even lead to modified behavior; we'll talk about modified behavior in a moment. So the territory is no longer the online world: it now extends into the physical world, into our daily life, into our body and self, and into our modified behaviors. That is where it has come. And it's already there; many of us will have faced things like this: his Android phone prompted him to download the McDonald's app at the very moment he crossed the threshold of the restaurant. They had been tracking him for a long time and knew he might want food at that point; this is actually happening. And I don't even want to get into Pokémon Go, where businesses would pay to be shown on players' screens as the place to go, leading players to restaurants or malls. So it is actually now happening in the real world.
The tracking happens in the online world, and the prediction and the change now happen in the physical world. The same goes for Facebook: the Like button helps them gather all the information they hold about us, and it was found that even if you log out of the site, Facebook keeps on tracking you. So we're talking about Facebook here too. These surveillance devices include smart TVs, smart appliances, voice assistants, and toys. I won't name the companies, but there are these intelligent toys that track information and record voices, and, as many of us will have heard, smart TVs that actually record conversations to find out more about you. So what they tell us outwardly is that they're using all this to make our experience better, but in the end they want ever more detailed information about us to predict our behavior in the future. There was this fabulous book called Snoop by Sam Gosling; I'm taking a slight step back here to suggest that these things have been happening for quite some time now. There are, for example, the five basic personality traits. Openness: people who are open to feelings, curious, creative. Conscientiousness: people who are self-disciplined and achievement-oriented. Agreeableness: people who are warm, compassionate, and cooperative. Extraversion. And neuroticism: people who are susceptible to unpleasant emotions.
In that book, Gosling pointed out that just by looking at people's homes, one could predict their personality characteristics. Now, with the amount of data that Facebook and the others hold, the prediction is far more precise. Whatever you put in, even the mere fact that you choose to like certain things, gives them a huge advantage. There was a paper by Golbeck, Robles, and Turner, presented at the CHI conference on human-computer interaction, showing that personality can be predicted from social media. Your social media use is raw material for estimating your personality profile: how much openness you have, how much neuroticism, and so on, without anyone ever having met you. And it is about the metadata too: not only what you post on Facebook but, for example, how much time you spend on a particular platform. They can predict not only the five personality traits but also things like your satisfaction with life. Your Twitter profile, your selfies, your Instagram photos give these machine learning algorithms deep insight into your personality. And this has since been broken down into ever more precise characteristics: for example, twelve categories, such as whether you seek excitement, harmony, curiosity, ideals, closeness, self-expression, practicality, or stability, and even five dimensions of values.
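To make the idea concrete, personality prediction of this kind can be caricatured as ordinary regression: represent each user by behavioral features and fit weights against known trait scores. The sketch below uses a single invented feature (hours per day on a platform) and a tiny synthetic dataset; the actual studies used far richer feature sets and proper machine-learning pipelines:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Invented training data: hours per day on the platform vs. a measured
# neuroticism score for a handful of users (purely synthetic numbers).
hours = [0.5, 1.0, 2.0, 3.0, 4.0]
neuroticism = [2.1, 2.4, 3.0, 3.7, 4.1]

a, b = fit_line(hours, neuroticism)

# Score a new user from metadata alone, without ever meeting them.
new_user_hours = 2.5
print(f"predicted neuroticism: {a * new_user_hours + b:.2f}")
```

The point of the caricature is that once trait scores exist for some users, a fitted model scores everyone else from metadata they never consciously disclosed.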
I give these examples, all from recent research, to show the kind of information these surveillance giants can obtain about us. In other words, they can make personality-correlate predictions about us at a very precise level. Most of us will have heard of Cambridge Analytica's involvement in all this. Their CEO, Alexander Nix, claimed to have data at an individual level: somewhere close to four or five thousand data points on every adult in the United States. I am not sure how accurate that claim is, but it is an indicator that with access to Facebook's, Google's, or even Microsoft's databases, you could hold that many data points on every adult. And as I said, it need not stop at your online activity; it carries on beyond that, into this new field of emotion analytics. They can detect your smile, joy, humor, amazement, excitement, surprise, frown, sadness, disappointment, everything, tracking your emotional state over time to find the exact moment when you would be receptive to certain kinds of content. We'll talk about a case that broke in Australia in a moment. This affective computing market was expected, before the COVID crisis, to reach about $54 billion by next year. Which brings us to the question: why should we be bothered? Why does it affect us? And with that, I come to the final part of my presentation.
So what is this extraction imperative, this drive to extract granular views of us, for? It is for something we are not even aware of: behavior modification. Just as sensors were used to modify device behavior, a data scientist quoted on this says their sensors are used to modify people's behavior just as easily as they modify device behavior. There are many great things we can do with the Internet of Things, like lowering the heat in all the houses on your street so that the transformer is not overloaded. At an individual level, though, it also means the power to take actions that override what you are doing, or to put you on a path you did not choose. That is where we are talking of a new kind of paradigm, and where surveillance goes beyond privacy and monopoly: even if they can modify the behavior of only a small percentage of people, that would be enough to damage democracies, or enough to make certain markets grow while others collapse. That is what is now going on with surveillance dynamics: extract information, make predictions, and use those predictions for behavior modification. And that is where the problem lies. None of this is entirely new. Richard Thaler won the Nobel Prize in economics in 2017 for behavioral economics, and this is the idea of the nudge: it shows, quite scientifically, how a choice architecture is presented to you, something we also study in framing theory.
By providing those choice architectures, and I don't have time to get into the details, it is possible to nudge you toward a particular kind of behavior. Shoshana Zuboff also discusses two other behavior-modification techniques: herding and conditioning. Conditioning has been part of the behavioral sciences for a long time; B. F. Skinner's book on the subject was published back in 1971, dealing with positive reinforcement and the like, and even then there was a great deal of debate about how unethical it was. But Skinner imagined a pervasive technology of behavior that would one day enable the application of behavior-modification methods across human populations. These giants have the wherewithal to reach people across national boundaries, religions, and ethnic identities, and so they have massive power to influence people in particular ways, knowing the precise details from the thousands of data points we provide them. One important thing that works in their favor is that they have done this, and continue to do it, without our knowing it is being done. The moment we realize what the data can be used for, we might become more aware, or use these services differently; but the data already accumulated gives them enormous power. As Zuboff puts it, the prediction has to approximate certainty. It is not about making predictions that are merely correct in some loose way.
You can see this with machine learning generally: translation systems, for instance, are not very good for the time being, but they are continuously improved so that you can speak a phrase and get an exact translation in different voices. In the same way, the systems producing these predictions are continuously improved so that the gap between prediction and observation is narrowed. They want those predictions to approach certainty: this is the kind of person he is, this is what he might vote for, this is what he might buy, this is the topic he has decided on or the area where he holds a particular belief. It is about engineering a change in a person's behavior, and then changing how lots of people make their day-to-day decisions, and that is what we should be concerned about. Here is a very recent example, published in Nature: "A 61-million-person experiment in social influence and political mobilization." Sixty-one million, about 6.1 crore, Facebook users during the 2010 US Congressional elections. Using certain impulses, some people were shown pictures of friends alongside an "I Voted" message, which in turn influenced them to go and vote as well. There is a great deal of detail in that study.
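The "narrowing of the gap between prediction and observation" is, at bottom, what any online-learning loop does: after each observed outcome, the model's parameters are nudged in the direction that shrinks the prediction error. Here is a minimal one-parameter sketch, with all numbers invented for illustration:

```python
def online_update(w, x, observed, lr=0.1):
    """One stochastic-gradient step for the model prediction = w * x."""
    predicted = w * x
    error = predicted - observed
    return w - lr * error * x, abs(error)

# The 'true' relationship the system is trying to approximate.
true_w = 2.0
w = 0.0  # the model starts knowing nothing
errors = []
for step in range(50):
    x = 1.0 + (step % 5) * 0.5   # some incoming behavioral signal
    observed = true_w * x        # the behavior actually observed
    w, err = online_update(w, x, observed)
    errors.append(err)

print(f"learned w = {w:.3f}, first error = {errors[0]:.2f}, last error = {errors[-1]:.6f}")
```

Each fresh observation pulls the estimate closer to the truth, which is why a platform that never stops observing never stops improving its predictions.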
I won't go into the details, but suffice it to say there have been very successful experiments in political mobilization through impulses delivered to voters. That is the point. The same goes for emotional contagion: if you see one person expressing outrage, it leads many other people to express outrage about the same or similar issues. This is one way online messages can affect your offline behavior: you see people you know angry about certain things, and you end up angry too. Emotional contagion has likewise been demonstrated in recent research. I mention all this to show the precision and the scientific techniques being used to influence and modify our offline behavior. This is what happened in Australia, as I was saying: Facebook was reported to be targeting insecure youngsters, identifying the moments when they felt angry or emotionally vulnerable as the moments when they could be sold a particular product. This actually happened; a quick search will turn up the reports. So we arrive at a global market architecture unfettered by geography and independent of constitutional constraints, because, as we have just seen, regulation does not work, or has not been able to work. There is hardly any regulation here, and even where regulations exist, these companies find ways around them.
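The contagion mechanism described here, one user's visible outrage raising the odds that each of their friends expresses outrage in turn, can be sketched as a probabilistic spread over a friendship graph. The graph, the base rate, and the per-angry-friend boost below are made-up parameters for illustration, not figures from the actual studies:

```python
import random

def spread_step(graph, angry, base_p=0.02, boost_p=0.3, rng=None):
    """One round: each calm user may become angry, with probability
    raised by each already-angry friend they can see."""
    rng = rng or random.Random(0)
    newly = set()
    for user, friends in graph.items():
        if user in angry:
            continue
        angry_friends = sum(1 for f in friends if f in angry)
        p = min(1.0, base_p + boost_p * angry_friends)
        if rng.random() < p:
            newly.add(user)
    return angry | newly

# A tiny made-up friendship graph; user 0 posts something outrageous.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
angry = {0}
rng = random.Random(42)
for _ in range(5):
    angry = spread_step(graph, angry, rng=rng)
print(f"angry users after 5 rounds: {sorted(angry)}")
```

Tuning which posts each user sees amounts to tuning the edges and probabilities of exactly this kind of process, which is why feed ranking can amplify or damp a contagion.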
So it poses risks to freedom, to dignity, and to the sustenance of the liberal order. This is my final slide. What it means is that the means of production are now subordinated to an elaborate new means of behavioral modification, which relies on a variety of machine learning techniques. First comes the extraction imperative: extract as much information from as many people as possible, bring more and more people under the internet umbrella, and gather more and more data points about each of them. The extraction imperative feeds the prediction imperative: using the extracted data to predict people's behavior. Then come the techniques of tuning, a kind of nudging, of herding, and of conditioning, used to shape individual, group, and population behavior in ways that continuously deliver guaranteed outcomes. You can actually predict which people can be modified, which people can be made to take decisions they would not otherwise have taken. You can make them vote in a certain way simply by exploiting their emotional condition, their personality characteristics, and their pattern of thought at a particular moment. That is where I will end my presentation, and if you have questions, please ask. As I keep repeating, it is about using the extraction imperative to make predictions and then to make these behavioral changes. Yes, that is a fabulous question.
One easy answer would be to say we should get off those networks as much as possible. But more important is the awareness that our behavior might be modified, that these persuasive techniques are being used on us; if that awareness is with us, much of the problem is already sorted out. It is not about how much data you put out: the very fact that you are online means you are already providing that data. And it is not just about using information, violating our privacy, or acting as a monopoly; it is about the power these companies have over governments and nations, power exercised over whole societies to modify their behavior. Perhaps that power has not yet been used to the extent it could be, but it goes beyond the familiar line that "we are the product." Nor is this just about artificial intelligence; it is about holding a mass of data points that is not available to anyone else, the information we have already provided them, which makes it easy for them to make predictions and to steer us toward particular modified behaviors. If there are questions, I would love to answer them. Thank you, Shailesh. If any of you wants to speak, please unmute and talk to us. Shailesh asks: since most human beings are inconsistent in their behavior, how long can one expect the data to remain useful? Yes, sir, as I said, this is real-time data that we keep giving them.
And that is the point: it is not one-time data. It is continuously refreshed and continuously worked on to yield more and more granular information about all of us. So it is not a single snapshot but a record of how our behavior carries on. If there are any other questions or clarifications: as I said, we have tried to keep this academic, and a great deal of information is available through the published research, though most often we are not even aware of the possibilities it describes. There is a feedback link in your chat box; please fill out the feedback form, because that is how the certificates are generated. Write your name and email ID carefully so we can issue the correct certificates, and they will be sent out within an hour. Is there any other question? "I have a question." Yes, Mr. Najib. "I wanted to ask: how do we protect ourselves? Are there any things we can do, practically speaking?" Actually, sir, as I said, awareness is the most important thing to begin with, and it will be a long-winded process. It may have to start with government regulation, with discussions like this one, and with establishing how much of our information they actually use.
Because as of now, the information they gather from us is unrivaled, and there is no protection. Even if you say you do not want to give out that information, there is no way for any government agency to verify whether they are actually extracting it. We are, as I said, in an unprecedented situation, so we do not even know how that contest will play out. But many people talk about awareness and about government regulation, and as we know, government regulation always cuts both ways. Still, it has to proceed like that. The more intense the public pressure, the more companies will have to come out of this rut, because it cannot remain one-way traffic. We already have companies saying they will not take your data; if I may name one, Apple is running advertisements presenting its devices as private, putting privacy right on the agenda. When that pressure exists and they know that we know, then half the problem is taken care of. All this carried on because we did not know this is how our information would be used. People tend to think, "my little bit of information doesn't matter," but billions of those little pieces of information are exactly what have given these companies their capacity.
And Shailesh was right to point out that our moods change; today I might support one party, tomorrow another. But the tracking is continuous, done in real time. Even with WhatsApp, the messages may be end-to-end encrypted, but they still have information about how much time you spend on WhatsApp, which groups you are in, and so on. That metadata is itself data enough for their machine learning systems. I used to wonder why anyone would need a Wi-Fi air conditioner, why you would need to switch it on remotely. The reason they offer all these smart things is that they need the data, not only for the stated purpose but for everything else, including, my God, the toys that children use, which record their voices. I am not sure this has registered so much in our society, but there was a great hue and cry in Germany, for example, when the Street View cars went around photographing people's houses, and the program was restricted there. All of Street View is possible because they have that granular data. And with Pokémon Go, people would rush into stores because their screens showed that was where their Pokémon was, so just go there and catch it. That is how our online information is used to modify our offline behavior. "Dr. Pandey, would it be advisable to break down these monopolies, or to create competition, in order to ensure that some ethical players also enter the market? What do you think?" That would be a very, very difficult thing to do.
As you know, these are behemoths, and there is a lot that Google does well; the Google News Initiative, for example, is a very good program, and they help society in many other ways. The moment any single government starts acting against them, it finds they are bigger than many governments combined, so it will be very difficult. And China does much the same thing with Baidu, except that there the information is used by the government itself to keep track of people; it is not a private agency doing it. So it is the same monster in a different form. "Thank you, Dr. Pandey. Thank you so much, sir." I have shared the link to the YouTube channel I started recently, where I post videos; some of you may want to subscribe. The feedback link is there, so please send in your feedback, and let us know what topics you would like us to take up. Next week I plan to come back and talk about eighteen cognitive biases that cloud our decisions, same time next Saturday, and we would love to see most of you back. Thank you, friends, thank you, senior colleagues, thank you everybody for your support and your good wishes. I take your leave now. Thank you very much, and thanks a lot.