Hello and welcome to NewsClick. After the recent fiasco involving Cambridge Analytica and its use of Facebook data, we really need to be concerned about how our Facebook profiles are being used to manipulate our choices. To discuss this issue, we have with us today Prabir Purkayastha, the Editor-in-Chief of NewsClick. So Prabir, this whole issue is being termed a Facebook data breach, and the exposé was done by Channel 4 News. How accurate is it to call this a data breach?

Well, that's an interesting question, because Facebook did allow Professor Kogan, who was an academic at Cambridge, access to the data through a particular app that he had developed. Either Facebook was not aware, or Facebook allowed Cambridge Analytica, which had an agreement with Professor Kogan, to use this data. It has now transpired that Cambridge Analytica paid $800,000 to Professor Kogan for this particular app to be developed. It is also interesting that the whistleblower who came on Channel 4 said Professor Kogan did not benefit financially from this; it was really something he did to help Cambridge Analytica. Let's take that at face value.

The purpose of the app was to gather psychometric data: if you answered its quiz, the app would map what kind of personality you have. The people who took the quiz were told that if they agreed to receive a profile of themselves, they should download and install the app. Once the app was installed, the data of 270,000 people was collected by Cambridge Analytica, which may not seem a large number. But what came along with it was the data of all their friends whose privacy settings were left open, and a rough calculation after this answer shows how quickly those numbers multiply.

Facebook clearly made two mistakes. The first is the nature of its internal privacy settings: the defaults and the security around them are all to be questioned. If apps are allowed to harvest data on this scale through a few people, then there is something seriously wrong with the way Facebook maintains the privacy settings of its users. The second is that this data should have been monitored, to check what was actually being taken off Facebook's servers. The fact that 50 million Facebook users' data was accessed through these 270,000 profiles would seem to indicate that they kept no check on what kind of data was being harvested, to see at least that this does not happen.

Now, is it a criminal violation? Was it a data security breach? Strictly speaking, no, because Facebook's settings allowed this to happen. Facebook's settings allowed any user to see their friends' likes, and since you could see your friends' likes, this app could also see them, and could therefore get your friends' choices as well. So this cannot be called a data hack or a data breach in the sense of a criminal activity carried out by breaching Facebook's security. Facebook allowed this to happen. Did they allow it consciously? Probably not. They are very unhappy about it today, and they did give notice to Cambridge Analytica that the data should be destroyed and not stored. Cambridge Analytica claims to have destroyed it, but that makes no difference, because once the data has been taken out it can be massaged and recast into different files, and it can always be said that it is really not Facebook data anymore.
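To make the scale concrete, here is a minimal back-of-envelope sketch of how 270,000 app installs could expose on the order of 50 million profiles. Only those two figures come from the reporting; the average friend count and the overlap fraction are assumptions chosen for illustration.

```python
# Back-of-envelope: how a small seed of app installs can expose a large
# friend graph. Only the 270,000 seeds and the ~50 million total come
# from the reporting; the other figures are illustrative assumptions.

seed_users = 270_000       # people who installed the quiz app (reported)
avg_friends = 200          # assumed average number of friends per user
overlap = 0.08             # assumed fraction of friend entries that repeat

raw_friend_records = seed_users * avg_friends          # 54,000,000
unique_profiles = int(raw_friend_records * (1 - overlap))

print(f"raw friend records: {raw_friend_records:,}")
print(f"unique profiles:    {unique_profiles:,}")      # ~49,680,000
```

The point of the arithmetic is that no security had to be broken: a permissive default alone multiplies a few hundred thousand consents into tens of millions of profiles.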
So I think Facebook's notice later on, that Cambridge Analytica should destroy the data and not keep it, amounts to locking the stable door after the horse has been stolen, and I don't think Facebook has much legal traction there either. All said and done, it shows a fiasco on Facebook's part. The fact that they don't really protect their users is very clear, and that is something Facebook has to apologize to its users for. It also shows the amount of information that Facebook gathers on each one of us.

The second part is that Cambridge Analytica was, shall we say, less than scrupulous at best, if not criminal, in harvesting this data from Facebook, because they had not taken permission from the people the data was ultimately taken from; they had taken permission from only 270,000 people. So it could be argued that this is at least a breach of ethics, if not a criminal exercise. On Facebook's account, there is the laxity with which it handled all this; on the part of Cambridge Analytica, there is the fact that they were definitely unscrupulous: they hid their purpose behind Professor Kogan, approached Facebook as if it were a research project, and later used it for purely commercial purposes, creating profiles of 50 million Americans. Both of these are to be seen for what they are. Criminal liability for Facebook? No. Something it should be ashamed of and apologize to its users for? Yes. And that so much power over us is given to Facebook is something for all of us to be concerned about.

So, moving on to the next thing: we have a very common phrase these days, that data is the new oil, and I think this case makes it very evident how data analytics is being used at this scale to really manipulate our choices. How has Cambridge Analytica done that?

Cambridge Analytica did this by using methods developed by two researchers, Kosinski and Stillwell, who had earlier run the same kind of psychometric personality tests on Facebook and figured out that by looking at your likes, they could create a psychographic profile of you. It was interesting that Kosinski's data showed that with some 68 likes they could predict whether a person was a man or a woman, whether they were black or white, whether they were gay, and so on. So what we like on Facebook says a lot about us, and if you know what a person likes, a psychographic profile is easy to create. This is where Cambridge Analytica's interest came from: if they could harvest these Facebook likes, as Kosinski and Stillwell had shown, and that is where Kogan came in, then they could create psychographic profiles of the users. Once you have created that, you can manipulate a person into making various choices.
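Mechanically, this kind of likes-based prediction can be as simple as a regression over a binary likes matrix. Here is a minimal sketch in that spirit; the pages, users and trait labels are toy data invented for illustration, whereas the actual studies worked with millions of real like records and more elaborate models.

```python
# Minimal sketch of likes-based trait prediction, in the spirit of the
# Kosinski and Stillwell work. All data here is invented toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows: users. Columns: 1 if the user liked that page, else 0.
likes = np.array([
    [1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 1, 1],
    [1, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
])
# Binary trait label for each user (e.g. a survey-reported attribute).
trait = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(likes, trait)

# Predict the trait probability for a new user from their likes alone.
new_user = np.array([[1, 0, 1, 0, 1]])
print("P(trait=1) =", model.predict_proba(new_user)[0, 1])
```

The point is not the particular model but the input: nothing more than which pages a user has liked, with everything else inferred from it.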
One example: suppose you are selling guns. If you know that the person you are targeting the advertisement at is insecure, and probably therefore anxious as a personality, you could show a fist smashing through a window pane at night and say: protect your home, buy a gun. For a person who is much more secure, who doesn't think his house is going to be burgled and doesn't feel he needs a gun for that, you could instead show a family picture of him taking his son out, teaching him how to shoot birds, so that you are really talking about the gun as a sport. Here is the same gun being sold, but for two very different purposes, and if you know how to target your advertisement, you are likely to be more successful.

Now, this is not just a Cambridge Analytica issue. It spans all of big data today, the new oil, as you called it, because it is really the availability of the huge amounts of data we create every day through our digital footprints on the internet that reveals so much about us. It is said that Facebook knows us better than our mothers do. Given that Facebook, or any of the digital platforms, knows us so well, or that this data can be collated from different digital sources, we have reached a point where it is easy to manipulate people into buying things, or in their voting. A psychographic profile means you now have the ability to change people's behaviour at scale. Professor Shoshana Zuboff has said that all these platforms, Google, Facebook and the others, have the ability to change people's behaviour at scale. And this is what big data is all about: changing our behaviour at scale, trivially to buy things we don't need, but, in a way that is a fundamental danger to society, also in our electoral choices. Electoral choices used to be less mediated by technology; people listened to candidates on television and in public speeches, where there was almost a direct link between the people and the candidate. Today that link is being broken and mediated through these kinds of tools, so that people make choices which are really governed by their feelings, their fears and their prejudices, with campaigns playing on exactly that. And on top of that we have the addition of fake news.

Yes, ads are one thing. Facebook also pushes ads; it uses our data to push ads. But then the issue of fake news comes up. That too can be used to manipulate our choices, and if they have the data, they can use it to show us the kind of fake news we would fall for. And with increasingly sophisticated tools coming up, I think that is also becoming much easier.

Well, that is the real issue: Facebook's anger at Cambridge Analytica is that this kind of thing should be restricted to Facebook alone. Only Facebook should have the ability to manipulate our choices and therefore to decide what ads we see. Once this profiling capability becomes available to others, they too can play the advertisement game. But that is only one part. The part that is dangerous for us, and for democracy in the future, is what happens when messages are tailored to our prejudices, our hatred, our fears: what I would call the instincts, which are much easier to manipulate than cognitive arguments, where you give reasons why A is better than B and why people should vote for A instead of B. This kind of tailoring is easy to operationalize, as the sketch below suggests.
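In ad-tech terms, trait-conditioned messaging is little more than a lookup from a predicted trait score to a creative variant. A minimal sketch of the gun-ad example, with an invented score, threshold and copy; real systems would score users with models like the one sketched earlier and test many variants.

```python
# Minimal sketch of trait-conditioned message selection. The scores,
# threshold and ad copy are invented for illustration.

def pick_gun_ad(anxiety_score: float) -> str:
    """Choose an ad framing from a predicted anxiety score in [0, 1]."""
    if anxiety_score >= 0.6:
        # Fear framing for users profiled as insecure or anxious.
        return "A fist smashes through your window at night. Protect your home."
    # Aspirational sport-and-family framing for users profiled as secure.
    return "Teach your son to shoot. A weekend of sport with the family."

for user, score in [("user_1", 0.85), ("user_2", 0.20)]:
    print(user, "->", pick_gun_ad(score))
```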
Instead of arguments, you identify, essentially, whether a person is prejudiced against certain communities, and you play on that. You paint the candidate as, say in India, pro-Muslim; you describe any affirmative action for disadvantaged communities as something that only helps that community and is therefore basically creating vote banks. This whole argument about vote banks, created for instance by the BJP, was really to say that anybody who talks about secularism is actually talking about a vote bank. In the United States, the same move was made against anybody who talked about affirmative action or social welfare: they were said to be helping only certain black communities who don't do any work, the "welfare queens" and so on. These become coded words that mean something else. And in that context you can play hateful messages: for instance, presenting a lynching that took place in Pakistan as a lynching that took place in, say, Muzaffarnagar. All of this can be used to create riots, to create hatred, and therefore to tilt voting preferences. With this we are really entering what somebody has called the infocalypse. It is not just an apocalypse: we are creating information that can change the way societies go, societal choices and not just electoral choices.

And these societal choices can be driven by instruments beyond ordinary fake news, which can at least be caught fairly easily, say by running the images through a Google image search and finding out that they are recycled. Today we are building tools that can manipulate images so well that the results will not be flagged as fake by an image search. You can have your face on a body that is not yours, doing things you are not doing, and it is so seamless that no image search will easily show it as fake, because it is no longer the original image you are searching for. This kind of visual tweaking, creating false images, is very easy today with the kind of AI tools we are building for image editing. Similarly, there is the example of Barack Obama giving a speech he really gave, set against Barack Obama's face delivering a speech he never gave, produced with audio and video editing tools and so perfectly lip-synced that you cannot tell which is false. If these tools are used, the fake-news ecosystem will be able to create almost lifelike images and video that are far more difficult to discern as true or false. Combine this with everything else we have talked about, and we are really entering a different zone.

So how do you prevent it? That is the next question. I would suggest that we should really look at regulation, and we should look at informing the public: digital education of the public, to make people more discerning, has to go hand in hand with legislating new laws for the digital realm and regulating platforms that will otherwise become more powerful than any country.

Thank you for joining us in this discussion. It was certainly very enlightening to have you talk about all of this, and thank you for watching this clip.