A British mathematician once said that data is the new oil. Our data points are collected, analyzed, and utilized en masse. Our swipes, our purchases, what we're interested in, how we move, how we hold our phone: anything can provide material for analytics companies to build extensive profiles of us for the purpose of selling them. So in the so-called surveillance economy, a lot of actors make a lot of money, though maybe not so much the users themselves. How this works in detail is what Wolfie Christl is going to tell us today. Wolfie Christl is a technologist, researcher, and activist from Austria. Together with the privacy scholar Sarah Spiekermann, he wrote Networks of Control, a report published this October. He has also developed a game called Data Dealer, which some people might know, about privacy and surveillance. Today, Wolfie is going to give us a talk about corporate surveillance, digital tracking, big data, and privacy. Please help me welcome Wolfie Christl. Thanks a lot. Can you hear me? Yes. Hi, everybody. My name is Wolfie Christl. I'm from Vienna, Austria. First of all, I'd like to modify the title of my talk a bit and get rid of the "big data". To be honest, I only use it because it always helps to get some people to listen to me. I mean, big data can mean everything and nothing, so let's just get rid of it. Nevertheless, during the last few years, we've seen the birth of a large-scale surveillance economy. Today, thousands of businesses are monitoring, profiling, categorizing, rating, and affecting the lives of billions across platforms, devices, and life contexts. In my presentation, I'll talk about how networks of these companies are collecting, analyzing, and utilizing our data, rarely with our effective informed consent and largely without our knowledge.
I will also talk about how personal data analytics is already being used in fields like marketing, retail, insurance, finance, health care, employment, and so on, to make decisions about people. I'll take you on a small tour through examples, many examples, of corporate practices. And I'll also address the question: what could possibly go wrong? But beware, I will use some more bullshit terms, because I have to. If we want to discuss this whole marketing data sphere, we unfortunately have to dive into some marketing bullshit terms as well. I'm sorry, and I will also use the big data term again. And this is Lightbeam. Many of you probably know it: a browser extension which shows who is watching us when we're surfing the web. I visited five websites here: a large health website, a weather site, dictionary.com, The New York Times, and Vice Magazine. In the background, these five sites connected to 118 other third-party services and told them about my visit. Only five websites, and it was recorded by more than 100 companies. Let's have a look at it in a bit more detail. The five websites I visited are represented by the circles, the third parties by the many small triangle icons you can see. So why do these five websites transmit information about our clicks to other companies? Because they have actively put small pieces of code into their websites. And who are these third-party companies represented by the small triangle icons? Ad networks, analytics services, consumer data brokers, and of course Google and Facebook and many others. As you can see, some of the third parties are connected to two or more of the websites I visited. That means they're able to monitor and track me across several websites. Now imagine what happens when I'm surfing the web for a day, a week, or a month. This is how thousands of tracking companies are able to compile profiles of our online behavior.
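The cross-site mechanism just described, one third-party script embedded on many first-party sites, recognizing the same browser again via its cookie, can be sketched in a few lines of Python. Everything below (class name, domains, pages) is invented for illustration; no real tracker works exactly like this, but the linking logic is the same.

```python
# Illustrative sketch: how one third-party domain embedded on many sites
# links separate visits into a single cross-site profile via its cookie.

class ThirdPartyTracker:
    """Simulates a tracker whose script is embedded on several sites."""

    def __init__(self, domain):
        self.domain = domain
        self.profiles = {}   # cookie_id -> list of (site, page) visits
        self._next_id = 0

    def on_page_load(self, cookie_id, site, page):
        """Called whenever a page embedding this tracker loads.
        If the browser has no cookie for us yet, set one (first visit);
        then log the visit under that persistent cookie ID."""
        if cookie_id is None:
            cookie_id = f"uid-{self._next_id}"
            self._next_id += 1
        self.profiles.setdefault(cookie_id, []).append((site, page))
        return cookie_id  # the browser stores this cookie for our domain

tracker = ThirdPartyTracker("ads.example-network.com")

# One user browses three unrelated first-party sites that all embed
# the same third-party script; the cookie links the visits together.
cookie = None
for site, page in [("weather.example", "/forecast"),
                   ("news.example", "/politics"),
                   ("health.example", "/symptoms/migraine")]:
    cookie = tracker.on_page_load(cookie, site, page)

# The tracker now holds a cross-site browsing profile under one ID.
print(tracker.profiles[cookie])
```

The same pattern scales to 118 third parties: each embedded service sets its own cookie and accumulates its own profile, invisibly to the user.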
I know there are some tools to block some of these trackers, but most people outside the privacy bubble are still very surprised when they see this. And besides that, some of these anti-tracking tools have become part of the tracking ecosystem as well. Of course, it's not only surfing the web. There's also our smartphone, a powerful small computer, even if it's a bit broken now and then, like this one, containing a list of our contacts, our friends, our calls, our messages. It tracks our movements: lots of very private information. And who is able to access this data? Many apps, if we give them permission to access it. And what's the business model of many app developers? Collecting and selling our personal information. The interesting thing is that most people would not hand over detailed information about all their contacts, addresses, and movements to other people they don't know. At the same time, most people don't seem to care about handing it over to thousands of companies they don't know. Back to the five websites I visited: let's have a look at some of the 118 third parties which received information about me surfing the web. Let me introduce you to AddThis and BlueKai. These two domains tracked my visits to the weather website and to Vice Magazine. I'm sure you've seen AddThis already. They offer website providers these tiny little social sharing buttons, mostly looking similar to that. And BlueKai you won't see on the websites you visit; it's a typical data collection service. Both AddThis and BlueKai belong to Oracle, one of the world's largest database and business software providers. Oracle hadn't been known as a consumer data broker until recently. But in the last years, they acquired several data companies: BlueKai, an online data marketplace, and Datalogix, which has partnerships with stores that offer membership or loyalty cards, collecting purchase data from 1,500 large retailers.
And Datalogix is able to link those purchases to the digital world. And Oracle acquired AddThis, which is harvesting behavioral data about website visitors on more than 15 million different websites. Oracle has now integrated these companies into its Data Cloud. Very nice name. This is a nice slide about it. Here, the company explains that they record what consumers do, what consumers say, and what consumers buy. Great, right? According to Oracle's own statements, they aggregate 3 billion user profiles from the 15 million websites which have AddThis installed, but also 700 million social messages daily, they claim, and billions of purchases, in order to target people, personalize content, and measure how people are interacting across platforms, channels, and devices. From online, mobile, email, and social media to TV, radio, direct mail, and even in-store, they also try to monitor and link offline purchases. And in the center, you can see the Oracle ID Graph, which allows them to link and match user profiles across platforms, devices, and company databases in order to create one addressable consumer profile, as they write, to "identify customers everywhere", to "unify addressable identities", and so on, and so on. Here you can see the different IDs, from the cookie ID to the email ID, postal ID, mobile ID, and set-top ID; these are the set-top boxes for TVs. So they also try to link that information, and probably also information from other devices and platforms. Oracle also provides a wide range of data from partners. This is Oracle's Data Directory, quite an interesting document. Oracle provides, shares, and combines data from data brokers like Acxiom, Infogroup, and Neustar, and also from Experian, TransUnion, and Equifax, the three big credit reporting agencies in the US, and many others. They also provide data from credit card companies like Visa and MasterCard. And of course, Oracle also partners with Google and Facebook.
Datalogix, one of the companies Oracle acquired, was even one of the first data brokers to start partnering with Facebook, back in 2012. They connected their offline data about purchases, financial behavior, credit cards, home value, net worth, income, and so on with Facebook's rich user profile information. Of course, a platform like Facebook, or Tracebook, as I like to call it, collects vast amounts of information itself, about the everyday lives of 1.8 billion people, and 1.2 billion people use it every day. Facebook puts these nearly 2 billion people into thousands of categories and lets advertisers use these categories in order to include or exclude people from ads on an individual level. A few weeks ago, the US-based non-profit ProPublica was able to purchase a housing ad on Facebook. They wanted to address people who are likely to move and are interested in houses and so on. But if you look at the bottom, you can also exclude people who match specific criteria. In this case, people who are categorized as Hispanics, African-Americans, or Asian-Americans are excluded from seeing the ad. Besides the fact that this is probably illegal discrimination in the US: how do they know about the "ethnic affinity", as they call it, of someone? And are these classifications accurate? Basically, we don't know. But maybe they use similar methods as in this Cambridge study, an academic paper. The researchers tried to predict private attributes of users just based on their Facebook likes, and those were the results. They were able to successfully predict ethnicity, sexual orientation, and political and religious views, just based on data about around 170 Facebook likes per user. The study had around 60,000 participants. As you can see, ethnicity can be predicted quite accurately. In Europe, Facebook doesn't offer to directly categorize people according to ethnicity, but they still sort people into many different categories.
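The core idea of the likes study, that individual likes carry statistical signal about private attributes, can be illustrated with a toy classifier. The training data, labels, and the simple log-odds scoring below are invented for illustration; the actual paper used dimensionality reduction plus logistic regression on millions of likes.

```python
# Toy sketch of likes-based attribute prediction: learn which "likes"
# correlate with an attribute label, then score new users. All data here
# is synthetic; real systems use vastly more data and richer models.
from collections import Counter
from math import log

# (set of page likes, attribute label) -- invented training users
train = [
    ({"harley", "hunting"}, "A"),
    ({"harley", "nascar"}, "A"),
    ({"opera", "tofu"}, "B"),
    ({"opera", "yoga", "tofu"}, "B"),
]

def like_log_odds(train):
    """Laplace-smoothed log-odds of label 'A' for each like."""
    a, b = Counter(), Counter()
    for likes, label in train:
        (a if label == "A" else b).update(likes)
    vocab = set(a) | set(b)
    return {w: log((a[w] + 1) / (b[w] + 1)) for w in vocab}

weights = like_log_odds(train)

def predict(likes):
    """Sum the per-like evidence; the sign decides the label."""
    score = sum(weights.get(w, 0.0) for w in likes)
    return "A" if score > 0 else "B"

print(predict({"harley"}))       # -> "A"
print(predict({"yoga", "tofu"})) # -> "B"
```

Even this crude version shows why seemingly harmless likes become sensitive: each one nudges a hidden score about you.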
For example, you could purchase an ad on Facebook and target it at people living in Norway. My last talk was in Oslo, so this is Norway. 25 to 40 years old, speaking a specific variety of Norwegian, and then exclude people who are interested in Arabic language, Islam, multiple sclerosis, online gambling, personality disorders, plus-size clothing, stomach cancer, trade unions, wheelchairs, and so on. These are basically protected categories of data in Europe. So ethnic profiling isn't directly available in Europe, but you can still use proxies. However, relax: after massive criticism and media coverage, Facebook told us in November that it will build a system to prevent advertisers from buying credit, housing, or employment ads that exclude viewers by race. So, take it easy, right? Anyway, it's only about ads, who cares? Also a few weeks ago, when the large UK insurer Admiral announced that it would price car insurance based on Facebook posts, Facebook quickly reacted and blocked the insurance app. Perfect, right? On the other hand, Facebook itself registered a patent on credit scoring based on Facebook data. They write: when an individual applies for a loan, the lender examines the credit ratings of members of the individual's social network, so of your friends. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected. Is Facebook planning to provide credit scores based on data about our Facebook friends? We don't know. Facebook will probably say: it's just a patent, we won't do it. But can we trust this company? In 2014, when Facebook acquired WhatsApp, they said: don't worry, your WhatsApp data is safe, we won't share it with Facebook. In 2016, the company announced it would start sharing WhatsApp data with Facebook. Okay, after continued criticism, Facebook told the Europeans that it won't combine Facebook and WhatsApp data, for now.
Whatever. As I see it, like many data companies, Facebook mostly acts in a two-steps-forward, one-step-back way, and so on. We'll see. Of course, it's not only the web, smartphone apps, and Facebook. There are many ways personal data is being collected and utilized today. Let's have a look at VisualDNA. They use data from online quizzes to create personality profiles about users. They say the quizzes have already been taken by 40 million people. Altogether, VisualDNA provides digital profiles of 500 million consumers, for example for marketing and online targeting purposes, but also, and this is remarkable, for credit scoring and risk assessment. To that end, VisualDNA partners with MasterCard, the credit reporting agency Experian, and Admiral, the large UK insurer you may remember from before. This is remarkable because they cross the line between the context of marketing data on the one hand and risk management data, credit scoring, on the other hand. And if you remember Oracle's Data Directory: Oracle also offers data from VisualDNA. As you can see, although we've got some large players, we're talking about a landscape of many interlinked commercial databases about people. This is what I call networks of corporate surveillance. For example, have a look at Segment, a company which says: collect all of your customer data and send it anywhere. It provides tools for website and app developers to easily collect data from their users and then automatically send it to more than 150 other companies, from ad networks, data brokers, and analytics providers to CRM systems and even fraud detection services. This is only one third of the services that companies can automatically send their user and customer data to: a nice logo wall. And by the way, this one, Eloqua, also belongs to Oracle. So, networks of corporate surveillance. But which types of data about people do companies collect and trade?
One way to group it would be by how personal data is obtained. First, volunteered data, which is created and explicitly shared by individuals, at least in theory. For example, address information in an online form, or maybe the quiz data from VisualDNA, though I'm not sure the people were really aware of what would happen with that information. Second, observed data. Consumers generate it passively, and often it gets recorded completely without their knowledge, for example when a company tracks the websites we visit. And finally, inferred data, which is based on the analysis of volunteered or observed information. Another way to group it is based on the contents of the data. Companies collect financial information about people, for example about income or credit ratings. Of course, contact data. Demographic attributes like age, gender, or ethnicity. Transactional data, such as purchases and the prices paid. Contractual information, like service details and history, for example with a utility or mobile network provider. And by the way, this categorization is from a data broker's point of view. Information collected also includes location data, not only from mobile devices; behavioral data, like the websites visited or app usage; technical identifiers, such as IP addresses or device IDs; and not least communication contents, like social media posts or email texts, and social relationships, information about your contacts and your friends. To just take one of these: location data is especially popular in corporate surveillance. Even mobile network providers are increasingly selling insights from their location databases. This chart is from an industry report which carefully investigates which ways of utilizing location data could be best for a mobile network provider. Maybe based on apps and GPS location, or on network cell location? What about indoor location, or even using the emergency services location? Look at that.
They even think about selling the emergency location, which would have many advantages. If you look at the green fields: for example, no need to install anything on the device to get the emergency location. Great. Yeah, however, I don't want to explore this in detail. It's also very small, I know. Instead, I want to show you a short part of a nice marketing video by Factual, a company providing a product called Observation Graph. They've got such nice product names. They are combining geographic information and metadata about events and places with mobile location data of users. So here we go. "Factual's Observation Graph technology is a revolutionary way of understanding the real-world behavior of mobile users. With products powered by Observation Graph, you can deliver intelligent mobile experiences. Observation Graph is built on Factual's proprietary global places data, which includes over 90 million local businesses and points of interest in 50 countries and is integrated into leading applications such as Apple Maps, Facebook Places, and Microsoft Bing. Observation Graph also uses demographic data, event data, and other geographic data to fully understand the physical world. All of this data, combined with signals from mobile devices, enables Factual to catalog real-world user behavior. Each day, Observation Graph generates billions of discrete observations globally. Observation Graph powers products that enable advertisers to create highly accurate mobile audiences by describing specific real-world behaviors." OK, enough. User ID 987123, activity: walking. Sounds so innocent, right? What about activity: protesting? I guess the East German Stasi would have been really happy to have a tool like that. As you may know, US police departments have already used data feeds containing location data from marketing analytics companies to track protesters. But location data is just one of many different types of personal data.
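The basic operation behind products like the one in the video, turning raw device location pings into labeled real-world behavior, is essentially a nearest-place lookup against a points-of-interest database. Here is a minimal sketch; the place names, coordinates, and matching radius are all made up for illustration.

```python
# Sketch: label a device's location ping with the nearest known place.
# A pseudonymous device ID plus timestamps plus labeled pings becomes a
# diary of visited places -- "user 987123, activity: at Cafe Mitte".
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

places = [  # tiny stand-in for a 90-million-entry places database
    ("Cafe Mitte", 48.2100, 16.3700),
    ("Gym West",   48.2000, 16.3500),
]

def label_ping(lat, lon, radius_m=100):
    """Return the nearest known place within radius_m, else None."""
    best = min(places, key=lambda p: haversine_m(lat, lon, p[1], p[2]))
    if haversine_m(lat, lon, best[1], best[2]) <= radius_m:
        return best[0]
    return None

print(label_ping(48.21001, 16.37002))  # a ping right next to the cafe
print(label_ping(48.30, 16.50))        # a ping far from any known place
```

Scale this up to billions of pings per day against tens of millions of places and you get the "observations" such products advertise.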
Another way to group the data collected by companies is to distinguish between first-party and third-party data. While first-party data is collected by businesses which have a direct relationship with consumers, for example the shop where you've bought something, your mobile network provider, or the app you've installed on your phone, third-party data is either purchased or licensed from a first party, or collected from publicly available sources. However, as we've seen, third-party data collection can also be invisibly embedded into websites, mobile apps, and other first-party contexts. Finally, data collected by companies can also be grouped into actual and modeled data. Actual data is straightforward information about individuals, for example the postal address, the date of birth, the fact that they've been at a specific location, or that they've bought a specific product. Modeled data, on the other hand, results from drawing inferences about personal attributes or predicted behavior. In marketing, for example, the idea of segmenting people into groups with shared characteristics and likely behaviors dates back to the 1970s. Example segments a data broker sells could be rich posers who will likely buy an expensive car, or old women who will likely donate for homeless animals, or simply valuable customers on the one hand, and waste on the other hand. Data brokers have really used the label "waste" to categorize people in financial difficulties. But while earlier consumer segmentation was mainly based on large-scale information, like census data, or on surveys with small sample sizes, today's segmentation systems can use detailed individual-level information about billions of consumers in real time. Another, somehow related, concept is scoring, which emerged more on the risk side of the personal data business. Credit scoring has been around for decades.
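Modeled data like the segments just described is often produced by simple rules over actual data. Below is a hypothetical sketch: the field names and thresholds are invented, though the "waste" label itself is, as mentioned, one data brokers have really used.

```python
# Sketch of rule-based consumer segmentation: actual data in,
# modeled labels out. All fields and thresholds are invented.
def segment(profile):
    """Assign a consumer profile to marketing segments."""
    labels = []
    if profile["income"] > 80_000 and profile["luxury_purchases"] >= 3:
        labels.append("likely premium car buyer")
    if profile["donations_last_year"] > 0 and profile["age"] >= 60:
        labels.append("likely donor")
    if profile["missed_payments"] >= 2:
        labels.append("waste")  # a label data brokers have really used
    return labels or ["unclassified"]

alice = {"income": 95_000, "luxury_purchases": 4, "age": 41,
         "donations_last_year": 0, "missed_payments": 0}
bob = {"income": 19_000, "luxury_purchases": 0, "age": 55,
       "donations_last_year": 0, "missed_payments": 3}

print(segment(alice))  # modeled, not actual, data about Alice
print(segment(bob))
```

The shift described in the talk is that such rules now run on individual-level data about billions of people in real time, rather than on census averages.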
A credit score is a number which claims to describe your creditworthiness, or to predict a person's future payment behavior, on an individual level. The FICO score, still one of the most important credit scores in the US, is based on the consumer's payment history, the amounts owed, the length of the credit history, and other information. Even though this is a rather conservative mix of data compared to many of today's other scoring machines, credit scoring can still destroy lives. In the US and in other countries, a bad credit score cannot only mean that a person doesn't get access to financial services, but also that this person doesn't get an apartment or a job anymore. So when somebody who is already in financial trouble doesn't get a job anymore, the situation gets even worse. This way, a bad credit score can become a self-fulfilling prophecy. In addition, a credit score may be based on flawed data in someone's credit report, or on flawed prediction algorithms. The latter are typically completely secret. Today's scoring products are often based on a much wider range of data, and they're used in many other contexts, not only in the field of personal finance. Have a look at Trustev, an online fraud detection company. They seem to be rather open about it: on their website, you can see which kinds of data they process to keep fraud out while letting good customers through. What they're doing is a kind of fraud scoring, which can decide which payment and shipping options you get in an online shop, or whether you're accepted as a customer at all. They use many different types of information: phone numbers, email and postal addresses, browser and device fingerprints, credit checks, transaction histories, IP addresses, mobile carrier details, cell location, and much more. Its parent company, TransUnion, one of the three large credit reporting agencies in the US, has data on 1 billion consumers globally, obtained from 90,000 data sources.
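A conventional credit score of the kind described is, at its core, a weighted sum over a few data categories. The category weights below are FICO's publicly stated proportions (35/30/15/10/10); the sub-scores and the mapping to the familiar 300-850 range are invented for illustration and are not the real, secret formula.

```python
# Minimal sketch of a weighted credit score. Category weights follow
# FICO's published proportions; everything else here is invented.
WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed":    0.30,
    "history_length":  0.15,
    "new_credit":      0.10,
    "credit_mix":      0.10,
}

def credit_score(subscores):
    """subscores: category -> value in [0, 1]. Map to a 300-850 range."""
    s = sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)
    return round(300 + 550 * s)

good = {k: 0.9 for k in WEIGHTS}                     # solid in all areas
bad = dict(good, payment_history=0.1, amounts_owed=0.2)  # two weak areas

print(credit_score(good))
print(credit_score(bad))  # heavily weighted categories drag it down
```

Note how opaque even this toy version is from the outside: a person sees only the final number, not the sub-scores or the weights, which is exactly the transparency problem raised here.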
Fraud scoring. Another example of a company which provides scoring based on personal data completely out of context is Cignifi, a US company which calculates credit scores for individuals from phone call records. They explain that they analyze call durations, the times calls are made, who is frequently called, and so on. And they say four weeks of calling history is enough for them to predict someone's creditworthiness. Their partners include large mobile network providers like Telefonica, and Equifax, one of the three large consumer credit reporting agencies in the US. But how does Cignifi calculate credit scores from phone call records? The answer is: we don't know. But maybe they use similar methods as in the following academic study. Researchers found that they were able to predict someone's personality just from smartphone metadata like call dates, frequencies, and durations. They used offline personality questionnaires, recorded the participants' phone data, and found statistical correlations. The prediction accuracy is far from perfect, but still significantly above chance, one could say. So maybe companies like Cignifi just use similar methods. We simply don't know. So what is this all about? It's all about data mining, involving methods from mathematics and statistics to machine learning. Machine learning algorithms learn from existing data. They get trained to find correlations in large data sets, and they're able to find patterns and connections between variables where human beings would have to give up. Another example in credit scoring: this is ZestFinance, a company founded by Google's former chief information officer. They're combining thousands of data elements to calculate credit scores about consumers, with data ranging from how people use smartphones and social networks to the spelling somebody uses in an online loan application form.
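The statistical core of studies like the one mentioned is straightforward: correlate an observed metadata feature with a questionnaire trait score. A self-contained sketch with made-up numbers (real studies test many features across many participants and correct for multiple comparisons):

```python
# Sketch: Pearson correlation between one phone-metadata feature and a
# personality questionnaire score. All data below is synthetic.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

calls_per_day = [2, 5, 9, 4, 11, 1, 7, 8]                 # observed metadata
extraversion  = [2.1, 3.0, 4.2, 2.8, 4.5, 1.9, 3.6, 4.0]  # questionnaire

r = pearson(calls_per_day, extraversion)
print(round(r, 2))  # strong correlation in this made-up sample
```

Finding such correlations in training data is exactly what lets a company later run the prediction in reverse: observe the metadata, infer the trait, no questionnaire needed.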
In 2016, ZestFinance announced a partnership with Baidu, China's largest search engine, like Google in China. ZestFinance says that Baidu's rich user search data will be valuable for loan underwriting and assessing credit risk. On their website, they state: we believe that more data is always better. And the founder once said: all data is credit data, we just don't know how to use it yet. All data. There's lots of data about us out there. Today, virtually everything we do is recorded, monitored, or tracked in some way, and all kinds of collected data end up in huge clusters of databases. Data mining technologies help to find the relevant information in these massive amounts of data. And then there's one US company which is always mentioned when it comes to consumer data brokers. It's called Acxiom. Who has heard of Acxiom already? So, quite some, okay. Acxiom is one of the largest of these companies and says it has up to 3,000 attributes on 700 million people. For example, they have credit history, driving history, criminal history, residential history, employment history, education history, information about income, purchase behavior, and so on. They don't collect information about people's illnesses, but about their "health interests". By the way, this is a marketing video from Acxiom. "The world of information has sought enlightenment." I think we don't need the sound. What I really like is how they use the X in the name. Now it's, yeah, it's appearing. And now, just wait a few seconds. It's too slow. Target it. Oh my God. Acxiom is a kind of old-school data broker. They started 40 years ago or longer by sending personalized letters for the Democratic Party in the US during elections. Later, they also sold voter profiles to the Republicans. And they became a consumer data giant for all business fields. Since 2014, they've also had data partnerships with Google, Facebook, and Twitter. Another large data broker is LexisNexis Risk Solutions, also a very nice name.
Coming more from the risk side of the personal data business, they've got a similar number of profiles on consumers and a really impressive range of offers on their website. They're not only selling data on "problem renters", they also offer their "employment screening enterprise edition". And they've got offers for healthcare. Here they say: social network analytics reveal hidden relationships. And they write somewhere on their website: we help predict the likelihood that a consumer will become delinquent in the next 80 months. And of course, they also have offers for governmental agencies and law enforcement. Look at this nice black helicopter, or whatever this is. But at the same time, they also provide marketing solutions. And this is a general trend: the same data and the same analytics technology is more and more used in completely different contexts, from marketing to banking, insurance, and even law enforcement. Take Palantir, the Silicon Valley data mining company founded by Peter Thiel, the first Facebook investor, co-founder of PayPal, and currently a member of Donald Trump's transition team. Palantir provides products for companies in the fields of healthcare, insurance, and finance, a kind of big data analytics. The software is based on PayPal's fraud detection algorithms, and they've got partnerships with the German business software provider SAP, the US Department of Defense, and the CIA. Or take SCL Group, a company that sees itself as working at the forefront of behavioral change. They provide data-driven marketing for commercial purposes, but also "information operations" for defense and intelligence. And at the same time, they see themselves as a global election management company. SCL Group's US branch is called Cambridge Analytica; you've probably heard of them. They claim to have a national database of 220 million US citizens, containing 5,000 data points about every person.
According to the Guardian, the company has also harvested data on millions of Facebook users. But what are they doing with this data? They sort people into different categories, for example along their political views on issues like pro-life, the environment, gun rights, national security, or immigration, in order to target and address people differently. This way they can communicate different messages to different small groups of people, according to their political views and based on vast amounts of personal information, on an individual level. And by the way, Cambridge Analytica also contributed to the Brexit campaign and to Donald Trump's election campaign. And Peter Thiel is on the board of Cambridge Analytica. Okay, voter targeting. There are many other companies like that, also for the Democratic Party. And there are many other fields where personal data analytics is being used. For example, GNS Healthcare, a US company calculating individual health risks from a wide range of patient data, including genomics, medical records, lab data, mobile health devices, and also consumer behavior. They offer to identify people likely or unlikely to participate in interventions, to predict the progression of illnesses and intervention outcomes, and to rank patients by how much return on investment the insurer can expect if it targets them with particular interventions. And maybe to just exclude the hopeless cases? I'm sure not. Okay, we had healthcare, banking, insurance, election campaigns, law enforcement. Not every field I mentioned is already completely connected to the consumer data ecosystem, but they're working on it. And what if our data were "only" used for marketing? Based on my research, marketing has been and still is the major driver of pervasive corporate surveillance. In 2007, Apple introduced the iPhone, and Facebook had just 30 million users. Also in 2007, online advertisers started to use individual-level data to profile and target users.
Four years later, there were around 100 relevant companies in the field of so-called marketing technology, or ad tech, which are largely based on personal data collection and profiling. In 2012, there were already 350 companies, then 1,000, then 2,000. Now, in 2016, we've got nearly 4,000 relevant companies in marketing technology. The logos are quite small here. So today, less than 10 years after 2007, we've got thousands of online platforms, ad servers, app developers, analytics companies, data brokers, and many other kinds of companies which are constantly tracking, profiling, categorizing, rating, and scoring us in real time. Have a look at TellApart, a so-called predictive marketing platform. Their slogan is "turn our silence into your sales". They claim to provide a so-called TellApart Identity Network, which incorporates massive amounts of data from both online and offline sources to create a TellApart Identity Key for each individual shopper. And then they calculate a customer value score for each shopper-and-product combination, combining the likelihood to purchase, the predicted order size, and the customer lifetime value: a kind of score of how profitable, valuable, or non-valuable a customer is. As a result, some customers get personalized offers based on their online and even offline behavior. Last year, TellApart was acquired by Twitter for about $500 million. Personalized pricing based on real-time customer value scores and the like could soon be everywhere. Large global online shops already show differently priced products to different users, or even the same products at different prices, based on people's online behavior, location data, or the devices they use. A research paper from 2012 already showed personalized prices differing by up to 166%. The problem is that it's difficult, if not impossible, to prove price discrimination based on individual attributes or user behavior. For individuals, it's completely non-transparent.
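A "customer value score" of the kind just described can be sketched as an expected value per shopper which then drives the price that shopper is shown. The formula and all numbers below are invented to illustrate the mechanism, not TellApart's actual model.

```python
# Sketch: a customer value score feeding a personalized price.
# Formula, factors, and thresholds are invented for illustration.
def customer_value_score(p_purchase, avg_order, lifetime_orders):
    """Rough expected value of targeting this shopper."""
    return p_purchase * avg_order * (1 + 0.1 * lifetime_orders)

def personalized_price(base_price, score, threshold=50.0):
    """High-value shoppers get a discount to close the sale;
    everyone else sees the full base price."""
    factor = 0.85 if score >= threshold else 1.0
    return round(base_price * factor, 2)

loyal = customer_value_score(0.6, 120.0, 8)   # frequent big spender
casual = customer_value_score(0.1, 30.0, 0)   # one-off visitor

print(personalized_price(100.0, loyal))   # discounted
print(personalized_price(100.0, casual))  # full price
```

The transparency problem follows directly: each shopper only ever sees one price, so nobody can tell from the outside that the scores, or the prices, differ.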
And maybe personalized pricing will soon not only happen in online shops or on travel websites, but could appear in the form of many smaller or bigger differences in the offers, discounts, and personalized communication consumers get from companies in general. It clearly won't be a problem for somebody to miss a single ad or discount; often we're even happy to miss those. But it could be a problem on a structural level. Imagine someone experiences many of these small disadvantages every day, perhaps without being aware of it. Michael Fertik, a US privacy advocate, said: the rich already see a different Internet than the poor, based on personalization, based on digital records of their lives, on a much more general level. Personalization based on our data is now nearly everywhere, and often it's a good thing, but it's non-transparent, and it bears the risk of reinforcing discrimination, or even increasing it. And the more invasive the data sharing between companies and the decontextualized usage of the recorded information gets, the more opaque and non-transparent the whole system becomes. A key concern for me is that data companies are increasingly using unique identifiers across different companies, devices, and contexts, which they pretend to be anonymous, mainly identifiers derived from email addresses and phone numbers. This way a person can be recognized again as the same person as soon as he or she clicks, swipes, buys, or does some other recorded interaction, anytime, across networks of data companies. And in fact, those identifiers are not anonymous at all. They're pseudonyms. Facebook also uses those pseudonymous identifiers and calls them "de-identified", I think. In addition, we've got the identifiers from Google, Apple, and Microsoft on mobile devices and so on, which are more and more replacing the hardware device IDs. And several companies have introduced their own persistent unique identifiers for people.
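The "pseudonymous but not anonymous" point can be demonstrated directly: hashing a normalized email address yields an identifier that different companies can derive independently, and therefore use to link their records about the same person without ever exchanging names. A minimal sketch, with the two company databases invented for illustration:

```python
# Sketch: email-derived "pseudonymous" identifiers still link records
# about the same person across unrelated databases.
import hashlib

def match_key(email):
    """Derive a shared pseudonymous identifier from an email address."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Two companies, two databases, no names exchanged -- only hashes.
shop_db = {match_key("Jane.Doe@example.com"): {"purchases": ["diapers"]}}
news_db = {match_key("jane.doe@example.com "): {"articles": ["parenting"]}}

# The records still link up: hashing is matching, not anonymization.
linked = {k: {**shop_db[k], **news_db[k]} for k in shop_db if k in news_db}
print(linked)
```

Because every party that knows the email address can recompute the same key, the "de-identified" profile follows the person across networks of companies, which is exactly the concern raised above.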
For example, Oracle, Acxiom, Verizon, Experian, and many others, to match and link names, email addresses, postal addresses, phone numbers, cookie IDs, device IDs, set-top box IDs, and many more across data providers and client companies. And both Oracle and Acxiom run so-called data management platforms. A data management platform is a kind of real-time online data marketplace. It acts as a central hub used to aggregate, integrate, manage, and deploy disparate sources of data, as a consulting company summarized it. A DMP offers clients to import data, for example their customer relationship database, email addresses, purchases, and so on, then match customer IDs and collect new data by putting tags on their websites, in their email newsletters, and so on. They offer access to other data vendors, sometimes to many other data sources, and they offer to analyze and categorize people. The marketing guys call these segments or audiences. I would say it's categorizing people. They also offer to find people similar to a company's customers, so-called look-alikes, and they offer to send instructions on who to target, on which device, and how to personalize content, or messages, or ads, or websites, or whatever. And sometimes, DMPs also allow companies to categorize and enrich their own customer databases. This way, companies can sort their customers into valuable and non-valuable customers, and into risky and non-risky people, and so on. Examples of companies running DMPs include Acxiom and Oracle, but also Salesforce, maybe somebody has heard of it, the CRM vendor, and Adobe also runs a data management platform. Another example is Lotame, which provides access to 3 billion cookies and 2 billion mobile device IDs. And by the way, they also tracked me when I was visiting those five websites I showed at the beginning of my presentation. It's a very interesting domain name, crwdcntrl.net. Wonderful. But Lotame explains on its website: it's your data.
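The matching step a DMP performs can be sketched like this: two hypothetical data sources, a client's own customer database and a third-party vendor's segment list, are joined on the same derived key. The data, field names, and addresses are all invented for illustration; real DMPs operate at the scale of billions of records.

```python
import hashlib

def key(email: str) -> str:
    # Same normalize-and-hash recipe on both sides makes the join possible.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# A client's imported CRM export (first-party data)...
crm = {
    key("alice@example.com"): {"purchases": 12},
    key("bob@example.com"): {"purchases": 1},
}

# ...and a data vendor's audience segments (third-party data).
vendor = {
    key("alice@example.com"): {"segments": ["frequent traveller"]},
}

# The "match" step: merge every CRM record with whatever the vendor
# knows about the same key. This is how a customer database gets
# "enriched" with outside data.
enriched = {k: {**crm[k], **vendor.get(k, {})} for k in crm}

# Alice's record now combines her purchase history with the vendor's
# segment; Bob's record stays unenriched.
print(enriched[key("alice@example.com")])
```

Once records are merged like this, sorting people into "valuable" and "non-valuable" buckets is just a filter over the enriched table.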
You have the right to control it, share it, and use it how you see fit. Sounds nice, right? But of course, they don't speak to consumers. They speak to their corporate clients. I really love that. I think it's a good representation of today's information technology landscape. While we as individuals became more and more transparent, practices of corporate surveillance remain largely obscure. Today, each of our interactions is monitored, analyzed, and assessed by a network of machines and software algorithms operated by companies we've rarely ever heard of. It's not balanced at all. Without our knowledge, and often without our consent, our interests, weaknesses, illnesses, successes, secrets, and purchasing power are weighed, and companies increasingly use the collected data about our lives to make decisions on us. From which ads and products we see, which discounts and prices we get, how long we have to wait when calling a phone hotline, and which payment methods we are offered, to massive decisions in the fields of personal finance, insurance, housing, employment, and healthcare. While we as individuals and human beings, and as a society, mostly don't even know who is tracking us, how our data is being used, and how this could impact our future lives, the companies are in control of our data. So what has to be done? One option could look like this. No. I love information technology. During the last few years, I tried to find out how to beat this challenge. My first try looked like that. I was asking: worried about your privacy? Forget it. Turn the tables and find out all the dirty details about your friends, your neighbors, and the rest of the world. Together with a small team from Vienna, I created Data Dealer, an online game, a browser game, and even a Facebook game about collecting and selling personal information. You can still play it online. And I promise that we won't sell your data. Trust me. Since then, I did lots of research about today's personal data ecosystem.
I wrote newspaper articles, contributed to documentaries about digital tracking, and recently I published an extensive report about these issues. It's called Networks of Control, a report on corporate surveillance, digital tracking, big data, and privacy. You can download a PDF version for free, but also buy it as a printed book. I wrote it together with Sarah Spiekermann, a privacy researcher and university professor, also from Vienna. In our report, we basically tried to explain how today's companies are collecting, analyzing, and selling information about our lives, on 160 pages with 900 references. I've been working on that for a long time without any budget. So if you want to help, please read it, spread it, or even write about it. It was published in October, but I guess it could still use some more attention. Yeah, and of course, I'll continue with my research on these issues, and we'll soon start with another project. I'm working on a prototype for an online research platform called Tracking the Trackers. It should become a comprehensive online knowledge base about the topics I've been talking about today. The main target groups for this will include academics, journalists, activists, and policy makers. So it's planned to be a research tool for expert stakeholders. In the medium term, it could also evolve into a collaborative community platform. We'll see. If you're interested in this, and anyhow, if you're interested in collaboration, please say hello to me after my talk. Okay, this is what I will do, but I still didn't really answer the question of what has to be done. In general, I think what we should not do is things like blaming people because they're using Facebook, telling them they're guilty and it's their own fault that their personal information is being abused, and so on. Of course, everybody should use alternatives to the dominant services and apps, and browser extensions to avoid some of the most invasive tracking, and so on.
But I think there is no easy way to completely opt out of today's surveillance economy on an individual level without opting out of too much of modern life. For example, if a non-technical person wants to use a normal state-of-the-art phone today, this person has no other choice than choosing between using it with a Google, Apple, or Microsoft account. It's a disaster. It's not an individual problem. We have to solve this on a societal level, I think. And that's why I'd like to present a quick summary of basic policy recommendations which resulted from my research. I think the most urgent challenge is to make corporate data collection and utilization more transparent. This could happen by supporting research, also by developing technical tools to examine the black boxes from the outside, but of course also by regulation, which quickly leads me to the never-ending story of the European data protection reform. I hope the European data protection regulation, the GDPR, and the ePrivacy Directive will make things at least a bit better from 2018, if not much better. We'll have to carefully watch how it will work on a practical level, and if necessary, we'll have to further update it. In addition, I think that other fields of law, such as consumer protection, anti-discrimination, and also competition law, could help to rebalance the current information technology landscape. But even if we get the best regulation, I'm afraid that this won't be enough. At the moment, companies are not only in control of our personal data, they're even trying to shape our future information society as a whole. And during the last 10 years, they were successful, without much democratic debate or discussion or whatever. They've got billions and billions in financial resources and they're moving ahead very fast.
I think we need much more support for decentralized, privacy-aware technology, or maybe even a completely new industrial policy and billions for open source components and frameworks which help create a different kind of innovation, one which respects our privacy. For example, on the European Union level. And not least, there is already a large group of people committed to this kind of a different internet, which is not dominated by corporations building centralized services. For example, here at the Congress. That's why I think it's also crucial to make digital civil society much stronger. I mean, many organizations and individuals doing amazing work are fighting hard to get some 50,000 euros a year. That's not clever from a societal point of view, I would say. And yes, we need a much better level of digital literacy. We need well-informed citizens for a democratic future information society, and I'm not talking about just knowing how to use Microsoft Word. It's about better knowledge of what the digital age really means for us as individuals and as a society. Besides this, I think the worst thing that could happen would be if we get desperate or even cynical. Like: the NSA is collecting every piece of data anyway and everybody is using Google, so fuck it, why should we care? Actually, quite the opposite. And that's why I'd like to finish my talk, like always, with a major recommendation by a major guy. Google's Eric Schmidt said in 2013: you have to fight for your privacy or you will lose it. Friendly advice or a serious threat? I don't know. Thanks a lot. Thank you so much. We do have time for questions. So if you do have questions, please go to one of the four microphones. For those of you who are leaving the room right now, please do it as quietly as possible so we don't get interrupted or disrupted in the rest of this talk. Let's just take a moment. Okay, we have a question on the microphone to my left.
So, do you know of any studies on how these tracking technologies cause self-censorship? Could you come nearer? Can you speak into the microphone? Yeah, yeah. So, do you know how these tracking technologies influence self-censorship? How people start to change their behavior online and restrict their communications in order not to leak their private worldview to trackers? Yes, I didn't talk about that now, but this is a crucial point, this kind of chilling effect. When you know that you're constantly being watched, then you will behave differently. We've known that for a long time, there are many studies about that. And I think the crucial point today is that in many cases, we really don't know. For example, we see some ads and we think: was that because of that interaction, that website I visited, or was it not? Did they track my location? Didn't they? So I think this really leads to an uncomfortable feeling, and in general, the situation is a disaster, because we know that governmental surveillance, which is also accessing the corporate databases, is omnipresent. So yeah, I think this is a crucial point, and I also address that a bit more in detail in my report. We have a question from the internet. Yes, there is the rather cynical point of how they categorize people who are trying to hide their data. They're mostly white, male, earning above average. Is that not just an extra data point? Again, sorry, people who are hiding their data? Can you repeat the question? People who are trying to opt out of the cookies and everything. Yes, I think people who don't participate are sometimes considered suspicious from the beginning, by the marketing companies, by the fraud detection companies, and by other companies. And I think this is also a problem. So we don't really have the choice to just solve this on an individual level, to just use encryption and browser extensions and digital self-defense and so on.
That's why I'm saying that we really need a societal solution for this. And then I take a question from the microphone in the back on my left. It's you, yeah. Do you have any individual tools to suggest, like Ghostery or StartPage, stuff like that, that will help? Ghostery is a tool that tries to block known trackers, and StartPage is an alternative search engine that does a search through Google but claims to hide your personal data. That's the first part of the question, and the second is: do you really trust governments, that governments could actually impose rules on how data is used? Is it what? That governments with laws could actually provide a good framework. Yeah, the first question: of course, I would strongly recommend using StartPage or other search engines which are not based on individual profiling and tracking. But it's getting a bit more difficult when it comes to browser extensions like Ghostery, because they started to participate in the tracking ecosystem themselves. So I'm not sure if we can trust companies like Ghostery; I think we cannot. Also, Adblock Plus is the same story. So it's really difficult. Currently we have uBlock Origin and Privacy Badger from the EFF, which are working quite well, but they are not the solution for the whole problem. We should use them, it's good, but they are not the solution. The second thing is, I don't know. I know many people don't trust governments, and I also don't trust governments, but if we're talking about governmental surveillance, law enforcement, and intelligence authorities on the one hand, on the other hand we still have something like a democratic parliamentary system with a balance of power. And there are different interest groups in governmental sectors, and also at the European level.
So the people who are fighting for a better data protection regulation are not the same people who are running the law enforcement and intelligence agencies. So I think, yes, it's important to use the regulation and law approach to address all these problems. We won't make it without it. Okay, time is up, unfortunately. For those who still have questions, I'm sure you can reach Wolfie afterwards. Yes, thank you so much, Wolfie.