OK, well, welcome, everyone. Thank you very much for coming. This is a lecture organized by the MA in International Studies on Media, Power and Difference. As most of you know, some of you are coming because you have seen the announcement. We are very happy to introduce you to Robin Mansell. Robin Mansell is a very well-known scholar in the communication field everywhere. She has been working on media and communication studies from different perspectives for decades. She has published very meaningful and relevant books. And, well, just for those of you who are not very familiar with her: she is currently Professor of New Media and the Internet in the Department of Media and Communications at the London School of Economics and Political Science. She has also been head of the department at that same university. She has served as president of one of the most important international communication and media organizations, the International Association for Media and Communication Research, IAMCR. She was also scientific chair of the European Communications Policy Research Conference from 2008 to 2014. And she is currently a member of the board of directors of the TPRC, which is the annual research conference on communications, information and internet policy. She has worked from several perspectives, as I have said: from political economy, from media policy perspectives, and mostly focused in the last decade or more, I would say, on the social challenges of new media, which is the chair she currently holds at the LSE. And she is internationally known for her work on the social, economic and political issues arising from these challenges, the new challenges of the new information era, if you want to call it that. She has written many papers and several important books. For instance, the latest ones are Imagining the Internet: Communication, Innovation and Governance, published by Oxford University Press in 2012. 
Also the International Encyclopedia of Digital Communication and Society, published by Wiley-Blackwell in 2015. And a very well-known handbook, also by Wiley-Blackwell, the Handbook of Global Media and Communication Policy, one of those handbooks that serve as a reference for many, many scholars. And she is here today to talk about inequality, social justice and digitally mediated communication: about the contradictions, the challenges, the consequences of all that. She is going to talk for around half an hour or 40 minutes, and then there will be a round of questions for all of you to take advantage of having her here. So thank you very much, Robin. And thank you all for coming.

It's really nice to be here with cold weather and blue sky. As you can tell from my accent, I'm not British, I'm Canadian, so I can do cold. It's OK. So how did this talk come about? It started because, with all of the discussion around Brexit and Trump, people were asking our department: what does digitally mediated communication contribute to social inequality? And what do we say? Oh, that's a complex relationship. And so I decided to put together a talk, and you are actually the first recipients of this particular talk. It really is trying to grapple with that huge question, and obviously I can't deal with all of it. But I'll see how we go. I suppose if you think about inequality in the digital world, just about all of you will agree that there is inequality present. But the curious thing is that there are huge differences between different kinds of researchers, and also citizens, about whether the social and economic inequalities that we see all around us, and that are related to information and communication technologies, are temporary and we will get past them, or whether this is a long-term structural problem that we face. And what's even more interesting is that there are many instrumental researchers who ask: how does this happen? How do algorithmic platforms work? 
And then there are critical researchers who ask: why are they the way they are? And how does that contribute to inequality? So I'm going to compare and contrast some of those perspectives on this very big question about inequality and the digital world. I'm also going to talk about what I see as a very big contradiction that is more and more something we have to grapple with. And that contradiction, which we'll talk about in more depth, is very much about the relationship between control and autonomy of human beings in relation to their technologies, in this case digital technology. So that's basically what I'm going to do, and by way of an outline: I'm going to talk a little bit about history, some of which will be very familiar to you. What do we know about the relationship between the social, the economic and technology? A little bit about differentiating instrumental and critical approaches. And then, how do the policy people, people in Brussels, people in national capitals, hear these different research traditions? What do they take away from them? And then I'll talk about the contradiction, and then I'll reflect a little bit on what all of this means. So what do we know about the history of technology and power relations, or social relations? In a nutshell, one thing we know is that historically technologies have always been an instrument of power, and that those power relations, historically and now, tend to be hidden and non-transparent within the social system. Think today of Facebook or Google. In the media and communication field particularly, and in my tradition in my department, we had Roger Silverstone, who used to say that mediated connection is basically the dominant infrastructure of our lives, and this is becoming true across the world. 
And then we have, of course (I'm sure those of you who are students study him), Castells, who has made statements like: the communication process decisively mediates power relationships, social practice and political practice. So if that's the history of these debates, if you go back over the last 20 or 30 years of debates around technology, consider what some of the policy makers had to say. If you go back to Dag Hammarskjöld and the report produced in his memory some years ago, it said: producing technology means producing instruments of control and influence. The question of who controls technology is central to who controls development. It called for another development, not the mainstream view of development: self-reliance and a needs-based policy. So historically, some elements of the policy world have been able to understand that power matters in terms of technology and its development. But on the issue of inequality and digital mediation in particular, as we'll see, the dominant research tradition has been very instrumental and has been very much about digital divides. So I'm just gonna refresh your memory; I'm sure you'll be familiar with digital divides. The first digital divide, which is everywhere in the literature, is an access divide: whether people have access to broadband technologies. And you can go to almost any report which looks at access and find maps like this, which show lower-income developing countries falling behind wealthier countries. And people continue to generate statistics on whether or not those behind are catching up. So the first divide is an access divide, and it looks very much like this if we think about it in terms of inequality. Economists basically crunch numbers to find that, in fact, alongside global digital investment, poverty has actually been declining. So they take this as a good thing, yes? Eventually we're gonna close the divide. Sorry. 
In the first-divide literature, you find texts like this. This is just a quote from a report that was prepared for this year's World Economic Forum. And the critical point to take away from it is that, no matter what the divide looks like in terms of access, the mainstream view is that eventually digital technologies will help to reduce poverty and improve the well-being of citizens everywhere. So that's the main message coming from the first-divide literature. The second digital divide is all about skills, literacy and use. And there are large numbers of studies in this area which have been very helpful in pointing to the fact that we need to invest substantially in skills, ensuring that eventually, even within countries, some of these gaps will start to be reduced. Additionally, in this area, people have been doing work on what has started to be called the third digital divide, which is about differential life chances and outcomes. What that means is that large-scale surveys are done asking people, depending on their skill level, whether they actually think IT, digital online activity, social media, is making a positive difference in their lives. And in the studies that were done by colleagues of mine at the LSE, Ellen Helsper and others, the answer seems to be, on balance, yes. So people want more access and more skills to be able to engage online. In this context, it's all about outcomes. The approach of just thinking about it in terms of what functions you can perform if you have better IT access is complemented by studies like those of Manuel Castells, who asked not just what functions you can perform if you go online, but also whether better skills and access contribute to people's sense of human dignity. And he finds that, once again, there are big gaps in terms of whether or not people feel as if their dignity has been enhanced. 
And of course, some people talk about other kinds of divides, like divides between generations, genders, the disabled and the not disabled. So the overall message from much of the literature on digital divides is, not terribly surprisingly, that they matter, but that the relationships are complex. So in terms of instrumental researchers and some critical researchers, so far what we've been able to do in understanding the relationships between digital technologies and societies is to come up with the idea that these relationships are complex. I don't think any of you would disagree with that. Bridgette Wessels says very simply: the general circumstance of an individual's life is a prerequisite for being able to utilize the potential of digital technology. So we've had to invest an awful lot of money to come up with that conclusion. It doesn't really tell us very much about what the future is gonna look like, or how social and economic inequality is related to it. So if that's the basic result of decades of research on digital inequality, what do the policy people actually hear? One thing they hear is that connecting the unconnected is always a good thing: always a good thing for individuals, for groups within society, and for society as a whole. The relationship between digital technologies and social inequality might be complicated, complex, but the policy makers mainly focus on speeding up the rate of investment and the upgrading of skills. They always talk about the rate of investment in these things. They say also that we need to do this to increase European competitiveness and economic growth. They say that access and skills gaps have to be closed. They say also that any risks of doing that, like surveillance, privacy infringement and growing unemployment, must be addressed, but that basically the technological juggernaut, to use Christopher Freeman's term, needs to just roll on and produce the next generation of digital technology. 
In Europe and elsewhere, this focus is mainly seen in three main themes. You can pick up almost any document from the EU on the digital environment: it's on the economy, on broadband connectivity, and on access and skills. On the economy, there are some bullet points which cover basically what the digital strategy in Europe is mainly about. It's about competition policy, like the ongoing, long-running Google case that may or may not succeed. It's about removing barriers to access across the different member states of the European Union, so that geo-blocking is seen as a problem for audio-visual content; a document I read from this past year says that only 4% of online content is actually available across member states. There are efforts to boost broadband infrastructure to higher and higher bandwidth, especially for rural areas. And there are efforts to deal with the lack of transparency in the way consumer information is used in algorithms. We have the new General Data Protection Regulation from April last year, under which all European companies will have to gain user consent to use your personal data. But we had a seminar at the LSE last week with industry representatives there. And one of them, a very well-informed industry representative, said that in his estimation it's probable that 80% of companies have never even heard of the regulation, and probably never will, and that even fewer have the resources to actually implement it. This kind of legislation, when it was finally introduced in the US last year, was met by advertisers and some online platform operators with, and this is a quote, "unprecedented, misguided, counterproductive, and potentially extremely harmful". So there's always resistance to this kind of legislation. Sometimes policy aims to support content, like levying fees to support the press industry, which hasn't actually happened yet. And sometimes efforts are made to address the tax base. 
So you see cases like in Ireland, where the EU is going after Google or Apple. So, in summary, a lot of these interventions aim to deal with the rate of investment, which was my point earlier. Often, even when the discourse talks about empowering consumers or citizens, the main aim is to boost the rate of investment and competitiveness. There might be power asymmetries between companies and digital users, but the claim is that by changing company incentives we will eventually have a fairer, less unequal world of digitally mediated communication. That's the claim. The reality, especially in the economic sphere, is that these kinds of approaches only tinker, play, with the direction of technological change. The great hope is that the next generation of technology will close the gaps. We may have gaps now, but it'll be the next generation of technology that comes along and basically fixes them. So right now, what is the technology that has everybody most hopeful? 5G wireless. It's the next technology which is going to fix the gaps. The focus is not on who will be left behind, or on whether the digitally mediated world is consistent with social justice, human flourishing, the good society, or whatever you want to call it. On the skills side, the policy focuses on upskilling. Around 90% of jobs in the EU are forecast to require digital skills by 2025. There's a skills agenda for Europe from last summer which says the highest-priority skills are what? Computer science, nanotechnology, artificial intelligence and robotics. Sometimes they include transversal skills like teamwork, creative thinking and problem solving. But it is STEM, science, technology, engineering and mathematics, which are the privileged skills where all the investment is going; occasionally arts and design is included. But basically the aim is to reduce barriers to growth in areas like big data analytics, data-driven science and the internet of things. 
Why so much emphasis on this top layer of skills? Because 415 billion euros is supposed to be contributed to European Union GDP from these areas. The goal is to increase choice for consumers and to create new sources of employment. The overwhelming focus is on R&D, faster innovation and faster growth, the rate of investment. Excuse me. So even when the policy discourse talks about disruptive technologies, technological unemployment, the need for greater social justice, the focus is on gap filling and digital divides in the short term. Not in the medium and long term; in the short term. The idea that these inequalities and gaps are actually persistent and long-term is something that very few policy makers particularly want to talk about. This is a quote from a study that was done on some of the data on unemployment across Europe that's related to digital technologies. They find that it'll take 60 years for Europe's lagging regions to close even half of the current gap in high-tech employment based on current technologies, let alone the technologies that are just around the corner. Why is it that long? It's that long because lagging regions and populations do not catch up. They just fall further behind the leading countries as each new generation of technology comes along. We know that critical scholars, and one of them is a fellow named Keen, do talk about the persistent gaps between the rich and the poor. And we know that inequality is a flash point these days, particularly for right-wing populism, anti-immigration sentiments and much more. We also know that economists like Atkinson, who is a critical economist, have shown that even if full employment could be guaranteed, just imagine that you've managed to get full employment in Spain, for example, or in the Catalan region, even then the structural shift towards highly skilled labor is likely to result in a more, not a less, unequal distribution of income. 
In other words, even if we could have a better world in terms of employment, the long-term trends are towards more and more unequal distributions of income. Why is this the case? It's because of an increasing concentration of profits from technology and trade in a smaller and smaller number of companies, think about the big platform companies; steeper pay hierarchies, with the highly skilled being paid a lot and others not so much; and new forms of contracting, like zero-hours contracts, all of which are happening as digital technologies become more pervasive and more advanced. So these are developments which fall outside the instrumental researchers' narrow focus on fixing digital divides and investing faster in digital technologies. They just don't ask these questions; they just ask how things are happening at the moment. In the popular press, there is a lot of talk about jobs and automation; you can't miss it these days. In some academic work, it is recognized that the direction of digital technology innovation is resulting in a skewed income distribution, especially when you hear about machines replacing workers. Some estimates put employment at risk of computerization, in the form of sensors, the internet of things, algorithms, machine learning and robotics, all of which sometimes bring us good things, at nearly 50% in the not too distant future: 50% over the next three decades, not over the next hundred years. But still the instrumental researchers' assumption is that in the long run digital technologies will bring far more benefits than losses. So I don't think I need to remind you that people, we, you, me, do not live in the economist's long run. We can't wait for the long run, which never comes. We live in the here and now. And when deep social and economic inequalities persist, they undermine societal commitments to democracy, the capacity of people to generate an income, and not only that, but also to live a decent life. 
And this has to be at odds with social justice. Of course there are some critical economists, like Joseph Stiglitz, who call for progressive income and wealth taxes and for strengthening the social safety net, or, in Europe, stopping the social safety net from disappearing. But here's the worrying thing. Even these analysts, even Stiglitz, tend to think there's a natural direction to technological change. Sort of a feeling that what will be, will be; we need to adjust to whatever comes from digital technologies. And to use the philosopher Charles Taylor's term, what today's dominant imaginary helps people to believe is that the more digital technology we have, and the more advanced that technology is, the better a life we will eventually be able to live. So let me shift gears a bit and talk about what happens with all this obsessing about the neoclassical approach to the economy. What are the consequences of this obsession with the rate of investment in digital technologies? The logic is that social and economic inclusion and the empowerment of citizens are just around the corner, with the market as the driver, maybe. Thinking historically again, the Canadian political economist Harold Innis warned about this obsession in the 1950s: we need to appraise its limitations. That was a strong criticism from him of the obsessive focus on economic growth in his time, the 1940s and 50s. What about in our time? In our time, this obsession links with the idea that proliferating digital platforms bring us good things: they optimize consumer choice. This obsession makes it seem as if there are no power asymmetries, and that if there are some, they can be addressed by tinkering around with industry incentives. Policy is all about the rate of investment, moderated, of course, by some attention to the common good. 
Jeffrey Sachs, who's the director of the Earth Institute at Columbia University, put it this way in a recent report that just went to the World Economic Forum. So, yes, he recognizes that technology itself is never a solution, and he thinks it needs to be properly deployed and combined with a will towards the common good. So perhaps we need to think a little bit about what people imagine the common good to be. The common good: less inequality, less poverty, greater human flourishing. But the report, and it's a long one, says nothing about the direction of technological change. People are very reluctant to talk about the long-term direction. Of course digital technologies bring benefits, so don't get me wrong; I'm not somebody who says let's not have any digital technology. There are lots and lots of benefits: in the health sector, for education, for financial services, for smart agriculture, for global monitoring of the environment. But my point is that the direction of technological change is just taken for granted. It's seen as either inevitable or good. So it seems to me that critical scholarship needs to take much more time to think about the relationship between humans and their machines, and about control and accountability, or authority. And it seems to me that the more we see an increasing rate of investment in these technologies, combined with basically tinkering policies, the more we are creating a world in which the more digitally mediated benefits we have, the fewer opportunities there are for humans to exercise control and authority. It seems as if this is a fundamental contradiction. For scientists, for government representatives, corporate decision makers, consumers or citizens, the dominant imaginary, if you like, gives us one choice. That choice is to adapt to these technologies as they advance. 
But what if the risks to human beings of the current direction are so great that the whole notion of being human starts to change and be destroyed? You might think I'm exaggerating, but here's a view from a very respected scientist and philosopher; his name is Dennett, and he's been around for quite a while. He says: the goal is an all-powerful executive homunculus whose duties require almost godlike omniscience. That is his view of the near-term future of artificial intelligence. Now you may say I'm exaggerating, but basically that is the direction: more and more control and authority in the hands of what we today call algorithms, data science. So the principal contradiction we need to be concerned about is between human and machine control over who gets what incomes, or what kinds of justice. Advanced robotics and machine learning are supporting data pattern recognition, as you will all know, and problem solving of all kinds. And this is consistent with the all-powerful technological inevitability vision. It used to be science fiction; it's not necessarily science fiction any longer. This kind of contradiction goes beyond the technical developments being promoted by Google or Facebook or Alibaba or Amazon today. It goes beyond whether Facebook allows fact-checking for fake news stories. It goes beyond whether commons-based peer production using digital technologies can embed certain values, like moral legitimacy. The dominant imaginary, which obsesses over capital markets and economic growth, is reinforced by a discourse which sees digital progress as basically a force of nature. Algorithms are seen in some of the instrumental literature as self-organizing systems which crank themselves out of themselves; that's Brian Arthur. And here's an example from a recent Wired article; it's Kevin Kelly. What does he have to say? 
He basically says that rather than struggle against it, and he means digital technology advance, when all this emerging AI arrives we should basically welcome it. Why? Because it'll give us new products and innovative services; who could complain? Our lives will be better off. But it's the notion that this is a one-way pathway which is significant here. So what are critical researchers saying? Critical researchers, of course, are saying that technological progress is not a force of nature. But if complex systems of algorithms, data structures and software code progress along their current pathway, we might see consequences like these. We might see a reshaping of cognition, a reshaping of the human brain. We might see a gradual reduction in what it means to be human. And we will see more and more asymmetry between humans and their machines. To think about how we might reduce these social and economic inequalities in relation to the digitally mediated world, my argument is that we need to also focus on the direction of change. Critical scholarship is helping to demonstrate how far the commodity form of online engagement is reaching into our lives. Of course it is. We have lots and lots of evidence these days of algorithmic bias, whether racial, gendered or other. But the march of algorithmic complex systems is not often questioned by critical scholars, other than to propose using somewhat different designs of those same technologies. So much so that Shoshana Zuboff has recently asked: what happens when human authority fails? Is anybody really talking about that? As wealth shifts to digital market organizers and owners of infrastructure, there will be calls for transparency, of course there will. And there will be some public resistance, of course there will. But on the current digital pathway, the future is quite bleak. And here's a political economist's view: McChesney and his colleague. 
He envisages, on the current pathway, basically disposable human beings: a disposable population no longer skilled to participate in their lives. That's a very bleak picture. Political economists like McChesney are assuming that human beings, mainly of course company executives, still do have control of the technological system. But McChesney and others also have this to say. They say that the violence of technology resides in the way it cuts the link between the person and their sensory interaction with the world. And this has huge implications for how we understand each other. Scientists who do the work and understand many of the scientific and engineering aspects of the world that's been created, like Stephen Hawking, have this to say: in the future, artificial intelligence could create a will of its own. It could be the best thing or the worst thing ever to happen; we do not know which. So we have an environment in which we really don't know, and we don't have a crystal ball, it's true. So, in summary, the response to the principal contradiction between machines and people is basically to build more advanced computational machines, to use bigger data sets, and to improve our learning machines. It's to focus mainly on investing in instrumental research. Today, behavioral economics is growing: can machines predict human behavior better? Sometimes the focus is on ethics, it's true. But it's on ethics which is formalized into the code of the machines and then possibly lost to our future control. The dominant imaginary is about a natural trajectory or direction of change, which grants relative autonomy to the large infrastructure and platform service operators today, but in the long term they might not control their own technology. What would this mean for social justice, for human flourishing and for equality? So I'm gonna move on to my conclusion now, some reflections. It's an awful lot to assimilate, and it's not a very happy story, and I'm an optimist. 
So, first of all, of course we turn to formal governance and policy to control, or at least limit, accidents resulting from experiments in the lab and the excesses of the private sector. We turn to legislation and regulation to shift, to tinker with, the incentives of companies, and sometimes in fact they do act differently. Some of these digital platform providers, the Googles, the Facebooks, do respond, because they have to seek public legitimacy for some of what they do. But my argument is that this doesn't tackle the long-term principal contradiction I mentioned earlier, of authority and control. Computational practices are being internalized, and when they are internalized it's really, really hard for people to imagine how things might be different. A UK report on artificial intelligence and robotics that just came out a little while ago emphasizes and reflects on the fact that, of course, transitions to the more complex artificial intelligence, robotics, automated world will be difficult. But it also says we have no choice, and in addition it says we must adapt to the transformation which is coming. It does say there is time for scrutiny of ethical, legal and social dimensions, and that is hopeful. But the question that you need to ask, and I need to ask, is: what kind of dialogue is actually needed? What would a dialogue look like that would actually shift, if it was necessary, the direction of digital innovation? How can we expect citizens to participate when alternative imaginaries or visions of the future are suppressed? Think about the fact that 71% of Europeans feel there is no alternative to disclosing their personal information. Think about the fact that industry leaders, as in a recent McKinsey report, see anyone who resists these developments as disgruntled employees, criminals, or terrorists and political activists. So their resistance, whatever it looks like, is discounted. 
But it seems to me there is no alternative to some kind of deliberation. As long as people can exert authority, and certainly we still can, I think we can learn from Paulo Freire, who said this: dialogue is a moment where humans meet to reflect on their reality as they make and remake it. Even political economists who are very bleak have this to say: we can, through conscious thought and action, change both the world we live in and ourselves. So in the face of the obsession with economic growth, not wealth distribution, and with an ever more complex digital environment, not one which is nuanced and matched to the abilities of human beings, we do have to take the view that it is possible, or can be possible, to think differently about the future. So it would seem to me that the dialogue needs to be about two questions. The first is: what will human beings do with their lives in the future? They're not going to be doing the tasks of the past, so what will they do in the future? And how, and by whom, and/or by what, will life chances be established? It seems to me those are the fundamental things we need to reflect upon. There needs to be a dialogue about what people value in their lives, not just, as is increasingly the case, about what values are embedded inside the technologies that come to market. And not only about the power of policy to constrain or limit or tinker with unwelcome corporate and state behavior. If we come back to the comparison between instrumental and critical research: for the instrumental researcher, since the digital future cannot be known with any certainty, the best that can be said is that the digital future is beyond the scope of human understanding. That's a quote from the recent UNESCO World Social Science Report, which is otherwise very critical; but when it comes to technology, that's what they say, it's far beyond the scope of human understanding. We have to get beyond the idea that the study of the impact of digital technologies is in its infancy. 
That is a quote from a recent survey of all of the digital divide and inclusion literature: research in this area is in its infancy. We have to get beyond the claim that we can only understand social and economic harm after it has happened and has been empirically documented. So we need to be able to think about alternative pathways. We need to be able to problematize the notion that the natural pathway towards the future is the one depicted in most of the policy literature and in most of the discourse of the instrumental researchers. Neoliberal policies and corporate agendas are clearly complicit in supporting the current pathway. Political classes moderate some company incentives and shift their strategies, but only within the boundary of this current pathway. Filters might be introduced to protect children. Pressure may mount to ensure that content aggregation platforms moderate content in somewhat different ways. Measures might be introduced to track trolls and bullies, and targeted or mass surveillance might be authorized by independent judges to protect public safety, but these operate after technologies are released into the marketplace. If more people were to come to understand that the pathway of digital technology innovation is not inevitable, we could start a discussion about whether existing and alternative pathways are consistent with, for example, the rights and obligations we have as individuals to each other. It could be a complementary discussion, in addition to the one about digital divides. We need to pay much more attention to expectations about the future, not just to after-the-fact, or ex post, empirical evaluation. We need a reflective dialogue on future expectations. Actions may be required not just to mitigate or reduce the risks of today's digital applications, but to fundamentally rethink the direction of technological innovation in the digital arena.
It might be concluded, on existing evidence and on industry expectations, that it's not possible to ensure that an artificial intelligence-based computational system meets the desired standard of citizen protection. If that were concluded, what would be the response? In other spheres, like GMOs, for example, or some kinds of in vitro fertilization, et cetera, what do we do? Sometimes we have a moratorium, where we pause and say, no, you can't introduce that particular technology in the here and now. There's a period of reflection. In the IT area, how often do you hear about a moratorium on the next generation of artificial intelligence? So basically, in the end, conflict over priorities in the digitally mediated world will go on, it will persist. There's no doubt about that, especially in a supposedly post-truth world, unless we absent ourselves somehow from society. But it seems to me that that is not an excuse for doing nothing. It seems to me that deliberation must be possible and that it must be acted upon, or we could well find ourselves in a similar situation to what we see, for example, in the context of global warming, where the feeling of futility is sometimes so great that people say, oh well, just carry on as we are. Should we just carry on as we are in the digital sphere? A very interesting elaboration on the technological myth, I would say, the economically driven sphere of ICTs and how it has been considered a sort of natural disaster or luck, depending on what side you are on. Yes? Yeah, I think it's a good example of working within a particular framework of technology. So the sharing economy is built upon the same technologies which are used for commercial purposes. And yes, it enables all sorts of things to happen that were not possible outside a market system. But what we are seeing with many of these initiatives is that they are gradually incorporated back into the mainstream, or they become hybrids.
And so I think that if they benefit large numbers of people in local communities, at the country level, even globally, as they sometimes do, especially in fields like crisis communication and open source software development, it's all to the good. But the point is they're still working with the same platforms, algorithms, computational things, where more and more decision-making, even by them, is driven by what the numbers say, and less emphasis is put on the capacity of people to make judgments about those numbers. A PhD student of mine who just finished wrote part of his thesis on the way open source communities have engaged in firefighting and finding lost children in Russia. It's all commons-based peer production, huge initiatives against an authoritarian state. And they would claim to have made a big difference. The point is that those technologies and the platforms they're using are becoming more sophisticated. Fewer and fewer people actually know how to build and maintain the system. And his argument is that they are not skilling up enough people, which is part of the digital divide aspect. But the question then becomes, how are they gonna keep it running? They need more servers, they need more of this and that. They're gonna get investment from, not the Chinese state, but a private company in China, into the Russian platforms for doing this commons-based activity. So you get the market coming back in. It's the same technologies and, increasingly, the same reliance on an automated system. It's cheaper. There are a number of ways to answer that. What I would say, for the purposes of this talk, because I'm asking people to think not about what's happening now but about what is likely to happen in the next period, is that it'll be a continuous process of catching up. So yes, absolutely, Facebook goes in and does deals with telecom operators and allows access at much lower cost, with its content on its site. And this is enabling for many people.
The extension of networks throughout sub-Saharan Africa has, yes, allowed many entrepreneurs to make a difference in their lives and to market their goods overseas. These are all very positive. So nothing I said should be taken to mean that we should stop trying to close the divides that exist now. Of course not. My issue is that we need to start thinking about how those divides are perpetuated into the medium and long term, because there is always an over-emphasis on the next technology. So Facebook and Google can do an awful lot of good in the here and now, perhaps. They can do a lot of bad too. But they can do a lot of good in terms of opening up access. But does that actually address the longer-term problem of the relationship that people have with the kinds of calculating machines that are becoming central to their lives? So the whole world becomes incorporated into this environment. Is that really good for how we think about human flourishing, and for understanding what social and economic inequality really mean in terms of wealth distribution? So yeah, I mean, if Facebook doesn't do it, it seems likely that the UK government won't do it, especially not in the current environment. I think one of the interesting projects that people can do is to trace the histories of these alternative technologies. So some years ago, I did a study of the pathway taken by Negroponte's brilliant idea of the $100 laptop computer for Africa, and the deals they did with governments and companies to introduce those things. They ended up being bundled into telecom operators' packages, and not only did many of them sit in boxes, but basically they became unaffordable for the very people for whom they were supposed to be affordable. That was what, 10 years ago? I know that there have been various iterations of low-cost, adaptable technologies for poor and rural areas. The question isn't whether or not people should have access; of course they should.
If you need access to networks in order to live your life, then the more we do to enable that, the better. The question is, where is the direction of change heading? So we're always trying to catch up. So let's say people get access to fairly modest computing power in rural areas of Ethiopia. What happens next if you just stop there with the access gap? Another PhD student of mine did a study of what happened in Ethiopia when they did introduce laptops into schools, especially in rural areas. Where did the content come from? It was straight packaged content from the US, in English, and it was never used in the school system. That was the late 90s, early 2000s. So there is a complex issue. You're catching up again. You're not skilling the teachers who do the teaching. The students don't really have access to any new knowledge, and if they do, it's US-centric knowledge. It's part of a package. So what I'm trying to challenge is getting people to think about what addressing these gaps means in terms of the complete picture of people's lives, where they live, and the kinds of autonomous development that they hope they are going to be able to achieve. There's an exceptionalism, if you like, around the digital. When we think about social and economic inequality in the world at large, we don't have any difficulty thinking about it as a long-term structural problem in general, right? The problem comes when you see a technological fix as the answer to what is a larger structural problem. And what we seem to see, historically and now, is that the investment that goes into the new technology which is supposed to fix the problem is all on one side. It's heavily weighted towards the much more advanced, because that is scientifically, and potentially in terms of future revenue, the most rewarding for big companies. So it's about the scales of justice, if you like. The emphasis is here; not enough emphasis is here. And that is the case in all other walks of life.
Why shouldn't it be different in IT? But no, when it comes to IT, you get this sort of nod to the common good, and this nod to project-based investment in developing countries through aid agencies and some civil society initiatives, always trying to close the divides. Their initiatives are welcome. That's good. But they are never, ever going to be able to tip the balance, because technology is racing ahead. And if it's really racing ahead into the automated sphere of technological unemployment that most people are forecasting now, the people in those countries are gonna have to find some other way of living, just as are Western people in the wealthy countries, but they will have fewer resources to do that with. And that's my point. Let me say something about how this is not dystopian. It's actually very hopeful. There must be five books published a day by reputable publishers, especially in America, big books, airport best-sellers, that tell this story about technological inevitability and how good the automated world is going to be for us. And there are critical researchers out there, many of them, who talk about the materiality of technology and socio-technical developments, and also about the political economy of structural problems. They're over here. The airport best-sellers are over here. Policymakers are somehow in the middle. I haven't seen the new report on AI from the EU, but if you look at the UK one, they're in the middle, sort of saying, oh, we have difficulties. We can't understand the future. We have to do empirical research to find out if there are any risks, and if there are, we'll address them when we are sure that they're there. Of course, one should not leap into a risk mentality. But on the other hand, surely over the last 25, 30 years, we have learned that human beings, yes, they need to adjust to a changing world, but to do that, they need resources.
And if the resources are skewed, then we need to fix the resources somewhat if we're going to enable them to take control of their lives, et cetera. We can't expect IT to fix things. Most critical researchers would agree with that, but they're not in the same room together. And that is why I talk about a reflective dialogue. Can I do one more anecdote? Is there time? I did a study for the UK government three years ago about public understanding of science and technology in the digital area. And we wanted to have some dialogues, some workshops. And this is a true story. I put down the names of various NGOs and civil society organizations in the London area on the list to be invited. They weren't going to be invited if there were representatives from the Home Office, Trade, the Development Office, the Cabinet Office; you can't have any of those types in the room. So you have to keep trying, is my point. But if they had been in the room, I am sure that the discussion we had, which at the time was about surveillance, not the current surveillance bill but an earlier one, would have been different. I don't know that the outcome would have been different, but I know that we could have tried harder. I think it's very interesting, the idea that this inevitability actually doesn't exist because we build our own future. But policymakers are largely convinced by the industry that this inevitability exists. At the same time, policymakers are open to other views. For instance, I'm thinking, when I'm listening to you: you have a very critical stance. You have always had a very critical and very clear stance towards this idea of technology as the solution to everything while it's economically driven by certain interests. But you have been invited many times by the political sphere at all levels, in the UK, in the EU, in Canada I think also. So there is room for this different thinking. So what can we say? What can we explain?
Here we have a lot of PhD and master's students who can choose to do very instrumental research, and be fairly sure that it will likely be picked up, maybe with corporate interests behind it, or they may be much more critical, and then probably only the university, the academia, will be behind them. But we can all have an impact on policymakers. The problem is that the balance is not in our favor, because the economic interests are probably more powerful than our voice. But the policymakers are not evil. They are human beings like the rest of us. They are open to change. There are very different political colors of policymakers. So what's your experience with this? Because you have been there advising them. Mixed is my experience. I guess, and I suppose this is not a very popular view, but I don't use the same language when I talk to policymakers as I might have used in some of the talk today. I do change the way I talk. So words like, I don't know, materiality of technology, or discourse, et cetera, don't appear if I'm talking to somebody in London or Ottawa or wherever, because they just look at you and ask what you're talking about. I have a much more straightforward way of putting it. And I also don't think that there's as great a separation between instrumental and critical as is sometimes suggested. If you don't know how something works, which is the instrumental question, then how on earth are you going to have anything to say about why it came to be the way it is and whether or not it should carry on being the way it is? I think you need to try to combine them. And there are all sorts of, I think, effectively instrumental projects, whether about the digital divide or about artificial intelligence and automation, that start out looking very empirically driven, very concerned, for example, about the takeoff of a market or corporate strategies, but which very soon become problematized.
That, as a researcher, you can turn into a problematic issue and begin to tease out power relationships. But you don't necessarily need to go into a policy environment and hit people over the head and say, these are structural power relations, you have to fix them now, because they haven't got the power to do that in the first place and they probably won't. So I think you have to be sensitive, in dialogues, to those with whom you're engaging. And if they ask, yes, of course. But if I want to get my message across, which sometimes I do, sometimes I don't, I would say that surveillance and privacy are the two areas where I, and maybe other researchers that I work with, have had the most impact, working with civil society rights groups. In areas to do with the structure of the digital economy, in areas like competition policy and legislation, in North America and Europe, less so. If you're a PhD student or a master's student, you should pick the topic you love. You don't want to investigate anything that you don't absolutely love and want to know the answer to, how it works the way it works. But then you need to work with your supervisor and ask, hmm, am I asking an easy question or a tough question? To do the empirical research, you probably have to ask a modest question. Then you need to contextualize it. That's what I would advise for everybody. Anything else? For those of you who are young, think about what your children are gonna be doing in everyday life. What kind of world are they gonna live in? Will there be jobs of any kind, except for the top rarefied 15%? If there aren't, which some may view as a good thing, what will people do? How will creativity be expressed? How will incomes be earned? These are all critical questions, and they need reflection from more than just industry pundits and airport best-sellers. Carlos? Talking about policymakers: I was thinking of a dinner with one of my friends. He is from Mexico. He has a PhD in education from Harvard University.
He's older than me. Once we met at a conference with his PhD supervisor, an emeritus professor, retired. He was about, I don't know, 85 years old. And he told us, please, I've been working for different governments for more than 50 years, not only in the USA, but in Latin America and Africa. And only once did they take my suggestions into account. So it was frustrating, because he has produced a lot of reports about public education, and only once did they take them into account. So if we translate, if we move from education to digital technologies and media communication, I would say, forget the policymakers. Let's think of another kind of transformation. Bottom-up initiatives. When you were talking about alternatives, here in Catalonia there is that internet movement, an alternative to the commercial networks. It's a bottom-up project and they are growing every year. They have nodes in different places in Barcelona. And I was trying to work in collaboration with them. So I think, I believe in that. Another thing I want to say: we're talking about social science, we're talking about policymakers. I don't know if the social sciences are prepared to answer these questions about the future. Because, I don't know, I think we still have a lot of the industrial society in our styles of reflection and our models. And I would add that we had a beautiful exhibition at the CCCB here, the Contemporary Culture Centre in Barcelona. It was a beautiful exhibition about the post-human, and now they're preparing one about the Anthropocene for this year. And it was a good combination of science and artists. And I think we should include artists in this discussion about the future, because I think they can add things beyond social science and they can open our minds. I mean, artists and fiction, Black Mirror and so on. I think it's important to introduce this. Well, to start: I absolutely agree, I think, about bottom-up.
My only concern about bottom-up is that when the same technologies are appropriated, not now, but into the future, there is a possibility of them becoming less subject to a knowledgeable individual's notion of what they're doing. That's a concern and it should be considered, that's all. On the side of art and science coming together, I fully agree. The Royal Society in the UK invited me to a seminar on digital networks, and I work with an artist who's based in France. And we did something on high bandwidth and online performance at a distance. And they found it really, really interesting, but of the whole day of presentations, there were maybe only two which were about that kind of potential and engagement with people. All the others were about 5G technology, with Huawei, the Chinese company, leading the pack. And they are the ones, the scientists and engineers, who got money for investment after the fact. And as far as I know, nothing came out of the social science and the arts. The social scientists there are predominantly instrumental too, as you know. And until they stop being predominantly instrumental, we won't get more of an interchange between critical and instrumental researchers. I think you have a point there. But at the same time, leaving the policymakers aside is exactly what the big industry wants. So if we leave them completely aside, then the power in society is very unbalanced, because of course the voice of artists, critical thinkers, and scholars doesn't count for much with the big industry. In the UK we have to measure the impact of research on society, et cetera. Interestingly, what we try to do is to trace whether they take up your work, a paper or a blog, and publish it; it comes in multiple, multiple forms. And we've been somewhat winning that battle. My work and Sonia Livingstone's last year was judged to be very impactful, top marks. And nowhere did we have to say that Minister X or policymaker Y had necessarily been persuaded.
But we were able to show how we were trying to change hearts and minds and all sorts of things. The case of your friend may be representative of some spaces, some spheres of policy. But I think that scholars can be very influential, and in communication we have seen scholars being very influential. Here in Catalonia, for instance, a few scholars have been very influential with the Catalan government at different points in time, some of them coming from this department, giving advice all the time and preparing the reports and the new regulations, not just for the digital sphere but for communications in general. So I don't see the impact as being so little. Your friend may be right in some areas, but not always; it's not so little, I think. Well, just my idea. Any other comments? Yeah. I'm thinking of police brutality in the States. A video going viral on Facebook can serve as a way to restructure power relations. And if we're able to do it now, what makes us think that we won't be able to do it in the future? Certainly, social media played a role. There's no doubt about it. But are those developments sustainable? Does it mean that we only turn to those developments to make a difference? I think that's an open question. And again, my point isn't that the technologies we have today are not hugely enabling in many, many ways, including against authoritarian states and also in some difficult situations. But they have another side to them. Technology is never only that; it also has a dark side, and we know that the counterpart of all of those engagements is that they can take on a politically charged character. So the question I'm raising is: insofar as there are dark sides in this environment, do we only wait till the future, when damage has been done? Or do we think now about whether or not to take the next step with that technology?
So the question isn't just how we use the tools that we have now, seeing something on social media and causing an effect, but whether we are also using them to better effect in the future. Just imagine if those algorithms that make up your websites 20 years in the future are not in your hands to control or to socialize. That's the issue. The questions are about the artificial intelligence-infused future and its various consequences. So just a final thought. Driverless cars are getting a lot of attention for various reasons, because it was just announced they might be put on the streets in England. And the insurance industry has said it categorically will not insure individuals who are run over by a driverless car. So does that mean we delay driverless cars? Or do we put them on the streets even when the insurance industry has assessed the risk and says, we will not insure you? Which is it? Well, the driverless cars are going on the streets, and we'll wait for the first few cases of people getting run over when the software goes wrong. And then people will complain, and there'll be an uprising, and then they'll do something to change it. Always ex post. Is it not possible to think sensibly in some of these areas before the fact? No more questions? Well, maybe we can leave it here then. Okay, thank you, Robin.