So, friends, colleagues, welcome to this Director's Lecture Series, The Evolution of the Internet. As you well know, the Director's Lecture Series is really meant to focus on the planetary questions of our time and how we go about enabling a collective human response to these challenges. In this historical moment, all of our big challenges, pandemics, climate change, inequality, social and political polarization, are transnational in character and require a cohering of the human community. How we go about doing that is the subject of this lecture series. We focus on a big transnational challenge, and then we think through with colleagues how to address that challenge in a way that is progressive, in a way that allows the human community to cohere. So today what we're going to talk about is the internet: the evolution of the internet, how it changed our world, how it has challenged democracy, and how it has both united and divided us. Around 30 years ago, the world as a whole began to confront the internet. That marked the start of our engagement with the internet, with social media, and with a global medium for news and information. Three decades later, the evolution of the internet and the birth of social media have brought unimaginable change. While many would argue that it has led to the democratization of information, we have also seen the fragility of democracies against the growth of misinformation, and our perceptions of science, ethics and community have all altered drastically in the last three decades. Can and should the internet be harnessed to solve global challenges? How will new developments like advanced chatbots and generative AI transform labour, creativity and ethics? Will they improve workloads and speed the creative process, or damage livelihoods and stifle the arts? In this conversation, in this series, we will ask: how has the internet truly influenced our world?
We have two speakers who will be engaging with us today. The first is Richard Gingras, who was on the screen a short while ago, but my colleagues have done something that made Richard disappear. I'm assuming he's going to pop back up again. Richard is Vice President of News at Google. In that role, he works on strategies for how news is surfaced on Google News, Google Search and its smart devices. He oversees Google's efforts to enable a healthy, open ecosystem for quality journalism, which includes Subscribe with Google, Google's tools for journalists, Accelerated Mobile Pages, and various other efforts to enable journalists and news providers to be effective and sustainable. In March 2018, Gingras announced the Google News Initiative, a global effort, including $300 million, to elevate quality journalism, explore new models of sustainability, and provide technology to stimulate cost efficiency in newsrooms. Gingras is a member of the Knight Commission on Trust, Media and Democracy and was a co-founder of the Trust Project. Richard has been involved in digital media since 1980, or, as he once put it, since the days of the steam-powered modems. He helped found Salon.com, where he once worked with Pulitzer Prize winner Glenn Greenwald, and he has also worked at Apple, the At Home Network, and the Excite portal, among a number of other digital ventures. He serves on the boards of the First Amendment Coalition, the International Center for Journalists, and the Shorenstein Center on Media, Politics and Public Policy at Harvard. Renée Cummings, who's on my left, is an artificial intelligence ethicist and the first Data Activist in Residence at the School of Data Science at the University of Virginia, where she was named Professor of Practice in Data Science. She is a non-resident senior fellow at the Brookings Institution and a distinguished member of the World Economic Forum's Data Equity Council.
She is also a criminologist, criminal psychologist, therapeutic jurisprudence specialist, and a community scholar at Columbia University. She also serves as co-director of the Public Interest Technology University Network and is on the board of advisors of the Carnegie Council for Ethics in International Affairs. Her work extends to data, democracy, representation, identity and governance, and critically examines data rights, algorithmic justice, social justice and design justice through a criminal justice lens. Renée also specializes in AI leadership, AI policy development, AI governance, public sector AI, and AI risk management and communication. She's committed to using AI to empower and transform, helping governments and organizations navigate the AI landscape and develop future AI leaders. Renée also just notified me that she and I share an alma mater in the United States, the City University of New York. So she's doubly welcome in light of that information. So colleagues, what I'm going to do is give 15 minutes to Richard to say a few words, then I'm going to come to Renée for her 15-minute introduction, and then I'll field a series of questions and conversations around the issues that they cover. So Richard, the floor is yours. Again, I want to thank you for the opportunity to be here with you both today. It's a real privilege and an honor to have this conversation. I've had a rather unique and somewhat accidental career at the intersection of media, technology and public policy, spanning five decades. In the 1970s, a wise mentor told me: if you're interested in the future of media, in the future of journalism, stay close to the technology. It's what creates the playing field. It sets the boundaries. That was not obvious then. Change was slow. It is obvious today. Change is rapid and constant. In 1974, I worked at the Public Broadcasting Service in the United States.
We were building the first satellite network to deliver television programs to stations across the U.S. Until then, distributing programs to stations was costly and cumbersome. Telephone lines were hugely expensive. Tapes were often shipped serially from station to station. Satellites would change the face of television. They enabled dozens of new programming networks. They laid the foundation for the explosion of cable television. The number of channels in the U.S. multiplied from 5 to 500 within a decade. At the beginning, there were channels for a wide array of special audiences. Today, the programming is reality shows and shark attacks, all owned by a handful of large media companies. The expanded distribution of cable was left in the hands of a few, which in the minds of some was the rightful order of things. What's my point? The fight for share of voice will happen no matter the means of distribution. Those with real or perceived influence, be they governments or the private sector, will give no quarter to maintain and expand their share of voice, their share of influence. The Internet expanded access to distribution. It lowered the bar, eliminated the friction, for any voice seeking an audience. The audience might have to find you, but you can be found. Ideas can spread; new audiences can meet new voices. It was the dream of free expression purists. At the dawning of the Internet, many, including me, were optimistic. We believed the more free expression, the better. We believed our better angels would win. We learned there was a dark side. We're not all angels. The Internet enabled challenging and problematic behaviors. Understandably, governments are starting to regulate the Internet, typically with good intentions, but often with unintended consequences that potentially damage the free press and the openness of the Internet. Which leads to a fundamental question we face.
How can we assure that evolving Internet policy will enable an open and diverse press, and not reinforce a specific political interest or prop up a legacy industry? I fear the open Internet is slipping away from us; that our 25 Internet years, which enabled an ultimate model of free expression, were an aberration. The challenge of problematic expression cannot be ignored. However, it's essential that we understand and balance the risks to free expression itself. The slope is slippery. Some participants in the policy discussion see the Internet as a threat to their share of voice, their share of influence. They'd rather turn the Internet into a distribution environment like those that enabled their earlier success, where share of voice went to those with the power and influence to command distribution. They would rather see more friction between new voices and the audiences they seek. They would prefer to see core concepts of free linking and fair use curtailed. They may campaign with noble words, but the bottom line is a desire to maintain prior dominance, to constrain the openness of the web, to reduce the diversity of voices it enables. I urge close attention. I urge journalists reporting on Internet policy to dig beyond the memes, to not be blinded by short-term self-interest. The stakes are high for the future of journalism, for the future of open societies. I support, we support, thoughtful Internet regulation. I only hope that it respects these key principles: to protect the open web and the open Internet and the free expression they enable, not a closed distribution system favoring the few; to enable a diverse free press; and to protect against undue government influence that imbalances the news ecosystem. We want to trust regulation to have the intended effect. But analyze the fine print. Will legislation that purports to address misinformation be effective while creating wide exceptions for politicians, or for any spin master calling themselves a journalist?
Will legislation proposed by legacy interests seeking a return to their era of dominance constrain the open web and the opportunity for a more diverse free press? The world has changed. More than ever, societies need quality journalism to understand their world and express their roles as citizens. The Internet overwhelms us. It continues to change, click by click, with every glob of media it spits out. From the sweet memes of social networks to an endless array of opinionators and influencers. From the helpful tutorials and inspired dreams of video creators to hucksters and propagandists. From snapshots of cute grandkids to doctored photos of false righteous indignation. From thoughtful forays into innovative digital journalism to astroturf journalism funded by who knows who. It's a complicated media ecosystem composed of frightening simplicity: our culture, politics and news reduced to memes and 280-character sound bites lacking context and substance. Our world is twisted and torqued by daunting cultural memes we are induced to amplify, by bad ads offering false remedies, by politicians igniting the fears they pledge to extinguish. Yes, there is thoughtful, fact-based journalism sprinkled in, often hard to identify and largely overwhelmed by the cacophonous, mind-numbing buzz that is the collective expression of the Internet. How does journalism perform its critical role in the midst of all that? How might we better understand how journalism is perceived in the societies we serve? Do audiences understand the role of journalism? Do they know which sources to trust with their precious time and money? Is the explosion of inexpensive but popular opinion smothering the credibility of fact-based coverage? Is the drift towards partisan news making the problem worse? Do they understand what we think they understand? News sites seek subscriptions and memberships, making earnest pledges about the virtues of quality journalism.
What small percentage of our societies understands any of that? How might news organizations better understand the needs of their communities? What information do communities need on a daily basis? What will build ties within the community? What will they value? What will they pay for? Do they respect your work? I work with emerging local news outlets around the world. There is reason for hope. Cityside, a non-profit in California; Citynews in Italy; and Village Media, a for-profit in Canada, have found sustainable success. They engage with their communities and understand their information needs. Their success is an opportunity that many local news entrepreneurs can benefit from. Accountability journalism is critical to the role of journalism, but it's important to address a community's broad information needs: community events, local sports, obituaries. It's this service journalism that drives engagement, builds community ties, enables local advertising, and expands the audience for the accountability journalism they do provide. How can journalism rebuild trust? Eight years ago, I joined Sally Lehrman to call for a focus on the declining trust in journalism. With the Trust Project, Sally has generated research and assembled principles and playbooks for news organizations, with approaches to transparency, provenance, and trust. The Trust Project works with hundreds of news organizations. There's more to learn, more to do. Ulrik Haagerup and the Constructive Institute in Denmark pursue a different angle, rethinking the models, the formats, the linguistics used in our journalistic work. The word constructive is key. It's not news that makes you feel good. Constructive journalism goes beyond the typical coverage model, with clear signals and clear intent, to include the necessary context, the hows and whys, and, importantly, a consideration of how the calamitous event could be prevented.
It's designed to seek common ground; when staging debates, they avoid divisive formats like Crossfire. What better way to gain society's respect than by demonstrating the power of journalism to help a community understand its challenges and address them? But how does a society understand its challenges? How do we address synthetic, generative media and its inevitable existence in the information ecosystem? Artificial intelligence is rapidly enabling the creation of media, excuse me, all media, from images to text to artificial personas. It is now readily at hand. It can be used to quickly create compelling communications and media components. How will this affect our understanding of authorship? How can we be assured of provenance, of basis in fact, of authenticity? How can journalists benefit from these tools, and how should they appropriately disclose using them in their work? How do we enable the benefits and manage the harms of AI knowledge engines? How do we address the likelihood of silos of knowledge: my knowledge engine is smarter than yours? How can journalism avoid amplifying a society's distorted sense of risk? In the U.S., we are 400 times more likely to die in a traffic accident than in an act of terrorism. We are 35 times as likely to die from cancer or heart disease as from violent death in any form. Yet research tells us we perceive those fears in reverse. Our fear of terrorism is exponentially higher than our fear of dying in our cars. We live in a landscape of distorted risk. We live in a society where our perceived fears are amplified such that we lose sight of our society's real challenges. Every day, we read about terrorism, home invasions, kidnappings, all the horrific but anomalous events that occur in our modern world. However unintentional, news reporting plays an intrinsic role in molding perceptions of reality that conflict with actual reality.
If we believe the role of journalism is giving citizens the information they need to be informed citizens, might we provide more context? Was there a trend of home invasions, or was it a rare occurrence? Can we close the gap between irrational fear and rational fear? How might we adapt to the media forms our cultures are adopting? The underlying assumption of a democratic society and the profession of journalism is this: if we express our ideas with the right words, the logical arguments, and if enough people read those words, then our democracy will be effective and the world will be a better place. Again, the internet and new media forms have rearranged social, political, and cultural structures. We see it with social media. We see it with short-form video. The messages get shorter, an inescapable progression, or digression, of how we communicate, how we understand the society we live in. We can't ignore it. Kevin Munger argues that forms of human conversation have an overwhelming influence on what ideas we can conveniently express, and what ideas are convenient to express inevitably become the important content of a culture. I'm not suggesting TikTok is the future of journalism, though in its own crowdsourced way it is a medium of journalistic expression. We need to adapt to the language constructs of our time. Last but not least, how can we reach those who don't care or have lost interest? Only 10% of us, or less than that, actually regularly consume what we might call serious news. The Reuters Institute tells us even fewer pay for news. We hear it from our friends. They avoid the news. It makes them sad or anxious or fearful. They find solace in other ways, binging the latest on Netflix or feeding their addiction to TikTok. I recall what Neil Postman wrote about television in 1985: we are amusing ourselves to death. What George Orwell feared were those who would ban books.
What Aldous Huxley feared was that there would be no reason to ban books, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. As Huxley noted in Brave New World Revisited, the civil libertarians always on the alert to oppose tyranny failed to take into account man's almost infinite appetite for distractions. In Orwell's 1984, Huxley added, people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us. It was one of the great Greeks who said that our open societies, our democracies, will be destroyed by the freedoms we enable. Wise words, terrifying words; they hit a little too close to home. The political sphere has adapted to the capabilities of the internet, to speak to voters, to build political alliances, far more quickly and effectively than the world of journalism has. We see the impact around us. The trend is concerning, as concerning to me as it is to you. The challenges are as complex as they are critical, indeed existential. We'll need a close understanding of the challenges and the collective wisdom to address them. I am optimistic. I am a glass-half-full person. Though now it's a bit easier if that half glass is vodka or gin. I look forward to the conversation. Thanks, Richard. Renée, the floor is yours. Thank you so much, and let me say it is an absolute honor. I think what I'm going to do is give you the ChatGPT-4 version of what Richard just said. So: the internet is about power. It is about profit, and it is about privilege and persuasion. It is about control. The internet is probably the most dangerous playground at the moment.
The challenge with the internet is that our data is what the internet is now about. All the big tech companies are using the internet to access us, because that is where we spend most of our time, if not all of our time. That's where we do everything. So it's a great place for information, we know that. It's a fantastic place to communicate and to engage, but it's no longer a safe place. Much of the conversation is around safety. It's around protection. It's around privacy. It's about disinformation. It's about deepfakes. It's about questions around democracy. I think for many, the 2024 presidential election in the US is really going to be the test case for the future of the internet and what the internet can really do. I think what we have realized is this: each and every one of us, because of our use of the internet, some of us for a decade or more now, has generated an extraordinary amount of data, and the data that has been generated from our internet use is the data that all the big companies have been using to create all those exciting products that we have been engaging with. So the challenge is this: each and every one of us is being accessed. We are being packaged. We are being repurposed. We are being traded. We are being sold. So many of the conversations around the internet at the moment are about the monetization of that data, about the weaponization of our own data against us. It's about optimization. It's about oppression. There's just so much conversation about content moderation. I think some of the interesting questions about the internet at the moment are: how do we deal with the safety questions? How do we deal with questions around ethics? How do we treat with the misinformation and, of course, the disinformation wars? How do we deal with things like deepfakes? How do we deal with democracy?
What is going to be the impact of disinformation and synthetic media on democracy? How is that data going to be used? How is that data going to be analyzed? So when we think about the future of the internet, we are really thinking about questions of agency and autonomy and self-identity and self-determination, and whether or not the internet is a safe space for us to communicate. It's also about questions around justice and around equity, and the voices that are being heard and the voices that are being denied. It's about visibility on the internet. It's also about questions around the digital divide and the fact that many individuals still do not have access. If we think about the US test case, we would realize that the digital divide there is very, very wide. There are so many people in rural states, young people, students, who just don't have access to the technology. So again, more questions around power. Once we start to really reimagine this relationship we have had with the internet, then we start to understand how much control our data has given away, and how much access to us. So for me, as an AI ethicist and a data activist, someone who spends a lot of time thinking about relations on the internet, and thinking of the internet from the perspective of a criminologist, the internet has turned into probably one of the greatest crime hotspots and crime scenes of our time, and how do we protect our children on the internet? These are really big questions. But the internet is also a place where we have this marketplace of ideas. But the question comes back to whose ideas are being shared on the internet. It comes back to that question of control and persuasion, and to the major players; because I think at the moment, with the deployment of ChatGPT, the conversation is now about the search engine wars and which one of these search engines is going to take us to the future.
So I think if we're talking about the internet and, of course, the future of the internet, there's just so much there to dissect: the fact that it is still a cultural phenomenon, and the fact that it continues to reimagine and reconfigure and reposition so many of our systems in society. But it's also a space for great creativity. So where there's extraordinary reward, there's extraordinary risk. Where there is risk, there is a need for a new type of education. AI is teaching us that at the moment: it is a new language. It requires a certain level of proficiency and literacy. We're seeing that the internet is not only changing us, changing the ways in which we engage and interact, and changing the world; what it is telling us at the moment is that we have got to pay particular attention to how we are using our data. Because the internet, as much as we have fun, as much as we engage, and as much as we get all the information that we think we need, really is a place where we have got to think about how we're negotiating and how we're interacting in that space. Just one question, or sorry, just one point, around the large language models and ChatGPT: all the data that was scraped from the internet to create these large language models, that's our data. That's all the things that we have said and we have felt and we have shared and we have commented on for all these years. So it's nothing that amazing. It really is just that content. So it's about content generation, it's about content moderation. But just three things that I want you to think about when it comes to the internet: profit, power, persuasion. I'm going to go fairly quickly to the audience, but before I do that, I'm going to pose a question to both Richard and yourself.
Richard, what you've spoken about is how the internet has amplified the voices of extremes and how it's compromised the quality journalism that is required urgently in our world to understand the myriad challenges we confront. If you were given the power to advance one recommendation, to put into place one reform around regulation, what would that be? Well, that's a very interesting question, and frankly a quite daunting one. First of all, I want to point out I'm largely in agreement with Renée's perspective. We are in a time where we are, and should be, reassessing how this extraordinary resource called the internet is utilized, how it impacts our societies, how it's controlled, who influences it, and so on and so forth. They are very thorny questions, rife with secondary consequences, as I have alluded to. There's no question the internet has had the huge benefit of enabling new voices, but it's also enabled people to take unfortunate advantage of that. One thing I point out, and I think one thing we should be very clear about: technology has value, but it doesn't have values. That's really on us as a society, as individuals, as well as on the governments that will take and influence the use of that technology. So is there one piece of regulation? No, I can think of many. I can think, for instance, of one from my years working on Google Search. I think it's extraordinarily important, not only for search engines but across our society, to ask: do we have mechanisms of algorithmic accountability? Do we have mechanisms that allow, for instance, third parties to research and analyze the outputs of these different systems, be it a search engine, be it generative AI, be it an insurance company's or a medical institution's use of AI? How do we create mechanisms that don't necessarily fully control it, but allow it to be monitored, such that we can understand what good or bad might be happening and put the appropriate pressure on correcting for it?
That's something that I think we feel strongly about. When it comes down to it, again, as you can probably tell, I obviously speak for Google, but I speak beyond that: we live or die based on the trust of our users. If they want to use a different search engine tomorrow, they can, and they may very well. So it's very important for us that we hold to that trust. It's very important to us that when we use data, we use it appropriately and privately. We, for instance, do not make any attempt to understand the political leaning of an individual, or the political leaning of an article that we might surface on Google, because that creates dangerous opportunities for undue influence, and we will not go there. So that's just one, but I think there are obviously, and as Renée would agree, many areas of potential regulation, and it's going to be very, very tricky to find the right path that both allows the good elements of these technologies to evolve and at the same time does not allow undue control by the governments themselves. So Renée, I want to come to you. I'll pose the same question, but in a slightly different way. You've made the point that the internet is about power, it's about profit, and it's about influence. And my reply would be: what isn't? Railroads, when they emerged, were about power, about profit, about influence. The steam engine was about profit, it was about power, and it was about influence. Isn't the internet exactly the same? And in all of those cases, both society and governments intervened in ways to regulate, to enable inclusion, to make sure that power didn't become all too powerful, that there were constraints put in. In many cases in the United States, huge oil companies and railroad companies were actually broken up so that they couldn't dominate the economy in a significant way.
Would you suggest that the internet, or at least the companies that own the internet or different parts of social media, be broken up, that an intervention be made to make them less dominant? And what are the dangers and advantages of that? Because how then do you regulate? How then do you enable those sectors to be regulated in a way that is in society's interest? I will say this: although that sounds absolutely amazing, the challenge is it could be too late for that when we think about the internet. When we're talking about all the other inventions, those were things we could actually see and touch. The thing about data is we cannot see data. Nobody can see what all that data that has been acquired over decades actually looks like, what the other side of that interface looks like and feels like. The challenge is this: we know that there are questions around trust and questions around safety and questions around privacy, accountability, transparency. Those are all critical questions. This is it for me: the algorithm. The algorithm at this moment is probably the most powerful aspect of technology. It has the ability to do extraordinarily positive things, but it also has the ability to be extraordinarily dangerous. I think, coming back to your initial question about what is that one thing: I think what we are going to see more and more would be disclaimers and disclosures, particularly disclosing when we are interacting with an algorithm. Because the power of the algorithm, as it impacts democracy, is that the algorithm has the ability to make a decision about you without you even knowing. Once the algorithm has that power, it has the ability to undermine your civil rights, your human rights. It has the ability to undermine your due process, and that is something that we have got to think about: fairness, equity and justice. The internet is now about justice.
Something that started as a playground where people were sending emails and communicating and having a good time has turned into a very, very dangerous place, even with the controls that exist at the moment. What we are realizing, from the perspective of an ethical approach to the ways in which we treat with content on the internet, is that what we need is real-time governance. And one of the things that we know about governance is that it is not real-time, and that is the challenge, because so much is happening in real time. So much is being deployed. The frequency of the deployment of these new tools is creating some very, very unique challenges around safety and around privacy and around just general protections. So I think what people need to know, and what people are going to be asking more and more when they are using the internet, is: what am I actually communicating with at this moment? And I think more and more, algorithms are going to have to be revealed so people can get an understanding of that power relationship, because it is a new type of power that the algorithm is creating. I mean, I am very committed to AI, I am very committed to innovation, but one of the things that we have seen at the moment is a lot of ethical theatre, an opera of ethics around AI and generative models, and what we are not getting is really an ethical approach. That seems to be something that is very challenging for us to do.
So if we are thinking about the future of the internet and how it impacts democracy: democracy, for me, is the ability to participate in my own decisions. It is not for an algorithm to say whether I am worthy of credit or not, for an algorithm to say where I should or should not live, or for an algorithm to say that because I look a particular way I am considered a risk. These are the kinds of high-stakes decisions that algorithms are now making. So if there is one thing that I would like to see, and it is not going to be one thing, because I want to see a whole lot when it comes to the internet, one thing I think people are going to start to demand more of is knowing who they are communicating with when they are on the internet, and whether the decisions being made about them, if I apply for a loan, if I apply for a scholarship, is it an algorithm that is making that decision. So it comes back to questions of trust, questions of accountability. Richard raised the point about auditability: how do we audit these algorithms? What we have also seen in tech is that no matter how much is invested in diversity, we are still not getting the kind of diversity in technology we need, we are still not doing inclusive innovation, and we are definitely not at a place of equity. Now, more and more, we are realizing that spaces such as data justice and algorithmic justice are becoming powerful spaces to ensure that we protect the underserved, the high-needs, the vulnerable groups. When we think about disability and the internet, there is just so much to think about. Let me push you a little bit on that, because it seems from what you have posed that the first intervention is the demand for transparency: we want to know what the algorithmic interventions are, who am I speaking to. You are saying there is going to be a lot, not just one thing, but if you take that, why isn't the question simply one of transparency? I think transparency is important, but even if there was
transparency, what can you as an individual do except not participate? And then you are quite constrained, given what you need. So isn't there a much tougher form of regulation that one could demand, one that says you can't use algorithms for certain purposes? Most definitely, beginning with criminal justice, because I feel that is a space in which we should not be using algorithms to make decisions about bail and parole, about people's lives and people's freedom. Definitely policing too: when we think about algorithmic policing, the deployment of algorithms on the streets, facial recognition technologies and the kinds of impact they have had, with the misidentification and wrongful arrests of Black and brown men in the United States, these are spaces in which algorithms should not be used. The EU is looking at that in a very critical way, looking at the levels of risk when it comes to algorithms. So definitely there are some such spaces, but we know that in all the other spaces, from finance to health care to education to communication, algorithms serve a great purpose, yet they could also be very dangerous if we don't pay attention to those risks, find ways to mitigate them, and find ways to build a requisite level of literacy around AI. So there are many things that need to happen. But when it comes to regulation, we know that the technology is always way ahead of the regulators and way ahead of the law, and because of the deployment of these tools in real time and the frequency of that deployment, it's as though every day something is being deployed, changed or improved, and we're realizing that the regulators cannot catch up. They are still trying to figure out how to treat with algorithms. We don't even have a standard operating procedure at the moment when it comes to auditing an algorithm, or when it comes to understanding bias and discrimination in AI. So there are many challenges, but I'm confident that there is a great community of ethicists and
activists working in the space. Fantastic. One final question and then I go to the audience. Richard, I want to come to you, and perhaps it's a slightly unfair question to ask you, but I'm going to pose to you the question that I posed to Renée, and that is: do you think that too many of the internet companies, the social media conglomerates if you like, Google, Meta, do you think they've become too big, and that there should be interventions to make them smaller? Because in a sense, isn't it a question of power? Conglomerates that are too big cannot be regulated in the ways that they need to be; you need them smaller, you need them less powerful, if they're going to be subject to government regulation in progressive ways. Or does that create other risks that make them simply incapable of regulation? What are your thoughts on that? I think it's a bit too simplistic a dimension to look at things on, that big is bad or big is good. So much depends on what area you're talking about, what application you're talking about, what marketplace you're talking about. For instance, when I look at the digital advertising space, it is remarkably more efficient and affordable than the advertising environments of the past. We now see, for instance at Google, that well more than half of our advertisers are small businesses, because they have access to advertising that is cost-efficient, that they can target to a community, and so on and so forth. So you have to dig deep and understand how that marketplace works. It really depends. When I look at the question of, say, a search engine: people can switch from one search engine to another quite easily. It's unlike a social network. I worked on Google's social network, which frankly was the most delightful failure that I think we've had, because it's a challenging space. But with social networking it's very tough for people to move from one to
the other. I couldn't get my family to move from Facebook to Google Plus because their network was there. But search engines are simply much simpler to move between. I do think, depending on how it worked out, what I would be concerned about is this: do you go from a product like a search engine that serves massive amounts of users across the spectrum of belief and thought and expression, where we don't want to create distrust on any dimension among our users, to ten different search engines? My sense is that what would likely happen is they would become associated with vertical groups of people, silos of thought. So again, it's complex. I don't think there's any singular rule that says big is bad. It very much depends on the context. So let's go to the audience now. I'm going to take any questions in the house first, and Ian will keep an eye online and see if we have any questions there. I've got two; let's go to the woman first, Samir, and then we'll go next. Thank you so much for really interesting presentations. I had a question for Renée. I totally agree with your critique, but I was also wondering about what comes next. Who is the they? Who is the we? How do we fight for better forms of justice? Because it occurred to me, also listening to Richard, that the idea that the internet was going to be such an open, fantastic place was almost devoid of history. Human history is littered with world wars and civil wars, so this idea that 30 years ago we were all going to become harmonious because of the web was, I think, a bit misplaced. But going back to my question to you: is the problem with the algorithms, or with the power behind them? It almost seems quite old-fashioned: when we're fighting for justice and equality and fairness, it is about fighting over power. It seems to be more disseminated now, in pockets of resistance. But what do you do? Who are you
speaking about with the I and the we and the they? And we have to be cognizant of what's going on, because what do we do when the power seems to be increasing? So yes, I agree with Richard, there could be multiple engines, but Google is powerful, Meta is powerful, and there is a lot of power here. So what's behind the algorithms? There are people doing this, right? And are we just doing old-fashioned activism, in terms of fighting for those who hold power to be held more accountable? Cool, I have a question, yeah. Thank you, by the way, extremely interesting. I have a question for Richard, actually. Richard, given some revelations from the Twitter files, which have shown that there was governmental collusion with companies like Twitter, Facebook and, I think, Google to promote certain information and to hide certain information, can we trust Google not to use their monopoly and their power against the interests of their customers, of the people? And Renée, the same question to you: who should we trust in these situations, and what are good guidelines to help us know who to trust? Because I think Google is always going to say big isn't necessarily bad, but we all know about economies of scale: if you're big you can often crush your opponents and still put out the idea that everything's fair. So who do we trust, and what are the guidelines to help us know who to trust? Okay, so that's two questions; just hold them, I'll come to you in a minute. Ian, is there anything from online? Yep, there are a couple of questions here; let's just take one. One of these questions is about the environmental impact of the storage of data, and therefore AI's contribution to that, AI's ability to create masses more data, and so the environmental impact of AI. Should I start with you first, Renée, and then we'll go to Richard? Sure, so we can talk about the environmental impact, and
that's one of the big questions at the moment in AI: the footprint that AI is creating is certainly not one that speaks to questions of environmental justice. So it's a live question at the moment, and the more we use these large language models, the energy it takes to do the processing of that information really does not speak well to the environment and the climate. So that's an ongoing discussion in AI. On the question about power and algorithms: the algorithm is just a formula, a mathematical formula, but it is not value-neutral, because algorithms have been developed on historical data, and historical data carries with it a memory of pain and trauma and systemic racism and many of the challenges that have created this world as it is. Many of these big companies have said that they can come up with algorithms to de-bias algorithms. It's not happening. We've got to de-bias our minds. And the fact is that we do not really have a methodology or a business model that truly speaks to inclusive innovation, so we continue to replicate and repeat all those past challenges through the memory of the data sets. What do we do? Yes, it's about power, but an algorithm does great things as well: when we think about efficiency, it has reinvented just about every methodology, every business model. When it comes to power, what I'm saying is that we need to be aware, informed and educated; we need to bring critical thinking to the things that we are doing when we use social media, when we use any of these tools. I come from a place of critical data science, which means I'm going to drill deep into all of the data sets to ensure that what we're doing with our data is really what we're supposed to be doing with it. When it comes to power, it is simply about educating yourself. We all have to use the internet; sometimes it's
just unavoidable, right? We all love Google, so we all have to use the internet. But what I'm saying to you is: understand that your data is more than a data point. Understand that your data is not only being monetized; many times we're seeing the weaponization of that data, particularly against women. The internet is no longer a safe place for many women, who feel really violated; it's a place where you don't feel safe. So one of the great things that you can do is explore data activism and see how you can use it in your own work and in your own community, in ways that let you reimagine your own relationship with data and how you move forward. On the question about trust, another big question when we think about ethics: who do we trust? I would say, trust yourself, and sometimes even that is a challenge. So it really is about, again, bringing critical thinking to what you do when it comes to the internet: understanding that much of what you are reading may not be what you think it is, when you think about disinformation. But the good thing is this: when we think about an ethical approach to the ways in which we're doing AI, about responsible technology, about trying to build trustworthy AI, one of the things that we would hopefully like to see would be more community engagement, more stakeholder engagement, and something that I'm committed to, which is public interest technology: technology should be built in the interest of the public, and the public should be involved and should partner with the private sector to build many of our technological solutions. So the question of trust is going to be a continued question of concern with anything to do with the internet, social media or AI. Richard? Okay.
So first, I just want to repeat one thing: these are critically important questions for our societies. How governments approach and regulate the internet is an extraordinarily critical thing, and keep in mind that every government, good, bad or indifferent, wants to control the information space to one degree or another. So I just think we always need to be cautious. We also need to be very cautious not to, for instance, regulate by meme. There are so many areas that need to be looked at. If you just talk about the algorithmic space, and Renée knows this well: how one looks at algorithmic accountability in criminal justice, or in medical diagnostics, versus medical insurance, versus a social network, versus a search engine, and so on and so forth, these are all specific cases that deserve attention and do not necessarily have the same solutions. Now, to your earlier point: I don't really know about the Twitter files. I know that we have not entered into secret agreements about what information we present or do not present. In fact, I think we were the first company to issue transparency reports, which we do on an ongoing basis, with regard to government requests and other requests, by police, for instance, or courts, disclosing what is being demanded of us. I think that's very important for all of us to do. When it comes to algorithmic accountability, which I think is the appropriate term here rather than transparency, people often think that means you're actually going to look at the if-then statements of an algorithm, and with machine learning it's gone far beyond that. I think it really comes down to three points, and this is what we practice today and what we think should be practiced across the board. First, are we being clear about our principles? We have a hundred-and-seventy-page document available online which outlines the principles that the search algorithm follows. Second is methodology: can we be clear about the methodologies we're
using, within the bounds of security? Because we do have to recognize that there are bad guys, and sometimes less-bad guys, who are constantly trying to trick the algorithms; that's an ongoing challenge, in effect an arms race. And the third, which I think is really key: are there established methods of accountability? Are you accommodating third-party, particularly academic, research to analyze the results of the algorithms? One thing I'd point out that is unique to Google Search: we show our work every day. You can go back through a hundred pages of results and see how we've ranked them, and people do that. I'm not going to suggest that our work is perfect; it never will be, because the underlying ecosystem is constantly changing. And I also think we need to be clear that it's not up to us to determine what is the truth, which is why we always seek to provide diversity of perspective and diversity of source in everything that we do. So I'll come back again. I'm quite struck by the conversation around awareness. One can't get away from the importance of transparency and awareness, the importance of critical thinking and information; I give you that, and Richard speaks about it. But I think there is a question to be asked: the bigger you are, the more difficult it begins to be, however transparent you are. Because if you dominate the market with 75 or 80 percent, then as an individual entering into this space you are acculturated to go to the big player in the game, and the question of size has to be something we take into account. By the way, American economic history is precisely about that: the entire legislation is premised on the idea that when you become too big, we break you up, not because we don't think you're a good person, but because we think you've got too much power, and it is in smaller conglomerates, where power is shared, that they act as a check on each other, if you like. And it seems to
me we've got to be cognisant of that point. But we've also got to be cognisant of the other point. I come from South Africa, and one of the interesting challenges in South Africa is that if you want to regulate the transport industry, it's so fractured, everybody is a single person driving a taxi, that it is impossible to regulate. You need a certain size for regulation to work at all. One of the most striking things was this: when did an internet company get brought under control? When the Chinese state went for Alibaba, they brought it under control, because they went for the top end. So size enables regulation on the one hand, but on the other hand, these companies can become so big that many times they control the state. And that, if you like, is a reflection that is going to be required on how big you should become. I think, Richard, you're right that it depends on the market and the area, et cetera, but I do think size matters, and it's something that we need to be constantly aware of, because you can be as transparent as you want, but if you've got size and you've got power, then there's a challenge. I just want to put that to both of you to think about as we go through a second round of questions. So let's start this time with the online questions. Ian, are there any questions there? Let's take two, and then I'll come for one here. Yeah, so there's a question, I think reflecting comments made earlier, about increasing the digital divide: the economic advancement of those with access to AI technology, and that multiplying democratic and social domination and the exclusion of people without internet access. And then there's also, and this might be more of a challenge for you, Adam, I don't know: have universities failed us in preparing us for this technology? Obviously SOAS is a social science and humanities institution. And there's the diminishing importance of causation when it comes to studying human behaviour, the emphasis upon analytics that can predict human
behaviour without really understanding it, straight from correlation, extrapolation, et cetera. So how can social scientists reassert the importance of causation and claw back some of the ground on this? Richard first, and then I'll come to Renée. With regard to the digital divide, I would certainly agree. I think it's unfortunate that access to these resources isn't available as broadly as it should be, or that it might be priced inappropriately to that objective. That's obviously not a business that we're in, so I can't really comment specifically on it, but we certainly do recognise the value. Look, let's also keep in mind that throughout history, media and control of the information space was always about power, right? Gutenberg's introduction of the printing press did not happen overnight; there was a tremendous amount of pushback, over decades, about unleashing that technology on the world, and we've seen those progressions time and again, as I referenced at the beginning of my remarks. Again, this occurs at many different layers and deserves thought. On the social-science question, I'm not sure I fully understand it, but one thing I've struggled with over time, and I've been doing this a long time: after the 2016 election, we were all struggling with this explosion of quote-unquote fake news and how to address it, and there were a couple of significant things there that I came to understand over time. One, right at the outset: I happened to be doing an event shortly after the election, and I was asked what we were doing about fake news and misinformation. I went through the various things that we were doing and continue to do, but I had to point something out to folks. I put the First Amendment, the United States First Amendment, on the board, which is likely as extreme a codification of free expression as we see on earth, and I said: the word truth is not in there.
So let's be careful about what we're asking any individual entity to do in terms of making decisions about what free expression is appropriate and what is not. In fact, one of the concerns I have is that I see, on both sides of the political spectrum today, a somewhat absent commitment to the core principle of believing in free expression, which means accepting that there will be heinous expression even within the legal bounds of the country. That's a challenge. The other thing was a struggle I went through. My first thought was: in dealing with the divisive extremes, how do we bridge the gap between those extremes? Can we appeal to the innate rational ability of a human being to reason? I thought that was sound, but the more I looked into it, the more I studied it, the more I read, like Daniel Goleman's book, I realized that actually human beings don't have an innate sense of reason. We are tribal beings. We first analyze a question based on what we think our cohort will want us to believe, right? If the head of the tribe says the moon is blue or green or purple, then I'm more likely to say, yeah, it is, because otherwise I might not get a leg of the calf at dinner that night. It's a challenge, you know. And in a sense, one of the big challenges of the internet is that it is intrinsically, mathematically divisive, because we can all find the information that satisfies our bias. That's a human challenge. I would love to find a way past that. On the question about the social sciences: I think, from the perspective of data science, and I will just give you our own experience at the School of Data Science at the University of Virginia, we realized that data science cannot stand on its own. It definitely needs the social sciences to ensure we bring more critical thinking to the space. That's one of the reasons that at the University of Virginia we are so committed to deploying ethical data scientists into the public space. That commitment really is to the values aspect of this technology. It's about
building an ethical resilience in our data scientists, and it's really about ensuring that the social sciences play a critical role. One of the things I always speak about is the interdisciplinary imagination that is required for data scientists to truly stretch their own imagination. So as a criminologist and a criminal psychologist, I am very committed to that social-science space and to lifting up the work of social scientists in the space of data science. There was a question of whether or not universities have failed to prepare us; is that a question for you? I think maybe not failed to prepare us, but they've got to do more work to prepare us, particularly with AI. I continue to say that the algorithm is now the way in which we are communicating. It requires a literacy, it requires requisite levels of proficiency, it requires even more critical thinking at the moment. Media literacy: I think there needs to be an extraordinary amount of investment in media literacy, and of course in spaces such as social justice, data justice and algorithmic justice. So, Adam, if you're thinking of a place to put some money, that would be a great place, because those are the things we need given the ways in which we are moving forward with the internet, and of course with AI. I can't resist saying a little bit about universities. I actually think we failed really badly. Firstly, I think we came to researching these technologies far later than we should have, and frankly we have not used the technologies in the most inclusive ways that we could have, and I think both of those are issues on which we failed. I would also say that part of the challenge with universities, both in the US and in the UK, is that we're so driven by money, by trying to break even and make sure the balance sheet signs off at the end, that we've sometimes lost the purpose of our mission. I think that challenge is something we'll clearly pick up; we're trying to catch up now, but the
internet revolution took us by surprise, as much as it took everybody else by surprise, and we've been playing catch-up since then, right from the beginning. I think it's worth acknowledging that, and figuring out how universities would need to be structured so that they could act as a serious counterweight. I saw both hands here, and I'm going to go one, two, three; I'm just checking my time, so let's take the first two and then I'm going to go to you. Thank you. Two very quick questions. Renée, I see a lot of energy in you, and you talk a lot about data activism. Is there a story behind it? What is your passion, what is driving you? That's one. The second is for both of you. I know a person who is quite powerful, he's got an OBE, and a few weeks ago he showed me a Google search for beautiful babies, and what came out was only white babies, nothing else. He shared with me the story of how he fought for a long time so that now, when you type that, you also get Black and brown babies of different colours, and he said he fought largely alone. He is quite senior and has worked with Google, Microsoft and so on. I tried to replicate that in the chat: I asked who the top strategists are, and only US Americans came up, and then I had to feed it and make it learn, saying, why don't you talk about South Africa, India, Japan and others, and it started to learn. But I'm just one person, right? How do you take this power to make AI not repeat the mistakes of the past, but learn from them and do better? I also have a question for you, Renée. I wanted to find out, looking at your work with regard to data and the internet: do you see a specific pattern that maybe is driving mankind, looking at the evolution? And you've talked about the issues, the dangers of the internet, in terms of literacy and awareness. Looking into the future, what do you think would happen to the internet if it's left neglected going forward? Hi, so earlier you quoted Aldous Huxley at great length, and I remember
that Huxley had a book called Time Must Have a Stop, and this idea that every technology brings in its wake the sense of its own ending. So, for example, it's commonly said that the invention of the ship brings in its wake the idea of the shipwreck, and so on and so forth. I'd like to relate that to the point that was made about all technologies, when they're invented, having been about power, about convenience, about persuasion. So is it not an obligation, even if this is a more cognitive technology, for technologists to think about their endings? Because every technology has of course been superseded, or been marginalised, taken over by other forms. So that first question is about how we articulate thoughts of obsolescence. And the second has to do with the evolutionary question, which I think rather got lost in the conversation. The idea of the meme came from evolutionary biology, came from Dawkins, and it has been transformed greatly and forgotten. So in some senses, thinking about evolution means thinking about the death of things; it means thinking about future ideas. That's where I see that academia may be important, because it has the vocabulary to think of these things. I don't live in the West, I'm from India, but I just wondered, because I teach at an institute of technology and I'm very interested in these questions. I have one brief question, which has to do with the fact that in our Google searches we've always found that husbands and wives come up in a search for every known individual; it's about the third and the fourth result, and we're trying to investigate why it is that husbands and wives are of such great import, even now in the 21st century, that in every Google search they come up third or fourth. That's just a question we're investigating. And then I'll come to you, and then to Richard. Thank you. Hi, I'm Sue Black, professor of computer science at Durham University and
also deputy president of the British Computer Society. I've run loads of programmes working with women from underserved communities and helping them into tech careers, or into tech generally. I'm going to become president of the British Computer Society next year, and my theme for the year is going to be educating the UK. There are so many really interesting points that have been talked about this evening. I'm going to be putting together a programme to hopefully go out to everyone in the UK, probably over 10 weeks, 2 hours a week, in local libraries and schools, to take them through a programme that helps them understand what's going on in the world in terms of technology. My question is: what should be in that programme? Because as academics or professional people we can discuss this at length, but I really want to get out to the everyday person and help them know and understand what's going on, so that they can make their own judgments about what they see, how they can tell what's fake news, for example. So what should be in that programme? So I think I'm going to start with you and your question about my energy, and I guess my passion, when it comes to this space. I think this is a great time for research. AI is an amazing technology. As I always say, I was not there for the invention of the printing press, but I'm certainly here for this. The thing that drives me is that I come from the criminal justice space, where we've been using algorithms and algorithm-powered risk assessment tools to determine who is worthy of bail, who should or should not get parole, sentencing questions, and just the ways in which the criminal justice system has used data to define what a threat looks like, what a risk looks like, what something dangerous looks like. Those are the things that brought me into the space: the fact that algorithms were creating these zombie predictions about Black and brown men that really were not accurate,
and we were using these tools that were over-promising and not delivering. But the other thing about an algorithm that brings me to this space is that an algorithm has the power to create access, to create opportunities and to create resources, but it also has the power to deny access, deny opportunities and deny resources. I come from a place of legacy-building, and what I'm realizing with algorithms, and I say it all the time, different communities historically have experienced data differently, is that they have the power to deny legacies, to destroy legacies. When you deploy an algorithm, you're really deploying a legacy, and one of the things I want to see with AI is resilient and sustainable futures being built. On your question about the internet and whether I'm seeing any pattern: the only pattern I'm seeing right now is that we need more critical thinking, because too many people are going on the internet and losing themselves, finding themselves in danger, particularly children. So I am committed to media literacy, committed to critical data science, and committed to critical thinking, to build that awareness, that education around information. The other thing about the internet is that we are only fed a certain amount of information, because all the other information sits behind paywalls, and you've got to pay an extraordinary amount of money to get that other kind of information. Just think about what you're doing on the internet, think about how it's impacting you, think about the ways in which you are using it, and think about, for me, the questions around justice and the groups that are being denied a voice. On your question of what to put in the programme, and that's a big question: I think you've got to put in a little bit of everything, all at the same time, because it is so important to ensure that people understand the power of the technology to disempower, and that is so critical. To you in the back: fantastic
suggestions that you made. The question about why it is that husbands and wives always come up third, or third or fourth: I think that's because people are nosy, and people just want to know those things, and people probably search for them. It's just, I guess, what I would call pop culture. As to your question about things becoming obsolete, things dying, things being reinvented and reborn again: those are big questions, and I think more and more we need conversations like these. We need conversations for change, and we need to take these conversations outside of academia and into communities, because one of the things we're realizing most when it comes to this technology is that these types of conversations have got to be happening at the community level. It's about civic engagement, it's about civic empowerment, it's about that commitment, for me, to public interest technology, and about ensuring that many of the tools we are designing, developing, and deploying really bring together that collective intelligence. I'm very committed to collective augmented intelligence, and very committed to stretching the imagination of data science with an interdisciplinary imagination that really brings diversity, equity, inclusion, and justice into the design space.

Richard, I'm going to come to you, but I want to stress one question that Renée answered, because I thought she answered it at one level, and there is a second. When you raised the issue of biases, you raised it, as I understood it, as it pertained to race, and that's real, and we've spoken about it. But you also used a second example of bias in relation to geography: how biased the answers were in reference to the United States rather than other places. And I want to pose that question to you, Richard: how does Google confront that bias? How do you reflect on it, and what do you think we can do both to address the biases against minority communities, the
racialization that plays out there, and the patriarchy and so on, and at the same time address the geographic specificities that were raised as well?

So I think we think about biases all the time. With a search engine it's always a matter of improvement. We largely feed off of what is public expression on the internet, and obviously sometimes that can reflect poorly on the actual nature of our societies. That's why I go back to the value of research and accountability. I can't stress how closely we listen to the feedback we get and seek to correct things that are incorrect or inappropriately balanced in what we do. It's not perfect; as I said, it likely never will be, given the evolution of the ecosystem itself. But it's critical that we hear the criticisms, and it's critical that we respond to them. In terms of critical thinking, I couldn't agree more. Media literacy: couldn't agree more. I would only say media literacy shouldn't stop at the schools; media literacy is a problem across our culture, particularly today as media environments change. On the notion about spouses: here too, what we surface is what is known about an individual, so it's not about spouses surfacing third or fourth. Often we surface what we call a knowledge card, or we might surface an entry from Wikipedia, that shows siblings and parents and spouses, if they exist. As a journalist I find that very useful for understanding the connections between people, but that is indeed how it occurs.

I'll just close with the first question, about the importance of technologists, first of all, being responsible in their inventions. No question. I would hope everyone is, in this regard; I think all of us, in our work, need to be responsible about the ethics of our work. As for the question of thinking through to the end game, or the end of times, for a technology: that's a fine thought, but I think it's virtually impossible, because if you look at the evolution of technology, what you often find is that technology created for one thing gets used for something else, and technology is evolving so quickly that the interactions between technologies, and their impact on societies, and their use and value, change over time. So I think it would be delightful to understand what the path is for any technology created, and we should carefully think about the consequences of the technology we create, but I think that's an objective not likely to be easily or readily accomplished over time.

I just want to add to that question: there's a whole field of AI that looks at decolonizing AI, which is so critical to dealing with that geographical question, and of course looking at algorithmic reparations, the fact that we need to think of that as well. I think, for me, the only thing I would like to say is that I just want you to think always about the power of an algorithm to do these great things, but also to think of the power of that algorithm to undermine your civil rights and your human rights, your agency, and of course questions around autonomy. And just think about legacies, and about the fact that an algorithm deploys a legacy, and how that could impact not only you but your community and generations to come. The other thing I'm going to say is that much of my work is on data trauma, and when we think about data trauma, just think about the ways in which data has been historically, and continues to be, used; about the ways intergenerational trauma continues to move through many of the data sets that are being used to build the models and the methodologies we're using at the moment.
That's a good point to stop, I think. It's going to be difficult to summarize that, except perhaps to highlight things that I think come out of this conversation. One is that the internet has been promising and remains promising, as does AI, but it also raises huge challenges for the human community and for our future. If there's anything that needs to happen, firstly there is, I think, an acknowledgement that some regulation is required. The form of that regulation needs to be debated; it seems to me that the form of the regulation needs to enable particular outcomes, but regulation has to be on the cards. The second is that algorithms are a feature of the AI world and the internet world, and they do leave the kinds of legacies you've spoken about. And while I think what Renée stresses about transparency, awareness, and understanding is absolutely important, I also think that there have to be consequences for companies and individuals that behave in particularly malevolent ways. So, to use the example of the vulnerability of young children: if there are people on the net preying on young people, then not only must young people be made aware of it, but there must be consequences for the people who do that, and that means we need to adapt our criminal justice systems in ways that are able to compel the behavior that is required. And then, finally, I do want to say, as we're ending, that we are in a historical moment where all of our challenges are transnational. They are all really serious challenges, and we are not going to survive as a human species the next century or two if we do not develop the capability of cohering as a human community, of coming together as a human community, and the human community has never been as divided as it is. In a sense, one of the ways we regulate the internet, the way we think through the algorithm, is to try to mitigate the divisions and enable that cohering, because without it I don't think we survive as a human species.

And so I want to bring this conversation to an end. Thank you, Richard, for doing this from another part of the world and for making yourself available; we are particularly grateful. And thank you, Renée, for coming here and being face-to-face with us. These conversations would never happen without the incredible participation we have from our audience, so thank you to all of you, thank you to the team that has put this together, thank you to our speakers, and have a wonderful evening tonight.