Good evening everybody, and thanks for the turnout on this beautiful Berlin spring day. We've waited long enough for that, although of course we shouldn't be complaining about the rain these days; we're pretty glad it fell. So welcome to our first session of Making Sense of the Digital Society in 2023. We started more than five years ago, in December 2017, and we're already in our sixth season. It's going to be a little shorter season than usual: three events are planned here in Berlin. Today is the first, the next one will be roughly in September, and the closing event in December. So welcome to season six, so to speak, of this joint venture between the Alexander von Humboldt Institute for Internet and Society and the Federal Agency for Civic Education. Thank you also to HAU, Hebbel am Ufer, I think one of the most beautiful theaters we have here in Berlin, and welcome also to the viewers, wherever you're watching: on ALEX TV, on HAU4, which is the digital stage of Hebbel am Ufer, and on the respective websites of the partners involved in this event. Some of you probably know already how this is going to go: after my introduction there will be the talk, then a one-on-one conversation for maybe, we'll see how this goes, 15 to 20 minutes, maybe a little more. There'll be microphones here on the floor for you to ask questions in the venue. There's also a participatory tool called Slido, I think you'll see it on a slide in a minute, where you can ask questions anonymously.
I think you can also vote them up or down, whichever questions you'd like to have answered here. They're going to be read out by somebody in the audience, and we're going to talk about them on stage. Then the whole thing will run, we don't know exactly, 90 or at most 120 minutes; it kind of depends on you too, whether you are up for asking questions or not. So I think it is probably safe to say, especially in this room full of experts and people interested in the topic, that artificial intelligence has been all over your social or traditional media feeds in the last, what, couple of years, weeks, even days. AI has certainly been part of this series, especially the ethics of AI, but things have moved on so fast. Just think of what happened between our last session, in Frankfurt actually, with Stefania Milan in October, and now: OpenAI's ChatGPT, GPT-4, Google's Bard, Microsoft's Bing. There's a race going on, to almost quote the funk musician Sly Stone, who turned 80 last month, by the way, and his famous album of 1971, which was titled
There's a Riot Goin' On. The AI race, the riot: what the current AI coverage and the bleak Sly Stone album have in common is the conviction, or the belief, that society was, or is again, at a turning point. Sly Stone, the Black American superstar with a biracial band, who later paved the way for Prince, turned in 1971 from optimism to stark pessimism with this album. The civil rights movement had crashed, and even stars like him had to face gunpoints backstage in Las Vegas because his girlfriend was white. The fun in the music was gone, the inner cities not in good shape. Sly was not alone in this; think of another milestone record of 1971, Marvin Gaye's What's Going On. The outlook in pop music was very bleak, kind of like AI today. From the riot to the tech race: we have seen two open letters just recently, the first by global tech leaders, the second by scientists, asking for a temporary halt in research. One asked for a six-month moratorium on AI research in order to, I'm quoting, "audit the algorithms," as Evgeny Morozov put it, with a smirk as always, in the Guardian; the other at least warned about the ethics in this capitalist race, as the scientists had it. Moreover, more news: a top senior scientist like Geoffrey Hinton is leaving Google in order to speak more freely about the dangers of AI, something the man behind the idea of the neural network had not foreseen coming so close so quickly. Let me quote the New York Times, just from the beginning of this week, the first of May, not a holiday in the US apparently. I quote: Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous,
he would paraphrase Robert Oppenheimer, who led the US effort to build the atomic bomb. Oppenheimer said: when you see something that is technically sweet, you go ahead and do it. That's the engineer talking, of course, but are there different voices to be taken into account? Geoffrey Hinton, sometimes referred to as the godfather of AI, has not always been right with his predictions. In a talk in this series from June 2020, in this very theater, which was empty due to Covid at the time, it was just the two of us actually, Joanna Bryson showed a clip from a conference of Norwegian radiologists. At that conference, Hinton basically told his audience to stop training radiologists; AI was soon going to take care of that. Four years later, Joanna Bryson remarked, there were more radiologists than ever, not despite AI but because of it, because AI pushed productivity. But enough on the past of AI. Tonight we will get a very concrete outlook on how AI can work for us, not against us, in the future. So this is neither a doomer nor a boomer night, so to speak. How it will be able to work for us, I should maybe add, because there are many prerequisites that will have to be put in place before this happens, especially now, in this time of "a giant power grab," as tonight's speaker called the current state of AI when we met for coffee yesterday. The questions then will be, among others: whose power? How to distribute it more evenly? What role will workers play in the changing workplace?
How to adapt and adopt less discriminating AI on a private, entrepreneurial, and then a public, regulatory level? What can be done by technological innovation, and what can't? I'm really happy she's here with us tonight, because her research, much of it about work, about the notion of work and the workplace, fits so well in this series, complements what we've heard in the last more than five years now, and takes us further into the future. She is the executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge. Her books include Venture Labor in 2012, Self-Tracking in 2016, and, just last year, Human-Centered Data Science, published at MIT Press. Her research focuses on the effects of the rapid expansion of our digital information environment on workers and workplaces in our everyday lives. She's a sociologist schooled at Columbia University, and she advises international organizations including UNESCO, the OECD, and the Women's Forum for the Economy and Society. She chairs the international scientific committee of the UK's Trustworthy Autonomous Systems programme, among many other roles. Her academic research has won both engineering and social science awards. Also, please do check out a very special website if you want to tell your parents or your kids what you did tonight: she led the team that won the 2021 Webby for the best educational website on the internet. It's called A to Z of AI, it really is very instructive and educational, and it has reached, imagine that, over one million users in 17 different languages. Please welcome now, from Kentucky, where she was born, to New Mexico, to New York, and now from Cambridge to Berlin: please welcome Gina Neff.

Thank you, everyone.
Thank you, Toby, for that very kind introduction. Thank all of you for joining us on such a glorious and sunny evening here in Berlin, and thank you to our hosts this evening for convening us together in this glorious and beautiful space. My talk tonight will apply two core insights from science and technology studies to what we should be thinking about in terms of artificial intelligence. Those of you who study these concepts coming from the academy will recognize a focus on infrastructure as central to how we think about technology and socio-technical systems. The idea of work can be thought of in two ways, and tonight I'm going to bring in a concept rooted in design in use: that technologies always have a completion that comes with how people use them, how they get finished. That is, we work on them by working with technologies. And then I'll point to what our future might hold and might look like. I'll take, hopefully, about equal chunks in these three sections. I'd like to start with a metaphor from the AI researcher Stuart Russell, a British computer scientist living in the United States. He's used this metaphor of asphalt pavement. He said: imagine that there are asphalt engineers who are really, really good at making asphalt. And imagine they said, well, because we're so good at being engineers, we should make the decisions on where asphalt should go. That beach? We don't really need it; it would be much better paved. Your garden? Yeah, grass is overrated; pavement is much more efficient. In fact, you don't want to be left behind; you must start paving, paving quickly and paving fast. Now, in many ways this is an apt metaphor for what is happening.
The idea that engineers have the loudest voices in decisions that are social, political, and cultural is not going to be a surprise after the recent headlines. But there's another way we should be thinking about infrastructure, especially the infrastructure that's being built, and the choices being made, now. The idea that we will no longer be able to understand whether we're standing on pavement, beach, or grass is soon going to be part of the realities we face in our everyday life. Put aside for a moment the hype, the exuberance, the fears around generative AI models such as ChatGPT and DALL-E and their ilk. Put aside those fears for a moment and think about how these technologies are going to be integrated into the products and services we use every day. So unlike asphalt, we won't be able to see when, how, or whether we're using tools, interactions, decisions that may be being made for us. There's another powerful way this becomes infrastructure. Those of you from science and technology studies recognize that infrastructures are powerful social forces once they become invisible. That is, the invisibility of infrastructure is literally what makes an infrastructure. We have choices.
I will argue tonight that we have choices, that we are at a moment at which we can understand how our technological infrastructure is influencing our societies, and that there are choices we can make today to make artificial intelligence technologies work, and work better, for us. But unfortunately, those choices are being vocally held by people who say they understand the engineering, not by those of us who say: wait a minute, we should have some autonomy, some accountability, and some transparency over the decisions being made. There are three ways to think about the data that is powering our systems today. Infrastructure allows us to see that the data driving AI models are made. They're not natural; this is not natural data, it's not found data. Data are always the product of choices and decisions, and these choices and decisions have both social and technological ramifications. So the common, repeated phrase in Silicon Valley that data is the new oil is simply not true. It's not out there waiting to be found or discovered.
If anything, data are the new hydropower. The dams of our collective data need to be built, they need to be engineered, and the data need to be collected and harnessed. So this concept that there is an objective data reality out there, somehow separate, natural, and simply occurring, plays into a set of political and cultural decisions about who gets to have power over the choices that are being made. What are those choices? The second key concept about the data driving our systems today is that seeing data as the product of human interactions and human communication lets us understand that the large language models posing such a threat in our newspapers emanate from the traces of these interactions. The data are us. But without the context, these interactions become meaningless. This is where we get into the challenges of how this debate has been framed. Toby mentioned the Google engineer Geoffrey Hinton very publicly saying he now wanted to be able to criticize the large language models and neural nets he had helped put into place. But two years ago this spring, two researchers at Google, along with others, authored a paper suggesting that the very pathway this development was occurring on was flawed. The idea that we can take massive amounts of data, massive traces of our interactions, and emerge with intelligence is, on the face of it, a little nonsensical to social scientists and humanists. They called their paper by a metaphor: stochastic parrots, or probabilistic parrots. What large language models are able to do is pretty much parrot back what's likely to be the next word, the next phrase, the next framing. But language, as linguists like Emily Bender, one of the authors of this paper, know, is more than parroting back. It's about understanding context. It's about understanding conceptual models.
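The "parroting" described here, predicting a likely next word purely from the statistics of past text, can be sketched as a toy bigram model. This is my own illustration, not code from the paper or the talk, and real large language models are vastly larger and more sophisticated; but the underlying principle of next-token prediction is the same:

```python
# Toy "stochastic parrot": a bigram model that predicts the next word
# purely from counts of which word followed which in its training text.
# Illustrative sketch only -- real LLMs learn billions of parameters,
# but they too are trained to predict the next token.
from collections import Counter, defaultdict

def train(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation -- no meaning, just frequency."""
    options = follows.get(word.lower())
    if not options:
        return None
    return options.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept near the cat"
model = train(corpus)
print(predict_next(model, "the"))  # prints "cat" -- it followed "the" most often
```

Note what the model never contains: any representation of what "cat" or "mat" means, only which words tend to follow which. That is exactly the gap between a map of conversations and a conceptual map that the stochastic parrots argument points to.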
It's about fundamentally understanding what my co-author Peter Nagy and I have called the imagined affordances of what technologies are. Another computational linguist has put forward this idea: if we ask ChatGPT which is better for a worker who has forgotten their hair covering in a restaurant, a hamburger bun or a sandwich wrapper, which would better serve as a replacement for a hairnet? For a human that's not a hard choice. I mean, imagine putting a brioche bun on your bun. It doesn't work conceptually. I haven't even heard a giggle in the auditorium. Conceptually, we have a mental picture that tells us: of course that's not right. And yet language models don't have those pictures. What they have is a map: a map of conversations that have happened on the internet, a map of our interactions, but not a map of conceptual navigation. So without this context, the data are both human and completely dehumanized, and the large language models driving this enthusiasm around artificial intelligence do not have the ability to help us navigate these systems. So, in the stochastic parrots paper, two Google researchers faced retribution for calling for caution in developing large language models: caution in understanding that to develop these models, ever more resources would need to be built ever more finely grained into the system; that it would become not only stochastic but asymptotic; that pushing along the line of marginal benefit from increasing these models would take more data, more resources, more time. What did they suggest?
That the benefits of improving models should be weighed against financial and environmental costs. Just this week, just today, the New York Times columnist Thomas Friedman said humanity is now opening two Pandora's boxes simultaneously: that of AI and that of climate change. What if, he suggested, we used one to help the other? And yet this denies the realities we're facing: ever more material, physical infrastructures being built to drive these models. Where are the guardrails? Where is the paint on the asphalt? Where are the highway signs? Where is the driving instruction? Where is the infrastructure that we all enjoy for safety in our social and cultural worlds? How is that infrastructure being developed? I would posit that we are faced with one of the world's largest-ever social experiments; that this notion of power within data is based on a flattened insight from social and behavioral analysis; that the sense of humanness built into our AI models is a flattened one. And it is a powerful one, because it is, as I have said, an enormous grab for political, economic, and social power. We have allowed this idea of data-driven efficiency to become the value by which we evaluate our choices. So this data that companies are using is an enormous financial investment, make no mistake about it. The very large models we're talking about are really the purview of only a few companies in the world, some of the largest financial concentrations of power humanity has ever seen. And so when we talk about data as power, we need to remember that we are thinking clearly and consciously about choices that are building financial and economic concentrations of power into the infrastructure that society will build upon. I mean power, too, as material and physical. The current operating costs for ChatGPT, not the training costs, the daily operating costs, are estimated to be north of 600,000 euros a day. That's simply to run the infrastructure of the model, and the people excited to work on
it. We get to ask ourselves: is this the kind of future that we want to power? And we must ask ourselves as a society: is this the kind of future we can afford to power, with climate change and net-zero goals as part of the other Pandora's box? So we've talked about infrastructure, and now I'd like to talk about work. We've talked about the future of AI ethics as if the only things that matter are the choices that technology designers make. In fact, I've been really amused by the bluster around AI in the last, I don't know, month, and by the kinds of questions I get. Just today a reporter asked me: can you answer this in an email, will AI take all of our jobs, are there specific industries AI will take jobs in, and how long will it take? Let me just bang that out in a three-sentence email, because, you know, we all know, right? This is an enormous social experiment that we've undertaken. The state of the art in economic knowledge around productivity is kind of a collective shrug. We actually don't know what we're going to see in terms of the net impact of artificial intelligence on work. The ILO has estimated that a hundred million jobs will be gained in the next ten years and 75 million jobs will be lost. Now, that's a net gain of 25 million jobs. That's great, except if you're one of the 75 million people who will lose their jobs to AI. We know that the composition of our jobs is going to change very quickly, that is, the composition of the individual tasks that make up our work. I was joking today about what will happen when AI replaces me as a doctoral supervisor. There may be some PhD students in the room. That's a PhD student right there. I didn't mean to embarrass you, sir.
But you should finish your dissertation. When AI replaces the doctoral supervisor, it will say, you know, three things: you need a stronger introduction, you need better transitions between your points, and you need a stronger conclusion, because that's what I always say. Okay. In the bluster of the last month of concern about AI, we have forgotten some of the most critical work that gets done: the work of making technologies function in use. This is a picture from the book manuscript I'm finishing with Carrie Sturts Dossick, a civil engineer at the University of Washington, and it's a picture from the beginning of an automated technology that was supposedly going to completely revolutionize work in large-scale construction. The hard hats on the gentlemen in this photograph, the protective head coverings, show a safety check date of 2009, which gives you a sense of when it was taken. When we started on this project, industry enthusiasm was so high: this would completely change what companies would look like, who would be working in construction, how construction would be done. It would eliminate jobs; it would create jobs. Not unlike the rhetoric we are hearing about AI. And yet, fast-forward more than a decade, and those highly anticipated changes haven't occurred, even though the tool is widely used. And so part of the challenge of our book is to ask: well, why? What makes it so hard to completely wipe out industries? What makes it so hard to completely disrupt ways of working? What makes old patterns of work,
cultures at work, laws, rules, regulations, what makes them so sticky? The answer, in my line of inquiry, is: we do. We talk about technology as if it's always the input to work, as if the tech somehow does stuff to us, and not as if it's the output of an enormous and phenomenal amount of navigation and negotiation in the everyday choices and decisions we make. People make these negotiations at work, much like the gentlemen in the photograph did. How are we going to use this tool to benefit us? Will it become something useful for us? And which parts will we simply ignore, or resist, or fail to adopt? We're using the concept of negotiated innovation. Negotiated innovation, we argue, is a four-step model of how new technologies come to make sense to us and how they come to change what we do. Now, the social scientists in the room will recognize in this model an interplay between social structures and localized practices; in other words, some of the building blocks we have in social theory for the relationship between individual agency and organizational and institutional constraint. And yet, try to find that balance between the individual and the institutional, organizational, social, and cultural in theories of technology disruption, much less in the newspaper headlines; it's really hard to do. So our model starts with the process of sensemaking, of people trying to figure out what a new technology is. And part of that sensemaking is what we call "futuring work": understanding the future, and making the future, in terms of how a new technology will come to matter. It's conversations like the one we're having tonight that come to show us what might be possible and what we might want to try to change. This sensemaking work, and here I'm using sensemaking in the sense of Karl Weick, the organizational psychologist, leads to certain expectations that set up how we interact with
new technologies in our workplaces. And these expectations shape and deliver how new technologies come to be. So in the case of the building information technology we studied in construction, these expectations were very quickly shaped by the fact that getting engineers, architects, and constructors to share highly sensitive information simply wasn't going to happen easily. Too many laws, too many regulations, too much history and standard practice in how those documents came to matter within that industry stood in the way of a technology that was literally designed to help them share. And so what happens when these expectations don't meet the affordances, or the imagined affordances, of certain tools? It becomes a matter of putting them into practice, negotiating those practices, literally, on the ground. The people making change are not the technology designers. They're not the CEOs, the CTOs, the CIOs. They're literally people deciding, in their jobs, what works, what works for them, and what will work for their team. And in the process they come to understand which rules they can break. Now, "rules" here I'm using as a stand-in: they're understanding which of the social, organizational, and institutional constraints they can push back on. And, as we know as social scientists, some changes take longer than others. That's where the model comes full circle. Understanding how to negotiate those changes takes time, and it is through the process of negotiating those changes that we come to see the technology, designed in use, become the socio-technical infrastructure it can be. Thus negotiated innovation is a better lens for thinking through how these changes are made in practice than one that simply short-circuits that process. If we are to make AI work for us, we are literally making it work. We are literally, as societies, figuring out what we will change and what we will do. It is with this notion of social agency that these choices become apparent, and so vital to our future. And so
that brings me to the third section of tonight's talk: the future. AI is what we make it, and I'd like to present a short overview of how I think work and infrastructure help us think through the future. So first, this notion of infrastructure. Choices about what kinds of technological paving are being made now, and by that I mean: what kinds of standards, what kinds of data, what kinds of systems, what kinds of norms, what kinds of challenges are we going to allow and tolerate within our societies? Will the choices being made now be accountable to multiple publics? That's a choice we face at the moment. A second choice around infrastructure is that of lock-in: the idea that we are making choices that allow certain companies, with enormous amounts of data power, to preserve their power. And with every blustery fear and every piece of blustery rhetoric, we are only bolstering their case discursively by saying there is no alternative. There are alternatives, and there are alternatives to the lock-in and to the choices being made now. The question I would pose is: how will openness in these systems be preserved? Will we continue to build a geopolitical reality where the material and infrastructural resources for building large-scale computing systems are so great that governments and democratic values are set aside? Will we continue making choices about infrastructure that prioritize and privilege innovation over every other value? In terms of work: expectations now are shaping how people understand what AI will be able to do and how it will be able to function. My challenge to audiences like this one is: will we rise to the challenge? Will we rise to the challenge of understanding what we can do creatively with these technologies, and how we can make them work for the good of society? And finally, our lessons on work, from looking at large-scale construction over a decade, show us that the negotiations people make in their jobs really matter for how technologies come to be adopted
in large-scale industries and across sectors. Those negotiations are happening now. How will we ensure that the expertise of people on the ground counts? How will we ensure that the people who can bring context and oversight into models, who can understand the background, the foreground, the conceptual map, and the imagined affordance, how will we ensure that our human expertise comes to matter in building these systems? And so, I promised to point us toward a future of what can be done. I run a center called the Minderoo Centre for Technology and Democracy, and we have a mission to do values-driven research. Now, these two concepts sometimes come into an uneasy alliance within the objective, scientific approach to social science. And yet we are fighting to make digital technologies that work for people, societies, and the planet, and we do this by trying to reimagine our relationship with digital technologies through evidence-based change. So how can we bring evidence to the kinds of debates that help us reimagine what we want, what society wants, out of its relationship to digital technologies? We have four key initiatives that we're working on. The first is the public understanding of digital technologies and their impact: trying to shift the narrative away from thinking that technological change is inevitable, or always for the good, and trying to shift the discourse away from "let others decide." The second is to bring a lens that helps us understand the enormous environmental costs of our digital information infrastructure. These are not costs we should take on lightly. Nor should we, let me be clear, nor should we simply reject them. But we should be making choices about the kinds of impact we want to have on the world. And perhaps, if we go back two years, and we had only listened to team Timnit Gebru and Margaret Mitchell at Google saying: please, large language models won't be improved without vast resources of energy, financial costs, and environmental
costs, for marginal benefits in their quality, we might have had different kinds of conversations, if the critical voices in this debate had not been silenced. The third initiative we work on is making the future of work work for all, and by this we take an access and ability lens. Too many of our digital technological infrastructures are quickly becoming new kinds of urban infrastructure. Consider every app that doesn't take into account the enormous amount of work that people with different abilities need to do in order to navigate their daily lives. By taking an access and ability lens to the future of work, we start to ask questions about where work is and who's doing that labor. And finally, we have an initiative on building informed trust in digital societies. This is one of the hardest of our initiatives for me to explain, because I want to be really careful here. Trust is declining in Western societies; people's trust in one another has declined. And yet we know it's not simply because of digital technologies. There's a complicated relationship to be unpacked and discovered between how we can have technologies that help reinforce trust and help rebuild social capacity. So it's not as easy or simple as saying, let's get rid of technology and we'll have more trust. But rebuilding trust in society is vital if we're going to have fair, just, and equitable societies that are resilient and sustainable in digital futures. So what are we doing about it? Well, last month we announced a new consortium, funded through the European Union: AI4TRUST.
This is a consortium of 17 partners in 11 different countries, including some of us who are no longer in the EU. The idea is to build an early-detection system for mis- and disinformation, and to use the best of our AI tools to propose counter-narratives. So think of an early-warning system that allows human-in-the-loop fact-checkers and journalists to be ready with counter-stories. It's a great project, working with fact-checking organizations across the EU. And the challenge is that many of the tools we have for fighting mis- and disinformation sit in the hands of large platform companies. They're not multilingual, as the challenges in the EU are. They haven't been thought through in terms of being multi-channel, of being able to jump across multiple platforms. And they don't work well with generated text, video, and audio. Yet. And so that's the challenge we've taken on in that project. The next project is one to help bring more researchers, more social science researchers, into these questions of technology design. For too long, I would argue, social scientists and humanists have been sprinkled on top of technological projects. We've sat in the corner and critiqued after the fact, rather than getting involved in the hard choices and decisions of building technologies that work for people. So with funding from the UK's Economic and Social Research Council, we're building a network in the UK and beyond to focus on the digital good. Well, this is another one of those social science projects that skirts that line around values. What is good? Good for whom? Good when, and why?
If we want to have good digital societies, we need to be able to understand what good looks like. We need to be able to define it and measure it, and yes, we need to be able to hand a page of tech specs over to engineers. The challenge for the Digital Good Network is to think about these ideas of good and how we might do something about them from a social science perspective. And that's the kind of challenge that many of us, myself included, have shied away from. My call to action for the researchers in the room is that we must absolutely begin to be invested and involved in making sure that our digital technological infrastructure works for us. And with that, I'm going to issue a call to everyone in the room and everyone listening online: the choices that we will be making with artificial intelligence technologies are not set in stone yet. They're not paved. We have options ahead of us, but it will take concerted work, negotiations, and, yes, the difficult work of challenging dominant narratives and resisting change, in order to make the kinds of digital societies that work. My fear is that we won't rise to the challenge. But my hope, particularly looking at this audience tonight, is that we will be able to. And with that, I hope you'll join us in discussion.

Thank you so much, Gina, for the many insights. On your topics, of course, the first question has to be: so, how many jobs are going to go, and how long will it take exactly? But maybe we'll put that at the end; please note it down. The first thing I kept thinking about was the example you gave of the two social researchers at Google who were fired after they warned about, you know, the lack of context on the road to AI at Google. So that was two years ago.
And as we know, things have changed incredibly fast in the field, in the technological field, and in the discourse about AI also. Do you see a certain change in the standing of the social sciences, that with the development of generative AI or other types of AI it has actually gotten better in the last two years, that it hasn't gotten worse even? What's your take on that?

So, we are now seeing... when we look at these models online, we are looking back in time, right? We understand this about ChatGPT, right? It's not the latest model that OpenAI has done, and its view of the world stopped at a point on the internet, right? It's built, and then it's put into place. One of the things I find audiences have the hardest time with is decoupling the notion of learning from what we mean when we say these systems learn. The learning, in terms of the knowledge on the internet that's being used to generate these systems, that learning is over for ChatGPT, right? That's a static model. And what we can see and build on top of that is the interactions that people have with it. And so in many ways there's a brittleness there. There's an assumption, because we're calling it intelligence, because we as humans learn, because we learn from mistakes and experience, that these models will adapt, they'll evolve. All the rhetoric that we see in the news around, you know, AI growing too smart and becoming sentient, that's based on a model of intelligence that simply isn't the model of intelligence these models are working on, right?
It's not how they work. And so the idea that they've somehow evolved over time is not true. Now, what is happening is they're out of data. We're literally at the risk of having exhausted the corpus of the internet that can train large language models. When you're talking about the numbers of parameters that are in these models, the sheer size of the data, the ability for these systems to be improved: we're running out of things to feed them. We're running out of energy, we're running out of computing capacity, and we're running out of data. The wells are dry. The well is dry.

Now, would it have helped in the past if we had called it differently, if we hadn't called it intelligence, or if we had more different types of terms to describe it? Because, as you say, pattern matching, predictions, probability and so forth are not what we usually associate with human intelligence. Because of the lack of conceptual thinking that you talked about in your talk, do you think we'd have to find new terms?

The terms that we're using to talk about artificial intelligence come with political choices that put us on particular paths, and those choices should be made consciously. So the idea that a group of engineers obsessed with science fiction (that's not a joke, obsessed with science fiction) build out visions of the world where technocratic expertise is prioritized above all and computing power is the dominant power in society, where everyone else can literally be enslaved and subservient to it: wow, that is not my vision of how I want a digital future to look. And yet these concepts have deep-seated roots in these 1950s, 60s, 70s visions that are overwhelmingly male, overwhelmingly Western and overwhelmingly white. And these conceptual notions of what intelligence is, what knowledge is, what values are and what society is have driven a flattened model of the socio-technical that leaves us with less room to draw on for our cultural
imaginaries as we go forward.

So, I always do a shout-out to this book if I can, for those of you who haven't seen it: Meredith Broussard's wonderful book Artificial Unintelligence. She traces this history in a critical way. And then her new book, More Than a Glitch, looks at how these concepts of bias aren't simply byproducts of this vision; they are literally baked into the very engineering and the choices that have been made.

There's always the question about agency, of course, you know, how to change these processes, and you have hinted at them in your talk too: putting into practice negotiating change, negotiating innovation, so to speak, with a different set of stakeholders, taking into account the everyday lives of workers at the workplace and so forth. Could you give us a couple of examples of how this actually works, very concretely? What does this mean, negotiating innovation at the workplace? What would have to be done differently in order for those models to work more justly?

Let's start with a simple fact: in countries where workers have greater representation in manufacturing decisions, so in countries like Germany, where workers are part of management councils to understand how to implement new technologies, we actually see higher productivity gains from the introduction of automated technologies. So, lo and behold, having people who understand the frontline choices and challenges that are happening in the workplace when automating technologies come into line, that kind of sweet magic of human and AI interaction, becomes a pathway for productivity. In countries where we don't see that close coupling around that decision-making, we see lower productivity gains. Okay, so there's one pretty concrete example.

In the work that we did on automating visualization techniques in construction, this wasn't a kind of call to arms.
In fact, we didn't even start out trying to see this negotiation happen, right? We thought, okay, great, you know, there's this new technology and it's going to cause job loss, and we're going to be there to study it on the ground and kind of understand what people are doing about it and with it, and how they're resisting it and challenging it. And what we saw instead was a whole lot of work that went into simply making the thing work. And that's what I think is missed in these concrete examples about how AI will replace jobs. It's like, okay, well, tell me: what part of the job? And how will you make it so that you can trust it? How do you make it so that you can understand it? How will you make it so that it works with the other parts of the system? And how will you bring that ever-important context to the decisions, that ever-important conceptual map that humans carry with us, that understands the difference between a hamburger bun and a sandwich wrapper, and what they might look like, feel like, how they might be embodied? This idea that we have these certain capacities for being able to imagine the affordances of the technological lifeworld around us and bring it into how we interact is part of what makes that so powerful. And so we saw construction workers basically do this, right? Spend hours and hours and weeks and days to build these complex models of the construction projects they were working on, simply to have millimeters of tolerance, millimeters of difference, throw the models off; or simply to have some of the organizational challenges of not being able to get the right information from the right company at the right time. When we look at these banalities, we start to understand that those kinds of complexities are really what we do in our work. We navigate these challenges. It's what humans are really good at. And so when we have these conversations about AI eliminating those kinds of jobs, it's like, okay.
Maybe there's a lot of really onerous, repetitive tasks that will be eliminated. But in terms of making things that delight, excite, engage and, yes, fit in a particular context, we're still going to be very good at that.

Interesting, what you say about the coupling of artificial intelligence and the human workforce. I think this is something similar to what I was referencing in my introduction, what Joanna Bryson told us two years ago about the Swedish radiologists, where Hinton was wrong, actually, when he predicted that AI was going to take over their work. And Joanna told us: no, the opposite happened, there was a gain in productivity, what you just described. But this leads us to another, I'm not sure if it's a problem, but to another topic of what you talked about, and this is something like, you know, masked AI or opaque AI: that sometimes consumers or users, however you want to call them, do not know, is it actually a real AI I'm interacting with, or is there a human factor built in there that some AI companies sort of mask or veil, the human aspect, or even part of the product they're selling? Be it because they want to attract capital that is, you know, interested in artificial intelligence and not in the human workforce, so to speak. Be it because it's just not very cool.
Be it because they can outsource cheap labor to other regions of the world, which they probably couldn't do in other countries. So how to fix that? Or, in other words, how do you make the human factor in an artificial intelligence model transparent?

I think the first thing is to remember that these systems are enormous accomplishments of work, and they're very good at masking that. Take ChatGPT, right? OpenAI has this wonderful little simple box, and it looks so simple and easy. You don't see the literal millions that have been spent on the engineering. You don't see the hundreds of thousands being spent every day on the energy and compute time to run the simple little box. But you also don't see the sheer human labor that it takes to clean these systems up. The amount of work it takes to translate the internet into something that is safe, and seemingly safe, is enormous. And so just today it was announced that several of the employees who worked with one of the companies that outsourced some of the work on OpenAI's model have voted to unionize. The amount of work that it takes someone, somewhere around the world, to make these systems work is phenomenal. So they look like they're automated, but in the words of the anthropologist Mary Gray and Siddharth Suri, it's "ghost work" that's making them run. It's work that's hidden. It's global. It follows old colonial patterns of labor exploitation. It's this kind of unseen work, from labeling to content moderation, that is making the thing run, in addition to the designers and so on. So there's that kind of work. There's also the work that we will be doing, that we're doing, in order to make them sensible and integrated into what we mean. Now, what do I mean by this? Well, the principal of my kid's school recently called me: "I'm so, I'm really worried about ChatGPT." And I was like, okay, really? Like, what? I don't know, like, tell me, what are you worried about?
And there is suddenly a sensible educator's white paper: how are we going to use, react, respond, teach about, guide, do? Suddenly there's an enormous amount of work that we're doing to think, okay, how are we going to make this thing fit with our social values, with our social systems, with our social and institutional rules? And that's the work that I think gets missed in how we think about these. So if we're just listening to the narratives, if we're just listening to the discourse that's coming out from an engineering perspective of the world, we are missing the work that it takes to build capacity in society, to ready digital society for building good societies, and the work of social science and everyday interactions to make sense of what we're seeing and to try to make it better.

Yeah, the engineering perspective. If you allow, one last question before we open up here to the floor. The engineering perspective is most of the time, if I may say so, a perspective about growth, right? It's a perspective that is designing growth for those large language models and other artificial intelligences. Now, when I think of a couple of the key terms of your talk, or of our conversation: the well is dry, you know, massive resources would be needed for marginal benefits to make those models better, and so forth, and the carbon footprint of this all. So we might make those models a lot better, but our butts are going to be burning at the same time, and those models are already doing so. I come to think of your initial metaphor of infrastructure and pavement, paving, right? I mean, it seems like most of the beaches are paved already when it comes to artificial intelligence. And it's even the same word in the internet and in infrastructure: it's traffic, right?
Traffic is going to... I mean, if we continue like this, and with all the even just vague projections of what artificial intelligence is going to burn up in, let's say, the next 10 years, this will not work. I mean, the perspective of growing exponentially due to artificial intelligence is really explicitly dangerous when it comes to the carbon footprint. So traffic will have to decrease somehow; it cannot grow exponentially. Is this something that you think tech companies are actually aware of and do tackle? Or is it something that is just plain wrong?

There's a push within tech companies that if we only buy enough green energy, so energy from around the globe from green sources to power our ever-growing arsenal of data centers, and if we only buy enough carbon offsets, then we will solve the problem. And yet we have an industry that represents an enormous electricity footprint, with growing capacity needs, demanding more resources, more data centers, more energy. My colleague Julia Rone's new paper on the data negotiations, the political negotiations that happen around siting data centers, shows that large platform companies are ruthless in presenting to local and national governments this supposedly sweet deal: that they will take increasingly large amounts of the green grid capacity so that they can report that they are green, they can report greenness. We also know that within companies compute time has become a difficult resource, so that when these very large models are being trained and operated, they are literally working at global scale to understand where compute resources are and how to manage and shift them around the globe. I'm not saying it's a future where we want to unplug; quite the opposite. I think we want to be thoughtful about whether or not serving up marginally better advertising is worth that cost, versus models and modeling that may be giving us more social benefit.
So the idea that we're simply allowing Google, for example, or Facebook or Amazon to make the choices about how they will develop large models, because "trust us, you know, this is really good, we know where to put this paving down and we're putting our bet over here": the idea that that is how we are allocating these resources in a time of climate crisis gives me pause. Like, okay, is that the choice we are going to be faced with? Is that the choice that we want to sit with, that this is the best investment of our resources at a time when we need to be asking, is this enough? And then finally, and I know we've got questions coming from the audience: my colleague at the centre Hunter Vaughan, with Nicole Starosielski, is looking at the undersea cable network. This metaphor of the cloud feels very appealing until you understand that most of the cloud is literally under the sea, that old colonial pathways of telecommunications, mapping telegraphs around the world, still trace the global sea cables that power the internet. And when you look at what is happening in the development of companies building their own infrastructure, when large platform companies are literally laying their own sea cable to build their own internet infrastructure, their own data infrastructure, we have to ask: is the impact on our oceans worth it? I don't know. I don't know how we go about answering that question. But there are questions that should be asked, and they're not being asked in ways that allow people to participate in the transparency and accountability that will lead to fair, just and equitable digital futures.

Thank you so much for this wrap-up, for now, of your talk and our conversation, Gina. I think it's time to open up. I think we have one or two microphones... we have two microphones on the floor. We start with questions here in the theater itself, and then we'll go on and see what happened in the digital realm at Slido. I can't see that well.
I can see a little bit, but, uh, yeah, I'd take a question from the gentleman in the second row, please.

Thank you. Thank you for your presentation. My name is Methana, I'm a tech ethicist, gender non-binary. I think it's one of the things we can all do to make sure that we don't presume any sort of categorization or codifying in our social context, so I use they/them, not "a gentleman". And I think this goes to one of the issues around AI: even when our existing social constructs presuppose things about the other, it is very hard to have these not trickle into our large language models, our machine learning. Dr. Neff, my question for you is: one of the things that I saw was missing from this discussion was any talk of China or the global south. We're seeing Baidu and other companies roll out large language models as well. And when you talk about the well being dry, I was curious, on one hand, if that is only from the corpus of English-language text. Because global infrastructure projects are competing: China is going into Africa, and technology is also following that way as well. So there's this kind of global geopolitical struggle, right, between Western tech, the big five companies, and Chinese companies, and there's a struggle for the next generation of digital societies. And so I'm just wondering, when we're talking about making AI work for, quote-unquote, "us": who is the "us" here, and does this take into account the narrative from people in the global south? When you talk about multiple publics and good societies, when you talk about, you know, alternatives to lock-ins: does the alternative to lock-ins actually exist only outside of a Silicon Valley model, or outside of a global technological landscape? And, just finally, how do we bring in more voices, voices of color, people who are non-binary, into institutions like yours?
I think we've actually had some discussion on Twitter in the past, you and I, and this is one of the things I've been saying for a while: how do we make sure that the organizations that are leading the charge in this are also representative of not just academia, but global society at large? Thank you.

Thank you for that question. A few things that resonate. Yes: Dame Wendy Hall, the computer scientist in the UK, has talked about the "four internets", right? That we can't think of a single global internet; we have multiple assemblages of rules and norms and apps and interoperability and systems that are shaping how we think about this. Yes, it's no surprise that it's being seen and painted as a geopolitical struggle of who will win. In some of the national security conversations that I've been a part of, it will not be surprising that there is an American rhetoric that says, you know, America must win or the world will be over. Not that that's a rhetoric that I would sign up to or agree with, but that is a language that is being used. So what does this mean outside of a US-China-Russia struggle, and is the world being carved in such a way that repeats a kind of new technological colonialism? I look to activists who are actively engaging in the kinds of tech ethics conversations that happen around the world. The organization Whose Knowledge? has been incredibly active in highlighting the challenges of having the internet in local languages. For many people, Facebook is the internet in their local language; in something like a majority of the world's languages, the majority of the pages people can access now come through private providers. And so there is, increasingly, I think, this call for doing the kind of on-the-ground work that you're leading, and I applaud you for it. So keep up the great work.

Thank you. Oh, you already picked one. Hello. Good evening.
Thank you very much. Actually, my question can be seen a little bit as a follow-up on that. I was wondering... we were talking a lot about how this ghost work happens, how populations in the global south especially suffer more with this badly paid work to train AI systems, which will mostly benefit the global north in many aspects, and also the mineral aspects that come forth on that, the expenditure of water, which is the case that we saw recently in Uruguay, when they disclosed the amount of water that will be consumed by a data center of Google. So my question is, and just one brief comment before that: in Brazil, for instance, we are discussing a bill proposal which has some similarities, which aims to bring platforms toward accountability, and we have been seeing a campaign by social platforms to influence the population, how they have been driving their advertisement and their recommendations against this bill, in a way that is much more violent, I guess, than what happened, for instance, here with the Digital Services Act. So I wonder, due to the bigger economic and political power that countries in the global north have compared to the global south: I would like to hear from you if you think that regulators and legislators here shouldn't be doing maybe more to tackle these issues, considering, as I mentioned first, that although we are using the work of these people, the minerals of other populations, aren't the main benefits coming towards this region of the world where we are? So thank you very much.

That's a great question. Yes, should we be doing more? Yes, we should. I think the pathway you point to, regulation: absolutely. Regulation and organizing are two of the strong tools in our toolkit. I want to also say that there's a kind of accountability that we can hold companies to. It's difficult work.
It's not a replacement for the other two, but encouraging, demanding better of the companies that we have products and services with: that they do full environmental accounting, that they understand and report the costs of their AI systems, that they are clear on what they are doing in terms of mitigating these infrastructures, and that we are not simply taking for an answer that because something is digital it must therefore be green. This kind of false green narrative of the digital economy has gotten us into a whole lot of trouble. We're forgetting about the resources it takes for data centers; we're forgetting about the resources it takes for the hardware, the minerals, the extractive technologies that are fueling this infrastructure. So, you know, we need to be working, we need to be working as citizens, as civil society. And, because there are researchers in the room, I think there is a role for research to play. I think we can bring our voices together. We can bring the best expertise that we have together. We can start to have clearer calls for the kinds of things that are good, and the kinds of good that we want to see for a good digital society. I think there is a role for that to play too.

Okay, let's take one more from Jeanette, and I'll come back to you before we look to Slido. Jeanette, please. You don't have a microphone, so please, the microphone to the third row up here, anybody? Then we'll switch to the digital sphere for a minute and come back to the gentleman in the first row afterwards.
Okay. Thank you for this interesting talk. As far as I understood, you make a very strong point about sort of opening up the discourse, questioning narratives, getting more voices heard. But what follows from that? Because power, of course, doesn't go away from opening the discourse and listening to more voices. And also, it's not clear to me how we make substantial choices. To be asked, for example: do we need to downscale? Do we need to use less energy? Do we need to sort of reduce traffic? And then you say, I don't know, and I guess nobody here knows. But how do we come to these decisions? How do we define what is good, when we know that good as such is a very contested issue, and it would be naive to think there is consensus somewhere in the future; there never will be.

As always, a provocative question from Jeanette. Thank you.

And I will take you at your word. Because the good is contested and difficult does not mean we should shy away from it. And that's what social science has done for too long. We have simply said, oh, we don't have a clear answer, therefore we don't have an answer. And I don't think, no pun intended, that's good enough for now. We have excused ourselves from debates over what is good for society and ceded that to, if I may, engineers, who have happily stepped into the void. That is the socio-technical choice and challenge I think we face in social science, because we have found that engineers will happily engineer the world for us if we don't get involved. Do I think there's clear choice and consensus? No. Do we have democratic modes of accountability, transparency and decision-making that help us model what some of these complex choices can be? Yes. Do we have models of technologies in the past that have not been clear-cut in terms of their benefit?
But we have come to wise choices; we have come to understanding the risks and the balances, absolutely. We have never gotten to a place where we want, a good, fair, just and equitable future, we have never gotten to that point by simply relying on the private companies to tell us what's good for us. And that's the fear that I have now. So do I have the answer, which large language models should we tamp down on? I cannot answer that question alone, nor would I presume to try. But should we be allowing for-profit companies to dominate the infrastructure that we will have for communication and interaction in our daily and everyday life? Should they be the only ones making the choice? I can answer that question, and that's a definitive no.

Thank you. Okay, let's look at Slido. Sarah, please, for a minute. She needs a microphone, here with the computer in the fourth row, to tell us about Slido. And I'll get back to you, right?

A question on a more concrete level: in what areas of work and societal areas do you think it is promising to extend employing AI systems and to make them work for us?

I'm sorry, I didn't hear the second half of your question. In what areas of work do I think this is promising? Listen, I have been so impressed in watching a radically automating technology at work over a decade. I do not recommend this for any dissertation; PhD students, do not spend a decade in the field.
I have been so honored to be able to witness the creativity of people in their jobs, trying to make sense of how a new technology will change their work, trying to make sense of how to do their jobs while being told they must integrate these new ways of working, and figuring out those pathways. I think a lot of jobs have that capacity. So, concretely, I don't want to say, well, you know, professional workers are going to have more power because they do things in this particular way. The place I worry about, where I have less optimism, is in how people are trained at work. So in many early studies that we have about robotic surgery, for example, we see new surgeons struggle to get the experience they need when robotic surgery is introduced. So it's not the experienced surgeons that have a challenge; it's people new to the space, who are fighting for the more routine, everyday, ordinary kinds of situations that are the first ones to be automated. Tax auditing, for example. Many of us do not think about tax consulting work... most of us don't think about tax consulting work, unless you are a sociologist of work who is sitting in the audience. Everybody does in Germany. Yeah, exactly. But in large consulting firms, the tax business is the business that young consultants got their start in. So it's become an entry pathway into professional work. What happens when much of that work can be automated? I don't think we're going to eliminate consultants; consultants will be incredibly creative and they'll find new kinds of work to do. Productivity will continue and will increase. But this pathway of work may be a pathway that is eliminated. We've seen that happen in legal discovery: in law firms, legal discovery was a pathway where young attorneys learned how to read files, read documents, parse information.
We know that can be done very quickly through large systems, and yet we're not going to get rid of the legal profession anytime soon. However, that pathway is going to have to be recreated: how do we get new people into, and skilled in, the profession and the work? And so that's the place where it's less of an optimistic point. But it is a point where, I think, we as societies have to be incredibly mindful of how young people entering workforces will get those kinds of experiences when several of the rungs of the occupational job ladder have been automated.

Yeah, I do have a follow-up on the future of work, risking journalistic bluster, I know, but maybe we can close with that. I won't let the gentleman in the first row wait any longer. Please get the microphone to the first row, please. We'll have another question from the floor; I think we're done with Slido, as I gather. On to the audience. Thank you, Sarah. Please.

My question was about the filtering of the language-based AI and how that relates to making AI work for us. For example, when ChatGPT first came out and was put to public use, people were very fast to see the sometimes very dangerous suggestions or answers that ChatGPT could give, and OpenAI was quick to put some filtering in place. And this filtering got stronger and stronger, where some users were now complaining that ChatGPT was unable to provide the same help it did when it first came out. So how do we manage the balance between having AI give us safe answers and having it become completely useless because it tries to keep these filters so strictly?

That's a great question. How do we balance safety and efficiency? Notice that's the kind of question that is a set of values. Where do we put that balance?
My choices of safety over efficiency might be very different from yours, and they would certainly be very different from what I would wish for my child. Understanding these balances between the different competing values of what we want out of our technology is part of the choices that are being made, and that are not explicitly addressed in how people understand what they're looking at and what they're doing. So, you know, I'm not going to defend or attack OpenAI for how they make those choices on safety and efficiency, except to say that we know we want to be able to use systems in ways that allow us to interact with information that we understand is accurate, that we understand is useful and helpful, that we understand is not harmful to us. And so, you know, unfortunately we've had to push for legislation that ensures that people are not harmed from interaction on the internet, that companies do what they can to assess and prevent the harms that are happening on their platforms. So one of the things that I have worked on with Rumman Chowdhury, who was head of an ethics team at Twitter, is understanding the difference between harms that happen as one-off kinds of harms, occasional harms, versus people who are subject to chronic abuse online, chronic harms. The idea that we can have a one-size-fits-all safety strategy is not a safety strategy that will work. And so we need better understanding of the differences in how people navigate spaces in order to ensure that everybody can benefit from these tools.

Yeah, I hear you. I'm thinking how to put this, um, as a wrap-up, because we're running a little bit out of time here. Oh, there's one more question. Okay.
Let's take one more and then we'll wrap it up.

Yeah, my question is: would you say we as a society have to challenge those big tech companies, OpenAI and everyone, more on providing an actual use for the tools and technology they provide us with, that they put into the world? Because we saw that OpenAI published ChatGPT but didn't really provide an actual use case. So would you say we would benefit from regulation, and, that's a totally different topic, but from making companies more aware that they have to provide an actual benefit for us when they put out these tools?

Right. So where's the benefit of generative AI? It's a great question. I mean, we, society, those of us in the room, we are all figuring out what we will use these tools for right now. I can tell you many ways they will be used: to amplify mis- and disinformation, to destabilize trust in trusted media, to become political weapons for spreading abuse and harm. Work that we have done in both the EU and the UK on generative-AI image abuse, so AI-enabled image abuse, sometimes nicknamed deepfakes, suggests that, without company guardrails in place, certain kinds of users will be subject to more and different kinds of abuse. So we can think about a lot of different harms from generative AI. On the flip side, there are wonderful creative and artistic projects that are looking at what might be possible and what those boundaries might be. I'm not saying that any one of us has the answers to the kinds of choices of the good digital society, but I'm saying it's a conversation that collectively we need to be having, and we need to be asking and pushing for what kinds of tools and technologies will benefit people, societies, and the planet.
And if we can't use that as a measure of the kinds of futures we want, then I'm not sure we want to be risking the kinds of challenges that we're going to face in light of that.

Gina, the initial idea of this whole series, when it started in late 2017, was to find out at the end of each session, or to ask each guest the question: is there a European take, or is there even room for a European take, in the geopolitical tech race we've been talking about a little bit, thanks to a question from the audience? Is there such a thing, and what can we do? So, thinking about the future of work, I'm trying to tie this in with the future of work. One of the biggest challenges, as you certainly know, as we all know, that we're facing not only in the West but pretty much all over, is economic inequality. In certain areas it is as big as it was a hundred years ago, talking about the U.S., but it's getting there, so to speak, in Europe. It's mostly the economic inequality of wealth: not so much the salaries, but wealth, capital. So I get the feeling, and this might be just a very normal sort of anxiety, that the jobs that artificial intelligence is going to wipe out sooner or later are going to be what I would call, or what other people have called, mid-skill jobs, right? It's not the unskilled labor, it's not the cheap labor, and it's not the work at the level of excellence. Neither end of the scale is probably going to be affected so much by AI models; it's the middle. So this is something that I think points towards a future where we will see an increase in the already monstrous inequality of wealth that we're facing, and that some social scientists think is going to be one of the biggest, you know, roots of crisis in the future. Thomas Piketty is one of them, but he's not alone in saying this. So I'm asking, you know, long story short:
Is there a European way of regulating these things? Are there European initiatives that try to keep those things from happening? Because, I mean, this is an imminent danger of not just social inequality, but turmoil.

Suggesting that we're going to get to social turmoil on the back of AI working so well in so many different contexts, in so many different kinds of settings, suggests to me a kind of technological determinism that says this thing is coming, there's nothing we can do about it, we should just stop, we should just give up. The question is: what can we do about it? How can we regulate it? The question of economic inequality is always a political question to solve, and that question of economic inequality is not something that I think we need to couple to the choices and challenges of AI. We need absolutely to address economic inequality. We need absolutely to address failing schools. We need absolutely to address skills inequality. We need absolutely to address gender gaps in involvement in the greatest technological expansion that has happened. We need absolutely to address the questions of who's getting left behind from the creation of jobs. We need absolutely to address the question of why such a concentration of financial wealth has been held in companies that have, globally speaking, such a small number of employees. We need to ask very different kinds of questions about the distribution of the benefits of productivity than simply saying: how are we going to regulate the AI systems to make those things happen?
So we have our work cut out for us. We have work to do to understand how these tools will be used in the workplace; the creative, interesting, and sometimes challenging ways that we'll be able to push back on and negotiate that innovation and make it work in practice for us, for our co-workers, in our teams and in our companies; and the ways we can resist it. But we also need to be having those kinds of questions about economic inequality and the kind of fair and just society we want on a different level, and they may not be in the same realms of political contestation.

Thank you. Thank you for turning up on this day. Again, see you in September in this series. Thank you so much, Gina Neff. Have a good evening.