Thank you so much for joining us, and welcome. My name is Kate Epler. I am a program manager at The Bridge at Main at the San Francisco Public Library, and I'm very excited for today's conversation about how artificial intelligence is shaping our lives at work and what the future might hold in store. We are here on YouTube and Zoom, so we're in two separate places today. As we go along, if you have questions, please do use the chat or the Q&A feature, and we will address as many of those questions as we can.

I'd like to take a moment to acknowledge that the San Francisco Peninsula is the ancestral home of the Ramaytush Ohlone peoples. The Ramaytush Ohlone continue to live, work, and play here today. They have not ceded, lost, or forgotten their responsibilities as caretakers of this place. We recognize that we benefit from living and working on their traditional homeland, and it is with the deepest respect that we recognize their ongoing stewardship of this land.

I'd also like to express my gratitude to the thinkers and writers who have joined us tonight: Andrea Dehlendorf, Annalee Newitz, and our moderator, Stephanie Bell. They have been wonderful partners in helping to plan this event. I've also provided a reading list for everyone to explore later, so you can look for that in the chat or in the comment section on YouTube. And now I'm going to turn it over to our moderator, Stephanie, to lead a round of self-introductions. Thank you all so much for being here, and thank you to our panelists.

Thanks so much, Kate, and thank you as well to the San Francisco Public Library for hosting us for this panel today. I'm very excited to be here with two wonderful thinkers, writers, and actors on this issue right now. My name is Stephanie Bell. I'm a research fellow with the Partnership on AI.
It's a nonprofit, multistakeholder organization focused on making sure that the advance of artificial intelligence benefits society broadly, rather than just a few of us or, for that matter, none of us. My particular work focuses on making sure that artificial intelligence creates shared prosperity, and I personally am focused on ensuring that AI helps create better job quality for workers, not just in the United States but around the world. With that, I'll pass it off to Andrea to introduce herself.

Hi, everybody, thrilled to be here. My name is Andrea Dehlendorf. I'm the executive director of United for Respect. We're a national organization of people who work in the retail sector, who are up against some pretty significant corporate actors and trying to gain some dignity and respect on the job. I've been working with people in low-wage jobs fighting for dignity and improved pay and job quality for the last 30 years. I'm passing to you, Annalee.

Hey, I'm Annalee Newitz. I'm a science journalist and a science fiction writer. I write for a number of places, including the New York Times and New Scientist. Way back in the day, when I first started my career, I was the culture editor at the San Francisco Bay Guardian (RIP), and I had a column there called Techsploitation, which started as a column about work and technology. As you can tell from the name Techsploitation, I had a particular interest in thinking about how technology allowed companies to exploit workers, and I carried that interest into my current journalism and into my fiction, where I deal a lot with how artificial intelligence will affect humans of the future, both in good ways and in particularly pernicious and toxic ways. My novel Autonomous is all about these issues. I'm just super excited to be able to talk with Stephanie and Andrea, who are both incredible experts in this field.

Right, so before we get started on the "future of work" side of this panel's title,
I'd like to start with AI, or artificial intelligence. I think a lot of people use that abbreviation to mean, frankly, all kinds of things, some of which currently exist, while others are fantastical or far off in the future. So before we dive into thinking about the impact of AI on the future of work: how would both of you define AI? What should we think about when we're thinking about that concept? Andrea, do you want to start?

Sure, I'll go ahead. I tend to think of it in a pretty expansive way, along a continuum: technology has increased our ability to collect vast amounts of data and to analyze and use it in order to make decisions that used to be the decision-making of human beings. So it's really about turning the process of agency and decision-making over to systems that are collecting, analyzing, and utilizing vast quantities of data.

I would definitely use that definition as well; I think that's a really helpful place for us to start. I use "AI" in different ways depending on the audience. If I'm talking to experts, to folks who actually work in engineering or science, usually I'll say "machine learning" instead of "AI," because I think that's a bit more precise. It explains that this isn't about building HAL from 2001 or Skynet from Terminator; this is really about building processes that can help us analyze data. I am really interested in machine learning from the perspective of predictive models. I write a lot about environmental science, and a lot of our models for climate futures come from machine learning: looking at huge amounts of data from the environment and making predictions about where that's going to take us. So that's one way I look at it. And then there's the other piece, to acknowledge the fiction side of all this.
I think when most people hear "AI," they really do think about things like Battlestar Galactica or Terminator. They think about sentient creatures that are technological, like a cyborg, or some kind of box that's basically a person, and that doesn't exist. I don't know if that will ever exist. But I think there's a lot of almost religious fervor around that idea, especially in Silicon Valley, and that fervor comes from the idea that we will build these superbeings and they will save us all. One of the things we're going to talk about today is pushing back against that idea that our AI overlords will save us.

Yeah, I think the point that HAL or Skynet, or for that matter Cylons, haven't yet turned up in our workplaces is honestly something I'm personally quite grateful for. But some of the technologies you both mentioned definitely are in people's workplaces today, like machine learning and some others. Andrea, maybe starting with you for this question: how is artificial intelligence turning up in people's workplaces? What are you seeing in the present, as opposed to in the distant or potentially never-existing future?

Yeah. I think one byproduct of what Annalee is raising is that people tend to fixate on this far-futuristic vision, our biggest fears and desires of what this technology might become and the challenges around that. But actually the application of artificial intelligence in workplaces is extraordinarily mundane, and it is happening at the lowest-tech companies you might imagine. I don't think many of us think of Walmart, for example, as a big high-tech industry leader, but they have incorporated algorithmic decision-making into scheduling at all levels of their corporation. If you're somebody who's working at Walmart, it is a machine analyzing customer data and patterns that is deciding when you are expected to show up to work.
If you work at Amazon, your work is being monitored and tracked, not by a supervisor who reports to someone at a higher level at Amazon, but by a device you have to carry around with you. It measures how much time it literally takes you to complete each task, and what percentile you fall into compared to your coworkers on each task. It will measure how long it takes you to go to the bathroom. If you're responsible for a Walmart dressing room, this might show up as a robot standing in the closed area doing 360-degree surveillance of whether or not you're doing your job. So it really is showing up in the most mundane, basic ways, but ways that create this just tremendous sense of constant surveillance and constant monitoring. What people actually describe when they talk about it is that they have become like robots. It's not so much that they're saying "the robots are here in my workplace," but rather: because of the integration of technology and automation, I am now being treated like a robot by my boss. I am being managed by technology, by algorithms, by analytics of my data. And the stress and the strain and the psychological impact of that kind of oppressive, constant monitoring is really tremendous. When I think about what's coming in five and thirty years, I look back to what was happening five years ago and thirty years ago, and it's not so much that these are radically different technologies; it's more that they are intensifying the existing dynamics of inequality and extraction at work. So I think we can expect these same things to continue to intensify in how they're being implemented and experienced. It won't so much be new, but it's going to be harder, unless we make some important choices.

So much to pick up on and build on in that answer. Annalee, I'll give you the first shot at that, if you like.

Sure, and I hope that you'll jump in here, Stephanie, because I know you've thought about this stuff a lot, so please come in and don't just ask questions. Surveillance is a huge issue, and I just want to echo what Andrea was saying about that. For me, the thing I think about the most with technology coming into workplaces is the way that it's fragmenting work, even more deeply than it's already been fragmented. If we look to the past, what we can see is that, yes, technology was being integrated into workplaces; people had computers and could do desktop publishing. But that didn't mean you lost your full-time job and could only do a part-time job where you would submit task work to your company, because there was an algorithm or some kind of machine-learning system filling in the gaps. And now that is happening. I think the more we see gig work being used as people's main source of income, whether it's driving a rideshare or delivering food or working in a ghost kitchen (I actually live right down the street from a ghost kitchen, and I'm always wishing I could just walk in there and order something, instead of trying to figure out which of the fake restaurants on Grubhub go there), the more we're going to find out that all of these devices, all of these apps that are supposed to make our lives easier, are actually making it much harder for people to earn a living. They're having to do multiple kinds of jobs: some kind of caretaking job in the morning that they get through an app, and then another app that they use in the evening to get task work.
And as Andrea was saying, that leads to incredible amounts of stress. All of these kinds of gig jobs are constantly gathering data on performance and firing people arbitrarily if they're not getting the right kinds of numbers from customers, who may well give you a low rating just because they're racist, or just because they had a bad day at work. So you can't always trust the crowd to evaluate whether someone is working effectively. That's what I see happening: this trend toward more and more disenfranchisement, toward jobs that, because they're task jobs, can't be unionized. You don't get benefits. All you can do is beg for more work from the algorithm.

One of the threads I saw in both of your answers is this idea that we're handing over decisions to these technologies. And if you think about what the technology actually looks like, it can only process certain types of data. It only understands some things. It certainly doesn't understand your individual context: that perhaps you can't pick up a shift at the last minute because you're the primary caretaker for children or for an elderly parent, or that you can't get to the work site at the speed the algorithm assigns because of traffic conditions, or because public transit doesn't run at that hour. Mary Gray and Siddharth Suri, in their book Ghost Work, called this "algorithmic cruelty," because the way this often turns up in people's lives is that the algorithm, without any awareness of it, is cruel to people in a way that another human probably wouldn't be, given the opportunity to hear that context and understand it. I'd love to hear a bit more on how we try to wrangle with some of these instances of algorithmic cruelty. How do workers have the opportunity to push back against some of these assessments?
Yeah. One of the ways this is showing up in my world, as I've mentioned, is algorithmic scheduling. There are tons of software products that have been built to analyze the business needs of the retail sector, and versions of this exist in any sector. In retail, that means the peaks in customer demand, so the inputs are focused exclusively on what allocation of staffing resources is going to maximize profit for the employer. And we know that human beings, as you mentioned, Stephanie, are extremely complex. People are more than just units of profit production for their employer. People are mothers, fathers, grandparents, caregivers for elderly parents, students, musicians, artists. People have these whole other lives, and if you think about the choice points here, there's no reason those couldn't all be inputs into the algorithmic decision-making. The Partnership on AI is actually a pretty extraordinary space, with a very diverse community of people working across the whole spectrum of AI, and at a conference there I was able to speak with somebody who was one of the developers of some of the algorithmic scheduling software that employers like Walmart use. It just hadn't occurred to him that people's caregiving or school needs or faith needs might be relevant inputs, because they were only listening to the client, which is the corporation. I think the big challenge here, particularly around work, is that we are approaching the integration and development of technology from an employer and corporate perspective, as opposed to the perspective of the common good and what is good for society. If we were to rethink it (and let's just assume we're existing within the current capitalist system), you could easily say: let's balance the needs of profit and maximizing customers against the human needs of the people who work, because we want people to be able to hold and maintain these jobs. It's not that the technology can't do that. It's a choice, a choice about the context in which the technology is being designed.

Another example of algorithmic cruelty, Stephanie, that I think you alluded to, is with performance evaluations: Amazon will routinely just assess who's the lowest-performing 5 percent, say, and they're just gone. Nobody's paying attention to whether somebody was pregnant and had to use the bathroom more frequently because she had a health need. That's just not a consideration: you either make the cut, or you don't. It completely takes the human being out of the decision-making, because humans will empathize and will make choices that are more balanced. Instead of figuring out how to scale that balanced perspective through the use of artificial intelligence, it's about taking out the humanity so that we can treat people more like machines.

Another question was how people are pushing back against this, and I really think this is an important question, because certainly people are pushing back through labor organizing. I think we are witnessing a kind of blossoming in unionization that's really different from the way unions existed in the 1920s, '30s, and '40s, and I'm very heartened by that. But at the same time, of course, we're seeing the rise of workplaces that make it impossible for workers to organize, because either they're gig workers, or they're virtual workers who only meet each other in corporate-owned spaces, where they have a company Slack or something like that, and so it's very hard to find a private space.
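The percentile-based "rank and cut" evaluation Andrea described can be sketched in a few lines. This is a toy illustration only: the worker IDs, task rates, and threshold are all invented and do not reflect any company's actual system; the point is simply that the logic sees numbers, never reasons.

```python
# Toy illustration of percentile-based "rank and cut" performance evaluation.
# All data and thresholds are invented; this is not any company's real
# system, just a sketch of the logic described in the discussion above.

def rank_and_cut(task_rates, cut_fraction=0.05):
    """Return the workers in the bottom `cut_fraction` by task rate.

    `task_rates` maps worker -> tasks completed per hour. The function
    knows nothing about *why* a rate is low (pregnancy, a disability,
    a broken scanner), which is exactly the cruelty being discussed.
    """
    ranked = sorted(task_rates, key=task_rates.get)  # slowest first
    n_cut = max(1, int(len(task_rates) * cut_fraction))
    return ranked[:n_cut]

workers = {
    "w01": 42.0, "w02": 44.5, "w03": 39.8, "w04": 41.2,
    "w05": 38.1, "w06": 45.0, "w07": 40.6, "w08": 43.3,
    "w09": 37.5, "w10": 42.8, "w11": 44.1, "w12": 40.0,
    "w13": 41.9, "w14": 43.7, "w15": 39.2, "w16": 42.2,
    "w17": 44.8, "w18": 40.9, "w19": 43.0, "w20": 41.5,
}

print(rank_and_cut(workers))  # -> ['w09']: flagged with no context seen
```

Adding human context back in would mean adding inputs (caregiving constraints, health accommodations) and a human reviewer before any dismissal; nothing in the math prevents that. It is, as the panelists say, a design choice.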
But I think there are other ways people are pushing back. There are certainly things like the Partnership on AI. There are researchers, including within these companies, like Timnit Gebru, who of course is not at Google anymore. As we become more aware of the importance of having people who can work within a company to show the shortcomings in its data collection and the shortcomings in how its algorithms work, I think we are going to see more people doing that kind of work. And I think the fact that Timnit Gebru's firing from Google got so much attention is, yes, a bad sign that she was fired, but a good sign that people gave a shit about it. For those who don't know, definitely look this up: she was a researcher working on ethics and AI at Google, and she spoke out too loudly, and the company, I'd say, made up some reasons to lay her off. She has done a number of very interesting interviews where she talks about it.

The other thing I was going to say is that another way people are pushing back is going to sound a little odd, because we're talking about technology, but it's the people organizing in favor of reinvigorating our social support networks in this nation, the people interested in the human-infrastructure part of the infrastructure projects that are going on. Part of the reason gig work is so oppressive is all of the surveillance and all of these other problems we're talking about, but it's also because people don't have a safety net; they don't have a choice. They have to do these jobs. If we had a safety net, with healthcare provided not by jobs but by the government, and if we had decent unemployment insurance,
I think people would be in a much better position to actually do these kinds of gig jobs in a way that wasn't oppressive, and that was exactly what they're supposed to be, which is flexible, and work from home. So that's another place we're seeing a lot of pushback that I'm really excited about.

I'd love to jump in on that. What I think is also so extraordinary about the voices of resistance around the applications of these technologies is that these are folks in very well-paying jobs who are not just organizing around working conditions but are actually organizing around the corporate uses of the technology that they're building. Traditional union organizing tended to be very focused on what is happening in my workplace, as opposed to: how, as a worker in this workplace, can I influence and shape the decisions my employer makes about what they're ushering into the world? So it's a very exciting, expansive moment. And in addition to the increase in unionization activity, there's also been a real flourishing of different ways that workers are speaking out and organizing. Our organization is not a union, but it brings people together to advocate and fight for policy change across the retail sector. There are a lot of spontaneous groups, not affiliated with any organization, where workers are just coming together and acting like a union, even without the legal and regulatory framework that exists in this country.
It really feels like a moment of collective expression by working people against the extremes of how these technologies are being applied. And as a result, there are really interesting new alliances being made. I was at a meeting where engineers from Google and folks who worked on the shop floor at Walmart were able to talk together about what was happening. And with Athena, which is a big multi-issue, multi-constituent coalition taking on Amazon's dominant role in our economy and society, there has been tremendous linkage between racial justice groups, which are looking at how Amazon's technology is used with ICE and the police state to surveil Black and brown communities, and workers, whom those same technologies are surveilling, so that workers and Black and brown communities can come together to resist the way these technologies are being applied. So there's really an exciting, flourishing ecosystem of pushback starting to happen here. And I think, Annalee, you're right that it all comes down to a fundamental question of what kind of world we want to live in: is it one that centers humans, or one that centers profits? It's an incredibly exciting moment in that way.

Yeah, for sure. We've spent a bit of time talking about how workers across the spectrum are trying to change organizational behavior or carve out spaces of freedom for themselves in their workplaces. And I wonder to what degree either of you thinks that regulatory solutions are helpful in these spaces. There's a common thread that runs through a lot of tech discourse, if you will, that this moves too quickly to be regulated, and that if we do try to regulate it, the government's too slow and clumsy and it's just going to harm innovation.
I recognize that's a bit of a straw man, but we're on a time limit here, so my apologies for that. Andrea, what do you think about the law that California just passed, for instance, to try to protect warehouse workers?

Yeah, AB 701. I think it's a great step forward. It calls for disclosure of what data is being collected and how it's being used. It's not a silver bullet, but it absolutely is something that workers can use. And it's the first of its kind, because while there's been some movement and discussion around how to protect consumer data, there has really been silence around how to protect workers. So I think this is exactly the kind of innovation that is needed. I'd say it's a first step. It's great. I'm excited to see how workers use it to get justice in their workplaces, I'm excited to see other states and cities take on similar laws, and I'm excited for the federal government to take a look at this as well. And what we have to remember around regulation is that so much of the R&D money out of which AI and these technologies emerged came from the federal government. We, as citizens and non-citizens of this country, pay our taxes; we have a claim on the regulation of this technology. I also think there are creative ways to incorporate governance: community governance, worker governance. We don't need to wait for the regulators. I mean, we need the regulators to regulate and we need the politicians to pass the laws, but there's also no reason we can't establish worker voice and governance over how these technologies are applied in a workplace, or communities having some control over the surveillance set up in their world. So we've got to think about both capital-D Democracy and small-d democracy, at all the different levels.
I wanted to follow up, Andrea, and ask you another question, which is: we also just got AB 5, a new law here in California that was supposed to help, specifically, people doing gig work, and it's sort of widely been considered a failure. I wonder if you could talk about why that law maybe wasn't crafted right, because I remember when it first came out, I was like, "Yes, we won," and then it was like, no, that was not the way.

So, there's a big debate about whether gig work, as it exists, should be legal, because basically, as you've mentioned, it's a way to avoid all the regulatory infrastructure that has been built up in this country over the last century. It just does away with it by saying, oh, these people aren't really workers, so they don't get all the things that the labor movement won and that the government regulated for workers. I do think there is some debate about whether the answer is to move everybody under the existing labor law framework, or to expand the rights that people doing gig work have, and it's somewhat complex. But I think we have to look at how much money these gig companies spent in order to defeat it, and frankly to distort some of the conversation, because it's actually very simple: do workers have the right to the regulatory and social safety net framework that was built over the last century, or are we going to allow these companies to completely get around it? And I do think that work is changing, and we've got to think about both what the new laws should be and how we close the gaps in the law that companies are taking advantage of to avoid providing those benefits. But I'm curious what you think about it, Annalee.
Yeah, I agree with you. I think a lot of it boiled down to the fact that these companies were incredibly wealthy. They ran a concerted campaign among their gig workers, as well as by putting posters on bus stops; if you were in San Francisco, you couldn't walk four feet without seeing an anti-AB 5 billboard. So a lot of it was propaganda, and a lot of it was playing workers off of each other, which is a classic union-busting move, right out of the 19th century pretty much, so it's not a big surprise.

I would also say it's been interesting seeing how these regulations fall out. I used to work with the Electronic Frontier Foundation, and one of the things we always said back then, and that the EFF still says now, is that one of the problems with this kind of legislation, especially around technology, is that oftentimes laws get written (like many of the laws about algorithms that are working their way through Congress right now; there are a lot of different pieces of legislation) to target a specific technology rather than a set of practices. That's where things fall down. You can look at anti-spam laws as a perfect example of what happened there: they were targeting kinds of technology, not the practice of sending unwanted mail in bulk to people who did not request it. So that's what I am always on the lookout for: are we seeing a law that is improperly targeting a specific technology that will go away, or are we seeing a law that's targeting a practice, a labor practice, that we want to reform? So yeah, I still think it'd be great to have something like AB 5, where contractors who work full time are treated like full-time workers, and I hope we do get something like that.
The point about targeting specific technologies, as opposed to thinking broadly about the suite of effects that might be created by a set of practices, really resonates with me. One of the things I've been reading about a lot recently is, quote-unquote, "tax the robots" laws. For those of you who are less familiar, it's about what it sounds like: the idea is to try to reduce the frequency of automation of people's jobs by taxing the robots that companies might put in their workplaces to do that work instead. To my mind, that feels like it's probably a bit short-sighted, and we need to be thinking a little more broadly than a specific robot tax; maybe let's think about the balance of taxes between labor and capital investments, for instance. But that also just made me realize that we haven't really touched on a topic that I bet a lot of people were anticipating, which is automation. So let's take this to a slightly more philosophical level. A lot of folks talk about automation as something we should really desire, because frankly, work is drudgery, or it sucks, for a lot of us, and we don't want to be trapped in our jobs as often as we are, as long as we are. Is automation actually the answer for humanity to be better off? Will we find ourselves in a land of bounty and plentiful goods for all of us, or is that maybe a little too utopian? Annalee, let's queue you up for this one first.

Okay. Nothing but the biggest questions. It kind of depends on how we're defining automation here, because I think that's also one of the big questions in these conversations. Do we mean a giant robot arm that's helping to build cars? Yeah, that's automation. Do we mean an algorithm that can write a sports story without any journalist being paid to do it? That's also automation, and that's also happening. Apparently it's very easy to write sports journalism. I don't know if that's true, but anyway.
I think that as we think about automation, and about the idea of being relieved from drudgery, we have to go back to some of the questions we were talking about earlier, about what it means to have human judgment in the loop on a lot of these things. Even if you're talking about a big giant arm that's building a car, you still need humans somewhere in there doing quality assurance, basically; maybe they're even doing quality assurance with the help of automation. But a lot of things that are being automated in the cultural realm, like moderation on Facebook, or on whatever site you like to go to and read comments on: moderation is a very ambiguous job, in the sense that you're dealing with ambiguity. A statement in one context means something very different from the same statement in another context. And if you think about localization: a statement that's really offensive in Quebec is not going to be offensive in California. The worst curse words in Canadian French are all words for parts of Catholic churches and things like that (I don't know, I'm not Catholic), but at any rate, the point is: if you said the word "tabernacle" in English, nobody would be upset. You say "tabernacle" in Quebec, it's very naughty. So my point is that automation might have a hard time with that, might have a hard time understanding that those two words have very different contexts. So I guess my answer is: no, automation is not going to save us unless we have humans in the loop, helping to make decisions about context, about the meaning of what's happening, and about the quality of the goods that we're producing.
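The localization problem described above can be shown with a toy sketch. This is not how real moderation systems work (they are far more complex than word lists); the word lists, locale codes, and function names here are invented purely to illustrate why a context-blind rule fails.

```python
# Toy sketch of context-blind vs. locale-aware moderation (illustrative
# only; real systems are far more sophisticated than a word blocklist).

# A naive global blocklist treats a word the same everywhere.
GLOBAL_BLOCKLIST = {"tabarnak"}  # a strong curse word in Quebec French

# A locale-aware table records that offensiveness depends on context.
OFFENSIVE_BY_LOCALE = {
    "fr-CA": {"tabarnak"},
    "en-US": set(),  # the same word carries no charge in English
}

def naive_flag(text):
    """Flag text if any word is on the global list, regardless of context."""
    return any(w in GLOBAL_BLOCKLIST for w in text.lower().split())

def locale_aware_flag(text, locale):
    """Flag text only if a word is offensive in the given locale."""
    offensive = OFFENSIVE_BY_LOCALE.get(locale, set())
    return any(w in offensive for w in text.lower().split())

msg = "tabarnak"
print(naive_flag(msg))                  # True everywhere
print(locale_aware_flag(msg, "fr-CA"))  # True in Quebec
print(locale_aware_flag(msg, "en-US"))  # False in California
```

Even the locale-aware version is brittle: it can't tell quotation from insult, or sarcasm from sincerity, which is why human judgment in the loop keeps coming up in this conversation.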
And again this goes right back to thinking about workers as we're designing this stuff, thinking about work processes, and thinking about the product and how it's actually going to be used by people. So I think our automation future was maybe overhyped. What do you think, Andrea? Yeah, I 100% agree. I mean, we saw what's happening with driverless cars, where everyone was predicting, I think, that by now we would have them all over the road, and instead it's much more targeted, and there's so much margin for error. In small ways we see this in retail workplaces all the time. I was talking to somebody who worked for a long time in an Amazon warehouse, who said that everybody had little pieces of paper to write down how long it actually took them to do a task, because the algorithm was wrong. So they were monitoring it and keeping track of it to be able to push back when they got written up for failure to meet the production quotas. There were automated cash machines in Walmarts that were constantly breaking and required even more people to come in and fix the machines than the labor cost saved by incorporating them. So it's just much messier, and it's much more difficult to handle the nuance and the complexity than what our popular imagination would lead us to believe. I think in some ways our fears and fantasies about automation say a little more about where we are now than about where we're going to be in the future. But I think this still takes us back to the same question, which is: what is the purpose of automation?
And is it a balanced purpose, one that looks at the different needs of different parts of society and is human-centered and common-good-centered, or is it purely being developed and driven for the sake of profit? One of the things we've heard in some of the retail workplaces is that it's actually not the hardest, riskiest jobs that are getting automated; it's actually some of the more interesting, engaging, fulfilling jobs. And again, that's logical if the people deciding what technology gets developed and deployed are doing it from a place of pursuing a profit line. It would be different if there were more than one set of interests at the table designing and implementing it. And the solution I'm really very excited about is the notion of co-governance: how do you actually bring multiple perspectives into the decision making at all stages, design, implementation, and review of the technology? I'm an optimist, and I do believe that if we're able to bring multiple perspectives to the table, then we could figure out how to automate the hardest, scariest, riskiest jobs that cause the most injury and distress, and then ensure that people are able to fill in and do the most rewarding, meaningful work that's out there. But right now that's just not the driving logic. I do think it's possible; it's just choices. That point about choices is so spot on. And, you know, the point was raised earlier about the need for human judgment, discretion, or decision making in the loop of some of these automated decision-making technologies. I think one of the big points here is that it's really important to have workers in that loop.
And also, building on Andrea's point, I'll throw out that maybe we should widen our definition of the loop. It's not just that we need a person doing QA at the end of the assembly line. We need the people who are involved in making those cars also involved in making the technology that makes the cars. And right now that's something we aren't seeing happen to the degree that honestly would probably be beneficial not just to the workers, which I think is incredibly obvious, but also to the companies. Toyota was able to radically improve the quality of its cars not by having people in the office think through and troubleshoot all the different things that could be going wrong on the assembly line, but by asking its workers to contribute their best ideas to improve the technology and the process. That's what made Toyota a globally recognized quality carmaker: not people making decisions without keeping workers in mind and in the decision-making loop. So I really appreciate all of those points. If you're not keeping track of the chat, Kate, or whoever is manning the chat, though I think it's probably Kate, has noted that we are about to hit audience Q&A time. So I'm going to ask our wonderful panelists one more question to wrap up from my side, and then all of us would love to hear your questions and your thoughts. The question I'll throw out there, and let's go ahead and start with Andrea, is this: having listened to all of this, what can we do as individuals, using our knowledge, to try to make some improvements on the issues we've talked about today?
What can we do differently, either as technologists or as citizens, to try to make conditions better for workers as we all navigate this period of technological change? Yeah, so I would say a couple of things. One is to have a lens that doesn't assume that the way things are now is how they have to be, and to actually interrogate the why. I think there's a risk that we have as a society of a sort of technological determinism, right: the technology is just getting developed, and then we're going to have to adapt to it, as opposed to having a lens that we can all be active agents in shaping how our society prioritizes, engages with, develops, and utilizes technology. Because if we make that shift, then we think: all right, what can I do as a voter, what can I do in my workplace, what can I do as a consumer? Really look at the places where you have influence and agency to actually shape how this happens. All of us are already touched by this, and we're going to continue to be touched by it, but we do not have to be passive as it happens; we can be actively engaged in shaping it. Yeah, I have a couple of thoughts too, and these are pretty prescriptive. Join a union. No matter where you're working, figure out a way to join a union. If you don't have a union, figure out why, and figure out how you can make one. I've worked at a couple of different places that have unionized while I've been there, and there is nothing more glorious. It is a wonderful feeling, a psychological experience, because you really do feel connected to your colleagues in a new way, and it's also a way to get better protections: better salary, better healthcare, more guarantees that you won't just be laid off because of an algorithm, and all kinds of other really important things.
The other thing is: be really skeptical about claims around automation and artificial intelligence. A lot of what we've been talking about today are systems that are marketed to companies with the idea that because they use machine learning, they're somehow objective, that they can somehow evaluate how people work more objectively than a person might. One of the most heinous examples of how this is being used right now is in job interviews. Especially when you're applying for a job at a lower level, maybe as a brand-new college graduate, a lot of jobs say: just send us a video, or send us your resume. And they have various incredibly scammy systems that run on machine learning and evaluate the videos people send in, to see whether they seem personable, based on crackpot data about facial expressions or voice tone. People have been turned down for jobs they absolutely are qualified for because of what, to me, feels like astrology, AI astrology: if this person makes this face, they're obviously sad and wouldn't be a good worker, or they might unionize. It's really important to be skeptical about those kinds of technologies. And if you find they're being brought into your workplace and you're in a position to push back, do it. Ask: why are we using this? What's the scientific evidence that this will actually help? How is this helping us build a more diverse workplace? Whatever you can use to get people to think more critically about it. Find out what your company is doing, find out what the companies whose products you're using are doing, find out if they're using this kind of scammy bullshit automation or machine learning, and see if you can change it. Yeah. Not that I'm angry. But I'm a little angry. That's a good reason to be, I'd say, on a number of these things.
So, the first question from our audience comes over from the YouTube side. Somebody is curious about universal basic income and how it factors into all this. We've touched briefly on more robust social services and the social safety net, and we've also touched briefly on automation. Will UBI help us if our jobs get automated? Is there enough to go around, and is that the right way to do it? I'll say on universal basic income that I think there are a few different lenses people bring to it. I prefer adding it to the social safety net rather than replacing the social safety net with it. And that's very important, because there are some libertarian thinkers who would argue that we do away with the state, get rid of jobs, everything's automated, we all just get a check, and everything will be great. I just don't think it's that simple. It can be an important part of the safety net that frees people to make choices, to leave toxic workplaces or stand up to bosses that are extractive, if you've got that kind of cushion. But I do think it's important to filter some of the discussion on UBI, because often it's very much married to an inevitable future of complete automation, versus understanding that it's complex and it has to happen alongside reinforcing, strengthening, and making it easier for people to come together to improve their workplaces, and a more robust social safety net. It's really got to be part of a holistic approach, in my view. Annalee, not to channel some of that AI astrology you were talking about earlier, but I did notice what might have been a skeptically raised eyebrow. Would you like to jump in here? You said it perfectly, and very generously put, so I didn't have to make any jokes about Bitcoin at all.
I'll throw in a couple of additional points on that. The sort of future in which none of us has to work, and we can pursue whatever brings us pleasure, joy, freedom, whatever we value, with all of it taken care of through UBI, frankly sounds extremely appealing in a lot of ways. But I do think there are a few logistical hurdles we'll have to get out of the way first before we get there. One has to do with redistribution in a really polarized political environment, which I think is just going to be pretty darn hard. There's also international redistribution to think about: a lot of the ways these technologies are going to affect workers are not just here but around the world, and the economic gains are occurring in some pretty specific places. If our viewers are indeed sitting in San Francisco or its surrounds, that's one of them. The workers being affected are in places like the Philippines, India, Kenya, China; there's a very wide array of people all over the world who are going to be impacted by this, and we don't have a strong ability to do international redistribution at this point. In the near term, what we're seeing is an awful lot of what the economist Daron Acemoglu calls "so-so automation," that is, automation that doesn't really give you much beyond what a person could achieve in terms of productivity. So you aren't getting the surplus you're looking for to be able to redistribute. So I'll consider myself a critical friend for now; at this moment I don't think we're quite there. I would, however, love to see something like UBI added to the social safety net in the near term, and there are a lot of cities doing some really wonderful experiments on that front, Stockton for instance. All right, we've got another couple of questions in the Zoom chat here.
Let's see here. This is a great one; I'll pull this one out. This is from Marina in the Zoom chat: how does this set of workplace technologies affect racial equity? Have there been any studies about the different impacts of these technologies on different groups of workers based on race, or other demographic categories for that matter? And, frankly, are they fair? I'll jump in on this. On the reading list we're recommending, there are a few books that look really specifically at the racialization of the application of technologies, both in work and beyond work. I think that kind of data collection and analysis of workers really started during slavery in the US, and there's a really compelling case to be made that a lot of algorithmic and data-driven management is really just an updated manifestation of the kinds of practices that were established during that period. And if we look at who the workers are that are concentrated in the lowest-paying, most insecure jobs, it is Black and brown workers, it is women, who are more likely to be impacted by these technologies. All of the inequalities that exist in our society are being re-inscribed in the technologies, whether it's about which faces the technology recognizes when it does facial recognition, or about how these technologies are being deployed and utilized. So in our view it is a necessity to have an intersectional lens that looks both at economic exploitation and at racial segmentation in workplaces, and at how these technologies play out and impact people. And I think it's important too, I mean, everything Andrea said, yes, this is all great.
And I think it's important to remember, when we're looking skeptically at these algorithms and at machine learning, as we should, that AI is only as good as the data we feed it. The only way it can engage in predictive work is by looking at what's happened in the past, and a number of the most popular algorithms used in a number of applications are getting their data from the internet. And the internet is racist and sexist in a lot of ways. There are all kinds of other things too, no shade on the internet. But when you train an algorithm on racist data, you get something like the famous Microsoft experiment with Tay, the Twitter bot who was turned into a Nazi within something like 24 hours. She started as just this, and I'm calling her she because Tay was supposed to be, I guess, kind of a girl bot, although we don't know how she identified, maybe she had some other identification, but whatever. Tay was supposed to just chat with people on Twitter. And because people figured out that Tay was this kind of experiment, they started tweeting racist comments at Tay, so Tay learned how to talk to people from people sending racist comments. And pretty soon she was talking like a Nazi. You can read about Tay online anywhere; it was a really interesting moment. And that's because Tay is machine learning, and a lot of algorithms, for example the ones I was talking about that are used in job interview situations, are also being trained on racist data. So you can see how it would be very easy for an algorithm that's analyzing facial expressions to decide that anyone with a brown face is less desirable for the job, not because the algorithm has any kind of thoughts, but because the algorithm has been trained by people who've given it racist data.
Yeah, this is a huge, huge issue. There are tons of awesome people working on it; it's a huge and growing field, and I'm super grateful to be alive to see people actually doing that critical work and not just cheering for the idea that algorithms are perfectly objective, which they're not. Shifting gears a little bit, another question from the chat is about the relationship between these technologies and the Great Resignation that's been going on and being reported on over the last couple of months. Obviously COVID plays a role in that, and just the sheer strangeness of the COVID economy, but how is it all connected to the ways people might be reevaluating their work and seeking out other opportunities? I think there are a number of intersecting dynamics happening here. One is that during this period there was a huge leap forward in the integration of these technologies and practices into workplaces, as workplaces had to shut down and move to more virtual work. Pretty much across the board, there was a real quantum shift. So if you use the metaphor of the frog in boiling water that doesn't notice the water heating up, it's like somebody just jacked up the heat. It wasn't something new, but there was an increased intensification of it.
And then I think there was a really interesting disconnect for the people who had to continue to work, in how important they were told their jobs were: all of a sudden they were essential, right? These are people who for years we said, oh, those are just jobs teenagers do, or jobs for immigrants; there was just a very dismissive attitude toward these jobs, and then all of a sudden everyone was saying, well, these are essential. So you simultaneously have job quality degrading and this elevated societal recognition of your job. Of course that's going to cause some cognitive dissonance, and I think that's why we see more people participating in and supporting labor activity, and more people looking at and exploring other options. And it's a thrilling conversation to have. The challenge is that it's not yet translating into mass collective action by workers, which is what I'm most excited to see start to happen, and I believe it will happen as we start to emerge out of the crisis, because we know from history that when things start to recover, people's expectations rise and there's more militancy and action. So I think we'll start to see that. And, sorry, the other side of it is that there have not been any structural shifts in policy; that just hasn't happened yet, so the policymaking is lagging behind what's happening on the ground. Yeah, I don't have anything to add; I think Andrea said it perfectly. I'll close out with one last question before handing back over to Kate. I would love to hear one thing that each of you thinks could change the trajectory of artificial intelligence for the better.
And I'll pick a really wonky one to start off with. Andrea mentioned that a lot of these technologies are government funded, and one of the things governments fund is performance against a metric called human parity: how well can this machine do what a human does? I would love for us to come up with new metrics, fund the creation of new metrics, fund performance against new metrics, and make sure those metrics are designed to be in alignment with social benefit, as opposed to just automating away people's jobs. So that's my wonky pick. Andrea, you're off mute, so I'm going to call on you first. I think we assemble a council of workers representing all different industries, give them a team of 100 AI researchers, and say: what do you want to build that's going to make your life and job better? That's what I would want to see. Well, I want to see that too, but I thought the question was also kind of speculative, like what could really change the trajectory here. So here's my idea as a science fiction writer, and I've written about this in a couple of places: the moment at which AI achieves some kind of human-equivalent intelligence, sometimes people call that general AI, or general intelligence, we will know, because the machines will go on strike. So that's what I'm looking forward to: the general robot strike. Well, the general robot strike is a great note to end on. Thank you both so much; this has been enlightening and lively, and I really appreciated hearing your thoughts on this issue, which is touching us all. Before we sign off, I'm dying to know: Faye, do you have a question? Let me go ahead and unmute you just in case you do. Not sure if that was an accidental hand or not. Okay, you're off the hook; I'm going to let you go. Thank you. Thank you so much, Andrea and Annalee, and thank you so much, Stephanie, for leading us in this fantastic conversation.
You can watch this again on YouTube, and you can tell your friends to watch it on the library's YouTube page; I will drop a link in the chat. Please do check out that fantastic reading list. As a librarian, I am proud to be posting links to it and uploading it for everybody, so thank you all so much for putting those recommendations together. Most titles are available from the San Francisco Public Library, so take a look there. I'm dropping the list one more time for everybody who missed it, and also in the chat is the link to our YouTube channel, where you can watch this conversation over and over and refer your friends, because this is an issue that is hitting us all. Thank you all so much.