All right. Hello, everyone. Can you hear me? Awesome. Hi. My name is Rebecca Tabasky. I am the director of community at the Berkman Klein Center for Internet and Society, and it is my great pleasure to welcome you all here today in celebration of our dear Ifeoma's book, The Quantified Worker: Law and Technology in the Modern Workplace. Thank you so much for taking time out of your Monday, and a sunny one at that in November, which we'll take, to join us here in this conversation. It is cosponsored by the Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics at Harvard Law School and by the Center for Labor and a Just Economy as well. We thank them for cosponsoring this incredible event in celebration of this amazing book. Ifeoma Ajunwa is a faculty associate here at the Berkman Klein Center and a fellow from many years ago, who is the AI.Humanity Professor of Law and Ethics at Emory Law and the founding director of the AI and the Future of Work program. This year, she's a resident fellow at Yale's ISP, and she has worked so hard on this book. We're so, so grateful that you're here sharing it with us. Ifeoma is going to be in conversation with Professor Sharon Block, who is a professor of practice and the executive director of the Center for Labor and a Just Economy at Harvard Law School. And following that part of the conversation, Ifeoma will be in dialogue with Professor Yochai Benkler, who is the Berkman Professor of Entrepreneurial Legal Studies at Harvard Law School. He's a faculty co-director of the Center here at Berkman, and he is the faculty director of the Program on Law and Political Economy at Harvard Law School. This event is being webcast and recorded. Questions you ask will be recorded and kept as part of the recording for posterity, but faces are not being recorded. Folks here in the room and joining from afar, thank you for being with us. 
We know there are a lot of people joining us from afar, and without further ado, I'm going to hand it over to Ifeoma to take it away. Thank you so much. All right, so thank you, Becca, for that wonderful welcome. I always feel like I'm coming home when I'm at the Berkman Klein Center, because I was a fellow here from 2016 to 2017, and they've never been able to get rid of me since. I've been a faculty associate ever since, and it's always so wonderful to be here with the Berkman Klein folks, and I want to welcome you all here today for this book talk. This book took about six years total in the writing, off and on, and there were several times when I thought, why am I writing this book? Nobody cares, nobody will ever read it. So I am delighted that actually people do care now that the book is published, and I am delighted that people are reading it. I'm not so delighted that it remains relevant in terms of the problems that it describes, but, you know, the struggle continues, as I say. So thank you all for being here. I hope you enjoy this book talk, and we welcome your questions. Great, well, let me extend my welcome too. I'm so excited for this. I feel like it has been in the works for a long time, and, as you intimated, the whole world has caught up with your attention to this topic. And as we were talking about before, one thing I really love about the book is that it draws our attention not to the topic that I think a lot of people focus on, which is job displacement as a result of AI, but to how AI is going to change the experience of work, which I think will be even more profound, as you explain. But having come out of the Biden administration, I have to start this conversation by asking you about the folks who are now catching up with you. 
Just recently, the Biden administration put out a really monumental piece of work, an executive order laying out how the administration is going to approach the many different questions related to AI. So I would just love to hear what you think about those pieces of the executive order that are directed particularly to the Secretary of Labor, Julie Su, about coming up with principles and best practices on labor standards and job quality, and the use of data about workers and transparency. Where do you think they got it right? What do you hope to see? What else do you wish had been in that executive order? Right. Yeah, you know, first I want to say that I was thrilled that President Biden put a spotlight on these issues of AI in the workplace. The executive order, I did feel, was a little lopsided in terms of focusing on the big existential issues of, you know, bioterrorism. But the fact that there was a section dedicated to protecting workers' rights was very heartening. And I want to commend all the people that had a hand in the executive order, people like Alondra Nelson, Safiya Noble, Joy Buolamwini. I truly appreciate the vigorous activism in getting such protections made part of the executive order. That being said, I was a bit disappointed in that I felt that the executive order did not necessarily have teeth, right? I felt that it was more advisory in a sense. I think it's an excellent first step. The fact that the Department of Labor is named in it, I think, is excellent. But I think the EEOC could also have been named and given direct guidance in terms of what to do, especially given the rise of automated hiring and all the data that is being collected as part of that. 
Similarly, the FTC should have been named in terms of how to regulate how automated hiring systems are marketed and the claims that are made in regard to them. So, you know, I guess my overall sentiment is: great first step, I wish it had more teeth, and there's still a lot more work to be done. And I am in conversation with people in government and at the White House, and I hope to continue to work towards more regulations with teeth. Well, that's great to hear that you are part of the conversation. I assumed so, but great to hear that confirmed. So, I think one of the challenges in thinking about AI in the workplace is really figuring out what is truly new and what's just a different method of doing the same old ways of controlling workers in the workplace. And so, I wanted to turn our conversation to this idea of Taylorism, which, maybe for those who aren't labor experts, you can explain, and then move on to talk about the difference between old-fashioned Taylorism and what you describe in the book as digital Taylorism. What's new about it? Why should we be more concerned when it is facilitated in this digital manner? So, Taylorism is essentially the idea that you can quantify job tasks, that you can study how work is done and come up with a standardized, more efficient method of getting job tasks completed. Taylor would walk around in a factory and watch and observe workers in terms of how they completed certain tasks, and he would make notes and use this to essentially create a standard way of accomplishing job tasks. How digital Taylorism is different is that now the focus isn't merely on the job task; the focus is also on the worker themselves. Now the focus is not just about quantifying how the job task is completed, it's actually about quantifying the worker themselves, with this idea of figuring out what is an ideal worker. 
So, it's moved really from the idea that anyone can be taught to complete a job task to this other, I think, more pernicious idea that there are ideal workers, that some people are more suited to work than others, and that it is a matter of finding those people. Can you give us some examples of how this digital Taylorism is actually being implemented in workplaces today? Right. So, it's implemented in various ways. Probably the most accessible way to describe it is through the use of automated hiring systems. Automated hiring systems are oftentimes presented or marketed as an anti-bias intervention, right? They're marketed with the idea that they will remove bias from the hiring process, because it is understood that human beings have bias, whether conscious or unconscious. And then these automated hiring systems are marketed as somehow more objective than humans and somehow able to remove the human bias from the equation. But this is actually very much a misconception. It's really a false impression, because if you look at the development of automated hiring systems, they were never meant to diversify the workplace. In fact, based on an empirical study that a co-author and I completed, the tagline at the beginning of marketing automated hiring systems was "clone your best worker." So, it was meant to actually replicate the kind of workers you already had in any given workplace, with the idea that of course you already have the ideal worker, and it is a matter of just cloning them, finding more people like them. So, in these very insidious ways, right, automated hiring systems can actually replicate patterns of discrimination already in place in the workplace while obfuscating that this is happening, because of the way they're presented as objective, or because of what Danielle Citron has called automation bias. So, I want to follow up on that and pick up on what you said about humans being imperfect hiring actors. 
I think a lot of people have spent a lot of time thinking about implicit bias. So, even if you can do the education and whatever so that people don't intend to hire in a way that reflects a bias, there's still this problem of implicit bias. I don't know anybody who thinks that we can actually get to a point where humans are perfectly fair. And so, you've described how those biases are playing out in AI-assisted hiring. Is there a potential, though, if done correctly, for there to be an improvement by taking humans out? I think a lot of people's inclination is: people are very imperfect, so do we have the potential to improve on the hiring process through the intervention of AI? Right. So, that's an interesting question. And obviously the sentiment coming from AI developers is: humans are the ultimate black box, right? Whereas with AI systems, perhaps, with the appropriate auditing mechanisms in place, you can actually understand where the bias lies. I think that's a comforting idea, but I think we don't want to be complacent about it, right? Because as it stands, corporations are using automated hiring with no mandate for audits, right? With no mandate that they actually question the training data that's even going into these automated hiring systems. So, I think we would need a very thoughtful, deliberately designed system, right? One that has all the guardrails in place to prevent bias, for automated hiring systems to actually do what they're being advertised as doing, which is removing or, frankly, reducing human bias. So, in the book, you talk about the importance of employer accountability for the bias that is built into algorithmic hiring. 
I wonder what you've thought about, or what you think about, the matching services, so LinkedIn, Indeed. Can you talk a little bit about whether or how they add a level of bias, and what you think they could do, or what more they need to do, to address that issue? Right. So, I didn't necessarily focus as much in the book on the matching services as I did on traditional employers and how they're using automated systems. But in some parts of the book, I do talk about LinkedIn, for example, and how some of the features of LinkedIn can actually facilitate age discrimination. So, I think what happens a lot with some of the matching services is there's a see-no-evil, hear-no-evil approach: we've set up the automated platform; how people use it, that's none of our business. And of course, there's no legal mandate for them to care. So, I think, frankly, legal mandates are a big part of this equation. If there were a legal mandate for LinkedIn to care about how people are using its platforms and whether it's facilitating different types of prohibited discrimination, then they might care. For example, with LinkedIn, the fact that they urge or nudge people to put in their graduation year from college, that's something that facilitates age discrimination, because recruiters can then search on those parameters. The fact that they nudge people to put up photos, recruiters can also search looking at photos. And that's just something to be aware of. And LinkedIn could choose, of course, to deploy interventions to alert recruiters: hey, if you're doing this, this is actually going to result in a disproportionate impact on people older than 40, which might get you in trouble in terms of employment discrimination laws. But once again, there's no mandate for LinkedIn to do that. So, it really still comes down to: what are the legal mandates to nudge behavior? So, that's a great segue. There are at least the beginnings of some regulatory initiatives in this space. 
I wonder what you are seeing out there, either through the kinds of guidance that the EEOC has put out, I know there have been the beginnings of regulatory regimes in New York and California, and then I'm particularly interested in what you think about what the EU is doing, because at least it seems as if they are, once again in the labor space, a few steps ahead of where we are. So, what do you think about those different efforts, and what pieces of them do you think are particularly useful in this space? Right. So, there has been some trickle of legislation and regulation around automated systems and how they impact workers. For me, I think the most effective, well, it remains to be seen, of course, it's still quite new, but I think what will be the most effective is what New York City is doing, which is this mandating of audits for all automated decision-making systems, because that's how you see the problem. You can't necessarily fix the problem if you don't even know what the problem is. The EU is all about the more omnibus-type laws about privacy. And, of course, there are issues of how do you even implement it, and it can be unwieldy. But New York City has this direct, specific intervention: you must do audits. And if you have something wrong, then you now know, and you have the opportunity to fix it. And that's actually something I had advocated for in the book, right, this idea of mandated audits with a safe harbor, right, for corporations to then be able to fix what they find. I think we need this, frankly, at the federal level. And I think that's a regulation that can have teeth, because once they have that information and they still don't fix it or take action, then it can embolden lawsuits, right? It can embolden plaintiffs and provide them with the evidence they need to succeed. Do you think, do we have the experts available to be able to do that kind of audit? 
Like, how different is that? What different skills is that kind of auditing going to take? Obviously, all kinds of employment tests already have to be validated through audits. How is this going to be different? And are we ready? And, I don't know, what can places like Berkman Klein do to get people ready? That's, I mean, that's such an excellent question, because I think with the recent congressional hearings, we've seen sort of a lack, right? We've seen a wide gap in the knowledge and skills required to really even comprehend what the AI revolution is. So I think it really points to a rethinking of legal education. I think thus far, many law schools are still treating AI and the law as a fringe sort of area. Even with labor and employment law, I feel like it's still seen as a little bit fringe. And I think that's just wrongheaded. Ninety-nine percent of us work for a living. So, you know, these AI systems in the workplace will impact us sooner or later. So I think we do need that rethinking of legal education. Do we have enough AI and law courses? Do law schools even have AI and law courses? Do we have enough lawyers coming up who can fill necessary positions at the EEOC, the FTC, in Congress, to deal with all the AI and law issues that are in the pipeline? So yeah, excellent question. We need AI-trained lawyers, bottom line. Great. Well, and that, again, is a great segue to my last question before I turn the conversation over to my friend, Yochai. So I want to switch a little bit to workers who are in the workplace, not just the hiring process: workers who are being surveilled by employers. I think the most surveilled workers, certainly that I've heard about, are ride-hail drivers, who click on an app every time they get in the car, and then Amazon warehouse workers. And I think we've heard a lot, there's been a lot of reporting, on how they are tracked. 
One phenomenon that I know I'm hearing more about is workers who are being discharged, or in the case of ride-hail drivers deactivated, through the intervention of the algorithm, without contact with a human being, which seems, I mean, I'll just say, terrible and dystopian on so many levels. But one level is how workers can even understand or contest that kind of adverse action. And so, picking up on what we were just talking about, what do we do about that information asymmetry, where an algorithm says you deserve to be fired? What do you see as a way for workers to be able to say, no, you're wrong? So I think, you know, fundamentally, that issue is both an information asymmetry but also a power asymmetry. And I think the crux there is that we have at-will employment in most states in the United States. And I think the only way we can start to correct that power imbalance is really through unions, right? Workers who are unionized could actually have a say in terms of: this is how you can terminate someone; these are the procedures that need to be in place before someone can be let go. But otherwise, legally, workers don't necessarily have recourse unless they can prove unlawful discrimination. But if it's just a matter of the algorithm said you weren't as productive, or that you missed work when you didn't and you can't prove otherwise, then you don't even have a space for contestation. So I think, as I advocate for in my book, unions can have an outsized role in this revolution, in terms of being able to dictate how workers are hired, being able to dictate how worker data is collected and also appropriated, and then, finally, being able to dictate procedures for firing. Great answer. Okay. So thanks for this really wonderful book, rich and thoughtful. 
You asked me to focus, in our part of this conversation before we open up to everyone, on the more theoretical aspects and on the interaction with law and political economy. And so we'll perhaps have a few opportunities to go back to some of the things that Sharon focused on. One of the things I like about this book, as Virginia Eubanks wrote so nicely, is that you start with the history rather than just with the present: not out of nowhere, but out of a rich, long history. And you give examples in detail from Taylorism, not just as a metaphor. You talk about the Ford sociology department and the effort to understand the workers at the cutting edge of the social science of the time. So I guess the question to me is: what's your understanding of the role of quantification in the distribution of power between managers and workers? What role has it played in the past? And you focused specifically here on the ideal worker, and you have this whole chapter on personality tests, and then on video interviews. I have this line from the opening: the video interviews you describe as "the latest salvo in the war to quantify not just the output of the worker, but also the Gestalt of the ideal worker." You compare these to phrenology, to eugenics. So give us a little bit more about what you mean about the power struggle between workers and managers. What role does quantification play, what role social science, and what is this Gestalt of the worker? What does it mean? So I would say that quantification can be thought of as a technology of control. So not even just necessarily about power, but about having ultimate control. 
And you know, Michael Burawoy, the sociologist, has theorized that even just the factory line, the fact that people have to line up and do their work right next to each other, is a technology of control, because you can see what the other person is doing and you feel the pressure to keep pace, right? Similarly, I would say the technologies of quantification are about control. It's about exacting the utmost labor from workers while exercising complete control over the workers such that they cannot actually organize and resist, right? Because if you're quantifying them to the minute detail, they don't have the time to organize. They're so preoccupied with meeting your quantified standards, they cannot even find the time to organize. And the Gestalt of the ideal worker is really this idea, also growing out of quantification, which is all about essentialism, right? That the ideal worker exists out there, that it's not made but already there, to be found, right? And so it's this idea that you can separate the ideal worker from other workers, and that being an ideal worker is somehow innate, as opposed to something that can be taught, right? And quantification is a means to find that. So can you say a little bit more about that last piece? Because I could easily imagine a situation where you understand Taylorism as measuring the ideal outcome and then getting there through training and enforcement. So if you think of Karen Levy's description of its use with truckers: no, don't stop here, go over there. Where do you see this? Is it a necessary connection between quantification and finding an ideal worker that's already there? Or is it something specific about the way in which quantification works in those fields you've looked at that doesn't take advantage of the possibility of minute monitoring and then pushing workers to be like the ideal worker? Right. So there's this idea of pushing to be like the ideal worker. 
But inherent in that is the idea that the ideal worker exists, because you have to have that standard in order to be pushed towards it, right? And also the idea that there are some people who will never meet that standard, and you just basically want to eliminate them, right? From even entering the workplace. That's where automated hiring comes in. And, you know, I would see quantification as an evolution of Taylorism, because yes, Taylorism was focused on minutely defining the tasks and nudging workers, right, toward the correct way to do the task. But quantification goes beyond that in thinking that there is this ideal worker as a standard that all the workers ought to aspire to, and also inherently thinking that there are people who can never be the ideal worker and should just essentially be eliminated from the workplace. So unlike Taylorism, you have workplace wellness programs, right? Which are really a risk transference, right? And unlike Taylorism, you have genetic testing as part of workplace wellness programs, right? And the idea is really that there is this ideal worker. And you start to see that in Fordism, actually, right? Because Taylor didn't have the sociology department, but Ford did. And he was actually making it about this idea. He actually said, I'm trying to create the ideal American. Of course, "worker" was attached to that. But he did have that ideal in mind. So this is actually exactly the next thing I was going to try to move us to. And that's the role of science as ideology in the legitimation of all these new structures of power. And again, you come back to it with the wellness programs, with the genetic testing, with the sociology department; you connect the video interviews to phrenology and eugenics. And then you have this amazing statement early on. This isn't in the version I have here, so I'll paraphrase. 
You talk about how unique it was that progressives uniformly, labor advocates and progressive politicians, all thought that scientific management would actually create a more harmonious workplace. Exactly. So can you talk a little bit about how you see, in this trajectory of history, science as a legitimating ideology for a particular pattern of power distribution? Yeah. So in the book, I actually found a letter from the archives of Louis Brandeis corresponding with Taylor in admiration that he had created this amazing system that was going to allow workers to be more productive. And if you read The Principles of Scientific Management, the way that Taylor presented scientific management was that it was actually about bringing together workers and managers, bringing their interests together. It wasn't necessarily about managers exerting power over workers. And therefore, right, labor unions, labor advocates were all in favor of scientific management until, right, it was actually put in practice. And then you had the backlash from the actual workers who experienced the way that scientific management was used, in a very heavy-handed way, right, to extract labor from them in a way that they felt was actually reducing their personhood, diminishing their autonomy. And that's why you had those congressional hearings. But you're correct in that a lot of times, when we have a new system or a new way of organizing a political economy, a lot of what props it up is this idea that it's scientifically better, right? Similarly to what you see with automated hiring: oh, it's more objective, right? 
And I could see how, for perhaps some workers, they might even have thought that scientific management was going to be better, because now you have a set system, you can show exactly how much work you did, right, versus having a manager who might like someone or favor some workers over others, etc. But I guess what was not taken into account was the sort of power abuses that scientific management would introduce. Because as part of scientific management, we also then got the Pinkerton detectives, right, who were deployed to bust unions and to keep workers in place in a very violent manner, actually. So there's a strong theme throughout the book, implied in everything we've talked about up till now, but I'd like to put it on the table. And that's your theory of race and racism and how they intersect and interact with class, with gender, with disability. You could easily imagine, no, not just imagine: the standard neoclassical model says discrimination is a form of inefficiency, because you're leaving talented people on the table. All of this quantification, all of this scientific structure, and yet race plays here a central role. So can you talk a little bit about your theory of how race and racism operate, but also the way in which they interact with these other dimensions of power along class, gender, disability? So in some ways, you can think of race as operating to dictate who will be the surplus class, who will be the workers that are warehoused, right? And this is something that other scholars like Michelle Alexander have noted when it comes to, for example, mass incarceration: that it's a way to warehouse what is considered a surplus labor class. But you can also see how a lot of what is now being used as part of management actually came out of the plantation system. 
So a lot of our modern-day accounting practices actually came out of the plantation system, in terms of how the ledgers were established, in terms of how productivity was quantified. And so a lot of it is this idea that you are creating a class with capital, and you're creating a class that will work to perpetuate that capital. And the question is, who is assigned where? The way race has played out in the United States is that racial minorities are assigned to be the ones replicating capital for others. And quantification allows for that to continue to play out in the way that it stymies mobility, right? In the way that it refuses to take into account other exigencies that could actually enable social mobility that would benefit, right, what is considered an underclass. So it's interesting that five stories down, Adaner Usmani is giving a talk on going from plantation to mass incarceration, unfortunately at the same time, because I'm sure there are lots of people who would like to have been involved. In your response earlier to Sharon, you talked about unions as part of the programmatic proposal. But before we go there, I actually want to ask about your conception of the relation between union power and workplace democracy or ethical companies. At the beginning of the last part, you write: "the one-sided power dynamic of modern work means that workers are most often helpless to satisfy their self-interest under the yoke of employer domination." Instead, you propose an orientation toward workplace democracy or workplace republicanism and, more generally, that workplaces begin to shift towards Rawlsian justice at the institutional level. Can you say a little bit more about what that means, what the theoretical framework is, and how it relates to specific programmatic proposals like union power as opposed to co-determination, as opposed to other institutional structures? How do you think about those? 
This actually relates back to your question about, for example, what the EU is doing. So something you see in the EU is this rise of works councils, where workers in a given workplace will elect people to be on the council and advocate on their behalf directly with management. So not necessarily a whole-scale union, but within workplaces workers have representatives. So that would be workplace republicanism really at play there. In terms of the role of unions, I actually think the role of unions goes towards the data that's being collected, for example, as part of the AI revolution of the workplace. There's so much data being collected, and I think that data could be useful if wielded for the benefit of the workers, as opposed to wielded as a lash to keep them working. I really dislike the term "data-driven," because I'm like, think about what you're saying with that term. You're saying you want to drive human beings like they're animals, like you drive a horse, and that you're basically going to use data as a whip to whip workers into the productivity you want. And I think we can rethink that. We can actually think more "data-enabled," where unions do get access to the data that's part of the workplace, and they use this to talk to the workers and discuss: okay, this is your productivity for this and that, and how can we enable you? What are the sorts of environments or structures that you need to be better workers or to be more effective? And to me, this is also part of meaningful work. I think for human beings, work is meaningful, and this is not just for me; this is also a term in organizational theory. Work is meaningful when human beings feel like they have some measure of autonomy in how they perform that work, and when they feel that they have some measure of say in how that work or its products are used, or some measure of say in how the workplace is even designed, how the work structures are set up. 
And I think with unions having access to the same data that employers have, you remove that information asymmetry and you start to correct the power imbalance, in that unions can then advocate better for workers to create environments that workers can enjoy working productively in. And actually, a lot of the data out there shows that workers are more productive when they enjoy coming to work, when they have environments that best suit their needs in terms of getting work done. So we don't need to drive them with data, but we can enable them with data. So I have plenty more questions, but I think there are people here who want to ask, and I'm wondering, as we're going through, it's almost a question to both of you, because of your writing in your work and, Sharon, because of your rich experience: what's realistic politically from these visions? If you had to identify what you consider to be, and you talk about audits as central, what do you consider to be plausibly achievable in the institutionally constrained environment of the United States? And I'd also be really curious, Sharon, to hear your view from halfway on the inside on that particular question. I might let Sharon go first. I'm very interested in hearing from your experience. To me, obviously, the non-immediately-achievable answer is labor law reform, because even workers who have unions don't have a right to bargain over the introduction of technology into the workplace under current labor law. So even when our system is working, it won't work. You're a believer in the Writers Guild doing the right thing. Right. I mean, they were able to negotiate over the introduction of AI in the workplace because they had the power to do it, not because the law gave them that power. It came from their role in their sector, the fact that they have a scarce skill that the companies finally realized they couldn't do without.
So understanding that most workers don't have that leverage in their sector, I think there are some interesting possibilities, especially for workers not covered by our federal labor law. Here in Massachusetts, for example, and this isn't focused on AI, but it would enable more bargaining over AI: because rideshare drivers are misclassified as independent contractors, there is an effort here in Massachusetts to get them collective bargaining rights. And because these issues are so present for them, if that were successful, they could actually start bargaining over these issues. That's a small answer, but we could start to learn what it looks like to give workers the right to bargain over AI and how it operates to control them, and what some of the possible pathways are, so that when our politics change and we have the ability to regulate more broadly, we would know a lot more. So for me, that's just a really present, urgent, whatever word you want to use, possibility. Now, sadly, it looks like it's not going to actually happen here in Massachusetts, but yeah, I would say that's what I'm keeping my eye on in this space right now. I would say realistically, I think a lot can happen within agencies. I mean, we've all seen the FTC and the work it has done already. So I think, for example, the EEOC could take actually bigger actions when it comes to automated hiring. So not just advisories, but more mandated things, saying, we believe this violates Title VII if, for example, you don't do audits, and you can be sued. Just putting out stronger language, not just advisories, but specific guidelines. I think that can be a huge help. Yeah, so many questions. Yeah. I have a suggestion and a question.
Are you aware that the MIT Schwarzman College of Computing has a program? I went to the first one; they had the inaugural one earlier this year. It was on the Social and Ethical Responsibilities of Computing, and if they hold it another year, I would really love to have you give a talk, because they talk about medical, legal, and ethical issues, you know, it's not just all tech, and I would highly recommend that you bring this up, because obviously the issues of unionization, quantification, and AI came up. AI was the topic for most of the day, so it's certainly not going to go out of fashion. My question to you is regarding the perfect worker. I know employers have a lot of control in terms of monitoring people within work, but is there an issue of starting to invade people's privacy, like the excessive monitoring of people's social media, protests, and their political leanings? It's one thing when you're a teacher and you're doing something like pornography; that's kind of obvious. But there are certain things where they have really no business or no right to know how you spend your life personally, and nevertheless they have made that contingent upon your employment. So, great question. Prior to writing the book, I co-authored a law review article with Kate Crawford and Jason Schultz looking at the idea that there is this limitless worker surveillance that exists under American law, which is the idea that if you're working for a private corporation, private corporations are, as Elizabeth Anderson wrote, treated like private governments, and therefore have really carte blanche in terms of the surveillance that they can enact on their workers. So ironically, if you work for the government, you have better privacy protections than someone who works for a private corporation, because at least working for the government, the Fourth Amendment protects you from certain invasions of privacy.
So that is a big problem, I think, in American law, that we don't have privacy protections for workers, or really for anyone. I mean, privacy rights are really seen as a penumbra of rights. They're not necessarily seen as a strong constitutional right. And as recent Supreme Court holdings have made clear, it is very much under debate how far they actually even go, or if they exist at all, apparently. So yeah, unfortunately, there are no privacy protections for workers, and that is the problem. Thank you. Wonderful talk. I wonder what your response is to the following two musings regarding the entire issue. I come from science and engineering, and I've noticed in far less complex systems how data-driven control fails: you go a little bit off your data, and the drive falls apart and crashes into pieces, whatever it is you're describing. So which data are you using is just an interesting question. That's musing one. In fact, at an Israeli company I know, they have a psychologist, and he said, we have too many of this type, we need all of them; we need a geek over here, and we have too many geeks, we need that one. So there's no ideal worker; there's an ideal workforce, which is all of them together. And it happens to work well, according to the testimony of a friend who works there. The other thing is, given the complexity, the problem with systems like that, or with the controlling guidelines, is the intellectual laziness of people. They use only this and that data, whether for hiring or for government regulation. I mean, experts would give you as many horror stories about government or trade unions or employers or anyone. So how do we allow for that complexity of remedies and rules? Thank you. That's a great question, touching on data and how effective or even appropriate the data is. We do have in our audience an expert who has written on that, Anupam Chander, in the back. And I hope I'm not misquoting him when I say that he has noted that the data that is fed into these systems matters quite a bit.
Because if the data is not neutral, if the data is tainted by past historical decisions that may have been racist or sexist, and if you are using that data without interrogating where it came from, how it was collected, and maybe even correcting it, then all you're doing is replicating an unjust system. So yes, I think that's a great question about being data-driven: which data are you using to actually even drive the workplace? So thanks for a great talk this afternoon, and thanks for the shout-out. I and others are making similar points in that vein. Here's the question that I have for you. I feel a kind of sense of despair in this context. I'm not actually convinced by the possible approaches Yochai asked you about, the accessible ones. I'm not sure about those reforms. So you pointed to the possibility of worker councils, that kind of organization, which I think is really probably the most likely hope. But my worry is that once the data is available, any economic system, and it's not just even the economic system, it's everything, will want to use that data for these purposes. Worker surveillance has such an imperative around it that it becomes impossible to evade, and I don't know if any economic system moves us out of that. So that's what I would like some help with. Maybe the worker councils help temper that; you talked about more kinds of due process rights before people are pushed out, et cetera. Is that the best we can do? Yeah, I mean, it's a thorny problem, because I think what you're describing is Chekhov's gun, right? The fact that these technologies of surveillance exist means that people are always going to want to use them, right? And to collect this data that can then be misused against workers. And the question is, what can we do about that? So I think, of course, a big part of this is actually re-educating employers themselves, right?
So in one part in the book, I write, more data, more problems, right? Because I feel that there is a sense among corporations that they need to just collect data. There's this appetite for data, and they feel that they're not as competitive, or they're just falling behind, if they're not collecting as much data as possible. So frankly, I think a re-education around that is necessary. Some companies are getting it. I've presented the book at some corporations, and I've seen people nod when I talk about data minimization as actually a process, and they're like, yeah, we do that. So I think it's just that education of, you don't necessarily need to take screenshots every five minutes or every five seconds of the worker, because whatever you're learning is much less useful than the fact that you've now interrupted that person's flow. You've now intruded on that person's privacy. You've now made that person feel less like a human, because you're watching over their shoulder. So yes, I appreciate that question, because it's not necessarily just the legal interventions or the union interventions; it's also about changing ideologies, and that's the ideologies at the top, for employers and corporations. Incredible. As always, thank you to the three of you, and thank you all for coming. We have books that you're able to take and have signed. Thank you all so much.