It is my pleasure to introduce Professor Ifeoma Ajunwa, who is my co-fellow at the Berkman Klein Center. Professor Ajunwa holds a JD and a PhD in sociology from Columbia University. She has taught at several law schools. Most recently, she was a teaching fellow at Harvard Law. Starting in July, she will have a joint appointment as an assistant professor at Cornell's Industrial and Labor Relations School and faculty associate at Cornell Law School. Professor Ajunwa's work has been published extensively in top law reviews and in popular media like the New York Times and the Harvard Business Review. She has presented her work at top international conferences, including South by Southwest, and has been invited to testify before the EEOC. Today, she will be presenting some research from her forthcoming book with Cambridge University Press, The Quantified Worker. If you would like to ask questions or participate online, the hashtag is #QuantifiedWorker. So please join me in welcoming her with a large round of applause.

Thank you very much, Crystal, for that introduction. And thank you all for being here, taking time out of this rainy day to trek to Harvard Law to hear me speak. Thank you very much. Today, I'll be speaking about The Quantified Worker, which is a work in progress. It's a forthcoming book with Cambridge University Press. So first, let me talk a little bit about what I mean when I say the quantified worker. When I use that term, I am referring to power relations in regard to how the American worker is viewed and also treated in the American workplace. And the power relation is one in which the worker is sought to be reduced to a set of numbers, numbers which are looked upon as risk factors in regard to the firm.
So the private firm looks at the worker in terms of what numbers would represent either benefit to the firm, in terms of productivity, or risk to the firm, in terms of human frailty related to, for example, disease or potential for disease. So it's important to understand that the path to innovation, which is the path to economic progress, has really historically been an exercise or a dialectic of power over quantified workers. It's really been at the expense of quantified workers. So first, you have the Pullman company town and several of its kind, starting in the 1880s. These were towns in which the workers were obliged to reside essentially within their workplace or near their workplace. And everything in that town was controlled by the employer, including where they shopped, where they got medicine, where their children went to school, such that at all times they really were under the surveillance of the employer. And then you have Taylorism, starting in 1911. Taylorism was presented as a scientific system of workforce management in which the work, or the job task, could be broken down into its minute parts and thereby studied for efficiency, studied for ways to accomplish that task in the least amount of time possible, with the aim of efficiency gains. With Fordism, however, Taylorism was taken to another level.
So Henry Ford, the American automaker, would stalk the manufacturing floor with a stopwatch as his workers put together the cars, timing the workers and rewarding workers who worked beyond what was expected, thereby pushing workers to the limits of human capacity and setting new standards of productivity, which was seen as progress. One thing that Ford did do was establish what he called a living wage, such that workers were paid wages that would entice them to come work at the Ford plants, but then they were subjected to really intense productivity requirements. Fordism also brought with it the sociological department. The sociological department was essentially a project of paternalistic surveillance on Ford's part, in which he tried to nudge, and I'm sure some of you have heard that word before, his workers to behave in ways that he thought represented Fordliness or were more American, because many of his workers were immigrants. And so the sociological department really were investigators, right? They were detectives who went around inspecting the workers' habits both in and outside of the workplace, checking who was smoking, checking who was gambling, checking who was drinking too much, and essentially creating norms of behavior that would conform to Fordliness and a certain type of Americanism. And then finally you have the Pinkertons. The Pinkertons came about in the 1850s during the rise of unionization, and they were once again a private detective agency, but this time geared towards breaking up unions. So the Pinkertons went around as essentially paid guards, a lot of people would say, for corporate firms who wanted to disband unions and dispel any sort of impulse to unionize on the part of workers.
Interestingly, the government passed an act, the Anti-Pinkerton Act, essentially mandating that no government agency could use the Pinkertons as part of its programs. However, the Pinkertons still exist to this day as a private agency that still works for corporate entities. So fast forward to today. Today, when we think about workers, the focus is not so much on the experience of work or the surveillance workers experience. Rather, we are somewhat anxious about the idea of machines taking over work. We're anxious that the robots are taking over. So automation of work is what we are most scared of. But in truth, if you think about it, jobs are not being automated at the same pace as workers themselves are being automated, in terms of what is required of them, in terms of productivity and comportment in the workplace. So in many cases, workers are being forced to pose as robots, as you see in this exchange with one of our Berkman affiliates, Tim Wu, who's a professor at Columbia. So what do I mean about this problem of humans being forced to pose as robots? Well, as some of you might have heard, recently some startups were exposed because they were using Amazon Turkers to do work that they claimed was being done by an AI. And that is really the ultimate problem with AI: AI has more promise than actual results at the moment. So a lot of the things that we attribute to AI have actually been uncovered as being done by human workers. Things like, for example, looking at photos and being able to determine who's in the photo. A certain company had a big issue with that recently. So guess who does it now? Human workers. But the manner in which those human workers are required to accomplish that job may actually put them in a system of precarity in which they are treated more like robots than human workers. And I'll get more into that a little later. But first, let's think about the mechanical Turk.
So some of you might have heard of the mechanical Turk, also known as the Turk. In the late 18th century, the mechanical Turk was really a fake chess-playing machine. Why was it fake? Well, it was presented or proffered as an automaton, right? It was proffered as this machine that could play chess better than any chess master. Later it was uncovered that it was actually not the machine playing chess; rather, a human was hidden inside the machine making the moves. So the mechanical Turk played against many statesmen, like Napoleon Bonaparte, and won, and was heralded as proof of the superiority of the machine over man, when the entire time it was various human chess masters hidden inside the machine who were operating it. This story is funny because some of you might also have heard of Amazon's Mechanical Turk. And that is where they get that name, right? Because the mechanical Turk is still with us. As much as we tout the progress of machines, there are still many things that the computer cannot do. And the Mechanical Turk represents a digital platform on which you or corporate entities can hire human workers to do the work that machines still cannot do, at a very, let's say, negligible price. So those workers are typically not paid minimum wage. They're not paid a living wage to do much of the work that machines cannot do, like sorting photos, categorizing different things. And those workers face really grueling hours just to make a living. And the interesting thing is that the Amazon Mechanical Turk is presented as empowering, right? So those are the false promises of the electronic intermediaries. We live in an era now where, with the advent of the internet, electronic intermediaries are presented both as affording innovative work solutions and as empowering workers. And I'll take us back to the theories of Georg Simmel, the German sociologist.
So Georg Simmel was the first to really think about brokerage theory, which is the idea that when you have a dyad, there can be issues in terms of their relation. But when you have a triad, meaning a third party, a broker or intermediary, that broker can actually help the two other parties relate to each other and accomplish their goals. However, Simmel also noted that you could have what is called a tertius gaudens, right? The third who enjoys. So this is a third party that really comes in to enjoy conflict or power asymmetries between two other parties and takes on those spoils for his or her own. In the context of electronic intermediaries, however, I want us to think of a tertius bifrons. What do I mean by that? I mean a two-faced third. This is the idea that electronic intermediaries are presented as being for the benefit of the worker, certainly, and sometimes also for the corporate entity. But really, I want us to think that electronic intermediaries actually only work for the corporate entity, and in turn receive benefit themselves. And this is why it's two-faced: because it's presenting itself to the worker as benevolent, whereas in reality it is quite often exploitative and working for the benefit of the corporate entity and also for itself. So the problem with electronic intermediaries is that they invite surveillance, right? Because when the worker is removed from the physical boundaries of the workplace, there is not an opportunity for a trust relationship to develop between the employer and the employee, such that the employer can give the worker the benefit of the doubt and give the worker autonomy in accomplishing needed tasks. Rather, the corporate entity or the private firm has an incentive to surveil its distant workers to ensure that they actually are accomplishing the tasks that they're being paid for. And that is why electronic intermediaries invite surveillance.
But they also enable surveillance, right? Back in the days of Fordism and Taylorism, you literally had to have a human standing there with a stopwatch to time someone. You had to have a team of detectives that you would then have to pay to stalk your workers. With electronic intermediaries, you don't have to do any of that. The electronic intermediary does it for you, for no pay essentially, right? So for example, Upwork is another freelance platform. And what happens when you hire an Upwork worker is that you have the ability to take screenshots of that worker's laptop or computer at any time. So while that worker is working at home, or wherever they choose to work, you can pop up on their screen and take a screenshot of what they're doing at any time that they are signed onto the system. This is a greater and more pervasive level of surveillance because, of course, they could be at home doing whatever else on their computer. And this is a surveillance that really has no boundaries. It is limitless, as my coauthors and I note in another article. It is important to notice, however, that the surveillance aspect of electronic intermediaries is presented in a way that masks its exploitative nature, that masks the power dynamics, right? So Professor Julie Cohen of Georgetown has talked about the gamification of the surveillance-innovation complex. And this is the idea that in the workplace, surveillance is really presented in a way to be seen as a game, and that lures the worker into a false sense of comfort with the idea of being surveilled. So for example, surveillance has become really a requisite for employment. We think nothing of the fact that the employer is privy to so much personal information. So not only are we surveilled for our productivity, we're also now being surveilled for our behaviors and also for our personal characteristics.
So for example, we have the advent of workplace wellness programs, in which we're really invited, prodded, some might say, to divulge all kinds of secrets about our genetic past and our genetic future, to divulge our health status, our health issues, and really to allow this information to be held by the employer for purposes that may not always be clear to us or made known to us. So what are we seeing now with the quantification of the worker? What we're seeing now is really a move away from Taylorism, where the focus was on the job task itself. Taylorism was about breaking the job task into discrete, minute parts that could then be studied. Now we're seeing a move towards mastering the body of the worker. So now there's the idea that if you can create healthy workers, at least by your measure of what a healthy worker should be, that will in turn translate to higher productivity gains. And that is why wellness programs continue to be a burgeoning industry, even though several studies have failed to show their actual effectiveness, because they still represent this idea of mastery over the body of the worker. And that is why, for example, as part of wellness programs, smoking can be hazardous to your job. One of the main functions of wellness programs is smoking cessation. So employees are called upon to admit whether they are smokers and then to join a smoking cessation program. But an interesting thing to note here is that smoking is not a protected category under anti-discrimination employment laws, such that in several states, including the state of Massachusetts, one can be fired for being a smoker. And then we also think of obesity when we think about wellness programs. Another major function of wellness programs is weight loss.
And it's also important to consider that obesity is also not a protected category for anti-discrimination purposes, such that employees are really being called upon to release information that could then potentially be used for their dismissal. And I won't even get into the metrics of wellness programs, because as some of you know, wellness programs use metrics such as BMI, which has been shown to be very inaccurate and really very ineffective for weight loss, but that's a whole other talk. And then you come to the issue of genetic discrimination. Common sense tells us that science fiction soon becomes science fact just with the passage of time. The movie Gattaca came out in the 1990s, and it depicted a society wherein genetic testing had become commonplace, such that an individual's genetic profile also determined their occupation, right? So an individual's genetic profile constrained what economic stratum of society that individual could ascend to. For example, people who had not undergone genetic manipulation for perfection purposes were deemed invalid and thus relegated to menial jobs. And the protagonist, played by Ethan Hawke in this case, wanted to be an astronaut, but he could not. It seemed a laughable goal for him to have, as he was an invalid. While that might seem very science fiction-y, it's important to know that since Gattaca, we now have a tool, CRISPR, that will allow us to manipulate human genetics such that you can now create designer babies, meaning babies that have had their genes manipulated to enhance certain features. And what does that mean for the human race going forward? But even before we get to that, we currently have passing through Congress a bill, HR 1313, which would allow employers carte blanche in acquiring the genetic information of their employees. So this bill, which is a GOP bill, would make clear that all employers are within their rights to ask employees for their genetic information.
And the problem with that is, of course, that this would reveal potentially serious health propensities, like, for example, a predisposition to breast cancer, which is typically associated with mutations of the BRCA1 and BRCA2 genes. And in the context of the health insurance system we have in the U.S., which is mostly employer-based, this might cause employers to feel that they cannot hire workers who carry certain genetic mutations that would be costly for the health insurance carried by the employer. So all this being said, what are the legal protections for the worker when it comes to the privacy concerns that arise from the quantification of the worker? You can think of an omnibus EU-style privacy law. You can think of an Employee Privacy Protection Act. Or you can think about an Employee Health Information Privacy Act. The problem with these solutions, however, is that they require government action. And as we know, we are living under a very different administration, where these types of government actions seem rather unlikely to happen. So perhaps what we want to start thinking about is the companies themselves, the private firms, the private entities: how can private entities take steps to ensure the data autonomy of their workers? For one, people have talked about how the EEOC could take on a certifying role, where it could certify that certain companies or private organizations are more privacy-friendly, thus signaling to the general public that those are good companies. Also, you can think of a privacy premium, wherein companies with greater privacy protections are able to attract more workers, or better or more highly sought-after workers, because those workers would be more willing to work for that type of company. You can also think about ethical guidelines for the use of employee data that private entities could willingly subscribe to. So those are some questions that we can grapple with in the time remaining.
I definitely look forward to your questions, and I want to thank you very much for coming today. Thank you. Yes, I guess she will pass you the mic.

What limits does HIPAA currently put on any of this, especially in the context of employee wellness programs?

That is a great question. The question of what HIPAA does is one that I've actually grappled with in another paper. And the problem is that the P in HIPAA does not stand for privacy. That's my favorite answer to give. The P in HIPAA stands for portability; it does not stand for privacy. So HIPAA is quite limited in actually protecting privacy, right? And also, the problem is that much of the information being collected does not actually fall under the purview of HIPAA. HIPAA is really focused on healthcare providers. It's not well settled that wellness programs are considered healthcare providers, particularly if they use third-party wellness program vendors that are not part of an insurance company, in which case HIPAA would not actually apply to them.

Given some of the power imbalances, I guess I'm curious about the role that you see for grassroots organizing and unions in addressing some of these concerns.

That's a really great question, because as we've seen in the past, unions have been effective at bringing about workplace changes. Think of the minimum wage, for example, or OSHA in terms of making sure that there are safe workplaces. But the problem is still getting unionized. As some of you might know, unions are in decline in the United States. And unions are also seen as something that blue-collar workers do. White-collar workers are still very much under the impression that they don't need unions. And the funny thing is, in some ways, white-collar workers are now being surveilled even more than blue-collar workers, just because of their proximity to the electronic intermediaries that allow for greater surveillance.
But yes, I do agree that unionization could definitely play a big role here.

Hi, Ifeoma. Hi. I was wondering if you could speak to the broader political economy, outside of individual firms, that may be driving some of the quantified worker movement. For example, a lot of the wellness program stuff doesn't sound to me like it's necessarily about productivity, but maybe more about saving money on healthcare costs, because healthcare costs are exploding and that's an easy way to save for the more financialized parts of the company that drive most profits these days. Or your example of Upwork, which is a gig platform, makes me think that if you're in a gig economy, you're probably more likely to consent to a lot of this stuff because you're sorting between various gigs. So I was wondering if you could talk about the broader historical arc that got us here.

Yeah, I mean, those are really great points, particularly regarding the precarity of gig work and the idea that gig workers really have to take what they can get. And therefore they can really be the canaries, right? The canary in the coal mine of what is coming next for all of us is really the gig worker, because they are at a stage where they don't have any power. Forget the power imbalance, they have no power, because it's take this job or we get someone else. And the larger political economy there is really, I think, a move towards a sort of hypercapitalism, where innovation is pushed at the expense of thinking about a social safety net for all workers. And health insurance is part of that, right? In America, we don't really have a feeling of solidarity towards the sick like you would see in European countries, where the cost of taking care of the sick or the infirm is seen as a burden to be shared by all citizens. Whereas in America, it's really more of an individualistic outlook, in terms of you pay for what you need and everyone pays for what they need.
And the problem then is that, with health insurance being essentially subsidized by the employer, that creates a moral hazard, right? The employer has an incentive to not hire people who would create higher health insurance costs, or to try to let go of people who later develop issues that would create higher health insurance costs. So I think, yes, thinking about the greater political economy of this context is really important. Right, right, exactly.

One thing that comes to mind is, if we don't have the political logjam in Washington or other world capitals, and there's actually good legislation that protects the workers in this country, what would prevent a company from simply going overseas, finding a contractor in a country with no laws, and doing exactly this, if not more so?

I mean, that is an interesting question. And of course, that's definitely the argument that corporate entities would proffer, right? They would say, well, look, we're trying to work in America, and to keep our costs lower than perhaps working overseas, we have to be careful about not having too high health insurance costs, et cetera. But European companies have managed to hew to the protections set in place by European governments for workers, even overseas in those countries, and they don't necessarily offshore all their work. And frankly, I don't think it's possible to offshore all work. There's still going to be a constraint on how much work can actually be offshored, and also on what types of work can be offshored.

Thank you. My name is Caroline Troin, and I'm from the Fletcher School at Tufts. I wanted to dive a little bit deeper into what you were just touching on. Are there examples where we can have the benefits of using data about workers to improve their work while also respecting privacy? And could you walk us through any examples that you've seen of this being done really well?

I think that's the ideal, right? That's the utopic ideal.
And to be quite honest, I have issues with that ideal. I think it's really a straw man argument that, well, we can still use workers' data, but in a way that would benefit them. Well, why not first ask the workers what would actually benefit them before you collect the data? It's always very top-down, right? What we think will benefit the worker is what we're going to take the data for. If the workers were actually engaged as part of the data collection, that would be a different story.

You spoke a lot about companies asking workers for their data, but even if you put restrictions on that, what is to prevent companies from going to data brokers and just getting health information from third parties?

Yeah, I mean, that question, of course, cuts to the heart of issues of data autonomy, right? Because the fact is, it's not just the employer that's getting your data. It's also your internet provider that's collecting your data from all the websites you visit. It's data brokers who are trawling the internet and mining the data that's put out there about you. It's a multi-pronged problem, I guess is what I'm saying, and what I'm presenting is just one prong of that problem and how to address it.

I have a question, actually. I was very struck by the idea of a privacy premium being a benefit. To me, that seemed to be the employer choosing not to do something, as some type of benefit added to the value package of what a worker receives from that employer. And that idea sits side by side with the idea of Amazon Mechanical Turk, where people are being asked to do these very routinized tasks but are being paid very little.
So what all this points to is: if we see a future where employers are choosing not to do things, rather than adding anything generative to some type of benefit package, and on the other side people are being treated like robots, what does this all mean for the value of labor itself, whether the tasks workers are asked to do are routinized or more complex?

I think that is the bigger question of our society, right? How do we value labor, and how do we value human dignity in labor? And I put out the question of the privacy premium, but it frankly makes me uncomfortable to think about, because it's really presenting this idea that privacy is yet another economic good as opposed to a fundamental human right. Because when you think of a privacy premium, you're saying this is an economic good that can be traded in the context of employment. This is an economic good that can be conferred or not conferred by the employer. Whereas shouldn't privacy, or isn't privacy, more of a human right, more of a dignity that is already afforded to all human beings by virtue of their being human beings, rather than by virtue of the kind of job they have? So yeah, I mean, these are bigger philosophical questions, I think, but ones that we should still grapple with, because they determine, of course, what type of society we're going to live in. Yeah, other question.

Thank you. It was such an excellent talk, but a little bit depressing. I'm sorry. So I'm just wondering, what do you recommend that people do? Are laws the answer? I mean, you can't unionize people working on Mechanical Turk. But what suggestions do you have, and are there any hopeful things on the horizon?

Yeah, unfortunately, I have a very Crichtonian view of technology, which is that anything that can go wrong will go wrong. So I don't have a lot of rosy proclamations to make, but what I do have, I guess, is a call to arms, right?
Which is that we shouldn't be complacent that this is happening, right? We shouldn't all sign up for our wellness programs and give up our data to basically get $500 back, when that data is worth a lot more than that. Each individual action is cumulative, right? The fewer people who are willing to do that, the more the private entities are going to start paying attention. And so resistance is not futile, I don't think. I think resistance can still make an impact.

Hi, thank you so much for that talk. So I think, as others pointed out, it was a rather sad outlook, especially with regard to how automation in this sense reproduces inequality and basically gives the more privileged the power to choose jobs where they're not quantified as such, or to opt out of these programs. But the question I am asking myself is: can we do something as customers? For example, you said that we should just not use these programs, but I would assume that a lot of people just need the money and will opt in no matter what. So is there something that we can do on a very individual level, besides asking for more privacy politically, like when we're buying something or so?

Well, I mean, some people are already taking action in terms of trying to correct some of the injustices that can result from this quantification of workers. So for example, whereas on Amazon Mechanical Turk you can pay your worker as little as a dollar, some people are now only agreeing to pay workers minimum wage for the work that they do. Certain universities whose academics use Amazon Turk workers for their experiments have signed on to a pledge that they're going to pay minimum wage. So that's something that individuals or organizations can do, you know, from the start.
And that's also something we can do as consumers: we can write letters to Amazon and things like that. But other than that, like you said, there are limits, right? I said, glibly, don't join a wellness program. But for some people, $500 is a lot of money, and that's a lot to pass up. Even though you might be aware that your information is being collected and could potentially hurt you in the future, in the present you need that $500. So those are issues that we have to think about in terms of how the working poor are treated in this country.

So in this vein of looking for the positive, I kind of want to come at this from the other side, which is that I'm completely with you about the dangers of aggregating all this data about people and its misuse. But to play devil's advocate a little bit, a lot of the things you're describing that lead to the data aggregation, I see as at least potentially positive. Let's just take the wellness program as an avatar. People want to get healthy. People want to be exposed to resources about healthier living or diet or exercise or whatever it might be. And I think that, in microcosm, that's kind of the problem: a lot of these things could be really good. I mean, you better believe it, if you could modify my genes so that I never got the flu or something like that, I'd be in there. So is there a way to separate these, to say you can have, as employees, the benefits of an employer-sponsored wellness program, whether it's free gym memberships or healthy food in the cafeteria, whatever it might be, but detach that from aggregating data and the shocking invasion of privacy as it moves throughout your life?

That's a great question.
And so I wrote another article with Kate Crawford for the Journal of Law, Medicine and Ethics, available on SSRN, in which we think about what kind of ethical framework could exist for wellness programs to still function and serve the actual worker. And some of the things we talked about was basically data autonomy: letting the worker be in charge of how their data is used. So the data is collected, but the worker has knowledge if it's ever going to be sold to a third party, and they're able to object or opt out of that. They can erase or delete the data, a hard delete of the data, once they stop being employees of that company. So things like that, things that actually put control back in the hands of the worker, I think would enable wellness programs to still be beneficial. All that being said, however, and we didn't address this in that article, but in the book I will, we want to think about what wellness programs really represent, right? What they really represent is a delegation of responsibility. So wellness programs are very much championed by the federal government, actually. The CDC, the NIH, these are organizations that are very much behind wellness programs, because what they really are is a delegation of public health to the individual, right? The individual is told: you need to lose weight, you need to stop smoking, you need to do all these things that are better for your health. The problem with that, however, is: what if the individual is in an environment or an infrastructure that does not actually allow for the accomplishment of those goals, right? And wellness programs don't necessarily tackle that head on. Wellness programs say go to the gym X amount per week. Wellness programs don't say: well, do you have time to do that, given the type of work that you do? Are there gyms in your neighborhood? Eat vegetables. Well, do you live in a food desert, right?
If you live in a food desert, you might know you need to eat vegetables, but if you're not able to buy vegetables or fruits easily, you're not going to eat them. So those bigger infrastructure and public health issues are kind of obscured by wellness programs. I see several hands. I think you're first, yeah. Somewhere between a general universal set of these privacy principles and negotiated ones, I think firm cultures have a lot to do with it. And one good example, a nice case study actually, is right here at Harvard, where there was a tremendous uproar after an investigation that grew out of an exam cheating problem led to a university-wide policy on access to electronic information for all members of the community. And it's actually a pretty good policy, I think. And the history of how that came to be is a useful example of how an institution can ultimately address these kinds of issues. Thank you. To answer the other gentleman's question, there are wellness programs that don't have to collect data. My previous employer had a farmers' market day once a week, where they would have fresh local vendors come in and provide fruits and vegetables at the office, and they just let you know that it was there from 11 to 3 on Wednesdays. They allowed up to three hours a week away from your work to go to the gym, and there was a gym on site. They had little signs in all the elevators that said, hey, take the stairs. And they had a volunteer program where you could opt in to tracking and monitoring your weight, but it was opt-in rather than opt-out. So there are ways to have wellness programs that encourage and incentivize the workers and make it easier for them to be healthy without collecting their data. So it's possible to have a lot of support. Can you tell us which company that was? We want to work there. The Department of Defense. Interesting. So anyway.
Thanks a lot for this very interesting talk. I was really captivated by something you said in the beginning, when you talked about the ways that digital intermediaries present these two faces. It's their knowledge of what they're really doing that leads to the lack of trustworthiness they inspire in their workers, which then leads to greater surveillance and monitoring and data gathering, because they're not doing anything to inspire the trust of their workers. And that reminds me a lot of my field, education: when you are setting somebody's goals for them, you are not creating an environment in which you can trust them to go off and do the thing that they want to do. So you need to start gathering all this information and infringing on folks' privacy just to make sure that they are doing the things that you want them to do. So this is probably a bit outside of what your research is, but it strikes me that it's not the only way a digital intermediary could work. There are examples of co-ops and other organizations that probably do some of the harder work of creating a community of folks who are bound together to achieve common goals. And I'm just wondering if you have come across anything like that, or if you have any thoughts about the fundamental level of the way that work is organized. So that's an interesting question. And definitely, electronic intermediaries can certainly serve more beneficial functions. One really great example that I saw was in the context of orphan diseases. I don't know if you know that term, but orphan diseases are extremely rare diseases for which the government doesn't fund any research. So basically, if you're an individual who has that disease, it's like, sorry. Yeah, so orphan diseases.
And so what has happened, in the context of one genetic disease that's an orphan disease, is that those individuals created a community for themselves online in which they do their own research on their disease. They collect their own genetic data electronically and share it amongst themselves, but it's all for the purposes of benefiting their group. So that's a way, certainly, that the internet as an electronic intermediary is beneficial for the people who use it. So I'm not saying that there are never cases like that. But I think we still want to be conscious of, and cognizant of, the ways that is not always the case. I think we have time for one more question. Well, this was a great talk, thank you very much. And I was wondering, what would you say about all of this being rooted in capitalism? Do you think that's the case? Well, yes. It goes back to the political economy question. We have a political economy that fosters individualism, that fosters a survival-of-the-fittest type of attitude, such that human illness, human frailty, is seen as a weakness, and as something to be shielded from the workplace, from the market. So I do think that this is a byproduct of capitalism. But that being said, it doesn't have to be. I think we can still have a type of capitalism where workers are still valued and protected. Maybe that's naive, I don't know. Thank you very much. Thank you all for coming.