Good afternoon, everybody. Welcome to the seventh of the HMI Data, AI and Society seminars. I'd like to start by acknowledging the traditional custodians of the land on which we meet, and pay my respects to their elders past, present and emerging. This is the first of our seminars that is, let's say, time-zone agnostic. We wanted to be able to bring people in from all around the world, and this is the first where we've made that actual: we've shifted to the afternoon to enable us to speak with Seda Gürses in Europe. Seda is at TU Delft, where she's an associate professor in the Department of Multi-Actor Systems in the Faculty of Technology, Policy and Management, as well as having many other affiliations. She's a very prominent figure in the field of privacy engineering, and also fairness, accountability and transparency in machine learning and AI. So, Seda, we're delighted to have you with us. Thank you for getting up early for us, and please do take it away.

Lovely, thank you so much. First of all, thank you for having me here, especially to Seth and Shell, who did a lot of organizing and making sure that the Zoom connection works. And it is early in the morning, so we're going to see how quickly my tongue loosens here; I have some coffee on my side, so we'll see how that goes. I've heard that usually the presentations are not interactive, but I will bombard you with a lot of information, so if you feel that something goes too fast, please do intervene to say, "Could you please explain that again?" I'm happy to go back and explain things.

The title of my talk is "Protective Optimization Technologies: a proposal for contestation in the world rather than fairness in the algorithm."
So it's a look at what has been proposed by computer scientists with respect to fairness, and its focus on algorithms and, let's say, even data, and a proposal for an alternative, given the limitations of that approach. So, here we go.

I'm going to start with a quote from Michael Jackson, the requirements engineer. I've been trained as a requirements engineer; it's by now an almost obsolete science, and Michael Jackson is one of the gurus of requirements engineering. Requirements engineers try to think about ways to identify what systems are supposed to do: who do you talk with, how do you get natural-language requirements, as they call them, and how do you specify them so that you can get a machine that fulfills the requirements in the world? In fact, Michael Jackson is very concerned that computer scientists and engineers often focus on making a beautiful machine without looking at its consequences in the world. So, quote, he says: "Computer scientists and engineers are concerned both with the world in which the machine serves a useful purpose and with the machine itself. The purpose of the machine is located in the world in which the machine is to be installed and used."

If we go back to what happens, for example, in all the fairness frameworks that have been proposed as a, let's say, technical response to bias and algorithmic discrimination issues, we already see that the focus is very much on the machine. If we look, for example, at the slide deck from Aaron Roth — who is one of the figureheads; he's not the only one, and he represents one school of thought in fairness, but just bear with me — you see that when he asks the question, where is unfairness, or where is fairness?
He quickly points to the input, the output, and the algorithmic process — that is what is of interest to the scholars in this field. What fairness frameworks, and the scholars pursuing them, do is propose mechanisms to achieve some sort of equality, depending on the definition of equality, with respect to some machine learning setup, in the outcomes for either groups or individuals. I will not go into further detail on fairness frameworks; it's not necessary for the talk. I just want you to remember that what they're trying to do is look at the inputs and outputs of an algorithm, and maybe the process of the algorithm, to achieve some sort of parity, for some definition of parity. This aspires to what could be called fairness by design. Those of you who know privacy by design will recognize the idea: already in the design of a system, you try to achieve certain properties or certain guarantees, and fairness by design does the same. It proposes a way for service providers — those who are deploying machine learning — to mitigate discrimination harms at their discretion.

What's really important for me here is to zoom out of the algorithmic and data view that is very prominent not only in computer science but also in the social sciences and media studies which look at information technologies, and to look instead at the production of systems and see what that reveals about what is left out of these frameworks. Then I will come back to what it leaves out about the world — the world in Michael Jackson's definition. So how do algorithms come into the world?
I'm not going to go into the history, although it's really fun, but something fundamental has changed since these gentlemen — the upper echelons of Microsoft — had parties to release their software. The most important part to remember is that in the past, if you're old enough, you bought the disks or CDs or DVDs, depending on how old you are, to install software on your device. What that meant was that the developers in the company producing that software had to have a sharp cut: they had to release the software in order to ship it, where shipping was not just a digital activity — it was putting it into trucks, putting it through the logistics, and getting it to the shops where the customers would buy a shrink-wrapped box, which they would then install on their devices.

What has happened since the 90s, with the rise of the web, is what we call services, and what services do is somewhat different, in that most of the code remains on the servers of the software company — so, under the control of the developers — and what we do as consumers, or users, is connect to those servers to get the functionality. So our devices are no longer our personal computers which hold all of our software and data; in fact, they're sort of portals to these servers, on the machines of the companies, which we then access. The impact of this that is relevant to the rest of this talk is that the code remains on the side of the developers. That means they can continuously observe how users are interacting with their software: they can watch every click and every keystroke and feed that back — feedback that comes from the use — into producing the software, optimizing its features. At the same time, this also means that they can continuously introduce new features or remove old ones, optimizing the production to make sure they can get the kind of experience they want out of the software they're producing.

There's a lot more to this, but to give you an intuition of what has changed, it's like the shift from Microsoft Word to Office 365 or Google Docs. If you installed Microsoft Word twenty years ago on your device, no data would go to Microsoft; you would be using your files on your computer and managing them yourself — which meant that if your computer broke down, you lost all of your files. Whereas now, with Office 365 or Google Docs, all of your documents are online on Google or Microsoft servers, and all of your clicks and keystrokes can be used to continuously optimize the software — on the one hand to increase user experience, and on the other to optimize the software for the extraction of value. There's a lot more tracking and tracing going on, but what you need to remember for now is that we have moved from shrink-wrapped software to services, and the feedback loops in the services environment have allowed companies to optimize both the observation of users and the design of the system, in order to change user behavior by continuously updating features, and to optimize the production of software with the plug-and-play of services, which I'm not going to go into today.

So what I argue in past work is that with all of these systems and the optimization mechanisms that are now possible, we have almost made a hard move from information and communication technologies, as we used to call them, to something new — it's a continuum, but somewhat new — namely, optimization systems. These are systems built using mathematical and managerial forms of optimization. They use at least some projection of cybernetics: feedback from users and operational environments. All feedback is metricized under the authority of objective functions — that is optimization in its mathematical form — and that's where we see a lot of machine learning and AI. What also happens is that the production and consumption of software is collapsed. Whereas with Microsoft in the 90s you would have the production of the software, you'd put it in the box, and then you would have the consumption of the software, we now have services in which you produce the software, you put it out in the world, and then you keep on refining and optimizing the software as people use it.

So these are the fundamental differences — of course, there are many others — which mean that we can now create a different kind of system, one which not only provides certain kinds of automation or augmentation of workflows, but which can capture and manipulate both behavior and environments for the extraction of value. Because you can continuously check whether updates and changes to your design change behavior, or move behavior toward what you want to see. You don't really care how individual people behave; you have a sort of statistical or KPI approach to the kind of behavior you want the machine to deliver, and you check continuously to see how you can design the system to get to that behavior — and that KPI is usually associated with the extraction of value.

If we take a very broad view, a lot of the systems we use today are optimization systems, and in the scientific field, but also in the policy field and more generally, we have spoken a lot about some of the potential negative outcomes of optimization systems. One of the things we've seen happen is an asymmetrical concentration of power in the hands of a few companies — and that is relevant to what I'm going to say about fairness later, too. Social sorting, as Oscar Gandy has defined it, which we now call algorithmic discrimination and respond to with fairness measures — which I think does not do justice to the more complex topic that Oscar Gandy went into. But we also see things like mass manipulation, as in the case of Cambridge Analytica, a dominance of majority values in systems, and the erasure of minority needs — all of which kind of come with optimization as the main method of producing systems.

So how does this connect to what I want to say about fairness? What I want to do now is give you an example of an optimization system which basically shows what happens if we move out of the inputs and outputs of algorithms as the main concern of injustice in building systems, and look instead at systems in the world. So I'm continuously reiterating what Michael Jackson did, in order to get us out of the limitations of the algorithmic view.

I'm going to give an example from location services. Location services are any sort of application you might have experienced which uses some sort of location mechanism to provide you services. Google Maps, I'm sure many of you have used. I checked last night: Waze was not very well known here until a couple of years ago, but if I hear correctly, it's now more popular. But you can also think of things like Pokémon Go, and they make much clearer what I said about the definition of optimization systems, which is that they capture and manipulate behavior and environments. Here you can see how Pokémon Go has shown the power of using the devices in our pockets to — and maybe that's not the right word — at least give us the right kind of signals to create ideal geographies from which they can then extract value. Here's a group of Pokémon Go players taking over a street, and you can see that this power is possible because of the way in which the system has been optimized to bring users together. But I'm not going to talk about Pokémon Go.
As I mentioned earlier, I'm going to talk about Waze. Waze is a participatory traffic-beating app — participatory in that users can input that there are roadblocks or police controls and other such things — and at the same time it gives you recommendations on routes, especially if there's a traffic jam. So if you're on the freeway, it will say, OK, get off at the next ramp and go on the surface streets, and then you can reduce your travel time. So it's, on the one hand, software that's produced using methods of optimization, but it also takes the logic of optimization to its users, saying: here, you can optimize your travel time. I want to see what kinds of systematic issues, or injustices, or problems arise as a result of optimization being put into the world, as in the case of Waze.

Traffic engineers have looked at the impact of Waze on the world, and what they have found is that it actually promotes a kind of social behavior that increases congestion overall. It turns out that if a lot of people go on side streets — which are much easier to congest and much harder to decongest — they not only create more traffic on the surface streets, they also congest the freeway, further increasing congestion. So we already see that this app, which proposes to reduce travel time for individual users, actually imposes collective costs and has negative environmental outcomes. And it doesn't stop there. You might have heard this in Australia too: a lot of the surface streets are not ready for this kind of traffic.
This is a picture from Los Angeles, where fire trucks and ambulances have gotten stuck on a street that is really not ready for this kind of traffic. So what we see systematically is that Waze, in the process of extracting value by expanding its user base, disregards the impact of its system on non-users and their environments. Traffic engineers say, you know, it's not unusual that people know the surface streets and take them — there are always the ten or twenty locals who know the surface streets, and they will benefit from knowing them if there's a traffic jam on the freeway. But it turns out Waze kind of normalizes this, or creates a situation where, if a few Waze users use it, they can really benefit in their travel time, but if a lot of them use it, those few might still benefit while the rest actually suffer because of the increased congestion.

So what I'm trying to show you is that the way the mathematical and managerial forms of optimization function is that they externalize a bunch of costs onto other parties. That's part of the way in which they extract value, and optimization itself plays a very significant role in these costs. So what we've been doing with my colleagues is to see if we can identify common externalities that can be associated with optimization systems. I showed you, in some of the examples from Waze, that they disregard non-users and their environmental impact; they benefit a few; and if they're trained on the wrong environment and then used somewhere else, they can all of a sudden produce errors, which could also be externalized to the users or the environment, etc., etc.
I'm not going to go through this whole list, but the point I want to make is that by taking a systems view you already see a lot more problems that impact society and communities that might be hard hit by such systems — problems which are all left out of fairness frameworks, right? If we look at this list: "benefiting a few", one could argue fairness frameworks somehow address, maybe. Distributional shifts — so if the data set is trained in one environment but then applied in another — fairness could potentially address those. Distribution of errors — that was very much at issue in the COMPAS case: what happens with your false positives and false negatives, and can we get parity in outcomes? Those are the kinds of things that fairness looks at, but it leaves out all these other things that can happen systematically when you apply optimization systems in the world.

OK, so I'm trying to get at something, but I want to see if I can formalize it some more, and for that I'm going to go back to Michael Jackson. Is there a way for me to tell you, and convince you, that there's a delta between fairness frameworks, which focus on algorithms and data, and fairness in the world? To do that, I said I would use Michael Jackson — this is what he looks like, unlike what most people expect. What Michael Jackson tries to do for computer scientists is to provide an ontology of the world — which is, you know, very problematic for many different reasons, but just bear with me. He asks: for him, the whole thing, the environment plus the machine, is the system — so what kind of machine do we introduce into the world so that we can get to certain outcomes in the environment?
He says the job of a requirements engineer — remember, that was the almost-obsolete profession — is to find those requirements, the changes you want in the environment, and to specify the machine that will fulfill them in the world. He gives a little more detail to the ontology. He says the environment has something called domain assumptions: these are the facts in the world, as he calls them, that describe the behavior of the environment as it is — what happens in the world. The requirements, then, are statements about the desired conditions in the environment: say, you want to improve travel time for users, etc. The requirements describe the application domain and the problems to be solved — he's very much into problem definition and how to solve these problems; he has proposed something else called problem frames, which I will not go into today.

Then he says there's a specification, and the specification is about the phenomena shared — keep the word "phenomena" in mind — between the machine and the environment. It's a restricted form of requirement, providing enough information for the engineer to implement the system. So it tells the engineers what it is they need to build, but not necessarily how to build it. If I say the engineers need to build a system that reduces travel time for users, that's a specification, and then you can be more specific and say, OK, I measure travel time as such and such, and the system is successful if it reaches a certain threshold defined by the stakeholders of the system. What is supposed to happen is that you go from requirements in the world to how the machine is going to achieve them, which is described in the specification. Programs, on the other hand, implement the specifications. So the requirements engineer gives the specification — what the system is supposed to do — and the program is how you implement that, how you design the machine to do it.

So if I come back to where I would locate fairness frameworks here: it is in the specification. Of course there's also a part where it's engineered, but for a moment I'm going to leave that out, and I would say that what fairness frameworks do, in terms of research, is explore a specification of a machine that is fair — let's call it: the machine is fair, for some definition of fairness, and for all inputs, the outputs of the machine will be fair.

So now I can, in this ontology, express a little of what gets left out by focusing only on the specification of the machine and not on its impact in the world — which is what we're usually concerned with when we want fairness and social justice, for example. Let's give an example. Let's say that we have a fair predictive policing algorithm that can fairly distribute police officers to different neighborhoods. So we're done. We've got parity.
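To make concrete what such a specification certifies, here is a minimal sketch of a demographic-parity check of the kind fairness frameworks formalize. The data, the group labels, and the choice of demographic parity as the fairness definition are all illustrative assumptions on my part, not from the talk:

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates across groups.

    decisions: list of 0/1 outcomes (e.g. "patrol allocated here")
    groups:    parallel list of group labels for each outcome
    """
    rates = {}
    for g in set(groups):
        members = [d for d, gi in zip(decisions, groups) if gi == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Toy allocations: both neighborhoods receive positives at the same
# rate, so the machine is "fair" under this definition (gap of 0.0).
decisions = [1, 0, 1, 0, 0, 1, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.0
```

The point of the talk is precisely that a gap of zero here says nothing about the domain assumptions — for instance, what an "equal" police presence does to different neighborhoods in the world.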
Everybody gets an equal number of police officers, for some definition of equal. Now let's look at the domain assumption that the policing institution is perhaps already configured to control minorities, and that interactions with police pose a greater risk of harm to those minorities. In that case, fairness in the world is not actually achieved. What we have is a fair allocation of resources, which can nevertheless disparately impact those minorities.

A fair specification might also fail to capture the effect of the machine on phenomena not shared with the machine — remember, I said the specification concerns phenomena shared between the machine and the environment. These could be behaviors or activities in the world that are not shared with the machine. Think, for example, of something like Airbnb, which has actually been shown to have lots of discrimination issues on its platform — but let's assume that they have achieved fairness, so that hosts and visitors alike are not discriminated against, for some notion of fairness. We still have dozens of reports showing that Airbnb has been disrupting neighborhoods by changing rent dynamics and neighborhood composition. Again, this is not going to be part of algorithmic fairness, because it's out in the environment, and it's something that is neither a concern of, nor part of the solutions of, fairness frameworks.

A specification that is fair may also not capture potential harms from phenomena in the machine. Here I'm going to give a Pokémon Go example. When the Pokémon Go developers started their world, in which people could play Pokémon Go, they needed a map on which the Pokémon would be generated. To do that, they used Ingress — a map generated by the players of Ingress, a short-lived game which was mainly picked up by early adopters, who happened to be mostly white men. So when Pokémon Go started off, in a lot of poor and — this was in the U.S. — especially Black neighborhoods, there were practically no Pokémon to be found. Here we see again that Pokémon Go can say, "I am fair to all of my users," and those users might even include communities that are usually underrepresented in these systems, but the map, which is in the implementation of the machine, might cause a harm in the world that is very difficult to capture with a specification that is fair.

I refer you to our paper to look at other types of, let's say, deltas between achieving fairness or social justice in the world, from a systems view, versus achieving fairness in the specification — mostly because of time concerns; I don't want to totally geek out on "here it works and here it doesn't."

But there's something else that I want to emphasize, which is that even Michael Jackson has his shortcomings — even though he's a bit of a mentor for me — namely, that he skips the political economy, and this is something that fairness frameworks also do not look at. Remember, I said in the very beginning that fairness frameworks assume fairness will be applied at the discretion of the service provider. But if you study the political economy of these services and the kinds of power imbalances they are currently engaged in, or are part of, you will see that they might not always have the incentive to capture fairness requirements in their environment. They might not have the incentive to take care of their externalities.
For example, in the case of Waze, people from neighborhoods have called Waze saying, you know, you're destroying our street; municipalities have called Waze to say, you're increasing congestion in our city; and Waze has said "this is too costly, we cannot respond to you," or just completely ignored them. So we see that the economic incentives are not aligned for these companies to take fairness in the world into consideration, and in a sense fairness in the algorithm is a convenient solution that lets them say, "we're done." In fact, you can imagine that companies have an incentive not only to optimize their systems for fairness, but to optimize fairness itself — to take the minimum threshold for fairness, whatever that may be, and say, "look, I'm done," right? And this is the part that Michael Jackson also misses, because he assumes that computer scientists and engineers want to do good in the world. Here we see that economic and political incentives might lead these companies otherwise, and the fact that fairness frameworks are applied at their discretion might actually end up affecting how much fairness even the specification provides.

If you look at the paper, you will see where we make some assumptions. These are somewhat more formalistic approaches to show what happens when you focus on the algorithm's inputs and outputs and not on what's happening in the world — and we even leave out the incentives problem — to show that it is very costly and difficult for fairness frameworks to address some of these concerns we talked about.

So, if I were to elicit a conclusion from what I have told you, here's what I have to say about fairness frameworks. First of all, they focus on a narrow definition of harms in the inputs and outputs of an algorithm, in a somewhat decontextualized manner.
They do not look at what happens in the environment. There are some people who have started doing that, and there are always cautionary tales, but it is a very computer-science project in that it tries to decontextualize a solution that can be applied across contexts, right? Fairness frameworks mitigate discrimination harms at the discretion of a service provider that potentially has incentives to optimize otherwise — in fact, studies have shown that they often have incentive structures other than being fair. Worse, service providers often exist in an interlocked web of systems that introduce or amplify existing injustices, as a lot of recent work has shown — anywhere from Ruha Benjamin to Safiya Noble and many others, who show that there are these interlocked webs of systems in the technology domain that amplify existing injustices, if not introduce new ones.

What is of greater concern to me is the fact that fairness frameworks narrow down politics, and the possibility of contestation, to the redesign of the algorithm — which may not be the site of the problem, as I hope I showed you, and may not be the site of the solution. In addition, as a privacy person, I would say that fairness frameworks do not say much about privacy. They sit very much on that debate of collection versus use — on how you use the data, not so much on whether you collect too much data or not.
In fact, a good number of fairness frameworks say you should maybe, in addition to your usual data set, collect sensitive attributes to check if there is unfairness in your system. So as frameworks they kind of confirm the use of data, often from dodgy data markets, without questioning computational power.

So we have a little response, an alternative to fairness frameworks. Remember we talked about Waze; we thought about what kind of solution would not put all the power in the hands of the service provider — what could give more power to the individuals, communities or environments that are impacted by optimization systems. That's how we came up with protective optimization technologies. Actually, it would be more right to say this is how people came up with protective optimization technologies; we wrote a paper to show that this is possible, maybe necessary, and to appeal to computer scientists to develop such solutions. The solution we found from residents is that they would often turn on Waze on their surface streets and report roadblocks.
Remember, it's a participatory application, so they would say there's a roadblock on my street, which would then give feedback to the optimization algorithm that this road is blocked, and the traffic would be rerouted to other parts of the city. In fact, we saw a lot of different examples of people interacting with the system and changing its inputs in different ways so that they can get better outcomes for their environment. So we talked about the virtual roadblock. I heard that in Australia the police do not want people to use Waze, because people can spot police checks; in Miami, what they did was report police checks everywhere, so that it would be unreasonable to trust one report over another. And some researchers created a bunch of ghost accounts to make it look like there was a traffic jam on the freeway, which meant all the cars using Waze were diverted to surface streets — and they had the freeway to themselves.

So we see that users — some more competent with, or knowledgeable about, technology than others — have already found ways to push back on the externalities of optimization systems. What we do in our framework, or proposal, is to say: OK, these are ad hoc responses. Waze, for example, quickly found a way to identify those residents who were not really driving but were using their accounts to report, and blocked their accounts. So how can we increase the effectiveness of these efforts and systematize them, so that they're more effective for those who are negatively impacted by optimization systems? We propose that computer scientists should engage with this: they should design tools that allow users to re-optimize their world for themselves. And we do this using, for example, adversarial machine learning.
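As an illustration of the mechanism behind the virtual roadblock, here is a toy sketch of how poisoning a single input to a routing optimizer changes its output. The road network, the travel times, and the idea of modeling a reported roadblock as a huge edge cost are all my illustrative assumptions — this is not Waze's actual algorithm, just a plain shortest-path router:

```python
import heapq

def shortest_path(edges, src, dst):
    # Plain Dijkstra over {node: [(neighbor, travel_time), ...]}.
    dist, prev, queue = {src: 0}, {}, [(0, src)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(queue, (d + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy network: a congested freeway vs. a quicker residential street.
roads = {
    "origin":      [("freeway", 40), ("residential", 15)],
    "freeway":     [("dest", 40)],
    "residential": [("dest", 15)],
}
print(shortest_path(roads, "origin", "dest"))
# -> ['origin', 'residential', 'dest']: traffic is sent through the neighborhood.

# Residents report a "virtual roadblock": to the optimizer the street now
# looks impassable, and traffic is pushed back onto the freeway.
roads["origin"] = [("freeway", 40), ("residential", 10_000)]
print(shortest_path(roads, "origin", "dest"))
# -> ['origin', 'freeway', 'dest']
```

Waze's real countermove, as mentioned, was to detect accounts that report without actually driving — which is why the paper argues for systematizing these tactics rather than leaving them ad hoc.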
We switch The trust model of adversarial machine learning Which is that usually um computer scientists assume that the machine learning service provider is good And the adversary is somebody else who's outside who's trying to gain the machine learning system And we actually turn it around say, what if the machine learning service provider is causing harm to the environment? How can we use the fact that machine learning is so malleable or let's say vulnerable to These inputs from the outside which can change its outcomes To the benefit of the people who are impacted by these systems So to give you an example in the case of Waze What we did is we used traffic interdiction algorithms developed in the world war two to keep the russians from getting to the front stage too quickly By bombing out the right roads. So you optimize which roads you you bomb out So it's a war technique, but Okay, it provided what we needed and so what it does is looks at exactly which roads which parts of a surface road Need to be either blocked or you know, you can introduce for example speed limits or traffic lights or one-way streets So that the time to travel through the city is not worthy for people to get off the freeway And that way you somewhat increase a little bit the travel time for the residents of that city But you make it unoptimal for ways to propose that street unless there's a very very big traffic jam on the on the freeway And you can see that it's a collective solution in the sense that you know, you can propose this to municipalities Who do not get a response from these companies because they scale up and they serve thousands of cities And so they can't care less about one city that's complaining To find a solution Maybe not a solution but to raise the issue and to make clear and show forms of contestation That would otherwise not be possible Okay Just to kind of give you a Comparison before I finish Pots start from injustice from systems And affected populations and their 
environments, rather than from a top-down definition of what is fair or unfair executed by a centralized agent, the service provider. POTs aspire to just outcomes in the environment, not in the algorithm; parity may be part of such solutions, but may not be sufficient. POTs produce a different kind of political contestation, including the contestation of the utilitarian models which usually underlie both machine learning and fairness, for the management of everything. There are also problems with POTs, but I think we can come back to that during the discussion. I went a little over time. Thank you for your patience.

That was fantastic, thanks so much, Seda. I've got about 15 questions here, but I'd like to invite people on the panel to use the hand function to raise your hands, and those of you in the audience, if you'd like, to use the Q&A, and we'll go through them there. I'm going to start with a question from Atoosa.

Okay, thanks so much, Seda, it was very interesting. I wanted to question you a little bit on this distinction between optimization systems versus counter-optimization systems, because the general optimization framework is very, very broad. Normally we can account for issues in the environment with respect to the objective function of the optimization, or we can set some constraints on the optimization; either way we are still using an optimization kind of framework. So in some sense the criticism is not really about whether we are using an optimization framework or a non-optimization framework, but really about how we define these optimization functions: what are the decision variables, how we define the constraints, and things like that. So then basically the idea is that the criticism is not about optimization systems.
It's about the way we define them, right?

So, I will not go back to the slide, but one of the things we do is say that POTs is itself a provocation, to show that you can't solve optimization problems with more optimization. You can imagine that when the Waze users on a street report a roadblock and the traffic is diverted from their street, it usually goes to another street, right? So now you've removed the problem from your street, but maybe you've put it on somebody else's. And of course you can say, okay, I'm going to engineer the system so that it goes to the street that is least likely to cause disturbances to the neighborhood, et cetera. But the point we try to make is that optimization as the only form of governance does not address the complexity of the world. Even the POTs may surface the problems with an optimization system, but they do not necessarily solve them. In fact, they show some of the ways in which optimization means, almost by definition, externalizing certain costs to others. I don't know if that helps a little bit.

Can I do a quick follow-up? Sure. Yeah.

So, because there are all these kinds of optimization, local optimization, global optimization, and there are some scientists and mathematicians, like Euler, who would make claims that at some point we can define everything in terms of optimization. That's right. So, like, to be clear about what is excluded from the notion of optimization, right?

Yeah, it's very interesting, because when I gave this talk at the Simons Institute, with a lot of people who do optimization, they said, well, everybody optimizes all the time, right?
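The Waze counter-optimization described above can be sketched as a toy search: pick the cheapest set of road interventions, measured in delay imposed on residents, that makes cutting through the city slower than staying on the freeway, so the routing app stops proposing the surface streets. All street names, delays, and travel times below are hypothetical; a real interdiction model would run over a road network graph.

```python
from itertools import combinations

# Hypothetical interventions on surface-road segments. Each adds delay
# (in minutes) to drivers cutting through the city, and a smaller delay
# to residents who use the same segment for local trips.
interventions = {
    "speed_limit_main_st": {"through_delay": 4.0, "resident_delay": 1.0},
    "light_at_elm_ave":    {"through_delay": 3.0, "resident_delay": 0.5},
    "one_way_oak_st":      {"through_delay": 5.0, "resident_delay": 2.0},
    "light_at_pine_rd":    {"through_delay": 2.0, "resident_delay": 0.5},
}

BASE_THROUGH = 10.0  # minutes to cut through the city today
FREEWAY = 14.0       # minutes to stay on the congested freeway

def best_interdiction(interventions, base_through, freeway):
    """Cheapest-for-residents set of interventions that makes the
    cut-through slower than the freeway, so the app stops routing
    through-traffic onto surface streets. Brute force over subsets
    is fine at this toy scale."""
    best, best_cost = None, float("inf")
    names = list(interventions)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            through = base_through + sum(
                interventions[n]["through_delay"] for n in subset)
            cost = sum(interventions[n]["resident_delay"] for n in subset)
            if through > freeway and cost < best_cost:
                best, best_cost = subset, cost
    return best, best_cost

plan, cost = best_interdiction(interventions, BASE_THROUGH, FREEWAY)
print(sorted(plan), cost)  # two traffic lights suffice: residents lose 1 minute
```

The point of the sketch is the trade-off the transcript describes: residents accept a small delay so that the through-route becomes suboptimal for the routing app to recommend.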
And I had to remind them, you know, that since the time of mechanical clocks, I think we have had a tendency to use our technological advances as metaphors for human activity. People used to think that we are all like clocks. About 20 years ago physicists said everything is a network, and so all of a sudden we were all networks, right? And then somebody stood up at the Simons Institute and said we're all computation. So I think we need to be careful about being too excited about our current techniques and applying them to everything. They do certain things: we live, despite COVID-19, in a logistical world with limited resources, in which optimization is a necessary technique. However, consider what happens if you start using it as the only technique to manage very sensitive areas of life, including the basic infrastructures that we use, in a way that is left to companies. And remember, we're not talking about public institutions that are held accountable to the public through a bunch of democratic procedures; we're talking about companies that transcend nation-state borders, deliver these services, and claim that they're optimizing, when they're optimizing their value extraction. So these are two things, right?
There's the technique on the one hand, and there are the companies who are scaling up globally on the other, which is already a very different setup. To then say that optimization is what we all do anyway, that these companies are just doing it, and that the more information they have the better they will do it, really undoes both the limitation that they bring to what optimization can do, because of their value-extraction interests, and the fact that they're not accountable to a democratic constituency, by virtue of being global companies.

Just a little note on that. I think that although, in the ideal, you could say that all of these things are trying to solve optimization problems, such that they could in principle be specified in that sort of way, and in principle these apps can solve collective action problems in a way that we're not able to without this level of communication, if in practice no company is ever going to actually take into account all of the relevant interests and optimize for all the things that matter, then the fact that in principle you can express it in a certain way wouldn't end up having much purchase. But I think that's what you were just saying anyway. So the next question is going to be from Katie.

Thanks. Yeah, great talk, thanks Seda.
Yeah, so while you were talking I was reflecting on whether you were suggesting that these online apps encourage or erode our capacity for pro-social behavior. For many years we've relied on people being willing to follow norms and to have some self-sacrifice, at least conditional on other people also following the norms, and so on. And so these apps that promise you optimal outcomes for yourself are really taking away from that spirit we know we rely on. But there is something odd, in that people are remarkably pro-social towards other users in the app; it baffles me. I rarely leave comments on appliances or use all the ways in which you are able to share information online, but other people seem to be much more socially oriented. So there is a strange tension between a lot of social behavior in these small groups, and increasing isolation of that group from the outside world. Is that the kind of phenomenon you're picking up on?

I think what I'm picking up on is two things, and maybe it relates to the first question as well as the second, which is that these companies are in a sense creating the environments in which we act, where they determine the conditions of our acting. And even if we find this socially pleasant for a while, first of all, the utilitarian logic means that minorities are typically going to be erased, right? For example, look at what happened with Facebook and minority users: they're usually subject to a lot more harassment, and the policies impact them much more, anything from freedom of expression to real-name policies; it affects LGBTQ communities and sex workers much more.
If you're a person of color or a Black person, you're much more likely to be subject to harassment that the company will not pick up on, especially if you're a minority, et cetera. And then there's of course the geopolitical level, where the Cambridge Analytica story makes headlines across the globe, but on the fact that NGOs in Myanmar raised red flags with respect to the use of Facebook in the genocide, we still haven't had any sort of accountability, right? So what I'm trying to say is that these companies are creating environments in which they manage populations and environments under the logic of optimization, which limits political contestation even if it looks good now. It might not look good for a minority that is not "worth it" under a utilitarian logic. I mean, the minority is worth it from a universal-values perspective, where everybody has rights, a human rights or ethics perspective, but the utilitarian perspective will say: if I have a billion users and there's a million that are struggling, oh well. That's the utilitarian logic, and it's very hard to break out of it. It's exactly that kind of removal of contestation. And what fairness does is say that any justice claims can be dealt with by the service provider, which is very problematic; it's anti-democratic by design, if that makes sense.

Yeah, sure. There's a lot to think about.
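The "billion users, a million struggling" arithmetic can be made concrete with a toy comparison (all numbers hypothetical): a utilitarian aggregate barely registers the struggling group, while a maximin reading, which judges a policy by its worst-off group, puts that group front and center.

```python
# Hypothetical welfare of two groups under some platform policy:
# (population share, welfare score in [0, 1]).
groups = {
    "majority": (0.999, 0.90),
    "minority": (0.001, 0.10),
}

def utilitarian(groups):
    """Population-weighted average welfare: the aggregate number
    a service provider can point to."""
    return sum(share * welfare for share, welfare in groups.values())

def maximin(groups):
    """Welfare of the worst-off group, regardless of its size."""
    return min(welfare for _, welfare in groups.values())

print(round(utilitarian(groups), 4))  # 0.8992 -- looks near-optimal
print(maximin(groups))                # 0.1    -- the struggling minority is visible
```

Both numbers describe the same system; deciding which of them counts is exactly the political choice the transcript argues gets centralized in the service provider.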
Yeah, thank you.

Okay, so the next question is going to come from Damien.

Thanks. I suppose my question follows on from that, because to a certain extent it's not the responsibility of the businesses themselves to be democratic; it's the responsibility of policymakers and the legislator. You indicated towards the end that some of these solutions could be given to municipalities in order to solve their problems, but isn't there already a simple way of saying, well, that business model is actually causing too much collective harm, we should simply not allow it? And I'm wondering to what extent the traditional fairness debates in the fair machine learning literature are actually the consequence, to a certain degree, of process-oriented requirements that are already required in law as it is, requirements like accountability mechanisms and impact assessments. They have to do that, so they've operationalized it to a certain extent. Whereas actually, what you're discussing are broader debates and a failure of society to do something about it, and then you're optimizing, sorry, probably a bad use of the word, to come up with technical solutions to what is actually a social problem, which should probably be done
What is actually a social problem and should probably be done More directly through I don't know a law or whatever or policy or however you construct Let me see if I can answer this in a in a short way and it's very nice to see you um So we've come up with we've kind of been looking at where these companies are going with respect to investment And what we see is there's been you know, ever since 2008 a lot of money has gone into tech companies Actually, you see this now with covet 19 as well as markets kind of struggle a lot of money goes into tech because it's much more Stable in comparison and actually they're making a lot of money out of the suffering of covet 19 So that's kind of interesting to think about as well So where these companies have to go in order to return on investment is to integrate computation As much as possible into existing infrastructures and by existing infrastructure. I mean transportation health Education everything that we see with covet 19 that's propelled forward Was already a project of these companies and we call this programmable infrastructures So what these companies propose is a way to make more optimize or maybe even augment existing infrastructures by integrating digital into it and in a sense it's comparable to the advertisement model right like google made a lot of money by taking everybody's advertisement budget What these companies clouds are going to make money off is by taking everybody's it budget and the biggest it the it budgets are The greatest potential is when they can take whole infrastructures That includes the universities and the fact that we're using zoom right now as part of that project, right? 
So if you look at it this way, municipalities and democratic institutions are up against companies with global finance, whose project is to take over the management of the infrastructures that those democratic institutions are actually meant to provide to the public. This is exactly the kind of tension we're looking at. And indeed, you could argue that the algorithm question sits at the level of design, and you can still ask those questions, while the fact that these companies are taking over existing infrastructures is a different level of problem, right? But let's take a step back and say: if it's the case that these systems are eroding the democratic institutions that manage our infrastructures, and they're optimizing for their optimal users, and they can escape regulatory mechanisms, or, if they don't escape them, they become the implementers of regulation, becoming even stronger, then what does it mean that they have a fairness claim? Does that answer your question, to some extent?

I mean, let's put it this way. With COVID-19 we had the split between those who could stay home, many of us, I think, who could work from home using these digital services, and those who had to go out and deliver care, and deliver packages, the people who kept the logistics going. Both of those are managed by technology now. You can say the delivery services are fair, but if you look at the workers that are there, they're mostly lower class, if that term still holds, and they're mostly people of color. How are you going to have a fairness framework there that responds to the fact that technology companies have enabled a situation in which some people are continuously at risk while others are safe?

So, the next question, from Colin.

Yeah, thanks.
I'm sympathetic to a lot of this, but I worry that a lot of the framework was kind of evil tech companies versus noble municipalities and noble users, and of course I can see a lot of this going the wrong way, right? The kind of user hacking of Waze, presumably, at least initially, is going to push cars off of tech-bro surface streets and into poor neighborhoods. And even in terms of privatization: New South Wales is always on a big privatization kick, and takes it as one of the features that you can push all this responsibility over to companies, and then you can't do freedom of information requests and so on. So it's not obvious to me that any of these, I can see how they can be used for good in some cases, but I can also see how a lot of the stuff you're proposing can very easily just exacerbate existing inequalities. And if you're cynical about the world, which I tend to be these days, you might think that's probably going to be the modal sort of thing, right? If the system is set up to screw over minorities, then it's going to find a way to do that; it's sort of tech-agnostic about which way it does it.

Absolutely agreed, and I don't want to romanticize municipalities at all. I've actually heard people saying, you know, the state was not managing us before; at least these companies are doing us a favor, right?
But my point is not to say let's embrace our institutions with all their ups and downs, or look away from their downs; the university has some problems too, for example. The point is, and I have to say a lot of people get a little worried when I give talks, because they feel like there's no way out, like these companies have already won, and I sometimes lose sleep because I do think that's somewhat true. But I also think that if we could understand that the contestation is about much more than redesigning algorithms, we could actually rethink what our institutions look like, such that they can live together with this technology. Right now, and I've been part of COVID-19 contact tracing app development, I've seen both health authorities and governments that neither understand the infrastructural power they're dealing with nor have the capacity to manage existing public resources to benefit the people; in fact, they just push their populations onto these companies. So I think, without romanticizing public institutions, there could be a way to look for how we engage with these companies and how we introduce them into our existing infrastructures that is currently not even spoken about, mostly because both academics and policymakers are stuck on algorithms.

We have a lot of ongoing discussions in the Slack. I'm disciplining myself out of having a follow-up on basically every point. I'm going to keep being disciplined and pass on to Sarita; there's also one in the Q&A as well that just popped in. But I'll take advantage of this to ask a question. I want to bring it back around to Atoosa's question, because it's something that comes up a lot, and I really like where you're going with your answer. So I just want to try out how I understood it and see if it makes sense to you. So what I understand you saying
essentially is that what optimization does is, you have to pick a domain in which you're optimizing, and whatever that domain is, you're going to push the costs outside of it. And it's not feasible, or even sensible, to talk about the domain being all the considerations you would ever make about any possible systems that might interact with the system you're working in.

You're talking very fast. Can you, sorry? I'm looking at you like...

I need to work on that. Sorry, okay, let's try again, I'll start over.

Early morning, early morning. Yeah.

Okay, so what I heard you saying, essentially, is that when you are optimizing, you're starting with some domain in which you're optimizing, and some way of framing the problem, and however broadly you've defined the problem and the parameters, the optimization will find a way to push the cost outside of that. And you are saying that, furthermore, it's not reasonable or sensible to talk about having all of your considerations in your domain from the start. So basically the problem is that there's no such thing as a static defense against injustice in a system. And so Atoosa would then reply that you can think of it maybe as a dynamic optimization: you notice the problem and you expand the domain in response to it. And I guess the response to that is just that maybe we're not even close, technologically, to being at the right level of abstraction for optimization to still be a useful framework for talking about that.

Okay, let's try. Okay.
That's very much the kind of thing I said, but let's try to be a bit closer to the model that we proposed. Okay. So the proposal was that we could improve optimization by, you know, picking less antisocial goals, right? That's the AI for Good project: AI can be for bad, but we're going to make it for good. And we can extend the model, we can make it multi-objective optimization; there are all these ways in which we can put in constraints, all these ways you can improve the optimization. Let's take the example of errors, which is what fairness is concerned with: how are the errors distributed? There is no optimization system that's not going to make errors. There's no machine learning system that's always going to deliver a hundred percent; I mean, that's an interesting idea, but it's mostly a fantasy. In reality the world is more complex than we can model, and so there are going to be errors. What a centralization of the design of optimization does is decide what kind of error is acceptable, and who is going to bear the burden of that error. That is the power that you start creating. And if there's no alternative way to contest, to say, look, we're bearing the burden of these errors, and the company says, well, look at my utilitarian logic, this is the best outcome for society from a utilitarian perspective, then you see how we have basically undone the political process. Fenwick McKelvey says it right: optimization becomes a way to remove political contestation, or to trump any sort of political contestation.
It's optimal, right? And so there is no optimization system without errors. And what you do with the introduction of optimization systems, and really I'm not talking about the technique, I'm talking about these computational infrastructures and the companies building up on them, taking over different parts of the world, either by taking over infrastructures or by piggybacking onto public infrastructures they don't feel responsible to, et cetera, is give them the power to determine what is optimal on this infrastructure, who bears the burdens, who bears the costs, and where they make the cut. And none of these things are currently being discussed, right? So there's no optimization without an externality, even if it's a good optimization, done with the best of our hearts, with good goals, taking all of these things into account. We show mathematically that it would be very difficult to actually reach all the externalities in the environment with optimization; it would be too costly, right? But the discussion ends there: the companies decide where the cut is, nothing else, currently. I would want to think about alternative ways to do this, but that's where it stands. And we can go in this other direction of, can we build different kinds of systems? But there's a whole political economy of why it's cheaper to plug yourself into the cloud infrastructures and service architectures, which I left out of this talk, and which makes it very difficult to change these systems on your own. The privacy community has tried that for years: we've tried to develop alternative infrastructures, and we cannot compete with infrastructures backed by global finance, infrastructures made cheap because they can burn through money for months. Just look at how Snowflake got an IPO this week with immense earnings, right?
You can look that up, and you will see they burnt through millions before they got to that point. No municipality can burn millions like that to make a cheap infrastructure.

Okay, so we can have one last question, from the Q&A panel. For the ones we don't get to, if you would like to copy them over to the Slack channel, we'll continue the discussion there. Samar asks how users' consent and autonomy can be factored into this. Obviously, when you're thinking about apps like Waze, one reason why they have a particular constituency is that there is a particular set of users who consent to have their data collected and shared. And so Samar asks how, given this, consent should take into account fairness; whether, I suppose I've glossed on that, there's a natural constituency given by the fact that there is user consent at the foundation of these apps; presumably Samar would also have in mind all of the problems with notice and consent.

Yes. In fact, if you look at the very short piece we wrote on programmable infrastructures, we talk about "pocket power" as another way in which democratic institutions can be sidestepped. I will give you a completely different example, and I apologize for this, but I guess Waze is itself an example, or even Uber. Some cities will say, this is not allowed anymore, but how are you going to keep people from running the application without putting up a very big surveillance system and, you know, fining people for having apps on their phones, which is ridiculous, right? These companies are indeed very cognizant of the fact that they have what I call pocket power: they're already in the devices that are in our pockets, and can already decide the functionality, et cetera.
So with respect to consent, I just want to say that the reason I got into this research was not because I was interested in fairness. I'm very politically interested in social justice as a computer scientist, but fairness was never the way for me. I was working on privacy, and I had been thinking for years that there was something wrong with the way we expected users to decide on privacy when there had already been a cascade of decisions made for them. They were left with this little sliver, you can go between A and B, when all of these decisions, about how data is going to be collected about them, what the defaults are, et cetera, had already been made. So what I started doing was studying developers and their ability to produce better privacy for users, to see if I could get privacy engineering in, et cetera; very Michael Jackson, right? Like, we can do this, right? And it was at that moment that I understood the way the software industry has moved from shrink-wrap software to services and this continuous optimization: most developers will not develop software from scratch. They will plug and play with services that already exist. That's why, when you go to a lot of websites and you have to log in, they'll ask if you want a Google or Facebook login, and they won't have a separate login, because it's costly to introduce your own login and to secure it; you need to hire someone to make sure your system is secure, et cetera. So what we have is an ecosystem where the fundamental decisions are already made at the infrastructure level, upon which all of these developers build their apps. The idea that we can produce autonomy or better consent is limited by whatever decisions have been made at the infrastructure level. So that puts a huge brake on what we can expect from agency, like the ability to give agency to users, because even the developers don't have agency. And that's my concern with respect to fairness.
What does it mean to solve fairness at the level of infrastructure, when fairness is about multiple groups contesting, right? Who has rights, and how, and the groups continuously change, et cetera. What does it mean to have already resolved that issue at the level of infrastructure, with hundreds of thousands of different groups? That's an impossibility.

It's always good to finish on an impossibility. So look, I'm going to go over right away to Slack, and I'm going to write about 15 questions; anyone else, I encourage you to do the same. Thank you, Seda, for getting up so early for such a fascinating talk, and let's give you a sort of silent Zoom round of applause.

And yeah, thank you very much. We'll stop the broadcast in a moment. Great, thank you all for coming and joining the session.