Go ahead and get started, I'm listening; just provide the context. I have a reason I showed up. So, yeah, I can at least start with my initial gut reaction. The first thing I wanted to talk about, one of the things I really liked, was that first article. The short summary is that the paper tries to reframe the question of why we care about private data, and why we worry about these policies, through a lens that isn't about proving that a harm occurred but about eliciting the risk items that exist by default. I thought it was an interesting take on something that's true but not usually acknowledged: by virtue of how these systems are constructed, and how we think about the default state, you can either assume harm is only introduced when a bad thing happens, or assume harm is introduced the moment there is a risk of a bad thing happening. And it's interesting to tie that to the downstream questions about proper regulation. It connects to what Jonathan Zittrain's piece says (one of the last optional readings, though I thought it was a wordy one).

The thing I found super interesting is that I don't think I've seen that framing before. You could probably comment on this better than I can, but a lot of the time when we think about environmental regulation, we think of it as: after we've done a thing, how do we apologize and establish that, yes, that truly was a bad thing? Or you think of it in the context of Starlink: you launch all these satellites, which is theoretically good, providing internet for everyone, but it also really messes with astronomers' ability to map stars and planets. It's a get-it-in-under-the-wire-before-anybody-feels-it kind of move. And that highlights one of the ways regulation is interesting: we don't want to stifle innovation, so we tend to bias toward under-regulating.

And, really tying all the readings together, GDPR. Having seen GDPR in practice at my old job, and then in that other article about its impact after a year: you notice that when you're in the EU you click through a lot, and in theory it does some stuff, but it's not necessarily enforced. As that other article said, they don't actually have the people enforcing all of it, and 63% of Europeans still don't feel their data is properly protected. So there's this question of how we think about regulation from both sides. First, there is a harm that can be introduced, and we do materially see it when nothing is done, because there is a risk we are simply not acknowledging. Second, how do you implement regulation in a way that doesn't stifle people trying to do stuff and is actually easy to apply? I think that's an interesting tension.
And one of the things, from one of the other readings, that I thought was super cool is this idea of taking a step back and realizing there's already a hodgepodge of things applied here: we have HIPAA thinking about things in a medical context, we have COPPA thinking about children, but the coverage is quite arbitrary and you get these weird splices of things. Maybe what we need is a meta-model of all this. So, what do we define as PII? If you started from that lens, with a marking of PII on a specific column or piece of data, how would you then need to treat it? I looped through most of the articles way too quickly, but I feel like there's a really interesting theme there: how do we write regulation such that it can be uniformly applied, so there's actual confidence it's doing the thing we want, which is providing some layer of security; how do we make it not so hard for people to actually implement; and how do we make sure we're doing better than the status quo, at least the status quo in the US?

Yeah, I think that's really interesting, and you went through so many things I couldn't even write them all down. I'm sure we'll loop around; these conversations are always big spirals, and what I think is cool about them is that somebody brings up a whole bunch of ideas and then we rabbit-hole on one or two of them for a bit before moving to the next. The last thing you said reminded me of a Nissenbaum quote from that one slide deck: notification is either comprehensive or comprehensible, but not both. That's very much along the lines of how you make this both functional and parsable. How can we have a regulation that actually works, that people understand, and that they can implement? My product management professor had this concept of the iron triangle: time, cost, and quality. You can rotate it, but you can't fix all three points; you only get to choose two. There's some of that going on here.

The other thing you talked about, and this is the one I want to drill into, is this tendency not to regulate things if we don't have to. I come at this from a lot of different perspectives. One is the lens you mentioned of what harm is: does harm exist when the possibility for harm exists, or does harm exist when the harm actually occurs? Honestly, that touches on a lot of the conversations we've had, especially when the possibility for harm is likely, and it really ties into our trust conversation. I guess one thing about regulation: I've built a lot of communities from scratch, and gone into communities and tried to set them up to be healthier places. As an open-source person and a community-oriented person, there are certain things I do and don't do right at the beginning, and one of the things I do right at the beginning is write a code of conduct.
That's less because I actually expect the code of conduct to come into play, and more because I'm aware it's a signal to people about whether this is the kind of space where they're likely to be welcome and listened to. But one of the things I don't do, and explicitly try not to do, is the bikeshedding of making rules for things that aren't problems yet. That's an interesting balance, because it is a lot of work to pre-create regulation for problems you don't yet have but can imagine being harmful. I don't think that scales, though, and we always talk about what scales here, because of course you can have trust within a unit of five people that you can't have within a unit of a thousand people, and the country is much, much bigger than that.

So within a small community, the kind of community I'm likely to create, I don't think you want to create a lot of regulations at the beginning; I think you want to leave it underdone. There are two main reasons. One is that it's a lot of work to make and enforce regulations. The other is that it doesn't create the atmosphere you're trying to create, where you actually depend on each other and trust each other. But if you're scaling an organization, and this comes from startups too, that works up to about a dozen people. Once you get past a dozen, you don't necessarily know what everybody's doing anymore, and even if everyone is acting in really good faith, you probably aren't going to understand it all. That's where you start needing some kind of daily scrum check-in, or a manager, somebody who checks in and asks: hey, is this stuff actually happening? Are you communicating? Are you doing well? And that's in a community where you all share the same dream and intention. What usually happens with these organically growing communities is that you build regulations as you need them, and it can be a really cool, communitarian exercise to start thinking about what rules we now need. It's challenging.

But what we're talking about on a governmental level is always retrofitting: imposing rules on an existing community at a grand scale, in ways that are definitely going to negatively impact some businesses, because there's no new regulation that doesn't negatively impact some business, and that might also negatively impact some individuals, whether because of those businesses or unrelatedly. So it's a really different question at that level of scale, what it is to create a regulation. And that's all a precursor to the other question we asked in the trust conversation: are companies worthy of trust? Does the concept even apply to a company? Or is a corporate entity something that, if you don't regulate it, will just grow and do everything it can to consume everything it can? I've been talking a while, so back to you.
Well, two things I think are really interesting about that. Thing number one: it's interesting when we talk about retrofitting, because technology and the internet have been around for a while now, and we've seen the bad version. It's weird that in the US we seem uninterested in getting a GDPR-type law; the Obama administration put out some principles, and the Trump administration's response was basically "why do we need this?" But we are seeing what you describe: California coming up with its own rules, Europe coming up with its own. And the reluctance isn't rooted in a reasonable principle. We see data breaches; material bad things are happening. It's not theoretical. So the trade-off literally seems to be: what is the sum total of the post-hoc harm (if we aggregated the harm to every person in the Equifax breach, how bad is that?) versus the economic cost of companies implementing this? And that's where the question comes in of how hard it actually is to implement some of these things. One question I have is whether you could make it easier from the technology side. The large companies actually trying to do this stuff mostly build on open-source components, Postgres or whatever; could Postgres have specific markings that you can attach to tables and columns? Especially in this realm, there might be something interesting about using technology to help this scale.
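To make that concrete: here is a minimal sketch of what column-level PII markings could look like in Postgres, using ordinary column comments as the tag. COMMENT ON and the pg_description catalog are standard Postgres; the shop database, the customers table, and the pii: tag convention are all invented for illustration.

```python
# Hypothetical sketch: treat Postgres column COMMENTs as machine-readable
# PII markings that any tool (auditor, anonymizer, access layer) can discover.
import psycopg2

conn = psycopg2.connect("dbname=shop")  # invented database name
cur = conn.cursor()

# Tag columns with a simple 'pii:<category>' convention (invented convention).
cur.execute("COMMENT ON COLUMN customers.email IS 'pii:contact'")
cur.execute("COMMENT ON COLUMN customers.ship_address IS 'pii:address'")
conn.commit()

# Enumerate every PII-tagged column from the system catalogs, so the
# knowledge lives with the schema rather than in tribal memory.
cur.execute("""
    SELECT c.relname AS table_name, a.attname AS column_name, d.description
    FROM pg_description d
    JOIN pg_class c ON c.oid = d.objoid
    JOIN pg_attribute a ON a.attrelid = d.objoid AND a.attnum = d.objsubid
    WHERE d.classoid = 'pg_class'::regclass
      AND d.description LIKE 'pii:%'
""")
for table_name, column_name, tag in cur.fetchall():
    print(f"{table_name}.{column_name} is marked {tag}")
```

A real system would layer enforcement on top (views, row-level security, masking), but even this much would give the uniform, checkable marking the readings gesture at.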
The other thing I thought was interesting: can we trust organizations, or companies specifically? I'd be curious what your thoughts were on the Zittrain piece about privacy fiduciaries, because fiduciaries do exist in theory, and, depending on how you feel, they generally do act for their clients; it's a legal requirement if you're BlackRock or whoever. I don't think I caught the trust conversation, but I do think there's something in using the legal framework as a hammer or a stick to bind companies. Whether or not you believe they're completely rational, you make fiduciary status something they can be designated with if they want certain rights, and there's legal recourse if they don't honor their end of the bargain. It doesn't have to rest on trust; or rather, it's trustless precisely because there's a huge economic hammer that could, in theory, be swung against you if you abuse that trust. Ideologically I really like it; conceptually I like that there can be a legal thing. I forget the name of the legal responsibility that corporations, or the CEO and board of directors or whoever, have to the shareholders to increase revenue...

Yeah, that's just a regular fiduciary duty.

Well, there we go. So I really like the idea of that sort of counter-pressure, ideally counter-pressure, but I have a lot of cynicism about whether it could work, because it has a familiar feel to me. The phrasing used there says companies like Microsoft have already indicated their interest in pre-complying with regulations that aren't yet imposed, and I look at that and roll my eyes, because I'm very familiar with companies trying to ward off regulation by creating their own loophole-riddled version of it first, so they can say: you can't make a different one, this is covered. In a completely parallel vein, I hear the same thing a lot about oil companies, especially now as oil companies in Europe divest more and more from oil, invest in green tech, and sell all their assets to American oil companies. It seems weird to conclude from that that these are better oil companies than those.

I do think there's something to it, though. As the dynamics of power change, you do see incentive alignment, and in this context I'd point to Apple, depending on how you feel about iOS 14. That's an interesting incentive alignment: Apple is doing what it thinks is best for the privacy of its customers and using that as a unique selling point. There's a collapse of the rational and the moral there. That doesn't mean it's not opinionated; there can be divergence over how Apple forms its opinion, and over what enforcement mechanisms we have if we think Apple isn't going far enough. But especially as this becomes a hotter and hotter topic, there's more consumer power. You see it with Twitter too; Jack Dorsey has talked about this, not in the data-privacy context but around algorithms and the role of social media. Not just "we want to keep you engaged in the short term," but the longer-term view of "can we do better?" I won't say I think Twitter is succeeding, but they're trying.

Well, how much are they trying? I don't know. I'm a cynic, but I'm a human optimist, so I'd like to believe that nobody is trying to do evil things, even within their corporate role. So I don't disbelieve that it could happen. Greg, did you have something?

Well, on that note, the phrase I usually use along those lines is Gramsci's: pessimism of the intellect, optimism of the will. And I've got to say, the optimism of my will is really damp right now, really depressed. I'm not a technologist (Jonathan, I don't think we've met; Kelsey, you and I haven't talked before), but over the last five years I've inserted myself into technology conversations, bringing up these questions about privacy in a very specific cross-section of health, human, and social services technology and innovation. And nobody in those conversations was having conversations about harm.
As far as I can tell, I was the first one to show up and talk about harm, and for the first couple of years the response in these spaces was: oh, the cybersecurity subcommittee is taking care of that; or, we have issues like consent all worked out in the data use agreements, that's under legal. And I'm like, no, I don't know if you're hearing me: I'm talking about harms that are lawful, and potentially from non-bad actors. It had just never occurred to them. When I tried to learn from GDPR and bring back some of those principles, revocability, the idea that data transfers should be monitorable, thinking through Nissenbaum's distinction between comprehensive and comprehensible, people get really quiet, and I haven't figured out how to stimulate the conversation, because it's so overwhelming. The technical people get quiet because some of the things I point out that need to be accounted for, they're not sure are even possible. And the policy people get quiet because the points I'm making are about the gaps between what's compliant with regulation and what's ethical, and they don't know what to say to that.

Basically, my sense of what it is that we need (and maybe this relates to the conversation you all were having about whether the harm is the risk of harm; I don't know if I followed that and might want a clarification) comes down to this question in all these spaces: who's going to be able to evaluate, and who's going to be able to decide? Because right now there's hand-waving that goes on behind the notion of individual consent: oh yeah, we'll ask for everybody's consent. But that just doesn't work at all as a method of giving people agency and thinking through the potential repercussions, the trade-offs, the unanticipated consequences. Individual consent as a model doesn't work, and I don't see other models out there for how communities can make decisions about this stuff.

The closest I get in these spaces: basically, the issue in health, human, and social services, my field, is that after Obamacare passed, hospitals and health insurance companies suddenly realized that people are sick because they're poor, and they suddenly cared about people not coming back to the hospital. They wanted them to stop getting sick; apparently before Obamacare it was fine if people kept coming back to the hospital because they kept getting sick because they were poor. So now health care says: we've got to get everybody out of the hospital and send them to social services, so we've got to get every community organization onto the same platforms, so we can refer people directly to them and know exactly what happened with the social service organization and the case management system, and it all needs to be integrated. And when I come up and ask, have you considered the harms of that? Even though this is driven by health care, the prospect of "do no harm" as a first principle has never come up.
But I've made the case, and now people are turning to me and saying, okay, what should we do? And I'm like: I don't know. So that is my spiel, and that is why I'm showing up on this call. Sorry to be late.

I'm curious if you could go into that a little more, because I haven't thought deeply enough about this part. What ends up being a case of the harm? Is it about data leaking, or in what contexts does it manifest?

I think there's a range of possible harms; I might actually want to put this on a matrix, the range across good and bad actors, conscious or unknowing. Most people, when they think of harms, think of cyber hacking. But there's also de-anonymization, which, especially when we're talking about bringing data from all these different systems and linking it together, seems to be a much greater risk than many people in these spaces want to recognize. And I'm also thinking beyond de-anonymization, to the tremendous potential harms that can come from the use of aggregate data from all these systems in algorithmic decision-making and regulation. In the context of health care, these systems are building algorithms that decide who gets what kind of care, and they can make those decisions according to things like this: most recently, Native American women coming in for COVID tests were separated from their children by an algorithm in the New Mexico health system, because some algorithm decided those children were at risk, based on some data that was fed to it. Every time a Native American woman came in to get tested for COVID, she was separated from her child. And this might not have been a conscious intention; that policy might have emerged from a bunch of decisions made essentially by machine learning and artificial intelligence that maybe nobody is specifically accountable for.

In other contexts we've seen that there's lots of talk in this space about improving health outcomes, but what it really means is saving money for the hospital system; the proxy for whether something is good or bad is whether it saves money. And because poor people especially, and Black people in particular, have more health problems associated with them, they end up getting shunted by algorithms out of certain kinds of care contexts and into others, so the hospital system can say: I don't have to deal with that, that's going to be more expensive, a less valuable use of my resources; and the potential intervention is less impactful because it stacks up against all these other problems this person has, so they don't deserve to get it. There are all kinds of ways in which this data just serves as input into a system that yields all of these inequitable outcomes. So "privacy" doesn't really cut it; it's also a question of how the aggregate set of this data is being used to allocate resources in ways that might re-entrench existing patterns.
And I don't know that people see it. Certainly in the elite conversation, when you get people on a panel talking about how awesome healthcare interoperability is, these issues don't come up. When I ask these questions, they say, oh gosh, we hadn't thought about that. It's like the nurses who come up to me and say: thank you for asking that question, because I've been wondering about that. The technology innovators just don't really think about it; the healthcare executives don't really think about the potential for these things to go wrong. It's the people who've seen things go wrong over and over again, people who might not know exactly what's going to go wrong, but who know fuckery is underway. So my question here is: how do we get those nurses into governing bodies? When privacy comes up, that's what I'm wondering: how do we get the people who actually deal with the shit to be involved in the process of making decisions about what should and should not happen? And that is a very unpopular question, I'm finding.

Have you heard of Buurtzorg? B-u-u-r-t-z-o-r-g. It's a key example used in a book called Reinventing Organizations. The example, and I haven't read this in a while, is that a group of healthcare workers were working for a company and experiencing a lot of those issues, and also experiencing a lot of labor justice issues on a personal level. They threw everything out and formed this nurse cooperative that's quite big; I think it covers a pretty large part of a country, maybe the Netherlands. I don't want to be the radical on the call who says "cooperatize, it'll solve all your problems," but what they've done is create a really direct line of communication between actually doing the care and managing how care is done. And it is cooperative, in this case.

No, you're not going to be the radical on this call if you start talking about cooperatives as the mode of solution to many of these problems; we might end up forming a cooperative. Buurtzorg, Buurtzorg, okay, here we go.

All right. You all talked about Ostrom, right? I don't think so; we have before, a while back. So, building off Nissenbaum's work, there's a branch of Ostrom's common-pool-resource-management school of academic thought that's specifically about knowledge commons; I think you all read some of those. Some of those folks have recently taken Nissenbaum's framework of contextual integrity as the important thing about privacy in this interconnected world. As opposed to "does the government know what's going on with me," it's more: is information that I share in this specific context going to be appropriately translated to, or blocked from being used in, a different context? The old mode of privacy doesn't really apply to that. And this actually lends itself to thinking about privacy and trust, to both of these earlier points, as a resource, and people's dignity as a resource; the collective of that trust and dignity is essentially a common-pool resource of sorts, in that it can be easily squandered and polluted.
And there are ways to potentially cope with the threats to that vulnerable resource, and those ways essentially entail institutional design. So maybe a company is capable of stewarding some piece of this puzzle, but that steward needs to be monitored; based on what we know about vulnerable resources, you can have an appropriator with the power to deal with the resource, but who's going to monitor that appropriator? Who's going to monitor the monitors? How are the rules about what gets monitored set? Are those rules set by people whose stakes are involved in the management of the resource? So I appreciate having this frame. But the thing about common-pool resources is that the more complex it gets, the bigger the scale, and the more diverse the interests involved, the harder that shit is, and it's hard in simple scenarios. So the more I learn about this stuff, the less hope I have, which is a scary situation.

Yeah. Well, I think you hit it on the head earlier when you were talking about how we get those nurses to be the ones making the decisions. My point about cooperatives is that I don't think they're by themselves the panacea; I've definitely seen them done poorly. But I think that's a big piece of what we're trying to reach for in a participatory democracy model. You know, Maine has ranked-choice voting, at least in theory, and that starts to get toward our ability to trust that our vote does something; it starts to create this idea that a government might actually work for its people. And it's very hard not to have the very American context centered right now: we're about to go into what has already been a shit show of an election cycle, and nothing is working and nobody trusts anybody. We used to get this lovely complacency of "well, we don't really have to worry about it because it doesn't really impact us that much," and as untrue as that might have been, it's never been less true. Nobody's feeling that anymore.

One of the projects I'm working on right now: the point of the EPA is to enforce environmental regulations, and a big chunk of EDGI's work over the last few years has been showing that they basically just don't. My own research project, which I didn't publish because I'm a nervous data scientist, basically showed no correlation, nationwide, between violating a regulation and receiving enforcement action. And that just seems not good. We're doing a much more intensive, much more reviewed process right now to get at that much more specifically. But there is that problem: you can make a regulation, and then what happens next? You have to actually follow through. And I don't even think these regulations are that good; they're literally permission to pollute, and there's work around that. But even this very little bit that we have, there's not really any good reason to take it seriously.
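For what it's worth, the nationwide check described there is, in spirit, a one-liner. A hedged sketch, with an invented file name and invented column names standing in for the real facility-level EPA enforcement extract:

```python
# Toy version of the analysis described: does violating a regulation
# correlate with receiving enforcement action? Names are invented.
import pandas as pd

df = pd.read_csv("epa_facilities.csv")  # hypothetical facility-level extract
r = df["quarters_in_violation"].corr(df["enforcement_actions"])  # Pearson r
print(f"violation/enforcement correlation: r = {r:.3f}")
# A value near zero would match the finding that enforcement is
# essentially decoupled from violation.
```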
I don't know if this is right, so jump on me if I'm wrong, but it feels like there are maybe two separate threads you could pick at. One, going back to Greg's scenario, is this question of who designs the algorithms, what data is shared, willingly or not, and what the conclusions and "facts" are. Because there's a sort of arbiter of truthiness that comes in once we all defer trust to the algorithm: we want to believe the algorithm has fair inputs and that we understand the caveats. Anyone who's tried to write an algorithm knows exactly how caveated these things are, but in a business context, once you get layers of abstraction, that definitely falls away, and people just go: the thing spit out the score, therefore we do the thing. The other thread, which Kelsey touched on, is this question that, weirdly, comes up in the immigration debate too: you can have a policy with no way of actually enforcing it, so what is the point of the policy? In the environmental context you have the regulation but can't monitor it; or take taxes, where you have all these rules and then defund the IRS so nobody goes after anyone. So what is the point of the rule? It's effectively undercut by the fact that we can't actually monitor this stuff.

Yeah. In my field, basically, I have to start at the remedial place of helping people think about the difference between infrastructure and application. Fifteen years of Facebook and Web 2.0 have polluted an already dull American mind into thinking software applications are infrastructure. I guess there are some contexts in which you can make that argument, but what we need in this field of health, human, and social services is infrastructure on which various applications can work, and people are seriously stuck at "what will the software look like that everybody will use?" That's their level. So I'm trying to make the point that infrastructure is not something you ask what it will look like; I know you all want the solution in your hands, but we've got to actually build the things that stand behind the things people use, the things that enable those things to work. And then they say, okay, finally, I'm on the path, and I'm helping people understand what that means. I had to get down to the level of: okay, the data exchange pipes, the data lake, all that stuff is infrastructure, but also the meetings where you review what's happening in the pipes and in the lake, that's infrastructure; and the process of making decisions about what should happen in the lake and what should be able to go through the pipes, that process is infrastructure. Understanding that it's not just the thing itself but the way we use the thing that's really at stake here is a level of education that I'm exhausted to have to deliver, basically being a schoolmarm about it. People have just been mystified by this Silicon Valley culture of "it just works."
They're not able to think in terms of complex systems, which seems like a priority. And I think that's also reflected in this notion that individuals will consent once to something spelled out in some contract signed five years ago when the software was procured, and then that's it; the "which you didn't read in the first place" problem. But this notion of individual consent: everybody takes it as a given. Oh yeah, people should own their own data. But think about it for a second. A woman going into a social service provider has three kids and an ex-husband; one of the kids has a boyfriend; there's a caregiver involved. Her data is tied up in all those people's data. If she's going to talk to her social worker or her healthcare provider about this stuff, they're going to ask her these questions, because if they want to address her social determinants of health, which is what this is all about, they need to know all this information about her home situation and her family life. So she's sharing all this data about other people. It's her data, and she consented to share it, but what about them? We have no framework for thinking about how you protect people whose data is entangled with other people's. Sorry, I'm griping at you all.

No, I really like it. And it's interesting, because Jonathan was talking earlier about that positioning of harm: does the harm exist when it occurs, or when the opportunity for harm is first created?

Explain that to me a little more, because you've said it a couple of times and I don't know if I get it.

Yeah, it's a little hard in your context, so let me give an easy one; it's very simple. Imagine you're a company, you have a store, and you collect a bunch of credit card information because people buy stuff from you: where you're shipping it, their full names, all that good stuff. Is the harm introduced at the moment Kelsey hacks me and that data is leaked? Or is the harm introduced the moment I failed to encrypt your data, such that even if she hacked my system she wouldn't be able to read anything? That's the question.

Right, so it's: is the bad thing the breach itself, or making the harm possible? In other words. Yeah.
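A minimal sketch of the two states in that example, using the widely available cryptography library; the key handling and the record fields are invented for illustration, and a real system would keep the key in a secrets manager rather than generating it inline.

```python
# Sketch: the same record, stored with and without encryption at rest.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice this lives in a KMS, not in code
box = Fernet(key)

record = {"name": "Ada Lovelace", "card": "4111 1111 1111 1111"}  # invented

# Unencrypted storage: the harm-as-risk framing says harm begins here,
# the moment readable PII sits on disk, breach or no breach.
stored_plain = {field: value.encode() for field, value in record.items()}

# Encrypted storage: a thief who dumps this sees only ciphertext.
stored_safe = {field: box.encrypt(value.encode()) for field, value in record.items()}

assert box.decrypt(stored_safe["card"]).decode() == record["card"]
```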
And I think the point Kelsey's making raises this interesting question, even as a social thing, of what data permissions we give each other. Kelsey volunteered Cameron's email to me when I needed a contact, and there's some sort of social trust we imbue in people, an amount that feels normal in a human context. If I were instead to look at Kelsey's contact list and see every person she's emailed in the last year, that's clearly very different. So there's a question of what the socially acceptable versions of sharing are, and in what contexts I have a right of veto. You can even think of Facebook, back in 2008 or 2011 or whenever, when they were trying to make the social graph an API that anyone could plug into. There wasn't actually a full thought there; it's just an interesting example of the same issue. Do I get to volunteer the fact that we're friends to the world, or to some application you may not want knowing it?

Yeah, and how do we then navigate that tension between comprehensiveness and comprehensibility? I want people to have tools so they can gradually think through the implications of different things, because I think about this stuff all the time, and when I'm presented with a consent form, "do you agree to these terms of service," I'm like, fuck this. If I really don't trust the place, I'm not going to agree; but if I feel like I need to get in there, then even if I only distrust it a little bit, I'm still going to agree, because I've got to get in there. I'm presented with a binary choice, and it's disempowering. So are there examples of methods that enable people to navigate between what they can immediately comprehend and the broader, comprehensive universe of potential implications?

I have a kind of fun one: if you're asking me to consent to a data service, and it's a new niche one and I'm picking one among many, I'm less likely to read the whole terms of service and more likely to read the founder bios. That's how I'm going to know whether I trust them.

Oh, that's super interesting. But you're super savvy; what about regular people? Sorry, go ahead.

I was going to say, in a totally parallel vein, I think there's an interesting model where something similar has happened before: open-source licensing. Companies have different policies about the types of licenses they can use, and tools have been written to automatically flag when certain types of policies are embedded in dependencies or other things inside a project. It feels like there's an analogous thing you'd want here. I don't know how one actually goes about enforcing it, but something to the effect of general frameworks that can be applied over and over again, so I'm not trying to understand fifteen different flavors of Microsoft's version versus Facebook's version. There's a standard thing I know, so I can give consent more explicitly because I know what I'm signing up for; a well-trodden path. It also gives you other abilities: you could even imagine, in a browser (this is getting way too specific about a technical solution), the sorts of things you'd configure to say, for certain types of applications, enable these things by default and not those; and then explicitly be able to review who you've given what to, and revoke those permissions.
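The license-scanning analogy is easy to sketch. Real compliance tools are more involved (they normalize classifiers, handle dual licenses, walk dependency trees), but the shape is roughly this; the ALLOWED policy list is an invented example.

```python
# Sketch of a policy scanner in the spirit of license-compliance tools:
# flag installed dependencies whose declared license falls outside an
# allowlist, the way standardized consent terms might be checked against
# a user's standing preferences.
from importlib.metadata import distributions

ALLOWED = {"MIT", "BSD-3-Clause", "Apache-2.0"}  # invented company policy

for dist in distributions():
    name = dist.metadata.get("Name", "unknown")
    declared = (dist.metadata.get("License") or "UNKNOWN").strip()
    if declared not in ALLOWED:
        print(f"flag: {name} declares license {declared!r}")
```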
Isn't this what Solid does?

I have no idea. What is Solid?

I think it was in the optional readings, but I'd already read about it; I'm not trying to show off. It's Tim Berners-Lee's new modular approach to the web, where, I think, Solid gives you those kinds of granular controls over what's happening with any given site you go to. Is that right, Kelsey, are you familiar with this?

I feel like I ought to know. I've quoted him on the subject, but I still don't have a great understanding of it. It's very hard to read good explanations of this decentralized web stuff, because it's either the basic level you can get, or the level where you still don't understand even though they told you everything.

It does the thing where everyone has their own personal data capsule or whatever, right?

Yep, yep, I think so, yeah. I've signed up for a whole hour-and-a-half workshop tomorrow, at one of these internet symposiums, about design patterns for decentralized technologies. I'm not a designer or a technologist, but I'm ready to hear what's up. I downloaded Mastodon, I looked at that; I have no idea what to do.

Yeah, you should drop those notes in the chat if you take any tomorrow, because I'm really curious. I'm going to circle back on Jonathan. Really early in the conversation, I had talked about how, for example, GDPR or any retrofit regulation that applies very broadly is going to negatively and positively impact people and companies, and you talked about trying to enumerate the harms, maybe doing some version of looking at how much it hurts various communities. I'm curious what you think in terms of the possibility of balancing that; I guess, where the concept of equity fits into that thought.

I don't know if I was making a comment about how things should be; it was more an observation of how they are. When we talk about the harm imposed, it tends to be this balance involving some probability. If I imagine how Equifax talked about things before they got hacked, I assume it was something on the order of: we have this list of priorities. I'm sure whatever security work they needed to do was on someone's to-do list, but it was lower priority, and the reason it was lower priority is some trade-off between the cost of this thing getting leaked versus the cost of actually doing the fix, and the math balanced out to "well, let's punt." Maybe that's also humans being bad at gauging risk, but clearly, somewhere in that manager's function, some sort of discounting was happening.
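As a toy version of that manager's function, with every number invented: the expected-cost arithmetic only favors the fix if the firm actually internalizes the harm.

```python
# Invented numbers; the point is the shape of the trade-off, not the values.
p_breach  = 0.02          # assumed annual probability of a serious breach
harm_leak = 500_000_000   # assumed total downstream harm if it happens
cost_fix  = 5_000_000     # assumed cost of doing the security work

print(p_breach * harm_leak)             # 10,000,000 in expected harm
print(p_breach * harm_leak > cost_fix)  # True: the fix "should" win

# But if most of harm_leak lands on customers rather than the company,
# the company's private calculation discounts it toward zero and the fix
# loses. That discounting is the unbalanced equation regulation tries to
# rebalance, e.g. with fines that put the harm back on the firm's books.
print(p_breach * (harm_leak * 0.01) > cost_fix)  # False at 1% internalized
```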
I do think there's this question: if we think about what harm is introduced, it can really vary depending on what data, in what context, and what conclusions it leads to. That can make it really hard to argue from quantifying harm to begin with, because it really does matter how that data ends up getting leaked and what it can be used for. Which is why I don't know if it leads to a convincing argument: you could go really deep, and someone could still say, yeah, but what's the probability that actually happens? So I don't have a specific thought there, other than that it feels like the wrong path for trying to convince someone of why they should do a thing.

There's something... well, Jonathan and I were both talking earlier about how ridiculously busy we are right now, while in the process of implementing technological infrastructure. I took an anti-racism workshop this summer that was really, really good, by the Adaway Group, and one of the things it said that I hadn't heard anywhere before is this concept that busyness is a tool of white supremacy. There are a lot of different nuances to this, but one of the ways I think it can be true is: if you're the person in the position of implementing the change, you're worried about you, and you're worried about what your boss is going to say if you don't get the change done on time. That's one of the ways we get these data vulnerabilities: somebody's just trying to get stuff done, in a system that says sooner is better no matter what the trade-off, as long as we don't notice a big gaping hole. Quick, merge it.

I wonder if part of this, and maybe this is a broader question about policy, is that one of the most useful things government can do (there are many useful things it could do, but one) is to explicitly rebalance an equation that is unbalanced: how do you make this a higher priority, how do you explicitly make it much more expensive if it goes wrong? In theory this is how things work, and it gets to the question of the right check and balance; you probably want it in many forms. You could argue that in theory that's the EPA, but at what point does that get undercut, depending on who's in power and their actual commitment to getting to a specific outcome?

Yeah. Ah, dang, I had something perfect coming off of that. The economist in me really loves the idea that the thing the government is doing is collapsing the rational with the moral, making the good thing so incentivized, or the bad thing so disincentivized, that the good thing is what you get.

It's really hard to predict all the ways people will figure out how to do the bad thing that is still technically okay, though.

Yeah. Well, I think where we got to in our trust conversation was mostly agreeing that trust is a human-to-human thing. So one of the things that's kind of interesting, and this comes up in environmental conversations all the time, is: what if there were personal rather than corporate responsibility for violations of laws and regulations? Like, shouldn't Mark Zuckerberg go to jail?

Yeah, if it can be proven that he had control over the thing and didn't fix it, or didn't anticipate the problem, or was the expert who should have known: shouldn't he be personally vulnerable in the same way his users are personally vulnerable when our data is used? And as board members, that's when it would get really interesting.
But even just saying that, it becomes: well, it's not like he wrote the code; well, it's not like he was the one doing the auditing. And I do think there's an organizational problem that's hard to deal with, too. To pick an example that will probably grate: I don't know if you watched the congressional hearings where they interviewed all the tech CEOs, but they asked point-blank something very specific, I think it was whether Amazon uses pricing data from the website to do something, and the answer was "it is a matter of policy that we don't," but he wouldn't explicitly say "I know for a fact that we don't." And maybe this is exactly the point: you may have a policy, but if there's no consequence for someone violating the policy, then you don't actually have one. Sure, for all the right reasons, if someone discovered a violation they'd be reprimanded, but nobody has their neck on the line because Bezos is worried about being personally responsible.

I do wonder what negative ramifications that could have, though. With these super large actors it's very clear and obvious, but I actually think the very, very large ones are the least likely to have these issues. Think about Clearview, though Clearview may be a bad example because they seem to be very intentionally going into a specific area. Clearview, if you're not familiar, scraped a ton of data off the internet for facial recognition. It tends to be smaller companies that end up being the problem, and there's a counterbalance here: to what degree are we okay with really solidifying Facebook's lead, making Facebook the arbiter of power because the hurdles for anyone else become quite high? Part of me says we need to figure out how to do both: make it cheaper for people to be compliant and do the right thing, and make it more expensive not to.

Right at the beginning you were talking about Starlink and similar; Starlink might be too specific and controversial, but there are these different ways in which we try something in order to find out if it works, or even if it's viable, and there's something very interesting there. I read a book of environmentalist essays back before I was really in this scene. Actually, at the time I was very much in the tech startup scene, and very much on that train of: throw stuff out there, make your name as fast as you can, whatever way you can; if people are willing to give you money, it's good enough; make it happen; ship it. And one of the essays was all about, maybe one of you will know the right terminology for this, something like the zero-harm policy. It's an argument used against genetically modified foods: if you can't prove it's harmless, you shouldn't do it. And at the time I had this reaction of: oh, come on, we would never do anything, because you can never prove that something is harmless.
But I don't know. It's very interesting to look at that with my current perspective, because I'm not sure I totally disagree with my past self, but I get it a lot more now.

One context where I think about that a bit, really deviating from privacy, is self-driving cars: depending on your definition, they will either be here very soon or never, because it really comes down to how you define harm and what levels of harm are reasonable. We'll have to grapple a lot with that one, but I think it's applicable elsewhere too: what is the gating threshold? Because the other way of looking at it is that by default there is already some harm involved; we are tacitly accepting the status quo. Take genetically modified food, say golden rice: how many people can't get access to adequate food, and by having this genetically modified option, your choices are either less food overall or this "risky" food. So in specific contexts it might be worth picking apart whether we're already implicitly saying the status quo is the acceptable thing. That just leads to more questions, but in the Starlink example, it's easy for us to say that if we do this thing we're introducing space junk and whatever else; the other question is, what about the people who don't have access to the internet but want it? What are we implicitly saying: that because you weren't born in the right area, you don't deserve access to this wealth of free knowledge that's just sitting there?

I've seen a similar argument used around nuclear power, where we have this idea that you can't use nuclear unless you have no waste, or can store it for literally forever, versus: okay, how much radiation is emitted by unharvested uranium? Can we at least get to that level? Now you have a line, a reasonable-seeming line, where before you just had "let's just never use it."

Yeah. Have you all seen the Feminist Data Manifest-No, the feminist data manifesto? I'm on my phone so I can't link it; it's just manifestno.com. In my field I'm known as the outer left, the leftmost edge, and reading this I was like, oh fuck, they are far to my left, and they seem right, they seem correct. They're basically like: "We refuse to operate under the assumption that risk and harm associated with data practices can be bounded to mean the same thing for everyone, everywhere, at the same time." And that's just how it starts; it gets harder from there. I'm reading through it going, you're right, you're right, you're right, and at the end of it I don't know what I'm left with as someone who wants to reduce harm in these fields. What they're basically making the case for is refusal and rejection, and it seems solid to me. Which is worrisome, because where does that leave me as someone who's trying to do ethical work? I've sort of made my peace with it to date.
But looking ahead to what I expect will be a very bad situation next month, and what I expect will become much worse in January: these fields are going to want to continue pretending that everything's basically fine, and that politics is just weird, indefinitely. I'm personally approaching a point in my professional work where I'm going to have to start taking this manifesto more seriously. Under an explicitly fascist administration, hospitals shouldn't be collecting people's personal information and sharing it with child and family services; it shouldn't happen at all.

And I think you're right, Greg; I think this thing is totally right. I don't think it means you can't do anything with data. I think it means you have to do things with data that are specifically, really participatory. In the fields I'm in, I was already struggling just to say: let's create a group of people who aren't just users, but who are setting priorities for this entire system and evaluating outcomes. Maybe that's a little bit of what this is, a version of this. But I am concerned that the forces of power are inherently going to have the upper hand when it comes to complex information systems, no matter what kind of participatory action research you throw at them. That's my concern. And in a situation where we've gone from passive white supremacy in this country to active white supremacy, I think that leaves those of us who believed in innovation in a very difficult spot. Sorry to be a bummer again.

We've all been bummers today; it's okay. Honestly, I'm wondering, in our last eight minutes, if there's more ground we want to cover, because I feel like we've really talked through, and depressed ourselves about, this.

No, what's working? Give me appreciative inquiry. What have you seen going well? Who's doing it right, other than Buurtzorg?

Buurtzorg is good. I like what we're doing with Environmental Enforcement Watch, environmentalenforcementwatch.org, where we're trying to get different people involved with EPA's enforcement data. What else is there? Design Justice, as a text, and the many networks of thought and activism that Sasha points to in it, was really helpful for me to bring into some spaces, and they specifically link to Our Data Bodies, which I find to be a really good start as a report that brings some of these ideas into a digestible framework. You've got Max Liboiron's lab in Canada, which does a lot of data justice work and, in true academia, kind of moves the needle on who's allowed to be participatory in which spaces.

That's in Canada? What's it called?

I think she's near the Maritime provinces, but I'm not actually sure. Max Liboiron.

Right on. They look cool. How can I talk to people about the risks of de-anonymization?

I mean, honestly, you came in and said you feel like you've been the first person in a lot of these spaces to think about harms that aren't just legal harms, and I feel very similarly, including in leading the decentralized web meetup here in Seattle. Among technologists, there's definitely a cycle going on:
you can become a technologist because you had access to the materials and the community, and so a lot of the people doing it just haven't really thought that far beyond how cool it is that you can do the thing. I've had a lot of conversations here where somebody said to me, "oh, it'd be so cool if every car had a camera on it and you could monetize just by driving around, you know, showing wait times at restaurants," and I was like: I don't like anything about that; can I break that down for you? Yep. Honestly, I think that's kind of a source of hope: just what you're doing, integrating into different communities and saying, hey, have you been asked this question? Have you thought about this? That's what I want Data Together to be for people, too: a space to really think about this stuff.

I feel like we need grief counseling for technologists. Our peers are going through it; I've seen many people going through it, and it's clearly painful. But more people need to go through the process of mourning for the internet that we grew up believing in, and for the notions that inspired us to do this work. We have to grieve; we have to let it go and see what comes up after, through that process. What are we left with?

Yeah. I think I asked someone on the last Data Together call, you know, how do we implement this idea on a broader scale, how do we implement trust at a larger scale? And I can't remember who it was who basically clapped right back at me and said: you don't; you grow it up from the ground every time. And that's not wrong.

Yeah. That's what I was worried about.

We can do better infrastructure, too. I've got some ideas for how the EPA can have better hygiene on their enforcement data practices.