Hi, everyone, and welcome to our third and final conversation in our series on meaningful technical oversight of generative AI. Just as a reminder, our first conversation was around what new harms generative AI is introducing, if any, or whether we're just seeing old harms in new outfits. That was with Reva Schwartz of NIST. Our second fireside was with Bruce Schneier, about balancing transparency and security. And today we have our third conversation with two guests, Julia Angwin and Brandon Silverman. Julia Angwin has a very long and amazing resume, and we could not go through it all, but I suppose the most recent things you're working on are being a New York Times opinion writer and writing a new book. So Julia, thanks for joining. Happy to be here. And Brandon Silverman, formerly CEO of CrowdTangle and now a transparency policy expert. Brandon, you've been really advocating for increased access to data for journalists and other folks, so that's a good way to lead into what we're talking about today: this concept of meaningful technical oversight. Before we hop in, just a reminder to folks joining: please put your questions in the Q&A panel at the bottom of your Zoom screen. We won't be taking questions at the end, because 45 minutes is going to go immensely fast, especially with this illustrious group, so I will do my best to take relevant questions in real time. Please feel free to hop in, put your questions in, and I'll raise them during the discussion. So today's conversation is on meaningful technical oversight, and I'm really excited for this one because the policy world is exploding right now.
So last week we had Sam Altman and other folks in front of Congress, and some big things got dropped: this idea of licensing, this idea of some sort of a regulatory body for AI in the United States. Everyone seems to be talking about policy, the Digital Services Act, the EU AI Act, etc. But I want to talk about governance in other forms. I think there is a knee-jerk reaction, when we say governance or oversight, to immediately think about the law. But the people I have here today are not policymakers. You're not people who write laws, you're not regulators, even if you may advise or work with regulators; you operate in a different kind of technical accountability and oversight. So Julia, I'll start with you. You were, in part, responsible for creating the field of data journalism. I know this is reaching back into your career a bit, but I wanted to start from that end, because I find data journalism really fascinating: here we are with a profession that one would not think would be technical in nature, and you created, at ProPublica and The Markup, institutions known for deep diving into the data and reporting on it. So what inspired that, and why do you think journalists were the people to do this? Well, it's great to be here. Thank you. And I really can't take credit for all of data journalism, but what I can take credit for is really bringing developers and engineers into the investigative reporting process. Basically, as somebody who grew up in Silicon Valley and was programming since elementary school, I grew up in the early days of the computer revolution, and my parents were in the software business. So I always saw it as really natural. I rebelled against it and went into journalism, but then what I realized was that journalists were so outgunned.
There are, I think, something like six PR people for every journalist in America. And, as you may have heard, the revenues of the industry are collapsing. So I basically thought, okay, I know computation and automation can superpower your work. Especially when you're confronted with these opaque algorithmic systems, I felt the only way to understand them from a journalistic perspective was to do some sort of reverse engineering. So I brought in engineers and developers to help me do that, and that's what I've been doing for the past decade or so. And I would say it is really important work, because what we're seeing, especially with generative AI, is that these types of systems are really hard to monitor, especially when they have different outputs every time you encounter them. The dynamic nature of them means that any story you write as a journalist is a one-anecdote story, it happened this one time, and the companies can dismiss it. They can say, oh, that was just a one-off. So you need a bigger data set in order to prove systemic harm. And I believe that data journalism and building these types of data sets is going to be the only way we're going to hold these types of systems accountable in the future. And Brandon, what I love about this talk, and I was messaging you both about this, is that I feel like you're mirror images of each other. So Brandon, you were the CEO of CrowdTangle. And again, this is digging back a bit, related to what you're doing today but not what you're doing today. Can you talk a bit, for people who may be unfamiliar with CrowdTangle, about what CrowdTangle did and how it relates to journalists and journalism? Yeah, first of all, it's great to be here, so thank you for having me. So yes, I was the CEO and co-founder of CrowdTangle.
When we got started, we were essentially a social analytics tool that focused, in the beginning, on Facebook. Over the years we pulled in data from other social media platforms, but what we did essentially was make it really easy for newsrooms (in the beginning it was mostly newsrooms) to have a better sense of what was happening on social media. Starting around 2012, 2013, newsrooms began getting so much traffic from social platforms that suddenly this data was becoming really important to understanding their own business and their audiences, with readers flooding in at levels they'd never seen before. But also, increasingly, a lot of political and civic discourse was happening on these platforms. So even if they just wanted to do their jobs, it was really helpful to have a way to see inside these systems that were otherwise functional black boxes. So we got started with a tool that made it easy to look inside social media. We were acquired by Meta in 2016, and for four years we continued to do that work from inside Facebook, but instead of just focusing on newsrooms, we took on a broader mandate of helping Facebook be transparent with a really wide swath of civil society, with the same set of solutions and products. Yeah, so let's talk about this expanded universe of people. Again, when people talk about technical oversight, you think about data scientists and people with formal training. I find it really interesting, because last week this idea of licensing came up: Sam Altman brought it up in the context of licensing for companies, but we have also seen the idea of licensing for data scientists, or for the people who are going to be building these models. So, a sort of open question for either of you. First, how do you feel about the idea of licensing, whether for companies or for individuals?
And second, how do you think that would impact the communities you work with, civil society and journalists, people who are not your traditional tech people going through technical degree programs? Julia, I can start with you. Sure. You know, something that comes up a lot is this idea that journalists should get access to the platforms through some sort of license or an API, and generally the requirements that are set for that type of access are not things that journalists can comply with. Oftentimes the platforms might want to know exactly what type of stories you're going to do, or vet your methodologies, or you have to agree not to use the data for certain things, or to keep the results confidential. Almost every scheme like that really doesn't work for journalists, which is why I've been a real advocate for what I call adversarial testing, which is basically just looking at these systems from the outside. And that is risky, because we don't have a legal regime in the US that really supports that type of work in the public interest, and we need some amendments to the law to make it less risky for journalists. Honestly, a lot of journalists are still doing it with strong lawyers behind them. It's great to have all those other types of schemes, but ultimately journalists are sort of the last backstop, acting on behalf of the public. We need to be incredibly independent, and so our testing needs to be outside of most of those regimes. And Julia, how would you feel about an even harsher license, something akin to a medical license or a license to practice law? How would you feel about formalizing, even more, the practice of being able to handle data or work with machine learning and AI models?
I mean, I think that's an interesting idea for the people in the industry. I've always wondered why there wasn't a code of ethics that programmers and developers signed on to, and I know there have been some movements here and there to install those, just the way other professionals have commitments outside of their actual job, where they can say, you know what, I can't go do this thing because it's outside of my ethics. As these systems that are being built, the AI systems and the social media systems, control so much of our world, I do think it's incumbent upon the people who work on them to start thinking about what their ethics are and what higher principles they need to be called to. So I think that's an important thing. Journalists already have that kind of ethics in our community, and we do subscribe to it, although we don't have a licensing regime, because we don't believe in government handing out licenses to journalists. But I do think there are some really clear standards about what constitutes journalism, at least in the US and in other places that have been trying to codify it, like the Trust Project and initiatives like that. I'm going to put a pin in that last part of what you said, the government handing out licenses, because I think it's a really interesting point, and go over to Brandon first. So Brandon, let's say these licenses existed when you were forming CrowdTangle. Everybody at CrowdTangle, including you, would have to go get some sort of a license, or you'd have to pay every year to have CrowdTangle's license renewed.
And one of the critiques has been that this is just these companies trying to build a moat. Famously, was it two weeks ago? I don't know, time moves so fast, it seems like it's been six months but it was two days. I think it was two weeks ago that the now-infamous Google "there is no moat" memo dropped. And one of the critiques of this idea of licensing is that it's the people who are already in power trying to build a moat around themselves. So, one of my first principles at this point, when it comes to transparency and data sharing and these different things, is that I like a "yes, and" to essentially all the different mechanisms. There are genuine and really important privacy risks, and in some cases trade secret risks and other things, where some sort of vetted licensing scheme could play a role and be helpful and important, but it shouldn't be at the expense of other types of access. There's no single solution to this stuff, and we should not put all of our eggs in a single basket. Every one of these slightly different stakeholders has slightly different outcomes, slightly different risks and benefits, etc. So I very much always have a "yes, and": sure, licensing, great, but let's get into the weeds on what that looks like, because there are real ways in which it can be done poorly, including with the wrong incentives if the private companies are the ones deciding who gets access. And also, in that entire "yes, and" spectrum, you need some mechanisms that allow and enable adversarial transparency, where people who have no conflict of interest and no conflicting incentives also have the ability to go in, because these systems are so powerful and have become so core to our body politic that they need to be scrutinized, and scrutinized very aggressively, and we need to make sure those methods are also available.
So yes, I believe in a "yes, and" to all this stuff, but especially making sure that we're engaging a diverse group of outsiders and civil society, and for me, especially journalists. I just fundamentally think that in liberal societies and liberal democracies, the role of journalism in holding the powerful to account is especially important now, when we see so many of these platforms owned and managed by a tiny handful of companies in very heavily consolidated markets, and touching so many different parts of the world we live in. So yeah, that was my first instinct when I first read about it. So let's pull that thread a little bit, this idea of licensing, and even the "yes, and." Who would be this licensing body? Julia, understandably, says maybe this shouldn't be governments, especially if we expand this model around the world beyond the US, or even imagine the US under a different regime. As a data scientist, one of the things that attracted me to the field is actually the complete lack of centralization of authority. There is no one degree program, there is no Harvard of data science, there is no one place we get degrees from; we learn online. So, essentially, who would bell the cat? Any thoughts on who that might be, or who that shouldn't be, I suppose? I guess I'm not even sure I'm a "yes, and." I think I might be a no. I'm just not sure that history provides any support for the idea that we will know in advance who is going to come up with the best uses of a technology.
Think about what licensing really suggests. I definitely want my doctors to be licensed, because it's clear to me that there's a huge body of knowledge they need to have before I want them anywhere near my body. But technology is actually a very different field, where we don't yet know what lessons we've learned, we haven't codified them yet, and we also don't know where the great innovations are going to come from. So licensing feels to me a little like trying to establish a priesthood really early on, where only certain people get access to these powerful tools and those people are granted special privileges. I mean, let's be real, we already have such a priesthood: the power of AI is already totally in the hands of a few corporate entities, and in a sense they already have the ability to give it to whoever they want. So I don't know why we want to enable that further when, in fact, I think the questions we should be asking are: how can we make sure this new technology is being used for public good, what incentives do we need to put in place to make sure that happens, and what are we doing to mitigate harm? I would say I'm probably in the mitigate-harm camp, because that's how I view my role as a journalist, but I think there is a role for government and for civil society in figuring out the good uses, and I wouldn't want them to have to go back to the billionaires to prove in advance that they're going to do something good with it. These guys, by the way, did not prove in advance that they were going to do anything good with it. You know, that's fair. Just going back up the chain: who has to prove value, to what, and to whom?
I find it very interesting that all of these companies are chasing this idea of building artificial general intelligence when actually nobody has any idea what artificial general intelligence might even look like, and literally billions of dollars are being poured into companies to chase something we can't even define. So to your point, I find that part really interesting. We actually have a really good question, thank you David, and it's a good segue into the next thing I wanted to talk about, which is this idea of the Brussels effect. So let's take it up a level from the US into the global arena. There is increasingly a conversation around global governance, but what would global governance even look like? Anything attempting it would be geopolitically fraught. And this isn't about China or Russia or some quote-unquote malicious actor; no country, including the US, wants to give up sovereignty over regulating technology on its own. So the idea of doing this is tough, but there is a risk, and David asks about it in his question: is there a risk that national regulatory regimes, in a connected web, just end up leading to the relocation of certain functions offshore? Sometimes it's called regulatory arbitrage.
Another name for it is Cayman-ization: the fact that a lot of companies will test their models in countries with weaker privacy and security laws and protections, in order to work out the kinks and then launch to the public at large. So let's think through this. What happens if and when certain countries have stricter regimes than others? Or, bringing it back to the US, we're kind of already seeing this with privacy, and increasingly with surveillance and ethics, where we have different laws in different states. Can we talk about that a bit? What does that look like in the current policy environment, whether nationally or globally? Sure, Brandon, you might want to go. I'll jump out to you, Julia. I was just going to say that I think there's absolutely no question that this is already happening. Think about how GDPR was enforced: essentially, Ireland was the lead enforcer, so everybody set up shop in Ireland, knowing that the enforcer was not going to be incentivized to be strict against these companies. That's in part why the EU had to step in, build its own regulatory regime, and pass new laws to try to regulate the tech giants. The US, of course, has chosen not to regulate them at all, keeping our Cayman option afloat, I guess. But my feeling is that yes, that's going to happen, but that's not an argument for not doing anything. One of the arguments I've heard people in the tech industry make is, oh, you shouldn't do any regulation because it will just lead bad actors to go to bad places.
And I think that's just not true. That's not how law enforcement works. We do extradite people from other countries; we have rules. I think we still want to live in a world where we have the rule of law, and if we want to live in that world, then we have to make laws we all agree on. If people leave the country to try to escape them, we have to decide whether it's worth our effort to go get them. So I'm still a believer that we have to believe in legal regimes, even if they are flawed and inconsistent. You know, just to butt in, Sam Bankman-Fried comes to mind, right? Yeah, we did manage to get the guy out of the Bahamas, and now he's in his home in Palo Alto. Brandon, any thoughts? Yeah, we're already seeing this play out. The other way I've been thinking about this recently is: to what degree are some of these locally specific versions of regulations even technically feasible? Secondly, they seem to skirt some of the jurisdictional issues around data rights, which I think is increasingly going to come up. And third, I do worry about what happens when, as in the US, the federal government isn't doing anything and states start writing their own laws. One of the really unfortunate things that happened with privacy is that it completely scrambled the politics of passing a federal privacy law. California jumped in and did it, and now there's essentially a standoff between California legislators, who believe they've written the right version of this, and a federal bill that, precisely because it's a compromise, they won't sign on to. So now, because of what we were forced to pass in California,
I don't know if we'll ever get a federal privacy law, and I think that's a real cost to those of us who believe in this stuff. And when it comes to AI, I've been very convinced, in part, Rumman, by some of your writings, but also by other people and by what I've been seeing personally through some of the data-access work on social media in Europe, that we have to try to get to a point where we have some international, transnational cooperation around these issues. That way we can head off some of the challenges of Balkanization, build regimes that are more standardized across the whole world, and hopefully encourage countries that haven't passed anything yet toward voluntary compliance where they are. So if we can think about international solutions in some of this space, I think there's a lot of promise there. Yeah, and I appreciate your weaving through and navigating this. There are these laws and regulations that are, you know, sticks, and then there's the aspect of law that could say: let's open up access to a wider range of governance partners, more than just lawmakers, regulators, and policymakers. How do we do this? Of course there's the Platform Accountability and Transparency Act in the US, and then the Digital Services Act. So I want to open it to either of you: how do you feel about the Digital Services Act, especially since we know that's going to be the first one that comes into play allowing for access? There's been a lot of really interesting conversation, and I appreciate both the positive views and the cynical views. And in full disclosure, I've been working with the ECAT team, so I know a little bit about what they're trying to do. Brandon, I think you've talked to them a few times as well, and Julia, I don't know if you have, but I wouldn't be surprised if you had already.
Because we're all navigating this together. This is an unprecedented level of access, and, as more than one person put it in some of the workshops, companies will push back, sometimes rightfully so, on protections around IP, security, and privacy. So the first question is: how do you feel about opening up this data? How might it be meaningful, and where might some of the pitfalls be? Julia, I'll start with you. Well, it's funny. I was just writing my comment to the EU Commission this morning on data access, advocating for them to consider journalists and journalist access as they work out some of the fine print about how to implement these laws. But I would say the DSA is such an incredible moment. If you think about it, these are global platforms, and up until this moment it felt like they were too big for any government to regulate. The nations that tried didn't succeed. So the EU getting together and passing these laws really thoughtfully, in a way that attempts to mitigate the risks while allowing flexibility for the fact that these systems are dynamic, is really, really exciting. What I'm worried about is that if it doesn't work, people will say the platforms are just too big to regulate. So I feel like the stakes are really high, because if this doesn't work, and they're hiring hundreds of technologists to help organize how to do this auditing work, then people might feel like, oh well, there's just no hope. And I want there to be hope, because otherwise you have global platforms that regulate speech around the world and are not regulated by any government, which is, by the way, in most cases a proxy for the people who live there.
That's a really scary thing, right? They really can flip a switch and say, we want this person to win this election and that one, and that is a terrifying idea. So I do think we want to see the DSA succeed. I think it's an open question; it's a hard task. And Brandon, I know this has been your passion and what you've really been working on recently, the Platform Accountability and Transparency Act as well as the DSA, so any thoughts on how it can be successful? Yes, many, many thoughts. I'll do the DSA quickly and then pivot to PATA. On the DSA, I agree with Julia: it has the chance to be a generational improvement in the internet. But there are also still a lot of challenges in getting it right. So much of this work is new. There are so many details to work out and still negotiate between civil society and the European Commission itself, and there aren't enough technical experts willing to help with this stuff. So there's a lot of hard work, and in some ways I feel like the hard work of getting it right is just beginning. And I absolutely agree with Julia on the specific data-sharing stuff. There are some nuances to how it's written that worry me a little, and that some of us are trying to help with. One is how broad the audiences are that it's being made available to, and whether that includes journalists. If not, I think that's just a huge miss in terms of the amount of impact it's going to have. But then secondly, Europe believes a lot in its regulators and its agencies, and so the DSA gives a lot of power and authority to them, in a way that we have seen go wrong in other countries in different circumstances. So we also need to make sure the work itself doesn't go wrong because it sits inside a government body.
But on PATA, the Platform Accountability and Transparency Act, which is a similar approach in the US: the number one challenge is getting it passed. It is going to be reintroduced in the next few weeks with some more prominent bipartisan senators, which is exciting, and with a few improvements to the bill that I think are going to be meaningful for continuing to build a bipartisan coalition here in the US Senate. It's very unclear how House leadership is going to feel about it. These things are sometimes long-term projects, and every additional elected official who comes out to support it is important and meaningful, and each step we take in writing a better bill is also meaningful. So the first step is getting it passed. And then the second step would be a lot of the same work as on the DSA: there are still a lot of details to be fleshed out, even in PATA, and some of that is technical, while other parts are figuring out what the right tradeoffs are. So there would be a lot of work in getting it right even after it got passed.
So, a good segue into a question from an attendee, and it brings us back to last week's Congressional hearings and people having mixed reactions to industry in general, and in this case to Sam Altman specifically, saying we want a regulator, we want to be regulated. The cynical view is, well, of course you do, because that helps create your moat, but it also moves the conversation into an arena where companies have a lot of power, especially in the United States, where they've spent hundreds of millions of dollars on lobbying. Julia, journalists don't have hundreds of millions of dollars of lobbyists to go influence policymakers; you do well in the court of public opinion, and companies don't do well in the court of public opinion. So what this question gets at is that these technical standard-setting bodies, and standard-setting bodies in general, are not neutral. How can these bodies be made more accountable? And maybe this also brings in the Brussels effect: the idea that there's a vacuum of global governance, and first with GDPR, and now with both the DSA and the EU AI Act, there actually is European standard-setting of values. So, how can these bodies be more accountable? Julia, I'll start with you. Okay. Yeah, I mean, it's a really hard question, because, as you mentioned, the most wonderful thing about the internet is that no one runs it. That's actually what makes it great. The decentralized nature of it is amazing.
And when I first started covering the internet, the technical standards bodies really were just the engineers at universities, and even if they were at a company, they were really more devoted to the internet than they were to their corporate employers. So it was a different time, and now those bodies have been gamed, just like everything else: companies realized that they can set standards in a way that advantages them, and they try. That is a really sad thing, and it's also worth noting that civil society is just underfunded. There's not enough money being spent on this. I wish there was someone doing what the Federalist Society guys are doing, with hundreds of millions of dollars, to support engineers to go to the standards bodies, to set up technical institutes, and, honestly, to save journalism, which is falling into the ocean. But that said, the alternative to these standards bodies is probably worse. I don't know what a better one looks like; I don't really want Sam Altman in charge of it, personally, and I don't think the government, or any particular government, should be in charge of it. So I feel like the answer is maybe just to make this somewhat flawed system slightly better. I was going to agree with that. For any solution you propose, you have to ask who's going to fund it, and there are essentially three and a half reasonable sources of funding: industry, governments, or philanthropy slash the people, I guess. But the problem is, on the philanthropic side, it's very unclear at the moment whether that money is available in any meaningful way; a lot of us feel like it's not available in Europe.
It feels like there's more of a possibility of it here in the US, but that space is still very nascent at the moment, and it's unclear if it could meet the needs of this sort of thing. And so, are you willing to take the trade-offs that would come with trying to build some sort of independent, trusted, effective body that gets any industry or government funding at all? Is that even possible? Etc. So I agree: we're choosing among options that are probably not ideal, and trying to do the best we can in that space. And, you know, I will say, on the cynical side, watching Sam Altman's testimony, I just felt like we're playing the same record all over again from the last fifteen years of this stuff. OpenAI, say what you will about them, they're not dumb, and I think they can read the tea leaves as well as anybody else: nothing is getting through the federal government anytime soon on anything. And so I think there is something a little disingenuous about going and calling for this stuff. It's a lot like a strategy credit: it's very easy to take credit for calling for something you know is not going to happen. And meanwhile, they probably get some credit from hyping up how world-changing a technology it is that it desperately needs regulation. So I think there are a lot of marketing benefits to the testimony, in my opinion. But as for this question of where the regulations will actually even come from? Yes, it's probably going to be Europe, or setting up some sort of international, transnational bodies and getting the administration to join.
Or, you know, Julia is probably going to kill me for this, but I do think public pressure does still work on these platforms. There's this notion sometimes that they've had so many bad articles written about them that they just write it off. From my brief tenure inside, I actually saw that oftentimes it felt like the opposite: they were bouncing around, trying to figure out what to do with a lot of this criticism, and it actually really did impact a lot of decisions, sometimes too much so. In some cases that is clearly not true; in some cases I think they planted their flag, and they don't care how much criticism they get around X, they won't change. But I do think public pressure can change some of the behavior, and I think it's still an important thing to keep up as we move forward. So I just want to say I heavily agree with you, Brandon, on that front. In fact, sometimes I think of myself as a shame regulator: I regulate through shame. Well, in the sort of risk-analysis way of looking at things in companies, that's reputational risk, right? A name like yours or Brandon's pointing the finger at a company carries a lot of reputational risk for that company. So, a good segue to a question from Jihad, and it's come up a couple of times: this idea of the imbalance of expertise. We lack enough trained auditors; we lack enough people with these critical thinking skills or data analytics skills. And both of you have worked to improve that. So it's hard to ask what can be done, right? But we can't live in this world forever. So what can be done? Maybe I'll put it more specifically: should foundations be putting their money differently?
And just be funding things? You know, the White House announced $140 million for various national AI centers, which sounds like a drop in the bucket compared to the $10 billion that Microsoft has just given OpenAI. So you have all of these places, and you've both pointed out amazing people in Europe and amazing people in the US trying to work on this. And then contrast that with a salary at a tech company, which would be significantly higher. And frankly, having worked on both sides, I would say the work there is easier in many ways. Not that the work is easy, but you don't have so many frustrations, right? There are lots of wins. What can be done? Yeah, I'll jump in. I have lots of thoughts on this one, so I'll share one really, really concrete one. There's a program in the US called Tech Fellows, where they place highly technical staff in Hill offices in DC. It's an incredible program. There's this notion sometimes that DC doesn't get tech or something, but I have met a lot of the Hill staff working on this stuff, both in elected officials' offices and on committees, and some of them are incredible and know this stuff insanely well. Europe needs a Tech Fellows program, desperately. So if there are philanthropists out there thinking about this stuff: we need a European version of Tech Fellows. And by the way, if it does exist, I apologize, but they need more of this talent and a version of that program. I've been to Brussels a few times and I haven't heard of one, but I think that's a very concrete thing.
The second is, I do think that Silicon Valley will always be able to pay more than the public sector, etc., but I think the days of insane compensation and insane campuses were a bloated era that is coming back down to earth, and I hope that a little more balance will make it easier for some people to choose between private and public sector opportunities. Literally, the layoffs these companies have done have put out a lot of talented technical people just in the last nine months, and part of what you're seeing is, I think, the beginning of the building of a trust and safety, or integrity, industry that now exists outside of the platforms. That is resulting in things like trade associations being created, like the Trust and Safety Professional Association or the Integrity Institute, and you're also seeing dedicated academic journals focused on this. You're seeing new for-profits, new nonprofits. My bullish self says that space is growing now and will hopefully become a kind of personnel infrastructure for the internet going forward, in a way that it wasn't when it was just people who used to work on the ads team, got asked to help with, you know, impersonation, and suddenly were the leading integrity person on spam on the internet. Now there's going to be a new industry with a lot of talented technical people who hopefully will be moving between those spaces a lot more going forward. So yeah, I'll pass it back. Yeah, and Julia, a big part of your career has been literally in cultivating that expertise among people who are not working at tech companies. What are your thoughts? Well, the thing that I experienced is that there are so many people with technical skills who don't want to work at those companies and want to do something more mission-driven.
If I had all the money, I could hire so many of them, because that is the only barrier. They're willing to work for half, a quarter, of the salary; they're actually really excited about doing meaningful work. And the reality is that a lot of the work you have to do in those companies is not that joyful, right? Like optimizing ad algorithms or whatever. So I think there's a huge group of people, and I have never had any problem recruiting. I think the challenge is that I'm in a field where revenues globally have gone from $100 billion twenty years ago to $30 billion. So I'm in a collapsing industry. And there's only so much nonprofit philanthropy can do, right? There are a lot of other urgent questions in the world, and so I feel like the amount of journalism that needs to get done, especially the technically precise, methodologically driven journalism of the kind that I like to do, which is expensive, is hard to get funded and difficult to keep running. Gosh, we have five minutes and so many good questions in the Q&A. Well, we'll stay on that point then, about these pathways in. Here's a question by Hector: he wonders about any thoughts on protecting people who highlight ethical and social harms from retaliation, so this idea of whistleblower protection. I know that's actually something folks have been working on, but these protections may not extend to ethics or tech-impact concerns in general.
Is this something you'd support? Is it something you're seeing some noise being made about? And, to both of the points you were making: there is a huge audience of people who don't want to work for tech companies, but there are also people working at tech companies who go in and realize they actually want to be doing something a little more meaningful, or who have seen terrible things and are afraid to speak out, because we have seen the amount of pain that companies can send your way. So, anything on whistleblower protections? Are you in support? Have you seen any noise about it? Brandon, I'll start with you. Oh, Julia, sorry, it looked like you had something. I would just say that we do have, I mean, not great whistleblower protections, but actually somewhat decent whistleblower protections for corporate employees. The problem is that the law just isn't enough, right? Companies can make your life hell regardless of what the legal structure for whistleblower protection says, and so ultimately you see that whistleblowers, like Frances Haugen for instance, leave rather than deal with the consequences of what it would be like to work there after having blown the whistle. So whistleblowers are people that we need to protect, but I think realistically the idea that they're going to be able to keep their jobs is unrealistic, especially given that we're now a nation where people move jobs all the time. That said, I'm really hopeful, because there are two new legal entities now that protect whistleblowers, and that didn't used to exist. So I actually feel like things are on the upswing for whistleblowers.
Yeah, you know, I know less about the legal bar for whistleblowing, so I probably won't speak to it. But what I will say is that one of the reasons I'm so passionate about data access and transparency is that, for a long time, getting any window into a lot of these questions required somebody to leak the documents. If we had a world in which there was much more accountability and much more ability to audit and study and monitor these platforms, ideally you would have to rely on whistleblowers less, because the public would already know so much more. Perfect. So, in the two minutes we have left: the one law that is happening is the Digital Services Act, right? And we talked a little bit about Article 40, which allows openness and transparency and access. But we've all expressed that this is either going to be great or terrible, and there's kind of no in-between. So, in the two minutes we have left, I'll go to you first, Julia: what is the one thing that lawmakers in the EU should be focusing on to make Article 40 a success? Oh, thank you for asking me that question. I really think that journalists need to have some access to platform data. We have been on the front lines of providing accountability for these platforms, and if they really want these companies reined in, we are the ones doing the work. And there is sort of a provision in there that seems to protect that right for journalists to scrape, and I think that should be codified so it's clear that we can go in and work in the public interest on these platforms. Brandon? I totally agree. There is a version of the current Article 40.4 and 40.12 that could be read in such a way that, three or four years from now, when it's all said and done,
all we have is regulators picking and choosing from among a very small handful of academic researchers, whose projects may or may not even be technically feasible by the time they go to the platforms and ask for the data, which may or may not exist, and who may or may not have funding after they may or may not get the data. There's a world in which, if you read this stuff very narrowly, it's essentially moving from one walled garden to another, and from real-time monitoring of these spaces to constantly looking back three to five years later, in a space that changes insanely quickly. So, can we craft the details in such a way that they allow tiers of different access and tiers of different data, including one that makes room for real-time monitoring of what is happening on the platforms? Julia, Brandon, thank you so much. If the comments and chat are any indicator, this conversation could have gone on for hours and everybody would have been glued to their screens, so thank you both for your insights. And thank you to the folks who joined us. Stay tuned: we'll be posting this fireside chat, as well as reaching out again. So thanks, everybody. Thank you.