First of all, thank you. I'm so pleased to be up here with the two of you. I've loved getting to know you virtually over the last few weeks, reading your writings and watching your YouTube videos, and it's really great to see you in person. I'm hoping we can have a little conversation, and then we'll leave about 15 minutes at the end for questions. So if you have them, or if you've already thought of them during these wonderful presentations, write them down; there will be mics out there, and we'll give plenty of time for that.

So I want to dive right in, Ramesh. In your work, you talk about how online companies are shaping the way we live, our beliefs, our work, really our lives, in big ways, and they act on all of us. It doesn't matter what race we are or where in the world we live; we have this powerful influence of technology on us all the time. And now we're seeing widespread adoption of the exact same platforms, cookie-cuttered across the world with no regard for the cultures they're being implemented in, and no direct input from the end users either. So I'm wondering: what can we do to make sure the technological development of the future is inclusive of people of all demographics?

Yeah, that's such a killer question, and a tough task. Any time a technology is designed in one place and used in another, or designed by one group of people and used by another, there are naturally going to be disconnections, because we are not the same as other people; it's kind of obvious. So what we've seen is a fragmentation, a geographic, cultural, and spatial distancing, between the producers and monetizers of technology and those who use it. You've all heard that aphorism: if you're not the customer, you're the product. And the product being sold, quite honestly, is our own free expression with one another.

That said, efficiency-optimized platforms have provided us a great amount of value, much of it very consumer-facing. But efficiency is a quantifying, engineering kind of statement. It doesn't mean the same thing for everybody at all times in all places, and a narrowed vision fixed on some closed notion of efficiency actually creates all sorts of other inefficiencies. I think something very similar happens with entropy, by the way; I riff on that a little in the book, and that was really fun.

So that distancing is something I think is being increasingly acknowledged. That's why I alluded to, for example, Google setting up an AI lab in Accra: these are attempts to close these profound distances, which are partly a function of the fact that we are a very diverse race, I'm sorry, not race, a very diverse species, on a planetary level. So that's point one: we really have to think about closing distance, but also, out of the techno-cultural experience, the experience of the human colliding with the technology, who benefits and who doesn't.
So there are certain types of ephemeral and transient benefits; there's no question that search algorithms, especially, provide them. But I really appreciated, Dan, that you said search systems need to be more transparent and accountable to their users. It's excellent to hear someone from Google say that, and I challenge the company to keep making good on it: what does that look like? So there's that facet of distancing, of closing the gap. There are also questions of governance, power, and value, which is really what I was trying to touch on in my talk: who gains from these experiences? If I get a short-term gain from a peer-to-peer interaction, perhaps an Airbnb apartment, but macroeconomically it's unhealthy for our larger society, which is already in a very unhealthy place, that's not the way we want to wire technology for our present or our future.

This all connects to the first point you made, Aaron, which I think is really important. I don't know how many of you took classes in feminist studies, but Donna Haraway's classic text, the "Cyborg Manifesto," a very dense and difficult thing to read, makes an amazing argument that human beings have long been cyborgs. There's a sense of anxiety we feel when we don't look at these devices constantly; they are tied to our bodies, they're prostheses. That means our dependency and our relationships with one another, and to some extent even with ourselves, are conditioned by this object and the information it makes available in our lives. It even influences how we think of ourselves.

So here's the key point I wanted to get to: there is an incredible amount of power in determining and shaping what is made visible or invisible to us, what is served to us or not. Again, the intentions of algorithm designers are not to be sociopathic, not to be unhealthy for our society. Their intentions are to have their stuff used as much as possible, of course; it's a business, give me a break. If a CEO says we're going to design a product not to be used as much, that CEO is not going to have a job and will be replaced by someone else. The question, however, is whether there are business models that are not optimized for that usage and that addiction. As human beings, we're addicted to healthy stuff to some extent, but we're super addicted to unhealthy stuff, that dopamine, right?

And so these technological systems have arrived at some sort of state, I'm not sure it's a final state, where content is being fed to us that tends to reinforce bias and at times inflame it. If I'm watching a Bernie Sanders video on YouTube and then the auto-recommendation system serves me 9/11 conspiracy theory content, that keeps my attention, but it may not be the best thing for society, or even for me. Or if I'm watching a Donald Trump video and I get served neo-Nazi content, we might see the same thing. That's unhealthy socially.

So I think there's a profound responsibility for us all to come together and make decisions that are healthy, because human societies are far more complex than mere engineering questions. That's why we have to guide the technologies moving forward in the image of what we all choose. We have to come together as a society, as a world, to make those decisions and shape technologies accordingly. And that's what I'm trying to argue for in the book, through a ton of examples.
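To make that engagement dynamic concrete, here is a minimal toy sketch of a recommender that optimizes only for predicted watch time. To be clear, this is not YouTube's or any real platform's system: the item names, "intensity" scores, and watch-time formula are all invented for illustration. It only shows that a pure keep-their-attention objective walks toward the most inflammatory content regardless of where a viewer starts.

```python
# Toy model of an engagement-optimized recommender. Illustrative only:
# items, "intensity" scores, and the watch-time formula are invented.

ITEMS = {
    "policy-explainer": 0.2,
    "campaign-speech": 0.4,
    "heated-debate": 0.6,
    "outrage-rant": 0.8,
    "conspiracy-video": 1.0,  # most "engaging" under this toy metric
}

def predicted_watch_time(item: str) -> float:
    """Stand-in engagement model: more intense content holds attention longer."""
    return 1.0 + 9.0 * ITEMS[item]

def recommend(history: list) -> str:
    """Pure engagement objective: pick the unwatched item with max watch time."""
    candidates = [item for item in ITEMS if item not in history]
    return max(candidates, key=predicted_watch_time)

history = ["campaign-speech"]  # the viewer starts with ordinary content
for _ in range(3):
    history.append(recommend(history))
print(history)
# -> ['campaign-speech', 'conspiracy-video', 'outrage-rant', 'heated-debate']
```

The business-model question being raised is exactly about this objective function: change what recommend() maximizes, say toward reported satisfaction rather than raw watch time, and the trajectory changes.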
Can I give a comment? Of course.

So I'm part of a group at Google called, broadly speaking, the Human-Computer Interaction or User Experience Research team. The premise of your question was: how do we fit these things to different groups and different populations? Well, we provide Google search services in basically every country; there are a few countries we can't get into, and there's an interesting debate about that. As a consequence, we have to work in all these different cultures and different languages. When Google started in 1998, it was English-only, stanford.edu, that's it. We've grown since then. So now an interesting question for us as researchers is: how do we provide Google services in, say, Arabic? How do we do them in Urdu? In different languages, different cultures? And part of the question is how we design so that it works well for that population.

For instance, we offer Google search in South Korea. Naver is the big player there; it completely dominates that market, and we're at something like less than 10 percent. The big piece of feedback we get from Korea is: why does your page look so weird? And I, as a designer with sort of pseudo-Scandinavian design aspirations, think it's elegant, it's sleek, it's all white, it's wonderful. They say: white is the color of death. And you don't have anything moving on your page, no animated GIFs. And what's more, do you guys know HTML? That's literally a question we get. So we've allowed our Korean designers to have a good time and just do something, because what we do in Mountain View does not work in Korea. And we've encountered versions of that story endlessly.

An interesting aspect of this is that not all cultures have the same relationship to the web that we have in Silicon Valley. For example, the Arabic-language corpus that Google crawls is tiny. It's not that there isn't a lot of Arabic content in the world; it's that we can't crawl it, because for whatever reason it hasn't been put out there. Therefore, when you search on Google for Arabic content, there just isn't much. Similarly for Cherokee, or Tlingit, or name your favorite small-population language. So we do our best: we have a lot of programs, especially for these underserved languages and underserved cultures, to bring in more of the content they want. You want this stuff? We'll digitize it for you, we'll make it available. So it's an interesting question: how do we design to support that?

One last example. In India, lots of folks get a phone that's defaulted to English. Now, if you're a Hindi speaker, that doesn't work so well, and typing in Hindi turns out to be a major pain, really hard. So most people would prefer to speak their query.
And they want to speak it, say, in English, because a lot of the things they want to look at, like school schedules, are in English, but they want the results in Hindi. So we've got this interesting UI design there; it's called Hinglish, and it's a common thing you see in India, where people do a query in one language and want the answers in another, or vice versa. So we've had to figure out a lot of accommodations to the local culture. We do that a lot, and that's a lot of what human-computer interaction is all about.

It's really interesting, and it actually feeds into what I was just about to ask you, because I'd imagine there are a lot of cultural differences in how an algorithm returns results as well. What I expect Google search results to give me is going to be different from what someone in India expects, especially once you're working across languages. And I was curious about this use of AI; you mentioned Google being an AI company. We're actually seeing this in the library market now as well: there are lots of companies offering tools that let you track literally every single thing someone does in a library and then offer personalized search results. But we know this approach can lead to filter bubbles and the narrowing of worldviews, which is, in my opinion at least, the opposite of what a library is supposed to do, and of what the internet was supposed to do; our initial concept was, oh my gosh, it's going to expand our worldviews, and we're not really seeing that. So how do we avoid the filter-bubble trap that AI can leave us in? Either or both of you.

Okay. When my first book came out, it's called Whose Global Village?, I put a piece out in Quartz about this. I had been to Mali in West Africa before, and I was invited by UNESCO to go to Cameroon for a really cool project. The first thing I did was search for Cameroon, and I didn't find a single webpage from Cameroon, which it turns out has a very active English- and French-language blogosphere, until I got to page three. Dan can give you the data better than I can, but nobody goes to page three; it's very rare that we go past the first three or four search results. So why did that occur? It's not, again, because Google is anti-Cameroon. It's because of its understanding of me, presented as an invisibly ordered list without my knowing why I saw what I saw. What about me influenced those results? What parameters shaped them, not just about me but about the corpus? All of that was invisible in that ordering. As a result, on page one I got the CIA webpage and the State Department webpage, and at the very bottom of page one, the Wikipedia page. So this is just an example of the disconnections we're all speaking about.
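As a purely hypothetical illustration of how an ordering like that can arise, the sketch below scores pages with two invented weights, one for global link authority and one for local provenance. This is not Google's ranking algorithm, and the pages and numbers are made up; the point is only that a weighting choice the searcher never sees decides whether local voices make page one.

```python
# Toy ranking sketch with invented weights and pages. When the score
# leans heavily on global link authority, well-linked institutional
# pages about a place outrank pages from the place itself, and nothing
# in the result list tells the searcher why.

PAGES = [
    # (title, global_link_authority 0..1, is_local_to_cameroon)
    ("CIA World Factbook: Cameroon",   0.95, False),
    ("US State Department: Cameroon",  0.90, False),
    ("Wikipedia: Cameroon",            0.85, False),
    ("Cameroonian news blog",          0.15, True),
    ("Yaounde community forum",        0.10, True),
]

AUTHORITY_WEIGHT = 0.9   # how much global popularity dominates the score
LOCAL_WEIGHT = 0.1       # how much being a local voice counts

def score(page):
    _title, authority, is_local = page
    return AUTHORITY_WEIGHT * authority + LOCAL_WEIGHT * (1.0 if is_local else 0.0)

for title, *_ in sorted(PAGES, key=score, reverse=True):
    print(title)
# With authority dominating, the local pages land at the bottom; raising
# LOCAL_WEIGHT reorders the list, but that choice is invisible to the
# person searching.
```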
And increasingly, the way we resolve these things, per Dan's excellent examples, is to give up some power: to say to these stakeholder communities, here's what we have, work with our stuff and give us feedback, and open a responsive, alternative mode by which information might be organized for your own polity or your own locality. That might mean not making all information free; it might mean not making all information visible. Because in a lot of cultures there are protocols around the transmission of knowledge (you're an anthropologist as well, Dan): the circulation of knowledge, the custodianship of knowledge. Those are all deeply cultural principles. And we're at this moment where the network effects, the scale effects, especially of Google's technologies (and I think Facebook is a much worse-designed technology in many ways), have scaled out; and these are not simply technologies, they're corporations, with suites of products that interrelate with one another. That scale is good for certain parameters, but it isn't necessarily tethered to the human experience of an individual in a place, as part of a community.

So what we can really do, and I want to tie this back to libraries: you all are the intermediary. I said it earlier a little abstractly, but to be more specific, you are the human guide through all of this. The values I believe you support as librarians, and I know my students do and did, are dedicated to public welfare, not "public" in a vacuous, everybody-blah-blah sense, but the different interests and realities and experiences and voices and concerns of the multiplicity of publics you work with. That's your goal, and it includes anti-surveillance training if it needs to, privacy training, encryption training, whatever. It's up to you, and it's a massive charge, to grasp this complexity of different information forms, these different forms of mediation, and to really work with people and communities. That's why librarians are super important, and why I really believe librarians should articulate their mission as the people who, not necessarily check power, but balance things out against a bunch of engineers who may not necessarily know any better. Because, honestly, when I was trained as an engineer, I didn't think about any of this. I didn't learn any of this stuff. I didn't even take a real ethics class; we would have to fill out some forms or whatever, but that was it. That's not because engineers are unethical; it's because engineering is often taught as morally, culturally, and ethically agnostic. So it's your role, and that's where libraries can be the hottest stuff moving forward, rather than, oh man, Google's putting us in trouble, or all these trite phrases I hear all the time.
You are the people's guides through this digital world; that's what I really want to argue for. And you can work with folks, really nice folks, especially at Google, like Dan, to make these things happen. We've got the right person from Google here, not the people running Project Maven. Which is gone now, but that's a separate issue; the people rose up and made it go away.

So, I was born in Compton, in LA, and I grew up in South Central LA, and I was a passionate library user there. I don't know if anybody's been to LA County, but there are a zillion branches. And when I was young, I didn't realize I lived in a filter bubble. We called it the library. There were well-meaning acquisitions people there who did the best they could, but by definition the collection is small and it does not represent everything; you try to find something about Cameroon, forget it, impossible, zero hits. So what I think we've started to realize is that filter bubbles have always been with us; they just had different names and different technologies. But to your point, Aaron, we had this dream that once we all had access to all the world's information, we would all become Vermont Democrats and live peacefully forever, growing our own wheat and breaking bread together. That didn't work out, right? Vermont independence. It didn't quite work out that way.

So I'm going to basically second what you said, Ramesh: I agree that librarians are our hope for this, because when I teach, I teach people that you have to get out of whatever your filter bubble is. Be aware of the spaces you're within; be aware of what the boundary conditions are. For example, one thing I have to clarify is that Google results are not personalized; they are localized. If I search for pizza here in San Francisco, I do not want to see pizza places in Mountain View. If I search for the Falkland Islands and I'm in Argentina, I want to see "Islas Malvinas." So we have to localize a lot of that, and localization is not the same as personalization. We actually use almost zero information about you other than your geocode. I know people don't believe that; I can give you an hour on why you're wrong, but you can follow up with me at the break. People widely believe that we personalize everything. No, not true. Anyway, the point is we have to teach this. It has to be a very intentional act. Don't just believe the people you meet on the street in San Francisco or Silicon Valley or Hyderabad. You have to be aware that we are a global world, a global information space, and you have to consciously get out of your bubble.
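That distinction is easy to state in code. Below is a hypothetical sketch, not Google's implementation (the functions, data, and scoring are all invented), of the difference: localized search keys only on the query plus a geocode, so everyone in the same place sees the same list, whereas a personalized search would additionally reorder results by a per-user profile.

```python
# Conceptual contrast between localization and personalization.
# Hypothetical sketch; names and data are invented for illustration.

RESULTS = {
    "pizza": {
        "San Francisco": ["Golden Boy Pizza", "Tony's Pizza Napoletana"],
        "Mountain View": ["Amici's", "Slice of NY"],
    },
    "falkland islands": {
        "Argentina": ["Islas Malvinas (es)"],
        "UK": ["Falkland Islands (en)"],
    },
}

def localized_search(query: str, geocode: str) -> list:
    """Localized: depends only on the query and WHERE it was issued.
    Every user in the same place sees the same results."""
    return RESULTS.get(query.lower(), {}).get(geocode, [])

def personalized_search(query: str, geocode: str, profile: dict) -> list:
    """Personalized (what people often assume): also reorders results
    using WHO is asking, e.g. a stored per-user interest profile."""
    results = localized_search(query, geocode)
    return sorted(results, key=lambda r: -profile.get(r, 0))

print(localized_search("pizza", "San Francisco"))
# Same geocode, same results, whoever the user is.
print(personalized_search("pizza", "San Francisco",
                          profile={"Tony's Pizza Napoletana": 5}))
# A personalized system would reorder per user; per Dan, search doesn't.
```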
May I make one other quick point? Yeah, please.

It's not just a question of individual privacy here. If one group has power over the technological instruments that intervene in everything we do, even how we think, that affects systemically marginalized communities. I didn't really get to that point yesterday; I mean earlier today, I alluded to it. I'm in a twilight zone; I've been traveling a lot. It's things like predictive policing systems. Again, this is not on Google. But those machine learning systems take black and brown populations and treat them as objects of greater surveillance and persecution. It's courtroom systems, algorithmic systems entering our courts, that judge black defendants with misdemeanors as more at risk of future crimes than Caucasian defendants who may even have histories of felony convictions. These are all actual journalistic stories that are out there. So the issue is really the scale of persecution that can occur through these harms, which may have started as innocent biases. That's the issue.

Yeah, and there's a really great book out there on this, Weapons of Math Destruction, by Cathy O'Neil, if you want to learn more about algorithms.

Yeah, she's awesome. I'm actually doing my first book event with her, in New York City on October 29.

Although I want to highlight something you said: often these machine learning systems are just math devices, and, as in the Tay example, they're completely conditioned on the data you feed them. You give them garbage, and guess what: garbage in, garbage out. That's true of machine learning algorithms generally. So one of the great points you make, and that I think we're trying to make collectively here, is that diversity is strength. One of the problems with, for example, predictive policing and a bunch of other crummy machine learning systems is that they were trained on terrible datasets with no diversity. By having diverse people in the engineering population and in the data-creation population, all of a sudden those mistakes don't happen.

Or what if Black Lives Matter shaped predictive policing algorithms? That's what I'm calling for: leadership from those communities. Folks like Patrisse Cullors, who I work with a little and who is on the cover of my book, are trying to intervene with the LAPD's predictive policing systems, which fortunately have been temporarily halted.
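The feedback loop described here can be shown in a few lines. This is a toy model with invented numbers, not any vendor's product: two neighborhoods have identical true incident rates, but the historical records reflect where patrols were sent, and a model trained on those records keeps widening the skew it inherited.

```python
# Toy "garbage in, garbage out" feedback loop, with invented numbers.

TRUE_RATE = {"north": 0.5, "south": 0.5}   # reality: no difference at all
records = {"north": 40.0, "south": 10.0}   # skewed by past patrol patterns
PATROLS_PER_YEAR = 40                      # sent to the predicted hotspot

def predict_hotspot(data: dict) -> str:
    """'Model' = wherever the records show the most incidents."""
    return max(data, key=data.get)

for year in range(3):
    hotspot = predict_hotspot(records)
    # Observation concentrates where the model points, so new records
    # accumulate there, even though the true rates never differed.
    records[hotspot] += PATROLS_PER_YEAR * TRUE_RATE[hotspot]
    print(f"year {year}: patrol {hotspot}, records = {records}")
# Each pass widens the gap in the model's own training data.
```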
Thank you very much. I think we're going to move on to some questions from the audience; I'm sure we have a ton out there. There are microphones out there, so raise your hand if you have a question and a microphone will magically come to you.

Testing. Hi. So, kind of combining the two talks from this morning: given that our digital experience is shaped by physical reality and politics, how do we combat the problem that many of the decision makers and stakeholders, like voters, are digitally illiterate? Issues like net neutrality come up and nobody knows what that is. When you try to talk about digital privacy, people say, I don't know, should I care? So what are the steps moving forward for us, if we are on this forefront, to try to make a difference in policy as it affects our digital experience?

Well, guess what? I'm going to say the same word all over again: education. It's got to be, because you're absolutely right. I've sat through way too many, shall we say, congressional meetings where it's clear that the people asking the questions have no clue what they're doing. That's true. My CEO had the misfortune of having to say, in effect, Senator, that's not our phone; we have no idea what's going on with that. And that reveals a kind of fundamental misunderstanding. So I would love to see the people behind the spokespeople ask the questions: the congressional staffs are usually pretty clued in, and they know what's going on. But I understand, it's TV, it's politics. Still, it's got to be education. And I understand there are people who believe education will never work; the reason you're all here, I will point out, is because of education, so it's working somewhere. I think we have to do real outreach and help the decision makers understand this.

Yeah, we're at a moment now where there's a lot of popular consensus, which is interesting for different reasons: Republicans and Democrats, people across the economic spectrum, the geographic spectrum, even the age and racial spectrums in this country, all support some sort of shift in our relationship with tech companies and tech issues. Look up the latest poll; it came out about ten days or two weeks ago, and Vox reported it, among others. And I think it's for very different and strange reasons. You all probably remember when President Trump tweeted that Google stole millions and millions of votes from him in 2016 because its searches were biased toward Hillary Clinton. I wrote a piece in The Washington Post that came out the next day rebutting that claim, not because I'm an unabashed fan of Google, but because that's not the way we should analyze what Google means in our lives; it's absurd. But it does mean we're at a point where there's interest, and at least among the public a real point of bipartisan agreement, to shape the digital world in a pro-people, pro-democratic future, something I think Dan and I both agree on here. So let's do it. And part of the way is not to see net neutrality as a standalone issue, but to understand that these specific issues are not just about individual experiences of privacy; they're tied to our overall health as a society, economically and otherwise. I think the explanations need to be made in economic terms, to show the externalities of these issues. An externality is an effect that isn't rooted in the specific transaction itself: if you care about your property value, the school district matters to your property value; public health issues affect all of us. So we have to express why issues like net neutrality, which Google has totally supported, are important for everybody, rather than framing them only as individual liberty or rights issues, which might work for some small subset of the population. We have to talk about our overall health: our jobs, our income, our income mobility, those themes.

Thank you. Do we have anyone else? I think we have someone here, or wait, we have one back here, and then we'll...

Hi, I'm the Spanish translator for the San Francisco Public Library, and I want to talk about Google Translate. I think a lot of these issues are about balancing access and localization, and really looking at unconscious bias when evaluating the cultural values of marginalized communities.
I have a co-worker who does Chinese translation, and we have lots of talks with co-workers about unconscious bias; we've opened a lot of conversations about how it's expressed in the way programming is done in libraries, and how that can change. And I think it's a big responsibility of Google: it's called Google Translate, but it doesn't translate effectively in many arenas. I worked ten years in law enforcement, in the courts, and I remember how much economic argument there was for replacing interpreters with Google Translate and things like that. It doesn't work, because in the end, the values of how communication is done in marginalized communities' languages aren't counted in a lot of the ways Google does translation. Those are the things, well below the tip of the iceberg, that really need to be looked at. We in the library do a lot of that work, but there's so much more to do, and it starts from the top down. I often hear, why do we have to do programming in other languages? Well, because we have those populations in our community. We serve everybody, and it shouldn't be an afterthought; it should be part of every single design of services and products. Any time access is part of the conversation, we want access for everybody. Well, what are the cultural values of the marginalized communities we're serving? What is access for them? What is valuable for them, even if we think that value is not efficient or economical? Thank you.

So, a quick comment about that. As Ramesh said, I'm sort of human-centered, and I want people to use the services that work best for them in the situation they're in. In particular, there are lots of conversations and lots of texts that I wouldn't want Google translating for me. For these very socially dense, complex interactions, or for marginalized groups in particular, don't use that tool. It's up to you, up to all of us, in these settings to choose the appropriate technology or the appropriate method. That may mean we find a Zapotec speaker; do not use Google Translate for Zapotec, bad idea. On the other hand, we are constantly trying to improve those services, not with an eye toward replacing those careful, delicate person-to-person conversations, but as a way to augment people who have no insight into, say, Japanese at all. I'm a zero-percent Japanese speaker, so this opens up an entire realm of information from Japan that I otherwise have zero access to. I'll also point out that in our translations of Turkish, for example, we're doing a lot of very clever finessing of gender: there's a lot of discussion in the US about gender-appropriate pronouns, and we're doing something similar in Turkish. So I think our technology will continue to improve, but I do think it's incumbent on all of us to use the right technology, the right tool, the right method, at the right time, with the right set of people. It doesn't give you a universal hammer; it simply gives you a great tool to use at certain times. A small screwdriver. A very small screwdriver.

Hi there. This is a question that's kind of directed at the moderator, and maybe you can translate it for the speakers. I work here at the library, in the Magazines and Newspapers Center.
And we've tried to do workshops around improving your news IQ, looking at how news passes through a billion tech companies on its way to you. They're not super popular, even though we know there's a social need for these things. The PLP did a whole project on developing a workshop around that, which we tailored to our people here. We do have this successful thing here at SFPL called Tech Week, where we bring in a bunch of people and a lot of different service providers to talk about technology needs in San Francisco. Just to get public housing now, you have to know how to use the internet, which a lot of people don't; the same goes for applying to a job, and every day we see people who don't know how to do that. We're not expecting people to become tech wizards, but they do need to know how to use some very simple tools to be more competent in these things. So I guess, for lack of a better word: what's the marketing? How do we make this stuff interesting enough that people get the tools they need?

I think that's interesting, because in a lot of ways we feel this burden has been placed on us, that we have to be the people who teach how to use all of this. This was actually one of my questions for you earlier, and it feeds nicely into this: what responsibilities do tech companies have to make sure people understand the information they're seeing online and how to evaluate it? I know there are lots of different tools being tried out there that are failing all over the place. It can't just fall to the libraries. For people unfamiliar with technology, tech is scary, and if they're also unfamiliar with the library, the library is scary too. Especially if you're a person of color or a minority, you're not going to walk in and say, hey, white lady, I don't know how to do this and I don't speak the same language as you, can you please show me? That burden is on us, but I think there is some responsibility from tech to help solve a problem that tech created.

So I can start with that. Just as big tech companies, which profit off the monetization of journalistic, news, and at times fake and inflammatory content, have a responsibility to partner with community, civil society, and journalistic stakeholder organizations, so too, in my mind, do libraries have a responsibility to partner with community-based organizations in the constituencies they serve. I taught a class on exactly that topic, community-based outreach, for about 15 years at UCLA, and that responds partly to what you were just saying. All of us, in our different roles, are intermediaries; we're all in the middle between something and something else. So we have to think about the values we're guided by when we choose to perform that service of being in the middle.
So, as to where the tech companies can go, I think it's absolutely critical, and I know there are some initiatives underway to really partner with journalistic organizations; the Knight Foundation, for example, has partnerships with a number of tech companies. That's really important. But at the same time, the issue you alluded to, Greg, is that this stuff feels boring to people. I think the kind of content that keeps people's interest and attention can still be journalistic, while appealing to them and who they are. I'm not saying we're the most classical journalistic outlet, but I'm a guest host quite often on The Young Turks, and I think we keep it real in our own ways. I'm not as loud as some of my co-hosts, but we're certainly entertaining; I can't believe the random junk I say on there, and hundreds of thousands of people watch it. What I'm getting at is that there are other ways of presenting journalistic content that can still be fact-based, evidence-based, and ask the right questions. I think it's our responsibility to think about what will appeal to our constituencies and to present content, or bring people together, in that way.

So I think we're just about at one o'clock. OK, I'll take 45 seconds, and point out that a big chunk of what I do is outreach and education. I mentioned the Power Searching with Google class we did recently; it's had 4.4 million students go through it, which is pretty good by any back-of-the-envelope calculation. But we do a lot of other outreach in addition to that. We have, for example, support for historically black colleges and universities. We do outreach on how to understand the news in different places throughout the United States. We also have a lot of support for journalists internationally; it's a big, foundation-like operation supporting journalists in Europe and elsewhere in the world. And we do a lot of pro bono teaching of journalists, the people creating the news in the first place, on how to use all the technology tools, not just Google's. So we do a lot of ground-level teaching of students, teaching of teachers, as with the faculty awards, and teaching of journalists, and then we've got direct-action things like my MOOC. So I think we're actually doing a good job; we should, in my opinion, do more. But more to the point of your question, I think all the technology companies have a responsibility to do this. We're doing all right, but I want more help from my partners in technology.

Well, thank you so much. Thank you, everybody, for your questions, and thank you to our panel again. Thank you, Aaron. Thank you.