Good morning, and welcome to this week's edition of NCompass Live. I am your host, Christa Porter, here at the Nebraska Library Commission. NCompass Live is the Commission's weekly webinar series where we cover a variety of topics that may be of interest to libraries. We broadcast the show live every Wednesday morning at 10 a.m. Central Time, but if you are unable to join us on Wednesdays, that's fine. We do record the show, as we are doing right now, and it will be available for you to watch later at your convenience. And I'll show you at the end of today's show where you can access all of our show recordings. Both the live show and the recordings are free and open to anyone to watch, so please share with your friends, family, neighbors, colleagues, anyone you think might be interested in any of the topics we have on NCompass Live. For any of you who are not from Nebraska, the Nebraska Library Commission is the state agency for libraries, so we're similar to your state library. We provide services and programming and resources and grants to all types of libraries in the state, so you will find all types of shows on NCompass Live: shows for all types of libraries. Public, academic, K-12, corrections, museums, archives, the list goes on and on. Really, our only criterion is that it has something to do with libraries: cool things libraries are doing, or something you think they could or should be doing. Nebraska Library Commission staff sometimes do presentations about programs and resources and things we're offering through the Commission or here in Nebraska, but we also bring on guest speakers, and that's what we have this morning. With us today is Marcella Fredrickson. Good morning, Marcella. Good morning. She is from the University of North Carolina Wilmington, and this is a session that I saw her do at the Computers in Libraries Conference at the beginning of this year (we're already past Internet Librarian now; I get the conferences a little mixed up). I reached out to her to invite her to come on and share her presentation and the work she's done on this with all of us here today. So I will just hand it over to you, Marcella, to take it away and tell us all about your presentation on racial and gender bias in search. Great. Thank you so much. Thank you for joining me today for this presentation on racial and gender bias in search. As Christa stated, I gave this presentation at the Computers in Libraries Conference in March 2023, and I also gave it a few other times for ASERL and NC LIVE in 2022 and 2023. I would like to open this session by taking a moment to acknowledge my privilege as a white cisgender person. The topics we will be discussing today may be difficult, and I will try my best to respect and give space to the complexities around racial and gender bias. So in today's presentation, we'll go over some basic definitions, examine the idea of search neutrality, explore racial and gender biases in search and library discovery, and try to understand how human beings influence search algorithms to produce biased results. To start our definitions, let's first look at bias. Here's a screenshot showing the Oxford Languages definition of bias in Google search results. Bias is defined as prejudice in favor of or against one thing, person, or group, compared with another, usually in a way considered to be unfair. So bias can be understood as a preference for or a prejudice against something that is often unfair or unjust.
There are many different types of bias, such as conscious or explicit bias, unconscious or implicit bias, cognitive bias, prejudice or stereotyping, prejudging, and so on. Today we're going to see examples of how bias can surface in technology. Some of the biases we'll examine today are explicitly racist or sexist; some work on inference or suggestion and rely on known stereotypes. As Matthew Reidsma states in his book Masked by Trust, quote, when supposedly objective search results reflect the kinds of structural inequalities that we struggle with in our daily lives, like racism and sexism, there is a good chance that these systemic biases have crept into the algorithms behind search. We also want to understand what algorithms and mathematical models are. We often hear the terms algorithm or model when discussing search, machine learning, or artificial intelligence, but understanding what they really are can be hard to pin down. Merriam-Webster's broad definition of an algorithm is a, quote, step-by-step procedure for solving a problem or accomplishing some end. We can think of a recipe as an algorithm, or a knitting pattern as an algorithm, but algorithms are also much more complex than this, and their definitions can change depending on who is defining the term. The algorithms we encounter in our daily lives, and there are many, are really algorithms upon algorithms, all connected together to form complex structures powering things like Google search. And we have very little understanding of these algorithms. Reidsma describes commercial algorithms like the one Google uses as, quote, black boxes: the components and workings of the systems are hidden from view. Search algorithms, he says, are largely kept secret because they are the primary intellectual property asset of the parent company, and so sharing the details of how they work would devalue the primary revenue-generating asset the company has. The Encyclopedia Britannica defines a mathematical model as, quote, either a physical representation of mathematical concepts or a mathematical representation of reality. In the book Weapons of Math Destruction, Cathy O'Neil writes that a mathematical model is, quote, nothing more than an abstract representation of some process, and that, quote, the model takes what we know and uses it to predict responses in various situations. But she also pushes back on the idea that a mathematical model by its very nature is exact or impartial: models make mistakes because they are simplifications. No model, she says, can include all of the real world's complexity or the nuance of human communication. Some of the characteristics that turn a model into a WMD, or weapon of math destruction, include opacity or invisibility, pernicious feedback loops that make the model more unfair over time, and the large scale of some models, such as those used in HR processes, health administration, and banking. Mathematical models are formed by inputting data into algorithms: algorithms are procedures we follow, while models are built using algorithms. But algorithms are not inherently correct, as Sara Wachter-Boettcher writes in her book Technically Wrong. She writes, quote, algorithms are just a series of steps or rules applied to a set of data designed to reach an outcome. The questions we need to ask are: Who decided what the desired outcome was? Where did the data come from? How did they define good or fair results? And how might that definition leave people behind?
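To make those questions concrete, here is a minimal, hypothetical sketch (my own, not from any of the books discussed) of what "a series of steps or rules applied to a set of data designed to reach an outcome" can look like in code. Every signal, weight, and threshold in it is invented for illustration:

```python
# A toy "algorithm" in the Merriam-Webster sense: a fixed series of steps
# applied to data to reach an outcome. All names and numbers are invented.

def screen_applicant(applicant: dict) -> bool:
    score = 0.0
    # Who decided these signals matter, and by how much? A designer did.
    score += 2.0 * applicant.get("years_experience", 0)
    score += 5.0 if applicant.get("elite_school") else 0.0  # a proxy for "quality"
    score -= 3.0 * applicant.get("employment_gaps", 0)      # penalizes caregivers?
    # Who decided 10.0 is the cutoff for a "good" outcome? A designer did.
    return score >= 10.0

# Six years of experience, no elite degree, one employment gap: rejected.
print(screen_applicant({"years_experience": 6, "elite_school": False,
                        "employment_gaps": 1}))  # False
```

Nothing in that code is mathematically wrong; the bias question lives entirely in the choice of signals, weights, and cutoff, which is exactly what Wachter-Boettcher's questions are probing.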
Two terms that are becoming much more popular in the media these days are artificial intelligence, or AI, and machine learning. As we'll see later in the presentation, AI is a new frontier for search engines. This definition is from the Google Cloud product page, and I just want to read the last paragraph: on an operational level for business use, AI is a set of technologies that are primarily based on machine learning and deep learning, used for data analytics, predictions and forecasting, object categorization, natural language processing, recommendations, intelligent data retrieval, and more. Here we have another definition from Oxford Languages on Google, this time for machine learning. Machine learning is defined as the use and development of computer systems that are able to learn and adapt without following explicit instructions. O'Neil says that with machine learning, a, quote, fast-growing domain of artificial intelligence, the computer dives into the data, following only basic instructions. The algorithm finds patterns on its own and then, through time, connects them with outcomes. In a sense, it learns. So now that we have a shared vocabulary, let's look at the concept of search neutrality, or objectivity in search. What do we mean when we say search? From January 2022 to January 2023, Google dominated the market at 92.9%. The term "Google it" has become ubiquitous for searching the internet, so really, when we talk about search, we are most likely talking about Google. Reidsma states that Google handles around 3.5 billion searches a day from over 4 billion internet users, and of these searches, Google says 15% are ones they have never seen before. In its own words, Google says its mission is to, quote, organize the world's information and make it universally accessible and useful. They also share that they earn most of their money by showing ads in search results on Google.com. In the book Algorithms of Oppression, Safiya Umoja Noble makes a strong argument that Google is not an information company but an advertising company, beholden to its shareholders and to the bottom line. Noble says, quote, search happens in a highly commercial environment, and a variety of processes shape what can be found. These results are often normalized as believable and often presented as factual. And on its About pages, Google is very careful not to describe itself as either an information provider or an advertising company. Reidsma says that this, quote, allows it and other companies to avoid regulations that apply to existing business sectors. As Google tells it, it sells ads, but only to better serve results in search, which are determined not by a human but by complex algorithmic processes. These next few slides feature a lot of screenshots from Google's documentation called How Search Works, from both April 2022 and more recently. In both examples, Google explains its advertising as a way to enable freely accessible search for all people. In April 2022, Google explained that ads are displayed only when relevant to the task or the user's search query, and that their interest is in, quote, showing only useful ads. Today, this text has been updated and streamlined even more. In 2023, Google now clearly states in unequivocal language, quote, no one can buy better placement in the search results. Google also clearly highlights that ads are labeled and easily distinguishable from the rest of the page. They reiterate in multiple places that no one gets special treatment or advantages in search results.
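Before we dig further into Google's ranking claims, it may help to make O'Neil's description of machine learning from a moment ago concrete. Here is a minimal sketch (an invented example of mine, not any real system) of a program that is given no explicit rules, only historical outcomes, and "learns" whatever pattern, fair or not, those outcomes contain:

```python
from collections import Counter, defaultdict

# Hypothetical historical hiring data, invented for illustration. The learner
# gets no rules; it only counts which feature co-occurred with which outcome,
# then predicts the majority outcome it saw for that feature.
history = [
    ("neighborhood_a", "hired"), ("neighborhood_a", "hired"),
    ("neighborhood_a", "rejected"),
    ("neighborhood_b", "rejected"), ("neighborhood_b", "rejected"),
    ("neighborhood_b", "rejected"), ("neighborhood_b", "hired"),
]

outcomes = defaultdict(Counter)
for feature, outcome in history:     # "dives into the data"
    outcomes[feature][outcome] += 1  # "finds patterns on its own"

def predict(feature: str) -> str:
    return outcomes[feature].most_common(1)[0][0]

# No rule like "prefer neighborhood A" was ever written; the model inferred
# it from past decisions, so any bias in those decisions is now baked in.
print(predict("neighborhood_a"))  # hired
print(predict("neighborhood_b"))  # rejected
```

If the historical decisions were biased against one group, the learned model reproduces that bias faithfully, without anyone ever writing the rule down.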
Google says their results are the most relevant and reliable. Reidsma describes the authority of algorithms to make everyday decisions as, quote, our increasing faith that the algorithm will deliver a more objective decision than a human could; that somehow the algorithm eliminates the human biases that we often see coloring our decisions. And Google's descriptions of how their algorithms work reflect this idea. As Reidsma states, the company actively reminds us that its tools are making choices and selections through algorithms rather than through human curation or judgment. Google is very careful to point out that real people are involved in measuring search results, but not in ranking. Ranking is purely algorithmic and therefore, as we've seen suggested earlier, not subject to human bias. So how does Google rank and serve relevant results? The company says that Google's ranking systems sort through hundreds of billions of web pages and other content to present the most relevant, useful results in a fraction of a second. Search algorithms look at many factors and signals, including the words of your query, the relevance and usability of pages, the expertise of sources, and your location and settings. But Google is aware that problematic content can appear. In April 2022, they said, quote, no system is 100% perfect, and, quote, occasionally results may contain content that some find objectionable or offensive. Note that Google does not say that they serve results which may include biased or even factually incorrect information, just that some results may be objectionable or offensive. And they are quick to point out that, quote, such problematic content does not reflect Google's own opinion. They also clarify that policy-violating content is resolved by improving automated systems first, and then, in some cases, human beings may take manual action to block the content, but not to rearrange ranked results. Google is careful to distance itself from problematic content while also emphasizing the objectivity of automated systems. Google does have content policies for search results, which cover child sexual abuse material, highly personal information, spam, site owner requests, and valid legal requests. They also have additional policies around their search features, such as panels and carousels, which they acknowledge may be perceived to have, quote, higher credibility because of how they are presented. These policies include harassing content and hateful content, but only in search features and not in web results. Other search features, such as autocomplete and dictionary boxes, also have specific policies. Looking at autocomplete or autosuggest systems specifically, we see the same pattern. Google distances itself from the problem by saying, quote, autocomplete predictions aren't perfect, and, quote, predictions aren't assertions of facts or opinions. And then, once again, they emphasize automated systems as the solution, but they also fall back on enforcement teams that manually remove predictive content that violates Google's policies. In April 2022, autocomplete had policies against election-related predictions, health-related predictions, and sensitive and disparaging terms associated with named individuals. By February 2023, Google expanded their autocomplete policy section to include references to their overall content policies and policies for search features, as well as adding the autocomplete-specific policy against, quote, serious malevolent acts.
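As an illustration of what ranking on "factors and signals" can mean in practice, here is a toy scorer. It is not Google's actual system; the signals and weights are invented. The point is that even a "purely algorithmic" ranking runs on human-chosen weights, and changing a weight changes which result counts as most relevant:

```python
# A minimal, invented sketch of signal-weighted ranking. Every signal name
# and weight here is a human choice, not an objective fact.
WEIGHTS = {"query_match": 3.0, "usability": 1.0, "source_expertise": 2.0}

def score(page: dict) -> float:
    return sum(WEIGHTS[signal] * page[signal] for signal in WEIGHTS)

pages = [
    {"url": "https://example.org/a",
     "query_match": 0.9, "usability": 0.8, "source_expertise": 0.1},
    {"url": "https://example.org/b",
     "query_match": 0.5, "usability": 0.5, "source_expertise": 0.8},
]

for page in sorted(pages, key=score, reverse=True):
    print(page["url"], round(score(page), 2))
# Prints page a first (3.7 vs 3.6). Raise WEIGHTS["source_expertise"] to 4.0
# and page b wins instead: "most relevant" is whatever the weights say it is.
```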
Coming back to Google's documentation: many of these policy changes can happen at any time, without warning or notification. In this documentation, we can see how Google reinforces the idea of search neutrality by emphasizing the automated processes behind search. Algorithms, and not human beings, serve the search results that are most relevant to the user. Which leads us to the next portion of my presentation: what we believe about search, and how we use it. I'm just going to jump in quickly to let everyone know, as I didn't mention it at the beginning, that the slides Marcella has here are going to be available afterwards. She's already sent them to me, so please don't try to write all this stuff down. We will have all of these slides available to you afterwards to go along with the recording. If you have any questions or comments or thoughts about the presentation, type them into the questions section of the GoToWebinar interface, and I'll gather those up and we will answer them at the end, after the presentation is complete. Please do post your questions whenever you think of them, so you don't forget. Thank you. Okay, so this data is from the 2012 Pew Internet and American Life Survey. In 2002, 52% of all Americans used search engines. By 2012, 73% of all Americans used search engines. And 91% of search engine users said they always or most of the time find the information they are seeking when they use a search engine. 73% of search engine users say that most or all of the information they find as they use search engines is accurate and trustworthy. And 66% of search engine users say search engines are a fair and unbiased source of information. The simple design interface of a Google search is purposeful. Reidsma states that the interface works to enhance the user's trust in the search engine, and he points out that the design choices Google has made have been copied by other search engines, library discovery systems, and library research databases, such as shown here on this screenshot of UNCW's Summon page. In critiquing these design choices, Noble writes that when designers use clean aesthetics to cover over a complex reality, to take something human, nuanced, and rife with potential for bias and flatten it behind a seamless interface, they're not really making it easier for you; they're just hiding the flaws in their model and hoping you won't ask too many difficult questions. The Pew Internet and American Life Survey from 2012 also sheds some light on how we use search engines. We are confident in our search abilities: more than half of search engine users, or 56%, say they are very confident in their search abilities, while only 6% say they are not too confident or not at all confident. And the vast majority of search users report being able to find what they are looking for always, at 29%, or most of the time, at 62%. What do people think about bias in algorithms? In the 2019 Pew article Seven Things We Learned About Computer Algorithms, more than half, or 58%, of people surveyed said they believe computer programs will always reflect the biases of their designers. Which brings us to examples of search bias. In her book Technically Wrong, Sara Wachter-Boettcher examines multiple cases of bias in information technology systems. One such example is Google Photos' autotagging feature mischaracterizing selfies of two black people as gorillas. This terminology, of course, has a long racist history when used to describe black people. No human being was involved in this tagging, as she explains.
The technology is based on deep neural networks, which are massive systems of information that enable machines to see in much the same way the human brain does. Neural networks use a learning algorithm: instead of following programmed, predetermined steps, the algorithm is fed historical data, which it parses, identifying patterns, and then makes determinations about new information based on those patterns. In response to this mistake, Wachter-Boettcher reports that Google's Yonatan Zunger, chief social architect at the time, said on Twitter that, quote, we're working on longer-term fixes around image recognition itself, e.g. better recognition of dark-skinned faces. Which Wachter-Boettcher believes implies that the historical data fed to Google Photos possibly failed to include pictures of black or other minority people. This is a significant lapse in judgment, which Wachter-Boettcher says is based on, quote, assumptions that technologists so often make: that the data they have is neutral and that anything at the edges can be written off. By 2022, Google was actually promoting a feature of its Pixel 6 phones called Real Tone, which represents, quote, nuances of different skin tones for all people, beautifully and authentically. From this, we might assume that they solved their image recognition problem and even decided to promote the solution as a feature advertised to consumers. However, a recent article in the New York Times points out that in Google Photos, and in Apple, Amazon, and Microsoft photo products, searching for terms like gorilla and other primates is difficult. In Google Photos, a search for gorilla will produce no results. Eight years have passed since the tagging incident in Google Photos, and yet the company has still not corrected or improved its systems. Perhaps Google is more concerned with the optics of getting a search wrong, and possibly exposing bias in its systems again, than with providing a program that accurately tags and retrieves information. This next example doesn't involve Google, but it may be interesting to other academic librarians. O'Neil's book Weapons of Math Destruction takes a look at how mathematical models, unregulated and never challenged, reinforce discrimination in many different aspects of our lives. She takes a hard look at the college rankings from the U.S. News and World Report. In order to quantify something as nebulous as, quote, educational excellence, U.S. News built a model based on proxies in 15 areas or categories that seemed to correlate with student success, such as SAT scores and acceptance rates. Then, as the rankings reports grew in prominence, a feedback loop developed that became self-reinforcing. As O'Neil states, if a college fared badly in U.S. News, its reputation would suffer and conditions would deteriorate. The ranking, in short, was destiny. The larger the scale of the rankings reports, the more damage they do. The problem with building a model based on proxies is that it is easier and simpler to game the system. Schools began to invest time, effort, and money into the 15 areas ranked by U.S. News. But one of the areas never accounted for in the proxies was tuition and fees. O'Neil states that between 1985 and 2013, quote, the cost of higher education rose by more than 500%, nearly four times the rate of inflation.
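Here is a small toy simulation, with assumptions entirely my own rather than O'Neil's actual model, of the dynamic she describes: a ranking built on measurable proxies, with tuition deliberately absent from the formula, and schools responding by investing only in what is measured while passing the cost along:

```python
# A toy simulation of a proxy-ranking feedback loop. All numbers invented.
# The ranking formula sees only the proxies; tuition is deliberately absent,
# just as it was absent from the real rankings model.

schools = {
    "Alpha": {"sat": 1300, "accept_rate": 0.40, "tuition": 20_000},
    "Beta":  {"sat": 1250, "accept_rate": 0.50, "tuition": 18_000},
}

def rank_score(s: dict) -> float:
    # Reward high SAT scores and low acceptance rates, and nothing else.
    return s["sat"] / 1600 - s["accept_rate"]

for year in range(3):
    for name, s in schools.items():
        # "Gaming the system": every school invests in exactly what is
        # measured, and passes the cost along as tuition.
        s["sat"] += 20
        s["accept_rate"] = max(0.10, s["accept_rate"] - 0.05)
        s["tuition"] = int(s["tuition"] * 1.12)
    ranking = sorted(schools, key=lambda n: rank_score(schools[n]), reverse=True)
    print(f"year {year}: ranking={ranking}, "
          f"tuition={[schools[n]['tuition'] for n in ranking]}")
```

Because every school chases the same proxies, the relative ranking barely moves, but the unmeasured quantity, tuition, compounds every year. That is the arms race and the missing-variable problem in miniature.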
And just as the rankings reports have grown to include law schools, business schools, and even high schools, so too has an entire industry grown up of consultants, coaches, and courses that promise to get those who can afford it into elite schools. O'Neil says, quote, the result is an education system that favors the privileged. It tilts against needy students, leaving out the great majority of them and pushing them down a path toward poverty. It deepens the social divide. And as we've seen in recent years, questions about college rankings, and the lengths people will go to game the system, have abounded in the news. Ruha Benjamin coins a new phrase in her examination of bias in technology: the New Jim Code. She defines this as, quote, the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era. She goes on to say, quote, the animating force of the New Jim Code is that tech designers encode judgments into technical systems but claim that the racist results of their designs are entirely exterior to the encoding process. One example she gives is of Google Maps telling her driver to turn right on Malcolm 10 Boulevard: Google Maps had recognized the X in Malcolm X's name as the Roman numeral 10. Benjamin argues that this supposed glitch is really an example of the cultural norms of programmers being coded into technical systems, a kind of digital gentrification mirroring what is happening in urban areas across the U.S. She says, quote, this is more than a glitch. It is a form of exclusion and subordination built into the ways in which priorities are established and solutions defined in the tech industry. Glitches are not spurious, but rather a kind of signal of how the system operates.
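Here is a guess at the kind of over-eager text normalization that could produce Malcolm 10 Boulevard. This is hypothetical code, since Google Maps' real pipeline is not public, but it shows how such a rule might work, and how small the fix is once someone decides the case matters, which is exactly Benjamin's point about whose priorities get encoded:

```python
import re

# Hypothetical sketch: a rule that converts Roman numerals in street names,
# applied without checking context. Google Maps' actual code is not public.
ROMAN = {"X": 10, "IX": 9, "V": 5, "IV": 4, "I": 1}

def naive_normalize(street: str) -> str:
    # Treat any standalone numeral-like token as a number. This is the bug.
    return re.sub(r"\b(X|IX|V|IV|I)\b",
                  lambda m: str(ROMAN[m.group(1)]), street)

print(naive_normalize("Malcolm X Boulevard"))  # "Malcolm 10 Boulevard"
print(naive_normalize("Henry IV Street"))      # "Henry 4 Street"

# The fix is trivial once someone decides it matters, e.g. an exception list:
EXCEPTIONS = {"Malcolm X"}

def normalize(street: str) -> str:
    if any(name in street for name in EXCEPTIONS):
        return street
    return naive_normalize(street)

print(normalize("Malcolm X Boulevard"))        # unchanged
```

On Benjamin's reading, the problem is not that such a rule exists, but that nobody with power over the codebase treated Malcolm X Boulevard as a case worth handling.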
Another example of racial bias in search is from 2016, when searching the phrase three black teenagers produced dramatically different results from three white teenagers or three Asian teenagers. Searching for black teenagers resulted in images of criminal mugshots, while the white teenagers search showed, quote, smiling, happy-go-lucky youths, and the Asian teenagers search showed semi-pornographic images of girls and women. Searching for three black teenagers today brings up articles, images, and videos about the racist search results from 2016. As Benjamin points out, Google is, quote, able to optimize online content in ways that mitigate bias. The technical capacity was always there, but social awareness and incentives to ensure fair representation online were lacking. Noble's book Algorithms of Oppression contains many examples of biased search results and autocomplete predictions. One example she raises is the search results for the word Jew prior to 2005, which included a significant number of anti-Semitic pages. In 2005, Google published, quote, an explanation of our search results, explaining that the anti-Semitic results for the word Jew rely on computer algorithms, and even though they were problematic, they were computer-generated and thus not the company's fault. Google then recommended that users search using the terms Jews, Judaism, or Jewish people, putting the responsibility of finding factual information on the user. Noble asserts that this statement is, quote, claiming that the company can do nothing about the word's co-optation by white supremacist groups. She replicated the search in 2012 and noticed that Google then linked to the explanation from a beige box at the bottom of the page. However, a search today for the word Jew produces no problematic results, and no links to the explanation page, again reminding us that Google can and does influence search results, whether through automated systems or through human beings blocking specific kinds of content. This screenshot from 2022 shows the first half of the results for the word Jew in Google search results. No anti-Semitic resources appear at all on the first page of search results. In Germany and France, where it is illegal to sell Nazi memorabilia, Google filtered its search results to block such sites from appearing as far back as 2002. As Noble summarizes, quote, what these cases point to is that search results are deeply contextual and easily manipulated, rather than objective, consistent, and transparent. She clearly assigns blame, saying racist and sexist results in search are created by, quote, a corporate logic of either willful neglect or a profit imperative that makes money from racism and sexism. It's harder today to find examples of biased search results, and not to the magnitude that Noble found when researching Algorithms of Oppression. She searched for terms like black girls and Asian girls and was rewarded with results pages full of pornography and sexual and racist stereotypes. Since then, Google has obviously removed some of the most egregious racist and sexist search results, and you cannot replicate Noble's searches today. But stereotypes and biases still exist; these are screenshots from March 2022. Bias doesn't stop just because Google enforces certain policies on certain search results. As our society grows and changes, so too do the biases, whether explicit or implicit, that accompany the human experience. And as we learn, so too do the algorithms and models that influence so much of our lives. There are also troubling reports of racial and gender bias in artificial intelligence systems and their products. According to this article from NIST, bias can manifest in AI systems and in the data used to train them. But like some of the examples we've examined today, bias is also reflected in how AI systems are used in our society. Benjamin points out that machine learning systems, quote, allow officials to outsource decisions that are or should be the purview of democratic oversight. Public agencies employ systems built by private companies who face none of the checks and balances that a public official or agency would face. NPR reported in March of this year that the Bing AI chatbot launched by Microsoft called a reporter, quote, ugly, short, overweight, unathletic, and then went on to compare him to Hitler, Pol Pot, and Stalin. AI-powered search may be the next frontier in which we have to uncover and confront racial and gender bias. As Benjamin states, quote, with emerging technologies, we might assume that racial bias will be more scientifically rooted out. Yet rather than challenging or overcoming the cycles of inequity, technical fixes too often reinforce and even deepen the status quo. So now I would like to turn our attention to examples of bias in library discovery systems. Matthew Reidsma published his book, Masked by Trust, in 2019. He examines bias in library discovery by querying various discovery platforms and recording his results. In summary, he found that a year and a half after the election, the Topic Explorer showed outdated Wikipedia entries for Donald Trump and Barack Obama, declaring Obama the president and Trump a real estate and entertainment mogul.
In EBSCO's Discovery Service, a search for, quote, rape culture showed an encyclopedia article about rape myths in the Research Starter at the top of the page. In Summon, a search for, quote, Muslim terrorist in the United States resulted in a Wikipedia entry about the Islamic religion in the U.S. as a whole. Reidsma also found very different results when searching the terms 9-11, 9/11, and September 11, with the 9-11 search appearing to emphasize conspiracy theories about the event. Reidsma not only uncovers factual errors in library discovery systems, he also shows implicit bias in conflating topics like rape culture with rape myths, or Muslim terrorists with the Muslim religion as a whole. Reidsma points out that library users are, quote, searching for big, challenging, often contentious topics. There is no mathematically correct answer to a question about abortion rights or the death penalty, yet libraries and vendors have promoted library discovery tools as effective guides through difficult subjects. And that brings us to the end of my presentation. Reidsma says that the unexpected biased results that appear in seemingly objective search tools are the result of treating everything like a math problem: of assigning numerical values to unquantifiable things, of accepting measurable proxies for slippery concepts and ideas. I believe that it's important to start with knowledge. If we know algorithms and mathematical models are biased, and sometimes even untrustworthy, we can work towards a solution. We must recognize the limitations of algorithms and learn how to approach search results with skepticism. And we must share this knowledge with our patrons and users. It's not just a matter of improving diversity in Silicon Valley. In October 2021, CNET reported that multiple tech companies had signed on to a DEI initiative to improve diversity and inclusion in their companies. But as Noble points out, this puts the onus on marginalized people to learn coding or become programmers or somehow step into leadership roles. She says, quote, we have automated human decision-making and then disavowed our responsibility for it. Algorithmic bias is a multi-layered problem that needs many different solutions working together. Wachter-Boettcher writes, quote, if technology has the power to connect the world, as technologists so often proclaim, then it also has the power to make the world a more inclusive place, simply by building interfaces that reflect all its users. So here are some films and books to look for if you're interested in more information on this topic. I was not able to discuss all of these resources, most notably the film Coded Bias, but I do recommend them. And for questions, here is my contact information. And I will stop sharing my screen. Actually, do you want to put the screen back up there to show your contact info for a bit while we're at it? There we go. In case anybody has any questions about anything you had in any previous slides, or if we need to pop back to anything, we'll have that available. Thank you. Thank you so much, Marcella. As you said, I appreciated your, shall I call it a warning, your disclaimer at the beginning, that this is, I think it's pretty obvious from the title of your presentation, a very serious discussion, with serious things to talk about. Yeah.
This was not going to be a happy-go-lucky, just-thanks-for-thinking-about-it kind of talk, because there are things that are very important and serious going on in the world that we need to be presenting properly to whoever's using our libraries, public libraries, academic libraries, et cetera. So I really appreciate you bringing all this information together, which is why I wanted to have you come on here today, because there is a lot going on out there, and it's great to have someone bring it all together and say, okay, here are all the things you need to be thinking about and be aware of. I don't know if we have solutions for everything, but awareness is the first step, and an important one. Absolutely. So if anyone has any questions, nobody typed anything while you were presenting, that's fine, I assume everyone was listening very intently. If anyone has any questions, comments, any thoughts about what's going on with this, go ahead and type them into the question section. At the very beginning, when you were talking about the math book there, I was surprised that math would be part of this presentation. I didn't know where it was going, that math was going to be required here. But I get it. And that's, as you said right at the end, that's the problem. You can't mindlessly let an algorithm do everything. It all needs to be examined. I mean, this whole AI thing, oh, that just makes me nervous, just in general, with everything about it. Yes, it has its uses, but there are so many things that are not good or comfortable about it, and you just can't let it do its own thing. No, we have to challenge it. We have to watch it. We have to remark on it and record what we find and share that information with others, definitely. I've started some research into generative AI and bias, and I'm already seeing results that show clear, stereotypical, implicit, unconscious bias. There's the cliché people have always said about technology and computers, and now it applies to AI: garbage in, garbage out. That doesn't cover everything, but it only knows what you've taught it and told it. Now, ideally, AI is supposed to learn somehow, but it can only learn based on what we human beings have given it. Right, exactly. It's not going to come up with its own original thoughts or ideas. So, yeah, we need to be very, very much aware of that. And some of these things, like the whole Malcolm X, Malcolm 10 thing, I mean, I don't want to make light of it, but that just seems like laziness. Everyone knows Malcolm X. That should be something where some programmer should have immediately said, here's something we need to fix, to make sure it knows that when this phrase is there, this is what it is. It's not like it's some obscure, unknown thing. Right. They programmed it just smart enough to recognize X as a Roman numeral, but not smart enough to recognize the phrase Malcolm X, and that you need to handle it in a different way. Right, exactly. It seems like such a simple thing. It's just another thing that makes you go, okay. And some of the solutions you mentioned, I think, you got into at the end there, about the databases that we use.
Is there any recourse for us, someone's asking, as librarians who don't have a lot of choice in which databases we have? Is there anything we can do from our side to fix these issues within the system? Do we reach out to the database providers? Is there something in our settings? I know it's probably specific to each system, but what would you say to someone who wants to really jump on this and start telling EBSCO, you need to fix this? Exactly. Reidsma advocates very strongly for librarians to do the work: to do the research, to run searches, look for biases, take screenshots, contact your vendors, stay on top of it. And I think librarians are especially well versed, or well trained, in being able to do that kind of research, because we know about search and retrieval, we know how to analyze sources and see patterns, and we know how to make reports on them. So in some ways it's between a rock and a hard place, because you have the vendors you have, and changing them can be very difficult and very expensive, but you also have the ability to draw attention to these things when they happen. You can report them to listservs, you can report them back to the vendor, you can put them on blogs and Twitter and things like that. And that's one of librarians' basic tenets: sharing. Sharing our research, sharing through interlibrary loan, and sharing the things we find out. Hey, did you all know that this particular database, when you run this search, is bringing up these bad results? Let's try to do something about it, to hopefully get that fixed. But also, if you have any people in your library doing these kinds of searches, you need to teach them, train them in those critical thinking skills. Yes, exactly. Teaching not just how to use the database, but then don't just trust it blindly; use your critical thinking. Right, exactly. Sometimes that's a whole other class and training on its own. Yeah. In colleges and universities: how do I evaluate what I've come up with? Don't just take it. When I worked in the university library, one of the reference interactions I had with one of the students still stuns me to this day; it's stuck in my head. I helped her do a search on something in our online catalog. Okay, here are the search results; now you've got to figure out which one is the best for the particular research that you're doing. And she said, I'll just take the first one. No, no, no, no. That's not how this works. You've got to look. No, I'm just taking the first one. Thanks, bye. Like, oh, wow, you're not going to do very well. I did my part, but I was like, oh, no, no, no. You've got to dig deeper, because really the fourth book in your search results was the one about your topic. Right, exactly. Exactly. I think librarians can advocate for information literacy in their libraries: courses, seminars, webinars. I think that's another really great way to bring this topic up. Because you go beyond click here to run this kind of search, enter these kinds of filters, and, okay, now you have your search results, and then what? Then what, right? Absolutely. Well, if anyone has any other questions, we've still got a little bit of time.
If you want to ask anything or share anything about the topic, or anything you've done at your library where you've had a success or a failure with this, we can all learn from failures too. That's something to be aware of too. It's okay to struggle with those things, like I did a long, long time ago. But I think this is definitely something where, like I said, we try to teach people to pay attention to what they're doing. I know everyone is aware that this is an issue, but it's something that a lot of people are dealing with, racial and gender bias, in life in general. And it's something that as librarians, I think we are very good at being supportive and helping make sure everyone gets the right information out there and doesn't get misinformation and bad information, and looks deeper into what they're looking at. Yeah. That was interesting, the 9-11 one. I didn't realize, when you started talking about it, I thought, oh, that's weird. And then when you said 9-11 comes up with more conspiracy things, oh yeah, I get it. Now I see that when anyone's on the conspiracy side, that's how they refer to it more often. Yeah. Those kinds of things you learn. Yeah. Right. And then, to know that Summon or EBSCO hasn't done the work to normalize those terms, so that you get the same information for each search, is kind of shocking, and disappointing. Yeah. And that's, I mean, librarians, catalogers, that's their main goal in life: make sure that no matter what someone types in, they'll find the right resource, and it takes manipulating the metadata behind there to make that work correctly. And that's why catalogers are, I think, the saviors of our libraries, or at least the heart of the library, because otherwise you can't find what you're looking for. It takes someone going in and manipulating the data and saying, you have to make sure that if someone types in this, this, this, or this, they're all going to come back to the same results, no matter what happens. And you'd think database companies that have been working with libraries for however long they've existed would know this. Yes. Yeah. They should know this. Going back to that sad, sad laziness, which just seems very flippant, but I don't know. Sure. Yeah. All right. It doesn't look like we have any other questions or comments coming in. I can't see if people are typing; I have to wait for the message to pop up. But I think I'll start working towards wrapping things up for today. Thank you so much, Marcella, for being here with us this morning and talking about this very difficult, and I think very important, topic. And hopefully this will get the word out to more people about what we can do, and that this exists, and that it's something to just be aware of when people are using your library's resources, so you can do what you can to correct it. Yes. Yeah. Absolutely. Thank you for having me. I really appreciate your time this morning. This was a great presentation for us. Thank you so much. All right. I am going to pull presenter control over to my screen here now. There we go. All right. We do have some thank-yous coming in from people in the chat, though. Thank you for everything you shared. This is great. Yeah. All right. So here's the session page for today's show.
As I said, we are recording this as well, and I do have the slides, so they'll be available. And I'm going to pop over to our main NCompass Live page here. Excuse me. Speaking of search engines, but it works: if you use whatever your search engine of choice is and type in NCompass Live, we are the only thing called that on the internet so far. Nobody else is allowed to use that name. So you will find a link to our main page and a link to our archive page. These are the upcoming shows for the rest of the year and into 2024, but our archives are right here. I said I would show you this. If you click on there, the most recent ones are at the top of the list. This is last week's show; today's will be here, and it should be by the end of the day tomorrow at the latest. And you'll have a link, let's see, it'll be just like this one from last week: a link to the recording, which is on our YouTube channel, the Nebraska Library Commission's YouTube channel, where all of our archives go, and a link to the slide presentation. Everyone who attended today's show and registered for today's show will get an email from me directly to let you know that it's available. We also push it out onto our mailing lists here at the Library Commission and onto our social media. We do have a Facebook page for NCompass Live; it's linked off of the show page. If you like to use Facebook, give us a like over there. We do reminders there: reminders to log in to today's show, presenter posts, and then when recordings are available, we post those there as well. We also use the hashtag for our show name on Twitter and Instagram to post reminders about the show, so you can follow us there as well. While I'm here on the archive page, I will just show you: you can search our show archives to see if we've ever had a topic on anything you might be interested in. You can search the full show archives, or just limit it to the most recent 12 months if you want something very current. That's because this is our full show archives, and I'm not going to scroll all the way down, because as you can see as I'm scrolling this huge page, it goes back to when NCompass Live first premiered, which was in January 2009. So we're in our 15th year, oh my gosh, of the show. We have all of our show archives going back to the beginning, so do pay attention to the original broadcast date of any recording you watch. They all have the date there, so you know when it was first done. Some of the shows will be great and fine and stand the test of time; some things will become old and outdated. Resources may have changed drastically or disappeared. Presenters may work at a completely different library, possibly in a completely different place, than when the show was recorded. So just pay attention to that original broadcast date. But as libraries oftentimes do, we keep things for historical purposes so that they can always be available. And as long as we have some place to post these, which right now is our YouTube channel, we'll always have our show archives out there for anyone to watch, going back to the beginning. All right, so that does wrap it up for today's show. I don't see any other last-minute desperate questions or comments anyone typed in.
So I think I will finally wrap things up. Thank you, everybody, for being here. Thank you so much, Marcella. It was good to have you with us this morning, and maybe we'll have you on a future episode of NCompass Live. Oh, that would be great. And as I said, here are our upcoming shows. Next week, our topic is redesigning a library website. So if you do have a website that might need a little work, or if you're thinking about doing something, this is a session that may be good for you, with another University of South Carolina presenter. And we have all our other shows coming up as well. So please do sign up for that show and any of our future episodes. So thank you, everybody. Thank you, Marcella. And I hope to see you all on a future episode of NCompass Live. Bye.