Thank you for joining me this early in the morning. Libraries typically use the term "filtering" to refer to the internet blocking software on public computers, but that's not what I'm talking about today. The term "filter bubble" was coined by Eli Pariser in his book of the same title to refer to the way the companies that operate on the internet, both the companies that control the ads and the companies you see up front and know, like Google and Facebook, collect and use your personal data to tailor what you see on your screen. So I'm going to talk a little bit about the book itself. I'll talk about the difference between overt filtering and covert filtering, because they work a little differently. I'll talk about some of the negative aspects of this process, then some of the positive effects, and then, finally, libraries and covert filtering.

The first time I really became aware that something was going on behind the scenes in the information sphere was when I was on vacation in Branson, Missouri, a couple of years ago. The moment I set foot in the Disney store, and that visit was really easy to track, because I was outside of my usual haunts, in a geographic location I don't normally go to, in a store I don't normally go to, the instant I set foot in that store, I received a text message on my cell phone with a coupon for Disney products. And it kind of freaked me out, because I had never logged in with that company. I had never created any kind of account with them, and I certainly had never given them my contact information. They knew who I was, they knew where I was, they knew I was in their store, they knew how to contact me, and they did not have my knowledge or consent to have this information on me. And I'm sure many of you have had similar experiences where some company, be it on a cell phone or a computer or email or whatever, somewhere, seemed to have an awful lot of your personal information that you never actually gave them permission to have.
Eli Pariser's main premise in his book is that these companies operate together behind the scenes to track what you see, to make it so you see more relevant ads, things that are more relevant to your interests, more of the things you like, and fewer of the things you don't like. And "the things you like" means not only things you have explicitly expressed an interest in, but also things you might well have liked but just never happened to explore. When the system shows you more of what you like and less of everything else, you start to develop a narrow, filtered view that isolates you, like you're in a bubble, and you aren't exposed to new ideas. Pariser fears this could lead to a decline in innovation, because you don't get that spark of creativity from seeing something new, and he fears it could also create an even wider sociopolitical division that could break down the country. He came out of MoveOn.org, so that gives you an idea of where his perspective is coming from. And that's when I really started thinking seriously about this issue: after reading his book, which sparked me to read other perspectives and weigh them against his.

One of the things he noticed, which I have also noticed myself using Facebook, was that some of his friends were just quietly disappearing from his news feed. In his case it was his conservative friends, because it turned out he was more frequently clicking on the links that his liberal friends posted. So Facebook was quietly showing him more posts by the liberal friends and quietly hiding the posts by the friends he wasn't clicking on as much. And when he realized what was going on, he became kind of alarmed, because even if he wasn't clicking on the links posted by his conservative friends, he wanted to know what they were talking about and what they were thinking about.
When he wasn't even seeing what they were talking about, he felt like he was getting a very, very narrow view of the world, a very constrained one. He was missing out on half of the conversation. Now, not everybody shares this perspective. Nick Bilton wrote another excellent book called I Live in the Future and Here's How It Works, and he had essentially the opposite experience. He found that through his social networks, in his case mostly Twitter, he was exposed to a significantly wider range of viewpoints than he'd ever been before. So your mileage may vary. It really may depend on your personal habits, on what you're likely to click on. As with everything, it will be different for every single person.

But one thing that everybody who writes about this issue agrees on, whether they're in favor of this kind of filtering or against it, is that the reason it exists is to make advertising more profitable. If an ad is for a product you're more likely to want, you're more likely to click on it, and that's how all of these free online services, Google, Facebook, Twitter, et cetera, support themselves: by selling the ads. That's just the way the economics work. It's not necessarily a bad thing; it's just the way the system works. So how much money are we talking about? Facebook, for example, makes per year, per user, about the price of one cup of coffee. When you multiply that by the total number of Facebook users, that's a huge amount of money. But think about it on an individual level: just you as a person, all of your family connections, your friends, your professional colleagues, your interests, your medical history, your hobbies, and all of that is worth to Facebook maybe some loose change. Then it gets a little scary, and that's where people start getting a little worried about these issues.
Companies like Google and Facebook and many others can collect personal data based on your IP address, even if you don't have an account with the company, and sometimes even if you're not logged in. Facebook can collect information any time you visit a website that has a Facebook Like button on it. You don't have to click the button; it just has to be present on the page. And Google will collect information any time you visit any Google service, including YouTube. You cannot opt out of this data collection; at most, you can opt out of seeing the targeted ads, not of the tracking itself. And Google, of course, doesn't just use this data for advertising. They also use it to make your search results more targeted and more tailored to your interests. Their goal is eventually to make autocomplete virtually telepathic.

I've had experience with this myself. In the spring semester, in one of the classes I was taking, I opened up the syllabus and saw: okay, this assignment, I need screenshots of the program. So I went to Google, and I had the words in my head that I was going to type: how to take screenshots on Mac OS X. I typed "how to take," and I had not even typed the S to begin the word "screenshots," when Google popped up with "how to take screenshots on Mac OS X." And I'm like, oh my god, they're reading the syllabus. Stop it.

So let's try it, if you've got your tablet or smartphone or laptop with you. Eli Pariser talks about a particular instance where two friends of his both searched "BP," and they got radically different search results. One got all sorts of information about the oil spill; the other got all sorts of information on investing in BP. They had virtually no overlap on their first pages, and that kind of worried him. So try searching "BP" and let's see if we get anything different. I get some stock quotes, and I get something about their express fuel shops.
Do you want to share what you got? "Share prices." Share prices? "The official site." Okay. "Wikipedia." Wikipedia? All right. "Area gas stations." Area gas stations? All right. So we're getting some variety, but not so much the oil spill anymore, because it's not the hot topic it used to be.

Melissa Casperini, a colleague of mine, and I were doing some searches just to see how varied our results would come out, and on one of them we had a huge divergence. She had been doing a lot of research on a particular scholar in that field, and so her search results, every single one of them, featured an article by that scholar. Mine, none of them did. If you were to look at her search results, you'd think this scholar was the biggest name in the field; if you looked at mine, you wouldn't think this person was in it at all.

So now let's try "bullying." Anybody want to say what they've got? "On the second page, I've got something about bullying." On the second page, okay. I just get a lot of news, the Louisiana bullying story, and a thing about teasing and bullying from HealthyChildren. "That one's at the bottom of my first page." That one didn't show up for me at all; it's from HealthyChildren. So we've got some overlap and some non-overlap, but it's not as bad as it used to be. And I think part of that is that Google has actually been responding to complaints like Pariser's by trying to introduce more diversity back into their search results. Just in the last month, I've actually seen some articles talking about how Google is very deliberately trying to make sure that the important stuff makes it into everybody's search results. So that's one of the points: this filtering landscape is changing very rapidly.
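As an aside for the technically inclined: the kind of divergence Melissa and I saw can be sketched in a few lines. This is a toy illustration only, not Google's actual algorithm; the function names, topic tags, and data below are all invented for the example.

```python
# Toy model of history-biased ranking: results whose topics overlap the
# user's past clicks are ranked higher. All names and data are invented.

def personalize(results, click_history):
    """Order results by overlap between their topics and the user's history."""
    def affinity(result):
        return sum(1 for topic in result["topics"] if topic in click_history)
    return sorted(results, key=affinity, reverse=True)

results = [
    {"title": "BP share price and investor relations",
     "topics": ["investing", "stocks"]},
    {"title": "BP oil spill: environmental coverage",
     "topics": ["news", "environment"]},
]

investor_history = ["stocks", "markets", "investing"]
activist_history = ["environment", "activism", "news"]

print(personalize(results, investor_history)[0]["title"])  # the investing result
print(personalize(results, activist_history)[0]["title"])  # the oil spill result
```

Run with two different click histories, the same query's results come back in a different order, which is the whole filter-bubble effect in miniature.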
So there's still filtering going on, but the effects are changing. Okay, what came up? The NIMH, yes, the National Institute of Mental Health, I think. And then Wikipedia again; Wikipedia always seems to float to the top for almost everybody.

But is it a problem? Even when we do get different search results, is it a big deal? There's a study out there that says that not the moderates, but the people on either extreme of the political spectrum, far left and far right, have so trained their neural pathways, their thought processes, that the other viewpoint, in both directions, looks pathologically insane to them; they genuinely can't understand how anybody can think that way. And that's on both sides; that's not specific to either party. But there's another study that says that the U.S. political divide today is just about as wide as it was forty years ago. So even if we do have a big political division, we can't blame the internet for it, and we can't blame the filters for it.

The more I read about this, the more I started to think there was a big difference between overt filtering and covert filtering. Eli Pariser did not make this distinction; he painted all of these companies with the same brush. But I really think there is a difference. With a company like Google or Facebook or Twitter, where you create an account and click "I agree" on some sort of terms of service agreement, their collection and use of your data is largely consensual. Covert filtering is what the other companies do, mostly the ones that control the advertising, the ones that operate behind the scenes, gathering your data from all sorts of sources without your knowledge or consent. If there are any archivists in the room: they do it without provenance. Once data is in their database, they can't tell you where they got it, and they can't verify its accuracy.
And there's virtually no government oversight of this, although that may change, because the government just started, this summer, looking into one of the data brokers. There are many of them; I'm not singling this one out for any reason except that there's more information about them than about any of the others. It's Acxiom. And the reason there's more information about Acxiom is that Eli Pariser talked a lot about them, Natasha Singer of the New York Times wrote a great big article about them, and Joseph Turow, in his book The Daily You, also talked about them. So if you've heard of one of these data brokers, this is probably the one you've heard of. They have about 1,500 specific points of data on the large majority of U.S. adults, plus a somewhat smaller share of adults worldwide. And of course their main goal is to sell it to various companies for the purposes of targeted advertising.

From one of Acxiom's own presentations to their investors: "Pete Becky, 37, married, two children, high value, lives in New York." Think about that "high value." How do you think they rate your value? They judge people, they rate people, based on their purchase history, their creditworthiness, their friends' creditworthiness, and any number of other factors, to gauge whether or not you're worth pitching the good ads to. And they are just one of many, many, many data brokers. There are literally hundreds of these companies operating behind the scenes. They merge, they go out of business on a fairly regular basis, but some of the big ones simply stick around. And the credit reporting agencies, Experian and Equifax, are in the mix too; they're part of the same ecosystem operating there.
And one thing that really surprised me as I was reading was Elsevier, because of course in libraries we think of them as a publisher of academic textbooks and journals, but according to Joseph Turow in The Daily You, they also trade in data about individual persons' creditworthiness. So, who knew?

A journalist named Alexis Madrigal decided to do an experiment where he tracked the trackers; he wanted to see who was watching him. During a day and a half of normal internet usage, without going out of his way to visit any website he wouldn't normally have visited, he counted 105 different companies collecting his data. And he went to the Network Advertising Initiative, which is something the data brokers have set up to theoretically allow people to opt out, and he filled out all the forms and jumped through all the hoops, and discovered that the only thing he had actually managed to opt out of was seeing the targeted ads. They were still going to collect his data. It was still going to be part of their big database; he just wouldn't see the ads. Joseph Turow raised the same point in The Daily You: if that's the only tangible benefit you might get out of this, seeing stuff that's more relevant to your interests, and you can't opt out of the creepy part of it, why would anyone go to the trouble of opting out in the first place?

And then Natasha Singer of the New York Times went a step further, and she actually paid Acxiom a fee to get a copy of her profile. If I can't opt out, at least I want to know what you know about me. And all they sent her was a list of her previous addresses. They claimed that they could not send her a complete copy of her profile because they couldn't search their database by individual name. Yet when they sell these data packages to their corporate customers, they include personal data all the way down to names and street addresses.
They just can't give it to the people themselves, apparently. So there is no way for you to review what these companies know about you.

As I say, this is all about targeted advertising; it's all about relevance. They divide people into targets and waste. The algorithms do this behind the scenes. There's no person sitting in judgment of you personally; it's the algorithms looking at your searches and your buying habits and sorting you. And if the algorithms consistently classify a person as waste for more products than not, then, as privacy scholars have pointed out, over a lifetime this can add up to a lot of lost opportunities that you never even see. If you don't see the ads for the good stuff, you don't know what opportunities exist, and you can never dig yourself out of the hole. If you get classified as waste too many times, it's a trap you can't get out of, because you never see the way out.

So why does this continue? We know why the companies keep at it: they're making money. But why do we, as individual users of the system, keep at it? Why is there no big public outcry? Because it is convenient. It's good to have things sorted. This is not an indictment of humanity; this is not saying that people are lazy. This is saying that there is too much information out there, and no one can possibly sort through it all on their own. You have to have some way of getting to the stuff you care about and weeding out the stuff you're not interested in, simply because there is too much information. That's just a fact of the amount of information that exists.

Sundar and Marathe did a really interesting pair of studies looking at the idea that there's a difference between customization and personalization, two kinds of filtering. Personalization is what is controlled by the system: the algorithms operating behind the scenes decide what you see.
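To make the targets-and-waste sorting concrete, here is a minimal sketch of how such a classifier might work. Everything here is invented for illustration: the attribute names, the weights, and the threshold. Real brokers' scoring models are proprietary and surely far more elaborate.

```python
# Hypothetical sketch of sorting consumers into "targets" and "waste"
# using a weighted score. Attributes, weights, and threshold are invented.

def score(profile, weights):
    """Weighted sum over whatever attributes the broker has on file."""
    return sum(weights.get(attr, 0.0) * value for attr, value in profile.items())

def classify(profile, weights, threshold=1.0):
    """'target' gets pitched the good offers; 'waste' never sees them."""
    return "target" if score(profile, weights) >= threshold else "waste"

weights = {"purchases_per_month": 0.3, "credit_tier": 0.5}

alice = {"purchases_per_month": 4, "credit_tier": 2}   # score 2.2
bob   = {"purchases_per_month": 1, "credit_tier": 1}   # score 0.8

print(classify(alice, weights))  # target
print(classify(bob, weights))    # waste
```

Note how the trap described above falls straight out of the mechanics: bob's "waste" label means he never sees the offers that might have changed his profile, so the label tends to stick.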
And customization is controlled by the user. This is where you go in and you set your options; you actually click the various buttons and set up your feeds and your parameters. They found in their first study that power users really preferred customization; they liked that high level of control. Non-power users, the ordinary day-to-day users, didn't want to think about it. They wanted it easy. They liked the personalization; they wanted the system to do it for them. But in the second study, Sundar and Marathe introduced a strong privacy policy as a variable, and the preferences flipped. Power users looked at that and said, okay, if my data is secure, then why not let the algorithm do some of the heavy lifting? And the non-power users looked at it and said, huh, if somebody is really thinking about these issues, maybe I'd better pay attention and take a little more control myself, and they started doing more customization. The thing was, though, in all these studies, the websites that consistently ranked at the bottom, that absolutely nobody liked, power users and non-power users alike, were the ones that had no personalization and no customization, no filters in effect. Nobody liked those. So don't expect any kind of uprising against this kind of filtering and personalization and customization; on average, people find it useful.

In some ways, libraries are filters. We've always been filters; we call it collection development. Every time you choose to buy one resource and not another, that's a filter. And David Weinberger, author of Too Big to Know, talked about how, over the course of about a century, there was a thirty-fold increase in print publications in the US alone. That's just print, and that's just the US. Of course, libraries didn't get that much bigger, so libraries did the only thing they could do, which is buy a continuously narrowing percentage of the total works published in the US, and, by extension, globally and online as well.
So over the course of the last century, the library filter has been getting narrower. There's nothing we could really do about that, because we don't have the resources to keep up with the information explosion any more than anybody else does. And that's just the collection side. On the patron side, we also collect and use patron data for all kinds of things: for circulation, for collection development requests, sometimes for public computers that require logins. We collect people's data for all sorts of things, and we use it for all sorts of things. But the difference between us and those data brokers is that we don't sell it and we don't share it. We, as a profession, hold privacy as a core value.

The ALA's Intellectual Freedom Committee actually makes a distinction between privacy and confidentiality. Privacy is the right to open inquiry: the right to research any topic without fear of negative repercussions, without fear of persecution. Confidentiality is more a matter of security: we have the patron data, we keep it safe, and we're not going to sell it; we keep it confidential. This distinction is largely unknown outside of libraries. When you read the privacy policies of all these companies operating on the web, they're talking about what the library profession calls confidentiality. The concept of privacy as libraries use it, the right to open inquiry into any topic, is never mentioned. Now it could be, theoretically, that they do hold that as a value and just don't feel the need to talk about it. Or maybe not. Eric Schmidt, who in 2009 was the CEO of Google (he's now the executive chairman, so he's still at the top management level), said that if you have something you don't want anyone to know about, maybe you shouldn't be doing it in the first place. Well, that's completely counter to the librarian's idea of the right to open inquiry.
Because just because somebody's researching a topic doesn't mean they're going to go, you know, blow something up. They could be researching it for any number of reasons. That's why we have to protect people's right to open inquiry: because people look for all kinds of information for all kinds of reasons. But evidently Google, at least, doesn't hold that as a particular value. The ALA's Freedom to Read statement includes the idea that it's in everybody's best interest to have access to the widest possible diversity of views and expressions, including those that are unorthodox, unpopular, or considered dangerous by the majority. I would say that in this age of the filtered internet, we might need to add another category: protecting against even the passive loss of exposure to new ideas that the filtering creates. Because you can't ask a question about a topic if you don't know the topic exists. And that takes us back to one of Pariser's central themes: inside your filter bubble, you don't know what you're missing, so you don't know what to ask.

So what do you do in your own library? First, think about yourself. Think about your own filter bubble, because everybody has one; we all do. Just be consciously aware of it as you're selecting materials for your library, because you don't want your library's collection to reflect only your own interests; you want it to reflect all of your patrons' interests, in all their diversity. And we all know this; it's just a reminder. Think also about your patrons, who are coming in with this increasingly filtered experience, where everything they use on their tablet or their smartphone or whatever is highly customized, highly tailored to their preferences. It's all the stuff they care about and none of the stuff they don't care about. And then they come into the library and they search our databases and our catalogs, where we have no filtering of that kind at all, and they may perceive an undesirable signal-to-noise ratio.
They may look at the results and say, well, what's with all this stuff? This one's relevant and that one's not at all. And they may not understand how to search in an unfiltered environment. So that's an education issue. Another thing to consider with patrons, especially if you're helping somebody remotely, by chat or phone or however: what you're seeing on your screen may be totally different from what they're seeing, even if you're both typing the same search terms into the same search box. And some patrons are going to be so thoroughly filtered that they really don't even know what to ask for; they don't know what they're missing. That's where the education issue comes in, where you have to help them understand how to search outside of the filter bubble, and why it's actually important to do that for some topics. But again, everybody's different. Every patron is different. It's not always necessary; sometimes you can just answer someone's question without ever having to push past the filters. Unfortunately, there's no easy answer, because every single person is different.

Libraries could perhaps use filters effectively, too. Keating and Hackner had an interesting proposal about ten years ago. I've never seen a library that actually does this, but it's an interesting idea, and today's technology would actually make it easier to do than it would have been when they first proposed it. It applies more to academic libraries, but the idea was to use a student's major to filter their search results, based on what other students in that major, or faculty in that department, have found to be relevant. It's an interesting idea.
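A rough sketch of how that major-based filtering idea might look in code today. The data model here is entirely invented for illustration: a usage table mapping an item and a department to how often that department's students and faculty have used the item.

```python
# Sketch of major-based filtering: boost items that the patron's
# department has historically found relevant. All data is invented.

def rerank_for_major(results, department, usage_counts):
    """Order results by how often the patron's department used each item."""
    return sorted(results,
                  key=lambda item: usage_counts.get((item, department), 0),
                  reverse=True)

usage_counts = {
    ("Intro to Stellar Astronomy", "physics"): 12,
    ("Golden Age Hollywood Stars", "film"): 9,
}

results = ["Golden Age Hollywood Stars", "Intro to Stellar Astronomy"]

print(rerank_for_major(results, "physics", usage_counts))
print(rerank_for_major(results, "film", usage_counts))
```

A real implementation would also need an obvious indicator that filtering is on, and an easy way for patrons to turn it off.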
If libraries could offer a high level of personalization on the level of Google's, you could imagine two patrons both searching for "stars": the physics student gets all sorts of resources on astronomy, the movie buff gets all sorts of resources on Hollywood stars, and both of them would be happy. But if libraries do move into a future where we offer more filtering, more personalization, there are a couple of caveats. First, we would have to make it very obvious that this kind of filtering is in effect, because historically we have not filtered results this way, and many of our patrons expect us to continue not to. So if we do start filtering search results, we need to let people know. And we need to make it very easy to turn off, for those patrons who don't want filtered results, or who do need that privacy and that confidentiality, who don't want to be tracked, who don't want whatever they're searching for to be influenced by their past searches or to show up influencing their future searches.

One option that exists now is kind of the anti-Google: the search engine DuckDuckGo. Their whole frame of reference is "we don't filter, we don't track." They even reference Pariser's book on their site. So there's not much more to say about them: they don't track, they don't filter. If you have a patron who truly needs that, this is one of your options.

Another interesting website that might be worth watching, and checking back in on, is TOSDR, Terms of Service; Didn't Read. This is a really new project. It's only been going since this summer, so they don't have a lot of data yet, but they're working on it. The premise is that people use all these services online and just click "I agree" to the terms of service without reading them, because, you know, who wants to read fifteen pages of a terms of service agreement when all you want is an account and a password?
What they're doing is collecting these terms of service agreements, going through them carefully, and rating the different aspects of them, the different clauses: hey, okay, this one is fine; hmm, watch out for this one if this particular issue matters to you; oh gosh, don't give these people your data. As they get more data, this site will become a more useful reference to check, whether you're picking a service to use for your library or considering a site to recommend to a patron. Right now they've only got maybe a dozen or so sites reviewed, but it's a start.

And of course, think about your library's own privacy policy. You shouldn't necessarily expect your patrons to know that we don't buy and sell their personal data, because some people might just assume that, since every other outfit operating on the web is buying and selling their data, libraries probably are too. So make sure you have a privacy policy, or a link to the ALA's statements, or something in place, to show the patrons who are concerned about privacy what we really believe.

And think about what libraries are in a position to do well that the internet at large tends not to do so well. Carl Grant, in an interesting blog post on libraries and the cloud, talked about the idea that instead of a filter bubble, libraries could offer a learning bubble, where we would make it very easy to click a single link to get an opposing viewpoint or a critical commentary on whatever a person happens to be looking at. We're not there yet, but it might be something to work toward down the road. And this feeds into information literacy instruction. Information literacy and critical thinking have a fair amount of overlap.
John Weiner did a really neat study where he analyzed some 1,600 articles from PubMed and other databases to see what kinds of concepts "information literacy" and "critical thinking" each pair with, to see what the overlap is. And critical thinking turns out to be primarily internal: it's a private process, it mostly happens up here, in the individual person's brain, and it's learned over many years through experience. Information literacy tends to be more public, more communicative; it's about shared information. But when you set that difference aside, there's a huge amount of overlap; the Venn diagrams overlap quite a bit. So information literacy instruction can actually be used as a tool to teach critical thinking skills, to help people evaluate these information structures as they encounter them online too, and get better control of their own data.

And think about things that are either happening now or on the near horizon. Joseph Turow has already talked about how news sites already have the technology to alter the headlines and lead paragraphs of online articles to better attract an individual's attention, based on their personal preferences. Okay, is that a problem? Well, what happens when a student cites an article in a paper they turn in to their teacher, and the teacher follows the link in the bibliography, and the title doesn't match, and the quote the student used isn't an exact match either? This is, again, just a thing to be aware of. If we have knowledge that a particular news source is doing that, that's information we can pass on to the students and teachers who use our online resources.

And this one's somewhat scarier, though it's not really happening yet, although it is technically possible already: altering the characters and plots in e-books, in videos, in video games, to suit readers or viewers or players. It's not widespread, because it requires an awful lot of processing power.
But think of all the things we're doing today that ten years ago would have been impossible because of processing power. This is something we might see in the future. So imagine a book club discussion where everybody gets the book on their e-readers or their tablets, and they get together, and nobody actually read the same book. Then again, here's a place where libraries can step in: we can always make sure people have access to the authoritative source text, the words the author actually wrote.

James Weinheimer, who is a cataloger in Rome, talked about how reliable libraries are compared to Google. He talked about reliable selection, so you get the full range and diversity of opinions; reliable cataloging, so you can always find the same thing the same way; and reliable access, so if something disappears, we still have ways of getting to the information. In general, these are the things libraries are in a position to do very well. We can't out-Google Google, but we're not playing the same game. And then there are just the basics: as always, libraries have to continue to develop our collections to the greatest depth and breadth possible within our resources, so that we always have something to offer any patron who comes in, or comes to our site, no matter how narrow their particular filters are.

So this filtering is a permanent part of the information landscape. The exact way it works is changing rapidly, but in some form or another it's here to stay. The main takeaways, if I can summarize, are, first, an awareness of your own filter bubble. Think about what these companies probably know about you and how that affects what you see.
Second, understand generally how these things work. Then you'll be better able to recognize your own filter bubble and your patrons' filter bubbles, and maybe poke a hole in them, so you can figure out how to get around them and help patrons most effectively. There we go. Any comments or questions? The question was, how have I changed my own searching? At first I got really paranoid and tried to go in and lock everything down, and then I realized how inevitable it all was, and I thought, all I'm doing is making things harder on myself. My privacy settings are actually looser now, because I can't really escape, so I might as well get the benefits. It's like that point Joseph Turow made: why would you opt out, if all you're opting out of is seeing the targeted results, and you're not able to totally opt out of the data collection itself? I figure I'll take what benefit I can get, and if this is something that's going to harm me, it will be at a societal level, and I won't be the only one affected. The next question was, how would I go about circumventing a patron's filtered results? Well, one option is simply using an unfiltered search engine like DuckDuckGo. Sometimes the public library computers themselves are actually good enough, as long as nobody is logged into their own Google account or something, because those public machines get such heavy use by such a widely diverse group of people that there is no coherent filter in effect; it's all scrambled. If someone logs into their account with one of these services, then all bets are off, because then the service can apply the account-level profile. But on a truly public machine where nobody is logged into any personal accounts, the filters are so scrambled that they won't really match up with the patron's preferences to start with.
So, provided they're not logged into whatever their account is, Yahoo, AOL, Google: yes, sometimes the answer is just to log out of everything you can log out of, or to go use a different computer where you're not logged into anything and nobody else is either. There aren't a lot of really easy ways to get out of this stuff. Also, most browsers have some kind of private browsing feature. It's called different things in different browsers, but look for something like "private browsing" and just turn it on. So: make sure everything that can be logged out of is logged out of, and turn on whatever private browsing feature there is. I can't remember exactly what each browser calls it, but every major browser, Internet Explorer, Firefox, Chrome, Safari, has something like that. An audience member added: our library is a university library, and our computers run a program called Deep Freeze, which basically means every night the computers scrub themselves clean. Every morning when those computers come back on, it's like they're brand-new machines, with no search information from previous users or anything like that. That's one other way to get around it. That's another good one. She mentioned Deep Freeze, a program that scrubs the computer clean and resets it to its initial settings when it reboots, so no cookies or anything else are left on it; nothing carries over. I've heard of another one called Centurion, and I'm sure there are many others out there that do similar kinds of things; Deep Freeze is the biggest one I've heard of, though. So that's another option for making sure that none of these little cookies and other files that accumulate carry over. That's one service libraries could offer: secure searching of the web. Yes? The question was about privacy programs for the internet, and whether I could comment on them. Yes.
Security on the internet, sort of. The question was: in one of your sources, one person found that when she did her searches, there were 105 trackers tracking her. How did she know that? I don't know exactly; she had some kind of method for figuring out who all was watching her, but I don't remember what it was. You could go to my source article and check that yourself. An audience member offered an answer: unfortunately, I can't remember the name of the plug-in, but there's a Firefox plug-in where you go to a site and it starts tracking what's tracking you, and then you can bring up a screen that shows all the connections between the site you're on and other sites, and how they're connected. I just can't remember the name of the plug-in, but search for something like "Firefox privacy tracking plug-in" and I'm sure you can find it. That may be how she knew that even though she wasn't seeing the ads, she was still being tracked. Yes, I don't know if that's the tool she used, but I know a tool like that is out there that will do that sort of thing. It was probably something like that, a browser plug-in, or maybe other tools she had access to; I'd have to go back to the article to check. The other comment, repeated for the people on the stream who may not have been able to hear it: this is a service libraries can offer. Where we have something like Deep Freeze in place, we can offer secure browsing; that's a service we have. All right now, any other questions or comments? Will this be available to view later? Provided nothing goes wrong before it's done, yes, the session has been recorded. And actually, if you email me, I will be happy to send you my bibliography.
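The tracker-spotting plug-in described above essentially scans each page for resources loaded from domains other than the site you're visiting. As a rough illustration (the page and domain names below are made up, and real tools also match against curated lists of known trackers), the core idea can be sketched with nothing but Python's standard library:

```python
# Minimal sketch of third-party tracker spotting: parse a page's HTML and
# flag script/img/iframe sources served from a different domain than the
# site itself. Real plug-ins do much more; this shows only the core idea.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyFinder(HTMLParser):
    def __init__(self, first_party: str):
        super().__init__()
        self.first_party = first_party   # the domain you're visiting
        self.third_parties = set()       # everyone else loading resources

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img", "iframe"):
            return
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).netloc      # empty for relative URLs
        if host and host != self.first_party:
            self.third_parties.add(host)

# Hypothetical page on news.example with two third-party resources.
page = """
<html><body>
<script src="https://cdn.adtracker.example/beacon.js"></script>
<img src="https://pixel.analytics.example/1x1.gif">
<script src="/local/app.js"></script>
</body></html>
"""
finder = ThirdPartyFinder("news.example")
finder.feed(page)
print(sorted(finder.third_parties))
# ['cdn.adtracker.example', 'pixel.analytics.example']
```

Note that the relative, same-site script is not flagged; only resources fetched from outside domains count, which is roughly what those "who's watching you" screens are visualizing.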
The full bibliography is about two pages, which you can't possibly read from back there, but I'd be happy to send it to anybody who wants it. The Prezi is online and it's public, so if you go to prezi.com and search for "filter bubble" and Angela Kroeger, you'll find it. It's also on the MPLA site where they post the links to presenters' materials, so you can find it there too. An audience member mentioned that Eli Pariser did a TED talk: "I heard about it that way; we thought, let's follow this guy." Yes, he gave a very good TED talk, and it's worth watching. Another audience member found the name of that Firefox plug-in: it's called Ghostery. Ghostery? Yes, G-H-O-S-T-E-R-Y. Ghostery is apparently the Firefox plug-in that will let you see who is watching you and track the trackers, as it were. Where can you find the presentation? Prezi.com, and you can also find it on the MPLA website. Any streamed videos will be available to the general public two weeks after the conference, at least. For the first two weeks, the recordings will be available only to those who paid for the streaming video conference; after two weeks or so, they will become available to the general public as well. Any other questions, comments, observations? One attendee said she had been in the habit of clearing her browser at home on her personal machine, figuring some degree of tracking was going on, and she clears all of it: the memory, the history, the cookies, everything. But all of this information is also being collected out on the web.
So what advantage do you get by doing that on your own machine, when all of that information is already out there? Actually, if you do delete your cookies and such regularly, you make it a little harder for them to track you, because they always need some kind of input to tie new information about you to. So yes, if you delete your cookies regularly and use private browsing, you will hamper their ability to collect new information about you. Any others?
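Why clearing cookies helps can be shown with a toy simulation (the cookie name, pages, and storage here are all invented): a tracker can only link today's browsing to yesterday's if the same ID cookie comes back, so deleting it forces the tracker to start a fresh, empty profile.

```python
# Toy simulation of cookie-based tracking and what clearing cookies does.
# Nothing here is a real tracker; it just models the ID-cookie mechanism.
import uuid

profiles = {}         # the tracker's server-side store: cookie ID -> pages seen
browser_cookies = {}  # what sits on your own machine

def visit(page: str):
    """Simulate a page view that carries (or is assigned) the tracker's cookie."""
    cookie = browser_cookies.get("tracker_id")
    if cookie is None:
        cookie = str(uuid.uuid4())           # tracker hands out a fresh ID
        browser_cookies["tracker_id"] = cookie
    profiles.setdefault(cookie, []).append(page)

visit("news"); visit("shopping")
old_id = browser_cookies["tracker_id"]

browser_cookies.clear()                      # "clear cookies" in the browser
visit("travel")
new_id = browser_cookies["tracker_id"]

print(old_id != new_id)                      # the old and new visits are unlinked
print(profiles[old_id], profiles[new_id])    # two separate, smaller profiles
```

The tracker still collects data after the wipe, which matches the point above: you can't opt out of collection, but regular clearing keeps the fragments from being stitched into one long profile.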