You can send a note to Anne Ladyem or me, and we would greatly appreciate the feedback. Again, welcome to Digitorium 2020. And I want to hand it off now to Dr. McDivitt to introduce our plenary speaker.

Hello everybody. You get to see my wild office. I'm pleased that I get to introduce Dr. Lauren Klein as our first speaker for Digitorium. Lauren is an associate professor of English and quantitative theory and methods at Emory, and she's also the director of the Digital Humanities Lab at Emory. Inside Higher Ed named Dr. Klein a rising star of digital humanities in 2017. And she also edits Debates in the Digital Humanities, which, thank you so much, because that got me through grad school. So thank you and welcome. You have the floor.

Thanks so much for that introduction. I just have to say, the University of Alabama holds a place that's more meaningful than many of you know, because I've been living in Georgia for, at this point, almost ten years. But the first invited talk I was ever asked to do upon moving to Atlanta, where I live, was in Tuscaloosa. And I totally didn't know that it was in Central Time and arrived a full hour early, but I had a great time, and it's really nice to have that all come full circle. OK, so I am going to try to share my screen here. OK, that seems good. And then let me adjust so I can see the people. Excellent.

OK, so what I'm going to do today is talk a little bit about digital humanities in relationship to the growing field of data justice. This is an interdisciplinary field formation that considers how the collection, analysis, and use of data relate to issues of social justice. And today I'm going to be talking about how feminism, and intersectional feminism in particular, can help us develop more ethical and more equitable data practices, both in the digital humanities and beyond.
And I should say that a lot of the content of this talk comes from a book that I co-authored with Catherine D'Ignazio, who you can see here. She's an assistant professor of urban science and planning at MIT. I just wanted to put that out there, and there's more information about the book on the internet. So my starting point for writing the book, and my entry point today into this talk, is the larger movement that is working to hold corporate and government actors accountable for racist, sexist, and classist data products. You might think of face detection systems that can't see women of color, hiring algorithms that demote applicants who went to all-women's schools, search algorithms that circulate negative stereotypes about Black girls, or child abuse detection algorithms that punish poor parents. These are all examples that are discussed at length in the books you see at the bottom of the screen. But I put up two examples that have come up in the realm of academia in the past month. On the right, we've seen an algorithm that was intended to predict A-level exam grades in the UK, since the exams were canceled because of the pandemic. But in predicting the grades, it took students' past exam performance, but also geographic location, into account, which meant that students in under-resourced regions, which had historically lower exam scores, found their grades lowered in comparison to students in tonier, as they would say, neighborhoods. And then just yesterday, I saw something on Twitter, and I'm pretty sure this is the news article that it links to: exam proctoring software that either couldn't see, or somehow identified as cheating, the woman that you see here, who has dark skin, on the basis of some sort of black-box algorithm that was looking for something we don't know.
But she's required to shine a really bright light in her face for the duration of the bar exam, which is like a two-day-long, all-day test. So pretty much, these are the examples that came to mind last night when I was putting together this slide, but almost every single day there is another example: this constant deluge of new algorithms intended to optimize or enhance, but instead being used to discriminate, surveil, police, and oppress. And the fact that these examples just keep on coming is, in a sense, what motivated me and Catherine to put our heads together and write the book. On the one hand, corporations see data as the new oil — this is a phrase that you see bandied about, and they mean it in a good way: data seems to be an untapped natural resource that, if you can process and refine it, can lead to tremendous profit. On the other hand, you have women, particularly women of color, as well as Indigenous people, immigrant communities, LGBTQ folks, and more, experiencing this very same process of data extraction, if you want to extend the oil metaphor, because that's really what it is, as just another instantiation of the same old oppression. And I keep these books at the bottom of the screen because we are not the first ones to have made this case: that the oppression is real, that it is ongoing, and that it's necessary to dismantle. What we explain in the book is how feminism, and intersectional feminism in particular, has been focused on dismantling instances of oppression, and the forces of power that cause those instances of oppression, for a very long time. So that's the rationale for the project. And as you'll see today, I'm going to try to take these principles and apply them to digital humanities a little bit.
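To make the A-level grading example concrete, here is a minimal toy sketch in Python of how a model that blends a student's own record with their school's historical average can penalize students from under-resourced schools. This is not Ofqual's actual algorithm; the blending weight and the scores below are invented purely for illustration.

```python
# Toy sketch of a grade predictor that blends a student's own record with
# their school's historical average -- NOT the actual UK (Ofqual) model.
# The weight and all scores are invented for illustration only.

def predict_grade(student_score: float, school_mean: float,
                  school_weight: float = 0.5) -> float:
    """Blend individual performance with the school's historical mean."""
    return (1 - school_weight) * student_score + school_weight * school_mean

# Two students with identical individual records:
a = predict_grade(85, school_mean=80)  # historically high-scoring school
b = predict_grade(85, school_mean=60)  # historically under-resourced school

print(a, b)  # 82.5 72.5 -- same record, different predicted grades
```

The moment geographic or institutional history enters the prediction, two students with identical records receive different grades, which is the structural effect the talk describes.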
But before I get started, I thought I would do some level-setting about what I mean by feminism, because this is a term that means many things to many people, and I can see some of your faces up here — some of you I know, others of you I do not. So I wanted to do some very basic definitional work that hopefully will not be too reductive and will give you a sense of where I'm coming from today. What you see here is an image from the 2014 MTV Video Music Awards. This was when Beyoncé, who you see in the front, projected the word "feminist" behind her. She also sings about feminism in her song "Flawless," where she samples a clip of the Nigerian author Chimamanda Ngozi Adichie, who was in turn quoting what turns out to be the American Heritage Dictionary definition of the term. And so you hear this in the song: a feminist is a person who believes in equal rights for men and women — and nonbinary people, though they don't include the nonbinary part; we'll talk about that later. So what we get from this, basically, is that feminism is at its root a belief in equality. Merriam-Webster also gives a second definition, which is organized activity on behalf of women's — and nonbinary people's — rights and interests. So feminism also means political action. And then feminism has a third definition, which is a set of theories and ideas. These theories begin by thinking through issues of inequality with respect to sex and gender, but the past 40 years of scholarship and the current political reality have brought many, many more dimensions of inequality into the conversation: race, class, sexuality, ability, and so on. And so this brings me back around to the idea of intersectional feminism, and how, in my view — and this is what Catherine and I say in the book — feminism in the year 2020 must be understood as intersectional.
So as you probably know, but I think it's worth saying out loud, intersectionality is a term coined by the legal scholar Kimberlé Crenshaw, which she uses to explain how social inequality cannot be explained by only one dimension of difference, such as gender. When we're talking about inequality or oppression, we must be talking about the intersection of the many factors and forces that produce them. And the key thing to understand about intersectionality — a thing that's often overlooked in casual invocations of the term — is that it doesn't just describe markers of individual identity and their effects, like "I am a woman, I am cis." It describes the structural forces of power, and their intersection, that create the effects I experience as a result of those identities. And it's really the work of women-of-color feminists, and Black feminists in particular, that has foregrounded this conversation about forces of power. That's the key contribution here for understanding this talk, the larger project, and intersectional feminism more generally. So in short, intersectional feminism isn't only about issues of gender, or even only about women. It's about power: who has it and who doesn't. And in today's world, as you can see, data is power. You see this again in the idea of data as the new oil, and you also see it in the idea of data as the same old oppression. Intersectional feminism, when applied to data science, can help that power be challenged and changed. And so the argument that I want to make today is that data science needs feminism, and intersectional feminism in particular, if we ever hope to overturn these power imbalances.
And I think that those of us in DH are uniquely positioned to do this kind of activist, interventionist data work, precisely because we have the technical skills and the knowledge about how to work with data, and because we understand history and theory and culture and context. Even more than that, because of our training, it's easy for us to do this translational work — to see how these feminist principles, derived either from feminist theory or from feminist activism, can be translated into the technical world. This is tricky for people coming from other fields with other backgrounds, but it's the work that DH does all the time, and that's what I want the major takeaway from my talk today to be. So, to talk a little bit more about that, I really feel like it was my own experience with digital humanities, and Catherine's related background in design — which also does a lot of translating theory into practice — that enabled us to write the book. Our approach was really to sit down and ask ourselves what we had learned from our scholarship and our participation in various activist communities, and how we could distill these complex, heady ideas into some actionable principles. And so you can see here the seven principles that we came up with, which we hope — we think, we believe — encapsulate the most important aspects of feminism as they relate to data work. And our goal — this can sound sort of like instrumentalizing, and in a way it is, but not really in a bad way, although we can talk about that later —
Our goal was to operationalize feminism for data science: to provide models that might guide the work of people working with data, or who want to work with data, such as students who might be listening, or people who want to refuse to work with data, on either political or ideological grounds. What I'm going to do for the rest of the discussion is give a couple of examples — some middle-of-the-road DH projects that call themselves DH, some I think of as DH-adjacent, and some that are secretly DH projects because of the people involved — that help illustrate what I mean by taking these principles and putting them into action in the work that we do. And I'm hoping that this won't take that long, and that we can use the last bit of our time for discussion today. So what does operationalizing feminism look like in practice? One example that we talk about in the book is Mimi Onuoha's Library of Missing Datasets. Onuoha is an artist and an educator, and she's one of these DH-adjacent people I described — she shows up at DH conferences, but teaches primarily from her perspective as an artist, and an artist who works a lot with data. So this phrase "missing datasets," which you can see on the slide, is Onuoha's way of describing datasets that a reasonable person might expect to exist, because they address issues of pressing social need, but that for various reasons don't actually exist in real life. Datasets like trans people killed or injured in instances of hate crime — there's no comprehensive national dataset. Or people excluded from public housing because of criminal records — this is a real thing that happens, but it hasn't been identified as an issue by the various government actors that have the capacity to change it. Or a gender and race breakdown of people with COVID in the United States — we don't have this data comprehensively.
And what Onuoha explains is that these datasets are missing not by accident, but because of a lack of social, political, or governmental will, or some combination of these. These are the forces of power that the principle of examining power points us toward. Onuoha exhibits this artwork in two ways. One is just as a GitHub repository, which you can Google and find right now — that's what you see on the right in the screenshot. The other is the file cabinet that you see on the left. The files are labeled with the titles of each of the missing datasets. The idea is that you peruse the file cabinet, tab through it, see one that looks interesting to you, go to open the folder — and it's empty. And it's empty for the reasons that I've just described. So the point here is that the first step in a feminist approach to data work is to understand and examine these forces of power that determine what data are collected and what are not, what research questions are asked and which are not, what research is undertaken and what is not. Once we start to understand how these forces of power operate, we can start to take steps toward rebalancing them — in terms of data collection, in terms of the research that we undertake — and begin to effect change. An example of this from the DH sphere is really one of my favorite DH projects ever, which exemplifies pretty much all of the principles of data feminism, but I'm going to talk about it now in the context of examining unequal power relations and challenging that power: the Colored Conventions Project. If you're not familiar with the Colored Conventions as a thing, these were events that took place in the nineteenth-century United States in which Black Americans, both fugitive and free, gathered in person to strategize about how to achieve legal, social, economic, and educational justice.
And so, you know, I am an Americanist, so this is in my wheelhouse, but if you don't know this: the standard story about the abolition of slavery in the United States is one that credits white abolitionists with leading the cause. But that's primarily because their efforts were the ones that got recorded in newspapers — they were the ones publishing newspapers and circulating information. The newspapers, in turn, have been digitized, and they are easy to access now. Versus the meeting minutes of these in-person events, which were not easily preserved and are not easily accessed — and as a result, they're much harder to draw on when you're generating your historical account of what happened at the time. So in this case, the forces of power that the Colored Conventions Project is seeking to challenge are the forces that control the historical narrative. Among the goals of the project is to create a counter-dataset consisting of these meeting minutes, so that they can be reintroduced into the historical record and rebalance the credit for who was responsible for bringing about the end of slavery, and ideas about abolition more broadly, in the United States. And just very briefly, there's actually a second issue that the Colored Conventions Project takes up that's related to this idea of missing data that Onuoha talks about, too. Because the dataset is derived from the official meeting minutes, it only records who was there and who spoke officially. And these participants were almost exclusively men, because that was who was authorized at the time to speak in a public event like this. And so in order to address this disparity, the CCP team asks its teaching partners to sign a memorandum of understanding, an MOU, before introducing students to the project.
And what it requests is that all instructors introduce a woman involved in a convention — a wife, a daughter, a sister, a fellow church member — alongside every male delegate who is named, and they ask the teachers to report back on their own research. From this work, the project team is creating a second dataset of these women's names — people who would otherwise go unmentioned, uncounted, and therefore unrecognized — trying to rebalance the scales in terms of gender parity as well. So I'm going to turn to an example that I've chosen to illustrate another principle, but it's actually also a counter-data collection effort. And we can talk a little bit about this later: I've found that so many of these projects fundamentally need to start by making their own data, because the data doesn't exist to capture the problem or issues they're interested in. These are maps created by the San Francisco-based Anti-Eviction Mapping Project, the AEMP. It's a collective, but I feel like in this particular audience it's worth pointing out that the project was really initiated by, and has continued to be guided by, several graduate students in the humanities in real leadership roles. They don't advertise that fact, but this to me is one of these stealth DH projects because of it. In any case, since 2013 the AEMP has worked in collaboration with tenants' rights organizations and community groups to collect and map data about the eviction crisis in the Bay Area, which, pre-COVID, was where a lot of the news reportage on evictions was focusing because of the influx of tech people. Now the same issues are pretty much everywhere in our country.
So what you're looking at here on this map: each red dot indicates a place where a person or a family or a group was evicted, and the blue dots indicate places where the AEMP has also video-interviewed one of the people who was evicted from that location. If you click on one of the blue dots, you bring up one of the video interviews, like the one you see here on the right, of Phyllis Bowen, a resident of Midtown in San Francisco. So in terms of the idea of embracing pluralism, which is the principle I want to talk about now, we can contrast the map I just showed you with the work of the Eviction Lab, which is based at Princeton University — and I believe, though I might get this wrong, in a sociology department; someone might correct me on this. The Eviction Lab's goal is to present a national picture of the eviction crisis. This is a really worthy goal and a valuable project, especially now, as I mentioned, given that the crisis of evictions has really exploded all over. But it's a wildly different project in terms of process. The Eviction Lab's maps derive from seemingly bigger data, and the map that you see right here seemingly presents a more comprehensive picture of the problem of eviction in the United States — we're literally looking at the whole country. So you think, oh, this project has more info. But what the AEMP has shown is that national real estate databases, like the one the Eviction Lab uses, significantly undercount evictions. You can think of, first of all, why real estate companies would be disincentivized to count evictions: they're in the business of making money off home sales, so if they get too much negative press for evicting people so that those places can be sold for more money, that's a bad look.
But second of all — and I can think about this having lived in graduate student housing — there are all of the different ways in which you can be forced to leave the place you're renting that aren't officially being served with an eviction notice: raising the rent a lot, not fixing things that are broken, having the landlord come and lurk around the property and be intimidating. All of these things make people move out, but they are not recorded in the real estate or city databases of who has been served with eviction papers. And so what the AEMP has done is work with local communities and community groups that are doing outreach to people who have been evicted, and they've gathered admittedly messier, but actually much more accurate and thorough, data. It's also more contextualized data, and it documents a greater extent of the problem at hand. This is what is gained when you begin to embrace pluralism: seeking knowledge from different people coming from different perspectives across the board. OK, I'm looking at the time and I feel like I'm doing well, so I'm going to keep on going with this parade of examples. The first couple of examples have focused on the issue of power and people — the people who have the power and the people who don't. But another major idea that comes from feminism relates to more conceptual structures of power, and more specifically binary structures, which are usually defined by a hard distinction between two groups. What feminist theory has really helped to show is how these binary distinctions are usually hiding a hierarchy, with one group on top and the other on the bottom. And once you start to see the hierarchy, you start to understand why there's a hard line imposed.
And this is sort of to ensure that the one group on top stays there and the people on the bottom can't break through. The primary example is the distinction between the idea of man and the idea of woman, since it's a clear example of both a false binary and an unequal hierarchy: there are more than two genders, and among them, no gender is better or worse than any other. But one of the key moves of feminist theorists — and I know to many of you in the audience this work is familiar — is to take this critique of the gender binary and use it to question other binaries and hierarchies that we encounter in the world. This means the distinction between nature and culture, subject and object, or the one that this example takes up, which is the artificial distinction between reason and emotion. And in all of these cases, there's this implied hierarchy, and also this enforced belief that these things are somehow poles, and never the twain shall meet. Generally, in an Anglo-Western context, we've been taught that reason is somehow better than emotion, and we see this play out in data in general and in data visualization in particular. Best practices for data visualization often involve a clean design, a minimalist aesthetic that presents "just the facts." But the question remains: why are these our best practices? Especially when research has shown that we actually interpret these aesthetic choices just as emotionally as others, and that we tend to believe these minimalist, just-the-facts style charts are more truthful than they actually are.
We can point to scholarship by the visualization researcher Jessica Hullman, who has shown that if you include a source line below your visualization — just "source," colon, URL, anything — and you ask people, "do you trust this visualization?", then, regardless of whether they actually click on the link, see where it goes, or even check whether it's real, people think the visualization is more trustworthy. Or, on the flip side, there's a visualization researcher — not in CS, but sort of in the sociology of visualization — Helen Kennedy, in the UK. She's done these really interesting studies where she shows people the minimalist version of a particular dataset, and then a more elaborate one with embellishments and animations — things that would be dismissed in terms of best practices as what Tufte would call chartjunk, stuff that doesn't belong. But then two weeks later she asks people: what do you remember from that visualization? And lo and behold, the maximally illustrated one sticks in people's minds much more than the minimal, generic, standard visual typologies. So this is the paradox I want to talk about a little more here: why we believe that a rational presentation of data is somehow better and should never be mixed with visualizations that appeal to your emotions. So now you see the "bad object" on the bottom left, and I'll talk a little more about what you're seeing in the major image on the screen. The large black images at the top are actually screenshots of an animated visualization of the number of gun-related deaths in the United States — in this case, in 2013. It's a visualization created by the design firm Periscopic, and if you Google it, you'll find the animated version.
And actually, you should do that, because I only recently discovered that in the past several months, because of the continuing epidemic of gun violence, they've decided to update this visualization with data from 2018. They've done a really interesting intersectional breakdown — not just, generically, who was killed by a gun in the United States, but which groups are killed, and for what reasons. So I totally recommend that you find that visualization and watch it while I continue to talk at you. If you don't do that, or if you want a narrative of what you're looking at at the same time: it starts out with the image that you see on the top left, where each person killed by a gun in that year is represented as a single arc that is traced across the screen. They're traced one by one, and they start out pretty slow, so you can read all of the information about that person. But then they get faster and faster, so not only can you not read all of the information — it's coming too quickly — but they start to create this web-like image that you see in the larger image on the screen. And I find it — at this point I've watched it hundreds of times — I still find it overwhelming to watch, really almost unbearable, because you think that it's over, you think they can't plot any more points on this visualization, but they can, and it keeps on going, and the numbers keep counting up. But that's really the point: there are too many people being killed by guns in the United States. They even make this point visually. In data visualization, there's something called occlusion, which means hiding information below other information. In general, you're taught to avoid it. Here, they're doing it deliberately.
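To give a sense of the arc technique just described, here is a minimal matplotlib sketch: each person is an arc from birth to projected death, drawn solid up to the age at death and faded afterward. The geometry, the colors, and the three example records are my own assumptions for illustration, not Periscopic's actual code or data.

```python
# Minimal sketch of the "stolen years" arc idea -- an illustration only,
# not Periscopic's implementation. Records below are invented.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

def arc(age_at_death, projected_lifespan, n=100):
    """Semicircular arc from birth (x=0) to projected death (x=lifespan).

    Returns the lived portion (up to age at death) and the projected
    portion (the years lost), each as an (x, y) pair of arrays.
    """
    theta = np.linspace(np.pi, 0, n)                 # left-to-right sweep
    x = projected_lifespan / 2 * (1 + np.cos(theta))
    y = projected_lifespan / 2 * np.sin(theta)
    cut = int(n * age_at_death / projected_lifespan)
    return (x[:cut], y[:cut]), (x[cut:], y[cut:])

fig, ax = plt.subplots()
for age, lifespan in [(21, 78), (34, 80), (19, 76)]:  # made-up records
    lived, lost = arc(age, lifespan)
    ax.plot(*lived, color="orange")                   # years actually lived
    ax.plot(*lost, color="lightgray", linestyle="--") # projected years lost
ax.set_xlabel("age (years)")
fig.savefig("arcs.png")
```

Overlaying many such arcs on one axis produces exactly the deliberate occlusion the talk describes: past a certain density, individual arcs become unreadable and only the mass remains.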
And the AEMP does this too, actually — to make the point that there are too many people being harmed, that this is an epidemic. At the same time, methodologically, it's no less statistically sound than any other study. If you look at their methods, which you can read about — and which I teach in my classes, because they're fascinating — the data about the people who were killed derive from a national crime dataset (from crime.gov; that URL exists). The projected life spans that they plot are determined using a sophisticated data model developed by the World Health Organization that takes into account all sorts of factors about individuals and regions and gender and things like this. But it was really viewed with suspicion by the visualization community, because it makes us feel. A feminist approach here would say that it's not a problem at all that it made us feel things. In fact, it's a more compelling and more meaningful visualization because it blends reason with emotion. Rebalancing emotion and reason really just opens up the data communication toolbox, and allows us to focus on what really matters in a design process: honoring the context, listening to experience, and then taking action — trying to impel your viewers or users to take their own action to rebalance these asymmetries of power that we encounter in the world. One quick example that extends this idea, and which is new, is Dr. Faithe Day's Black Living Data Booklet. This is really new. Day is a CLIR postdoc at Purdue, so very closely connected to DH — a single person, recently out of grad school. This is actually not a visual project, but it absolutely exemplifies this idea of reconnecting data back to embodied experience. The project is a printed book.
It's almost a manifesto — sort of inspired by the data coming out about COVID and its disproportionate effects on Black people in the United States — that explains why data about Black people must always be considered alongside Black life. I wanted to show a different example of the same principle: that honoring the human experiences, the lives, and the stories behind the data gets you to a more complete picture, a more meaningful picture, and in many cases a more respectful picture of the issues that you're hoping to convey to other people. OK. So if it's not already apparent, the principles that I outlined in the beginning — the principles of data feminism — really apply to every stage of a project that involves data, from inception to funding to production to circulation to impact in the world. And this brings me to the final major point I want to make before the Q&A, which may already be obvious from these examples: a feminist approach to data science really involves bringing together core digital humanities tenets and the work of data science. It's really important not to erect artificial distinctions on the basis of the size of the dataset, the technical credentials of the people undertaking the work, or whether the work's contribution is humanistic, or partially humanistic, rather than purely technical or methodological. All of these distinctions have continually and historically been used to exclude women and people of color from participating in these fields.
But if we start to look at these fields together, then we can clearly see that some of the most exciting data science work — because this is really what this is — is in fact being undertaken by digital humanities scholars right now, as well as by artists and journalists and community organizers and activists, among the many, many groups that are doing this kind of data-driven work. Some of this work looks fairly traditional. On the top left, you see a paper by Margaret Mitchell, who runs a team at Google that researches bias in natural language processing. Next to Margaret's work, in the top middle, you see an interactive AI artwork by Stephanie Dinkins; inside that genie-lamp-type form is an interactive AI that was trained on an intergenerational dialogue among the Black women in her family. On the right, you see the Torn Apart / Separados project. This is a DH project that calls attention to the nefarious reach of ICE contracts — in some cases, the money potentially flowing to ICE — and the detention of undocumented immigrants. And at the bottom, you see a project by Data Therapy, a community group coming out of academia that works with community-based organizations to create what they call data murals for their own communities. In this case, they sat down with the kids who lived in this neighborhood, had them examine the data about food and food security in their neighborhood and what the garden was doing, and they analyzed and then designed and then actually painted this big mural on the fence. So these are some, certainly not all, of the ways that we can use data to develop more ethical and equitable data practices, and really to call ourselves and our communities to action.
So I think, and this is something that I find interesting about a lot of DH work, in some ways it positions itself against a lot of the work on data coming out of academia at large: while we recognize that data is at the root of so many problems today, we also see that in certain cases it can be part of the solution. And I think DH in particular is really uniquely suited to that task and has a lot to contribute to it. So I just wanted to end, since the emphasis is on concrete things you can do, with some main takeaways of what I'm calling for in this talk. So I outline data feminism as a data science that exposes and challenges power, and that is led by and centers minoritized people and groups. It can actually be a counter-data science about the injustices created by mainstream data science. It looks at many axes of inequality, including gender, race, class, and more. It considers process: how inequality permeates all stages of data science projects, from funding all the way to deployment. It credits labor, acknowledging that data science is the work of many hands. And then, I think importantly, it looks to and is exemplified by a lot of digital humanities work in the world right now. And so, more specifically, some things that DH students, and I should say faculty too, can do: Do work that interrogates and exposes racism, sexism, and other forces of oppression. Examine how these forces show up in data and in the world. Collect counter-data and missing data. Introduce new communities to data and digital tools. Use data to advocate for equity at your institution; we can talk about this later, but there's so much work that can be done at the local level, right where you are now. Experiment with creative forms of data presentation and communication: quilts, sculptures, murals, VR, fashion shows, anything you can think of. Include more people in data-driven projects, especially impacted communities.
And then make sure you credit your sources and your research support staff. Especially, again, in the context of the library: faculty and students often come to librarians and ask for help with research, are pointed to certain resources, and learn certain skills. All of this labor should be credited in DH and data science projects. And more generally, make your process transparent: if you worked with other people, either as part of your team or as a way to learn the skills to do your project, be transparent about that, as well as reflecting on your own identity. Okay, so I think that's about it for the formal portion of the remarks, and I'm just really eager to hear your questions and talk more about this in the rest of our time today. Thanks. All right, everyone. Lauren, if it's okay, I'll open the chat up for questions, and I'm happy to watch it for you there. I've notified participants in previous sessions that you can click the participants button at the bottom of the screen to do a virtual raising of your hand if you would like, and I'll be glad to watch for that as well. Looks like there's a question from Daniela, and then Madeline. Hi. Can I speak, is this good? Great. Thanks so much for that amazing talk. This is really, really inspiring. I mean, I was totally convinced by your point about the emotionality of visualizations. But I would love to hear you talk a little bit more about the relationship of that to the claim that Tara McPherson made in her work about how stylish visualizations basically cover things over with a corporate gloss. Stylish visualizations are all the hype now; we get all this software that is supposed to make data accessible, and there's that kind of discourse going into it, which I think often works to prevent critical reflection, even of an emotional kind.
So, yeah, I would love to hear you engage that a little bit. Thanks. I love that question. I mean, that's such a good point. And I think the easy answer, but I think it's the right answer, is that none of these images should be taken, and this is more of a Johanna Drucker argument, but related, none of these images should be taken as truth in and of themselves, right? You should always be asking: what is it that I'm seeing? How is it being presented to me? What does the data set contain? What does it omit? What are the choices that were made in the presentation of the data that may lead me towards this particular takeaway, versus choices that were not made? And this is sort of data literacy, but it's not even data literacy; it's just critical thinking and reflection on the images that we encounter in the world. But I think that with respect to data visualization, there's a lot of unlearning that needs to happen, both within academia and more broadly, just because of the history of data visualization and how the idea of giving data visual form was intended to stand for the data, rather than be used as a tool in the process of additional discovery. And if you look at the history of visualization research, there's been a lot of pushback, even going back to Andrew Gelman, and to John Tukey and EDA, which is the late seventies. So people have been pushing back against this for a long time, but there were easily over a hundred years of these images being taken for fact before visualization researchers started to say: hey, wait, really what visualization does best is give you a perspective on the data, let you recalibrate or iterate on your research questions, and then go back to the data source again.
And so that's sort of my general response to that: there is no single image that should be taken outside of its context. It is the responsibility of all of us to ask why these people are presenting us with this image. What do they want our interaction to be? What do they want us to take away? And even if two visualizations look very similar, it matters who made them. It matters if it's a corporation versus a student project. It matters if it's a real estate company versus the EMP. And they can and do use very similar visual typologies, in part because these are the tools we have available to us, right? But part of what I'm trying to do, and what I teach in my classes, is to have the students always ask these questions, whether it's just a chart of your spending on your credit card bill or something more complicated, or if your spending is complicated, I don't know. Looks like we've got a question from Madeline, and then there are two in the chat that I'll read out after her. Okay. I have a comment. Since I'm doing a dissertation centered on text mining, this opens up a big can of worms for me: how I might come up with text mining metrics that capture the nuances of emotional language and keep the integrity of that reasoned kind of exploration of the text, but still honor emotions. Because the lexicon I use right now is AFINN, which is just a simple number rating that runs from plus five for very positive emotion to negative five for very negative emotion. And that connotes a hierarchy, and it marks some words as neutral that may not be. So that's a... I mean, the whole idea of emotion detection or, yeah, yeah. Sorry, go ahead. That was just my... Yeah, I mean, I guess, you know, it's interesting.
So I actually teach a methods of text analysis class where I teach my students how to do sentiment analysis, which is a different version of what you describe, but it absolutely is a blunt-force instrument that does not capture the range of emotion, messes up a lot of the time, and works best for only certain types of emotional language. And I think that, again, it's not dissimilar from what I said in response to Daniela's question, which is that context is always what matters, right? It is fine if the goal of your research is to get a rough shape of what emotion is happening in the particular corpus you're working on. But it's also crucial that as you do that analysis, you pay attention to the context: looking especially at examples that get high scores, low scores, and zero scores, looking at the particular types of language that seem to break the algorithm, whatever you're looking at, and taking that into account in whatever the final analysis is. You're a single student, or, you know, we're all limited by time, and many of us turn to these automated processes because they can either give us a big-picture view of things or allow us to move through data more quickly. And it's a reality that sometimes we need to make those concessions, but you always do need to compensate for the expediency by taking care to attend to specificity. That's what I would say to that. The only bad thing to do would be to look at the scores as some sort of objective truth, right? Yeah, and then move forward, rather than treating them as something to interrogate. Right, which I think, again, most people don't...
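[A brief aside for readers of this transcript: the bluntness being discussed is easy to see in code. Below is a minimal sketch of AFINN-style scoring in Python, using a tiny made-up lexicon as a stand-in for the real AFINN word list, which assigns each word an integer from minus five to plus five. The second example shows exactly the kind of context problem raised here: word-by-word summing ignores negation entirely.]

```python
# Minimal sketch of AFINN-style sentiment scoring.
# TOY_LEXICON is a hypothetical stand-in for the real AFINN word list,
# which maps each word to an integer score from -5 to +5.
TOY_LEXICON = {
    "love": 3, "great": 3, "good": 2,
    "bad": -3, "terrible": -3, "hate": -3,
}

def afinn_style_score(text: str) -> int:
    """Sum per-word scores; words missing from the lexicon count as 0."""
    words = text.lower().split()
    return sum(TOY_LEXICON.get(w, 0) for w in words)

print(afinn_style_score("what a great day"))  # 3
print(afinn_style_score("not good at all"))   # 2 -- negation is ignored!
```

The second sentence is negative to a human reader but scores positive, because "not" carries no score of its own: this is the sense in which the lexicon both imposes a hierarchy and treats as neutral words that may not be.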
Well, most people in academic contexts, given the time to think and make nuanced arguments, do not do. But we often do see corporate products either glossing over the fact that their algorithms do not work so well, or are in some ways discriminatory, or that they just didn't have the time or the right people or something. Absolutely, absolutely. And I think that the national, and even the international, conversation needs to become much more nuanced about what algorithms are good for, how far they get you, and where their work stops and the human work begins. But then that's another binary. Between people and algorithms? Yes, absolutely. Yeah, there's amazing work on this, and COVID is such an interesting example, right? There are always humans in the loop, right? Even the things that we think of as being totally computational have people tweaking things, always. People noticed in the first week of the shutdown how all of a sudden there was more spam in their Facebook feeds. And this is because the Facebook algorithm actually involves a lot of people individually pulling things in and out of the feed and fine-tuning the algorithm, and before Facebook figured out how to get those humans to work remotely, there was one week where the algorithm didn't work well, because it was missing its people. And there are tons of examples. Lilly Irani actually does great work on this, and I recommend reading her pieces; I think she's published some things in Public Books and elsewhere that are quick to read. Yeah. Thanks. Thank you for your question. Yeah. Thank you for the insights that your talk gave me. It's really helping me design my dissertation research. Glad to hear that. And helping me scope it out. Yeah.
It looks like our next question is from Brian, and it was a quick one: could you please show the data feminism takeaway screen again for just a few seconds? Oh, sure. I only last week learned about this portion of screen sharing, which has been a big revelation. Thank you. So then the next question we have is: how do we balance the feminist approach with the use of class, race, gender, et cetera in our discourse as digital humanists? I really like that question; it's a really good one. And so the first thing that I would say, clearly and emphatically, is that a feminist approach does not preclude these other approaches at the same time. In fact, a general feminist principle is that it's always both/and, right? You're often presented with false choices, and it's usually never the case that the choice to go forward with one thing precludes another. The other thing that I would say, and this is sort of a good thing and a bad thing about feminism, and actually one of the ways in which feminism has been critiqued for its outsized role among theories of inequality and oppression in theory land, is that because feminist theory has made the move away from issues of gender alone to the forces of power that cause gender inequality, that intersectional lens can be used to look at other categories of analysis, right? So that is sort of a second response.
And then the third thing that I would say is that feminist thinking offers one set of concepts and methods that can be mapped or translated onto thinking about data, but a lot of other theories of social inequality, of race, of racism, of sexuality, of disability, a lot of these other schools of thought also have their own models and approaches for making sense of the world, for cracking open what seem to be complicated and compact problems, and for giving you avenues of analysis. So you could think of how a disability studies approach to data might look at the larger frameworks and frames in which we encounter, interact, and engage with data and analysis, asking how these larger structures make assumptions about how we pass through the world or perform this work, versus how we may or may not actually be able to perform or pass through or engage with this work. You could think about how a queer theory approach might look at categories and organizational structures and systems, and at how the people or objects that don't fit into those categories, that break the system, actually perform a generative role in allowing us to question these normative assumptions. So I think what I would like the takeaway to be from this response is that you absolutely should balance a feminist approach with other ways of thinking about issues of inequality and social difference in the world, because each of these provides you with a different lens, a different way into understanding what are really complicated and layered problems that, again, will not be solved by a single approach either, right?
Like we need all of these people bringing all of their ideas together so that we can hopefully attempt to address and intervene in some of these really complicated challenges that we face right now. So I appreciate that question a lot. Thank you. That was great. Just double-checking to make sure I haven't missed any virtually raised hands. Any other questions? I think, Madeline, are you raising your hand? Okay, go ahead. So I see a paradox here. It's basically that the distinction between being inclusive and being binary and hierarchical is itself a binary. So the distinction between being inclusive and a binary hierarchy? Yes. Is itself a binary? Yeah. You know, this is a really good question. I actually have a slide about this; I'm going to go back over here. I skipped it, but I'll show it to you now. Okay. And so you're right. On the one hand there's the ideal of weighing all perspectives equally, and on the other there are the choices, whether intentional or not, that come from the need to prioritize certain voices and not listen to others. And a feminist approach would be to say that in this case you do need to make a choice, because we do not have infinite time, we do not have infinite resources, and it would just be very boring to listen to literally every single person who was impacted by a big data study, right? But what you do need to do is be very intentional about who you choose to prioritize, with the knowledge that it is a process of prioritization: you will hear from some people and you will not hear from others. And so the principles of feminist design would say that you prioritize the participation of the people who are most marginalized by the system.
So rather than look at means or medians, at the people in the middle who represent the dominant perspective, you deliberately look to the margins, because you will learn more, and differently, from the people whose experiences are further from the mean than you will by just listening to a whole bunch of people in the middle. And this is coming from Sandra Harding and standpoint theory; it's a pretty old feminist idea, but it's been adapted for design: there's someone in feminist HCI who really advances this, Shaowen Bardzell. And, again, this is a choice, and it introduces a different balance of perspectives into the design process. But a feminist approach would say this is the balance that you want, because ultimately you will get a better product, a more accurate analysis, a more widely used system, if you make these choices to be intentional about whose perspectives you bring to the table and whose you do not. But I think it's a really good question, because a lot of these principles are aspirational, right? And the reality is that we are all working under deadlines and funding restrictions and issues of promotion and job security. Everyone is dealing with the reality of their world, right? And so you're never dealing with an infinite span of time in which you can do everything perfectly. So I think ultimately it really just involves being intentional, recognizing the possibilities, and taking some time. And I actually feel like I do this a lot more now: I spend a lot more time at the beginning of a research project thinking through the possibilities, whereas my old approach was to just start, and then to iterate and let myself be redirected as inspiration struck.
But I really do think that spending more time at the outset to be intentional about your choices is the more equitable way to go. Thank you. Okay, we have one more question in the chat, from Nada. Oh, wow, these are good questions. Thank you, Nada. I mean, I would say it is a permanent path, but it's one that is endless, right? You never get to the end of this journey towards justice or equity or whatever the end goal is, and sorry for the screaming children. But the most important thing, and this is what Donna Haraway says, is to stay with the trouble, right? It is uncomfortable. It is often frustrating. You feel like it is taking too long or you're not getting to your goals. But feminism involves this constant series of questioning, and making intentional decisions, and recognizing when you're wrong and having to redirect. And in that way, there will always be more context brought to bear and more perspectives that help reorient you. And I think it's important to understand that a commitment to feminism is a commitment to keep on learning, to keep on listening, to be okay with people telling you that you're doing it wrong, and to not be dismayed by that and take your toys and go home, but to keep on going, even as that process may redirect you along the way. Thank you very much. That was really insightful. Well, I believe that is the end of our questions. Lauren, I want to thank you very much for your presentation. That was very thoughtful and very inspirational in many ways. So at 1:30 we will be opening up the workshops in breakout one and breakout two. Again, thank you all for your attendance. Thank you, Lauren. We'll see you around. Thanks so much. I really appreciate you inviting me, and it looks like a great conference that you have.
I hope that it continues to be very productive. Thank you. Bye, everyone. Thank you.