to you Matt and a warm welcome to the summit. Thanks, Maren. Hello everybody. How are you all? It's very strange asking questions into all of this silence, but I hope you're well. And yeah, thanks for asking, I'm really well too. I'm really pleased to be here this morning and it's a huge privilege to be able to introduce this morning's keynote, Charlotte Webb. As Maren said, I'm one of three co-chairs of the ALT 2020 conference. And I must do a quick shout out to my co-chairs who can't be here today, unfortunately. So a quick wave of hello to Farzana Latif and Roger Emery, because I'm sure they'll be watching the recording at some point. The conference would have been taking place, as you know, in a couple of weeks' time here in London. But we are all three really looking forward to, at some point in the future, being able to welcome you for another on-site face-to-face conference. And I hope to see as many of you there as possible. Obviously the trustees made the inevitable decision to postpone the conference back in April. But the summit has happened, and I'm hugely grateful to ALT for doing this and giving us an opportunity to both recognize and celebrate the remarkable achievements and the hard work that's taken place over the last few months amongst our community. I think it's really important to recognize that. But also the summit is providing a space for us to consider and discuss some of the big issues that are facing us. And this keynote certainly falls into that category. The main task of conference chairs is to come up with themes and keynote speakers for the conference. So back in December, in our early discussions, myself, Roger and Farzana kept coming back to a couple of topics in particular: inclusivity and ethics. And following on from that, one of the keynote speakers we signed up for the main conference was Charlotte.
So we were really pleased when she was able to rearrange and was free to come and talk today at the Summer Summit. Charlotte and I are actually colleagues. We both work at the University of the Arts London. If you don't know the university, it's quite big: it's got several colleges and institutes. And although I was aware of Charlotte's work, we hadn't actually met until recently. Charlotte is co-founder of Feminist Internet, a social enterprise that focuses on making the internet an equal space for women and other marginalised groups. At our university, at the Creative Computing Institute, Charlotte has led the creation of a really important and interesting new master's degree, the MA Internet Equalities. She's also created a FutureLearn course, Design a Feminist Chatbot, I think it's called. So a great background, and it's a real privilege to be able to introduce her. As I said, from what I know of her work, the chats we've had recently and, actually, my sneaky preview of her slides, I know it's gonna be a very important keynote. So please get your welcome emojis at the ready. And before I hand over to Charlotte, just a reminder that there's gonna be lots of opportunities for questions, so please do post them in the chat. We'll be pausing partway through, actually, and there'll also be Q&A at the end. So please fill the chat with a warm virtual welcome to Charlotte Webb. Hi everyone, thank you so much Matt for that lovely introduction. Just sharing my screen now. And thank you so much to the whole team for putting on this brilliant conference and for making it feel so welcoming and friendly and amazing.
And I particularly want to say thank you to you, Matt, because when Matt and I spoke a few weeks ago, he really made it clear just quite how much work the whole digital learning community has been doing over this period of crisis, and how much of a toll it's taken on people, sort of physically, emotionally, technically. And so I just want to really, really salute everybody for all of that work, and hope that you can have a little bit of rest before next term, which is also going to be challenging. Maren and everyone, I can't actually, at the moment, see the conference chat or anything whilst I'm sharing my screen, which didn't happen in the test. Is there a way that we can resolve that? I'm going to call this a "Martin Hawksey runs down the stairs to help fix it" moment, so Martin, over to you. I'm not entirely sure how you would have been able to do that, but we're more than happy to keep an eye on the chat for you, Charlotte, and keep everything in. I'll just go for it then. Okay, and let me keep an eye on the time as well. We'll make sure to give you any audio cues, so Matt and Maren and I are here, so don't worry, just focus on your things. Okay, thank you. So, as you know, I'm Charlotte and I'm the co-founder of Feminist Internet, which is a collective of artists and designers, and our aim is really to disrupt inequalities in internet systems, products and services by educating and equipping both the people that build them and the people that use them. And we do that by bringing together technology, feminism and art and design practice in workshops, in consultancy and in creative product development. We started out in 2017 at University of the Arts London as an educational experiment.
So we designed a 10-day curriculum for students all across the university, and they had a two-part mission, which was to collectively write a feminist internet manifesto and prototype creative responses to issues of inequality that they identified in a sort of rapid research phase. And we showed the results of those 10 days in a seminar, and Feminist Internet has just magically and extraordinarily snowballed ever since then. And these are some images of some of the stuff that we've been doing. We've actually been all around the world running workshops, giving talks, meeting people, and we have worked with corporations, with NGOs, with cultural institutions; we've developed a podcast, we've built a chatbot, we've run lots of events, and it's been a really amazing and extraordinary journey. But I think the most amazing thing about Feminist Internet for me is the people in the collective and the way that we are able to navigate what can sometimes be quite challenging terrain as a group. And I think for me, it's that group dynamic that creates the care that this conference has been exploring, and the care that I think is really needed to face up to social injustices. So today I'm gonna talk about how I think that feminist approaches can help us understand and intervene in some of the ways that inequalities get amplified by technologies. And I'm gonna talk about that both in the context of the crisis and also in relation to Feminist Internet's work. So I wanted to begin with this question, and I was going to be monitoring what people were saying about this in the chat, but I can't see it, so maybe someone can tell me. Hello. I was gonna say, sure, we'll monitor the chat for you. Thank you, brilliant. Yeah, so I'm sure that many of you who were in Angela's amazing session yesterday already have answers to this question, because it's what we were really discussing.
But for me, one of the things I've been reflecting on so much over this period is the fact that algorithmic systems and the pandemic both expose and amplify structural social inequalities, even though they appear neutral, right? Think back to how much that phrase "the virus doesn't discriminate" was used, certainly at the start of the pandemic. And we discussed this yesterday in the context of the A level results. And I think Angela made a really amazing point about how the current government is so in thrall to technologies that they perhaps don't always see how problematic they can be. But I actually wanted to give another example aside from the A level results, because it's been studied in a bit more detail. Sorry, the reference is a bit cut off, but all the references are in the Google Doc that I shared earlier. So in 2019, researchers published a paper in the journal Science that showed how an algorithm called Impact Pro was racially biased. This is one of the largest commercial risk prediction tools used by healthcare providers in the US: it's used on about 200 million Americans. And what this algorithm did was flag patients that might benefit from extra care, based on how much they were predicted to cost the medical system in the future. And relatively healthy white patients were more likely to get priority over sicker black patients, because Impact Pro relied on previous spending, and that was historically lower for black patients because of systemic inequalities in wealth and in access to healthcare. And what was really important about this case was the fact that Impact Pro didn't actually explicitly use data on race to make its predictions. The data was taken from historical insurance claims and included things like age, sex, insurance type, diagnosis codes and things like that.
And because cost, or previous spending, was used as a proxy for health, the problem was that what a person costs the healthcare system obviously depends on the barriers that they face to accessing it, right? So for black patients, those barriers included things like having less access to transportation, competing demands from jobs or childcare, perhaps having less trust in the healthcare system itself. And so by using cost as a proxy for health, the algorithm reflected these biases without even collecting data on race. The good news about this story was that the manufacturer of the algorithm actually read this paper and worked together with the researchers to tweak the algorithm. And by doing that, they were able to reduce the bias by 84%. So going back to what we were talking about with Angela yesterday, tweaking the algorithm isn't going to fix the problem, because the problem is in those disparities in access to healthcare amongst marginalized groups. But it is really heartening to see that if you design the algorithm slightly differently, you can get a much better result. And what I've been thinking about is how COVID-19 has sort of become another kind of system that, oh no, sorry, that's my calendar notifications, you don't want them. It's become another system that brings these intersecting inequalities really starkly to light. So here are just some examples, statistically, about women and COVID-19, from a researcher at the University of Exeter. Actually, I will read the stats, just in case anybody can't see the slide. So women are 81% more likely to have had an anxiety attack than men, 8% more concerned about spreading the virus than men, 96% more likely to have lost their job, 14% more likely to work in close contact with infectious diseases, and so on. And they also donate to food banks 31% more than men. Obviously racial inequalities, as we were discussing yesterday with Angela, have also been exposed.
So 70% of UK health and social care workers who died were from an ethnic minority. In the UK, black African groups were 3.24 times, and Pakistani groups 3.29 times, more likely to die. In New York, coronavirus was killing black and Latinx people at twice the rate of white people. And black, Asian and minority ethnic people are 54% more likely to be fined under coronavirus lockdown rules than white people. And so, as with Impact Pro, race and socioeconomic status are clearly intersecting factors here. And as Angela said, no, this is not a genetic disadvantage at play. So, so far, a little bit bleak. Sorry about that, but I wanted to sort of set out the canvas for the rest of the talk, which is going to look at how I think feminist approaches can help tackle some of these social inequities and create the kind of care needed in a crisis, or just in complex times. So our motto at Feminist Internet is: there is no feminism, only possible feminisms; there is no internet, only possible internets. And for me, possible feminisms are different manifestations of the fight for gender equality and social justice that respond and morph in relation to cultural shifts and changes. And they're possible feminisms because there isn't one way to enact them. And this motto is really getting at several different things. So firstly, the feminism that we're grounded in is intersectional feminism. It looks at the different structures of oppression and how they intersect with each other to form unique modes of discrimination. That's a really important kind of grounding for us. So we can't, for example, look at gender without also looking at race or class or other defining characteristics.
It's also pointing to the fact that although there are obviously common concerns that unify the feminist cause, things like pay equality or reproductive rights, differences in local axes of power, in local contexts, in political regimes, obviously lead to differences in the focus and application of these feminist causes. So basically, what feminism means and what the cause looks like depends on who you are and where you are and what the struggles are that you face. It also points to the fact that not all women who fight for equality day to day would label their actions as feminist. And I'm gonna come back to that point at the end of the talk. But I actually have a question at this point for everybody, and I might stop showing my screen just so that I can see the responses. I wanted to know how many people feel that they identify with feminism in any way. Do you see yourself as a feminist? Do you see yourself as an ally? Or not at all, or are there reasons why you might not? Yes, oh yes, oh yes. Okay, strongly. Okay, wow, this is amazing. We've got like a hundred feminists. This is extremely cool. "Yes, but don't like the word as it has too much baggage." Thanks, Anne. Yeah, that's a really good point. And one of the things that we do have to do, particularly in, dare I say, corporate contexts, is get past the barrier of those negative associations and connotations with feminism. Charlotte, when you've shared your screen, you can then click on the chat icon at the bottom right. I really hope I can do that, because I didn't see any way to do it, but let me try. I don't see how I can do that. Okay, I'm gonna just keep going. Okay, so the other thing that this motto points to is the fact that dominant narratives of feminism really do need to be dismantled, and histories retold, to center the very vital activism of women outside of white and Western feminism. And I see those other histories also as kind of possible feminisms.
And this issue of history and the constructed nature of it was a brilliant discussion point with Angela yesterday as well. But this point is really, really beautifully articulated by an African proverb, which was quoted in an interview with Moana Hamisi-Singhanu, who's the founder of an African women's COVID-19 hub. And this is the saying: until the lion learns to write, every story will always glorify the hunter. So as long as dominant narratives stay dominant, they are going to rule. So it's really, really important that we contribute to foregrounding these alternative histories. And I'm sure that many of you interested in decolonizing the curriculum are doing that already. Really sorry, that's the answering machine going, so hopefully we're not going to hear my parents' friends leave a message. Anyway, I had an experience recently which taught me something I think really important about this. So as Matt mentioned, I've recently created a course for FutureLearn, in partnership with the CCI and the Institute of Coding, which is called Design a Feminist Chatbot. Shameless plug: you can go and take this course for free right now. But in this course, there was a video that introduced the, or rather a, history of feminism. And it was there because we knew that lots of people coming to do the course wouldn't necessarily have any grounding in feminist history or theory. And it included the conventional narrative about the different waves of feminism. With that said, I tried really hard to communicate that Western feminism was historically very exclusionary, that black feminists had to really fight for recognition in the movement, and that dominant accounts of feminism have been centered around white middle-class women. But there was very, very robust criticism of this video, because it wasn't written from a global enough perspective and it wasn't decolonized enough. And that was absolutely right. It was a huge oversight and a blind spot.
And initially, when the comments about this started to come in, I felt pretty terrible. I felt really called out. I felt really ashamed that I'd had such blind spots. But once I got past that, it was a really, really good opportunity to engage quite meaningfully with the community. And I took very active steps to rewrite the content and to be transparent about those changes and why they happened. And that had an extremely positive response. And I'm just sharing that because I do really feel that feminism is a process, and that we've really got to learn as we go and be accountable for that learning. And you can't know everything about every struggle, but what you do have to know is that you shouldn't really be at the center of every debate or every form of activism or every cause or every form of cultural production. And we all will have blind spots. And usually the best way to deal with them is to involve as wide a group as possible. So I was just really, really grateful for that opportunity. And following on from this point about who should be centered in forms of activism and resistance, and about blind spots, I just really want to acknowledge the crisis of racism that's been playing out, and obviously it's nothing new, but playing out so starkly against the backdrop of COVID-19, because it's raised so many questions for so many people and so many institutions, which is a good thing. And I was really, really grateful for Angela's brilliant discussion of this yesterday. There's obviously a huge amount of work that individuals and communities and institutions need to do to address all of our complicity in racial injustice. And I know that lots of you will have started that work, or been doing that work, in your own contexts. And I know that we've got a way to go in Feminist Internet as well, to make sure that we're building anti-racism into everything that we do. But I just wanted to show or share a few things that we did during this period.
We wanted to respond very practically following the death of George Floyd, because we knew that although it was extremely difficult, we weren't living through the sort of acute trauma that our black friends and peers and colleagues were. So one of the things that we did was write a template for people to email their MPs and ask them to put pressure on Parliament and stand in solidarity with the Black Lives Matter movement, and the letter had some specific requests in it, including education reform. You can access it at this URL. And if you go to that URL, you will also see a link to how you find your MP. And actually, the ex-boss of one Feminist Internet member sent it to Corbyn and got a very nice, comprehensive reply. So that felt really amazing. And this particular post on Instagram had about six or seven times more engagement than typical posts, so we knew that people were really keen for these kinds of practical support mechanisms. We also opened up our direct messages for questions from white people who were trying to make change, who were feeling confused and didn't know what to do. And we were inspired to do that by Gabby Edlin, who is an amazing activist who runs an organization called Bloody Good Period, which I recommend that you support. We did this because we and Gabby had seen that there was just this exhaustion in the Black community from being constantly asked what to do about racism by white people, and so much discussion about how that labor should not be put on them, especially at this time. So we opened our DMs, and we got lots of questions from people, and we created amongst ourselves a Google Doc which we used to collectively write replies to all of these questions. They were things like: what do I do about a family member that just doesn't care about this and just thinks it's pointless? Or how do I approach black friends about how to help, or what can I do without being tokenistic?
And so this was a real process of us trying to come up with responses together, provide resources and help people through the situation. And then finally, we organized a discussion group about strategies for social media, particularly as there were so many questions raised around Blackout Tuesday and how the use of the Black Lives Matter hashtag on that day was actually extremely problematic for the activist work that was happening around it. And again, we shared a Google Doc and collectively built some strategies, and talked about things like sharing triggering content, or avoiding reinforcing stereotypes, and things like that. And so we know that this is not going to make a significant dent in the movement, but we just wanted to do the practical things that we could, and we're continuing to think about more things that we can do and more ways that we can embed anti-racism into our organization. So I want to pause at this point just to see if anyone has any questions or anything. Okay, we've just had one question come into the chat actually, Charlotte. So you probably still can't read them, so let me read it out. It's from Tim Franson. Are you having a look anyway? Having a look. Okay, I'll give you a second to read it; it's probably easier that way. Tim: hello Charlotte. Your thoughts on the concerns of tech giants and commercial platforms, including the ones we're using now, and associated values/cultures that have colonized the university's communications infrastructure, which has been further entrenched during the COVID-19 crisis. It seems like that's a kind of two-part question, I don't know. Tim, would you mind asking your question? Because I'm not sure whether it's a two-part question about tech giants and commercial platforms. Well, I'll give Tim a mic if I can. That would be great.
But in the meantime, while we give Tim a mic, there are a couple of other questions in the chat, Charlotte. One of them was from David White, who asked: should the lion use the same language as the hunter? So a bit of a metaphorical question from Dave there. We also have a question from Moira, which is around whether you see AI having any useful roles so far. Okay, so to the question about should the lion use the same language as the hunter, thanks Dave, a metaphorical question. I think that the lion should be guiding the hunter about language, and the hunter should learn to listen to the lion about language, because, well, first of all, the lion isn't one thing. The lion is a hugely diverse, multiplicitous group that doesn't have one perspective. Like yesterday, we were talking about the language of BAME, for example. And, you know, within communities there are different perspectives on that term and how it functions. So I think it's a question of the collective voice being recognised as something that is not homogenous, and that should lead the hunter to thinking about language in new ways, right? Okay, thanks Charlotte. I'm slightly wary of time, so I think we'll come back to questions again at the end of the talk. So, yes, time's running away. Okay, so let's keep going. So one of the other things that biased algorithmic systems and COVID-19 inequalities have in common is that they can be, and are being, challenged by cross-cultural and intersectional feminist approaches. So just a few examples. This is the ENCODE project, which is a programme centred around the ideas and values found in Algorithms of Oppression by Safiya Umoja Noble. This is the Data Feminism project, which is, well, actually there's a lab now out of MIT. So this is thinking about data science and ethics informed by intersectional feminism. This is a feminist response to COVID-19 website.
This is a volunteer online data repository of information that's focused on feminist actions and principles, and policy responses as well. And this is the COVID-19 online hub, and Moana Hamisi-Singhanu says that Africa has no shortage of feminist scholars and analysts who can expose whitewashed and male-oriented COVID-19 responses. African responses to COVID-19 need to be guided by African feminist principles, which take care of those currently marginalised and lay the ground for policies to evaluate emerging marginalisation. And then wider global calls for feminist responses include the Kvinna till Kvinna Foundation from Sweden, which promotes women's rights in conflict-affected countries, and they've implored their government to advocate for a feminist perspective in the COVID response. A policy brief from Oxfam India recommends that a feminist approach includes explicit analysis of gender power relations, and the Hawaii State Commission on the Status of Women has introduced this feminist economic recovery plan. And what these things all have in common is that they advocate for an analysis of gender power relations amplified by the pandemic. They focus on the impact on marginalised groups. They centre gender equality and social justice in the process of rebuilding society, and they emphasise the importance of feminist approaches to policymaking. So those are broad, global feminist responses to the pandemic, which I have found incredibly heartening and interesting to watch. But I wanted to now move on to talking about some of the ways that I think feminist responses can help specifically when thinking about technology development. And one of the key methods that we use at Feminist Internet is a tool called the Feminist Design Tool, which is a set of principles that we've developed over the last couple of years.
They were originally inspired by a piece of work by Josie Young, who is a feminist AI researcher and now also a program manager in Microsoft's Ethics and Society team. She wrote a paper back in 2017 called the Feminist Chatbot Design Process, which aims to help chatbot designers avoid perpetuating bias in their designs. And we were really, really, really inspired by this. You can access the document at this link if you want to look at it now. But we could see that for it to be really practically applicable for students, it might need to be made a bit more accessible and have a kind of graphic treatment. So we did that. And you really don't need any previous experience or knowledge to pick this up and to use it. And you can adapt it to a range of processes; I've even used it in a bra-making workshop, which was a lot of fun. So it has eight sections, and they all contain prompt questions for anyone that's designing a piece of technology, or a bra, or whatever it is. So the Stakeholders section is about thinking really carefully about who we're designing for, considering any specific needs or barriers they might face, and asking if there are any opportunities for meaningful co-creation or participation. And this section was originally called Users, and we had a lot of discussion internally about that term, because we felt in a way that it emphasized the idea of people as less active consumers of technology rather than creators. So we tried out the term stakeholders instead, and we had, generally speaking, a very positive response to that, because we thought that it was a way to imply more strongly that people have a stake in the technologies that are being created. But one participant on the Design a Feminist Chatbot course pointed out that amongst indigenous communities, the term stakeholders has very negative colonial connotations.
So again, another example of language and how it needs to be so sensitively handled and understood from many different perspectives. So going round to the right, the Purpose section is about encouraging people to try to design technologies that improve rather than damage environmental or socioeconomic outcomes, and to consider whether the technology actually does address the needs of the stakeholders that are being designed for, or ideally with. Context is about looking at how technology obviously doesn't sit in isolation from political and socioeconomic power dynamics. So an example would be track and trace technologies being developed in a context where black and minority ethnic people are 54% more likely to be stopped for contravening COVID rules. Team Bias is about looking at our own assumptions and biases, reflecting on them, and trying to be very aware of how we might be unconsciously embedding those into our designs. Design and Representation is about considering whether we're reinforcing any stereotypes with the way that we are actually presenting technology, so think of Alexa being typically characterised as female. Conversation Design considers what a feminist voice would be. That's obviously a bit more of a philosophical question, but there are also more practical prompts in there about empathy. Data is about monitoring data bias throughout any kind of process, and Architecture is really about acknowledging the sort of ecosystem in which technologies are developed, like infrastructure, hardware, software, et cetera, human labour, and trying to look at where there might be inequities in that whole supply chain. So I'll show you a few examples of how this tool has been used with students over the past few years. Firstly, in a fellowship at the Creative Computing Institute in 2018, where we were looking at designing a feminist Alexa.
And this work was really a response to the ways that personal intelligent assistants like Alexa are problematically characterised as female, and also, especially at the time, were poorly designed to respond to the extraordinary amounts of abuse that they receive. So we designed two three-day workshops for students to explore these ideas. And what we did was map the tool to every stage of the workshop, as you can see here. The reason why we wanted to map it to the workshop was because we really wanted a way that students could be supported in evidencing feminist decision-making throughout the process, which can be really, really difficult. And we really encouraged them, when they were presenting their final ideas and prototypes, to talk about the ways that the tool had informed their decision-making. So we ended up with eight imagined voice assistants. And you can watch a video about this on the link here, and the report of the fellowship is, I think, in the previous link. And this is just an example of a specific design decision that responded to a particular part of the tool. So the Conversation Design section asks people: if the device receives abuse, how will it respond? And in this case, in their conversation design, the user had sworn. The user says, "For F's sake, Page, that's not it." Page was a voice assistant that would help students find credible research sources. So the person that's using it swears at Page, and Page says, "Please be polite. I'm not a human, but abuse is not acceptable in any way or form." Which would have been a really great response, or something like that would have been really great, for Alexa to give when it's called a bitch or a slut, rather than "I'd blush if I could," which was what it was originally programmed to say. We also used the tool in this Coding a Feminist Alexa workshop, which was really, really interesting and challenging. We did that with Catherine Breslin, who is a former Amazon Alexa team member.
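To give a sense of the kind of thing students prototyped, the boundary-setting behaviour Page shows could be sketched in a few lines of Python. This is purely a hypothetical illustration, not the workshop's actual code: the `ABUSIVE_TERMS` list and both reply strings are assumptions for the sake of the example.

```python
# Hypothetical sketch of Page's abuse-handling logic (illustrative only).
# Real conversation design would use far richer detection than a keyword list.

ABUSIVE_TERMS = {"bitch", "slut", "stupid"}  # assumed example terms

def page_reply(user_message: str) -> str:
    """Return Page's reply, setting a clear boundary when the message is abusive."""
    # Normalise each word: strip punctuation and lowercase before matching.
    words = {w.strip(".,!?'\"").lower() for w in user_message.split()}
    if words & ABUSIVE_TERMS:
        # A boundary-setting response rather than a deflecting or flirtatious one.
        return ("Please be polite. I'm not a human, "
                "but abuse is not acceptable in any way or form.")
    return "Okay, let's keep looking for credible sources on that topic."

print(page_reply("That's not it, you stupid bot"))
```

The design point the workshop was making is in the `if` branch: the assistant names the abuse and declines it, instead of the deflecting "I'd blush if I could" style of response.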
It was really challenging to teach people to learn Python in an hour, code an Alexa skill and learn about the tool, but we did manage it, and we created a booklet that takes you through the whole process, which you can access at this URL. I'm rushing because I'm really conscious of time; I've nearly finished. And this was an intensive chatbot course that we ran at the CCI last February, which looked at how we can create chatbots that tackle online abuse. We hadn't previously run any courses where we took one topic for everybody to work on, but we decided to do it here because we'd found that otherwise the research students did at the beginning of workshops was a bit thin. So we really, really heavily front-loaded this course with guest lectures from experts who had been working in the field of harassment. And I think that's a very strong part of the feminist approach: you're connecting work that's already being done rather than trying to reinvent the wheel all the time. I was gonna show you a video of that workshop, but I'm not going to because of time. So, we started out today talking about how technologies reflect social inequalities. And I know that one tool or one methodological approach isn't going to fix that, but we have really been finding that working with students within this framework is changing their perspectives and getting them to engage with tech development in a much more critical way, and it's been really, really heartening. And just to finish, I wanted to reflect on how I think some of the most extraordinary acts of compassion and justice and care are of course not understood as feminist by those who carry them out. And I really love this quote from Maimuna Jiang. She says: it's fundamental that we don't overlook women who don't have "feminist" in their bios but are resisting and defying in their own schools and workplaces.
That we do not forget the women who fight for equality without the theories and contextualisation, or the conferences, workshops and convenings. And it really makes me think that possible feminisms already exist in a lot of places where we might not have expected them, and the more we recognise and celebrate them, maybe the more likely it is that we can foreground the lion and not the hunter, to go back to that metaphor. So thank you so much for your attention, and I'm gonna stop sharing my screen now. I'd really love to hear your questions. Thank you Charlotte, thank you very much. There's loads of questions, Charlotte, but first I just wanted to say thank you for an excellent keynote. I'll move to questions, but I also wanted to let you know that a 14-year-old and a 16-year-old who've been sitting in the background in two rooms of attendees have really enjoyed the talk too; their ears pricked up and they wanted to listen. So that was really nice. Thank you so much. I'm just gonna go back to Tim's question, because obviously I cut into yours earlier on. I don't know if you wanted the mic, Tim. You can give an answer if you want, go ahead. Matt, I've started giving everybody who wants to ask a question a mic, so it's up to our participants whether they want to use the mic or would prefer to pose the question in the chat. Okay, so we'll start with Tim, if you're there, Tim, and if not, I'll move over to Richard. Hello there, can you hear me okay? Yes, we can. Hi Tim, yeah. Hi Charlotte, yeah. Thanks for the talk, really, really good. And it's good you're highlighting the fact that communication technologies are never neutral. I guess my question, rather than being about introducing this to the classroom, is more institutional. I think it's more straightforward delivering ethical things in the classroom space, particularly when institutions want to look like good corporate citizens.
It's more a question about the infrastructures of the university, the neoliberal university, and maybe just some of your thoughts on that. I very much like your feminist design tool, and I wonder whether, let's say at UAL, the institution has got a procurement design tool, or a procurement tool in a similar vein, or whether it's just doing the usual thing: let's Zoomify this, let's get Blackboard for that, and not looking at open-source alternatives for its communication infrastructure. Well, I really appreciate that question, because that's given me an idea for like 10 corporate client jobs we could help with. No, but it's a really, really good point. I do think that you can apply the tool, or an adapted version of it, to those types of processes as well. You could be thinking about, well, is a recruitment process being thought about through this kind of lens, and, as you say, is the choice of technologies being thought about through this type of lens? I am not responsible for deciding what technologies are used in an institution, so I can't answer that question from my own personal experience. I do know that, for example at the CCI, decisions about what technologies to use do consider ethical issues, but whether there are always alternatives that are viable and ideally privacy-preserving or ethical, I'm not 100% sure. And I think that's part of the problem, isn't it: I know there are problems with something like Zoom, but sometimes it's used just because it's the most practical and viable option. And that's an analogy for the whole question of, well, should we all be using Facebook or Google or Amazon? Should we be buying stuff from Amazon when we know they don't pay their taxes and it's a really dodgy organisation? But the trade-off kind of wins out for on-demand convenience and stuff like that.
So I think it's really, really difficult, but I really do like the idea of applying the tool to procurement processes. So thanks. Okay, thank you. I'm still trying to sneak a couple of questions in before the end, because I'm aware we probably need to finish on time for other sessions. There's a couple of questions, Charlotte, that got upvoted in the chat. So very quickly, Richard Price was asking: could the questions in the design tool be equally applied to other groups, such as ethnicity? Yes, but I do think that when you apply it in different contexts or with different focuses, you need to revise and modify it. And I would also say that if it was going to be applied specifically in that context, I would want to work with Black people and people of colour to make sure it was doing the right thing. I wouldn't want to import it without quite a lot of specific thought. Okay, thank you. Another question that was upvoted was: has anything changed with Alexa after all this work? Did anyone take any notice? Yeah, actually, there have been some changes, which has been really heartening as well. I think one of the significant moments was the UNESCO report that came out, called I'd Blush if I Could, and I think that had quite a bit of traction. So, yeah, a disengaged mode was introduced, they improved the conversation design around responses to abuse, and there are more options around voices that you can pick. I don't think it's necessarily ironed everything out completely, but I think there have been attempts to make things at least a little bit better. Right, that's really wonderful to hear. I'm going to sneak in one more question, but it was asked quite a while ago, and we might want Moira to take the mic. She was asking: do you see AI having any useful role? I don't know if Moira's got the mic, if you want to expand on that. Moira's just in the chat, so. Okay. Okay.
I mean, I do think that AI has the potential to be helpful in certain contexts. But as we were discussing with Angela yesterday, you really, really need to look at the structural and social inequalities that AI amplifies. And the main thing is to make sure that when we're building AI systems, those things are at the forefront of everybody's minds. What are the unintended consequences of the systems we're building? How do they potentially impact marginalised groups more negatively? How does the data being used in these systems already contain historical biases? So yes, I do think it has the potential to be helpful, but there are so many things that need to be considered along the way. And if you want a really, really interesting discussion about track and trace, listen to the Guardian's Science Weekly podcast episode about track and trace with Kali Kynes and Seeta Peña Gangadharan; it's brilliant. So check that out. Okay, I'll have to stop you there. It's always a sign of a good keynote when there's a hundred unanswered questions sitting in the chat. So really, thank you so much for coming today and doing a keynote for us. I'm sure everybody's going to join me in giving a huge thanks for today's keynote. Thank you so much.