David to you. Thank you, that's very kind. Well, thank you and welcome. We are a small group; nevertheless, I think the discussion can be more concentrated and more detailed that way, as I see it. So what we're presenting here is our data ethics framework for research-based learning in higher education. Since we are not that many, shall we introduce ourselves, a little about who we are and what our interest in the workshop is? I think that can give us a bit of context, if you're okay with that. Yep. Anyone who would like to start? Tom, would you like to start, maybe introducing yourself and what brings you to the workshop? Do feel free to grab the mic if you can, or just write in the chat, Tom, Tim, Gemma, any of you, if you have access to speak. It's working now. It is. Oh, yes. Okay, yeah, sorry. I had to go to the drop-down menu; I have a few webcams and mics and everything. Anyway, sorry. I suppose I've been a member of our institute's research ethics committee for a number of years, and I'd have to say ethics has certainly become a lot more important. And dare I say it, for those of us in the EU, Leo, sorry, the whole GDPR thing has really become far more important. And to be honest, I'm here to pick up any hints or tips. When I think back, a lot of us would look at things which we wouldn't have considered to be an issue even six, seven, eight, nine years ago; we certainly need to be far more cognizant of what we're doing with the data, and of what even constitutes data and data ethics. The problem I fear, though, is that maybe we start avoiding ethical issues rather than managing them.
Excellent, Tom, you're so attuned to what our perception has been and why this workshop is taking place. It's phenomenal to have you, thank you, Tom. Tim, or Gemma, would either of you like to... oh, Tim, there, you've got the mic. Yeah, look, I'm just here. I'm actually ramping up to set up some online research courses at the University of Adelaide, so I'm here to, I guess, learn from what everyone else is sharing at the moment. Great, thank you, Tim, that's great. Gemma, how about you? Hi. Hello, good afternoon. Well, in my case, I think this is very interesting because I've perceived that some researchers, some colleagues from my university, the University of Barcelona, are concerned about educational research data: what they can do with it, what ethical considerations they have to take into account. They express all these kinds of doubts in meetings, and that's why I think it's important to clarify these things for myself, in this case, and also for my colleagues, to try to inform them and to train our students a little as well, because they will be the future. That's why I think this is going to be very interesting. Thank you, Caro. Yeah, great, Gemma, great to have you here. I think we're all worried about the same thing, so it's brilliant that we are together. It doesn't matter whether we are small or big; I think we are the ones who need to be here, and at least we are on the same page. So on the next slide, we have framed the context of our project: what we are doing and why we are doing it. We're doing lots of research, absolutely, yes. And yes, Tom, the slides will be available later.
So the project is called Understanding Data: Praxis and Politics, and the name is, I would say, self-explanatory, if that is possible. The idea was really to think about a critical approach to data literacies: what do we need in order to unpack and uncover the politics of data? Once we are able to do that, I think we'll be more able to do data praxis, which means theory and practice coming together in our teaching; that is the idea. We have been doing pilots, so we developed an open educational resource, which will be available once the pilots are ready, because it's very organic at the moment. It's an open educational resource that lives in WordPress, and it has content that we have developed, and activities, and we are running it at different sites. The University of Tanghasa in Nairobi was one site. The Open University of Catalonia is another. Then the Universidad de la República in Uruguay, and Surrey here in England. So we're running these multi-site pilots, if you wish. And they have been very welcome; people are quite keen to learn about this. So what have we been noticing? So that we have more time to work together, can you go to the next slide, Leo? What I want to share with you are these sites where we are working. In South America the interest is massive; the welcome has been big, I have to say, in Uruguay, and in general Latin America is strong in paying attention to open data, to open innovation, to ethics while you work with open data. I think the criticality there is incredibly rich; we have enjoyed being in Uruguay a lot. The same happens with Nairobi. Although we had a smaller cohort in Nairobi, there was real keenness to work with open data and to be critical about which data: is everyone represented in the data sets? And is everyone recognized?
Is everyone equally important? The discussion there has been very rich. So, next slide please, thanks. Tom, you really said something that we have been thinking about as a group: the meaning of data has completely changed. Data is not the same thing we used to think about even five or six years ago; it has changed completely. Not only the meaning, but also the power it holds. So who holds the power and who holds the data? Who are the owners of that power? And what is the role of education in examining these uneven dynamics of power? What is our role in our teaching practice, so that we can awaken, uncover, or raise awareness that these things are happening? The whole world of data-driven systems and data-driven technologies is really taking over; they spread like a virus, you don't see it, but it's happening more and more every day. So really we are asking: in this context, what is data ethics? What is it? Here we would like you to think together with us: what is data ethics? How do you teach it? How do you do it? And, to begin with, we're very interested in hearing what the biggest challenges are of embedding ethics in research-based or problem-based learning. So we have set up a little Mentimeter. If you have either your computer or your mobile device, whichever makes you happier, go to menti.com, and once you are in Menti, put in the code: eight, three, two, five, seven, four, seven, zero. Then the question will automatically come up on your device, whatever your device is. Oh, sorry, yeah, that is easier; I didn't have the chat. Thank you, Javi, yeah, sorry.
If anyone has any trouble, please just let us know and we're happy to help. Yeah, exactly. I just wanted to ask: the person who wrote "need to address people", could you expand a bit on that? What do you mean by "need to address people"? I don't want to assume anything. Okay, that was me, actually. One of the things, when we think about doing research-based learning, is that we think about the data, but the data is not just a thing on its own. Mostly, when you talk about data related to people, the data is not the relevant thing; the relevant thing is the people described in that data. So don't treat data about humans the way you would treat, I don't know, data from transport or data from the environment. It's about caring for, thinking about, the people first. Absolutely, thank you. It's about people, it's really about people. And I didn't want to assume that I was understanding this, so that was my question. Great points, really. Changing mindsets, I think, is also quite important. Would anyone like to expand or explain? Great, Tom, go ahead. Yeah, I wrote the one about the fear of losing ownership, and about changing mindsets, particularly for people who come from, and maybe I'm damning the whole STEM area when I shouldn't, a sense of proprietorship: this is ours. But I do think Plan S in particular has been a huge game changer, or potentially the game changer, in that the EU is mandating it. I mean, the reality is, a lot of research is paid for by public funds, and yet it's hidden behind paywalls and the data isn't shared. So I think there are two things going on there. We need to educate people to go: by the way, you're on a 300,000 euro publicly funded project, so it's not technically your data.
But then, apart from the mindsets, I think people are not even quite sure how to do it. Okay, so if they have that data, how do they actually make it publicly available and accessible? They'd need to learn all about tagging, about metadata. And then there's the thing that, historically, I would have asked people to take part in research and the data would have been just for that project. Now, in theory, I'm making that project data available and it's going out into the ether, but a lot of people, myself included, wouldn't be exactly sure how we do that. So even if we get people to commit to changing the mindset, I think they need to know the practicalities. Okay, you've changed my mindset, now how do we actually ethically share it and do it properly? Sorry for rabbiting on a bit. Oh, excellent point, because in the pilots we're doing, one of the things we see is that the technicalities you were just pointing to, the "how do I do this", are as important as the criticality. They really go hand in hand, so it's a very good point. And we should be thinking about how we can integrate this, shouldn't we? The technicalities, the how-to, need to be integrated into our new approach. Great, would anyone else like to extend? Yeah, Leo, I agree with you. And I think this comes from part of the research we're doing. We looked into around 250 research methods courses, from quantitative and qualitative areas, and we found that most of the mentions of ethics, and of data ethics, though quite scarce and quite narrow, just referred to informed consent. Just make sure that you get informed consent from people; that was it. But getting informed consent is way more complex than just getting people to sign a form. To start, you need to make sure that the people you're working with or studying can read.
Let's start from that. Then, that they can understand what you are writing, that they understand the language in your writing, that they are not vulnerable, that they do not feel coerced into participating. So informed consent is just the very top layer of something that is indeed quite complex. I don't know what the rest of you think, but I can say that we found very little evidence that research methods courses in general include anything beyond informed consent; the other dimensions of data ethics are almost nonexistent. Yeah, great point, Javi, great point. Anyone wanting to add something? Yeah, Gemma, go ahead, please. Yes, I was thinking, now that Javi was mentioning this complexity of consent, that in this mobile age we are immersed in right now, it's also difficult because the concept of education extends beyond the traditional understanding. Sometimes, if we want to do something around mobile learning, it's really difficult to handle that immediacy with informed consent. I find it really difficult. I don't know how we can deal with it, but it's something that is really present in society right now. Yeah, it's an excellent point, Gemma. I agree with you. And also, if you take, for example, data from the internet, things that are already there: who owns that, and are the people aware? So many layers of complexity are being added as we talk. Tom is saying in the chat that he lectures in research methods as well as being a member of their research ethics committee. Thank you, Tom. So I suggest, because I think it's you, Tom, who needs to leave a little earlier, shall we go to our next activity? I think that's a good idea; it will help us deliberate and think. Yeah, maybe we can do it all together. I think that's a brilliant idea. Who shares the Google document? I can share it in the chat, I can put the link. Yeah, please.
Yeah, I'm doing it right away. There we go. So if you can go to this document, please, if you're so kind. Yeah, this is the shared doc. And if, yeah. So Javi, do you want to lead on this one? And I'll shut up. So, a little bit of context: we had expected to break into breakout groups and work on one framework per group, but I think we need to work all together as a collective. Yeah, great, Leo, thank you for that. Let me explain the background quickly. After we reviewed the 250 syllabi on research methods, we went and reviewed the data ethics frameworks that were circulating around. The doc is asking me to request access. Okay, let me open up the access. Caro, can you open up the access? I think you created this one. So what we did, basically, was that Leo and I mapped some really interesting data ethics frameworks that were being splashed around, to see who was publishing those frameworks, who was creating them, where they come from. And what we'd like you to do is look at them and say whether each framework, as it is, is useful or useless for teaching, or which elements could be useful or useless. Can you see them now? Okay, lovely. And I lost it; this is what happens when you have ten browser tabs open. Where is it? Great. So maybe let's all go to the group one framework. I think it's an interesting one; it has quite a lot of detail, which not all of the frameworks have, but this one does. And here is where I got lost. There is already... try again, or refresh. I did give editing privileges, but maybe you need to refresh. Let me check. Just a sec. Yeah, good. Okay, super. Yeah, can you just put an X if you find it useful?
Let's find a quick way. So the first one has what they call overarching principles; these are the main principles on which their framework is based. One is something like "consider if not collect, informed and proposed consent of data". Just give a one or a zero, let's play binary: zeros are useless and ones are useful, so it's easy for us to count. But also, as Leo was saying, you can write comments with the comment feature: any thought or concern you might have around one of these principles. Is it good, is it bad, is it weak? We want to get a feel for them. So let's take maybe the next five minutes, so we don't lose time. Okay. Yeah, five minutes would be great, because that would make it 35 past three, I think; five minutes seems little, but it is a lot of time. Gemma, can you still not edit? Can you leave comments? Let me see, I think I can. Yeah, there are comments already. No, Gemma, let me see, I can give access to Gemma because I have her email. So, let me see. And I have access; everyone has. Okay, because Gemma was saying in the chat that she couldn't. Now, Gemma, if you refresh, I gave you access. Yeah, I can see the G. You can just leave comments; I think that's faster, if you want to. Tim, are you okay with this activity? We have a Tom and a Tim. Tim and Tom. Take your time, Tim, I agree, read it, and don't feel rushed. You don't need to go field by field; you can just give an overall comment on the entire framework. Great, comments are being added, love it. Thank you for your contributions, that's great. Gemma, you can leave comments if you want.
Still a couple of minutes to go, so take your time. Yeah, exactly, Leo, good point. That's our plan: right now you're looking at anonymized frameworks, but then we'll show you which framework each one is and who produced it. So if you can, go on to the second or third or later ones; just wander around the document. We didn't write any of this, don't worry. You know, I was thinking, Javi, the wording is also interesting, because that makes the interpretation easier or harder. So it's a good point, what was suggested about the wording. And they have different approaches: some bits are for data, some bits are data for artificial intelligence. So have a wander around the document, because you'll find they are quite diverse. Okay, what do you think, for example, of the last two, number six and number seven? Just give it a couple of minutes so you can visit the ones at the bottom. Honestly, I'm very grateful to have them anonymous at this stage. What do you say, Leo and Javi, one at a time? I was just saying, I like that we have them anonymized. Oh yeah. Yeah, that's a really good point. I'm on group three, Tom. I will give you my impressions later on, because I have lots of impressions after reading and re-reading these. Yeah, Tom, you're right, some music in the background is always good. Okay, so whenever you're ready, we can tell you the story behind all this, if you want to hear a story; not really a fairy tale. So Leo, can you go back to the presentation and disclose where they come from? Some of the principles you saw come from governments: the US government, the Australian government. Some come from the private sector, from big tech. Some come from think tanks.
One is from DataEthics, an economic think tank that is part civil society, part academia. And the last of the frameworks you saw is Data Feminism, by Klein and D'Ignazio. So what do you think? Does anyone want to take up the mic? Tom, there is a summary; some of them have details, some of them don't. Tom, do you want to say what you think about this whole experience? Yeah, I mean, it's really good. And you're right, some of the language there is a little bit vague. I mean, six or seven, I had a look and I thought they were so vague as to be not much use. I wouldn't even call them principles; they're just words, maybe. Some of the stuff, and I don't mean to sound disparaging, is like saying: would you like to see world peace? What are you going to say? No, I'm an arms dealer? Some of it is a little bit over the top. Sometimes I actually just didn't know what the language meant, to be quite honest. And some of it, I don't know whether your average researcher, certainly social researchers, will actually have the technical knowledge. Some of it actually made me feel a bit stupid, like I'm not very good at my job because I don't know this stuff. So I think frameworks are brilliant, and I actually think we need guidance and structure, but not to the extent that they're so vague they just sound like noise, or so technically prescriptive that we end up with only two people and one dog able to do the research. Yeah, sorry, I don't know if I'm rambling on. Yeah, for me, sometimes what I could see is that they put a tech guy with a lawyer to come up with something. They needed to come up with something, so they came up with something. Sometimes it feels a bit rushed and not talked through.
It also seems to me that some of the people who wrote some of these, mostly the ones from the private sector, have never taken a course in research ethics. So basically it's all: look what we do, how we do it, we make it fair, we tick some boxes, we look cool, we look sustainable and woke, and that's it, move on. So we find them sometimes a little bit vague. But we're seeing all of these together, and of course our colleague who is not here presenting today, Cristian Timmermann, is a philosopher who does bioethics, so we've been working on and developing this through a lot of philosophical research as well. So if we can move forward, Leo: we're thinking of asking you to discuss our framework, to see what you think and how we can improve it. I don't know if you want to do it now. Can we just quickly have a look at it? Caro, what do you think? Yes, I know that Tim, or Tom, sorry, I've mixed up who needs to leave earlier, but I think it's interesting; maybe four minutes could be good timing to just look at our initial concepts. They're not set, and we are learning from what we are hearing from you in this chat. So yeah, I think that can be a good thing. Just to let you know, these are the short descriptions; we have longer ones, but they need to fit into today's small timing. So Tom and Tim, can you see the Jamboard? Are you on it? I just emailed my class, so don't worry, we've got a few minutes. Thank you. And I'm enjoying this immensely. You're right, we definitely need frameworks, but good ones, ones that work. Okay, so I'm in the Jamboard, yeah. Okay, so the idea: on each slide there is one concept, or dimension, however we want to call it. And there are two things we would like to know. What do you think about it?
Is it too vague, is it too narrow? And is there anything you could do in your class to action, enact, foster this principle or dimension? That's what we want. So the first one is respect autonomy. It's about teaching students, and actually teaching ourselves, to enable others to make informed decisions. And this is, again, complex: if you're working with a community, make sure you're speaking a language the community understands. If you're working with children, with vulnerable people, with people from minority backgrounds, and you don't put things in a clear way, they cannot make an informed decision. But it's just to have this discussion: what do you think? Yeah, it's good, Leo, it's fine, just take a look, I would say. Yeah, the trick will be how to do that enabling. I think the first thing is having honest conversations, and, for example, showing students informed consent forms that they themselves won't be able to understand, so they feel the impact of being excluded from a conversation. This is the kind of work I'm doing with some of my fellows at the Latin American Initiative for Open Data: talking about how to produce informed consent for people who, A, may not speak Spanish, B, may not be able to read, or C, both. Yeah, I agree. The trick will be how to do the enabling. And that's why this is not a quick fix; it needs deliberation and thinking. I don't think this can be done like: okay, write a framework, throw it out there, and here we go. It needs deliberation and a thought process. Yeah, and I can see that in number three: how far do we have to go to ensure this, with more and more research but limited resources? Yeah, well, this is mostly about when we work with limited resources.
So I'm trying to think and speak at the same time, and I'm not very good at that. It's about how, when you work with certain data sets from complex backgrounds, you try, when you process and analyze the data, to treat them equally, not better or worse depending on the group of people who produced that data. So this is the concept of fairness: you treat everyone as equal. It's very related to preventing bias, in one way or another; it's about making a fair analysis. Can we go to the next one? Yeah. So respect privacy is about understanding that not all information needs to be put in the public sphere, and that it's important to protect people's privacy and dignity. The other day I was talking with some colleagues on a committee, and a researcher said: but I have asked for informed consent from my participants. His participants were people who work in the prison system in the UK. Well, you ask them to talk about their work, but their work is directly related to people who cannot give informed consent, because they're in prison. It's like tertiary data: if you collect data about people who work in the prison system, those people shouldn't be disclosing stories about the inmates, for example, because the inmates haven't given consent for those stories to be shared. You see where I'm going? Yeah, fair point. It's a very good point. So you respect the privacy of everyone, and their stories shouldn't be told in public; they don't belong to the public space. There was a case in Argentina not long ago where schools decided to use predictive analysis, with the help of a very big tech company, of course, to predict potential teenage pregnancies. And they were exposing a lot of information about these girls, who never gave their consent. And they were saying things like: the girls from this neighborhood tend to be promiscuous. It was an awful way to put things.
Can we go one more, I think? Yeah: who determines acceptability? That's a very, very good point. I think that will need a bigger discussion with the team. You know, what becomes clear to me here is how discussing with people interested in the topic enriches the view of each of these principles, or concepts, or dimensions. I think that is really, really important. Yeah, and for do no harm, I think training is really important, mostly when you do data analysis. This is another thing; I'm just giving you a preview of our paper, but in one way or another, what we found when we reviewed 80 data science programs was that only 14 of them, I think, if I'm not mistaken, address ethics. So they teach people how to analyze data, but not how to analyze it ethically. So the principle of do no harm, if you can move forward to do no harm, needs training, mostly in the data science programs. What do you think? I can't hear you, Tom. Oh, sorry, I was typing when I should have unmuted my microphone. Yeah, I mean, I just had the next point there as well: a data subject should be in a position to decide when and what data they wish to disclose, and to whom. And my take on that: I've just been doing a large survey in my own university, and I put in this quite long information letter, but some of the students who know me emailed me back and said, oh my God, Tom, I was happy to do the survey, but I didn't want to read another essay. So, some of the stuff here, I don't disagree with it; it's finding that balance. And you're right, we need to consider, as I said, an increasingly diverse population. But I think some of this stuff is absolutely fine when you're doing one-to-one stuff.
But when you're sending out a link to maybe 3,000 students, you're just hoping that some of them will click on it. So do I provide them with an attached information letter? Which I did. And informally, I asked some of them how many had read it, and of the five people I asked, none of them had read the information letter. Yeah. But in the survey I also had a really stripped-down version of the informed consent, so I was working on the assumption that they may not actually read the attachment. And that's the thing, I suppose: it's about balancing, isn't it? Yes. Guys, can I reach out to the three of you? Because I really would love to continue this discussion; I'm so glad I came along. Yeah, it was fantastic to have you here. No, I've really, really enjoyed it, and I'm sorry, but I'd really better run to my class now. But look, Leo, can I message you? And then I'll get the contact details. Absolutely, yeah. And Leo, can you send them a copy of the data ethics chapter? Because we're just about to release the chapter today, so we'll send it to them. Okay. Sorry to run. As I said, I'll hook up with the three of you; Leo, I'll reach out to you and we'll talk soon. God bless, everybody. God bless you, bye, Tom. Take care. Bye-bye. Leo, can we go back to the presentation? We have about eight minutes, and I think it's time to give the mic to Gemma and Tim as well. Go back to the presentation and then we can keep going. Gemma, Tim, do you have anything to say? Because we really want to talk to you. Can you move forward, Leo? Yes, I was sharing something, but I think you changed the slide. I'm lost now; I don't know where I was adding the information. In the Jamboard, I think.
Yeah, but it was before; I think it was respect privacy. I wasn't there yet and you were two slides further on; sorry, I wasn't going that quickly. I was saying there that it's also necessary to guarantee reputation and trust, so that users know that you are, of course, respecting their privacy. But I don't know what else to comment there. And I think there was another one I contributed to, the first one, respect autonomy, about empathy. I was commenting that one activity can be to think of a reverse case, one that directly affects the students, because empathy is a tool to make them understand why it's important to respect autonomy. I was thinking about your class activity, I mean, something practical. And I completely agree with you: when things happen to you, then you can see how they affect others. So if you take away someone's autonomy at some point, then that person will start to value it. That's really good. Thank you so much, Gemma. It's okay, thank you. So yeah, this is our little framework. We will be publishing it hopefully quite soon, as long as we have some time to finish the paper. We'll talk about that. Leo, I don't know. Oh, Caro, if you want to contribute; I think I spoke too much and I'm tired. Yeah, having had this experience, I realize more than ever how useful it is to have people thinking together with you, because it really gives you a different dimension. So I just want to say thank you for being here. It doesn't matter if we're three or four, or one or two; it's so valuable to have the insight of someone who has never seen this and whom we maybe haven't talked to. So I'm thinking ahead about what other things we can do within our community to get people thinking about these dimensions and sharing their thoughts and views, because what we did just now was really valuable.
Yeah, I think it's been so interesting. Obviously we were hoping there would be more people, but even with small numbers, it's really fascinating to get that feedback. I wrote in the chat about how it's quite easy to critique the vagueness of the language in other frameworks and ask, what does this really mean? But it's really valuable to have people look at your own framework and say, I would need to know a bit more, because it's tricky; it's always open to interpretation. And here is exactly where I think the value is: what do other people interpret from what you have written? We are so biased all the time; we write and we just assume. So I'm incredibly happy that we had this workshop, and I'm incredibly grateful to Gemma for being here and sticking with us until now. I do think it would be great to extend these kinds of activities a bit further outside; it would really be very helpful for us, I am absolutely sure. Yeah, and I think we'll need to run some webinars on this at some point, just to pilot it for a while. Yeah, absolutely. Thank you, Gemma, so much, so, so much. I'll have to leave now; I have work to do. Thank you so much. Yeah, thank you, Gemma. Be well. Thank you very much for being here. Right. Take care. What an excellent session. Thank you so much. I'm just going to stop the recording now.