Alright, thanks everybody for joining our session. We really appreciate you being here. Instead of verbal introductions, we're going to do written ones in the chat, and we'd love to hear from you too, so please feel free to introduce yourself as well. I'll drop mine after I stop speaking because there are too many screens around. So here's the plan for the next 45 minutes; it is pretty action-packed. I'll start us off with a brief overview of the G-SEARCh consortium and its goals, and then we'll go into an action-oriented discussion of our measurement processes and technologies. The G-SEARCh consortium is a group of impact investors making inroads with gender lens investing (GLI) through the support and use of gender-smart technical assistance for their portfolio companies. In many cases they bring on board an implementing partner, such as Value for Women, to conduct the activity. These investors are also committed to building the business case for GLI in SMEs in emerging economies such that it becomes mainstream. My organization, the William Davidson Institute (WDI) at the University of Michigan (Go Blue!), is the research partner. Our goal is to measure and understand the social and financial impacts of these gender-smart technical assistance activities. As part of the research, we're conducting five case studies where we're doing primary research, meaning we're collecting the data ourselves. This is where our data collection partners, 60 Decibels and Dalberg Research, come in, and that's why we have Ashley and Jasper on the call today. Let me explain how this universe works with an example: we are investigating the effectiveness of a gender-smart communication campaign launched by a clean energy company in Sierra Leone.
Another example is measuring the outcomes of gender-smart training to help female agri-workers sell more product in Kenya. WDI, the investor (depending on which investor is paired with which company), the company itself, and, where applicable, Value for Women as the implementing partner have together co-developed the measurement strategy, including the research design, indicators, and the survey. 60 Decibels and Dalberg Research will now come on board to help us test and refine the survey so it's really contextualized; they will train their local enumerators and administer the survey to the sample. We're also using a WhatsApp platform, one of the new technologies we want to share today, to collect data in one particular case study. That company has implemented a gender-smart mentoring program, and we want to capture the impacts both on the middle managers who receive the mentorship and on the junior employees who report to those middle managers, so we'll be using WhatsApp for both groups. Now let's get into the technical content; that's why we needed slides, but we'll keep this really high level. To make sure we're all on the same page, I'll quickly explain the slide. For any data collection activity, whether you're collecting one or two KPIs or running a large research project like ours, there is always dialogue needed upfront, and possibly contracting if you have an external party do your measurement; these are examples of pre-measurement activities. That's followed by the actual measurement design and testing work you should do before you dive headlong into data collection. And finally, you have data utilization for decision-making and learning. Before we move ahead, I want to share a key insight from this research experience.
There is always an inherent tension that stakeholders face between measurement activities and data integrity on one hand, and the realities on the ground and the many priorities of SMEs working in complex environments on the other. So there will always be resource allocation challenges. I've done this work for over 10 years, and I have never seen money fall from the sky, so you cannot let the perfect be the enemy of the good. And when I say resource allocation, I don't just mean money; it's also time, technical capacity, the burden on staff, and the burden on the people we actually talk to and collect data from. There will always be those challenges, and you just cannot let the perfect be the enemy of the good. The best you can do is co-create measurement strategies with key stakeholders and use the insights and feedback from the world around you to adjust and adapt both the intervention and, just as importantly, the measurement plan. This slide is busy; what we've tried to do is list the sub-steps under each of the categories I just showed you, so that you have a checklist of sorts. We'll make the slide public. Instead of discussing each sub-step, in the interest of time we've decided to highlight just a few key ones and share how we have overcome hurdles at those points. So I'm going to get us started by talking about the importance of buy-in for measurement, and managing buy-in, because it is not binary; it will have its ebbs and flows through the process. Let's first start with: what is buy-in? Buy-in is interest followed by a commitment, from leadership as well as from the staff responsible for M&E, to implement and use data for the sake of continuous improvement, or what we call in M&E jargon "adaptive management." All this means is that you're using data to learn, unlearn, and improve operations, products, and services to achieve your impact goals.
Buy-in is also understanding the "why" behind relevant, right-sized data collection designs and methods, so that you can champion them accordingly, especially when resources are limited. Buy-in can also show up in whether you participate in the co-creation process, such that your measurement strategy meets the needs of the different stakeholders, not just the ones wielding power. Buy-in ensures that M&E activities don't get delayed, deprioritized, or shortchanged. So a critical issue is always ensuring and managing buy-in for measurement, and I want to spend a minute discussing how to build it. Now, I understand that achieving impact is in your DNA when you're an impact investor or a social enterprise, so you would think buy-in for measurement is the easiest part. But it's actually when resources are tight, or a pandemic comes our way, that things falter. So it's really about making a case for collecting data not just for reporting or accountability, but also for decision-making to improve and strengthen operations, especially when things are going haywire in the world around you. Building buy-in requires conversation and advocacy for resources to include appropriate measurement indicators, inclusive voices (which we'll talk about a little), and technical know-how. What do I mean by technical know-how? For example, it means going beyond output indicators into your short-term and medium-term outcome indicators. It also means using certain statistical analyses, like effect size calculations, which require granular-level data, and that can be very tricky to get. Other ways to build buy-in include sharing examples of what data is needed, then showcasing how it can be analyzed and used, so you visualize the steps for folks, and cultivating champions among the business team to help when buy-in dips.
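As a concrete illustration of why effect-size calculations need granular, individual-level data: the standard Cohen's d statistic divides the difference in group means by a pooled standard deviation, which can only be computed from raw scores, not from aggregates. Below is a minimal sketch with made-up numbers; this is generic statistics, not WDI's specific analysis.

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: standardized difference between two group means."""
    n1, n2 = len(treatment), len(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    # Pooled standard deviation across both groups
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical individual-level scores; group averages alone would not suffice.
trained = [72, 85, 78, 90, 81]
untrained = [70, 75, 68, 74, 73]
print(round(cohens_d(trained, untrained), 2))  # -> 1.75 (a large effect)
```

Note that the whole calculation hinges on having every respondent's score, which is exactly the granular data that can be tricky to obtain.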
That can be really helpful, having champions on board. So now let me bring Ashley into the conversation, followed by Jasper. I got a sneak peek into what they're sharing, so spoiler alert: Ashley will discuss survey design and remote data collection, and then Jasper will discuss the critical need to embed gender into research and primary data collection. With that, I'm going to stop sharing and hand it over to Ashley. Great, thanks, Yaquta. I just want to make sure, can everyone see my screen now? Yep. Okay, perfect. So hi, everyone, I'm Ashley from 60 Decibels. Today I'll give you a quick overview of 60 Decibels and who we are, an overview of our process at a high level, and then I'll zoom in on the part of impact measurement that comes after you've received buy-in from the various parties involved: survey design. I'll give you some actionable tips for designing a survey, should you get to that stage in the impact measurement process, which I hope you do. In a nutshell, 60 Decibels makes it easy to listen to those who matter most, whether that's farmers, factory workers, or customers you're working with. How do we do that? We have a network of over 700 researchers who are actually calling end customers or respondents and asking tested survey questions that we've developed over the past seven to 10 years. Because we ask the same questions over and over, in the same way, we're able to develop benchmarks and help organizations compare their performance to others. A lot of people ask whether we use an app, or think we're sending SMS messages, but really it's just researchers calling people on the phone and asking well-designed survey questions. The process at a high level is first to define goals. We're usually contracted by impact investors or foundations, or sometimes the companies themselves.
We'll spend time upfront trying to understand what success looks like for the impact measurement process. Then we design a survey using our repository of survey questions, while also designing new questions based on the particular goals of the stakeholders we're working with. Then we get into the data collection process: as I mentioned, our researchers actually call customers and collect the data. Finally, 60 Decibels analyzes the data, compares it to benchmarks, and shares the results. This is a very simplified version of our process (I think under each step there are about 15 different sub-steps), but I wanted to zoom out and share at a high level how we work. So, assuming you have buy-in from the stakeholders you're working with and you're getting to the survey design stage, I wanted to share a few top tips that you can hopefully go and implement after this session. The first rule of thumb is to make the survey enjoyable for the respondent. So often we forget that surveys are not super fun for the respondent. As much as you can, try to make it conversational, share the purpose of the survey upfront along with the consent statement, and get the respondent clear on what they'll get out of the survey, in terms of what actions the companies are going to try to take based on their feedback. That is so important, and something we try to focus on in all of our surveys. Second is keeping the survey decision-focused. A lot of times we'll see people come with a kitchen sink of survey questions, and what is super important upfront is making sure that all of the questions are relevant and actionable. So for each question, pause and think through: what are you planning to do with the information you collect through this question?
If you can't clarify that upfront, we would say scrap it. Related to this is the third tip, which is to keep the survey short. We say SMS surveys should be below 15 questions and phone surveys below 30 to 40 questions. Obviously that's one of the hardest things to do when you're designing a survey, but it ties back to the first tip about making this a positive experience for the respondent. The next tip is to mix it up. It's important to achieve a balance between open-ended and multiple-choice questions, and to think through the ordering. If you have 30 questions that are all multiple choice and very similar, it's going to get boring and repetitive for the respondent. So think through the mix of questions and how you order them, all in the spirit of increasing completion rates. Next is to test the questions. It's useful to test them on yourself (how would you answer the question?) but also with researchers or team members. One of our colleagues always says that if you're asked a question and it takes "brain pain" to answer it, that's not a great question. So test it on yourself: How would you answer it? Does it create brain pain? And if it does, how would you eliminate that? Next is thinking about scales. If you're using scales in your questions, whether a zero-to-10 scale or a Likert scale (for example, "extremely satisfied" to "extremely dissatisfied"), make sure the scale is appropriate and really understood by the audience you're surveying.
Again, that goes back to testing the questions, but we've also gotten a little more creative with scales: sometimes we've found that smiley faces, a thumbs up or thumbs down, or a green light/red light system can be more effective than a zero-to-10 scale. It just depends on the audience and the context. Lastly, we get one question a lot: what should my sample size be, and how do I ensure it's statistically significant? It really depends on your objectives and on the overall population size. For population-level surveys without segmenting, we recommend a sample size of around 275 respondents. But if you only have an overall population of around 50 people, then that is not necessary. SurveyMonkey actually has a sample size calculator that's pretty helpful to use. So I'd recommend thinking about that rather than just assuming you need the largest sample size possible, or to hear from as many people as you can. I'll pause there and hand it over to Jasper to share some more tips, but hopefully that gets you started in your survey design endeavors. Thanks, Ashley. Let me switch on my camera so that at least you've been able to see me for a bit; my connection is fluctuating, so I'll turn it off again afterwards. Hello, everyone. It's a pleasure to be here today together with my fellow speakers, Yaquta and Ashley. I'm representing Dalberg, and more specifically Dalberg Research, and I want to talk a little more about applying a gender lens in our research practice: the lessons we have learned at Dalberg and Dalberg Research, and our own best practices. These are by no means exhaustive, nor the deepest practices out there, but I hope they shed some light on what we have learned and what may be useful to others who are engaged in, or planning, similar work.
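For readers curious about the math behind sample-size calculators like the one Ashley mentions, the standard approach is Cochran's formula with a finite-population correction. The sketch below assumes a 90% confidence level and a ±5% margin of error, which happens to land near the ~275 rule of thumb above; those parameter choices are my assumption, not 60 Decibels' stated method.

```python
import math

def sample_size(population, z=1.645, margin=0.05, p=0.5):
    """Cochran's formula plus finite-population correction.

    z=1.645 corresponds to 90% confidence; p=0.5 is the most
    conservative assumption about the response proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)       # shrink for small populations
    return math.ceil(n)

print(sample_size(100_000))  # -> 270, close to the ~275 rule of thumb
print(sample_size(50))       # -> 43, nearly a census, so just survey everyone
```

The finite-population correction is what captures Ashley's point that a population of 50 doesn't need hundreds of respondents.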
My points and examples are a little skewed towards primary research in general, not specifically B2B research, but I believe they are still helpful. Generally, as a starting point, what has helped us ensure a comprehensive approach is applying frameworks. You see one of them on the screen. This particular framework is inspired by the Johns Hopkins Gender Analysis Framework, which tries to unpack why gender inequalities exist and persist by looking at how power varies across different dimensions. One of them is roles: who is participating in what sort of activities, who holds what sort of positions in businesses, who is making decisions. Then, of course, resources: who has access to, and control over, which resources, for example capital. There are the norms at play: sociocultural norms and beliefs about how men, women, boys, and girls are perceived when it comes to rights, responsibilities, opportunities, and entrepreneurship. Then there are needs: the different needs of men and women. And finally legal status: the laws, policies, and regulations that protect, but may also limit, the rights and status of women and men. So frameworks are useful, and this one helps us unpack the different dimensions before translating them into our own research. Maybe we could go to the next slide, Yaquta. Now, as we translate that into our research practice, there are a few things we have learned and have been trying to incorporate in our work. The slide is a bit dense, but I'll pick out a few things and give examples. As you can see, there are points related to the different stages of research: design, execution or delivery, and then analysis and synthesis. Let me start with research design.
What we feel is important, first of all, is that you translate your ingoing hypotheses, assumptions, and objectives into a learning agenda, and that you build in gender lens variables to guide the research. There may be specific questions or variables that are important to your hypotheses and have a gender angle. We're currently designing a survey meant to measure the impact of a certain intervention, in this case a training model, on the transition of women from education to employment in male-dominated sectors. Questions we have specifically added include, for example, harassment at the workplace and gender-based violence, which are specifically important to women because they may influence their experience and their ability to transition into dignified employment. So it's good to be mindful of those kinds of specific questions and variables. The other thing I want to highlight is that it's good to create a data capture tool and an analysis plan that are sufficient to generate gender-disaggregated findings and insights from your data. This matters, as in any project, because you want to understand how the data feeds into your analysis and, ultimately, how it helps you answer your research questions. So be mindful about what the right set of questions is, and, with that, be able to explore where biases or the root causes of gender gaps are coming from. As an example: we ran a study in Nigeria together with EFInA, whom you may know, an organization that works with financial institutions to improve financial inclusion, including for women. We did that study to understand what prevents women, especially in northern Nigeria, from accessing and using financial services.
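In code terms, "sufficient to generate gender-disaggregated findings" mostly means capturing gender and other demographics as variables so that results can be grouped by them, alone and in combination. Here is a minimal sketch with hypothetical column names and toy data (not EFInA's actual dataset):

```python
import pandas as pd

# Hypothetical survey extract; column names and values are illustrative only.
df = pd.DataFrame({
    "gender":      ["F", "F", "M", "M", "F", "M"],
    "income_band": ["low", "low", "mid", "low", "mid", "mid"],
    "has_account": [0, 1, 1, 1, 1, 1],
})

# Headline gender-disaggregated rate...
print(df.groupby("gender")["has_account"].mean())

# ...then cross it with other demographics, so an apparent gender gap
# can be traced to underlying drivers such as income.
print(df.groupby(["income_band", "gender"])["has_account"].mean())
```

In this toy data the headline gap between women and men shows up only in the low-income band, which is exactly the kind of "dig behind the superficial bias" analysis the plan should allow for.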
We oversampled women, compared to the sample allocated to men, in order to get a very rich perspective from women with different backgrounds and from different geographical areas. Through that study, we were able to address some misconceptions, as I'd call them, around women's financial inclusion. Our data, and the correlations we ran on it, showed that women are not financially excluded because they are women; they are financially excluded because they have lower incomes, their education levels are lower, and they have less trust in financial service providers. Those were the three dominant drivers of exclusion, and all of them are gendered issues. This gives a strong direction for where to find solutions: it's not about, as my colleagues put it, "pink-ifying" banking products, but about closing the gender gap on income or education, which of course is a big challenge. So a solid analysis plan, and an approach to synthesis defined upfront, is crucial. The gender angle is important, but having a solid set of variables that really helps you dig deep into and understand the dynamics behind what might appear to be a superficial bias is really helpful. Now, I have a few more examples when it comes to implementation and delivery. Yaquta, just tell me when I need to stop, because we're running out of time. First, I wanted to say something about sampling criteria. A good question to ask is whether the sampling approach you're taking is sufficient to capture all the key perspectives necessary to understand gender dynamics. An example you may have heard before concerns food security, which is often measured at the household level. When household-level indicators are measured, it's typically the head of household who answers the questions or provides the information. But within the household, the situation can be quite different.
The same applies in businesses, depending on the role or position someone holds in the entity: the food security status might be good for one person but not for others in the same unit. So whom you ask the questions matters a lot, and your sampling strategy is key there. The other thing I wanted to say is about looking beyond gender as a single variable, because women are not a monolithic group; they differ in age, ethnicity, socioeconomic status, and religion. Gender is not the only demographic to consider when you think about how to spread your sample; it's important to consider other demographics too, so you can break that broader group down into sub-segments. Of course, there are also implementation aspects around cultural context, ethical norms, and so on. Ownership of mobile phones matters when you're doing phone surveys, for example: mobile phones are often a shared asset in the family, or owned by men rather than women, so that's an important consideration when you think about how inclusive your surveys are. For face-to-face interviews, timing is important, as is the location you ask respondents to come to and how accessible it is for the entire group you're sampling. Women are often engaged in unpaid duties at the household level, so make sure the timeframe in which you interview people is accommodating to everyone. Jasper, can I stop you there? I'll come back to you for a closing, big-picture takeaway. Sure. Okay, thank you. So, a few other quick things to cover before we go into closing statements and Q&A. First of all, please keep your questions coming. And thank you to both Ashley and Jasper for those incredible insights; these are all things we deal with on a daily basis, including on the G-SEARCh consortium and in our five case studies.
So thank you for those. I'd like to quickly share a new product from the G-SEARCh consortium: suggested indicators for measuring the success of gender-smart TA activities. We'll share the link in the chat. Unfortunately, we don't have time to walk through the document, but we'd love for you to play around with it. It's a Google Sheet; you can download it, create your own filters, and so on, and we'd love to hear your experience or feedback on it. One more thing we wanted to share before closing statements is about using WhatsApp for data collection. We're very excited about this. The WhatsApp platform comes into play once you have your draft indicators and questions in hand; that's when you start laying out the questions on the platform. This is a form of self-administration, and you can see the pros and cons right from that statement. It is not a magic bullet, and I'll talk about that as well. On the consortium, we're engaging with a partner called Outside Voice, based in Singapore. There are other platform providers, such as Twilio, which I believe the Immigration Policy Lab at Stanford uses. So what exactly is this, and how does it work? You have your indicators and questions, and on the WhatsApp platform you can use a variety of question types: multi-item, NPS, open-ended, and even Likert-scale questions (the ones that run from "strongly disagree" to "strongly agree"), though those need a little adjustment. You also have skip functions, so there's a lot of flexibility. One of the coolest features for us is that you can deliver a question as a video or an audio clip: you press the little green button on the side and leave the question as an audio clip, and the respondent can answer in the same manner.
They can speak via audio; they can take a video of themselves speaking or of their environment; they can even send pictures as a response. That's really helpful when you're working with low-literacy populations, which we often are, or when you want to use data collection methods such as diaries. So when I say this is data collection over WhatsApp, I don't mean sending respondents an electronic survey link on WhatsApp. Instead, the questions come to them and they answer within WhatsApp itself, very much in the form of a chat. It's all automated; I'm not sitting there chatting with them live. It's completely automated, yet an interactive conversation, which makes it a very viable data collection platform for scaling. Another big pro, of course, is that WhatsApp is ubiquitous: per one data point I found, it has 2 billion monthly users in 180 countries. That means your participants are typically already familiar with the app and know how to access it over Wi-Fi or cellular data, which I think is one of the strongest points of WhatsApp data collection. Another really nice feature matters if you work with migrant populations, seasonal workers, or refugees: even if people change SIM cards, their WhatsApp number tends to stay the same, so it can really help with attrition when people move; you still reach them on the same WhatsApp number. And then, of course, cost is always a big question. We still haven't figured out costs on our end because we're still piloting, and I'll get to that in a second, but one thing the IPL group at Stanford shared is that it can cost 55 to 65% less than a 15-minute phone call. We've completed pilot testing and will be rolling out the surveys next week with our first cohort.
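To make the "completely automated yet interactive" idea concrete: an automated chat survey is essentially a small state machine in which each question knows which question comes next, possibly depending on the answer (that's the skip logic). The sketch below is generic and hypothetical; it does not reflect Outside Voice's or Twilio's actual APIs.

```python
# Each node holds the question text and a function choosing the next node.
SURVEY = {
    "attended": {
        "text": "Did you attend the mentoring session? (yes/no)",
        "next": lambda answer: "rating" if answer == "yes" else "thanks",
    },
    "rating": {
        "text": "On a scale of 1-5, how useful was it?",
        "next": lambda answer: "thanks",
    },
    "thanks": {"text": "Thank you for your time!", "next": None},
}

def run_survey(answers):
    """Walk the flow, pairing each question with a scripted answer."""
    node, transcript = "attended", []
    while node is not None:
        step = SURVEY[node]
        answer = answers.get(node)
        transcript.append((step["text"], answer))
        node = step["next"](answer) if step["next"] else None
    return transcript

# A "no" answer skips the rating question entirely.
print(run_survey({"attended": "no"}))
```

In a real deployment each turn would be a WhatsApp message exchange (text, audio, or video) rather than a dictionary lookup, but the branching structure is the same.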
The company we're working with here wanted a technology-based data collection solution; they did not want enumerators calling up their employees. This is the one providing mentorship to middle managers, and the WhatsApp solution is working out great in that situation. I will say, though, that this is not a magic pill. As I mentioned, it is self-administered, and we're using it with only one of the five case study companies. It's not going to replace all types of data collection, but it is a solution out there. With that, I want to go into closing statements; we'll have a little time for Q&A afterwards. I'd love for Ashley and Jasper to share some parting words on measurement, or a key lesson you want to drive home with our audience. Great. So often when we talk about measurement, the response we get is that it's too hard, too complex, or burdensome. Even when we use words such as "buy-in," the language implies that measurement is a difficult ask, and I think we need to reframe this. Ultimately, this is about the importance of listening, and it should be valuable to everyone involved. You can think of it as a shift away from a fixed mindset toward a growth mindset: the process is not an appraisal of how great you are, it's an honest curiosity about how you're doing and how you can improve. That's actually a really fulfilling process that can bring you closer to why you're doing this work in the first place. And it's important to state that there are real people behind the data and impact reports, and their voices matter. There's a ton of richness in this data that is being left on the table if we continue to approach measurement as a difficult, overly burdensome ask.
So I think that reframing will hopefully drive more impact and better businesses. Thanks for that, Ashley. Thinking about measurement, I'd like to relate it back to applying the gender lens, as I spoke about. I've given a couple of examples of how you could apply that gender lens and why it's important to make research and measurement inclusive. I think it's critical that we not only increase awareness but also try to be transformative in how we address gender inequalities, and to an extent that means thinking critically about how we design and implement the research we do. It's an effort that needs to cut across different levels of our teams. It's a commitment you can make as an individual and a good practice you can apply, but it's also good to have gender champions to develop best practices and to train staff at different levels on how our collective work can help advance gender equality. So for me, it has an important methodological aspect, but also an organizational one. Back to you, Yaquta. Thank you both for those key messages. What we've also done for you is put together a document listing all the key takeaways we've learned from our various experiences, not just in research but beyond; we'll share that along with the slides so you have it at your fingertips. Now I want to get into Q&A, and if we have any time remaining, I'll come back to the point on using data for learning and continuous improvement rather than just reporting and accountability. But I'll pause there. Let's look at questions. We have one from Louis.
He asks: in your experience with B2C models, what are the best customer touch points for data collection? For example, during an online sign-up or during deliveries? I understand the design is iterative, but do you have any key insights on anticipating the mental availability of the people surveyed? I'm going to open this question to both Ashley and Jasper. Ashley, maybe you could talk a little about lean data and using those existing touch points, especially because you also have to be mindful of the burden you put on your own staff when you ask them to collect data on top of their everyday responsibilities, if you don't have the resources to bring on a third-party data collection organization. So Ashley and Jasper, we'd love to hear your thoughts on B2C models. Yeah, this is a great question. One exercise that's useful is actually mapping out the customer journey, all the way from customer acquisition through the purchase to after-sales support. What we've done is treat each stage in the customer journey as a natural touch point to ask respondents certain questions. It's also important to space it out: when a customer purchases the product, perhaps you ask a question about satisfaction with the purchase process; later on, if a customer reaches out with a challenge, you could ask how satisfied they were with the response to that challenge. So my advice would be to first map out the customer journey, then think about the natural touch points and what insight you want to learn at each stage of that journey. I have little to add to that, actually; I completely agree.
Maybe the one thing I can add is that when interviewing respondents, we always try to be accommodating to their situation by asking them when it is a good time or a good day to interview them. Sometimes we break interviews up into different sessions, but we always try to ask when it is a good time to interact with them. So that's what I can add. Yeah, and Jasper, you also touched on this point while presenting on how to apply a gender lens to primary research: often the mobile phone is a shared asset. This is one of the things we are dealing with in one of our case studies, with the clean energy company in Sierra Leone. How do we get the man to hand the phone over to the woman? Finding out the best time to call helps, but if the man does pick up, one idea we are planning to use is sharing with him the kinds of questions we're going to ask: we're going to ask you first, and then we're going to ask her similar questions, so that he gets a sense of what we're asking, realizes these are not extremely personal questions, and is comfortable handing over the phone. So those are things to keep in mind as you try to reach the right person to collect data from along that customer journey. I'm going to see if there are other questions. We have another one: can any of you talk through an example of the insights learned in developing a learning agenda, and specifically a theory of change, with the company? And how does that affect the choice of measures and indicators? Again, I'd love for Ashley and Jasper to speak to this. We are big believers in using theory of change as the foundation for our entire measurement strategy.
And I can link to some resources from what we've done with USA on a previous consortium that we were part of. But I'd love to hear from Ashley and Jasper first, and I'll make sure to include those resources when we share the slides. I'm a huge fan of the theory of change. We used to do that at Acumen for all the portfolio companies, so my first piece of advice would be to definitely do this. As for how it affects the choice of measures and indicators: I actually think the most important part of a theory of change is the assumptions you're making, that is, what needs to hold true to go from inputs to outputs to outcomes to impact. If you include assumptions between all of those stages, they give you really helpful clues about what to measure. So my recommendation is first to create the theory of change, and there are a ton of great resources available, which we can also share, and then use the assumptions to guide what you measure. That will also help you learn whether the impact you expected is actually likely to come to fruition. Maybe the one thing I could add to that, Doris, is that we often like to use a logic tree. Once you have your framework or theory of change and the research questions you want to ask, we break those down into sub-questions and ultimately translate them into variables or indicators; we've used a logic tree model for that quite often, and it works quite nicely. So that's the practical thing I can add. Yeah, and we'll also share a resource called Lean Research, which has come out of MIT. It talks about how to right-size your research and indicators to the resources you have, but that's only one of the four Rs; it's a four-R framework.
So we're at time, it's 12 o'clock. We will get back to some of the remaining questions. I am seeing one asking how you assess when surveys cross into human subjects research. A really quick answer is that we often have institutional review boards (IRBs) review our research, including the surveys. That's one way of ensuring that you're practicing ethical research. It's also one of the four Rs of the Lean Research model, which is respectful: keeping the do-no-harm principle front and center. Yeah, I think we're going to have to close here. We will add in the resources, and thanks, everyone, for joining us. It's really great to have had you. Thanks, Ashley and Jasper. Thank you. Thanks, everyone.