Hi, welcome to this TechSoup webinar. Today we're gonna be talking about how surveys can showcase success to your donors, your volunteers and your clients. This is gonna be a great topic, and I'm so excited about our partners here from QuestionPro. If this is your first time here at TechSoup, I'm gonna show you how you can engage. Actually, I'm gonna tell you, because I can't show you. Many of you have already turned on the closed captions; if you need them, just click the CC button at the bottom of your screen. This is being recorded, so we're gonna send the slides and the video to you by tomorrow. And if you have a question, please put it in the Q&A, but I think our speaker will be able to grab it from the chat as we're moving along. David, you can get ready to start sharing your screen and I'll just keep going. Again, thank you to those of you who already let us know where you're zooming in from. This will be recorded, as I said, and sent out to you by tomorrow. I'm excited about this topic, and I'm gonna turn it over to the founder of QuestionPro. Thank you and welcome.

Thank you, Arita. I'm super excited to be here. Really quickly, I wanted to share a story. TechSoup and QuestionPro have been partners for almost eight years now. I think I met Gail, who's one of the executive directors at TechSoup, back in 2016 in San Francisco. We hit it off, and since then QuestionPro has been giving our platform through the TechSoup environment to the entire TechSoup community. A little bit about myself, and then I'd love to get Robin into the conversation as well. I'm the founder here at QuestionPro. My background is in software and technology, and we're a tech company, a software company that allows people to create surveys, deploy surveys and analyze surveys. Robin, welcome to the show. How are you doing today?

I'm fine, thank you, thank you.

Tell us a little bit about yourself and Searchlighter.

Okay, well, Searchlighter was set up in 2010. We operate primarily in the European market. Our brief, in a sentence, is really to ensure the social dimension of the implementation of education programs: ensuring a relative equality of access, especially with respect to minorities and women in particular. We work in a program which is mostly funded by the European Union, called Erasmus Plus. It involves partners coming together, maybe six or seven partners with different experiences, collaborating on a set task that is not solvable by any one partner; it needs partners to come together and collaborate, and we play our part in that. A lot of the work that I do is on evaluation, both within the project and on the project results as well, and I'll come on to that a little bit more in the case study. So that's what Searchlighter is. Any further questions?

No, we'll keep it fairly easy and fluid, if you will. I'll keep it fluid and keep asking Robin a few questions as we go along. A little background about QuestionPro itself: I started it back in 2005 in Seattle, then I moved down to San Francisco in 2010. I've been running it for almost 18 years now.
We're a fairly decent-sized company right now; we have about 300 employees and we're fairly global. And like I said, the context of this is that we've always felt we needed to figure out a way to give back to the nonprofit community, and that's when I met Gail, who was the executive director at TechSoup, and she and I hit it off. So we give out QuestionPro, which a lot of you have used, and obviously Robin has used, as part of the TechSoup program. It's free; we don't really charge anything for access to our platform.

We already talked a little bit about Searchlighter, so I want to take a moment to talk about surveys and why they matter to nonprofits. Obviously I come from a commercial environment. I'm not a nonprofit guy, I run a software company, and for me showing progress is pretty easy: I have a board, I have to show financial results, I have to say, look, we made X dollars yesterday, we made Y dollars today, and so on. So in the commercial context it's actually pretty easy to show progress. But in a mission-driven context, which I'm pretty sure all of you are in, I've racked my head quite a bit around it, and it's pretty difficult to show progress, in my mind at least. It's not as easy as saying, hey, our profit margins went up by 25%, okay, cool, I get a pat on the back, everybody's happy. Nonprofits and mission-driven organizations don't have a profit margin or a revenue number; those metrics are not there. So that's why surveys are a pretty easy way to at least empirically show progress.

Before I hand it over to Robin, I want to talk a little bit about what I've seen over the years: the different contexts under which surveys can be used. First, understanding the context of donors and their preferences, and actually giving them some sort of tangible evidence of how their dollars are working. That's a pretty important element of how surveys can help; the donors themselves want to see, okay, look, we gave you some money, what happened? The other thing, and we have about 802,000 nonprofits using our platform, is that they use surveys to really get feedback from volunteers. As we all know, a lot of nonprofits run on the basis of volunteers, partly because of economic constraints and partly because people love it; at least in the States, a lot of nonprofits are run through volunteers. And the volunteers have their own aspirations, needs and wants, so collecting feedback from the volunteers about what's working for them and what's not working for them matters, and surveys can be a pretty easy way to understand what's going on collectively across a volunteer constituent group. And then obviously there are the people you're serving. Every nonprofit is serving some target demographic or constituent group, and you need to understand what's working for them and what's not. These all get intermingled in terms of how we run. So that's at least my perspective.
So I have a deep belief in this; it's my ESG way of giving back, if you like. We built a pretty amazing piece of software and we want to share it with the entire nonprofit community. We are not in it to make a dollar, I can tell you that right now; it's not a bait and switch. We give the software away for free, and you can do almost everything our commercial customers can do. So that's at least my perspective on this. And with that, I'm going to hand it over to Robin. I really want you guys to see what Robin has done; Robin's done an amazing job in terms of measuring and evaluating programs. Robin, take it over from here, man.

Okay, thank you very much for that. Okay, there we go; hopefully you can see the screen now. This is a brief case study based around a standard form of survey that we've put together, firmly based on internal evaluation. One of the issues that we come up against when we are doing projects with six or seven partners from different European countries who've never met one another is that we all come together with different experiences, which is the strength of the project. But when you're working together, that is potentially a weakness as well, because you have to work together as a team and you have to find ways to ensure the working methods support that. Obviously that depends to a certain extent on the work of the coordinator, but I also try to assist the coordinator by setting up internal evaluation surveys, which mean that people can see the important issues to think about when working together as a team.

I come up with four headings that I look at. The first is coordination, and I'll go into that in a little bit more detail in a second. The second one is partnership support, which looks at how willing individual members are to support one another when there are issues or questions or difficulties or snafus that come up: are we ready to actually step in and be there for one another in order to progress the project? The third one is partner development, where each partner reflects on themselves and asks, how much have I grown in my sector, so that we can be more effective for our clients and our customers in the long term? And in the fourth one we evaluate meetings and how effective they have been, be they online or in person; every project has both online and in-person meetings.

For the moment, though, I'm just going to concentrate on this. This is a headline from a project that we did called Steaming Ahead, and what we're gonna look at is the second partner evaluation survey that we did in that project. Now, I know it's a little bit small, but I'm hoping that if you look at the screen closely you can read the text; on the other slides the text is a little bit bigger. This slide is, as I said, emphasizing coordination. We explain what coordination means, so that the partners are absolutely clear what we are addressing, and we have three subheadings within coordination: management, communications and resources. So we spend just one slide explaining our terms, so that there is no misinterpretation of what is meant when we refer to these particular terms. Then the first question is to please identify the area of coordination from the list.
We have a pull-down menu here where they choose whether we have been best in management, communications or resources. And then you have a box here where you can give an example of very good practice. We think it's important to identify an area that we should reinforce, because to a certain extent, when we are evaluating, we're being open to being critical, but it's also about patting ourselves on the back where we've done really well, and about making sure that good practice that has emerged during the project is reinforced and acknowledged by each of us as something where we're succeeding.

Then the next slide is the reflection on the comments that were made in the previous survey, so we highlight these. From the first survey, the following statements were made, and we are given the opportunity to reflect on them and say, okay, how have things developed over the last six months? We had identified three areas that needed to be improved; this is what the partners identified, so I'll just read out one or two of these. 'It was not always clear in the early months as to why the payments and the kickoff meeting had been delayed.' 'We can create a better way of communicating between partners.' 'For beginners, we need extra information sometimes.' So these are three criticisms that came up, and we said, okay, we must try to improve on this over the next six months. This survey then asks the question: did we improve in these three areas?

The previous survey, survey one, also came up with some ideas for improvement. The first one was that monthly online meetings could be held to keep close track of the development of the project in all areas, and this was identified three times, which reinforces that it was a common view that we need to have online meetings more frequently if we're going to progress. 'In order not to bother the project team with loads of questions, we may have an FAQ annex.' This was a good idea that came up from one of the partners, so that we can create an annex on the internal site that people can refer to. And lastly, a future goal for embedding the improvement: to collaborate and share experiences, as this will improve performance.

Then there is a kind of overall comment section. The first prompt is: please state any additional perspectives on the points made by partners concerning coordination. In the same space, participants are encouraged to include any further elements of coordination quality that were not raised previously but could have developed the quality of the project operation further during the project lifetime. And thirdly, participants should feel free to state their overall view of the quality of coordination as it covers management and administration. So this is a final comment box where people can draw out criticisms, identify whether we've improved since the first survey, say whether the ideas are still relevant, and say whether the future goals have been met.

Robin, I have a quick comment here. I really like that you're showing what you got out of the previous survey in the next wave, so it brings some level of continuity. I'm kind of curious, how often are the waves? What's the cadence, the frequency? Is it monthly, weekly, quarterly, every six months?
The surveys are usually done three times during the lifetime of a project. The first survey will be done after about seven or eight months. The second survey will be done just over a year into the project, maybe month 14. The projects usually last two years, and the very final survey will be a review of the whole project, done in the final weeks of the project.

So it works out to roughly every nine months or so. I really like the idea that you are showcasing what you learned from the previous survey on the next one, and then explaining, okay, this is what we learned. That's hugely important in terms of so-called closing the loop: letting the respondents know that we heard you and this is what we found. That creates a different level of engagement. Please go ahead.

Yeah, okay. And this is just an example of the kind of graphs that are supplied by the QuestionPro software when we're doing the analysis. It comes back with a holistic look at what all the respondents had to say, and it shows quite an even spread in terms of where people feel the project is performing well: a third identified management, a quarter identified communication, and just over 41% identified resources as the area of coordination that they thought was most effective.

And this is some of the analysis that we came up with from the boxes where people could fill in comments; yeah, the open-ended responses. What I've just shown you is the question itself, and these are the responses they gave in the second survey, because the quotes I gave earlier were from the first survey. So this is the second survey: 'The project results are being designed and are of high quality. The involvement of partners in ensuring the quality of outputs is significant.' The second one is a rather long one, so I'll skip down: 'ease of access, ease of use and range according to need.' 'Thanks to this project, I had the opportunity to deal with innovative educational activities.' So there's a higher level of positivity, I think, in this set of responses, but it allows them to look at how the project has been improving since the first survey. 'The work to be done in the ongoing cycle of the project was quite good and extensive.'

Yeah, Robin, I want to jump in a little bit here. One of the smart things you did was that you asked people to identify the different areas and then asked the open-ended question of why. That gives you richer information than the statistics alone. You have the statistical information that says, yeah, communications are good, maybe we can improve coordination, and so on, but then you have a richer context around why people chose what they chose, and that gives you depth of information.

In terms of quantitative and qualitative information, we at least seek a balance, but if we're going to give priority in any direction, it's the qualitative rather than the quantitative. The quantitative gives you directional guidance, but the qualitative gives you rich, deep data.

Exactly. So we can advocate that for our listeners too: if you're thinking about building a survey, think about asking the question why and keeping it open-ended, because you don't know what you're gonna get. You have a set of closed-ended questions, choose between these things, is it going well, is it going badly, and then you ask a simple why question. Why did you choose that? Even if it's going well, why do you say it's going well? That reinforces your belief, and you get a lot of information through that process.
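To make that pairing concrete for anyone building a similar survey, here is a minimal, generic sketch in plain Python (this is not QuestionPro's API; the field names and sample responses are invented purely for illustration). It tallies a closed-ended "which area was strongest?" choice to get the directional numbers, then groups the open-ended "why" comments under the option each respondent picked, so the quantitative and qualitative views sit side by side.

```python
from collections import Counter, defaultdict

# Hypothetical responses: each pairs a closed-ended choice with an open-ended "why" comment.
responses = [
    {"area": "Management",     "why": "Budget questions were answered within a day."},
    {"area": "Resources",      "why": "Templates were easy to access and reuse."},
    {"area": "Resources",      "why": "The shared drive covered every work package."},
    {"area": "Communications", "why": "Monthly online calls kept everyone aligned."},
]

# Quantitative view: share of respondents choosing each area (directional guidance).
counts = Counter(r["area"] for r in responses)
total = len(responses)
for area, n in counts.most_common():
    print(f"{area}: {n}/{total} ({100 * n / total:.0f}%)")

# Qualitative view: the "why" comments grouped under the area they explain (rich, deep data).
comments_by_area = defaultdict(list)
for r in responses:
    comments_by_area[r["area"]].append(r["why"])
for area, comments in comments_by_area.items():
    print(f"\n{area}:")
    for comment in comments:
        print(f"  - {comment}")
```

In practice a platform like QuestionPro draws the percentage charts for you; the point of the sketch is simply the habit of pairing every tallyable choice with a free-text why from the same respondent.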
So these are responses to the question: please identify at least one example or incident that illustrates or justifies your choice. These are inevitably going to be positive, but it's a chance to identify areas where we're doing well and need to reinforce, and to a certain extent to pat ourselves on the back. Constructive criticism is all very well, but if it all sounds critical, that can break team spirit as much as build it. So, in order to build team spirit, it's important to discuss areas where we can honestly say we're doing well.

I won't go into too much of this, but these are the comments from the last survey that identified areas of coordination where we can improve, and these are new comments that came up. So, areas in which to make improvement: 'It was not always clear in the early months as to why the payments had been delayed.' 'We can create a better way of communicating between partners.' 'For the beginners, we need extra information sometimes.' And I want to emphasize that last one a little bit. I put it in red because this is the presentation which I make for all the partners; I coordinate the process, I coordinate the evaluation, and I put together the slides, which then become a resource both for the coordinator and for all the partners. This one, highlighted in red, was for the coordinator, so that the coordinator could be especially aware of the organizations that do not have previous experience in the program. They may be slightly intimidated by the partners who have a lot of experience in the program, and we need to ensure that they are fully informed, because when you have done four or five projects there are many things that you know simply from experience, but when you have no experience you don't, and the experienced partners sometimes forget what it's like to be inexperienced. So that is something we need to remind ourselves about, so that the beginner partners don't get left behind.

Then: ideas for delivering improvement, future goals for embedding improvement, and state any of your reflections or thoughts. This is the open-ended question, really: what do you think? What do we need to think about over the next six months? I'll just read the one at the bottom: 'We've created really detailed resources and guides, which are strengthening the sides, but we have to cooperate a lot more than before, as we are in the third work package, which is based on implementing plans. We have to improve this. We have to gather as the three implementing schools, as we are not totally aware of the other schools, so that we can compare and contrast our operations and our results specific to the implementation.'
So this is about the pilot study that we were doing, where one school was saying that they have a lot of awareness of what is going on in their own school with the pilot study, but pooling that information at the end is not enough. If they're going to make the pilot study successful, they need to have more interaction with the other schools as they are gathering the data on the pilot. Sorry, I'm speaking very specifically about this particular project, but I thought it was important that there was an emphasis on communication, because when you have seven partners from seven different countries working in their own way, it can become quite isolating in some ways. This survey is a useful way to reinforce that people feel the need to maintain communication and not to be too isolated in silos.

I won't go into too much detail; these are similar things. 'Our communication was obscure and distracted in the first few weeks of the project. This has changed completely in recent times.' So this shows a radical improvement that took place because of the comments that were made in the first survey: we made those comments in the first survey, we took notice, we improved during the second part of the project, and one partner has clearly identified that there.

This is very much about our project, so I won't go into too much, but this is the graphic information from the other areas that I identified. If you remember, I said there were four areas, coordination being the first. This is the second one, partnership support, which was split into three sections: responsiveness, awareness and inclusivity. People mostly identified inclusivity: they felt included in the structure of the project, they felt they were able to give their perspective on how to implement the pilot study, and this meant that there has not been too much isolation in the decision-making process. So that is positive data for us.

The third one was partner development, how each individual partner is developing. Have they developed knowledge about the project's subject areas? Have they created opportunities within the partner organization for employees and volunteers to develop? And has there been a lot of learning between the partners about the specialist skills that each partner has, which other partners could usefully learn? On balance, 50% thought the area we had done best in was partners learning from one another, which is a very positive thing to take, along with learning about the subject areas of the project as a whole.

And the last one was the meetings. People felt that when they were in the meetings, when we had the discussions, partners were very supportive of one another over the issues that were being raised. Often at our meetings, partners talk a great deal about the problems they had implementing the project during the previous six months, and there was a strong feeling amongst the partners that we were very good at supporting one another in addressing those problems.

So, to summarize, if I can; sorry, let me look back a slide. The first point is the aims and objectives that we have when we create the surveys: to bring together disparate partners in a single collective exercise to improve the implementation process.
The second aim is to apply a range of perspectives in order to gain a broader overall view that takes each participant into account. And the third is to allow those who are far from the day-to-day project decision-making to sense their influence in those decisions. Then, for project impact, there's how this actually improves the outcome, and to a certain extent this is very much for the donors, so that the donors can see how we have reflected on our work in order to make sure the results are of as high a quality as possible. What we have actually created by doing this survey, we believe, is a framework for identifying improvements that can be made, so that they are tangible and measurable. We also feel we have created an agenda for discussion that can be ongoing at any stage of the project's development, according to its milestones. And thirdly, we feel these surveys create a heightened collective spirit and a trust in the management process, through an acceptance of self-effacing constructive criticism. And that's about it. So thank you very much for letting me present.

That's awesome. Can you talk a little bit more about how this impacts donors? Have you done presentations to donors with the survey results and said, this is the process that we're on? Can you speak to that?

Not a direct presentation, but what I've just shown you is just a few of the slides that we create for each survey. We create a slide presentation of about 40 to 45 slides, similar to what I've shown you, for each evaluation. Over a project as a whole there are three evaluations, so that's maybe 130 slides, and it allows them to read the progress: to see the kinds of things that we were raising in survey one, where we end up by survey three, and how the process has developed the project implementation. So it's not direct in the sense that we have a webinar like this, but we have to show evidence of the work that we've done in the final report, and in the final report they have the PDFs of these slide presentations to look at. They have to evaluate the evaluation process, if you see what I mean, and that feeds into their final evaluation report as to whether we have been effective in implementing evaluation.

Right. Yeah, I think that's an interesting angle: letting the donors and stakeholders know that you're actually measuring these elements and then letting them know the outcome. Even the fact that you're measuring these elements at all is probably a pretty positive factor, and the donors are hopefully going to be happy that you're measuring and evaluating as you go along.

Good, good. I've been answering some of the questions as they have come along, and I think I've answered most of them. And Alison has a question: can you speak to evaluating and aggregating answers, especially to open-ended questions? So Alison is asking, Robin, how did you evaluate and aggregate all the textual responses that you got? Can you help me understand that?
Yes. Basically, what I try to do is go through each one and make a judgment call about where the same point has been raised in a very similar way. I will take the response that is the best summation of the views of the other partners and then maybe annotate it a little, so that everybody's viewpoint is included; there's not much point in a presentation simply repeating the same sentence time and time again. And, as I said, I think you saw it in there: I emphasize, in brackets, that the same point was made three times by three different members. That obviously carries a strong emphasis, that this was a real uniform view, a uniform perspective. So each time the same point is made by different partners, I identify that it has been made more than once.

Right, right. And just so that everybody else in the webinar knows: if it's a small group, you can do this by hand. But we also have tools in the platform that allow you to categorize topics, to say, this gets this tag, this gets that tag, and then you can do analysis of the tags themselves. Obviously it depends on the context and the size and the scope, right? If you have 20 responses versus 20,000 responses, 20,000 will obviously be more challenging. Say you're doing a member survey; I think one of the other questions was about member surveys. Well, if you have 20,000 members, it's obviously not humanly possible to go through 20,000 responses, so we have tools that can automatically categorize content, but the sample size has to be much, much larger for that to work.

Just for context, with the project partners in our case, it's usually about 10 or 12 responses, so they're really small. Yeah, it depends on the context of the size, and the size of the context also.
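For anyone curious what that kind of automatic categorization can look like at a basic level, here is a small, hypothetical sketch in plain Python (this is not QuestionPro's built-in tooling; the tag names, keywords and sample comments are invented for illustration). It tags open-ended comments by simple keyword matching and counts how many respondents made each kind of point, so views raised by several partners stand out, much like the "made three times" notes Robin adds in brackets.

```python
from collections import Counter

# Hypothetical open-ended responses from a small partner survey.
comments = [
    "Monthly online meetings could be held to track progress.",
    "We need more frequent online meetings between partners.",
    "Payments were delayed and the reason was unclear.",
    "An FAQ annex would help beginners find information.",
    "More regular meetings would keep everyone aligned.",
]

# Simple keyword-based tags; a real taxonomy would be refined iteratively.
tag_keywords = {
    "meetings/communication": ["meeting", "communicat", "aligned"],
    "finance/payments": ["payment", "budget"],
    "onboarding/beginners": ["beginner", "faq", "extra information"],
}

def tag_comment(text):
    """Return every tag whose keywords appear in the comment (case-insensitive)."""
    lowered = text.lower()
    tags = [tag for tag, words in tag_keywords.items()
            if any(word in lowered for word in words)]
    return tags or ["untagged"]

# Count how many comments fall under each tag.
tag_counts = Counter()
for comment in comments:
    for tag in tag_comment(comment):
        tag_counts[tag] += 1

for tag, n in tag_counts.most_common():
    print(f"{tag}: mentioned in {n} of {len(comments)} comments")
```

At the scale Robin describes, around 10 or 12 responses, reading every comment is still the better approach; keyword tagging or platform categorization only starts to pay off when the volume makes manual review impractical.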
Good. I don't think there are any other questions, Rita, either in the chat or in the Q&A. Oh, and I think Vanessa has one last question, about sharing insights on what question wording gets the most responses, since oftentimes response rates are low. I get asked this quite a bit, about response rates as well. Like I said, the context varies: we have surveys that run with millions of responses, not even hundreds of thousands, millions, and we have surveys that run with ten-odd responses, realistically. In terms of question wording, the general guidance we can give is to be clear. As Robin said earlier, the definition of each of these things matters, because everybody has a slightly different version of the definition. So in the survey itself, we generally recommend that you make it clear what you're asking; keep it simple and make it clear. Clarity and simplicity are the two most important elements in a survey: what kind of question are you asking, and is it clear and simple? Anything more complicated or ambiguous, and you run the risk of ambiguous responses, realistically. That's why I really liked what Robin did: he explained the definition of each of the components in his survey, so when I'm taking the survey I understand what the definitions are. So in terms of question wording, the simplest answer I can give, and Robin, you can go after this, is clarity and simplicity: stick to wording that's as simple and as clear as you can make it. Questionnaire wording is actually an art and a science in itself; that's what researchers do for a living, figuring out how to make questions unbiased and so on. Robin, do you have any guidance or suggestions on questionnaire wording?

Obviously simplicity is the key, but it depends on the situation. When I'm talking about the survey that I did, luckily I had quite a high buy-in, if you like, because I was working with partners who had an internal commitment to bettering the project. I think for me the most important thing was to make sure that the partners knew the purpose of the survey: that they knew it wasn't just a box-ticking exercise for the funder, that this was really about trying to improve the processes of the project. When you introduce a survey in QuestionPro, you have a kind of covering sheet where you put in what the survey is about, why it's there, why it's taking place. I tend not to try to explain the survey too much there; I do that in the face-to-face meetings, and when they hear it face-to-face from me, why it's important, I tend to get quite high buy-in from people. So people take their time to do these surveys; I've had people take up to 40 or 45 minutes to do one single survey. Now, when you're doing surveys with clients, where you may have several thousand respondents, those surveys must be much, much simpler than that. So I know for sure that I'm doing something quite bespoke, but I wanted to show it to you because the survey that we do on internal evaluation is a little bit different, and it may be a less usual application of QuestionPro. I wanted to make sure that you were aware of the range of things that QuestionPro can be useful for.

Awesome. Good, I think we've answered most of the questions now. We've gone about 45 minutes, so that's good timing; we're giving everybody 20 minutes back. Arita, I thought you'd want to wrap it up.

Yeah, thank you guys so much. Robin, I think you did a great job of showing how success can be made vivid. What a pleasure to have you here. There were so many questions that you answered already, so we didn't have a chance to go through them all. Thank you both for being here. This has been recorded and we'll send it to you by tomorrow. And speaking of surveys, please take a moment to complete the survey when you close out this window on Zoom, and we'll see you next time. Bye-bye, everybody.