Well, hello everyone, good afternoon, good evening, or even good morning, depending on where you're joining us from today. Welcome to Engineering for Change, or E4C for short. Today we're very pleased to bring you the latest in our 2015 webinar series. We will focus today on mobile data collection, specifically what we're calling a new frontier in technology for development research. We've developed this webinar with our collaborators at the Development Impact Lab. My name is Yana and I will be one of the moderators for today's webinar. When I'm not working on webinars, I am the Director of Programs for Engineering for Change at ASME. I'd like to take a moment now to tell you a bit about today's webinar. The widespread availability of mobile communications offers international development researchers, practitioners, and students new tools and techniques for collecting field data and determining the success of projects. For example, administering surveys is possible in real time with electronic data capture, as is data collection for monitoring, visualization, and analysis. So, to talk a little bit about this, we've partnered with the Development Impact Lab, or DIL, on this webinar to highlight some leading data collection platforms, including engageSPARK, the Impact Lab, and SurveyCTO. We will discuss some of the benefits and risks of these tools for international development research, how the space is changing as mobile data collection gains momentum, and how researchers can start to think about practically incorporating these tools and platforms into their work. This webinar is the first in a series of webinars that DIL and E4C will be hosting in the coming spring on data collection platforms, which will hone in on some of the specific tools discussed today. So, if you find that a specific platform or tool is of particular interest to you, we encourage you to stay in the loop for future webinars.
If you would like to make a recommendation for a specific platform to feature, or for future topics and speakers, we invite you to contact the E4C webinars team via the email address visible on the slide, webinars@engineeringforchange.org, or you can also reach out to dil@berkeley.edu. Now, before we move on to our presenters, I'd like to tell you a bit about Engineering for Change and who we are. E4C is a knowledge exchange platform and global community of nearly 1 million designers, engineers, development practitioners, and social scientists leveraging technology to solve quality-of-life challenges faced by underserved communities. These can include access to clean water and sanitation, sustainable energy solutions, improved agriculture, and more. We invite you to join E4C by becoming a member. Membership provides cost-free access to relevant and current professional development resources, including jobs and fellowship opportunities and a growing database of hundreds of field-tested products in our Solutions Library. E4C members enjoy a unique user experience based on their site behavior and engagement. Essentially, the more you interact with our site, the better we will be able to serve you resources that meet your needs and interests. We invite you to join our passionate global community and contribute to making people's lives better across the world. Check out our website, www.engineeringforchange.org, to learn more and sign up. We're very excited to collaborate with DIL on this and future webinars. DIL is an international consortium of universities, research institutes, NGOs, and industry partners addressing global poverty through advances in science and engineering. They are headquartered at the University of California, Berkeley, and were launched in 2012 with support from the U.S. Agency for International Development through the US Global Development Lab.
DIL leverages the innovative capacity of world-class universities to design development solutions, which couple technologies with novel economic and behavioral interventions. DIL calls this approach development engineering, and we're very happy to have them with us today. The webinar you are participating in today is part of E4C's professional development offerings. The E4C webinar series is a free, publicly available series of online seminars showcasing the best and brightest in development engineering. All of our webinars are recorded and archived on our website, and you can see the URL listed here. And please, if you're on Twitter, join us with the hashtag #E4Cwebinars. A few housekeeping items before we get started. Let's see where everyone is from today. So, in the chat window, which is located at the bottom right of your screen, please type your location. All right, so everybody should see their chat window. All right, very good. I will add to this mix. I see folks from Boston, from New Jersey, from San Diego, Washington, D.C., Massachusetts, Pakistan, a little bit further away. All right, if the chat is not open on your screen, you can access it by clicking the chat icon at the top right corner of the screen. Any technical questions or administrative problems should go into the chat window, and feel free to send a private chat to the Engineering for Change admin if you have any issues. You can also use the chat window to type in any remarks you have during the webinar. Please use the Q&A window located below the chat to type in questions for the presenters. Again, if you don't see it, you can access it by clicking the icon at the top right corner. If you're listening to the audio broadcast and you encounter any trouble, try hitting stop and then start. You may also want to try opening WebEx in a different browser.
Following the webinar, to request a certificate of completion showing one professional development hour for this session, please follow the instructions at the top of our professional development page; the URL is listed here on the slide. Wow, so we have a lot more folks here entering their locations. Thank you so much. It's really great to see everybody, from France to Minneapolis to Madagascar to Nepal. Incredible to have you all here. Thank you. So, with this, I'd like to introduce today's moderator, who will introduce our panelists. Dave Clark is a data scientist at the Berkeley D-Lab and a fellow at the Berkeley Institute for Data Science. He works to bridge the divide between domain specialists and computational methods, with a particular focus on human and environmental well-being. This fall, he led an experimental course on hacking measurement, in partnership with the Development Impact Lab and the Center for Effective Global Action, to expand data science education to include an orientation to things like sensors, mobile technology, and satellite data. In addition to education, Dave works to build technical solutions that are easier to use. The ideal is the development of efficient hybrid training and technology approaches to research with social and environmental impacts. I welcome you, Dave, and pass it over to you to introduce our speakers. Well, thanks very much, Yana. I'm thrilled to be here. And just to say briefly, I think this is incredibly exciting stuff. You know, there are lots of ways to measure the impacts of development, but in the end, people are still one of the most flexible and powerful ways we have of knowing what's really going on. And so these are methods to actually leverage that ability people have to help us understand what we're doing. I also just want to briefly mention that at least one group reached out to me on Twitter, you know, to make sure that they were included.
So please know that, while everyone we've included here is, I think, an excellent provider of tools for these approaches, the list is not meant to be exclusive. We don't mean to be leaving anybody out. We're really here to talk about the high-level issues. That said, it is now my pleasure to introduce our panelists. And we'll start with Ravi Agarwal. He's the founder and CEO of engageSPARK and an entrepreneur and investor. Companies he founded in Boston employ more than 3,000 people, and one has gone public. He's been an angel investor in the US and Asia, including an investor in 500 Startups. In 2001 he was chosen by Entrepreneur magazine, in their third annual list, as one of the top 10 business people under the age of 30. He's the co-inventor of a patent on email security, and in 2011 he volunteered with the Grameen Foundation in Ghana and Uganda, working on poverty alleviation programs. Based on needs he saw there, he founded engageSPARK, a not-for-profit social enterprise, and we'll be hearing more about that later. Now, our second panelist is Tom Plagge. He's a co-founder and data scientist at the Impact Lab in Chicago. He received his PhD in physics from the University of California, Berkeley in 2009 and twice served as a mentor and co-organizer for the Eric and Wendy Schmidt Data Science for Social Good Fellowship at the University of Chicago. Tom specializes in solving data analytics problems with a social impact, in areas ranging from energy efficiency to education to government finance. Tom has served as a technical advisor for a number of tablet-based household surveys in the Middle East and as part of a team working on a data collection solution for the humanitarian sector. And our third panelist is Faizan, the Customer Engagement Lead at SurveyCTO and Dobility, Inc.
He leads all customer-related efforts at Dobility, including training and supporting existing users of SurveyCTO, onboarding new users, and helping organizations figure out which digital data collection system can best meet their needs. Prior to working at Dobility, he spent several years as a research manager for Innovations for Poverty Action (IPA) in Kenya, where he led multiple large-scale impact evaluations and helped lead IPA Kenya's efforts to shift from paper-based to tablet-based surveys. He's lived and worked in Pakistan, the US, Kenya, and Zambia. So, moving on to the content here, we'll start with a general framing question. Hopefully everyone's here for a reason, but the question is: why do we care about these methods? How is international development research changing as a result of the mobile survey mechanisms and data collection platforms that are currently available? So Ravi, it'd be great for you to share your perspective based on your experience with engageSPARK. And before you dive in, if you could give us an overview of your engageSPARK platform. Sure. Thank you. I'm passing the ball to you. There you go. Great. Thank you, Dave. Yeah, quick background on engageSPARK. We focus on building tools that help NGOs and other organizations engage their staff and communities using mobile phones, especially using voice IVR, missed calls, SMS, and prepaid airtime transfers. It's gotten to the point where most people in the world now have access to mobile phones. However, more than 65% of people still don't have regular access to the Internet; they're using just a normal dumb phone or simple phone. So mobile phones are, in fact, the way of engaging people around the world.
What we've focused on at engageSPARK is really making it super simple, so that a non-technical person can launch engagement campaigns, especially surveys, in less than five minutes, in any country in the world, without having to pay subscription fees or setup fees, only usage-based fees. That's engageSPARK. Talking about how mobile phones have changed data collection, these are the four main ways of doing data collection that we see out there. I thought it'd be good for our researcher audience here to be able to compare and contrast the main kinds of data collection mediums. For many years, we've had enumerators out there using paper, generating thousands and tens of thousands of pages. It's very high cost. It takes a long time to get the data and then to type it in, which of course brings in errors around handwriting and data entry. Then, more recently, in the last two years, mobile apps, using even basic phones with a J2ME app, or Android phones or tablets with data collection apps, or sometimes HTML apps, have become very popular. However, sending somebody to the field can still be expensive and takes time, but it's great for doing longer surveys, especially if there are hundreds of questions involved, and ideal for things like baseline, midline, or endline surveys. However, having a person involved as an intermediary can always introduce errors. More and more programs are starting to use SMS texts as a way of engaging participants to get data. The cost tends to be low to medium. Speed tends to be medium, because it can take hours or sometimes a day or two depending on the number of questions. It's great for places where there are conflicts. For example, in Afghanistan, it's really hard to send an enumerator out into conflict areas. And based on responses, it's easy to iterate questions to improve survey quality. However, the downside with SMS is that it requires literacy.
The participant needs to be able to read and type. It's often difficult to make it free to the participant, which results in lower engagement or participation rates. And of course the person must have access to a mobile phone. The last kind of survey is voice IVR. IVR stands for interactive voice response. That's where you hear a pre-recorded question in the local language and then you press keys on your phone, which could be a simple phone or a smartphone: press one for yes, press two for no; or type in your age, the number of children in your household, or your salary; or just speak an answer to the question. It has many similarities to SMS, such as being great for conflict areas. However, one big thing it has going for it is that it's often free for participants, because these are pushed-out phone calls. You can iterate quickly. And one additional data point that comes in with voice IVR is the amount of time, in seconds or minutes, between when a question is asked and when the participant answers. However, it is difficult to do voice IVR for longer surveys. And of course the person needs to have a mobile phone to engage in the survey. We find the best survey results come from surveys where there's been a lot of iteration, many cycles, at least three, four, five cycles of iteration to improve the questions and therefore get quality responses. This means that the researcher must be able to look at the data quickly, ideally in real time, realize that the question wording maybe wasn't ideal or the options weren't great, and also be able to test different variations of their questions to see which ones get the best responses. And then, many times, be able to add a follow-up question in order to get more detail and truly understand the data that the researcher is trying to get at.
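That response-time data point can be derived from an IVR call log by pairing each question-played event with the answer that follows it. Here's a minimal sketch; the event-log format is hypothetical (real platforms such as engageSPARK will have their own export schemas):

```python
from datetime import datetime

def question_latencies(events):
    """Pair each 'question_played' event with the matching
    'answer_received' event and return latency in seconds per question."""
    asked = {}
    latencies = {}
    for question_id, kind, ts in events:
        t = datetime.fromisoformat(ts)
        if kind == "question_played":
            asked[question_id] = t
        elif kind == "answer_received" and question_id in asked:
            latencies[question_id] = (t - asked.pop(question_id)).total_seconds()
    return latencies

# Hypothetical log of one IVR call
log = [
    ("q1", "question_played", "2015-11-05T10:00:00"),
    ("q1", "answer_received", "2015-11-05T10:00:04"),
    ("q2", "question_played", "2015-11-05T10:00:05"),
    ("q2", "answer_received", "2015-11-05T10:00:12"),
]
print(question_latencies(log))  # {'q1': 4.0, 'q2': 7.0}
```

Unusually long latencies on a question can flag wording that confused respondents, which feeds directly into the iteration cycles described above.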
And being able to do more frequent surveys to see what changes are happening in the participants' lives. Some examples, especially of voice-based surveys: IPA in Zambia is using voice-based surveys and SMS to reduce household water usage, using a lottery as an incentive. Mercy Corps in Afghanistan used surveys so that people who took a vocational course would get a phone call every week, to find out if they were able to get a job and to raise the quality of the vocational skills training. Mercy Corps in the Philippines did this course with 20,000 people. And then we've got an ongoing program using voice to deliver a business skills training course to reduce MFI default rates. Dave, that's it for me. Thank you. Any questions, or I guess we'll take comments from other panelists? Yeah, first, thank you so much, Ravi, for that overview. Faizan or Tom, do you have anything you'd like to add on the reasons for these approaches? Sure, yeah. Ravi, thank you so much. That was great. Something I would add, tangentially, on how mobile data collection tools are transforming research in the development sector is that, as the costs, both in terms of time and money, have gone down, because you can collect data electronically, whether it's through smartphones or tablets or SMS or voice, a big transformation has been simply that people are collecting a lot more data. And, you know, donors want more data, and they want everything evaluated, and they want quantitative results to show how NGOs and different people are performing. And the problem has been that, even though everybody has gotten onto this bandwagon of collecting more data, and more quantitative data, quicker, which is a good thing.
I think right now there's a gap where people don't really know how to analyze, or what to do with, all of this data. So we have a lot more data, but we also have a lot of poor-quality data, because people either collect it poorly, because they don't understand the logistics of how dirty the process can be, or there's a lot of data which is analyzed poorly or misrepresented. I don't have a good solution for this, but it's something to think about in general when we're talking about the transformation in the development sector with more and more data. Tom, anything from you? I'll take that as a no. All right. So, just in case people aren't noticing the chat area, there is a Q&A section. If you have a question, you can go ahead and put it there. We're going to hold Q&A until the end, so we'll have a final session for all attendees. But moving on, a key theme for both the DIL and Engineering for Change communities is the iterative nature of both tech development and the research process. And the next question is for Tom at the Impact Lab in Chicago. So Tom, given your work developing a tool recently for Doctors Without Borders, can you speak to how you see user feedback or learning getting incorporated into the development of your and other platforms? How do you see platforms and tools changing as more researchers and practitioners begin to take up these tools? Again, it would be great if you can start by giving a quick overview of the Impact Lab and your own work before you dive into the general answer. Thanks, Dave. So good morning, everybody. My name is Tom Plagge. The Impact Lab is a company based in Chicago that has the aspiration to make the social sector more data-driven.
Data collection is a fairly small part of what we do, but we found ourselves working on a couple of data collection projects in the humanitarian realm, specifically in response to complex humanitarian emergencies, in collaboration with a couple of NGOs, one of them being Doctors Without Borders. And for us, since we don't actually have a full-service product to sell, what we're most interested in is how we could interact with the user in such a way that we could develop exactly the tools they need to increase the amount of data they're getting, the validity of that data, and the actionability of it. The humanitarian context is a particularly difficult one to collect data in. You almost need an enumerator out there on the ground, simply because in an emergency situation you can't necessarily rely on the cell network working, on people being connected or having connectivity, or even power sometimes. But having an enumerator on the ground, of course, puts them in harm's way many times, and also you kind of have to rely on the enumerators you can get, who often have fairly low technological literacy. And you have to get their responses back quickly because, as we all know, in an emergency situation the situation on the ground changes on the time scale of hours or days, unfortunately. So there's not always time to do all the iterations. The feedback that we got when we started working with these larger NGOs is that the ideal turnaround for them to get actionable data in the field, in order to deploy resources more effectively, to decide what supplies and personnel are needed in a specific place, is about seven to ten days. And of those seven to ten days, some portion has to be spent designing and thinking about what data you want to collect, some part actually collecting the data, and then some part analyzing and synthesizing the results.
And what we found is that the actual collection technology, the mobile app that you send your enumerators out into the field with, is not really the bottleneck anymore. There are a lot of good mobile data collection solutions out there. The real time sink, as Faizan nicely pointed out, is in the design and analysis of the data. It's perfectly possible to send collectors out into the field and collect a whole bunch of information, and have it be invalid or useless, or worse. And so that's the problem we're trying to avoid. The way we're trying to avoid it is by assembling a team of both technologists and subject matter experts to design a very opinionated design, collection, and analytics toolchain for a specific use case. So rather than a general-purpose data collection tool, we're hoping to come up with something kind of like Optimizely or Google Analytics: once you know what you want, in our case epidemiological household surveys, the software and the toolkit guide you on the path to making intelligent choices, to setting parameters like sample size and sampling methodology appropriately, and then getting the results quickly. The reason for this is that oftentimes the survey design, the survey collection, and the survey analysis are actually all done by different people. And the time it takes for each of those people to get onboarded, to do their job, to understand what the previous person has done, that's really the bottleneck. For these organizations, in order to get data back in seven to ten days, it needs to be an integrated, automated toolchain that makes a lot of choices for you, so you can do simple, correct things in a short amount of time. And so that's what we're trying to do. Watch the space. Maybe I will have something to sell eventually.
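Setting a parameter like sample size "appropriately" usually starts from the classic proportion-based formula n = z²·p(1−p)/e², then inflates it for cluster sampling and non-response. A minimal sketch; the defaults and adjustments here are standard textbook choices, not details from the panel:

```python
import math

def sample_size(p=0.5, margin=0.05, z=1.96, deff=1.0, response_rate=1.0):
    """Proportion-based sample size: n = z^2 * p(1-p) / e^2,
    inflated by a design effect (for cluster sampling) and divided
    by the expected response rate. p=0.5 is the conservative default."""
    n = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n * deff / response_rate)

print(sample_size())                             # 385: simple random sample
print(sample_size(deff=2.0, response_rate=0.8))  # 961: clustered, 80% response
```

A toolchain that bakes in calculations like this is what lets a non-statistician make a defensible sampling choice inside a seven-to-ten-day turnaround.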
But for now, we're really focusing on the needs of individual organizations, because we just don't think that an all-purpose, all-singing, all-dancing data collection tool is really what this field needs right now. There are a lot of other steps that need time and thought as well. So I'll stop there if anybody else on the panel has other comments. Do you have anything you'd like to add? Remember to unmute yourself if you're responding to the question. So we got another question from an attendee. I would encourage folks to respond in the chat for that question, and we can return to it at the end. But also, as we move on to the third question, maybe Ravi you can speak to what data collection is about as we think about the next question. So moving on then, last but not least, our final question is for Faizan of SurveyCTO, and this is steered much more towards the practical implementation and use of these tools in research. Many of the participants dialed in today are researchers and practitioners who are in the early stages of incorporating mobile data collection tools into their research agenda. So Faizan, you in particular, through SurveyCTO, have had a lot of experience working with various organizations, both through your current work with SurveyCTO and also in your previous life as a field researcher with IPA. Faizan, what would you advise researchers to keep in mind when selecting their tools? How should they think about incorporating them into their work, or when might they not want to use one of these tools at all? Cool, thanks Dave. So actually, before I jump in, why don't I quickly answer the question about what we mean by data collection, since it's a pretty fundamental question. I'll do that very quickly.
Basically, I think over here we're talking about collecting any kind of survey data as responses from individuals, whether it's in a humanitarian context, where it might be refugees, or it's a household survey you're doing in Kenya or in Pakistan or wherever. And the exact nature of the data you're collecting could be for anything: it could be for a public health project where you're looking at outcomes of a particular intervention, or it could be that you just want to do a census or get a sense of the demographic makeup of a particular area. So the exact goal could be pretty broad, but very simply, it's any kind of information you're collecting directly from a respondent: you're asking them a question and then recording their response in some way. So, to actually dive in, also quickly, just to give you a quick overview of SurveyCTO: SurveyCTO is a mobile data collection platform for offline data collection. So, for example, where engageSPARK, like we mentioned, collects data over SMS or through voice calls, what SurveyCTO does is provide an Android app so that you can actually use smartphones to collect, again, longer surveys and richer forms of data. That's just a very quick overview; it all syncs to the cloud, and then you can analyze it or export it from there. So, to focus in on the question of what people should keep in mind when trying to select a mobile data collection tool: I'm going to draw on what I've seen talking to a lot of clients and different users, and also from when I was a user myself. A mistake people make is that they very quickly get caught up in specific features that they might want. And sure, that's definitely a consideration to keep in mind. But before you start thinking, oh, I want some fancy X feature that is very specific to my use case and I really need it in order to do my work.
I think people should take a step back and focus on the foundations of good software, which in my opinion are reliability, support, and security. As Tom also mentioned, there are a lot of different data collection tools out there now, designed for slightly different use cases. But before you pick any one, before you start comparing features, you want to first do some research: how long has this tool been around? Does this tool have a good reputation for actually working in the field, or, despite what the website says, is it actually quite buggy and unreliable? Is the support team behind the tool responsive? Do they maintain the tool and update it on a regular basis to keep up with changes to hardware and software, such as the new Android version that comes out every few months these days? So, is the team behind this tool actually keeping it updated? And to me, what is also very important is security. A lot of these tools now, if not all of them, are hosted in the cloud, where the idea is you collect data on different smartphones and the data syncs to a cloud server, from where you can easily access it from a central location. But not all cloud servers are made equal. You have free tools, you have paid tools, and usually, in a sense, you get what you pay for: the servers your account and your data are hosted on may not always have the same firewalls, may not always be maintained and monitored 24/7 for security purposes, the server infrastructure may not be the most stable, and it may not always be backed up. So these are all considerations that people should keep in mind, especially in the case of development, where I think a lot of the data we collect is sensitive data and identifying information.
And often, if you're doing research which is subject to an IRB or HIPAA or something like that, then you'll probably even be obligated to protect that data. So something else to keep in mind is: what are the encryption features that a particular tool offers? Is it encrypting the data in transit, as it's traveling through the Internet from your phone or tablet to your cloud server? Are there options to also encrypt the data at rest, so that even the data hosted on the cloud server is not visible to the vendor, and only somebody who has the decryption key can view it? So anyway, I think when you're looking at any tool, before you get tied up in features, you should look at these pillars, which I think make any good software, especially when you're talking about collecting and storing other people's identifying information. Something else, which ties into that, is what I like to say: talk is cheap. So maybe you've looked at a few different tools and narrowed it down to two or three that you think meet your criteria. I would say it's very easy to make a flashy website and a comparison chart that shows why one tool is better than another, but talk is cheap; anybody can make those claims. When I was at IPA, we were always looking at lots of different tools to use for our own research. Rather than taking any tool at face value, we would always take two or three tools and test them out, run a small pilot, maybe even do something as simple as collecting mock data in our own office, but really test the tool over and over and see if we could push it to its limits. Maybe you have a five-question survey, in which case pretty much any tool will be able to handle it. But if you have a 100-, 200-, or 300-question survey with lots of complex calculations and logic and skip patterns.
You know, things start getting complicated, and not every tool can handle the increasing load that your survey or your data collection might require. So something to always do is actually test the few different tools that you've narrowed it down to, to really make sure that the tool you're using will work in your context, will work in the environment you're using it in. Maybe you're offline. Maybe you have surveyors who are not tech-savvy at all and will constantly make mistakes and press all sorts of buttons that might break the tool. And then, finally, you also want to ideally make sure that even if you pick a tool, you aren't held or tied down to it. The world changes a lot, technology changes, your project might change. So ideally you want to go with software that, and again, the mobile data collection world has changed a lot, so there are a lot of tools out there now that work just like the way we subscribe to software in our private lives, where you pay month to month. You don't have to sign long-term contracts, or service or consulting contracts which require tens of thousands of dollars upfront. You can sign up just like you'd sign up for Gmail, get started with a tool, and if you need to switch, you can always switch to a different tool. The one other thing to keep in mind when doing that is that you ideally want to stick to tools that have open standards, so that if you are, say, creating a survey form for a particular tool, and it doesn't work out two months into your project and you need to switch, you don't have to start from scratch: you can easily export your form and load it into another tool. Again, a lot of this is personal opinion, but I think these are things to keep in mind, tying into issues of reliability and stability.
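To make those "complex calculations, logic, and skip patterns" concrete: at its core, a form engine has to evaluate, for each question, a relevance condition over the answers collected so far. A toy sketch of that idea, not how any particular tool implements it:

```python
def next_question(questions, answers):
    """Return the first unanswered question whose relevance condition
    (a function of the answers so far) evaluates to True."""
    for q in questions:
        if q["name"] in answers:
            continue
        if q.get("relevant", lambda a: True)(answers):
            return q["name"]
    return None

# Hypothetical three-question form with one skip pattern
survey = [
    {"name": "has_children"},
    {"name": "num_children",
     "relevant": lambda a: a.get("has_children") == "yes"},
    {"name": "income"},
]

print(next_question(survey, {}))                      # 'has_children'
print(next_question(survey, {"has_children": "no"}))  # 'income'
```

With hundreds of questions, conditions start referencing each other and chaining, which is exactly the load that distinguishes tools in a pilot.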
And then a final thing I wanted to talk about, which occasionally came up at IPA and which I often run into when talking to potential users at SurveyCTO as well, is: should you go for off-the-shelf software, or should you try to build custom software? In general, I have a strong bias against custom software. The reason is that people usually strongly underestimate the time and monetary costs involved in building their own tool. In the development world there are lots of one-off, multimillion-dollar projects that spent three or four years building something that fizzled out, because even after it was built, there was nobody to keep maintaining it. Nobody was willing to put in the cost every year to have developers updating the software as, say, Android phones changed. So maybe you do some initial research and you find a bunch of tools, but none of them have 100% of the features you need. Before you jump into deciding to build your own tool, very carefully consider the costs and risks involved with custom software. Also consider that the market is very big now for data collection software, and even data analysis software, and there are a lot of good options out there. I think about 95% of people's needs can be met by good, affordable tools that already exist and that are tested, reliable, and professionally supported. For that last 5%, often a better path than building something from scratch, even if it feels innovative, is to take an off-the-shelf tool and talk to the team about adding individual custom features as an extension to the existing tool. That way you have a good, solid foundation, and you can still often get the specific features that you really need.
Similarly, I think an even better alternative is, if you have a couple of different tools that are good and that each do something you need, you can also look into the possibility of integrating them using APIs and things like that. For example, SurveyCTO doesn't support SMS data collection and engageSPARK does, so if you wanted to do both tablet-based and SMS-based data collection, you could look into integrating the two through some kind of API and talk to the two teams, rather than building your own tool that does both. So yeah, that's it for me. I couldn't possibly agree more with the notion that, particularly for the process of pushing a form out to a mobile device and getting the results back in an encrypted and reliable way, that is a solved problem. Please do not try to solve this problem again from scratch. One of the problems I've seen in the mobile data collection world is that it seems like such a simple idea that everyone just wants to do it themselves rather than using something that already does it properly. And in fact, there are a thousand different things that can go wrong, even if it is a fundamentally simple thing, even if it is just an HTML form. Don't reinvent it, don't start from scratch, please. Sorry, I left myself on mute there. Does anyone have anything they'd like to add? Okay. Well, thanks so much to all of our panelists, Ravi and Tom included, for your prepared presentations and insights. We'll now take some questions from the audience; those have been trickling in throughout the webinar. Please do continue to use the Q&A region in the WebEx window for adding your questions, and we'll try to get everybody's questions answered as best we can in the time remaining. But I think the biggest question actually came via the chat window, and that's okay. I think some participants still might be a little hazy on the details of how these systems work.
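The integration idea raised above, combining a tablet-based tool with an SMS-based one, is mostly glue code between two HTTP APIs. Here's a toy sketch; every field name and payload shape below is invented for illustration and does not describe either vendor's real API.

```python
# Hypothetical glue code mapping one tool's exported record into the payload
# shape a second tool's API might accept. Every field name here is invented
# for illustration; neither vendor's real API is being described.
def to_sms_campaign_payload(tablet_record):
    return {
        "phone_number": tablet_record["respondent_phone"],
        "answers": {k: v for k, v in tablet_record.items() if k.startswith("q_")},
    }

record = {"respondent_phone": "+15551234567", "q_income": "1200", "submitted_at": "2015-12-01"}
payload = to_sms_campaign_payload(record)
# A real integration would POST `payload` to the second tool's endpoint.
```

The point is that a small translation layer like this is usually far cheaper to build and maintain than a whole custom data collection platform.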
So I think a good question for all panelists here is: what is a good way for folks to get initial hands-on experience, to help our participants understand and evaluate the gritty reality of these approaches? I feel like we've kind of been assuming people know what these things look like, but some of our participants might not actually know exactly what we're talking about. So how can they see directly and understand how these things work? Since this is an engineering audience, I'm going to go ahead and suggest, for those of you who are technically inclined, that one way to really wrap your head around exactly what problem is being solved here is to just go open the Open Data Kit GitHub repo and take a look. I'm sure everybody on this call who's used Open Data Kit has strong opinions about it, but it is a mobile data collection tool that does some set of the things we're talking about here. And since it's open source, you can just look at it and see the complexity involved in the variety of use cases it attempts to support. So in addition to taking a look at some of the commercial offerings that have been discussed here, like engageSPARK and SurveyCTO, take a look at the open source version, which tries to do a lot, and I think that will give you a pretty good handle on the scope of the problem. Yeah, to quickly jump in: SurveyCTO is actually based on Open Data Kit, so if you are familiar with ODK, SurveyCTO will also feel very familiar. And you can always play around with ODK, SurveyCTO, and a bunch of other tools. I see somebody actually has a question about this in the chat as well. Pretty much all of these tools have options for free trials or free tiers.
And so often the easiest way is to just sign up for a service and start playing around with it to get a handle on how it all works. For a lot of these tools, there are free versions or at least trial versions, including our own as well as SurveyCTO's. Sign up for our tool and you get 25 cents of free credit to actually design and build a survey in less than five minutes, launch it to yourself, and try it out. But stepping back, also just think about which medium is the most effective for your research goal. Will it be accomplished by having an enumerator go out in the field? Will it be accomplished by quickly doing an SMS or voice-based survey, if the participant has access to a basic phone, et cetera? Start with those, play around with the tools, do small pilots, try things out, and see what works. I think one underappreciated piece to make sure you try when you're evaluating different tools is the data export. You can get back a CSV of all the responses from pretty much anything, but how much work it takes to translate that CSV into something useful that you can publish or distribute to the relevant decision makers can vary quite a bit. This is one of those seemingly small details: what does the actual output of this data collection look like, and how much work is it going to be to transform it? So if I were to sum up, I think we've gotten advice ranging from: if you're an engineer who is proficient and comfortable reading Java code and web applications, you can dive right into the Open Data Kit source code; at the other end, if you're looking just to evaluate as a user, there's a variety of platforms out there that are free to try, and you should be encouraged to go and try them. But let's move on to maybe some more nitty-gritty kinds of questions.
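The data-export point above is easy to test concretely: pull a sample CSV export from each candidate tool and see how much code it takes to get a publishable summary. Here's a toy sketch with a fabricated export; the column names are invented for illustration.

```python
import csv
import io
from collections import Counter

# A fabricated export: one row per submission, answers in flat columns.
export_csv = """submission_id,district,has_phone
1,north,yes
2,north,no
3,south,yes
"""

# Count phone ownership by district straight from the export.
reader = csv.DictReader(io.StringIO(export_csv))
phone_ownership = Counter(row["district"] for row in reader if row["has_phone"] == "yes")
print(dict(phone_ownership))  # → {'north': 1, 'south': 1}
```

If a tool's real export needs pages of cleanup code before a three-line summary like this is possible, that's worth knowing during the pilot, not after rollout.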
So there have been a couple of different questions around what you give up when you go to mobile technology. For example, one participant mentioned that face-to-face contact with participants can be important for getting honest responses, and asked how you can maintain a human element when conducting SMS or smartphone surveys. And more generally, people have questions about compliance, even just finishing surveys, versus people not feeling like there's anything at stake if there's not a person involved. I'd invite everyone to speak to that, but maybe, Ravi, you'd be a good person to start. Sure. Yeah, it's a big question, honesty on surveys, especially when you think about demand bias, which is where the answer may change based on who's asking the question and the situation. There have been studies done, for example, on what happens when somebody of a higher social status asks a question of somebody of a lower social status: the answer will change versus when the question is asked by a peer, somebody of the same or lower social status. There were some studies done around this idea in India. So even in person, it really varies. One of the advantages of an in-person enumerator is that you can also see body language and read facial expressions. On the other hand, with things like SMS or voice-based surveys, people get anonymity, and sometimes people are more honest when they're not looking at somebody else; there's a different kind of demand bias. So there's always going to be demand bias; you just have to be careful about what effect the medium exerts on it. And the best way to find out is just to do some testing and piloting to see how the responses come out. Anyone else want to chime in on that issue?
Just to add one more thing: one thing we've seen, especially around anti-corruption reporting, is that people are much more open and honest when they think it's anonymous and their neighbor can't see that somebody just came into their house and asked them questions. So depending on what kind of questions you're asking, that should also be a deciding factor on the medium. This is more of an anecdote than anything else, but in the situation where you're doing a survey to get a descriptive, broad overview of the situation on the ground, such as the complex humanitarian emergencies we're talking about here: if you're doing a survey that is not in person and does not involve enumerators, one of the problems is that you have to be pretty smart at the outset to figure out what the right things to ask about are. Whereas when you have enumerators, what we found is that a lot of the useful data actually comes from going to the enumerators after the fact and using them as a focus group: okay, here's what you asked about, here are the results; what were people dying to talk to you about that you didn't ask them? That sort of qualitative information, which is hard to capture in a remote or completely automated or text-based survey, can in some circumstances be extremely useful, especially if you don't have the foresight to know exactly what the most interesting findings will be. Thanks, Tom. That's a really useful insight. I'll also maybe extend this question. I know that, for example, at CEGA and DIL, one of the research threads is evaluating both; you don't have to choose one or the other. You can have human enumerators going around and checking on things, and then you can compare that against an automated system that is easier to scale.
So I wonder if any of our panelists have recommendations for where to look for published research, or maybe more informal writing or results, comparing different approaches. Unless another of our panelists does, I'll just say there is some work being done at CEGA at least, and certainly the CEGA website is a great resource for finding references that try to evaluate how well you can rely on different methods of measurement. And if folks think of other options, they can chime in on the chat, or we can come back to that later. Moving on to other questions from our attendees: Carson asked about cost constraints. If you're doing SMS surveys, the end user might have to pay a per-SMS fee; if you're using more of a web-based survey, people might need to pay for some kind of Internet data access. Who pays for that, and how can organizations manage the costs that might fall on participants in your surveys or on organizations working with you? Sure, I can take that. It depends on the medium. With SMS, for example, typically when the participant replies to the SMS, they pay for the cost of the SMS, which, as you can imagine, reduces participation rates. However, in many places it is possible to get a toll-free number or reverse billing, so that the participant is not the one getting charged for the SMS replies; rather, the organization sponsoring the survey is. However, dealing with phone companies in many countries is very painful, and getting a toll-free or reverse-billing phone number can take months, so that's something to keep in mind. The other option is voice, or IVR, surveys, where, if you already have the participant's number, you can just push out the phone call, so the participant doesn't pay for the cost of the call. And in places where you need them to initiate, you can do a missed call.
The participant does a missed call to a phone number, meaning they call the number, let it ring two or three times, and hang up; then the system calls right back, and when they answer, they hear the pre-recorded survey questions and can participate. But typically the survey costs should be borne by the organization sponsoring the survey, in order to keep participation rates high. And just to quickly jump in, also thinking about costs: if you're doing smartphone-based data collection, there's a bandwidth cost for transmitting the data over the Internet. As was just said, that's a cost that would be borne by the organization collecting the data. But there are things you can do to reduce it. The data you're collecting in a smartphone app will essentially be in a format that's maybe a few kilobytes per survey, so the costs don't end up being that substantial unless you're collecting lots of media, like capturing audio and video clips as part of your survey and transmitting those. We do have a lot of users who do that, and in those cases we always tell people that they can change settings on the devices themselves to make sure the video and audio resolution isn't very high definition. That way, instead of a 50-megabyte 30-second video, you have a 3-megabyte 30-second video, which on your computer will probably still look just fine. These are logistical things you can think about before you roll out your survey. Thanks. So I'll add a little bit to a question from, in this case, Ann. Ann is asking: once you get outside the realm of a smartphone, how well are longer surveys going to work? For example, can you do a long survey via SMS?
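The bandwidth arithmetic above is worth making explicit. Here's a rough sketch; the 5 KB-per-survey figure is an illustrative assumption in line with the "few kilobytes" mentioned by the speaker, not a measured value for any particular tool.

```python
def upload_megabytes(n_surveys, form_kb=5, media_mb_per_survey=0.0):
    """Rough upload volume: a few KB of form data per survey, plus any media."""
    return n_surveys * (form_kb / 1024 + media_mb_per_survey)

# 1,000 text-only surveys at ~5 KB each stay under 5 MB in total...
assert upload_megabytes(1000) < 5
# ...while attaching even a 3 MB clip per survey dominates everything else.
assert upload_megabytes(1000, media_mb_per_survey=3) > 3000
```

The takeaway matches the advice above: form data is nearly free, so media settings are where the real cost control happens.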
And are there some basic rules we can use to choose what kind of survey is suitable for SMS and what really requires a richer delivery method? I suppose this could go to all our panelists, but again, I feel Ravi might be suited to start it off. Yeah, with longer surveys, having an enumerator in person is usually a better way of doing it, especially if the survey has maybe 100 or 200 questions. However, we have seen some customers break up the survey into, say, 7 to 10 questions per voice call or per SMS session, once a week or every other week, just breaking it down into smaller pieces. However, the more questions there are and the more frequent they are, the more it tends, in general, to reduce participation or response rates. So SMS and voice are better for shorter surveys. One thing that can be used to increase participation rates is an incentive such as a prepaid airtime transfer: telling people that if they finish all pieces of the survey, they will get maybe three, four, or five dollars of prepaid airtime automatically transferred to their prepaid phone. But of course, you have to keep in mind that that may influence the results of the survey as well. Thanks, Ravi. Does anyone else want to chime in on that? I'll take that as a no. We have just one more question, and I do want to acknowledge that there are lots of great questions; I see that as a good sign of the continued lively energy in the series. So if you didn't get your questions answered today, please do stay plugged into the DIL and Engineering for Change webinar series. On to that one last question, which I think is a nice bridge toward the future. We've been talking about and focusing on how we can use these tools for measurement.
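Ravi's tip about splitting a long instrument into sessions of 7 to 10 questions is easy to script when preparing a campaign. Here's a minimal sketch with made-up question IDs:

```python
def chunk_survey(questions, per_session=8):
    """Split a long question list into SMS/IVR sessions of roughly 7-10 questions."""
    return [questions[i:i + per_session] for i in range(0, len(questions), per_session)]

# A hypothetical 100-question instrument becomes 13 weekly sessions.
sessions = chunk_survey([f"q{n}" for n in range(1, 101)], per_session=8)
assert len(sessions) == 13      # 12 full sessions plus a final session of 4
assert len(sessions[-1]) == 4
```

A 100-question survey at 8 questions per weekly session means roughly a quarter-year of contact, which is exactly why attrition and incentives become the binding constraints.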
But David asks how organizations can use these continuous data acquisition and monitoring approaches... oh wait, sorry, that's not the right question; that got added to the end. The question I meant to ask was about using these technologies for relationship management, and more as part of things that can go out and have an impact on the participants who are taking part in these surveys or data collection efforts. If any of our panelists has examples of using these kinds of tools for relationship management or intervention, you could speak to that. I can speak to why that has not been done in our data collection efforts, and that is specifically because of privacy concerns. We have been extremely hesitant to record any personally identifiable information about any participants. When you're in a conflict zone in particular, you just have to be super, super careful about those sorts of things. So any time we enter data that's going to find its way into a cloud server, we have to strip identifiable information from it, and that makes it awfully hard to use for relationship management. Yeah, and it depends on the context, as I mentioned. For example, we did a program with Mercy Corps in the Philippines where the intention was to get 20,000 survivors of a typhoon to learn how to budget and save money, measured at the bank account level. The bank had a relationship with these people, as did Mercy Corps, and they used the platform to learn more about the needs of the participants in this intervention. The intervention ran over 22 weeks using interactive soap-opera episodes, with lots of questions being solicited to learn more about participants' behaviors and how to change them, and then actually measuring their bank account levels to see whether savings rates were increasing or decreasing.
So that's a great way of building an ongoing relationship with the participants and influencing their behavior. Well, that's very cool, and I imagine some of our participants, including myself, might like to read up on that, so if you have a link, maybe you can post it in the chat log. But I think that about wraps up our time for today. Thank you again to all of our panelists and our hosts, and thanks to all of you who've dialed in today; it's an amazing turnout. I do want to mention that DIL will be hosting a spring webinar series specifically focused on a wider range of data collection platforms, providing deep dives into some of the platforms discussed today, with more demonstrations and detailed overviews of different tools' functionality. You can be one of the first to hear about these by signing up for the DIL Connect system; I'll put the link in the chat window and hand off to our Engineering for Change hosts. Thank you so much, Dave, and thank you to everybody for joining us. We really appreciate you hopping on. This has been a really rich conversation, and we do want to see it continue, so feel free to email us questions that didn't get addressed, or are burning now, via our address, webinars at engineeringforchange.org. For those of you who need the professional development credit, our code is listed on the slide right here, and you can get your certificate by following the instructions on the professional development page; the URL is on this slide. With that, we will wrap up. We encourage you to become members so you can hear about the next webinar, and of course we will be doing more webinars with DIL and are very excited to share those with you as well. This is our last webinar for 2015, so we will catch you in 2016 for the next one. We hope everybody has a great afternoon, evening, or morning, depending on where you are. Take care.