Welcome to TechSoup Talks. Today's webinar is How to Evaluate Your Digital Literacy Program. My name is Kami Griffith. We'd like to thank ReadyTalk for supporting this webinar series. And I'd also like to welcome presenters Linda Hofschire and Eric Ruiz. We'll be doing an introduction with them in just a moment. Before I get started I want to introduce Community Technology Network, the organization that I run. We are presenting this webinar in conjunction with the California Consumer Protection Foundation, which supported us in developing this webinar. Community Technology Network helps people improve their computer skills so they may connect with others, access resources, and apply for jobs, among other opportunities. We believe that Internet access is a right; however, low-income people and older adults are statistically less likely to be in a position to take advantage of this right. We partner with social service agencies, senior centers, and housing developments that have computer centers to help them address the digital divides in their communities. We offer staff trainings, networking opportunities, training resources, a directory of computer centers, and trained volunteers to teach one-on-one and small group classes. We are a sub-recipient of the City of San Francisco's BTOP grant, and we are going to train 3,000 seniors and adults with disabilities over the next 2.5 years. So when this idea came up to conduct a webinar on tracking and evaluation, I was personally interested in this topic because of our BTOP grant. And I know that there are hundreds of groups around the country also looking to figure out how to do this themselves. So again, thank you all for attending. I'd like to welcome Linda and Eric. So Linda Hofschire, can you please introduce yourself? Sure. Hi everyone. This is Linda. And I want to thank you so much for joining us today. I'm a research analyst at the Library Research Service, which is a unit in the Colorado State Library.
And there's another unit in our organization, Library Development, that is the recipient of a BTOP grant. And so the unit that I'm in is doing an evaluation of their grant. So thanks for having me today. Well thanks, Linda. And Eric, can you please introduce yourself? Sure. Welcome everyone. My name is Eric, and I'm currently a consultant at TechSoup Global, where I'm developing the Getting Started with TechSoup webinar series. I'm enjoying this beautiful weather day in San Francisco and looking forward to an exciting program with you all. Great. Thanks Eric. So I'm going to quickly go through the agenda, conduct a quick poll, and then we'll launch right into our program. What we're going to cover in the next 55 minutes is this: we'll have Eric do an overview of evaluation fundamentals, and then we'll hear from Linda about the Colorado State Library's BTOP program. And for those of you who aren't familiar with BTOP, it's the Broadband Technology Opportunities Program. It's funded through the NTIA, which is the National Telecommunications and Information Administration. And it's a really wonderful program that's allowing hundreds of groups around the country to provide computer training. So I'm really excited to hear more about what Colorado is doing, as well as the systems they've developed to track what they're doing, the long-term benefits of tracking their data, and then some of the tools they're using. We'll have 15 minutes at the end for Q&A. Now for a quick poll, because we are interested in knowing who out there is currently managing a BTOP grant. So if you could please submit, I'm going to skip to results so you can see. You can just keep submitting yes or no if you are a BTOP grantee. So is your organization managing a grant from the NTIA? That's interesting. And I'm going to close the poll in 5, 4, 3, 2, 1. There you go. So more than half are not current grantees, so that will help the presenters shape what they're planning on spending more time on.
So with that, Eric, can you get us started by giving us a broad overview of evaluation fundamentals? Sure. The first question you should ask is whether to measure or not to measure. Not that measurement isn't important; rather, it's important to identify what metrics you need to track and whether those metrics make sense. Before you begin, make sure that the program objectives are developed and that the stakeholders agree to the objectives. Agreement will help you to identify what to measure and what not to measure. The reality of ROI measurement is that ROI is not for every organization or program. Not all organizations or programs need to conduct a full ROI study, or it may not be feasible to conduct one because of the added expenses to your program. Second, are you prepared to handle a negative ROI? The common tendency is to view negative ROI as a reflection of the program's ineffectiveness, especially if your program values tangible benefits more than intangible benefits, and we'll talk about those in just a moment. Finally, ROI measurement can add to your program costs. It can take substantial resources, not just on the part of the person conducting the ROI study but also those involved in collecting and analyzing data. Think about this: you may need to include salary expenses for folks who track expenses related to your program, expenses for the development, implementation, and evaluation of data collection activities, or just expenses related to your program in general. The degree to which you track and include expenses as part of your ROI study plays a factor in your final ROI analysis. So if this sounds daunting, don't worry; you can still employ some of the strategies that include only part of the ROI process. Again, it depends on what the program objectives are and what your stakeholders agree to do. So what is ROI? ROI is a measure of accountability. Did your program meet its goals?
As a result of a program, did you see any tangible or intangible benefits? There are two types of benefits that you can measure in an ROI study. The first type of benefit is called an intangible benefit, or a benefit that cannot be quantified in terms of dollars and cents. Think of these benefits as attitudinal: improved confidence, improved self-esteem, or improved attitude. Tangible benefits, then, are those that can be quantified in terms of dollars and cents. As an example, as a result of your program, students were able to apply for and receive X dollars in grant funding. Or as a result of your program, students were able to knit 10 hats that yielded X dollars in revenue. Again, what you measure will depend on what your stakeholders agree to and what your program objectives are. You may or may not need to do a full ROI study that includes the measurement of tangible benefits. If you need to measure tangible benefits, there are two equations you can use. You see, ROI is a mathematical way of comparing the program benefits to program costs. The first formula is a benefit-cost ratio, which compares program benefits to program costs. As an example, a benefit-cost ratio of 2 to 1 means that every dollar spent yields 2 dollars in benefits. Or, for every dollar spent, you receive your initial dollar back plus an additional dollar. Where the benefit-cost ratio is expressed in dollars, the ROI calculation expresses returns as a percentage. For the ROI calculation, you divide the net program benefits, that is, program benefits minus program costs, by the program costs, then multiply that by 100 to give you the percent. So if your program yields $75 but costs $50, your net benefit is $25; you would divide 25 by 50 and multiply the answer by 100 to give you an ROI of 50%, meaning that for every dollar you spent, you received your dollar back plus an additional 50 cents. The benefit-cost ratio for the same program, by contrast, would be 75 divided by 50, or 1.5 to 1. Now, if all of these calculations drive you crazy, don't worry about it. It sounds like most of you may not need to calculate to this extent.
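To make Eric's two formulas concrete, here is a minimal sketch in Python; the $75/$50 figures are just the hypothetical program from the example above:

```python
def benefit_cost_ratio(benefits, costs):
    """BCR: total program benefits divided by total program costs."""
    return benefits / costs

def roi_percent(benefits, costs):
    """ROI: net program benefits (benefits minus costs) as a percentage of costs."""
    return (benefits - costs) / costs * 100

# Hypothetical program yielding $75 in benefits against $50 in costs.
print(benefit_cost_ratio(75, 50))  # 1.5 -> every dollar spent returns $1.50
print(roi_percent(75, 50))         # 50.0 -> a 50% return on each dollar invested
```

Note how the same program produces two different-looking numbers: a BCR of 1.5 and an ROI of 50%. They describe the same result, which is why it matters to tell stakeholders which measure you are reporting.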
However, it's important for you to understand that an ROI study can yield two types of program benefits: tangible, which are expressed quantitatively as a benefit-cost ratio or ROI, or intangible benefits, expressed qualitatively. Again, the level to which you choose to measure will depend on your program's objectives and your stakeholders. Now, in most cases, you may not need to calculate the BCR and/or ROI of your program and instead may focus on the intangible benefits your program produces. Let's take a look at the ROI levels. Before you begin your ROI study, which may or may not include all that fancy math, it's important to point out again that your program objectives must be developed. Often we fall into the trap, even as trainers, where we attempt to create measurements for a program after the program is developed. In other words, we develop the program, then decide what we're going to measure. The challenge is to begin with the end in mind. You see, although evaluation usually appears at the end of the training cycle, the evaluation process really begins at the very beginning, when the program is being developed. Begin with the end in mind and you'll have a more focused program, clear objectives, and clear measures. There are five levels to an ROI analysis. These are based on the work of Phillips and Kirkpatrick, and you'll find references to their materials over the next few slides. We'll talk about each of these levels in terms of key questions, or what you are trying to answer during a particular level of analysis, and methods for collecting the data. Let's start at level one, reaction and satisfaction. The good news is that most of you are probably already conducting a level one analysis of your program through post-course surveys aimed at answering some very key and basic questions. Was the program relevant? Was the program important? Did the program provide new information? You may also ask questions like: was the room temperature satisfactory?
How clean was the equipment, or were the facilities satisfactory? These questions help you to identify the students' reaction to and satisfaction with your program. In addition, these questions can also help you to predict level three and level four related behaviors. Level three deals with application of the materials, and level four deals with the business impact, which we'll cover in just a moment. You can predict level three and level four related behaviors by asking a few simple questions during your level one analysis. Did the program provide new skills? And what are your plans for implementing these new skills? In this case we are both measuring the trainee's reaction to the skills learned and predicting whether or not the trainee will apply the skills in the immediate future, which is what level three measures. We'll get to level three in just a moment. Some common ways you can gather level one information are through online surveys, face-to-face interviews, focus groups, random sampling of students in your class, or even paper surveys after a class. I personally prefer paper surveys, especially for classes where I hand out a certificate of completion. In addition to conducting level one, most of you are already conducting, I would guess, a level two analysis, which looks at learning. Through the data we capture at this level, we are looking at participants' acquisition of knowledge and skills, participants' understanding of when to apply that knowledge and those skills, and the confidence that promotes participants' ability to apply skills after they leave your program. After all, we want to make sure that your participants apply skills immediately after your program in order to prove or track impact. There are two methods you can use to test learning.
You probably already use one of these methods: norm-referenced testing, where students' scores rank them according to the performance of others in your class, or criterion-referenced testing, in which a student's ability to perform a specific task or a set of competencies is measured. In a level three analysis, we are looking at the student's ability to apply the skills and concepts. In this level we are trying to answer the following questions. How effective are participants at applying what they learned? How frequently are participants applying what they learned? And what support are participants receiving? These key questions are addressed after the training program is completed, and the answers can be gathered through online surveys, face-to-face interviews, focus groups, and random sampling. In this case you may survey both a class participant and any other businesses or departments involved in the process. As an example, a non-profit agency focusing on job placement may teach computer literacy classes. To find out if students are applying skills, you could survey the student and survey the employer that hired your student. A level four analysis then looks directly at the impact the application of skills has on the business. In this analysis, the following key questions are answered. First, to what extent did application of learning improve the measures the program was intended to improve? And how do you know it was your program that produced these measures? That is probably the most difficult one to answer, because part of your ROI study is trying to isolate the effects of your training program and relate them to the improvement that the student shows. That may be difficult because there are other interventions that come into play, which is why in some cases you may show intangible benefits versus tangible benefits with a level four analysis. This is where you may end up with more intangible benefits.
While it may be difficult to measure the direct relationship between skills and monetary benefits, you can express level four impact in terms of intangible benefits such as social benefits or attitudinal benefits. Some common ways to gather this information include surveys of partner businesses and program recipients, and even participant surveys, conducted through online surveys, interviews, and focus groups. Again, whether your program measures intangible or tangible benefits, or both, will depend on your program objectives and your stakeholders. For example, if the purpose of your program is to improve lives, then you'll probably measure intangible benefits. On the other hand, if the purpose of your program is to improve the dollar benefits received, then you'll probably conduct a tangible measurement. And finally we get to the last level of measurement, which is level five, return on investment. Remember when I said to begin with the end in mind? We talked about ROI and BCR calculations at the beginning of my presentation. During this level of analysis we are trying to answer the following question. Do the monetary benefits of the improvement in business measures outweigh the cost of the program? In other words, did our financial investment yield a positive or negative financial result? Again, your decision to measure to level five will depend on your program objectives and your stakeholders' request. It's not always necessary to measure to this level, nor is it necessary to conduct the full ROI study. This is especially the case if your stakeholder organization or program is not yet used to analyzing the first four levels we covered. Some programs may only require a level one and level two analysis, that is, an analysis of reaction and satisfaction and a measure of learning. Other programs may wish to see a level three analysis with intangible benefit outcomes such as improved confidence, increased use of skills or concepts, or improved attitude.
Again, it all depends on what your program objectives are and what your stakeholders request. Remember to begin with the end in mind: plan your levels of evaluation between the point of developing objectives for your program and designing materials. Excellent, Eric. Thank you so much for that overview. I know personally I've struggled with these concepts, and I appreciate seeing the levels broken down and hearing them described in that way. Now I'd like to turn it over to Linda. Can you please tell us about the Colorado State Library BTOP program? Sure, so just a quick description of it. Like I mentioned in my introduction, we have a BTOP grant at the Colorado State Library, Colorado Computer Centers: Bridging Colorado's Great Digital Divide. We were awarded the funds this past fall, in 2010, and it's a three-year grant period. The main focus of it is to increase access to high-speed broadband services for high-needs communities. So this encompasses ramping up the equipment that's available in these communities and providing computer training and technical support. We're doing this by setting up a number of public computer centers across our state. We call those PCCs for short, and as you can see we've got 81 right now. Most of these are based in public libraries, but we have a few exceptions. There are a couple that we have in tribal libraries, and we also have a few that are in community centers, but those are still being run by public libraries. And as part of the grant, here at the state library we were able to take on five additional employees to run the grant. So we have a project coordinator, and then we also have three trainers. They are on-the-ground folks who each have a region of the state that they're assigned to, and so they're doing train-the-trainer types of events and providing support as needed on an individual level to each of the PCCs. So that's how we're set up, and here you can see some of our PCCs in action.
These pictures are from some of the launch events that our sites have been having over the past couple of months. A lot of our libraries are just getting up and going with their new equipment, and so they've been having these events to announce to their communities what's going on and all the new services that are available. So what's all the money being spent on? Well, if we look here by the numbers, in some of the libraries there was already an existing computer lab, but we're ramping them up, either adding new computers or increasing broadband speed. In some of the places there was no existing lab, and so a new lab has gone in. So you can see these numbers of the types of workstations that we're adding, and then the big number that we're trying to hit is the 10,000 trainings. We're shooting, over the period of the grant, to do 10,000 individual trainings of people on computers. That could be either through computer classes or one-on-one tutoring or some combination. Here are a couple more pictures of our PCCs. You can see from this top picture that some of them also have laptop carts, so they're able to go out into the community as a kind of mobile lab setup and have greater access to the public that way. So our areas of focus for the grant include everything from basic computer skills to developing workforce and employment. That could be anything from small business development to job-seeking types of skills. We're also focusing on online health resources as well as resources for adult education and ESL, and we're doing that with the help of a number of great partners. Those range from the Gates Foundation to College in Colorado, which promotes higher education, to a number of state and local government agencies. So Kami, that's kind of our BTOP program in a nutshell. Excellent.
So now let's get to what a lot of folks are here to hear about: can you tell us about the systems that you use to track your impact? Sure thing, and I'll answer a question real quick. I see from Ben: does the Colorado program deal with children at all? Good question. For the purposes of this grant we're actually focusing on adults, so 18 and up. So our evaluation has three components. The first one is outputs, and outputs are just those basic numbers: say, how many BTOP-funded computers are in our PCCs, what hours the PCCs are open, how many students are taking classes, just those counts of things. The second component of our BTOP evaluation is class evaluations. This ties into the level one type of analysis that Eric was just talking about. Most of our PCCs offer computer classes, so at the end of each of those classes, they administer a survey so that the students can assess both the effectiveness of the classes and the instructors. And then the third component of our BTOP evaluation is immediate outcomes. By immediate, what I mean is, say, for people who have just taken a computer class, what have they learned by the end of that class, or what tasks have they been able to accomplish? Or in the case of people just coming to use computers in an open lab type of situation, what were their purposes for coming? How did they use the computers? So those are our three main parts. As we started the process of planning our evaluation, we quickly realized that we had a rather unwieldy and huge project on our hands. We were going to have a whole lot of ducks to get into a row if we were going to be at all successful at reporting the numbers that we were required to report, both for grant compliance purposes and for our own research interests, to show impact beyond what the grant requires.
So I'm hoping that for those of you who are on today who are either working on a BTOP project or with some other digital literacy program, but maybe aren't as far along with your evaluation yet, this could give you a jump start. I'm just going to provide a quick description of the things that we've done to plan our evaluation and show you how we're administering it. And I hope that some of this could provide some time-saving tips for you. So the first thing we did when planning our evaluation is we went through the reports that we were required to submit to the federal government with a fine-tooth comb, because we wanted to pick out, piece by piece, every bit of information that we were required to track. And we created a data map. What you're seeing on the screen is obviously really basic and short. Our data map is a lot more extensive than this, but it gives you an idea of what this looks like. So starting up here with the left column, the data elements: those are simply those pieces of information that we need to track, either to be in compliance with our grant or for our own research interests. That can be anything from the broadband speed of the PCCs to the types of classes that are being offered to the hours that the PCCs are open, all of those types of things. Then the second set of columns, reports, captures the two types of reports that we were required to submit to the federal government: the quarterly reports and the annual report. By having those columns in here, we can look quickly at the data map and see what information needs to go where. So we can see: is this going to the quarterly or the annual reports? Or in the case where the columns are blank, then we know that's information that we're tracking above and beyond on our own, but it's not required for any reporting purposes.
And then the third set of columns over here is the collection point. We have a lot more collection points than what I've listed here, but I've provided three as examples. The first example is grant applications. By that what I mean is the applications that the participating libraries submitted to be subgrantees in our BTOP grant. From those we can draw certain pieces of information, such as what their broadband speed was prior to getting the grant money, how many hours they were open, that type of thing. The second collection point is our compliance officer. Through the information that she collects, she's able to track some of these data elements as well. For example, take equipment: money spent on equipment she can track through invoices that are submitted, and those types of things. And then the third collection point is our website. We've created a website for our PCCs, and I'm going to show you that in the next couple of slides. They submit a lot of information this way, and so this allows us to track what's coming in on the website. So having a data map allows us to make sure that all of our bases are covered, that we're not missing any of the information we're supposed to be reporting on. This helps us to ensure that nothing is falling through the cracks. And the other thing with this is that it shows us if we're being duplicative anywhere. In most instances, we don't want to have more than one collection point. There are a couple of exceptions to that. Primarily, as you can see with the case of broadband speed and hours open per week, we collect information for those in two places. That's because the grant application assesses where our participating libraries were prior to receiving the grant, and then the website tracks what those elements look like after the grant funds have been spent.
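A data map like the one Linda describes can live in a plain spreadsheet, but as a rough sketch in Python (the element names and collection points below are illustrative, not Colorado's actual map), the coverage and duplication checks she mentions are easy to automate:

```python
# Each data element records which federal reports need it and where it is collected.
data_map = [
    {"element": "broadband speed",     "reports": ["quarterly", "annual"],
     "collection_points": ["grant application", "website"]},
    {"element": "hours open per week", "reports": ["quarterly"],
     "collection_points": ["grant application", "website"]},
    {"element": "equipment spending",  "reports": ["quarterly", "annual"],
     "collection_points": ["compliance officer"]},
    {"element": "class topics",        "reports": [],  # tracked for our own research only
     "collection_points": ["website"]},
]

# Nothing falls through the cracks: every element needs at least one collection point.
uncollected = [d["element"] for d in data_map if not d["collection_points"]]

# Flag possible duplication: multiple collection points should be deliberate,
# e.g. a before/after pair like grant application vs. website.
multi_source = [d["element"] for d in data_map if len(d["collection_points"]) > 1]

print("uncollected:", uncollected)    # uncollected: []
print("multi-source:", multi_source)  # broadband speed, hours open per week
```

The two flagged elements here are exactly the deliberate exceptions Linda describes: collected once before the grant and once after, to support a before/after comparison.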
But otherwise, we want to make sure that we're not being duplicative, so that we're not overburdening our grant sub-recipients. So just a couple of other things I want to say about how we laid the groundwork for our evaluation before I talk in a little more detail about our three components. One of the things that was important for us to do was to pre-test. By that, what I mean is making sure that the surveys we were creating and our website, which tracks information from our PCCs, and the way we've set them up, actually make sense to the people who are going to be filling them out. Pre-testing can be anything from something informal, perhaps just emailing a few of your computer centers and saying, hey, this is a survey we have for your users; do you think it's going to make sense to them? Or, on the more formal end, you can do more of a complete survey of participants. For example, as we're creating our outcome surveys, we're planning to select a handful of our PCCs this fall, and they're going to test out those outcome surveys in their classes. That way we can get feedback from them, find out whether these are working, whether the questions are confusing to people, and iron out all the problems prior to deploying the surveys on a wide scale to all of our PCCs. The other thing that was important for us to do when we laid the groundwork was to establish a baseline. By that, what I mean is we want to know what our participating libraries and their computer centers were doing before they got their money, even things like what their broadband speed was, as I mentioned on the previous slide. By collecting those numbers before they got their money, we have something to make a direct comparison to once the money was in place, so we can help to show that impact. So let me talk a little bit more about how we are collecting information about outputs.
Basically we created a website, and you're seeing a screenshot of it here. This is a website that all of our PCCs use. Someone from each PCC reports information here once a month on all of the outputs that we're tracking. That way, from the back end, it's easy to pull the data. It goes into a MySQL database and we can compile it for our grant reporting purposes. I'm showing you the website not so much to talk about the mechanics of it, but more to show you how we went about organizing the information. We were very purposeful in this, to try to make it as intuitive as possible for the PCCs. So if you look up here at the top where this orange circle is, we have several tabs on the website, and we basically determined that outputs could be divided into three categories for our purposes. The first one is facilities, and that's the screen that you're looking at right now, where we collect information about, say, the workstations, hours open, that type of thing. One thing I wanted to point out here, too, is that whenever possible in this collection instrument, we tried to pre-fill information. So for example, under hours, we knew that prior to receiving the grant, this PCC was open 53 hours during the work week. So we entered that for them, so that they weren't burdened by having to do that. Going real quickly through the other tabs up here: you can see we have training programs, which assesses the computer classes that are being given, and open access, which assesses the outputs relating to open lab use and one-on-one tutoring. And then finally, we have this narrative tab, and that gets more into outcomes. That's for those open-ended questions that can't just be answered with numbers; it's a place where we give the PCCs a chance to share stories about the impacts that they're seeing. So moving on to our class evaluation: that's the second component of our overall BTOP evaluation.
This also is presented through the website that we created, and this is the class feedback form. Again, I want to point out here that we've pre-filled the fields when possible. So for example, this is the name of the PCC where the survey is being administered. That way we're getting consistent responses and we're not burdening the users with having to fill that out. We administer this online, and we also have printable PDFs for the PCCs if that works better. And then the third component of our evaluation is those immediate learning outcomes. We have chosen to assess those in two ways. The first one is for the computer classes that are being offered at most of our PCCs. When we looked across our 81 sites, we decided that most of the classes fell into four basic categories, those being technology skills, employment, business, and health. So we created four surveys that assess each of those types of outcomes, and then the PCCs can decide for each of their classes which survey is most appropriate for them to use. We also have created an outcome survey for the open lab users. That's more general, just looking at the various reasons why they might have come to use the computer center, maybe for entertainment or to email or things like that. These surveys we're not doing on an ongoing basis; we're actually just doing them for two one-week periods across the three years of the grant. And we're doing these both online and through printable PDFs. So I'll just show you a couple examples of what those look like. This is our employment survey, and as you can see it is set up as a checklist. It lists a whole bunch of different options, and these are just a few of them, for things that students might have learned during the class or tasks that they might have accomplished. And then we're also assessing demographics through that survey.
And here's just a quick example of our general outcomes survey, which again is for those people coming to use the computers for open lab time. This just presents a variety of different uses that they might have had for the lab, as well as demographics. So one last topic that I wanted to address in discussing our evaluation is the issue of assessing the longer-term outcomes, because I know that for a lot of us that's the big thing that we really want to show. We don't simply want to demonstrate that people learned computer skills in a class, although that's great. We really want to say, hey, they learned these skills and that helped them to get a job. We're choosing to limit our outcomes analysis for this project to just those immediate learning outcomes, primarily because of limited resources. We just don't have enough to be able to do more of the over-time type of analysis. And I know that most of us are facing the issue of not having enough resources. But I wanted to make just a few suggestions of ways that you could do this, at least on a small scale, if that's what you're really aiming to do. One thing to do is to select a sub-sample. So say that you're doing a huge project: you've got thousands of participants, multiple sites. Just pick a sub-group of that. With our 81 PCCs, maybe we just pull four of them and do an in-depth analysis. That would allow us to get a whole lot more specific with them. We could work with the types of information that they're presenting in their classes and the resources that they have in their labs, and really tailor the evaluation to that instead of using the more general outcome surveys that we're using across the board. How would we choose who to sub-sample? Well, a couple of criteria. One might be if you have some sites that tend to have repeat users. For example, maybe you have some senior centers.
That would definitely be a good option, because you know that you're going to be able to track people easily over time in those types of setups. Another good criterion is proximity, so pick sites close to you. In order to do that intensive analysis, you're going to have to be connecting with them a lot, and obviously that's easier to do if they're not across the state. So a couple of ways that you can assess outcomes over time, if you choose to do that, are with pre- and post-tests. That simply means that instead of just administering an outcome survey at the end of a class, you do it at the beginning of the class too. See where people stand prior to taking a class, or even better, prior to taking a series of classes. That way you can compare where they stood before to where they stood after the class. Another approach is to do a case study. I think taking a more qualitative approach, perhaps doing in-depth interviews with a small handful of repeat users of your center, can really help you get some of that detailed information about the impacts that the centers are having on their lives. And I think all of this really comes together to help tell the story behind the numbers. Numbers are really powerful, but they're made even better when we can flesh them out. So I would definitely encourage you to try, either with interviews or by using the case study approach a bit, to provide a human voice to all of the data that you're collecting. So that's our evaluation program in a quick overview. Very good. Well, I can see with all those locations and all the different programs that this is a pretty big project, and it's nice to hear the road map of the framework that you're using. So how will this data benefit you past the end of the grant? Sure. I think one of the biggest things is that now that both we and our PCCs have collected all these numbers, they can be used for advocacy purposes.
These are just a couple of fun examples of how other places have done this. This one is from New York State, showing numbers from a day in the life of the library. All these big numbers make a visual impact, especially on people outside of our fields who don't know how many people are using public computers or how many people are coming into a library or computer center. So use those numbers to show off and to advocate for yourself. Here's another example. This is from a statewide library campaign that's going on in Colorado right now called What's Next. It's helping to promote awareness of the 21st century library, and it's being used for advocacy purposes. So that's just another way to use numbers visually to grab people's attention. Like I said, we've got all these numbers, and in addition to using them for advocacy, this absolutely gives our PCCs a wealth of information to draw on as they are budgeting for the future, developing programs, and making those decisions about what to focus on. Now they really have a lot of data to fall back on. In addition to that, I think this has really established a program of ongoing evaluation for our PCCs. Those who hadn't done it before have now gotten into the practice of doing it, and so they have a toolbox of resources to draw on for the future. They can expand on the surveys that we've created, and perhaps also apply them to other areas of their computer center or library. So it sets people up to make evaluation a permanent part of their centers. Great. And I can't overstate the importance of the advocacy part, because these BTOP grants will end at the end of September 2013, and some will end sooner than that. None of us want to see the funding just stop entirely. But if we can't prove that these programs have made an impact on the community, then we can't prove to foundations, corporations, and individual donors that they should continue supporting these programs in some way.
So if for no other reason, capturing those stories matters because it shows how very, very important these programs are to the success of our communities and economies. I do want to hear about some of the tools and resources that you are using. Sure. Just real quickly, I'll highlight a couple here. One is called the Impact Survey, which was developed by the University of Washington. If you are interested in assessing more of the immediate learning outcomes, like what we're doing, I would highly encourage you to check out this site. Our surveys were definitely inspired by it, and they have a bunch of questions set up under different types of topics that people might be learning about in computer classes, as well as examples of demographic questions. A second resource is our BTOP site. It has a lot of information about our evaluation, including an archived webinar that goes into more detail about the website that we created for tracking our outputs and how that all works. But there are also a lot of other resources there that go beyond just evaluation: stuff about training, partnerships, promoting your PCCs, all of that is there. And then the third tool: when we first started, some of our smaller PCCs did not have an online system set up for doing computer reservations for their users. This is an example of a tool that's out there. It's called PowerLine, it's free, and it enables libraries or other computer centers to have an online computer reservation system. So those are just a few ideas. Very good. And I'd like to put this out to the crowd: if you want to recommend a tool or system that you're currently using to track, please do submit that via the chat, and we'll collect it and put it out in the post-event message as well as recirculate it via the chat. We do have time now for questions and answers, so I'm going to start with some that have come through already.
And if you have any questions that have been swimming around in your head, please submit those now and we'll try to get to as many as we can. So here's a question that came up earlier in the webinar. Elise has asked: are you aware of a state or local government model that takes evaluation and measurements from a variety of programs and rolls up the data into a higher-level report? I do want to recognize that we have Susan Burkholder from the Colorado State Library answering questions as well, so she may pop in and verbally answer some questions. So I'm putting that out to the presenters, not sure who is best to answer that question. Don't all answer at once. And if this isn't something that anyone feels comfortable answering, then we can try to bring it to an online discussion. Susan or Linda? This is Linda. I'm not aware of something like that specifically for outcomes. I mean, there are collections. For example, there's a public library survey that all public libraries around the United States are required to participate in, where those data are collected at the state level and then move on up into national summary data. But I'm not sure about anything similar for evaluation and outcome types of topics. Okay. And Charlene is suggesting that we break that question down. So Elise, if you want to be more specific with that question, I can ask it again. But I'm going to move on to another question, and this is for Eric, though it could go to Linda as well. Alyssa is interested in how you establish a monetary value for your immediate outcomes. I think she's addressing this to Linda, but then Eric, add anything you'd like. So Linda? Yes, this is Linda. You know, we are not doing that for the types of outcomes that we're assessing. One thing that I would encourage you to check out, and I think this perhaps was discussed in the chat, it was going by kind of quickly so I'm not sure.
We have an ROI calculator on our website that lists the types of things that libraries could look at when calculating their return on investment; that might help give you some ideas for looking at outcomes of a computer center. I'm not sure if that link was shared, but if not, we can definitely pass it on. And Eric, did you want to add to that? Yeah, I guess it's difficult to isolate the program benefits and translate or calculate that into an ROI or a BCR (benefit-cost ratio). I'd like to go back to what Linda said earlier about telling the story behind the numbers. I know you want to try to focus on some kind of cost-benefit analysis, but really it sounds like the deeper impact is going to come from the stories that you tell behind the numbers, and really doing case studies on the folks that you're interacting with. Okay, I'm going to move on to a comment slash question that Ben had: we are in a project with 8 out of 19 PCCs deployed and no real evaluation component has yet been implemented. Do you see this as insurmountable? Hi, this is Susan Burkholder with the Colorado State Library. I had to smile when I saw that, because I've been thrown into projects like that before, and it is not insurmountable. You may need to scale back what you will be evaluating because, for example, you wouldn't really have a chance to do a pretest or get baseline numbers right now. But there definitely are things that you can put into place with your PCCs to collect the information that you need for reporting, that helps you understand what's going on with your project, or that just helps you understand the field better. So I just wanted to throw out my general encouragement, but the others might have some more specific suggestions. And Linda, did you have anything you wanted to share? Yeah, I think it's never too late, and I guess I'm wondering from Ben how far along those 8 PCCs are.
But the fact that not all of them are deployed does give you a chance to start more at the ground level with those that are still getting up and running. So I would encourage you to think about what's feasible in the time that you have left on your grant. Okay. So, Ben, if you feel like adding anything back, we can try to respond to that. But I'd like to move on to a question that Spencer had, and this is in regards to providing a narrative: how do we present case studies in an effective manner for funders that might not want a lot of narrative? That's a great question. I think you want to be careful to use narrative to provide exemplars, but not to let it become the focus of your message. So if you're collecting a lot of numbers around anything, even something as basic as how many people were in your classes, then just a sentence or two from someone you've interviewed about how that class impacted their life is a way to, again, flesh out the numbers. I would look at taking case study information in that way: as a way to illustrate the numbers instead of it being the focus in and of itself. This is Eric. I just wanted to add on to that. I had to laugh when I saw that question, because you don't want to lose support before you even begin sending out your message. Sometimes, even though we collect all of this great information and have access to all of these great metrics, we want to give as much information and data as we can, but it comes down to the question of what it is that the stakeholder really wants and what the measurable program objectives are that you are trying to meet.
And so for stakeholders who really just want the nitty-gritty nuts and bolts, like Linda said, go back to the basics: basic information, basic data, utilization of your program, whether you can show a difference in scores between pre- and post-test learning, stuff like that, the real nitty-gritty, and then hopefully that will open the door for them to ask more questions. And this is Susan, and I wanted to add to that too. I'm glad that Eric addressed the other stakeholders for the evaluation instead of just the funder. Maybe the funder doesn't want to read a lot of narrative, but narrative can be really effective in any policy or advocacy work that you do. Also, in our evaluation and data collection, we recognize that we have a lot of stakeholders, including our own staff. And so the online tool that we developed lets our staff view the information that is submitted by the libraries in their jurisdiction, and then someone like me, who is collecting it all to provide to the funders, can view it all. Also, for example, the surveys that participants fill out we're providing both on paper and online, and when they are submitted online, they can be viewed by the library staff that have access to the online data collection tool. So we're trying to make information available to the people involved in the project at different levels. And so maybe the narrative might not work with the funder, but it certainly works for other audiences. That leads me to another question from earlier, which I believe was answered via the chat by Lauren. Shirley's question: are you successful in getting all the libraries to report online, and how do you handle the libraries who don't? And I'm sure this covers any paper tools or Word documents or any kind of reporting, not just the online tool. So what are the ways that you get folks out in the field to make sure to send in their numbers?
Well, I told Linda once at the beginning of the project that every time I send out an email reminder asking people to send me information, I want to give them a reason why it's important to them. So, for example, I would say things like: well, I'm going to be providing you an individualized annual report at the end of the year that will show all the numbers that you're giving me, but that report is only as good as the numbers that you give me. And then I tried to explain how that report could be something that you take to your library board or to your partners. Maybe you take it to your partners as a way of saying: did you know that all these people are coming in asking for QuickBooks? Wouldn't you, Mr. Accountant, like to come in and volunteer your time to do this? So it can be a way to recruit volunteers for the classes, or a way for the library staff to advocate for more resources around the public computer center in their library. For a lot of libraries this is a new aspect of their work with the public, and so it's a way to document the time and resources and energy that they spend working with the public on these computers, and a way to say: let's look for a way to sustain these. So that's the big picture. The small picture is that I send out a lot of email reminders and make a lot of phone calls. And for those that seem to have a problem, I say: let's log in together and walk through this, and this is why we're asking this question. So I just try to be very accessible and supportive throughout the process. Great. So this will be the last question, and it's from Diane: can you all speak to the question of self-assessment of skills pre and post versus actual skills assessment pre and post? This is Linda again. That's funny. Actually, we were just having a conversation about this earlier this morning.
And I think you have to be careful in deciding which way to go with this, because with a lot of adults, actually testing them on their skills can be a turn-off. It might be insulting; it might be intimidating. So you really have to be careful about judging whether your users would respond well to a direct test of their skills. In a lot of cases I think it's better to have them self-evaluate: how great is your understanding of Microsoft Word prior to this class, and then after the class, how great is your understanding now, that type of thing. That might be a lot more palatable, and you'll get more willing responses out of that type of approach. Hi, this is Eric, just to add on to that. Yeah, I totally agree. Some adults, especially if they don't understand why you're testing them, or can't see the immediate application of the skills being taught to how those skills will benefit their lives, may have a real resistance to testing. It may work in some environments; in others it might not. So one way to get around it, dovetailing on what Linda was talking about, is to do a self-evaluation of confidence: perhaps a pretest or pre-evaluation asking them about their confidence levels, but relating those confidence questions to the specific objectives that you're teaching in the class. So for example: how confident are you at creating a form letter? Instead of saying something like: how confident are you at using Microsoft Word? Get down to the objectives that your class is teaching, then ask confidence levels related to each objective. Do a pre and post, and then you can measure the growth in confidence, which, if you couple it with your level one evaluation on reaction and satisfaction, can help you predict whether or not they're going to actually apply those skills later. Susan, did you have anything you wanted to add? No, they covered it well.
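[Editor's note: the pre/post confidence approach Eric describes can be summarized numerically with very little code. Here is a minimal sketch in Python, using hypothetical objectives and ratings, showing average confidence growth per class objective.]

```python
# Hypothetical pre- and post-class confidence ratings on a 1-5 scale,
# collected per objective (per Eric's suggestion to ask about specific
# objectives like "create a form letter" rather than "use Microsoft Word").
pre  = {"create a form letter": [2, 1, 3], "use mail merge": [1, 2, 2]}
post = {"create a form letter": [4, 3, 5], "use mail merge": [3, 4, 4]}

def mean(ratings):
    return sum(ratings) / len(ratings)

# Average growth in confidence for each objective: post mean minus pre mean.
growth = {objective: mean(post[objective]) - mean(pre[objective]) for objective in pre}

for objective, g in growth.items():
    print(f"{objective}: +{g:.1f}")
```

The per-objective growth figures can then sit alongside satisfaction (level one) results when reporting to stakeholders.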
Okay, fantastic. Well, that is all the time that we have. Again, thank you all for attending. If there are additional comments or questions that you'd like to put out to the community, here's a short URL that you can use to log into our community forums. And we'd like to thank ReadyTalk for making it possible for us to offer these webinars for free. They have donated the use of their system to help TechSoup expand awareness of technology throughout the nonprofit sector. ReadyTalk helps nonprofits and libraries in the U.S. and Canada reach geographically dispersed areas and increase collaboration through their audio conferencing and web conferencing services. And before I wrap up, I do want to mention that Amy Lucky sent out a link, which was re-sent via the chat, and I'll include it in my post-event message to you all this afternoon. There was a conference in Ohio that I attended that has a great deal of resources for community technology providers and PCCs, on evaluation and whatnot, and there was a wiki that included a bunch of resources. So that will go out. It's an awesome resource. Again, my name is Kami Griffith from Community Technology Network, and I'm really happy that we were able to provide all this information for you today. I'd like to thank the presenters, Linda and Eric, for your great presentations, and Susan as well, and also Stephanie and Sarah for working on the chat. This was a great presentation and I'm really happy. So thanks, all. Have a wonderful day. Bye-bye. Thank you.