I'm Donelle McKinley, and this is Chris Dempsey. I'm from Victoria University of Wellington; Chris is from New Zealand Micrographic Services. Max Sullivan sends his apologies today. Max was going to be speaking with his Victoria University Library hat on but unfortunately can't make it. I do know a little bit about the project, as does Michael Perry, who's in the audience, so if anyone's particularly interested in what Max was going to be talking about today, we'll have some time at the end for questions and we'll do our best; otherwise you're welcome to contact Max afterwards. So it's a wee panel, a panel of two, but we'll do our best to keep you entertained.

How we're going to run this session: I'll start with my presentation and then we'll have questions after that; Chris will follow and you'll have time to ask him questions as well; and then we'll use any time at the end to open up the floor to discussion generally about crowdsourcing. If you've got any questions related to projects that you might have on the boil, you're welcome to throw them our way and we'll do our best to field them. So let's get started.

As I mentioned, I'm from Victoria University. I've just started my PhD in the School of Information Management. My research focuses on non-profit crowdsourcing, and crowdsourcing in the cultural heritage sector is particularly dear to my heart; within that, my focus is on user interface design. There are two main things that I'd like to talk to you about this afternoon. First, I'm going to show you why I think evaluating crowdsourcing websites early in the design process can be worthwhile, and second, I'm going to talk about one method of evaluation that's inexpensive and relatively easy to use. Before I get started I must give credit to David Ellis and colleagues, who gave a presentation at Museums and the Web a few years back and inspired the title of my talk: why evaluation isn't a party at the end. It did occur to me just a few moments ago that they did stick me at the end, so I don't know if they were trying to say something.

Crowdsourcing. Just very quickly, so that we're all on the same page, I'll do a really quick crowdsourcing 101. Crowdsourcing outsources tasks traditionally performed by specific individuals to a group of people or a community through an open call. Crowdsourcing author Jeff Howe explains that it isn't a single strategy but an umbrella term for a highly varied group of approaches. Crowdsourcing projects in cultural heritage and academic institutions share two common goals: the first is to create or enhance digitised data, and the second is to engage the wider community. Volunteers are currently performing an increasingly wide range of tasks, and it's rather a long list, but I am going to go through it, because you might have heard about a few projects without being aware of the greater potential for volunteers. Some of the tasks they're involved in include tagging, text correction, transcription, contextualisation, classification, curation and data collection, description, text encoding, mapping and georeferencing, identification, interpretation, and translation. On the screen here is a new project driven by the very clever people at the New York Public Library, called Building Inspector. It invites volunteers to assist with cleaning up historical map data, and if you haven't had a play with it, I do encourage you to.
If you're coming to THATCamp tomorrow, maybe we can have a play with it. The instructions are laugh-out-loud hilarious; they're just brilliant.

The success of non-profit crowdsourcing initiatives relies on meeting two key objectives: sufficient participation and quality contribution. Meeting these objectives requires an understanding of contextual factors such as volunteer motivation, as well as effective project and system design, and evaluation and refinement to achieve optimal performance. For those of you who aren't so familiar with crowdsourcing, a typical non-profit crowdsourcing website includes a branded home page that describes the project and invites participation, and then web pages for instructing volunteers and performing tasks. Additional web pages are commonly used for volunteer registration, presenting detailed information about the project and the project team, updating project progress, volunteer accounts and profiles, project communication, and in some cases presenting the project outcome for public use, which might be open data for reuse.

This is another recent project, called Linked Jazz, driven by the Pratt Institute School of Information and Library Science. In this project volunteers classify relationships between jazz musicians to contribute to a linked open data resource. Again, this one is very cool; if you're going to look at any project in the next hour or so, I encourage you to have a play with Linked Jazz.

Some of you might have heard me use this quote before, but I think it deserves restating. Usability author Steve Krug talks about the reservoir of goodwill that visitors bring to a website, and explains that each problem they encounter lowers its level. The role of evaluation in the design and optimisation of websites is to identify real and potential problems so that they can be remedied in redevelopment. Usability is a broad concept that refers to how easy it is for users to learn a system, how efficient it is for them to use, and how pleasant it is to use. Some of the website elements that can impact on usability include content, language, readability, website navigation, arrangement of page elements, consistency, visual appearance, page load speed, and the number and complexity of processes required to complete the desired action. In crowdsourcing terms, these kinds of design decisions can impact on the effectiveness of our invitation to participate, the effectiveness of our task instructions, and the incentives that we put in place. They can also impact on the length of time volunteers spend on the site, efficient task completion, and participant return rate: whether they return at all, and how often. Evaluating early in the design process can help to better meet the needs of your volunteers and potentially avoid major website fixes later on.

I'm going to give you just one example. I'm involved with the New Zealand Reading Experience Database (NZ RED), a crowdsourced history of reading project being developed at Victoria University of Wellington. Based on the UK project, which was launched in 1996, NZ RED will collect reading experiences of New Zealanders from the 19th century to the present day. Volunteers will be invited to identify instances of reading in diaries, letters, biographies and memoirs, from private collections and from libraries and archives, and then contribute their discoveries to the online database.
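To make that concrete, here's a minimal sketch in Python of what a single contributed record might look like. The field names are entirely hypothetical, drawn from the kinds of details just described; they are not the actual UK RED or NZ RED schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReadingExperience:
    """One volunteer-contributed record. All field names are hypothetical,
    illustrating the kinds of details described above, not the actual
    UK RED or NZ RED schema."""
    reader_name: str                    # who was reading
    source_type: str                    # e.g. "diary", "letter", "biography", "memoir"
    evidence: str                       # the passage describing the reading experience
    year: Optional[int] = None          # when the reading took place, if known
    place: Optional[str] = None         # where it took place, if known
    contributor: Optional[str] = None   # the volunteer submitting the record

# A deliberately invented example record.
record = ReadingExperience(
    reader_name="A 19th-century settler",
    source_type="diary",
    evidence="Read aloud from the newspaper after supper.",
    year=1875,
    place="Wellington",
)
```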
The fastest and easiest way to develop a user interface for NZ RED would be to use the existing UK RED as a template, but our project team had a few issues with this. This is just a very small extract from a very, very long page: the task interface that UK RED volunteers use to input reading experiences is currently a lengthy, somewhat daunting one-page online form. Furthermore, the current UK website hasn't been subjected to usability testing, and there's no requirements documentation available. So this raises the question: how effectively and efficiently is the UK RED task interface supporting rich data collection and volunteer participation? Rather than blindly adapt the UK RED template for our purposes, we were keen to pursue this question.

Earlier this year, as part of the Master of Information Studies programme at Victoria, I conducted a research project on NZ RED. The aim of the project was to produce high-level functionality and usability requirements for an NZ RED task interface, and to determine the extent to which UK RED actually met these requirements. My findings will inform the design of a working prototype to be developed in the next stage of the NZ RED project, and this in turn will undergo user testing before being incorporated into the overall NZ RED website design. I employed several data collection techniques to produce the requirements. Today I'm not focusing on the requirements development as such; I'm really just going to focus on the evaluation of the UK RED task interface. If you're interested in the requirements development, there's a link to the full report on my website.

One of the tools I used to evaluate the usability of the UK RED task interface was a set of heuristics. These are commonly thought of as rules of thumb, but in this context I'd prefer you to think of them as design principles. Heuristics are short statements about what a system should do. They're generally accompanied by a more detailed explanation, and in some cases by examples as well. Heuristic evaluation is known as a discount evaluation method because it doesn't involve end users, which in this case would be our crowdsourcing volunteers or potential volunteers. Involving end users can be time-consuming and potentially costly, whereas heuristic evaluation requires minimal time and resources. So, in a nutshell, evaluators are guided by a set of heuristics to determine the extent to which user interface elements comply with the design principles. Any breach of a heuristic is recorded, and commonly rated using a four-level severity scale of the kind developed by Molich and Nielsen.

The set of heuristics that I used was developed in 2012 by Petrie and Power. They were developed to support the design and evaluation of highly interactive websites, such as those requiring users to input information. For those of you who aren't familiar with heuristics, I'll give you an example from that set so you know what I'm going on about. One of these heuristics is: avoid duplication or excessive effort by the users. This is accompanied by the explanation: don't ask users to provide the same information more than once, and don't ask for excessive effort when this could be achieved more efficiently by the system. Guided by these 21 heuristics, I was able to identify potential difficulties associated with the UK RED task interface that might be impacting on the user experience, and then report how these might be remedied.
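To show what that recording looks like in practice, here's a minimal sketch in Python of how an evaluator might log breaches against a severity scale. The severity labels and the example finding are invented for illustration; only the heuristic text is the Petrie and Power example quoted above.

```python
from dataclasses import dataclass
from enum import IntEnum
from collections import Counter

class Severity(IntEnum):
    """A four-level severity scale; the labels here are illustrative."""
    COSMETIC = 1
    MINOR = 2
    MAJOR = 3
    CATASTROPHIC = 4

@dataclass
class Finding:
    heuristic: str    # the design principle that was breached
    location: str     # where in the interface the problem occurs
    description: str  # what the evaluator observed
    severity: Severity

# An invented finding, for illustration only.
findings = [
    Finding(
        heuristic="Avoid duplication or excessive effort by the users",
        location="Data entry form",
        description="Contributors must re-enter their details for every record",
        severity=Severity.MAJOR,
    ),
]

# Tallying findings by severity gives the kind of summary reported below.
print(Counter(f.severity.name for f in findings))  # Counter({'MAJOR': 1})
```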
I identified 32 potential usability problems, related to interactivity, content, physical presentation, and information architecture. Of these, I rated six as major or high-priority problems, 23 as minor or low-priority problems, and three as cosmetic only. To validate the results of my heuristic evaluation, I then conducted an online survey of current UK RED volunteers and potential New Zealand RED volunteers, who were asked to identify usability problems associated with the UK RED interface. Survey respondents, of whom there were over 100, identified 23 problems: 13 major problems, nine minor problems, and one cosmetic only. Consistent with my heuristic evaluation, major problems related to physical presentation, content, and interactivity, and most problems were again related to interactivity.

I said I wasn't really going to talk about requirements, but I'll whip through them quickly for anyone who's interested. The outcome of my research was seven high-level functionality and usability requirements for an NZ RED task interface that supports volunteer participation and quality contribution. They were: minimise user effort; support integration of the task with research processes; enable new visitors and contributors to understand what the task involves quickly and easily; support accurate and controlled data entry; be easy to use for people only reasonably confident with the web; support flexible, structured data entry; and support bilingual data entry. I'll come back to the data entry requirements with a small example in a moment. My research found that the UK RED task interface only partially meets four of the seven requirements. Its limitations are partly symptomatic of its age: website design, and potentially user expectations as well, have evolved since UK RED was last redesigned in 2009.

So the first takeaway from my project was that using an existing crowdsourcing project template, even if it's essentially the same concept as the one you're going ahead with, is possibly not the most effective way to serve the needs of your volunteers and your project objectives, and the only way you can really determine that is by subjecting it to some method of evaluation. The second takeaway for me was that heuristics can be an effective and efficient method of evaluation. As non-profit crowdsourcing websites are in fact highly interactive websites that in most cases require users to input information, the heuristics developed by Petrie and Power could be used to support the design and evaluation of other crowdsourcing user interfaces.

However, they do have some limitations. Firstly, they were developed using a sample of government websites, which, unlike crowdsourcing websites, you could argue users would most likely visit out of necessity rather than choice, and possibly only once. Secondly, Petrie and Power's heuristics underplay several key elements that can impact on volunteer participation, such as the value proposition behind the project, and motivation and incentive. Thirdly, they don't incorporate any relevant crowdsourcing research, such as the tips for crowdsourcing published by Rose Holley around 2009-2010, or the guidelines published by Michael Lascarides in 2012. So the aim of my PhD research is to develop a set of heuristics for non-profit crowdsourcing that supports user interface design and evaluation practice. The initial design of my set of heuristics will draw on Petrie and Power's heuristics and the other relevant sources that I've mentioned, and these will then be evaluated and refined over the course of next year.
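Coming back to the data entry requirements: accurate and controlled data entry, and flexible, structured data entry, can pull in different directions. One way a task interface can reconcile them is to control the fields that have a known shape, like dates and source types, while leaving the evidence itself as free text. Here's a minimal sketch in Python; the field names and rules are hypothetical, not the NZ RED design.

```python
from datetime import date

# Hypothetical controlled vocabulary, using the source types mentioned earlier.
SOURCE_TYPES = {"diary", "letter", "biography", "memoir"}

def validate_submission(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record can be accepted."""
    problems = []
    # Controlled: the source type must come from a known vocabulary.
    if record.get("source_type") not in SOURCE_TYPES:
        problems.append(f"source_type must be one of {sorted(SOURCE_TYPES)}")
    # Controlled: a year, if supplied, should be plausible for the project's scope.
    year = record.get("year")
    if year is not None and not (1800 <= year <= date.today().year):
        problems.append("year should be between 1800 and the present")
    # Flexible: the evidence is free text, required only to be non-empty.
    if not str(record.get("evidence", "")).strip():
        problems.append("evidence text is required")
    return problems

print(validate_submission({"source_type": "letter", "year": 1908, "evidence": "..."}))  # []
```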
I'll be making early versions of the heuristics available to practitioners for use, and I'll be looking for feedback, so if you're interested please see me later, or you're welcome to follow my blog for updates. That's it from me. Thank you very much for listening. As I mentioned, there's a link to the full NZ RED report on my website; I'll also be making this presentation and the slides available there, and I'm happy to answer any questions.