This talk is, as the title says, about crowdsourcing, and when I received the invitation to give the talk, first I thought, oh my god, I have 45 minutes or 30 minutes to speak, and there are so many things to talk about, and I'm very passionate about this area as well, so I would never be able to cut it down to 30 minutes. Then I started doing some research to see how this community, how libraries, how research libraries in particular, already apply crowdsourcing, and I found so many exciting and interesting projects, so then I had my second moment of panic, and I thought, oh, actually they know everything about the topic, so there isn't much I can tell them that they don't know about. So this talk is basically the result of that struggle, and my hope is to convince you that crowdsourcing is not just a very hyped word, but that it can actually help in many of the content management tasks that you are dealing with on a daily basis. However, crowdsourcing is also a very complex field, so in this talk I'm trying to give you an overview of the different types of crowdsourcing, of all the approaches, the richness of the space, which you could take into account when you set up your own crowdsourcing project.
I also want to give you a feeling for the types of tasks you should use crowdsourcing for, and the ones for which you should rather invest in technology, or train your in-house experts to carry them out, because as much as it seems a solution to almost everything online, crowdsourcing might not be the best solution for your particular problem, especially if we take into account very mundane things people might not want to talk about, like time and accuracy and budgets and so on and so forth. Then towards the end of the talk I want to introduce you to the vision that we have in one of our research projects at the University of Southampton, which is related to what we call social machines: online, large-scale assemblies of human and computational intelligence, which come together in some sort of utopian scenario to make sure that we solve all the problems in our daily and private lives, and that we advance economy and society. So that's basically the executive summary. Let me get back to crowdsourcing as it is. How many of you have heard about this term before? So, yeah, as I was expecting, basically everyone. In a nutshell, the term has been around for more than eight years now, and it stands for a general framework by which you have a problem and you solve it via an open call. And when I say open call, I mean you don't use typical outsourcing mechanisms, in which you approach another organization to carry out a task and execute a project, but you use an open call to a large network of potential contributors. And you can take this concept and apply it from the enterprise setting in which it was originally defined to online worlds, to social networks, to public sector information, to consultancy, marketing, and so on and so forth. But the basic thing to remember about it is that the people who answer the call for contributions, the people who will help you to run your project, are previously unknown.
You want to have lots of them, or as many of them as you need to carry out the project, but you don't know them in advance. And since you don't know them in advance, there are different ways in which you could interact with them or influence their behavior, as compared to when you have an enterprise, or when you have a group of students in the university and you let them execute the same project. There is a different type of social or organizational structure. There are different types of things you know about their performance. There are other means you have to set up in order to encourage them to behave in a certain way. There are many, many forms of crowdsourcing. The umbrella on the left-hand side is a rough attempt to classify some of the approaches you might have heard of in the literature. The red and the green bits distinguish between the granularities of the tasks. You have something called macro tasks, which stands for any type of project which you just give out, publish, without giving specific instructions about how the actual project will be executed. The classical example is that you want to design a logo for your website. You want to outsource the design of your website itself. You want to find the name for a new product. So you're looking for creative ideas from people you don't know in advance, and you're asking the crowd to come up with solutions. The other type of crowdsourcing, if we talk about it in these terms, is what we call micro tasks. And as the name says, this stands for those types of tasks that are very small, atomic. An example is, for instance, tagging. You want people to tag the papers that you have in your repositories so that you can take advantage of all the beautiful search features you have heard about in the previous talk. A micro task in that particular example would be the task of tagging this particular paper. Then you can specify it further.
So we're talking about finer-grained tasks, and the point there is that you would want to have many people engaging with these tasks at the same time, in parallel. The reason why people do that is because this type of work can be broken down into many, many smaller bits that can be executed independently. And if things work the way people tell you in the literature, then you would end up tagging hundreds of thousands of papers in almost no time, which is something that, for various reasons, you wouldn't be able to do in your organizations, because you have other things to do, because it's too expensive to do, and so on and so forth. The other parts of the umbrella are contests and crowdfunding. I will leave crowdfunding aside. Contests are complementary to micro and macro tasks, because they just describe a form of engaging with the crowd which assumes a certain type of reward. There, basically, you will reward only the top three or so of the contributors, and everyone else will go home and be happy that they participated. This contest model can be combined both with micro tasks and with macro tasks, and, as I will explain later on, there are other reward models you could apply as well. So already from this picture, I hope you understand that when you talk about crowdsourcing, let's crowdsource a project, you already have a series of specific methods and approaches you could rely on, based on the task, on the type of project that you have, and on the rewards that you are willing to give out. What are the challenges and opportunities for research libraries, from my point of view? I have gone out there and tried to see what types of projects could be useful and interesting for this audience. I have started with something which maybe some of you know, which is an older project called Distributed Proofreaders, in which the idea was to improve the results of OCR processes.
So you have digitized text, and then there are mistakes, errors that can happen in the automatic process, and then you ask experts or volunteers to help correct those mistakes. A similar type of project is a project in the Zooniverse citizen science platform. I don't know how many of you have heard about it; Zooniverse is a platform for what we call citizen science or crowdsourced science that runs at the University of Oxford. They have 20 or so projects in various scientific disciplines. One of them is called Operation War Diary. Similar to Distributed Proofreaders, they have digitized content, pictures of old war diaries, that they would want to be organized and tagged and enriched in a particular fashion. What else is out there? Something I found very interesting is called Metadata Games. This is a slightly different approach from the usual "I put the content out there, this is a task that will advance humanity, come, volunteers, and help me do it." This is an approach which actually exploits our human inclination to spend 20 minutes of our day playing silly games. The approach is called games with a purpose, where the purpose is understood from the point of view of the designer of the game. You can understand them as quizzes, casual games, very simple games, played at a very fast pace, where you are asked to answer questions like, I don't know, what is the capital of Latvia? Is it this city or this city? When was it founded? And so on and so forth. So you play this game, but actually the replies, the answers that you provide, are used as metadata for some digital artifacts in the back end. And Metadata Games is an initiative to develop technology to create such games easily. And it's been used by a number of libraries, for instance, to build image tagging games.
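The games-with-a-purpose idea can be sketched as "output agreement", the mechanism popularized by the ESP Game: two players tag the same image independently, and only the tags both of them produce are trusted as metadata. Here is a minimal sketch of that rule; the tags and the round shown are hypothetical, not taken from Metadata Games itself.

```python
# Sketch of the "output agreement" rule behind games with a purpose:
# two players see the same image independently, and a label is accepted
# as metadata only when both players produce it. Data is hypothetical.

def agreed_labels(player_a_tags, player_b_tags):
    """Return the tags both players entered, normalized to lowercase."""
    a = {t.strip().lower() for t in player_a_tags}
    b = {t.strip().lower() for t in player_b_tags}
    return a & b

# One hypothetical round for a single image in a tagging game:
tags = agreed_labels(["Cat", "grass", "sunny"], ["cat", "dog", "grass"])
print(sorted(tags))  # only the matched tags become metadata for the image
```

The agreement requirement is what makes the output usable without an expert in the loop: a player cannot game the system alone, because a tag only counts if an independent partner also produces it.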
So in terms of opportunities, by engaging with the crowd, be that volunteers, or paid contributors, or participants in a contest, you have an affordable and, most of the time, quite accurate way to enhance your information management services. This means you could outsource tasks such as image annotation, labeling, or even search to some extent, if you apply social search features. You could also capitalize on a scholarly practice that is getting more and more important in different disciplines, which is citizen science, which applies crowdsourcing ideas to engage with citizens as part of established scientific workflows. You could also, but I will not talk about this in this talk, create a better customer experience. Visitors of online websites expect certain things from any website. As in the previous talk: if Google does it, then of course your search feature will also have to support it to some extent, because people are used to it; if Amazon allows you to interact and give reviews and so on and so forth, this is something that you will need to do as well. So crowdsourcing is one way to engage with your customers and give them the feeling that they are part of the process, that their wishes and thoughts and needs are listened to. The challenges, however, are to understand what would actually drive participation, especially in cases in which you want volunteers, people who volunteer their time to do it. Many projects in crowdsourcing apply similar principles to the ones that I will introduce, but do not reach critical mass. And one of the things I will go into in more detail is what this critical mass actually is. You have this power law distribution that you also see in Wikipedia: 10% of the customers, of the visitors of the website, are the ones who contribute 90% of the content. This is not something that should be discouraging.
This is something you should be aware of, and you should think about what kind of contributions you could realistically expect from the long tail, from the 90% who visit the website maybe once and, I don't know, correct one spelling mistake in a Wikipedia article, and from the 10% who actually put lots of effort into editing the articles. These types of contributors are likely to be driven by different types of rewards and motivation, and one of the main challenges when you set up a crowdsourcing project will be to understand that. In this talk, I will understand crowdsourcing as what is called human computation. Human computation means that we resort to human intelligence, to human contributions, to enhance the results of automatic algorithms. If I go back to, well, let's say extracting keywords from a research publication: this is a task that could be executed totally manually. You give someone the paper and you say, okay, why don't you just tell me which keywords you think are important? And then probably you ask more than one person, aggregate the results, and select the top five keywords that you want. That's one way to do it. Alternatively, you could say: I apply some sort of machinery, an information extraction algorithm, which will tell me, oh, I think these are the top 20 topics I would extract from this document, using models like the TF-IDF one that was mentioned earlier. And then you apply human intelligence just to enhance the results of that algorithm. And this is the classical human computation scenario. This is opposed to something like creative processes, where, for instance, an organization would reach out to their customers to come up with a new combination for the McDonald's burger. This is actually done, and it's one of the most successful examples. So they reach out to people and say, create your own burger. Then they have some sort of voting mechanism.
And then whoever wins has their burger, at least in Germany it was done this way, actually sold in shops in some area. So I will not talk about this. From this point on, I will introduce you to the main dimensions of a crowdsourcing project, the types of things you have to think about when you set it up. And I will use one example, which is the crowdsourced collection of data citation information. This was an experiment we ran, completely by coincidence, two months ago at the Semantic Web Conference. And the task was to collect, well, we wanted to know: we are obsessed with publishing linked data, we have many of our data sets published as linked data, and we want to know who else is using these data sets in their research papers. So we wanted to collect information about these data sets and about the versions of the data sets that are used. For instance, I don't know if you have heard about DBpedia, which is a fancier version of the Wikipedia infoboxes, basically. If you search for DBpedia papers, Google Scholar would give you 9,000-something publications. There are also 40 or so versions of DBpedia already available. So we would want to know, as a community, in order to understand the research, to reproduce the results and so on, which version of DBpedia is used in each paper. So we set up this website, which you can find at that URL, just to see what types of crowdsourcing methods and mechanisms we could apply in order to get this information. And this is the example I will use in the following. So you want to set up a human computation project: what are the things you should think about? Well, first of all, you have to consider what types of tasks, what type of work, you want to outsource in the first place. Because in theory every task could be amenable to crowdsourcing, but some tasks benefit more from it than others.
And these are typically those tasks which do not require lots of expertise, but rely on human skills that all of us have: visual recognition of objects, language understanding, communication, and so on. Then you also have to think about who you actually reach out to. Is it everyone? Is it a casual visitor of your website? In practice, it is not everyone: depending on where you disseminate and promote the project, you will not reach everyone, so you will actually have a bias in the types of people that will come to the website and maybe engage with the project. How are you actually going to outsource the task? There are a number of things that need to be taken into account here. I mentioned micro tasks. In this scenario, which is typical for human computation, you break down the actual work into smaller units and you have people working on them in parallel. I mentioned Zooniverse: one of their projects classifies millions of images of galaxies that have been collected by the Oxford Observatory. And they want to know what types of galaxies are there. So they ask people: look at this image; you see some sort of white sphere in the middle of the image; what is this? Does it look more like a star? Does it look more like a plate? Does it look more like a UFO? So they capitalize on our basic object recognition skills and collect that information. Well, that's not science in itself, but it helps them train some image recognition, image analysis algorithms that then lead to scientific discoveries in astrophysics. This type of task can be broken down into smaller units, because you have one million images and every user of their system can classify however many images they want. When you have something more complex than that, you will need to coordinate between the individual contributors. And this will require more effort from your side.
And it will also slow down the process. What kind of complex workflows do I mean here? Well, it doesn't have to be complex in the sense of something that cannot be written down as an algorithm. Take, for example, the task of translation from one language to another. You have a paper in English and you want to have a German version. What would you do? Would you give the paper to one person and say, translate it? Would you give it to the crowd and say, everyone, why don't you translate this paper, and then I'm going to collect the results and see how I merge them and what I do with them? What if the document is 500 pages long? Then you actually want to break this down and give people, I don't know, maybe one page each to translate? Then again, you'll have to think about: do they have the right context to make the right translation? You might also want to go for a setting in which you ask someone to do the translation, but you don't do the correction and the editing yourself; you ask the crowd again, in a second step of the workflow, to go through the translations and pick the best one. So this is an example of what they call a complex workflow in crowdsourcing, which is everything that cannot be bluntly broken down into smaller units, like the million-images-of-galaxies example that I brought up earlier. Right, I think I mentioned this already: Zooniverse has something like 27 projects at the moment, some of which are probably also interesting for this audience, most of them in astrophysics, and they have a million users. So one million people go to their site and spend hours and hours of their day classifying galaxies, labeling specific objects in videos, transcribing war diaries, transcribing weather logs, and so on and so forth. Now let's get back to the example I had before. So remember what the task was?
The task was: these are the papers, these are the data sets; let's establish the links, which paper uses which data set. That's one way to define the task, but it's not the only way. You always have to think: is this the most appropriate representation of the domain? What am I going to show to the user? Is this enough? Or is it maybe overwhelming? In our experiment we had just the bibliographic item, the title, the author, and the publication venue, and some information about the data set. This might be enough for someone who is an expert, who comes from that area. For someone else, you might want to show them an abstract. You might want to show them the first page. You might want to show them the full paper. What if the full paper is not available? What if it's very long, and then the whole process is slowed down? So one of the things you have to be careful about is how you are going to represent and specify the task to the contributors. What happens if we don't know the list of potential answers? In our example it was quite clear, because we were thinking about one particular data set. What if you don't know these data sets? What if you ask people: look at this paper and tell me which data sets are used? You will have no means whatsoever to know whether the results are correct or not. You may want to merge them. You will have to deal with different spellings. You will have to deal with different ways of writing down version numbers, and so on and so forth. Who would be the crowd in this case? Well, the people who know the papers and the data sets, or even the authors of the papers; they require almost no context information whatsoever. You can also think about anyone else in the field. Anyone knowledgeable of English, for instance. Anyone with a computer. Anyone with a cell phone. Again, if you want people to execute the task on their cell phone, you'll have to think of various other representations of the task.
You don't show them the full abstract on the mobile phone. So you'll have to think about: what crowd would be the most appropriate? What task am I actually trying to solve? And pick the options that are right for you. And no one says you should go for just one option. How would this go in our experiment; what types of task models can we imagine? One option would be to let some algorithm identify the data set names and then offer these candidates to the volunteers to select from. You could also use something like paid micro tasks. Have you heard of Mechanical Turk? Yeah. So these are platforms where you pay people a few cents per task to solve tasks like that. You could use them to do a first screening, and then you could have experts, or even the authors of the papers, sort out the challenging cases. You could organize a competition via Twitter. You could have, I don't know, a question of the day: which version of this data set does this paper use? You could involve the authors, saying: we found out that this, this, and this data set are used in your paper; is this correct, yes or no? Let's move forward. Two more dimensions of crowdsourcing. So we had the task, the crowd, and the way you actually design the workflow. You also need a way to validate the results, because whatever comes in from the volunteers is not by default accurate, for various reasons. Maybe you didn't specify the task well enough and they didn't understand it. You have lots of spam on some of these platforms: people who just go there, especially in the paid cases, and, well, there are farms, micro-task farms, companies in some parts of the world that employ people just to click through and somehow solve these tasks on these platforms. So it is a big problem, and it is up to you to actually collect the results from the crowd and decide which ones are correct or not.
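A standard way to make that decision is redundancy plus majority voting: give each micro task to several contributors and accept an answer only when a strict majority agrees. A minimal sketch, with hypothetical votes:

```python
# Minimal sketch of validating crowd answers by redundancy and majority
# voting. Each task is assigned to several contributors; an answer is
# accepted only if a strict majority agrees. The votes are hypothetical.
from collections import Counter

def majority_vote(answers, threshold=0.5):
    """Return the most frequent answer if its share exceeds threshold, else None."""
    if not answers:
        return None
    top, count = Counter(answers).most_common(1)[0]
    return top if count / len(answers) > threshold else None

# Five workers link one paper to a data set version; one vote is spam:
votes = ["DBpedia 3.8", "DBpedia 3.8", "DBpedia 3.9", "DBpedia 3.8", "spam"]
print(majority_vote(votes))  # "DBpedia 3.8" wins with 3 of 5 votes
```

With no clear majority the function returns nothing, which in practice means the task is reposted or escalated to an expert rather than trusted.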
Of course some cases are easier than others, and I mean by that those cases in which the set of potentially correct answers is known. If I know all 292 open data sets published as linked data, that's all there is to it for me, and I will look for different spellings of those data sets, and that's it. If the answers are open, if you have free text as contributions, then of course you'll have to think of other things. And then there are various ways to optimize the process, in particular the way you devise and engineer your incentives. In our case the validation was quite straightforward, as the set of potential answers is fixed. What you want to do there as well is to have more than one person giving the answer to the same task, because you want to have redundancy, and because you want to apply something like majority voting to identify those answers that are likely to be correct. Now, something that is less straightforward is related to incentives, and of course everyone would want to have volunteers engaging with the site, executing the task, giving ideas for new services. But the problem with volunteering is that it's highly context specific. There is one successful Wikipedia, and there were 200 other projects at the same time, with similar scope and similar technology; they failed, and we actually don't know why. It's not applicable to arbitrary tasks either. So people are very fond of galaxies, but maybe not so fond of research papers and data sets. This is why you actually go to things like contests or Mechanical Turk. If you decide to pay your contributors, it is in general affordable, because you pay something like four to five cents for each correct answer.
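To get a feeling for the budget side, here is a back-of-the-envelope sketch. The numbers are assumptions for illustration, chosen to be close to the figures in the talk: 9,000 papers, five cents per judgment, and ten judgments per paper to cover redundancy and validation.

```python
# Back-of-the-envelope cost of a paid micro-task campaign. The inputs
# (9,000 tasks, 10 redundant judgments each, 5 cents per judgment) are
# illustrative assumptions, not exact figures from the experiment.
def campaign_cost_usd(tasks, judgments_per_task, price_cents):
    """Total campaign cost in US dollars, computed from a per-judgment price in cents."""
    return tasks * judgments_per_task * price_cents / 100

cost = campaign_cost_usd(tasks=9_000, judgments_per_task=10, price_cents=5)
print(cost)  # 4500.0 -- on the order of the five thousand dollars in the talk
```

Even at a few cents per answer, the redundancy needed for validation multiplies the bill, which is exactly the trade-off against hiring a single expert that comes up next.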
However, if you want to do this as part of your internal processes, you'll have to think about optimizations, because you don't want to overspend: for the 9,000 papers I mentioned earlier, we would have to spend something like five thousand US dollars, with all the redundancy embedded and all the validation afterwards. Well, for five thousand dollars you can actually hire someone who has knowledge in the field, and you can expect a different level of accuracy from them. Analytics can help. I don't have much time to go into details on that, but this is some work we've been doing for citizen scientists from Zooniverse, where we give them basically a dashboard of what their users are doing. So they can see when users leave, or predict when users will leave, study the performance of the individual contributors, and then decide how to act accordingly. In the experiment that I mentioned earlier, the important thing to think about is who will benefit from the results and who will own them. So assuming I actually create all these links between data sets and papers: is it something for the public good? Is it something that I, as a researcher, just crowdsourced to the community, and then the results will be free? Wouldn't it maybe be a better model to work together with a publisher, who has maybe all the papers, who already has a community, who has a website in place that is highly visible? But then the question will be: will people come back to the site and engage with the experiment if the ownership of the results is with the designer? And so on and so forth. You could use different crowds for different tasks as well; I mentioned this a bit earlier. This is some work we have done for the International Semantic Web Conference, in which we have successfully combined experts in a contest and then paid micro-task workers to curate DBpedia. But what you want to have in the end, as I was saying at the beginning, is to use humans.
Humans are a very valuable resource, and they can be quite unreliable as well; you cannot really predict how they will behave. What you want is to use them only for those cases which are really engaging and interesting for them, but are also beneficial for your project. So what you want to have is a symbiotic system that uses computers and humans to solve specific tasks. And this is already very, very technical, but just to give you an idea of what, for instance, the database people are doing: they are extending SQL, and their query engines, with crowd operators. So for particular things, like when you want to merge those two tables over there, part of it will be classical SQL query execution, but in some of the cases they will ask the crowd. And I would like to conclude the talk by revisiting what I said earlier. I hope I gave you an overview of, or at least a feeling for, the diversity of this field and the types of questions that you want to consider when you set up a crowdsourcing project. I haven't had time to talk about sustaining engagement, and this is more an art than a science, and I'm a scientist and not a marketer. But I would want to reinforce the last point that I made: computers are still very good at many things. They might be imperfect at others, but the ultimate aim should not be to have crowdsourcing applied over and over again for the same types of tasks. The aim should be to collect, to use, to capitalize on these contributions to improve information technology, which is what we call, in our research project, the age of social machines. Social machines is a term that was introduced by Tim Berners-Lee early on, in which he lays out this vision of social machines enabled by the web, in which people do the creative work and the machine does the administration.
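The crowd-operator idea can be sketched in Python rather than SQL: a join predicate that resolves the trivial comparisons automatically and escalates only the ambiguous ones to a human. This is a hypothetical illustration in the spirit of systems like CrowdDB, not the actual API of any query engine; `ask_crowd` stands in for posting a real micro task.

```python
# Hypothetical sketch of a hybrid human/machine join predicate: trivial
# comparisons are resolved by the machine, ambiguous ones are escalated
# to the crowd, and clear mismatches never cost a human judgment.
def hybrid_match(a, b, ask_crowd):
    """Decide whether two record values denote the same entity."""
    a_norm, b_norm = a.strip().lower(), b.strip().lower()
    if a_norm == b_norm:
        return True                      # resolved automatically, no cost
    if a_norm in b_norm or b_norm in a_norm:
        return ask_crowd(a, b)           # ambiguous: ask a human
    return False                         # clearly different, no crowd cost

# Simulated crowd confirming two spellings name the same data set:
print(hybrid_match("DBpedia", "DBpedia 3.9", ask_crowd=lambda a, b: True))
```

The point of the design is exactly the symbiosis described above: the expensive, unreliable resource, the human, is consulted only where the machine genuinely cannot decide.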
Thank you very much for the invitation again, and if you have any questions I'd be happy to take them. Thank you. Do we have any questions for Elina? Thanks a lot for your talk. Irina Kuchma, EIFL. I have a question: what do you think about the term crowdsourcing? Because there was a comment some time ago from François Grey that it has this allusion to cheap labor, menial tasks, so maybe it's better to call it crowd-crafting, because then it would be real innovations by volunteers. Thanks. I look at it from a systems engineering perspective, so I don't care so much how it is called as I care about the principle, and the original principle did not necessarily involve these aspects. I think this comment, which is valid, refers to micro-task platforms like Mechanical Turk, but even then you have to imagine: when Amazon introduced this in 2004, this was quite a revolutionary concept, because it brought computing back to the humans. I mean, the first computers were humans; they were actually women, and they were hairdressers. The first human computers, during the French Revolution, were used to build logarithmic tables.
So there was a very simple process: apparently hairdressers in those turbulent times didn't have much to do, so they all came together and were paid to do these very simple computations. This was the first example, back then, and then Amazon, in 2004, hundreds of years later, introduced it again on the web. No one actually knew what this was going to turn into, and then you have seen this development. Some economists say, or at least in the discussions I had, that this development wasn't quite unexpected; it is a particular type of labor market that they are very fond of studying, in which you see developments such as, you know, supply and demand. You have a very adaptive way of defining the prices, the prices per task; on Mechanical Turk they are now at an average of five cents per task, from one cent five years back. The workers organize themselves as well; there is, for instance, a union of Mechanical Turk workers. The platform provider is now much more careful in terms of ethical considerations, legal and taxation issues, and so on and so forth. So, in short, yes, there are some of these cases, but from my personal point of view it's still a very rapidly changing field, in which we will see some directions, typical of labor markets, that will try to deal with the situation. How many of you have a crowdsourcing project? Can you please raise your hand? Ten percent, yeah. And how many of you might like to start a crowdfunding project? A crowdfun... oh, a crowdsourced project, yeah. Oh, and there are also some people who do, yeah. Well, I hope that you can take some of these insights into account when you set it up. We have another question there. Yes, thank you. Adam Sofronijević, University Library of Belgrade. You mentioned that creativity is an important part in distinguishing between machine work and human work; can you elaborate further on how we define creativity nowadays and in the future? Because we have
machines driving cars, winning Jeopardy contests, everything that five or six years ago was considered to be a creative job. And if you could elaborate further on which jobs would be creative in an academic library in this sense. Right, thank you. Okay, well, we walked a lot during the presentation, so I was stuck here. Right, creativity: for me, from a crowdsourcing design point of view, creative is everything that you don't know how to do, where you don't have a cookbook saying first you do this, then you do this, then you do this, and you can pretty much estimate the range of outcomes for each step. Creative is something that you cannot pin down, write down as some sort of algorithm that can be engineered. But that's my very limited view of the world, and the point of view I have taken in this talk. Now, the second part of the question was creative jobs in academic libraries, right? Well, I think there will still be lots of creativity, and room for humans to contribute, when it comes to the design of new services. I think data processing should be outsourced to machines; most of the time it is not done today because our algorithms, our tools, are just so frustrating that in the end you just decide to do everything in Excel. But this shouldn't be the case, and the hope is that through crowdsourcing projects we will manage to solve at least some of these challenges. The creative bit will still remain in what types of services I offer to the visitors, how I design them, how I interact with them. Human communication is also something that is very difficult to replace. So I think in that regard we shouldn't be so worried; we actually should be very hopeful that we will have the time to think about new
and exciting services and things we could do as opposed to doing content management thank you again so now we have a second round of poster presentations could i invite please the visitors of posters to form a lineup because we're going to have uh lots of posters in only 10 minutes um and while we're forming the lineup may i invite you uh because after these 15 minutes we have a coffee break um you can afford the queue if you go to the reception desk and cast your vote for the best poster you've seen and you're always free to go downstairs and have a look at them again um so please do cast your vote um and now we'll be ready for our posters mechanism is very simple one minute of poster so i would like to first speaker please hello everyone my name is Tanya Stoyanova from New Bulgaria University in Sofia you can see our iceberg actually this is a Bulgaria but it's a metaphor because Bulgaria is a very warm country and our repository was launched in 2005 and is the first repository launched in Bulgaria it has two unique features depositing is voluntary and entirely made by authors and the second one uh the original content is deposited on its uh unique language uh our team performs uh monitoring of the metadata that is uploaded into the repository all the time so that to improve the depositing and self-archiving literacy and trainings also a six month uh survey has been conducted to find out what are most common mistakes made by authors during the depositing and if you want to see the results and to learn more about our open access policy you are welcome to visit us a deposter area thank you very much and congratulations for the conference in the city of Riga good afternoon colleagues my name is Thomas Baldwin i'm from the m25 consortium of academic libraries we're a consortium of higher education and research libraries in the southeast of the united kingdom we recently ran a JISC funded project to examine the consortial purchase of ebooks that project was ebass 25 
We identified four possible patron-driven acquisition business models within the project: purchase, rental, usage, and evidence-based. The evidence-based PDA model was selected as the most desirable for our consortium, because it combines user choice with the possibility of alignment with libraries' collection development policies. The poster briefly notes the pros and cons of three of the models, and then gives more detailed implications of the chosen evidence-based model as M25 transforms this project into a service. Thank you.

Good afternoon, everybody. The Slovak Centre of Scientific and Technical Information is a national information centre and specialized scientific library; its key role is information support for science and research in Slovakia. We carry out several national projects co-financed from EU resources, also known as the so-called structural funds. Our first and main project is named the National Information System to Promote Research and Development in Slovakia, with the subtitle Access to Electronic Information Resources, known to the academic sphere and the whole country under the acronym NISPEZ. The project has four specific goals, and our poster introduces three of them: first, centralized provision of access to electronic information resources; second, a national search portal for science and research; and third, a central databank of Slovak information resources for research and development. Please come and see us; we are ready to answer your questions. Thank you.

My name is Rosenbarber, and I work at Royal Holloway, University of London. As you may know, in the UK we have a strong focus on open access, largely driven forward by the open access policies of different funding bodies. Our poster looks at how one such policy, the Research Councils UK policy on open access, has been implemented at Royal Holloway: what internal processes we have in place to support academics in meeting the requirements, what advocacy we do, and how we distribute and monitor the funding for article processing charges. If you would like to know more, or would like to share your experiences, please come and talk to me. Thank you.

Good afternoon. My name is Mireia Pérez Cervera, and I come from the Open University of Catalonia in Barcelona. In the UOC Virtual Library, our website is the single point of interaction with our users. In this poster we try to present how we've moved from an old-fashioned website, unusable and unintuitive, to a new library site organized using a user-centred design method. To do so, we've conducted benchmarking analysis, interviews, focus groups, and user tests. We've then been able to identify problems and develop new actions, like including better explanations of processes and services, organizing access to the content in terms of user needs rather than in terms of library tools, introducing one simple point of access to resources, and structuring the content and information for each profile, so that each user can know the specific conditions of their profile. Come and visit poster number 16; I will be downstairs during the coffee break. Thank you.

My name is Yolanda Ivanova, from the Riga Technical University Scientific Library. The poster's theme is the creation of a common territorial complex for Riga Technical University. The Riga Technical University Scientific Library, which is still a combination of the past and the present, must be viewed with a new perspective and understanding if it is to fulfil its potential in adding value to the advancement of the university's academic mission and in moving with the university into the future. The poster's main goal is to reflect the new and combined services, along with their added values, in the united RTU Scientific Library complex. If you have any questions, find me during this conference and we will discuss it. Thank you.

Hello, my name is Luna Liga, and I'm coming from the University of Maribor Library. Our library decided to support the strategic goal of our university, which is to increase the usage of the e-learning environment. Approximately 50 percent of the students, that is nine thousand students, are active users of the Moodle e-learning environment, so we decided to meet their needs for library services inside the e-learning environment as well. We started with a pilot this year and finished a showcase with one subject at the medical faculty. We have done this on two levels. The first level is the entry page of Moodle, with external links as an offer for the students. The second level is the "My courses" page, where we use the functionalities of the EBSCO Discovery Service and LibGuides to create a recommended reading list, which the professors maintain, and to establish a tighter connection between students and our subject librarians. Thank you very much for your attention; I'll be downstairs at number 18. Thank you.

Hello, my name is Tatiana Timotyevich, from the National Library of Serbia. The title of our poster is "How we made ETDs more visible". In Serbia we have a lot of institutional repositories, but last year we implemented a national repository of ETDs, which is called DoiSerbia PhD. Why did we do it? First, to promote our institutional repositories, and second, to make our theses more visible. How did we do it? Well, that is the question. We did it by assigning a DOI to all our theses, and also by equipping all records for theses with a lot of useful links. We also deposit metadata to various open access services such as The European Library, DART-Europe, OpenDOAR, etc. Thank you for your attention, and if you have any questions, feel free to ask.

Thank you very much. So we have an actual poster; it's in the room. My name is Matilde Panest, and I work for the Library of Medicine of the University of Lausanne, in Switzerland. My colleagues and I worked on authorship roles in the biomedical field. We created a new model that defines ten roles in the publication process. These roles reflect the reality of the division of labour during the publication process, and the model is different from the one currently used at the Faculty of Medicine. In the second phase of our research, we conducted interviews with researchers to confront our model with their reality. The model we created, as well as the results of our interviews, are available on our poster. Please come and see it, because it's not here, obviously. Thank you a lot.

Hi, I'm Mariko Willems from the LIBER office, and I want to explain our poster, number 15, from Europeana Newspapers, by giving you the premiere of the Europeana Newspapers animation. "Newspapers: I find something unexpected every time I look at one. It's the..." It's time to make some practical announcements. Oh, really? Yeah, that's the best solution, yeah. "Newspapers: I find something unexpected every time I look at one. It's the diversity. There's a whole world inside their pages: the news, of course, but also cartoons, letters, shipping reports. They're endlessly useful to researchers like myself. Through them I can track the spread of ideas, the evolution of language, the aspirations and achievements of nations. Technology is dramatically changing their value. Not long ago I'd have had to go through fragile paper copies one issue at a time; now millions of newspapers are freely available online, like the collection created by Europeana Newspapers: ten million historic newspaper pages from across Europe, all searchable on a single website. The possibilities make my head spin. So much content, and so valuable. I'm very excited about the trends I might now be able to see: uncovering how new words spread from one country to another, or comparing regional opinions on elections or wars. Advertising could tell me about the evolution of consumerism, or of graphic design. The real digital revolution is that in two million pages, the structure and content are being tagged. I'll be able to distinguish headlines from articles, to search for Paris the city instead of Paris from Greek mythology, for Biela the river instead of the famous comet. I see it as a liberation of the newspaper."

Okay, I think we have two small announcements to make, but before that, I'd like to have another round of applause for all of our poster presentations.