Thank you. Hello, let me just start my video. Hi, everybody. What we are going to present is what we did at the Open University with regards to the FIT4RRI project and our use case within it. Our use case was a little different from the one presented earlier, because it did not relate to responsible research and innovation per se; instead, we decided to carry out a research experiment in a responsible and innovative way. That is why we decided to look at how to text and data mine big scholarly data, and further down in my slides I will explain how we did this responsibly and innovatively. The problem we were facing is this: think of yourself as a student or an academic in a university environment, where readers can access the literature that their university subscribes to. Right now, people log in with a username and password and get the results they are looking for, which could be the articles and papers that help them with their studies. But it is not possible to act in the same way and get the results you are interested in in a machine-readable form. The reason this is needed is that millions of publications are published every year; according to one research study, there are over a million publications per year. Even though open access makes more text available in a machine-readable form, the open access percentage is still very low, so there is a very big gap between the content that is open access and the content that is closed access. At the same time, it is very difficult to systematically analyse this information at a large scale.
It is also very difficult for universities right now: publishers send universities contracts containing clauses where text and data mining is mentioned, but it is hard to understand and analyse these contracts, so researchers cannot be 100% sure that what they do is legal and that their machine access can be done in a way that will not, for example, get them sued in the future. This is a barrier and an obstacle to text and data mining large corpora of information. What we would like to see is machine access to the full body of research for text and data mining. There are of course text and data mining challenges, mainly three of them, but because we did not have plenty of time to conduct this experiment and had a specific time frame, we made sure to deal with the two most important ones: the legal issues and the technical issues. Because we are in the UK, the legal issue was easily solved: the law created an exception on the 1st of June 2014, which means that anyone who wants to text and data mine a large corpus of data can do so, provided it is not for commercial purposes but for research purposes. So in reality we only had to deal with one of these TDM challenges, the technical issues. To proceed, we had already done some work in the past, before coming to this FIT4RRI project, in which we tried to see how we can aggregate content from open access publishers, but also from traditional publishers who make some of their papers available in an open access way: to add a little open access jargon, the hybrid gold open access outputs.
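Aggregating such content typically starts from publisher metadata. As a rough illustration, hedged and not the project's actual code, here is how a text and data miner might pick out machine-readable full-text links from a Crossref-style metadata record. The field names `link`, `intended-application` and `content-type` are real Crossref REST API fields; the record itself, its DOI and its URLs are made up for the example.

```python
# Sketch: extract full-text links intended for text mining from a
# Crossref-style metadata record. "link", "intended-application" and
# "content-type" are the field names Crossref's REST API uses; the
# record below is a hypothetical example, not real data.

def text_mining_links(record):
    """Return (content-type, URL) pairs flagged for text mining."""
    return [
        (link.get("content-type", "unspecified"), link["URL"])
        for link in record.get("link", [])
        if link.get("intended-application") == "text-mining"
    ]

sample_record = {
    "DOI": "10.9999/example.123",  # hypothetical DOI
    "link": [
        {"URL": "https://publisher.example/html/123",
         "content-type": "text/html",
         "intended-application": "similarity-checking"},
        {"URL": "https://publisher.example/xml/123",
         "content-type": "application/xml",
         "intended-application": "text-mining"},
        {"URL": "https://publisher.example/pdf/123",
         "content-type": "application/pdf",
         "intended-application": "text-mining"},
    ],
}

print(text_mining_links(sample_record))
# → [('application/xml', 'https://publisher.example/xml/123'),
#    ('application/pdf', 'https://publisher.example/pdf/123')]
```

In practice each record would be fetched over HTTP from the metadata API rather than built inline, but the filtering step is the same.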
What we found was that there were many technical challenges: metadata was not presented in a clear and unified way, there were restrictions with regards to accessing the full text, and most of the content was accessible via publishers' websites rather than through an internal backend system where content could be accessed directly. This created all these difficulties, which we managed to solve. In the end, we built a connector for each specific publisher, since each one has its own in-house system, and we also used the Crossref TDM API to put all this information together. As an end result, what we gave back to the end user was one endpoint where all these text and data miners could get all this open access content. We worked with five different publishers, Springer, Elsevier, Frontiers, the Public Library of Science and Wiley, and we managed to get a lot of data from them, which is now available for everyone to use free of cost. With this experiment in the FIT4RRI project, we wanted to go a step further and extend this to make it easier for text and data miners to connect. We used the name eduTDM, in line with eduroam, which exists everywhere in the world: people on eduroam can be anywhere in the world and connect with their own institution's credentials to get access to the internet. In the same way, we thought this system could be used so that users, irrespective of where they are in the world, can use it for text and data mining purposes if they have an affiliation with a UK institution. What we have done is compose a white paper to develop a conceptual solution with which the
different stakeholders would agree. As I said earlier, we intentionally did not cover ownership, development and management, because we had very big players in the working group that we created and we did not want the discussions to revolve mostly around ownership, management or development; we wanted first of all to see whether this was technically possible. As I said at the beginning of this slide, we tried to conduct this experiment in an RRI way, and the way we managed that was first of all to engage all research stakeholders in this specific experiment. That is why, in the working group we created, we used experts from various areas around this specific topic: experts from publisher systems, experts in text and data mining, and experts in policy making, for example policy makers from organisations that make recommendations and spread best practices, but also experts from industry. Nonetheless, we realised that we were missing very big categories of people who would be interested in these groups, but unfortunately we did not have the opportunity to engage with them and involve them in this experiment, again due to the time frame we had in this specific project. For example, we did not involve government, and we did not involve society very much. Nonetheless, we are trying to bridge this gap by involving society through the publication of the white paper we have composed, which is going to be published on the FOSTER platform that Adrian showed you earlier with all the FIT4RRI training activities, and by then disseminating this white paper via suitable dissemination routes so that the public becomes aware of our work. Based on what we have learned, had we had more time on our hands, we would have been able to have more groups engaged
with our own research. What we tried to do with this working group was to reach common ground, some statements that all of us would agree upon, because we thought this would make the perfect starting step and would also give us the green light to get the discussion going on other, more specific things that relate more to technology. The sentences we all agreed on from the very first meeting were: that text and data mining has the potential to help us improve the way research is conducted; that everyone was on the same page that the UK is legally allowed to perform TDM (I mention this because some members of the working group were not residing in the UK); that UK-affiliated researchers should be able to get all the content that their university lawfully subscribes to, and that apart from reading this content with their own eyes and downloading it manually, they should be able to do so in a machine-readable form; that it is indeed a challenge to gather all this information; and that we need trusted authorisation layers to have TDM. Then we had some technical challenges and some organisational challenges. The technical challenges were the lack of common standards, the choice of format (for example, would it be JSON or XML?) and the interoperability between systems, because, as you can understand, for all this to happen, systems need to be able to talk to each other. At the second working group meeting we also touched a little on the organisational challenges, only because we listened to the working group's needs and the need was to discuss them, but we decided not to work further on them. Those organisational challenges were the ownership and the governance, the incentives for publishers (why should they engage with this project?), the cost of developing such an infrastructure, and
what the risks and standards should be for all this procedure to take place, but also how this could be delivered and what kind of services should be provided. In the white paper that we have already finalised and will very soon publish, we discussed some functional requirements, and we divided our stakeholders into three big categories: the researchers, by which we mean the text and data miners or any other researcher who has the skills to perform text and data mining, the universities, and the publishers. Those are the three big groups, and we created functional requirements for each of them. We wanted to make sure that the tool we are going to create would fulfil all these functional requirements for all these groups; failure to do so would mean that one of our stakeholders would disengage from this process and we would lose them, something that could be detrimental to the project. Apart from the functional requirements, we also developed some non-functional requirements, which relate more to how the service would work, and we had to make sure that all of us were again in agreement on those non-functional requirements, since disagreement could have the same alarming end result, namely that the project would not go on. To make this easier to visualise, we created a graph that shows a little of how the system would work. What we wanted is for the researcher, who is on the left-hand side, to have one and only one endpoint, somewhere in the middle, which would solve the problem; all the other points, which are the publishers, and which may start from A and finish at Z or even beyond, would then not be a problem for the end user. The blue box in the middle should be the service, and this service
should solve all the problems on the one side of the screen, where the publishers are, while on the other side of the screen, where the researcher is, they should be able to get easy access to information and should not have to worry about what happens at the back end between the box in the middle and the publishers. To make this a little more detailed, we decided to analyse this and say what kind of responsibilities and roles each of these parties would perform. For example, the publishers would be the ones doing the authentication, the authorisation and the searching of the content, while the service would be doing validation and URL redirection, but would also be responsible for the content aggregation, and then the caching, the searching and the user monitoring, something that was mentioned earlier in the interaction model and that was required for the service to work as it should. And that is all we did in this project. What I would like to say is that in the beginning it was a little difficult for us to see how to approach this differently; it was the first project where we paid plenty of attention not so much to how to conduct the whole experiment but to how to do it in a way that conforms with the RRI principles. We are not experts in RRI in our group, and we relied a lot on guidance from the experts of the FIT4RRI project. Even though in the beginning it was something we spent a lot of time and effort to plan and organise, because it was brand new for us, I think that everyone in the team now knows how to do this and it feels a little more natural. When we start our next project, we will have this in mind and we will start thinking about the RRI components much earlier than we did now, because now we
focused a lot on how we were going to do this technically and did not focus so much on all the stakeholders and all the quadruple helix factors; I think I said that in a wrong way.

Thank you very much, Nancy, for another interesting presentation, very different from the previous one, which of course brings much added value from a totally different perspective, since your organisation is institutionally unique in terms of its decentralisation.
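The single-endpoint architecture described in the presentation, with one service between researchers and publishers A to Z, publishers handling authentication, authorisation and searching, and the service handling validation, aggregation and caching, could be sketched roughly as follows. This is a hedged illustration only; all class names, publisher names and URLs are hypothetical and do not come from the eduTDM white paper.

```python
# Rough sketch of the "blue box" service: one endpoint that fans a
# researcher's query out to per-publisher connectors and aggregates
# the results. All names are hypothetical.

class PublisherConnector:
    """Stands in for one publisher's search API. In the eduTDM split,
    the publisher side handles authentication, authorisation and
    searching of its own content."""
    def __init__(self, name, records):
        self.name = name
        self.records = records  # title -> full-text URL

    def search(self, query):
        return [(title, url) for title, url in self.records.items()
                if query.lower() in title.lower()]

class TDMService:
    """The service in the middle: validation, content aggregation
    across publishers, and caching of repeated queries."""
    def __init__(self, connectors):
        self.connectors = connectors
        self._cache = {}  # query -> aggregated results

    def search(self, query):
        if not query.strip():            # validation
            raise ValueError("empty query")
        if query in self._cache:         # caching
            return self._cache[query]
        results = []                     # content aggregation
        for connector in self.connectors:
            for title, url in connector.search(query):
                results.append({"publisher": connector.name,
                                "title": title, "url": url})
        self._cache[query] = results
        return results

service = TDMService([
    PublisherConnector("Publisher A",
                       {"Mining scholarly text": "https://a.example/1"}),
    PublisherConnector("Publisher B",
                       {"Text mining at scale": "https://b.example/7",
                        "Unrelated botany paper": "https://b.example/9"}),
])

for hit in service.search("text"):
    print(hit["publisher"], "-", hit["title"])
```

The researcher only ever talks to `TDMService.search`; how many publishers sit behind it, and how each one is queried, stays hidden behind the single endpoint, which is the point of the graph in the talk.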