Okay. These are all our partners and our team. The aim of our project was to implement and evaluate the impact of an online interprofessional (IP) program designed to promote collaborative practice within undergraduate health and social care disciplines. You may remember that last time around we presented our first pilot; we've now just completed pilot two, which ran in 2015-2016. In pilot one we didn't have UCD, we only had NUIG; this time we had UCD students involved as well. The content focused on roles and responsibilities, communication and teamwork, and it was developed and delivered on the Curatr platform. All students received 2.5 ECTS credits, except for the NUIG medical students, who did it on a voluntary basis, as in the previous pilot.

In pilot two we developed and evaluated a new five-week program. We developed new content, adopted the Curatr online platform to deliver the program, created student and facilitator handbooks and training sessions, and created an online training video, because one of the challenges in pilot one was that navigating the system and logging on was difficult for students. I'm just going to hand you over to my colleague, Tara, to talk about the content.

So we structured the content of the program quite differently from the previous one, and this is just an example of the various tasks and activities. Learning objectives were established and worked on at each level. Each level corresponded to one week, and each level contained a number of activities. These included watching videos, commenting, discussing, contributing resources that students had to find online, and then commenting on each other's work. The assessment method was divided between the Curatr system, which automatically awarded XP (experience points) for viewing videos, commenting and engaging in discussion, and offline marking by facilitators.
At the end of each level there was a gate question, a kind of end-of-level, end-of-week question, that students would complete; these were awarded points by the facilitator, and this marking was done offline. There were also within-unit badges that recorded the level of engagement: activities such as commenting on and replying to other students' work, voting people up for good comments, and generally being an outstanding peer supporter throughout the process. These were awarded by the system as well. Finally, once students finished, they were given an open badge by the All Aboard project as a mark of completion.

We used a concurrent mixed-methods design to evaluate the program. This involved focus groups with students and facilitators, a post-program student evaluation questionnaire (the same questionnaire we used in pilot one), and pre- and post-program RIPLS and IEPS tools with the students. In terms of the focus groups, we had 57 students involved: most were female, most were in the 18-22 age group, most were second-year students, and most were medicine students (11), with general nursing students the next highest (9).

In terms of the impact on the students, the program prepared them for clinical practice: "I suppose it kind of showed that it's not just about you, it's about other professions as well, and you can't just solve the problem by yourself. You need other people involved to help you and to give opinions." Students said they gained a really key understanding of other people's roles: "Social care workers, in my head I thought they were social workers, and I kept answering, oh, the social worker would do this. There's a difference between a social worker and a social care worker. I didn't know until I was told. So it was good to know the difference."
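As an aside, the assessment mechanics described earlier (system-awarded XP for activities, facilitator-marked gate questions, within-unit engagement badges, and a completion badge) could be sketched roughly as follows. This is a minimal illustration only: the point values, badge threshold, and function names are hypothetical, not Curatr's actual configuration.

```python
# Illustrative sketch of the assessment scheme: automatic XP, offline
# facilitator-marked gate questions, and badges. All values hypothetical.

# XP automatically awarded by the system per activity type
XP_PER_ACTIVITY = {
    "view_video": 5,
    "comment": 10,
    "discussion_reply": 10,
    "share_resource": 15,
}

BADGE_THRESHOLD = 8  # peer-support actions needed for a within-unit badge
NUM_LEVELS = 5       # one level per week of the five-week program


def score_student(activity_log, gate_scores):
    """Combine automatic XP, facilitator-marked gate-question points,
    and badges into one summary for a student.

    activity_log: list of activity-type strings, e.g. ["comment", ...]
    gate_scores: facilitator-awarded points, one per completed level
    """
    xp = sum(XP_PER_ACTIVITY.get(a, 0) for a in activity_log)
    # Peer-support actions (commenting, replying) count toward the badge
    peer_actions = sum(
        1 for a in activity_log if a in ("comment", "discussion_reply")
    )
    return {
        "xp": xp,
        "gate_points": sum(gate_scores),
        "engagement_badge": peer_actions >= BADGE_THRESHOLD,
        # Open badge (awarded via the All Aboard project) marks completion
        "open_badge": len(gate_scores) == NUM_LEVELS,
    }
```

For example, a student who watched three videos, made six comments and two replies, shared one resource, and completed all five gate questions would earn 110 XP plus the facilitator-awarded gate points, and qualify for both badges.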
Understanding others' roles came through as well: "It gave me a better idea of what people focus on. For example, in one of the cases there was a huge difference between what a social worker said was needed for someone and what the doctor would say, so it was very interesting." A key issue that came across from some students was that we should be doing this very early in the program: "To me, the main point I got out of it is that you have to get in as early as possible to get the different people together, not just after graduation, introducing it in a hospital, but actually within the schooling now."

Facilitators had a very positive experience of Curatr: easy to navigate, easy to find their way around, very effective and easy to use. Some of the student comments were: "I think the fact that it was online as well, you're more likely to vocalize disagreement and stuff. Whereas there's maybe a bit of a hierarchy if you're working in a hospital; I think nurses and everyone might listen to a doctor's opinion. But here everyone got their own platform to express what they wanted to say, so that was pretty good."

XP points were a really key motivating factor in terms of engagement. One student said: "If there wasn't an XP point system I probably wouldn't have done it at all, to be honest." The interprofessional mix of the groups was really important: "You got to interact with people you didn't know, and from other cultures, and, for the likes of the speech and language therapists, we wouldn't have that really. So you got to interact with them as well. So it was really good." This was particularly from UCD students, who wouldn't have speech and language students in their university.

The case studies were very interesting: about 50% of students found the case studies really relevant, and 50% did not.
People who found them relevant said: "The cases they presented us with actually had different ways of approaching it, so at least that was well thought out." But the challenge for others, and we had this challenge ourselves, was trying to make the cases relevant to all the disciplines, which was really problematic. We did the best we could, but we obviously didn't get it right, because some students said the case studies just didn't relate: "We couldn't answer the question properly. Like dementia: we're midwives, we don't know about dementia. A few things just weren't suited to midwives. It's frustrating to try and answer it and then go and learn all about something that you might never come across again."

Other barriers included a limited level of student participation during the week. This was to do with students who were working at the weekends, and others working part time during the week who left it to the weekend to do. Students were therefore online at different times, and they felt part of the collaboration wasn't as effective as we would have liked: "My time has to be very regimented. I used to do it on a Monday, and then people wouldn't be commenting until Sunday night. So I couldn't comment on other people's work because they weren't doing it until Saturday or Sunday night, and I hadn't time to be doing that on Saturday and Sunday night. So it was just being at the mercy of the way it was."

Another barrier was that the end-of-level gate questions were found to be repetitive after about the third week. The extent of prior clinical experience was seen by some as a barrier; interestingly, for others it wasn't: learning online from, and sharing with, peers who had clinical experience that they themselves lacked allowed them to learn. So that was interesting.
Engagement was clearly assessment driven, as one student said: "Because we're all so exam focused, we're all just looking to see how we can get this done. I personally didn't read everybody else's comments, because I was like, oh, I don't actually have to do this. Probably not the right attitude, but still." So I think they were pretty honest in their responses.

For the evaluation questionnaire, we received 131 responses out of 231: mainly female, mainly NUIG (because they were the larger group), and mainly year two (because most students were in year two). This is probably the most interesting part, the comparison between pilot one and pilot two. As you can see on this side here, in pilot two everything increased in comparison to pilot one: the unit improved my understanding of interprofessional learning; improved my understanding of collaborative practice; made me think more about my role; made me think about the perspective of my discipline and how that can impact on collaborative practice; helped me understand how I could improve my practice. All of these increased. Likewise, "it is relevant for my practice" increased, as did "the online unit was overall a positive experience" and "it was useful to my learning".

One item, "I found it difficult to disagree and be critical of other participants' views and comments", I would have liked to see go down; it actually went up. Whether that's because we had more students or not, I don't know. It may be because we had first-year students this time, whereas last time around we didn't have any first-year students. That may be the reason, but I really don't know. Interestingly, 79.4% found "the case scenarios were very relevant to my learning", so there was a mixture with the case studies: some found them relevant and others did not.
Again, all of these went in the right direction. The ones I've highlighted in red went down, which is what we would have wanted: "the unit was overall a negative experience" went down, and "the unit was not useful to my learning" went down, meaning they actually found that it was useful. So for us, the majority of the evaluation went in the right direction.

We also asked them, as we did last time, why they would not recommend the program; these were the open questions. In pilot one, they cited the heavy workload, lack of training and difficulty navigating the system. In pilot two, it was the repetitive nature of the gate questions, case studies that felt unconnected, and limited group discussion: "I would recommend it, given that the gate questions were improved and that the overall repetitive aspect of the course was fixed." "Parts of it were not relevant to midwifery practice, so I found it difficult to answer a lot of the case studies." So the same issues raised in the questionnaire had also been highlighted in the focus groups.

We've done a good lot of national dissemination: we have presented five papers and one poster. Forthcoming, we have three abstracts submitted to date, to the All Together Better Health Conference in Oxford and the Social Care Annual Conference in Kildare. We're currently drafting a paper for the Journal of Interprofessional Education and Practice, and we're going to target some of the conferences that we hit last year with the other system, to show the differences and improvements that have taken place.

We were asked to evaluate how we thought we met the objectives from our proposal. Of those, we achieved the first six. The last two, including comparing outcomes across participating institutions, were challenging given the small numbers we really had to work with from one of the institutions.
One of the key differences had to do with clinical practice: the first-year students in UCD didn't have the same level of clinical experience as some of the students in NUI Galway, yet they commented that they learned from the others on board, whereas some second-year students with limited clinical experience said, well, I can't comment on this because I haven't had the practice. Interestingly, they didn't seem to see the same level of learning that the first-year students could see.

To estimate the cost of implementing digital technology, we did have a bash at this, though it was a very crude way of doing it. We asked the facilitators to calculate how many hours they spent on the system and preparing work for it, we calculated what a lecturer's salary would be, and we looked at the cost of hiring rooms. What we found was that the online VX system was more expensive than the Curatr system, and that both were relatively cheaper than doing it face to face. But it is not scientific enough; it is very crude, and really we wouldn't be able to stand over it to any great extent, but we did try.

So, to summarise the recommendations from pilot one: we were asked to simplify the VX online system, which we did, replacing it with Curatr. Review the workload and distribution of ECTS credits: we did that. Make it compulsory for all students: I have to say on this one we failed. We still didn't manage to get the medical students in our own institution, unfortunately, to take this on and award ECTS credits, but I am sure it will happen for next September; I'll hold my breath and hope it happens. Improve the instruction and training: we put a lot of training and instruction in place for students and for the facilitators as well. And last time around we didn't have smaller IPE groups; you may remember they were all in one big group.
Now they're in smaller groups involving more disciplines, so we have done that. Our next steps are to review the assessment method to make it better, and to review the case studies, because clearly we've got the delivery platform right, but we really have to fine-tune the case studies to make them better. We will try to make the program compulsory and get our medical colleagues in, and we will explore the potential of Curatr to incorporate direct feedback. One of the challenges we had, and the facilitators reported this in their evaluation, was that giving direct feedback to the students wasn't possible within the Curatr platform itself. So we had to create a Google Doc: facilitators filled it in, and it then had to be collated and reported back to the students, which was time consuming. It also limited the facilitators' level of engagement online, because they had to devote time to doing that.

One of the recommendations from the review panel last time around was that, while IPL online on its own is quite good, we should look at ways of integrating it more with simulated learning. So, outside of the funding for this project, we're going to introduce a half-day pilot IPL simulated-learning session for students in NUIG, hopefully in semester one of next year, where we'll pilot it with a different cohort of students, so that we will have different students exposed to interprofessional learning. And that's it.