I'm Lisa O'Regan. I'm eLearning Development Officer at Maynooth University, and I'm the lead for the Y1Feedback project. The full title is Supporting Transitions: Enhancing Feedback in First Year Using Digital Technologies. This is a collaborative project between Maynooth University, AIT, DCU and DkIT, and I'm here representing everybody today.

At a high level, what is this project? First of all, it's a two-year project that we're about halfway through. It's a regional cluster partnership, so it's a common issue which we are addressing together. It intersects both of the National Forum themes, teaching for transitions and assessment as and for learning. What does this project do, or what will it do? First and foremost, it focuses on feedback in first year, with a specific emphasis on feedback in supporting transition. This is our core differentiator. Another unique aspect is that we are very strongly evidence-based. We've done a lot of work to look at what's happening here and what's happening elsewhere, and that is informing our project.

So, why feedback and why first year? First and foremost, we know that feedback is really important for learning. We know that regular feedback, and feedback for first years in particular, can help them transition and make them more likely to stay on. We also know that feedback is, how would you say, a global concern. Studies across Australia, the UK and Ireland have shown dissatisfaction around feedback, and that's also the case in Ireland. This project addresses that concern. For example, the Irish Survey of Student Engagement in 2014 and 2013 showed that first-year undergraduates said they never or only sometimes received timely feedback on written work. We want to look at this, to look at why, and to see how we can improve it or find approaches that can help.
So, what have we been doing on the project in 2015? First and foremost, we set up the project. Then we looked at what's happening on the ground: how is feedback being given across the four institutions, and what's happening in relation to feedback. We also asked students: what is your feedback experience in first year? We got a sense of what's happening. Then we looked at the literature in the area, across a number of areas around feedback. We looked at approaches. We looked at first year and the transition piece, taking on board feedback from the panel. We also looked at the technologies: how is technology being used for feedback? Then, once we had identified a number of approaches, we started identifying partners to pilot those approaches with, and we moved on to plan these case studies with academic partners. We've also been doing some dissemination throughout. That is a snapshot; I'm going to go into some of these in a little more detail.

First of all, how we work together. In any collaborative project, it's important to set a firm foundation for the team. As you can see, we have regular meetings, face-to-faces and email contact. This is us in Maynooth. We do need to invest in a selfie stick, I admit. But we tried. Maybe the budget will stretch to it.

First and foremost, I said we did a study. We looked at how feedback is happening. This is the publication, Feedback in First Year: A Landscape Snapshot. At the moment it's with the designer; it's in the final stages. Basically, we looked at the staff experience: how feedback is given and what technologies are used. We asked staff about the challenges they face and the approaches they recommend. We also looked at the student experience: how they receive feedback, their attitudes to feedback, and their recommendations for change. We've summarised this, and the document, as I say, is about to be published.
We will update our website when it is complete; we expect it to be available within the next two weeks. As I say, it's in the final stages. That is our first piece of work.

Just to give you a very quick idea of what we found, at a very high level: both staff and students share an appreciation of the value of feedback for learning. This came through very strongly. What also came through is that students' experience of feedback in first year is inconsistent. That's a little quote you can read that demonstrates it. In general, there is dissatisfaction around the timing, quantity and quality of feedback, and that is from both a staff and a student perspective. There are frustrations on the staff side, and the students acknowledge this. That is not to say there are not amazing examples of feedback practice, and amazing accounts; these can be seen in the Snapshot document when we publish it.

There is a strong perception among staff that students only value the grade. Interestingly, students came out very strongly the other way: they want the grade, but they also want feedback, and they want more than brief comments. There is shared value for feedback conversations. When we asked students their favourite mode of feedback, it was written feedback with a chance to discuss. This came through very strongly. Overall, we found low use of peer involvement in feedback among participants. We also found limited use of technology-supported feedback approaches. I would say there were examples, very individual examples, within that, but overall this was the picture. The key challenges for staff that came up were time, workload and large class sizes. Probably no major surprises there, but significant challenges nonetheless. As for key suggestions, we asked students what they would change about the first-year experience in terms of feedback.
They would like more consistency, more feedback and quicker feedback. Consistency was number one. This gives you a sense. We've taken this data and we're embedding it into our pilots and how we're approaching those case studies.

Next, we looked at the literature. We looked at a number of areas: first year, the transition piece and how feedback plays a role there; contemporary literature on feedback; approaches; and feedback and technology. From that, we distilled a number of key things. First and foremost, we were able to identify a set of features of effective feedback for first year. This again comes back to a core differentiator for our project: we're looking particularly at first year, and we've been able to identify these features with help from the literature. We will try to embed them within our pilots and cases. Come the end of the project they may look a little different, but this is our first go.

Promoting feedback within and beyond assessed work: again, it's about inside the classroom and outside the classroom, not just the summative or continuous assessment. For first years, it's really important to embed assessment and feedback literacy. Students don't always understand what they are receiving; they may not recognise a conversation as feedback. It's also about setting expectations. This has come through very clearly in the literature. Fostering student competence, motivation and belonging: this comes in around peer involvement in feedback, and it's very important in first year to support transition. Providing opportunities for dialogic feedback: this is the core theme running through our work, that we are looking at dialogic approaches to feedback, as well as sustainable approaches to feedback. We're also going to be looking at...
We also feel it's important to embed digital literacy, especially where technology is involved. One key feature of effective feedback in first year is consistent and coordinated approaches to feedback, so programme-level approaches. These are the key features we have identified to date. I mentioned sustainable already.

These, very quickly, are some approaches we've identified, some of which we will be piloting in our cases. This is not an exhaustive list, and they're not set out so that you must use them individually or as a group; you might use just one, or several together.

Right. Another key aspect of the project: so we have the features, we have the approaches, and again we're looking at how we can leverage the potential of technologies. Within our cases, we will be looking at whether it's possible to give more feedback and more timely feedback. Provision to large groups is a key challenge for us. Also a variety of feedback, again the dialogic feedback, and flexible access to feedback online, as well as the potential for added dimensions to feedback. So there's a lot there. Each case won't be able to look at all of them, but across the spread we hope to cover a number of these areas.

Okay. So, as I said, we have looked at what's happening on the ground, we've looked at the literature, and we have taken key approaches and features and begun planning cases. Project teams across the four partners have begun developing these. We are working in partnership with departments. How we did that was we held information sessions and invited participation. There was also a lot of work we were already aware of, so we were able to touch base with people there as well. We are taking a design-based research approach to the case studies.
So, the pilots: some of them have started already, and they will go through an iterative process between semester two and semester one next term. Some may only need to run for one semester, but some may need to run for two. In total we have 26 case studies already in progress. At the outset we agreed four per institution, but there has been a lot of enthusiasm within the institutions, so we are currently at 26. Of course that number may drop, or it may increase, by the end of the project; that's the way these things go, so I'm not set on that number, so to speak. At the moment we have 16 academic departments involved, so there's a great spread of disciplines: nursing, engineering, business, French, social care, and loads more. In total we are working with 32 academic partners, and that is outside the project team alone. So there are huge numbers involved, and already involved in this work. Of course students are also involved as active partners where pilots are going on: they are aware of the pilots, and we will be looking for their feedback throughout as well.

Okay, so I'm not going to go through all of these, but I really wanted to give you a sense of the breadth of work being done, the type of pilots we are engaging with and the types of approaches we are trialling. You can see here that the first one links up with the previous presentation: we are working on a case study with UniDoodle and feedback approaches using UniDoodle, so we have linked up with Seamus and Christine there. In others, you can see we're looking at video for some assessments, and a number of ours are focused on in-class strategies for feedback. You can see here the AIT pilots in progress; there are a number ongoing, and Clickers for Dialogic Feedback in the Large Humanities Classroom, for example, is just one.
You can see there is screencasting, e-portfolios and Moodle among the technologies, but a variety of approaches being used as well. Here again you can see that peer feedback features in some of our cases. Peer involvement came out very strongly in our research as a key method of bringing students into feedback, bringing them into the discipline and developing their self-regulatory learning skills.

Okay, so that is just a snapshot. We have a number of further cases here. Another key approach that came out through the literature, especially for first year, was multi-stage assignments: giving students the opportunity to get feedback as they go through a written piece of work, both from the teacher and from peers, and there are some cases here on that. We are also looking at cases involving students in feedback and motivating their use of it; one case, for example, includes a prize for the best use of feedback.

National impact. Okay, I'll go through this quickly; I think I must be running out of time. Well, first and foremost, we're halfway through, so we're starting at local and regional level and will move towards national impact. We do have a website, and as our publications come through we will be putting them up; we would hope these will be available in the next two weeks. Dissemination: we have already begun disseminating. We shared our findings from the Landscape Snapshot at EdTech 2015. We also shared some of these findings at the Assessment in Higher Education conference in the UK, where we won the best research poster award. That was no mean feat; there were 60 in the running, so we're very proud of that. There's the team who presented. We also presented at EATU. This year we've just recently been accepted to SEDA 2016.
They have a spring conference on assessment and feedback, so we'll be sharing our findings so far there, and we'll also be looking for feedback on the approaches, so that should be really good. We have a number of staff development initiatives going on, for example Feedback Fridays, and we're participating in the What Works and Why initiative across the partner institutes. We will have case study reports at the end; however, we're only really starting on those, and our big national dissemination will be in 2017, our national symposium on feedback.

Last question: how have we evolved? Taking on board the feedback from the panel, and also mapping our own learning journey through the project, there have been a number of changes, or changes in focus. I'm going to take one minute to show you these; I think they're important. First and foremost, when we started out we were very much focused on technology-based approaches, and even though we would put pedagogy over technology, we were thinking maybe more audio or video. Our thinking has really progressed on that, and we're now focused on dialogic and sustainable feedback approaches for first year, largely informed by our own work, and technology-enabled where appropriate. At the start we were also more focused on assessment feedback, looking at a lot of continuous assessment, and that has broadened to all learning interactions; a lot of our cases are looking at in-class methods, real-time feedback and automated feedback. Initially, while we were looking at both teacher and peer feedback, we have since become more peer-focused, and that will be seen through a lot of the cases, which embed peer approaches even where they're not using technology. As well, at the start we probably thought more in terms of a single case or a single approach that might be used in a pilot.
However, that has developed, really through our conversations with staff and in those partnerships: we're looking more holistically at a module, at its assessment and feedback, and at multiple approaches. So that is it. Apologies for going a little bit over.