I'd just like to welcome everybody to this session. It's good to see that there are almost 40 of us here online now, so welcome. And a special thanks to Lisa Gray from JISC, who is going to give our presentation today on assessment and feedback. So without any more ado, I'm going to pass over to Lisa. Just to check, first of all, that Lisa's ready.

Hello, everyone. Yes, Linda, I'm happy to start when you're ready.

Hi, Lisa.

OK, well, thanks very much to everyone for inviting me to come and chat about assessment and feedback. I'm really glad to have the opportunity to share with you some experiences from the last few years of working with a number of institutions who are looking to enhance assessment and feedback practice through technology. I'm here today with my colleague Peter Chatterton, who's been working as a critical friend across the programme, and he'll be able to deal with any queries in the text chat, as well as share some URLs as we go along. We've also got Dr Anne Jones here, from the e-AFFECT project at Queen's University Belfast, so thanks, Anne, for joining us. I'm going to start today by sharing some findings from the initial research we did into the challenges institutions were facing around assessment and feedback, and then go on to talk about some of the themes that have emerged from their experiences of tackling those challenges. Those themes are around feedback and feed forward, around assessment and employability, and around electronic assessment management. And then finally I'll touch on some change models: large-scale approaches to change with assessment and feedback and ways of tackling that. Just to give you a little bit of background context, we funded this work back in 2011 as a programme of institutional change projects, with a focus on exploring the benefits that technology can offer to assessment and feedback, and on delivering and articulating where the efficiencies and quality improvements are to be found. So, to start with a little bit of the research into the challenges. Right at the start of the programme we wanted to ensure that we had a line in the sand from which we could measure progress, so all eight of the institutional change projects provided a baseline review of what was happening on the ground. This was key to understanding the starting point for their innovations, but also to ensuring that the challenges identified were the right ones and that all the stakeholders needed were on board.
So across the board they gathered a range of evidence, both qualitative and quantitative, which drew on existing data as well as creating new knowledge, for example through interviews and workshops with employers, staff and students. The research showed pockets of good practice but overall a consistent picture of the challenges, and it really highlighted the problems that exist with resistance to change and with scaling up good practice and innovation. In terms of assessment and feedback strategy and policy, we found that the strategy documents tended to be quite procedural in focus and didn't reflect current thinking around effective assessment practice and the value that assessment can bring to learning. Another key issue was that devolved responsibility for assessment and feedback across faculties and service departments results in considerable inconsistencies, which can make it quite difficult to achieve parity of experience for learners. When it came to academic practice the issues were varied and complex, but they included the emphasis on summative assessment and the persistence of traditional forms such as essays and exams, even where a variety of options was available. Probably unsurprisingly, timeliness, along with the quality and consistency of feedback, was an issue across the board, and even where clear deadlines were set there wasn't always time to feed that feedback into the next assignment. Curriculum design and a modular approach can also create barriers to an ongoing, developmental approach to feedback at a programme level. There was also a perception that learners weren't engaging with the feedback they received: tutors felt they may have given a lot of feedback and support but it hadn't been acted on, so learners were seen in a rather passive role, waiting for feedback to be delivered to them. But I think the reality is less clear cut, as we'll see through the presentation, because the value of acting on feedback wasn't always well communicated and was notably absent from most induction processes, and there are also sometimes issues around the clarity, timeliness and quality of the feedback that make it difficult for learners to engage. And finally from that research, the assessment and feedback process, particularly with the emphasis on high-stakes assessment and the value placed on marks and grades, is very different from the formative ways that professionals develop during their working lives, where much value is gained from feedback, for example from peers; so that was another challenge the institutions wanted to tackle. So how did they address those challenges? I'll touch on three main areas: firstly assessment practice, which focuses on the educational aspects of the learning process; then assessment management, which focuses more on the infrastructure and institutional processes that support assessment and feedback; and then finally the key thing which underpinned all of that work, which was managing change.
So just to start with feedback and feed forward, I think one of the biggest shifts we saw throughout the whole programme was in the balance from summative to formative assessment, and hence from assessment of learning to assessment for learning. This was a key aim of many of the projects, who wanted to focus on learners' longitudinal development, feeding forward, and ipsative approaches, whereby feedback acknowledges progress against the learner's previous performance regardless of their achievement. Although this is primarily a learning design issue, technology has a really vital role to play here in terms of storing feedback across the programme and making it easily accessible to both staff and students, in order to develop that longer-term picture of the learner's development. But it was pretty much found that most of the VLEs in common use record both marks and feedback at a module level, so it's not always easy to gain that sort of information, and we've only made a little headway into that challenge. So institutions explored a variety of ways that technology could resolve some of those issues, from using simple tools to analyse the timing of assessments and feedback dates and ensure students could act on feedback before producing the next piece of work, to developing tools to analyse and audit feedback, and creating conditions for student and staff dialogue rather than feedback just being delivered to students. I'll touch on some of those approaches here. Firstly, obviously, the design of the curriculum needs to permit this type of longitudinal development, and common problems included the lack of formative opportunities or the lack of time for feedback to inform the next assignment. The diagram here shows a very useful tool known as the assessment timelines tool, which was produced by the University of Hertfordshire, and it enables an overview to be gained of assessment patterns across a 12-week semester. These patterns here were developed by the University of Dundee, who mapped their existing practice against the timeline and then redesigned it to move from an emphasis on high-stakes assessment to providing many more formative opportunities within the curriculum for feedback to be shared and made use of. A key aim of many institutions was also to better support learners' ongoing development through feedback and feed forward, and to better support dialogue between learners and staff; so very much talk about feedback being a conversation and not just an end in itself. Where feedback focuses on a learner's current performance, feed forward focuses on guidance relating to the next assignment, so it offers constructive advice on how to do better, and both are key to learner progress. That touches on one of the projects, at the University of Westminster, who developed the Making Assessment Count process to support that ongoing engagement with dialogue; they found the combination of both feedback and feed forward worked best in terms of learner improvement. This slide is also from the University of Dundee, from their School of Medicine, which delivers an entirely distance-learning masters in medical education, and the tutors who were delivering feedback weren't really clear on whether the students were reading, understanding or acting on that feedback.
So they completely redesigned the process: cover sheets are used for students' self-evaluation of their performance, asking, for example, how well they feel they've delivered the learning outcomes and what they would like feedback on; assignments are then marked by the tutor and placed in a wiki space, where dialogue continues on how well the self-evaluation matches the tutor's evaluation and what students will do next time as a result. This new process has been a great success: even though the evidence showed it took staff a little more time, an average of about 10 minutes per assignment, to engage with that dialogue, staff found the benefits of doing so far outweighed the negatives. I think one of the most astonishing things the programme revealed was the lack of discussion that seems to take place around approaches to feedback. The Institute of Education, in looking to implement a more longitudinal approach, described feedback as taking place very much in a black box, and their experience seems to be borne out elsewhere. The focus of quality assurance activities seems to be almost exclusively on comparing and moderating marks, so it appears that there's little, if any, discussion within programme teams about individual approaches to feedback. It's therefore unsurprising that students talk about inconsistencies in the feedback they receive, and the lack of any conversations amongst staff to establish and discuss the purpose of feedback seems to inhibit the longer-term aim of supporting learners' longitudinal development. What has emerged from the programme is some really valuable resources around the auditing of feedback. I think originally they were developed more as research tools, but they ended up being used very much as staff development tools, and a number of tools were produced for profiling and analysing feedback that are shareable and that others can use. This model, used by the Institute of Education, is based on Hattie and Timperley's 2007 model and provides a reflective tool for staff to consider the content of their feedback. Hattie and Timperley found that feedback aimed at the person, e.g. praise, is much less effective than feedback on skills and self-regulatory abilities, which enables learners to become much better able to develop their own autonomy in the longer term. What's quite interesting about the approaches to auditing taken across the programme is that they all appear to be skewed in particular ways rather than balanced; precisely how the feedback was skewed did vary according to the subject and institution, but much of it focused on short-term rather than developmental feedback, and there seems to be a real need to move staff practice in that direction. One of the other projects, the Ontetra project, developed an online tool which provides an analysis of staff feedback automatically, so staff can use it to get a snapshot of the type of feedback they're providing, to see what the balance looks like and ways in which they could improve it. Okay, so just moving on to talk a little bit about employability. It was a major theme of many of the projects, particularly those at Cornwall College, the University of Exeter and Manchester Metropolitan University, and it really goes back to wider debates about the true value and purpose of education.
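To make the idea of automated feedback profiling mentioned above a little more concrete, here is a minimal sketch of how comments might be classified against Hattie and Timperley-style categories using simple keyword matching. This is an illustration only, not the tool described in the talk; the category names, keywords and sample comments are assumptions made purely for the example.

```python
# Minimal, hypothetical sketch of automated feedback profiling.
# Categories and keywords are illustrative assumptions loosely based on
# Hattie & Timperley (2007); this is not the tool described in the talk.
from collections import Counter

CATEGORY_KEYWORDS = {
    "self/praise":     ["well done", "excellent", "good effort", "great work"],
    "task":            ["answer", "correct", "missing", "reference", "structure"],
    "process":         ["approach", "method", "plan", "draft", "evidence"],
    "self-regulation": ["next time", "reflect", "check your own", "consider how you"],
}

def classify_comment(comment: str) -> str:
    """Assign a comment to the first category whose keywords appear in it,
    defaulting to 'task' when nothing matches."""
    text = comment.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "task"

def profile_feedback(comments: list) -> Counter:
    """Count comments per category -- a rough snapshot of the feedback balance."""
    return Counter(classify_comment(c) for c in comments)

if __name__ == "__main__":
    sample = [
        "Well done, a really strong essay.",
        "The reference list is missing several key sources.",
        "Next time, reflect on what you would do differently.",
        "Your method section needs a clearer plan.",
    ]
    print(profile_feedback(sample))
    # e.g. Counter({'self/praise': 1, 'task': 1, 'self-regulation': 1, 'process': 1})
```

A real tool would need far richer language analysis, but even a crude balance count like this gives staff a starting point for the kind of reflective conversation the talk describes.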
What really leads to improved employability prospects is of major importance both to students and to senior managers, and the projects tried to address the fact that traditional assessment methods, such as essays and an emphasis on summative assessment, don't prepare students well for the world of work and for the more formative ways that professionals develop in their working lives. Interestingly, one project went so far as to suggest that a degree of ambiguity should perhaps be built into assessment briefs, and that we are often just too clear about the criteria, because that would better prepare learners for the reality of the workplace, where acquiring and making sense of feedback from a range of sources, and giving feedback, are essential skills, as is articulating the skills you have to a variety of different audiences at different times. The University of Exeter looked to tackle this problem by introducing new-generation authentic assignments, based around real-world and scenario-based activities, designed collaboratively by programme teams, because one of the threads that came out of their original baseline research was the gap between what employers expect from new graduates when they turn up for work and what new graduates themselves can offer on day one in their new jobs: although students may have had the competencies required, they weren't able to articulate them. So that was their starting point. To bridge this gap, the project developed a model based around six dimensions of authentic assessment, to engage staff with principles around more employability-focused assessment design. The model was used in conversations with staff, and it can be used to assess where an existing assignment sits in relation to the employability context, as well as enabling a redesign to be mapped onto it and evaluated afterwards. The dimensions include things like multiple assessment points: in employment, assessment happens more frequently and at short notice. They also include varied audiences: in employment it's often peers or clients who are the assessors of the work, not a tutor or academic member of staff. They include the use of real-world problems and data in assessment tasks, and collaboration, because employment tasks are usually collaborative, so students need to be encouraged to work in teams with peers to better understand team roles. And peer feedback and review are key ones: so much feedback in employment comes from peers and clients, so there's a need for students to develop those critical thinking skills in this context. This example is from a psychology module, where they redesigned an assessment from a written assignment to the development of patient information leaflets, which involved real audiences: a lived-experience group of service users and also the university's own wellbeing service. And this is just a snapshot of an evaluation they did, where they asked students to identify what they had learned as a result of that redesigned assessment. It clearly shows the link between the revised module structure and the development of some of those key professional skills that were the intended outcomes of the module, around collaboration, independent learning, deadlines and time management. I'll share with you a little later how the project has integrated technology to support some of this professional skill development.
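Purely as an illustration of how such a model might be used to compare an assignment before and after redesign: the dimension names below are paraphrased from the description above (the sixth is assumed), and the 0-3 scoring scale is an assumption for the sketch, not Exeter's actual instrument.

```python
# Illustrative sketch only: profiling an assignment against six dimensions of
# authentic assessment. Dimension names are paraphrased; the sixth dimension
# and the 0-3 scale are assumptions, not the project's actual rubric.
DIMENSIONS = [
    "multiple assessment points",
    "varied audiences",
    "real-world problem data",
    "collaboration",
    "peer feedback and review",
    "articulation of skills",   # assumed sixth dimension, for illustration
]

def compare_profiles(before: dict, after: dict) -> None:
    """Print a simple before/after comparison for a redesigned assignment."""
    for dim in DIMENSIONS:
        b, a = before.get(dim, 0), after.get(dim, 0)
        change = "+" if a > b else ("-" if a < b else "=")
        print(f"{dim:28s} {b} -> {a} ({change})")

# Hypothetical example: essay redesigned as a patient information leaflet task.
essay = {"multiple assessment points": 1, "varied audiences": 0,
         "real-world problem data": 1, "collaboration": 0,
         "peer feedback and review": 0, "articulation of skills": 1}
leaflet = {"multiple assessment points": 2, "varied audiences": 3,
           "real-world problem data": 3, "collaboration": 2,
           "peer feedback and review": 2, "articulation of skills": 2}
compare_profiles(essay, leaflet)
```

Even a toy representation like this makes the "map the existing assignment, then map the redesign" conversation concrete enough to compare side by side.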
So peer review, as we've touched on, is a key means of developing some of those skills required in the world of work, and although it wasn't a major focus for any of the projects in particular, it has emerged as one of the success stories. The programme findings corroborate much of David Nicol's recent research, in which he suggests that we need to improve students' evaluative skills if they're going to be better able to judge the quality of work in the future; and alongside self-evaluation, peer review is a very powerful way of achieving this. Again, going back to the University of Westminster and the Making Assessment Count tool, they found that a combination of self-reflection and peer review gave the best results in terms of enhancing learning. The big note of caution, however, is the need to raise students' awareness so that they understand the purpose and value of peer review activities, and that's something that's often missing. That notion of assessment literacies very much came through as one of the emerging themes from this body of work: we need to help students make the connections between activities such as peer review and the competencies they will need in employment. One project explored the use of the PeerWise tool, a freely available tool enabling students to create, share and discuss assessment questions and feedback, as well as answer others' questions. They found a positive correlation between student participation in using the tool to create questions and their final examination marks, and it wasn't restricted just to the higher-performing students. The one thing missing there was that they're not quite clear which part of the process, whether it's the creation of questions, the discussion of those questions or the answering of those questions, helped to bring about that improvement. Okay, so just to move away from educational practice and look at some of the processes that technology can help to support around assessment and feedback. One of the clearest messages coming out of this work was that electronic assessment management is both an effective and an efficient way of supporting the process, and is the best way to meet the needs of a very significant proportion of learners. In using the term, we're referring to a number of different processes, including e-submission, e-marking and e-feedback, as well as others. There are clear differences in perception between stakeholder groups: the benefits were much more evident for students and administrators than they were for academic staff. However, where benefits were shown there was evidence of greater buy-in from staff, and we've seen a significant movement of staff away from paper-based approaches towards electronic assessment management. The University of Huddersfield undertook the most extensive study in this area, and I'll share in a moment just a few findings from their research. As you can see, one of the headline findings was that for students, electronic assessment management was seen as an entitlement, not an option. They also found that when it comes to receiving feedback, students see it as inconsistent when submission is electronic but marks and feedback are given on paper; it just seems illogical to them that this inconsistency exists.
So overall, in terms of student benefit, they found increased control and agency and reduced anxiety: control over when and how they engage with their feedback, as well as clarity about the date of return of feedback and results. They found improved privacy and security: students were confident in the process, confident that work hadn't been lost and had been securely submitted; they liked the receipting service and compared it to dropping work into a pile or box with others', which they weren't quite so assured by. They also found a significant increase in efficiency and convenience: all students reported added convenience, including a reduction in time spent submitting work, less hassle fighting with printers just before deadlines, and the flexibility to fit their assessments around their lives. They liked the midnight submission deadline because they could work right up to it. They also found that the feedback was clearer, easier to engage with, understand and store for later use; they found it more legible, and they liked the way feedback worked in GradeMark with the bubble comments. One student noted that having it online made it much easier to go back to when doing another assignment and to incorporate that feedback as they went, which relates back to those findings around aiming for a more longitudinal approach to feedback. However, the research did show that there is a balance to be struck between giving staff the flexibility to come on board with e-marking when they're ready and the strong entitlement that students feel to it, and approaches that gave staff options to come on board as and when they were ready seemed to offer the most benefit to the most stakeholders. I think that to make the most of the technology options we have in supporting the assessment and feedback lifecycle as a whole, we need to better understand the end-to-end processes involved in that lifecycle, and review is a key step in understanding what those processes are. This diagram is from Manchester Metropolitan University, who are taking a university-wide initiative to improve assessment and feedback practice. Their project is reviewing the entire institutional process for assessment management and developing systems to address each part of the lifecycle you can see here. The lifecycle itself provides a useful conversational tool for seeing across the whole assessment and feedback lifecycle, and it's hopefully useful in conversations with staff, because different staff are often involved at different points and don't always see that end-to-end cycle. At MMU, one of the key problems for the institution was around assignment submission: they have 600,000 assessment marks which need to be securely recorded for 36,000 students across a wide range of disciplines. They have placed some constraints on the number of assessments, so the total number has reduced, but they're still managing a huge variation in assessment practice. As part of the project, the university used data to identify critical points for assignment submission, and you can see here that some modelling was done which identified some significant peaks in submission, the highest being around 17,000 assignments due at the end of March 2012; that can help to inform which interventions are best made to alleviate some of this pressure.
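A minimal sketch of the kind of modelling described above: aggregating assignment deadlines and module enrolments by week is enough to surface submission peaks. The records, numbers and function below are invented for illustration; they are not MMU's actual data or system.

```python
# Illustrative sketch of deadline-peak analysis of the kind described above.
# The deadline records and student numbers are invented for the example.
from collections import Counter
from datetime import date

# Each record: (module_code, submission_deadline, enrolled_students)
deadlines = [
    ("BIO101", date(2012, 3, 26), 240),
    ("ENG210", date(2012, 3, 30), 180),
    ("LAW105", date(2012, 3, 30), 310),
    ("PSY220", date(2012, 5, 4), 150),
]

def submissions_per_week(records):
    """Total expected submissions per ISO week, so pressure points stand out."""
    totals = Counter()
    for _module, deadline, students in records:
        year, week, _day = deadline.isocalendar()
        totals[(year, week)] += students
    return totals

peaks = submissions_per_week(deadlines)
for (year, week), count in sorted(peaks.items(), key=lambda kv: -kv[1]):
    print(f"{year} week {week:02d}: {count} expected submissions")
```

An institution would of course draw the records from its student record system rather than a hard-coded list, but the principle is the same: once deadlines and cohort sizes sit in one place, the peaks fall straight out of a simple aggregation.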
I also wanted to mention at this point, looking forward, that out of this initial programme of work, as well as some work already done by the Heads of e-Learning Forum in exploring approaches to assessment management, we've recently funded a small initial background study which is hoping to gather examples of what's working well in practices, policies and processes, as well as approaches to change implementation. It's also aiming to identify the challenges around assessment management, in terms of culture, process and technology, so that we can start to understand where the issues are and where some potential solutions might help the sector move forwards. There is a project blog, and we've held one event already, but we're really interested in having conversations with people who are working in this space and who would like to share their practice to inform this study going forward; so do get in touch with either myself or my colleague Gill Ferrell, and details are available on the blog about how to do that. So, just to finish, a look at the wider issues around moving assessment and feedback practice forward and some of the models of change that have evolved from our work. I think one of the key messages reinforced by the programme, and certainly not started by it, is the importance of identifying, firstly, an overall vision of assessment and feedback and the pedagogical basis for change before introducing the technology. Educational principles can be a useful way of doing this, and hopefully you've had a chance to read through some of the background materials this week which introduced David Nicol's work around principles. The use of principles is gaining traction across the sector, and it provides an educational scaffold on which to drive change aligned to those principles. They synthesise the research on what good assessment and feedback can look like; they tend to be written in an action-oriented way, which means that an action falls out of them; and they can also be a useful tool for going back to see how effective a particular intervention has been in contributing to the development of that pedagogical aim. There are a number of different sets of principles; we've summarised those in a document used in some previous workshops, which is shared here, and there's also some background reading from David Nicol on why principles are useful. I'm sure many of you will have already seen the REAP principles, which emerged out of a project a number of years ago at the University of Strathclyde. They emerged from an overall vision that assessment and feedback should first and foremost support the development of learner self-regulation, and those seven principles were used to drive student feedback campaigns, were written into institutional strategy, and have been used to inspire many others to take this approach. So that's just an example of one set of principles for thinking about what good assessment and feedback should look like within your institution. I think it's useful even at this very high level to use these principles in conversations with staff and to try to gain some agreement across the institution as to what it feels is the way forward. Projects have explored a range of approaches based on principle-led change and have developed a range of tools to support the approach.
Some of these more practical tools will hopefully show that you can take that high-level principle and really use it as a way of driving through change. This is an example of some cards developed by the University of Ulster through the Viewpoints project. You can see two faces here, but they're actually back to back. They're used with course teams around curriculum design, and they can be used with a curriculum lifecycle model so that staff discuss which of the principles relate to their context and at what point they would like to see each principle implemented; on the back is a set of suggested activities which help them achieve that overall vision. Other projects have taken the same approach and worked with it in different ways. This is an example from Queen's University Belfast, from the e-AFFECT project, where they used the principles to articulate the assessment redesign: firstly agreeing the underpinning principle, that all assessment and feedback should encourage positive motivational beliefs and self-esteem, and then, under that, the other key principles, with each principle colour-coded. That enabled assessment conversations to happen with staff, the redesigned assessments to be mapped and plotted, and the colour coding to be used to show how each activity related back to the underpinning principle they were looking to achieve. Anne can maybe say a little bit more about this after I've finished, if people are interested in hearing more. And so, at last, we come to the technology in this principles discussion, and there are some really lovely examples of resources that projects have used to help make technology choices based on the type of learning and teaching approach they're trying to implement. For example, these are the Tech Trumps cards, which were developed by the University of Exeter, and each technology is ranked here according to how well it supports each of the dimensions of their work-integrated model. So, for example, Skype rates highly here on supporting collaborative activities, less so on others. Again, these are conversational tools that help staff engage with the underpinning pedagogy as well as decide how the technology can best support those approaches. I've also taken some screenshots here from the Queen's University Belfast principle cards, which again are based on the Viewpoints approach but start to integrate the technology into the activities on the back; so as well as the suggested activities and assessment designs, there's a list of technologies that staff might consider, some of which will be institutional and some of which may be more openly available. I just wanted to finish on another approach to change which has seen many successes across the programme, and that's been around working with students as partners. Projects worked with students as researchers, with student-led research leading to student-focused solutions, promoting ownership, engagement, community and participation. We've seen a paradigm shift from students as consumers to students as active contributors to new knowledge, which benefits all concerned: the students, in terms of the development of their skills, but also the institution and the staff who work with those students.
So for example, the University of Winchester, along with Bath Spa University, ran a student fellows programme where students were inducted and undertook training in research skills and assessment pedagogies, bringing back the importance of developing those assessment literacies with students. They led projects in their subject areas, looking at the principle sets and at technology-enhanced solutions to meet those aims, and that student fellows programme has now been embedded across the University of Winchester and scaled up, with a high level of commitment moving forward. I'd also just like to hand over to Peter, because we've been working to develop a network of students as partners, and I'll let Peter say a few words about that.

Okay, thanks Lisa. Yes, we're setting up this network; it's actually been renamed the Change Agent Network, and there are a number of activities within it. First of all, there are the collaborative activities: we held a big event a couple of months back at the University of Winchester where we had about 50% staff and 50% students. We're setting up an online course and accreditation scheme for students as change leaders, and that's going to be accredited by SEDA. We're also developing a journal of innovation, partnership and change, where we're going to be publishing articles, opinion pieces and so on about projects, particularly those that involve students as change leaders. And lastly, I've put a link into the text chat: we've developed a set of effective practice resources for setting up student partnerships and students as change agents, and you can access those on the site. We've also made them into a set of Viewpoints-style cards which teams who are setting up students-as-partners initiatives can use to help them in the process of setting up and implementing projects to do with student partnerships. So that's all I was going to say. Lisa, I'll pass back to you.

Thanks, Peter. Okay, well, I'm aware that we've covered such a broad range of ground in such a short space of time, so I hope some of that has made sense and seeded some thought. Just to summarise some of the key points: for any effective change, there needs to be a clear and shared understanding of the pedagogical basis for it, and discussing and agreeing principles can be a good way of doing that. Technology can be a clear enabler in achieving that pedagogical change, and the results that we've seen evidence some very clear benefits there. One key message, I think, is around the notion of assessment literacies: students developing an understanding of the role that activities such as self and peer review play in developing the key skills they need for life, not just for employability. There are also a lot of messages around the importance of curriculum design: ensuring that assessment is thought through in terms of maximising the opportunities for formative development, and that opportunities are provided for feedback to be discussed, so that it's the starting point of a conversation, and also acted on as students move through the curriculum; so looking at that more programme-wide level. As usual in some of these discussions, the main findings are not around the technology: the technology really works well when it's underpinned by all of this good assessment practice.
So if you're interested in finding out any more about the programme, we've got a website, the Jisc Design Studio, which has a number of different themes identified, with links directly back to some of the solutions we've been discussing today, as well as links back to all of the evaluation reports and some of the evidence and findings around those. We've also produced a number of briefing papers, which have emerged as the programme's developed, around those key topics; they're all available from the Design Studio. So there's a briefing around change and the principle-led approach, as well as the students-as-change-agents approach; there's a briefing around the electronic management of assessment; one around employability and supporting that through assessment practice; and one focusing on supporting longitudinal development through feedback and feed forward and opportunities for engaging learners in that dialogue. We're also just in the process of developing some video soundbites to support those briefing papers, and some written case studies which will at least provide shorter summaries of some of those key messages. So that's the body of work. I'd be really interested in having a conversation about it, now or another time. So I think that's it from me. Thank you.

Thanks, Lisa. That was a really interesting presentation and, as you say, you've covered a lot of ground, and there are lots of useful resources being highlighted in the chat window there too. So I hope everyone's able either to save the chat window so you can access those resources later, or to come back and review this talk at a later date if you want more time to think about it. We'll also make as many of these available within the ocTEL website for week four as we can. So I wonder if you could just join me in showing our appreciation to Lisa by putting a smiley face or clapping hands or something beside your name, so that we can show how much we've enjoyed the talk today. Thank you. Any questions for Lisa? You're welcome either to put something into the chat window, and we'll pick that up there, or to raise your hand and we'll pass the microphone to you to ask a question. Okay, I see there's one there from Glenn; you had your hand up a minute ago. Would you like to ask a question? Oh, okay. Anyone else who would like to ask a question? Peter, you'd like to say something? Over to you.

It was just the interesting issue of the balance of formative and summative, because of course, for a lot of academics, when it comes to marking and feedback, it's a stressful time of year and it all tends to be very bunched. I think one of the things that has come out of the programme is that you can't expect academics to suddenly spend a lot more time interacting and dialoguing with students; something else has to give. One of the things the projects have looked at is whether you need as much summative assessment. Can you decouple assessment from modules, so you perhaps have less summative assessment and more formative assessment, and the time goes into those formative aspects? Certainly, looking at some of the results coming from the IOE and Dundee, when the students did have the experience of more dialogue with the academics, the academics did actually see a big improvement in the students' performance. So I don't know, do you want to add anything to that as well, Lisa, perhaps?
Yeah, I think that's a really interesting point, Peter, and often we need to look not just at one part of the process but at the whole process when thinking about time-saving. Sometimes something may take longer, like the engagement the staff had at Dundee around the dialogue about feedback, but something is saved further down the line, and sometimes we forget to look at that whole process. It may not be immediately obvious to an individual staff member at a point in time, but when you look across the process, the technology can offer some ways of saving time for particular parts.

Yeah, there's a question there too, which might relate to that in some way, from Bradford, asking about issues around anonymity. I guess that will depend too on whether it's formative or summative assessment, but do you have any thoughts on that, Lisa?

Yeah, it's really interesting actually, the issue around anonymity, because if you put anonymous marking in place then there's a tension with the longitudinal development of learners: if learners can't engage in a dialogue with a staff member around their assignment, then I think there is a tension there.

Yeah, and I guess if you're talking about it in relation to employability, as you suggest, then quite often that kind of activity might be group work or team working, and in that case it's much more difficult to be anonymous. It's an interesting point, but there probably isn't a clear answer to it. Okay, one of the other things that came up, I think in the chat window, was in relation to peer assessment and to what extent we have to prepare students for it, because quite often it's something there's some resistance to among students, which makes it quite difficult for staff to implement. I just wondered what your thoughts were around that.

Sorry, Linda, I was just reading another question at the same time as you spoke. Would you mind just rephrasing that question for me? Thank you.

Sure, it was a question that came up in the chat window about peer assessment and how well prepared students are for it, and whether there's still some resistance to it when staff try to implement it.

No, I think that's very true, and I think it relates to the point I was making about the need to better induct students into the purpose of peer activities. I think there can be a common misunderstanding that it isn't as valuable as getting feedback from a tutor, and we've shown the benefits that are there; it's just about helping students to understand what those benefits are and how they relate to the skills they're trying to develop. I mentioned in the text chat an example from Cardiff Metropolitan, who were applying Westminster's Making Assessment Count process, and they found that when they first did it, I think they worked with year two students, but in their subsequent project they did it with incoming students and started them at induction, and because those students didn't have preconceived ideas about assessment and feedback, it was much easier to get that concept of peer review in right from the outset.

Yeah, as I said, you've covered such a lot of ground that there are so many areas we could delve into in a bit more depth, but I'm just looking to see if there are any other questions we might want to pick up on. Again, if anyone would like to raise their hand and take the microphone, you're welcome to, or any other questions in the chat room.
I think, from my point of view, one of the things you raised there that I'd like to look into in more detail is the idea of students as partners; I think that's something that could be very powerful in bringing students on board and engaging them more in their assessment and feedback processes. But I see that Inna's got her hand up, so I'm going to pass the microphone to Inna to see if she has a question.

Does it work? Hello. Can you hear me?

Yes, we can.

It's still about anonymity; I put the question at the beginning of the chat. I'm thinking, if I use a blog for the students to note their learning journey and reflect on the sessions and particular topics, I have the option of either using blogs within Blackboard, which would be a controlled audience, or using more open blogs like we are using here on ocTEL. What issues should I bear in mind? I can think of anonymity as one, but what other issues might there be?

Do you want to pick up on that one, Lisa?

I'm not sure I have anything to suggest, but I'm happy to discuss that offline if you wanted to drop me an email, and we can explore some of the issues further.

Okay, I think maybe the best thing to do then is for people to take time to reflect on some of the things we've talked about today and have a look at some of the resources, and then we can pick up some of these topics in the forums and on different communication channels during the rest of this week on ocTEL. So I think we'll just close off this webinar for the moment, and we shall pick up the strands of it in other discussions over the course of the week. But it's been great to have so many of you involved, and I hope you'll continue to be involved as we continue to look at feedback, assessment and student support issues during the rest of this week. So thank you, and thanks to Lisa and to Peter for such an excellent talk.