I work at a centre in Kilkenny, a centre for adult immigrants. The services are offered free of charge and the tutors volunteer their time. I'm currently completing my master's in professional development for language education.

The centre where I work didn't have any means of placing the learners into levels. They would just come in and we would place them intuitively into classes or levels. So as part of my MA assignment, I decided to design a vocabulary size test, and I used the EVP, by the way, for that. The test seemed to work just fine and it was practical. However, we soon felt that it needed to be complemented with some other form of assessment.

Keeping that in mind, I decided to design an INSET plan for informal assessment of speaking. Informal because, with the low resources we had, it was not possible to conduct assessment formally on a one-to-one basis. So it would have to be done informally, but the tutors would have to be trained. I also had to bear in mind that the tutors had diverse educational backgrounds. I thought it would be easy, but it wasn't: I had to put in a lot of work and a lot of time and planning. I also had very limited time for this, just two hours that the tutors could spend, so I divided it into two sessions.

So we move to the first session. The first step would be to provide them with a springboard for assessment, to give them an idea of what they were going in for. First and foremost was awareness of the special features of spoken language, because it differs so greatly from written language: there are fillers, pauses, ellipsis, incomplete sentences, vague language. The vocabulary differs so greatly when we speak. So they had to be made aware of that. Next, I gave them a rough plan for assessment.
So the first step in this direction was to have them outline the major syllabus objectives and align them with the assessment tasks, which I would introduce them to later; to give them the option of assessing students in cycles, maybe two to four students per week, so as not to overwhelm them; and to record performances, with the students' permission of course, for later rating.

Next, I highlighted various modes of assessment, including individual, paired, and group tasks, and I also highlighted the pros and cons of each one. For example, group tasks would be more economical and more practical, but the tutors would have to ensure that every student had a fair turn, or a turn long enough to be assessed. Next, I highlighted different task types, including oral interviews, descriptions, role plays, story and text retellings, and video clips. And finally, I gave them some tips for assessment: the appropriacy of tasks for the levels; basing these tasks on activities that they use in class on a day-to-day basis; gauging the level of task difficulty by the number of events or the number of characters; and the provision of timely and appropriate feedback, highlighting the strengths and weaknesses of the learners' performances.

Now we come to the meaty part. The scale for assessment that I designed was based on the Common European Framework of Reference (CEFR). In order to help them understand the scale, I needed to introduce the CEFR to them in a comprehensive but brief manner, because we could go on and on and on. Apart from the visual information that I presented, I also included a very short but comprehensive video clip from the British Council. That's the part where I actually saw people sitting up in their chairs and paying a lot of attention, so that worked very well. Next, I had to introduce them to the scale for assessment, but not just by having them read it.
They needed to be engaged with the scale, so I devised a gap-fill familiarization activity. As you can see here, there are gaps, and they were given the choices at the end; they did this in pairs. A discussion was invited after this, and we discussed the right choices.

Moving on, now it was time to set the standards, to actually show them performances reflective of the different levels of the CEFR. For this I used samples from Cambridge English Assessment, which can be used by non-fee-paying institutions. What I did was give them the accompanying documentation, and they highlighted the critical features of just one of the test takers' performances, so as not to be overwhelmed. So they focused on one of the test takers, highlighted the critical features in the documentation, and after each viewing we discussed that performance, why it was at a certain level, and clarifications were made.

Moving on. This was the second session now, after about one week, and it was time to apply the standards, to apply what they'd learned. I gave them the scale for assessment, but I asked them to highlight the important features of each level, so as to recap what they'd done and to comprehend the levels better. They viewed the samples once again from Cambridge English Assessment, and this time they identified the level of the performance against the assessment criteria. This was done in groups. Discussion was invited, and after each viewing clarifications were made as to why the performance was at a certain level.

And finally, it was time for individual rating, but this was done anonymously, so as not to put anybody on the spot. They viewed the samples and were asked to enter their ratings in this column, under Rater 1.
The second column would be used to enter the ratings from the experts at Cambridge English Assessment, and the differences would be calculated to give us the correlation coefficient, or, put simply, the rate of agreement between the ratings done by the trainees and those done by the experts.

All right. So how did I evaluate the sessions? This was basically three-tiered. First, I used the Rater Scoring Form, which we just saw. The Rater Scoring Form showed a correlation coefficient of 0.8, or let's say 80%, which is reasonably satisfactory. However, the levels B1 and B2 were confused by, let's say, two of the participants, so that is something I'd have to work on later. Second, the results of the questionnaire were really positive. All the participants agreed that they found the sessions useful and beneficial, understood the main features of the speaking performances, and felt that they could use the scales to assess speaking. I asked them to complete this anonymously. Third, the field notes. After each session I just jotted down my observations, and these were compiled immediately after the session so I wouldn't forget. The field notes showed that the participants appeared to be engaged, and some of them made really valuable, well-informed comments and contributions.

Finally, for the tips now. First and foremost would be practicality, and especially time. The time that you decide to allocate to the training should be sufficient to cover the materials, and it should be in keeping with the time available for training. I just had two hours, and I had to utilize each and every second; I had to shorten the video clips and extract the useful bits from them. Next is awareness of the trainees' experience and knowledge. That is really important while designing the sessions; if you don't keep that in mind, it may all be over before it starts. Engagement, yeah.
So make the sessions engaging and interactive, use a variety of modes, and try to cater to different learning styles: visual learners, auditory learners, group learners, individual learners. And, most importantly, comprehensibility: avoid excessive use of jargon, language that they may not understand, and try to divide the sessions into digestible chunks, I'd say.

Next was evaluation. Try to have a three-pronged method if you can, and it should be very clear and critical, not just a rosy picture, because it has to feed into follow-up sessions or future training requirements. This is really important. Now, the scale that you use for assessment: don't just have the trainees read it; try to engage them with it cognitively, as I tried to do in the gap-fill activity. Try to have the trainees rate in groups first before they rate individually. This will save a lot of time and effort, because some differences will be ironed out and consensus is usually reached. Collect data anonymously to invite honest feedback. And last but not least, check all equipment, which I must admit I overlooked; things turned out just fine in the end, but I believe it's really important to do that.
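As a footnote to the Rater Scoring Form mentioned earlier: the talk doesn't specify exactly how the rate of agreement was computed, but one common way to get a correlation coefficient between trainee and expert ratings is to map the CEFR levels onto an ordinal scale and compute a Pearson correlation. The sketch below does that with hypothetical ratings; the actual ratings and the exact statistic used in the sessions may have differed.

```python
# A minimal sketch of a rater-agreement calculation, assuming CEFR
# levels are mapped to an ordinal 1-6 scale and agreement is measured
# with a plain Pearson correlation. The rating lists are hypothetical.

# Ordinal mapping for CEFR levels.
CEFR = {"A1": 1, "A2": 2, "B1": 3, "B2": 4, "C1": 5, "C2": 6}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings: "rater1" is a trainee's column from the Rater
# Scoring Form, "expert" is the reference ratings from the sample videos.
rater1 = [CEFR[level] for level in ["A2", "B1", "B1", "B2", "C1", "A1"]]
expert = [CEFR[level] for level in ["A2", "B1", "B2", "B2", "C1", "A1"]]

print(f"agreement: {pearson(rater1, expert):.2f}")
```

Note how a single B1/B2 disagreement (the third performance above) still leaves a high coefficient; that matches the pattern in the sessions, where an overall 0.8 coexisted with B1/B2 confusion for a couple of participants.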