Though I had prepared slides, I will not show them. This is about the quality aspect of the FDPs, and I will be talking specifically about peer assessment. How many of you faced problems in peer assessment during the last FDP? You can faithfully raise your hands, okay. Question number one: was this a problem because the system did not permit something? By system I mean the Moodle system, okay. So I am talking only about OER creation and the OER creation assessment. Everyone did an OER? Were the instructions for the OER clear? Were the videos clear? Were the assessment rubrics clear? Was the example OER clear? Was it comprehensive? Now where was the problem? Yeah, I will give you a chance. I will give options one by one. Option number one is that peer assessment was not working because people did not get time to do peer assessment. Option number two: people who were part of the FDP did not understand peer assessment. Option number three: both of the above. So, how many for each option? Any other options that I need to include? The number of assessments? Okay, so the number of assessments provided to you is a system error. The Moodle workshop module has a known bug: any time a grouping is done and it is open for peer assessment, people will get a total number of assessments greater than what was indicated. So we are removing that in IIT Bombay X. In the new peer assessment mechanism, you will have a clear OER assessment question. You will have a clear OER example, or a peer assessment example. You will have clear instructions. You will have clear rubrics. What else do you want from the instructor? Understanding of the rubrics. Understanding of the rubrics by the peers, okay. The wording of the rubrics may be a problem; the criteria of the rubrics may be a problem. Yes, that is something that instructors can change at our end.
So we are trying to differentiate between things that can be controlled and things that cannot be controlled. Things that cannot be controlled: how somebody, even after so much training, goes and does a peer assessment, over that we won't have control. But supposing in the future FDPs you are associate faculty, we are trying to build a logistical mechanism where at least a quality check on these initial assessments can be done by the associate faculty, for addressing issues related to peer assessment. So a major change in future peer assessments is that we are going to completely utilize the IIT Bombay X peer assessment mechanism. How many of you are familiar with the IIT Bombay X peer assessment mechanism, or ORA, open response assessment? Okay, it's a bit complicated. Even from an instructor perspective it is a bit complicated. So we'll be circulating a document which explains the peer assessment process. But essentially you know the essence of peer assessment, right? The context has to be explained in the question. The question has to be clearly worded. What is expected from the participants has to be clearly put up in the question. This has to be converted into rubrics. The rubrics should have clear scales. There should be an example assessment graded by the instructor, on which the participant can first train. This example assessment is first graded by the participant, and unless and until the grading matches that of the instructor, he or she will not be able to move forward. Once they start, they will first write their submission and submit it. This will then go to different peers. They will also get two or three assessments to evaluate, whatever the required number is. They will evaluate each one using the rubric, and they will provide specific feedback for each and every criterion. They will have to write it out; suppose, let's say, for criterion one,
they give "needs improvement". They will have to write in words why they gave "needs improvement". Now, the system can only enforce that they compulsorily write some feedback. If the participant goes and writes "good" when it was not good, we can't do anything. So what we need to stress more is telling the participants that your feedback should be of the form: although your response was trying to answer the question, your answer had the following shortcomings, A, B, C; due to this, I am giving you this grade. Now, is this clear? When something like this comes up, for anybody who has a peer assessment problem: typically, if there are problems in marking, what do students do? Let's say the series test got completed and they got their mark sheets. On an open-ended question, let's say a 10-mark question, a student got only five marks. What will the student do? He'll go back to the teacher. So in this particular FDP, we'll try to make mechanisms wherein this going back is very much arranged. First they will approach a set of associate faculty who are handling the peer assessment. The associate faculty will be provided with the peer assessment in question, and they will make a judgment. Only after that will it come back to the main FDP instructor. So this particular mechanism is what we are planning, and for that you should know how IIT Bombay X peer assessment works. In the afternoon you will be given a peer assessment activity, and I'll explain the activity in a few words. Question: why are peer assessments kept blind? It was intentional, because we didn't want any naming; typically, when marks come out, people will say, okay, this person evaluated me less. So it is blind only to the participant, and it is an accepted practice even in research paper reviews. But what we faculty can see is who gave what marks.
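The train-then-submit flow described above can be sketched in a few lines. This is a hypothetical illustration, not the actual IIT Bombay X (ORA) code; the criterion names and function signatures are assumptions made for the sketch.

```python
# Hypothetical sketch of an ORA-style peer assessment flow.
# All names here are illustrative, not the real IIT Bombay X implementation.

CRITERIA = ["context", "clarity", "completeness"]

def passes_training(participant_scores: dict, instructor_scores: dict) -> bool:
    """The participant must reproduce the instructor's grading of the example
    assessment before being allowed to move forward to submission."""
    return all(participant_scores.get(c) == instructor_scores[c] for c in CRITERIA)

def assess(scores: dict, comments: dict) -> dict:
    """A peer assessment is accepted only if every criterion carries both a
    rubric score and a non-empty written justification -- the system can
    enforce that feedback exists, not that it is thoughtful."""
    for c in CRITERIA:
        if c not in scores:
            raise ValueError(f"missing score for criterion: {c}")
        if not comments.get(c, "").strip():
            raise ValueError(f"feedback is compulsory for criterion: {c}")
    return {"scores": scores, "comments": comments}
```

For instance, a participant whose grading of the example matches the instructor's on every criterion would pass the training gate, while an assessment with an empty comment on any criterion would be rejected.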
So at the faculty level, we will know who gave a particular feedback and we will be able to correct it. That much authority remains with us; though in all essence, creative commons and everything is good, it should not result in unnecessary quarrels. I also want to say something. If there is a huge inconsistency in peer assessment, for example if an assignment is assessed by, let's say, five people, and four have given 8 out of 10 and one has given 2, then I think that person should also get some negative feedback, or some negative marks, when there is such a huge inconsistency. Okay, I agree. But consider this case: the person who gave 2 out of 10 was actually doing his assessment carefully, whereas the other people who gave 8 out of 10 just gave it because it was there in front of them. How will you give feedback in that case? So this is peer assessment. Yes, that is one thing. If it is, let's say, 20 people under a single associate faculty, yes, we can do it. Suppose the number is 400 under a single faculty. So this is a problem of scale, and a problem of subjective evaluation, both of which are still active research problems in computer systems. This is something that we are trying to tackle. So yes, we will definitely make mechanisms wherein at least the associate faculty will be able to get one level of access to, let's say, a controlled or manageable set. But we will have to look at the overall FDP and the type of roles available, because conducting this FDP itself is a very complex, complicated and challenging problem. That's why all of you who have the experience of being part of an FDP are brought here, in the hope that in future FDPs all of you can assist us in this entire process. Yes. Yes, valid, valid. So the domain problem is something that we cannot immediately escape.
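One mechanical way to surface the inconsistency raised in this exchange, without automatically penalising the dissenting assessor (who may be the careful one), is simply to flag scores that sit far from the median and route them to an associate faculty for review. A minimal sketch, assuming a 10-mark scale and a hypothetical deviation threshold:

```python
from statistics import median

def flag_inconsistent(scores: list, threshold: float = 3.0) -> list:
    """Return the indices of peer scores that deviate from the median by
    more than `threshold` marks. Flagged scores are routed to an associate
    faculty for a second look -- they are NOT automatically penalised,
    since the dissenting assessor may be the one who read carefully."""
    m = median(scores)
    return [i for i, s in enumerate(scores) if abs(s - m) > threshold]

# The case from the discussion: four peers gave 8, one gave 2.
flag_inconsistent([8, 8, 8, 8, 2])  # -> [4], the dissenting assessment
```

The threshold value here is an arbitrary assumption; in practice it would have to be tuned per rubric, and the flag only triggers human review rather than a mark deduction.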
So we will try to make cohorts: let's say for one main faculty, we will try to match the expertise of that particular faculty in the domain, so that the assignment of associate faculty is based on domains. That is something we are thinking about while doing peer assessments, and while doing the FDP coordination itself, because without that matching it will be impossible for the faculty to understand what the person is talking about. One way of correcting that is by giving generic assessments. For example, in creating an open education resource, the resource itself is open; so we assess the features that have been used, the functionalities that have been used, which transcend disciplines or domains. These are the kinds of mechanisms on the instructor side that we can suggest as possible solutions until a better mechanism evolves. So we will be doing this in the coming process. I'll just explain what the peer assessment question is in five minutes so that you will know; our next assessment is actually related to all of this. Just one more last query, sorry to interrupt you. I just want to know if we can do peer assessment in two ways. One could be objective, where the presence or absence of a particular thing can be seen; the other would be subjective, where they have to evaluate the quality of that particular thing. So, the peer assessment question that you are going to face in the afternoon: the context is, assume that you have been assigned as an associate faculty in the future FDPs. There are four roles that we think require others to help the IIT Bombay team in managing the FDP. One is general coordination in the discussion forum. Second is specifically facilitating the peer assessment; it will involve everything from the scales, rubric creation and question creation, to handling peer assessment queries. Third is the discussion forum for OER, open education resource, creation.
And the fourth one is facilitating the A-view interactions. So assume that there are these four roles. As a faculty member you are allowed to select only one role, the one that you think you can do well. Yes, so you select one role, and now, question number one: identify the challenges that you foresee while executing your role. You know something about the FDP mechanism now. You know a little bit about IIT Bombay's platform. You know how the FDP is conducted and you know how faculty actually participate in an FDP. So this is not an outlier case; we are talking about a real-life problem. You should identify the challenges that you foresee, supposing you select any of these four roles. Second, you should propose solutions, practical and feasible solutions, for the identified challenges. Third, now that you have provided a solution, that is not enough: you should also tell us how you will evaluate whether the solution has worked or not. Suppose one issue is this peer assessment issue. That is one challenge, I agree. What is your solution? How will you know whether the solution is working? Within your evaluation strategies, you can propose some new mechanisms through which this could be conducted, everything under the ambit of IIT Bombay's platform. Everything will be done on the IIT Bombay platform, and there will be A-view interactions. So within this particular modus operandi, what are the solutions that you are going to suggest, and how will you evaluate them? Now, the rubrics. For the challenges, the quantity of challenges that you have proposed will be one evaluation criterion. Second is the validity of the challenges: how valid, how practical, how real is the challenge? And these challenges should be within the purview of the FDP implementation team. For example, let's say the Prime Minister of India suddenly said that all engineering colleges need to be closed down.
This is a challenge, a valid challenge, but it is not under the ambit of the FDP. So whatever challenges you state should be valid. Second: solutions. One criterion is the number of solutions you propose; and, quality-wise, these solutions should address the challenges and should be practical. Third: evaluation. The evaluation should be possible; it should be practically feasible that whatever you state as evaluation can actually happen. And these evaluations should be tied to the challenges and the solutions, so they should be aligned. Say my challenge is that participants are not attending the FDP; my solution is, okay, I make attendance mandatory; and my evaluation is, just look at the discussion posts. Are those solutions and evaluations aligned? If instead the evaluation is to look at the different activities by individual participants and count the number of activities, then that is an evaluation measure for this particular solution to the challenge. So these three are tied together. The fourth criterion is the logical flow in which you create this answer: your challenge, followed by the solution, followed by the evaluation, should have a logical flow. And there is a qualitative aspect of the English that you write: generally how good your English is, the grammar, the punctuation, the words that you use. It should be in simple English so that your reviewer can also understand what you have written. So these are the criteria that we are going to use to evaluate you. The scales of the rubric are available; basically there are three scales. One is "missing", when all of these are absent. "Needs improvement": something is there, but the answer could be improved. And "target performance". So those are the criteria.
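The rubric just described can be illustrated as a simple scoring sketch. The three scale levels come from the talk; the criterion names and the numeric 0/1/2 mapping are assumptions made for the sketch, since the actual marks table is not spelled out here.

```python
# Hypothetical mapping of the three rubric scales to 0/1/2 marks.
SCALE = {"missing": 0, "needs improvement": 1, "target performance": 2}

# Assumed criterion names, paraphrasing the description above.
RUBRIC_CRITERIA = [
    "validity of challenges",
    "quality of solutions",
    "feasibility of evaluation",
    "logical flow",
    "language quality",
]

def score_submission(ratings: dict) -> int:
    """Total the scale levels over all criteria (maximum 2 per criterion)."""
    return sum(SCALE[ratings[c]] for c in RUBRIC_CRITERIA)

# A submission at target performance on every criterion scores 2 * 5 = 10.
score_submission({c: "target performance" for c in RUBRIC_CRITERIA})  # -> 10
```

A continuous scale, as requested later in the discussion, would replace the three discrete levels with any value in the 0-to-2 range per criterion.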
So we expect a minimum of three challenges, a minimum of three solutions, and a valid evaluation mechanism for the FDP associate faculty role. Any questions? This is again back to our peer assessment. With the rubrics and the scale, many times we feel that we are in a strictly compartmentalized assessment. Can we go a little beyond the framework and assign some marks at the assessor's discretion, both positively and negatively? For the assessor or the assessee? For the person who is assessing the OER. So in the FDPs we had an 80-20 weightage. When we are doing the peer assessment, there are the rubrics and the scale for each parameter. So you want some more subjectivity: let's say 100 marks are there for the entire exercise, and 10 marks could be left to the assessor's purview. No, that's not quite the point. There is a scale for whatever parameter is there: if it is this level, then five marks; this level, three marks; this level, one mark. There we are constrained: I am not agreeing with the five, and neither am I agreeing with the three. So the scale could be a continuous one. Secondly, an assessor says that the entire OER is excellent, but when I got the marks for it, it was only 13 marks. I am getting an excellent remark from many assessors, but the mark I am getting is only 13. To take care of that, can there be some provision where the assessor can add extra marks or reduce some marks? So, point number one: yes, we can do a continuous scale, but for these subjective responses, say "what is the quality of the answer?", there will still be subjectivity. How much is "a few"? How many is "not a few"? That question will still exist. We can limit this, but yes, we can give continuous points; for example, here you are given 0, 1, 2, so it is a clear question. And essentially rubrics should not be converted to marks.
That is the golden maxim of rubrics. But since some quantitative marks have to come out of these open-ended exams, we are assigning some marks, so what we'll do right now is a continuous 0, 1, 2 type of evaluation for these rubrics. Regarding your second question: it will still have issues. Peer assessment, even after so many checks and balances, will still have issues, because ultimately there are elements which are left to individual interpretation, and the individual who is interpreting is also very subjective. You can have a good assessor and you can have a very bad assessor. In that case the system cannot do anything, but yes, there are possible mechanisms, and these are things that we also want to elicit from you: possible ways in which these issues could be curtailed. This idea generation is also a crowdsourcing exercise, for us to understand the possible challenges and some of the things that we can actually do from our end. So do post the challenges, solutions and evaluations; solutions here are strategies for overcoming the challenges, and these words will be used interchangeably in the open response assessment. Any further doubts? There was a recurring problem in peer assessment. In fact, I proposed the idea of giving the same question back to the person who has assessed it. Yes. During the discussion, I did a similar experiment in a school, with students both at the primary level as well as engineering students. For the engineering students, we did this experiment with 60 blue books, and there was substantial improvement. So how did you do it? Was it a pen and paper test? No, it was a blue book, a conventional blue book, for a conventional test. Okay, so in essence it was a pen and paper test. That's correct. So at IIT, Prof. Anamurthy's initial courses have peer assessment for the long essays.
So in case of problems, what he usually does is get both parties together and ask them to explain the rubrics to each other: this is the criterion where the issue is. There will be a discussion; for a longer piece it will be a maximum of half an hour, and for a shorter one a 5-to-10-minute discussion. After that, they are asked to reassess the questions based on the final agreed understanding that both have. Now, my point is slightly different. Here the problem is mostly the subjective, lethargic evaluation by the students. The question is: can the system be designed to take care of this? Yes, it can be. That's my approach. Okay, so it would be interesting if you post this, if you are taking this up. The problem is that, finally, it is subjective. Yes. Finally I have to do it. Yes. And if many people do the wrong thing, it ends up being treated as right, which cannot be accepted. So those are the approaches. Yeah, fine. So please do post your ideas so that we can also look at them and judge their merit.