So far, we have seen what constructive, customized feedback looks like, why it is important, and how we can provide it in multiple-choice questions. But many of us instructors would also like to assign longer questions, or problems requiring process steps, and so on. So let us pause at a reflection spot at this point. How can one give constructive, customized feedback on questions that require learners to write longer answers or process steps, that is, something that cannot be automatically graded? Please pause, think of one possible way that you might implement this, and then resume when you are done.

Some of you might have come up with your own ideas about how to give feedback on longer answers in a MOOC setting. One recommended way to give constructive and customized feedback on longer answers is by means of descriptive performance rubrics, implemented via self- and peer assessment. Rubrics are descriptive rating schemes that define performance at different levels. If something complex or open-ended needs to be assessed, one of the first things that needs to be done is to identify the criteria on which such a problem or question can be assessed; each criterion then contains detailed descriptors at various levels. So an outline of a rubric will contain criteria 1, 2, 3, and so on, as well as various performance levels. In this case, there are three levels: target performance, just-about-acceptable performance, and poor performance.

So let us see an example. First the question, which asks learners to write a program for (blank). The important criteria that may be used to assess and give feedback on such a question are whether the learner has specified appropriate logic, whether the code is readable, correctness of syntax, and so on; there may be other criteria that some of you may think of.
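To make the outline concrete, a rubric of this shape (criteria crossed with performance levels) can be sketched as a simple data structure. This is only an illustration: the descriptor texts below are hypothetical examples written for this sketch, not taken from the lecture.

```python
# A minimal sketch of a rubric: each criterion maps the three performance
# levels (target, acceptable, poor) to a descriptive statement.
# Descriptor wording is illustrative, not prescribed by the lecture.
rubric = {
    "appropriate logic": {
        "target":     "The logic correctly solves the problem for all cases.",
        "acceptable": "The logic handles typical cases but misses edge cases.",
        "poor":       "The logic does not address the stated problem.",
    },
    "readability of code": {
        "target":     "Meaningful variable names, commented lines, indentation.",
        "acceptable": "Meaningful variable names, but comments are missing.",
        "poor":       "No meaningful names, comments, or indentation.",
    },
    "correctness of syntax": {
        "target":     "Code runs with no syntax errors.",
        "acceptable": "A few minor, easily fixed syntax errors.",
        "poor":       "Pervasive syntax errors.",
    },
}

def feedback(criterion: str, level: str) -> str:
    """Return the descriptor a (self- or peer) assessor would give back."""
    return rubric[criterion][level]
```

A peer assessor then simply selects one level per criterion, and the descriptor itself becomes the constructive, customized feedback the learner receives.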
In order to write the levels, what can be done is to go through the criteria one by one: first decide what the target performance level is, then consider the major issues that make up that target performance, and then write the other levels. So let us look at a more concrete example. If we take the criterion of readability of code, target performance can be that meaningful variable names are assigned, the learner has commented the various program lines, there is indentation to maintain readability, and so on. If a practice such as commenting the lines is not done, perhaps some of us might consider it a smaller error, and we would then look at whether the variable names are meaningfully assigned. A learner who writes meaningful variable names but forgets to comment would be given a score of acceptable. And if none of this is done, if all the issues are missing, then the score would be poor.

This brings us to the closure of the formative assessment loop in the learning-by-doing activity. The learner begins by doing the activity, gets feedback on where their work stands relative to the performance levels, and also gets steps that point them towards reaching the learning goals or learning objectives. Now that we have gone through this learning dialogue, you can get experience of a learning-by-doing activity by writing your own learning-by-doing activity.
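The level-assignment logic just described for the readability criterion can be sketched as a small function. This is a sketch under stated assumptions: the boolean inputs represent judgments an assessor would make by reading the code, and treating a missing comment as the "smaller" issue follows the example in the lecture.

```python
# Sketch of assigning a level for the "readability of code" criterion:
#  - all practices followed            -> target
#  - meaningful names but no comments  -> acceptable (smaller issue)
#  - none of the practices followed    -> poor
# The flags are assumed inputs from a self- or peer assessor.
def readability_level(meaningful_names: bool,
                      commented: bool,
                      indented: bool) -> str:
    if meaningful_names and commented and indented:
        return "target"
    if meaningful_names:   # e.g. forgot to comment: treated as a smaller error
        return "acceptable"
    return "poor"
```

For example, `readability_level(True, False, True)` returns `"acceptable"`, matching the learner who writes meaningful variable names but forgets to comment.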