Okay, so now let's move on to the second group. Their project is titled "Recommendation System for Adaptive E-Learning", mentored by Mrs. Shukla Nag, Ms. Feroza Aibra, and Mr. Nagesh Karmali. Adaptive e-learning technology extends the traditional classroom environment to make guidance a one-to-one mechanism, that is, a single machine guiding a single user. I request the team to come up.

Good morning, everybody. I am Rakshit; this is Ashish, and this is Samir. Our project is a recommendation system for adaptive e-learning. This is the outline of our presentation today: first we will explain the abstract, then the contents of our project.

So, the introduction, or the abstract. E-learning systems, traditionally and even currently, follow a defined format: a student has to go through concepts in a fixed sequence. What we are trying to do here is break that format, because every student need not learn in the same way; every student is different. Recommendation systems in e-learning are a promising and fairly new research field which focuses on a student's needs and the way the student learns, rather than on a fixed format of learning.

In our system, a course is divided into concepts, and the core idea behind our system is the dependencies between them. Concepts within a course cannot be unrelated; they are related.
One of the papers we read showed that concepts are related and that there are dependency values between them. All of this can be expressed as a fuzzy cognitive map. Here is one example, where C1, C2, C3, C4 are concepts; for a course in C programming these could be iterations, decision making, pointers, and arrays, with dependency values between them. If you learn one concept, your gain in knowledge of that concept affects your gain in knowledge of the other concepts.

This formula shows how the knowledge level in a concept j is affected by P_i, the fractional increase in the knowledge level of concept i:

KL_j(t+1) = KL_j(t) + P_i * W_ij

Here KL_j(t+1) is the student's knowledge level in concept j at time t+1, that is, after he takes a quiz; t is before the quiz; and W_ij is the dependency value between concepts i and j. The update is simply the fractional increase multiplied by the dependency value, so the knowledge level in j may increase or decrease depending on P_i: if P_i is negative it decreases, and if P_i is positive, that is, if the knowledge of concept i increases, then j increases as well.

For example, there is a link from C1 to C2 with weight 1, and a weight of 0.45 from C1 to C3. Taking i as C1 and j as C2, W_ij is 1, and P_i is the fractional increase in concept 1. If a person takes the test on concept 1 and his score improves, that increase in i propagates, so his knowledge level in j increases too. That is the idea.

One technique to assign these weights automatically is what Samir is going to discuss now.

So we come to the concept mapping we saw in the previous slide. These are the concepts that were mapped, and as the discussion progresses we will see how important this mapping is. The main point we arrived at is that a teacher cannot give the exact concept mapping between the concepts; it may differ from teacher to teacher.
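As a rough sketch of the update formula above (all names here, such as `update_knowledge`, are my own and not from the project's code):

```python
# Sketch of the fuzzy-cognitive-map update KL_j(t+1) = KL_j(t) + P_i * W_ij.
# Names and sample data are illustrative, not taken from the project.

def update_knowledge(kl, weights, i, p_i):
    """Propagate a fractional knowledge change p_i in concept i to every
    concept j that depends on it, clamping results to [0, 1].

    kl      -- dict: concept -> knowledge level in [0, 1]
    weights -- dict: (i, j) -> dependency value W_ij
    """
    new_kl = dict(kl)
    for (src, dst), w_ij in weights.items():
        if src == i:
            new_kl[dst] = min(1.0, max(0.0, kl[dst] + p_i * w_ij))
    return new_kl

# The C1 -> C2 (weight 1) and C1 -> C3 (weight 0.45) example from the slide;
# the gain in concept i itself is assumed to be applied separately.
kl = {"C1": 0.6, "C2": 0.3, "C3": 0.2}
weights = {("C1", "C2"): 1.0, ("C1", "C3"): 0.45}
after_quiz = update_knowledge(kl, weights, "C1", 0.2)  # a 20% gain in C1
```

A 20% gain in C1 then raises C2 by 0.2 × 1.0 and C3 by 0.2 × 0.45, exactly as the formula prescribes.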
Since these weights are so important, and it is also tedious for the teacher to assign them all by hand, we decided to semi-automate the weight assignment. We use two matrices, and I will explain the G matrix first.

Suppose there are five students, S1 to S5, and five questions in a certain concept. A 0 means the student gave a wrong answer to that question, and a 1 means he answered it correctly. The matrix generated when the five students responded to the five questions is called the G matrix.

Now I will come to the QC matrix and describe how the weightages are given. When a teacher adds a question to a certain concept, suppose there are three concepts: basics of C programming, arrays, and iterations. When he adds a question to iterations, we give him an option: how is this question related to basics of C programming?
The teacher does not describe this with a numeric rating like 0.5 or 0.4; instead he says it is slightly related, moderately related, or highly related. Based on that response, our system itself calculates the rating, that is, the connection weightage, so it will not differ from teacher to teacher; it will be mostly the same. For example, when the teacher says a question is moderately related, the system assigns 0.5. This gives the QC matrix, with the five questions against the concepts.

After that we try to calculate the relations between the concepts. Consider the G matrix again, with questions Q1 to Q5. The support of a single question is how many students answered it correctly: that is two for Q1 and one for Q2, and these are the one-question item sets generated from the matrix.

Then we calculate the similarity between Q1 and Q2 by XOR-ing the Q1 and Q2 responses. Consider the Q1 and Q2 columns of the G matrix: for the first student, the XOR of his two responses gives 1, and we do the same for all the students. Summing over the students gives a total of 4 for this pair.

This XOR sum is used as a filter. We keep a certain cutoff, say 40% of the number of students; with 5 students, 5 × 0.4 gives 2. The sum calculated by the XOR must be greater than this cutoff; if it had been, say, 1, the pair would not go on to the second step of the concept mapping. Here the XOR for Q1 and Q2 was 4, so it went to the next step, the two-question item set, in which we calculate the right support: how many students answered both questions correctly.
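The XOR filter step can be sketched like this (the G matrix below is made-up sample data, and the function names are mine, not the project's):

```python
# XOR dissimilarity filter over the G matrix (1 = correct, 0 = wrong).
# Sample data and names are illustrative, not the actual class results.

G = [
    [0, 1, 1, 0, 1],  # S1's answers to Q1..Q5
    [1, 0, 0, 1, 0],  # S2
    [0, 1, 1, 0, 0],  # S3
    [0, 1, 0, 1, 1],  # S4
    [1, 1, 1, 0, 0],  # S5
]

def xor_sum(G, q1, q2):
    """Number of students whose correctness differs between q1 and q2."""
    return sum(row[q1] ^ row[q2] for row in G)

def passes_cutoff(G, q1, q2, fraction=0.4):
    """Keep the pair for the next step only if the XOR sum exceeds
    fraction * number_of_students (the 40% cutoff described above)."""
    return xor_sum(G, q1, q2) > fraction * len(G)
```

With this sample G, `xor_sum(G, 0, 1)` is 4, above the cutoff of 0.4 × 5 = 2, so the pair Q1, Q2 proceeds to the next step; only S5 answered both correctly, matching the right support of 1 mentioned next.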
Only student S5 has answered both question 1 and question 2 correctly, so the right support is 1 in this case.

Examiner: Logically, you have now decided, for some reason, that two questions are similar, or that two questions are dissimilar; I don't yet know what right support is. What do you do with that knowledge? You are not proceeding with it; you dropped it in the middle and went somewhere else.

Student: The dependency value between concepts is not calculated in one step. We proceed to calculate the confidence, and after that the relevance is calculated, so there are three steps. And right support is the number of students who attempted both questions correctly; only S5 attempted both question 1 and question 2 correctly, hence the right support is 1 for Q1 and Q2.

Examiner: Okay. Why not three-question item sets? I don't know what the final objective of all this mathematics is.

Student: We are just calculating the mapping between the two concepts. First we calculate these question-level values, then we calculate the similarity between two concepts.

Examiner: For a one-mark question, the ideal pair would be that the students answer one question all correctly and the second question all wrong. The questions have to be dissimilar, is what I am trying to say, because if they are similar questions, then I can drop one; I do not need both. That is the conclusion I was hoping you would tell me, and you are not telling me. If the XOR is low, then they are not dissimilar questions; they are effectively the same question. Say I give two questions to a batch of 100 students, and the same 50 answer the first one correctly and the same 50 the second; then I can drop one question, because it does not differentiate. Okay. It is a fine technique.
Examiner: I am not saying it is no technique, but the clear use I can see for that technique is to drop questions. Are these questions compared across concepts?

Student: No, we never compare questions across concepts; we compare questions only within a concept.

Examiner: For the purpose I am talking about, I don't see any reason to compare questions across concepts. That is why I mentioned pointers and arrays: is there any reason to compare them? If I am teaching the C language, say variables and arrays, I am supposed to design a question bank that tests only that particular concept.

Student: But that is not possible as you go further. Suppose you go to pointers: you will require some if-else, you will require some arrays. You cannot avoid it.

Examiner: Right, I understand; so there is a precedence of concepts. There is no question about that: without certain things I cannot learn something else, and if I attempt one correctly, my knowledge in the other is there. Precedence of concepts is perfectly fine, and that can be given by the teacher.

Student: The confidence is calculated by this formula, which comes to 0.5. Then, from the QC matrix, we generate a QC' matrix. Consider the first column, for concept C1: for Q1 the entry is 1 divided by (1 + 0.5 + 0.3), that is, 1/1.8 = 0.55. This is calculated for all the entries by the same formula, and that gives the QC' matrix.

Now we find the relevance between concepts C1 and C2 based on the questions Q1 and Q2, by the formula: the QC entry for Q1 and C1, times the QC' entry for Q2 and C2, times the confidence we calculated earlier. For this pair it comes to 1 times 0.714 times 1, that is, 0.714. So the relevance between concepts C1 and C2 based on these two questions is 0.714. Similarly we do this for all the question pairs, and the total relevance between C1 and C2 is calculated by taking the average.

Now the user modeling will be explained by Ashish.
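Under my reading of the three steps (right support, then confidence, then relevance through a row-normalized QC' matrix), a minimal sketch could be as follows; the function names and the tiny sample data are mine:

```python
# Three-step concept relevance: right support -> confidence -> relevance.
# Names and sample data are illustrative, for the idea only.

def right_support(G, q1, q2):
    """Number of students who answered BOTH questions correctly."""
    return sum(row[q1] & row[q2] for row in G)

def confidence(G, q1, q2):
    """Of the students correct on q1, the fraction also correct on q2."""
    correct_q1 = sum(row[q1] for row in G)
    return right_support(G, q1, q2) / correct_q1 if correct_q1 else 0.0

def qc_prime(QC):
    """Normalize each question's concept weights by their sum,
    e.g. 1 / (1 + 0.5 + 0.3) = 0.55 as in the example above."""
    return [[w / sum(row) if sum(row) else 0.0 for w in row] for row in QC]

def relevance(QCp, G, c1, c2, q1, q2):
    """Relevance of C2 to C1 contributed by the question pair (q1, q2);
    the total relevance averages this over all such pairs."""
    return QCp[q1][c1] * QCp[q2][c2] * confidence(G, q1, q2)

G = [[1, 0], [1, 1], [0, 1]]          # 3 students x 2 questions
QC = [[1.0, 0.5, 0.3],                # Q1's weights on concepts C1..C3
      [0.2, 0.8, 0.0]]                # Q2's weights
QCp = qc_prime(QC)
r = relevance(QCp, G, 0, 1, 0, 1)     # C1 vs C2 through Q1, Q2
```

Averaging `relevance` over every qualifying question pair between the two concepts then yields the semi-automated W_ij weight.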
This is the user modeling, where we try to store information about the user. User models can be classified in two ways, giving four types: domain related and domain unrelated, and static and dynamic. Domain-related information is knowledge relevant to the field, say the student's knowledge level in a concept. Domain-unrelated information is information about the user that is not tied to the domain, say his personal preferences, learning style, and so on. Static and dynamic are as the names suggest: static information remains constant throughout the course, while dynamic information changes through the course.

We have used the knowledge level of a student, which varies from 0 to 1. This is a sample knowledge graph, showing how much knowledge level he has in each concept.

Next is the adaptive engine, the core of our project, where we have four recommendation techniques. The function of the adaptive engine is to recommend the concepts that are most suitable to the student, and also to recommend some learning objects related to those concepts.

Our first algorithm is the maximum success path. Since we have the concept mapping available with us, whenever a student passes a concept, the next concept we recommend is the one that results in the maximum knowledge gain across all the concepts. Because learning one concept changes the knowledge level of other concepts, whenever a student completes a concept we find which next concept will give him the maximum average knowledge increase across all the concepts. This is basically a greedy approach: we always select the concept that gives him the best gain, and by selecting and learning more in this way each time, he completes the course.

The second technique is user-item collaborative filtering, where the users rate the concepts, and depending on the ratings of the users we calculate the similarity between concepts.
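A sketch of the greedy maximum-success-path step, under my own simplifying assumption that completing a concept yields a unit gain in itself plus W-weighted gains in its dependents:

```python
# Greedy "maximum success path": recommend the unfinished concept whose
# completion yields the largest total knowledge gain across all concepts.
# The unit-gain assumption and all names here are illustrative.

def next_concept(completed, concepts, W):
    """completed: set of finished concepts; W: dict (i, j) -> dependency."""
    def total_gain(c):
        # Gain of 1 in c itself, plus propagated gain W[c][j] to each other j.
        return 1.0 + sum(W.get((c, j), 0.0) for j in concepts if j != c)
    remaining = [c for c in concepts if c not in completed]
    return max(remaining, key=total_gain) if remaining else None

concepts = ["C1", "C2", "C3"]
W = {("C1", "C2"): 1.0, ("C1", "C3"): 0.45, ("C2", "C3"): 0.2}
first = next_concept(set(), concepts, W)   # C1 wins: gain 1 + 1.0 + 0.45
```

Calling `next_concept` again after each completed concept walks the greedy path through the course.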
The similarity between two concepts is calculated the way all the e-commerce websites, Amazon, Flipkart, and so on, do it: they use collaborative filtering, calculating the similarity between two products based on the ratings provided by users. We use the same modeling between concepts. We ask the user to rate a concept on various aspects, then calculate the similarity between two concepts based on the ratings provided, and then predict the utility of a concept to a user, say the probability that the user will like this concept, using this formula. When we have the prediction values for all the concepts, we sort them and recommend the user the top two or three. So this is a simple algorithm.

Then there is user-user collaborative filtering, which will be explained by Rakshit.

So that was user-item collaborative filtering, which calculates similarity between items. The second is user-user collaborative filtering, which calculates similarity between users, based on what courses and concepts they have taken and how they have performed in those concepts. Traditionally, collaborative filtering uses ratings; in this system we have used the students' performance, their scores in the concept quizzes. Based on those scores we calculate the similarity between students.

In the numerator here, S_ic and S_jc are the scores of students i and j in concept c; for all the concepts common to the two students, we multiply those scores. It is basically cosine similarity. The concepts that are not common contribute to the denominator for their respective students. So when students have more courses in common they will have higher similarity than when they have fewer in common, taking into account their scores in those courses as well.

After calculating the similarity, we calculate the prediction values.
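A minimal sketch of the score-based cosine similarity and prediction (function names and the weighting scheme are mine; the formula on the slide may differ in detail):

```python
from math import sqrt

# User-user collaborative filtering on quiz scores instead of ratings.
# Names, sample data, and the averaging scheme are illustrative.

def cosine_sim(si, sj):
    """si, sj: dict concept -> score. Common concepts feed the numerator;
    every concept a student took feeds that student's norm below."""
    num = sum(si[c] * sj[c] for c in set(si) & set(sj))
    den = sqrt(sum(v * v for v in si.values())) * sqrt(sum(v * v for v in sj.values()))
    return num / den if den else 0.0

def predict_score(target, others, concept):
    """Similarity-weighted average of other students' scores in `concept`."""
    num = den = 0.0
    for sj in others:
        if concept in sj:
            sim = cosine_sim(target, sj)
            num += sim * sj[concept]
            den += sim
    return num / den if den else 0.0

target = {"C1": 0.8, "C2": 0.6}               # has not taken C3 yet
others = [{"C1": 0.9, "C2": 0.7, "C3": 0.8},
          {"C1": 0.3, "C2": 0.9, "C3": 0.4}]
p = predict_score(target, others, "C3")       # predicted score in C3
```

Concepts whose predicted scores are highest are then the ones recommended.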
The prediction value is the possible score of the student in a concept, based on the scores of other, similar students in that concept. For example, I have taken three concepts and another person has taken four, three of which are the same; what could my score be in the fourth concept, based on how similar we are and how he has performed? Once we have the prediction values, we recommend the concepts in which the student is likely to have high scores.

User-item collaborative filtering has a serious shortcoming: it depends on human ratings, and a human may rate something arbitrarily, so it will not be exactly right. User-user collaborative filtering overcomes this shortcoming, since it does not depend on human ratings. On the other hand, an advantage of user-item collaborative filtering is that we can recommend a concept without bothering about its content: we can find the similarity between two concepts without even looking at what the content of those concepts is.

The last recommendation technique is based on remediation. What we observed is that students sometimes need to be recommended concepts because of a misconception: a student may have learned a concept wrongly in the past, or forgotten it. So we need to recommend the concepts he forgot, the ones he needs to revise, and this technique is based on that.

Here there are two very important parameters, n_i and d_i. n_i is the number of consecutively wrong attempts on concept C_i's questions: if the student continuously answers questions of concept C_i wrongly, that count is n_i. And d_i is the total dependency of those wrongly attempted questions on that concept. Once we have the n_i and d_i values, we apply a cutoff of 30%: if a student is answering more than 30% of a concept's questions wrongly, even after passing the quiz of that concept, he has probably forgotten that concept and needs to revise that concept once again.
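The remediation check might look like the sketch below. For simplicity my n_i counts all wrong attempts rather than strictly consecutive ones, and the names are mine:

```python
# Remediation: flag a passed concept for revision when more than 30% of
# its questions are answered wrongly. Names and data are illustrative;
# n_i here counts all wrong attempts, not only consecutive ones.

def revision_importance(outcomes, dependencies, cutoff=0.3):
    """outcomes: list of bools (True = correct) for one concept's questions.
    dependencies: each question's dependency weight on the concept.
    Returns the importance of revision, or None if under the cutoff."""
    wrong = [d for ok, d in zip(outcomes, dependencies) if not ok]
    if len(wrong) / len(outcomes) <= cutoff:
        return None                     # he probably still knows the concept
    n_i = len(wrong)                    # wrongly attempted questions
    d_i = sum(wrong)                    # their total dependency on the concept
    return (n_i + d_i) / 2              # importance = average of n_i and d_i

imp = revision_importance([False, False, True, True, False],
                          [1.0, 0.5, 0.8, 0.2, 0.7])
```

Sorting the flagged concepts by this importance gives the revision recommendations described next.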
Then we calculate the importance of revision, that is, how important it is for him to revise. For that we simply take the average of the two parameters, n_i and d_i, sort the concepts according to this importance, and recommend him either the top concepts or all of them in decreasing order of importance. So these are our four recommendation techniques.

A shortcoming of the remediation model is guesswork: the system does not account for the fact that a student can pass by guessing. But we can safely assume that if the questions are set smartly, guesswork won't work. The good thing about this technique is that it recommends revision to the student: even if he passed that course earlier, it still recommends it, because the system detects that he has forgotten that concept.

This is one of the sample matrices we generate for this technique. It signifies, for instance, that the student has attempted question 1 three times, and that this question depends on concept C1 with weight 1. From this we calculate the importance value, sort the concepts by importance, and recommend accordingly.

Now for the evaluation model, with some user experience. We have also developed an evaluation model for our system, to test how users like it, what the students' choices are, and what their suggestions for the model are. It helps judge the performance of the system and suggests useful changes to improve its efficiency. We take into consideration both the implicit data and the explicit data given by the users and the system.

We use three techniques for the evaluation model. The first is based on feedback: each time a student takes a quiz and checks his score, a small set of randomly chosen questions appears, and we take his ratings for those questions on a scale of 1 to 5.
The second technique uses the scores of the students, that is, the student's performance in the concept. That is used for a one-to-one mapping of the relationship between the learning material and the quiz: whether they were related or not.

The third evaluation is based on ratings. During each concept, when the student appears for the quiz, at the end we have an overall rating system on a scale of 1 to 5; he gives his overall rating for that concept, and based on that we calculate two technical parameters, precision and recall, which I will explain in the following slides.

These are some screenshots of the system, and this is the future work and conclusion: what we have learned from this project. We learned certain frameworks and languages beyond those we knew previously. Django was very helpful for us, and Git and LaTeX were very important in our project. Documentation is important, as everyone in FRG realizes. We also gained a better understanding of e-learning: we did not previously have a very clear idea of how recommendation systems work. And we have written a research paper on this work; it was a good feeling to write that paper, and these two tools were very helpful to us in doing that.
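The two evaluation parameters follow the standard definitions; a small sketch (the concept sets here are illustrative, with relevance judged from the 1-to-5 ratings described above):

```python
# Precision and recall over a set of recommended vs. actually relevant
# concepts. Standard definitions; the sample sets are made up.

def precision_recall(recommended, relevant):
    """precision = fraction of recommendations that were relevant;
    recall    = fraction of relevant concepts that were recommended."""
    hits = len(recommended & relevant)
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall({"C1", "C2", "C3"}, {"C2", "C3", "C4"})
```

High precision means few wasted recommendations; high recall means few forgotten concepts slip past the recommender.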