Good afternoon to all. I, Archbishop Tiak, will be starting the presentation for our group, Know Our World, a project guided by Professor D. V. Fatak and mentored by Mr. Vinash Abate, Mr. Mayank Palival and Mr. Rajnikanj Jangi. Before I start, I would like to introduce my team: Prashant Mishra, Aman Kansal, Abhinav, Rohit, Nareesh, Nivedita, Chandana and myself. Our project was divided into three phases. The first was to design interactive and informative activities using the PI framework. The main aim of developing these activities is to help students understand key concepts through an interactive environment, which the PI framework provides. Our group has implemented over 80 such activities, a few of which are shown in our demonstration. Moving on, Prashant Mishra will explain the next objective, the prototype design for computerized adaptive testing.

Computerized adaptive testing (CAT) is a computer-based test in which the difficulty of each test item is tailored to the ability of the examinee; the test adapts itself to the examinee as it progresses. The key idea is that an examinee of higher ability is not presented with many questions of lower difficulty, so he does not waste time on items well below his range. Likewise, an examinee of lower ability is not presented with questions of much higher difficulty. In this way, the test provides the best possible motivation. Now I move on to the advantages of CAT. Why do we actually need computerized adaptive testing? In contrast, a fixed-length test has a fixed number of items, and the student needs to attempt all the questions to get his result. By using a computerized adaptive test, we can have shorter tests.
For example, we set a maximum test length, but the computerized adaptive test can finish earlier if the termination criterion is satisfied. It saves the test taker's time, generates precise scores that match the ability of the examinee, and displays the results immediately. Now I move on to the components of computerized adaptive testing. The first and foremost component is a well-calibrated item pool: we need a large database at the back end so that we can provide the student with an appropriate test and accurately estimate his ability. Next is the starting point. When the student enters the test, we have no prior knowledge about him, so what difficulty of question should we present first? In our case, we have scaled ability from 0 to 800, and deciding the starting point is a key concept in computerized adaptive testing. Next is the item selection algorithm: as the examinee progresses, which question from the item pool should be picked to best match his ability? Then comes ability estimation: once the student has answered a set of questions, we estimate his ability from his responses and present him with a question of equivalent difficulty. Last is the termination criterion, the most important part of computerized adaptive testing: when should the test stop? There are fixed limits, a maximum time and a maximum number of questions, but the test can finish earlier if the algorithm's termination criterion is satisfied. Now I would like to show some key features of the web portal that we have developed. It manages minor connection losses; that is, minor connection losses are tolerable. We have also provided resumability of tests.
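As a rough illustration of how these components fit together, here is a minimal, hypothetical sketch in Java. It replaces the project's actual 1PL ability estimation with a simple binary-search update against a deterministic examinee; the class and method names are ours, not the project's.

```java
// Hypothetical sketch of a CAT loop: starting point at the scale midpoint,
// item selection at the current estimate, ability update after each response,
// and termination on maximum length or a sufficiently narrow interval.
import java.util.function.DoublePredicate;

public class CatLoop {
    static final double MIN = 0, MAX = 800;  // the group's ability scale
    static final int MAX_QUESTIONS = 35;     // hard stop from the talk

    /** answers.test(d) is true if the examinee answers difficulty d correctly. */
    static double run(DoublePredicate answers) {
        double lo = MIN, hi = MAX;
        double ability = (lo + hi) / 2;                 // starting point: 400
        for (int asked = 0; asked < MAX_QUESTIONS && hi - lo > 1; asked++) {
            // item selection: present a question at the current estimate
            if (answers.test(ability)) lo = ability;    // correct -> estimate rises
            else                       hi = ability;    // wrong   -> estimate falls
            ability = (lo + hi) / 2;                    // ability estimation
        }
        return ability;  // termination: max length or interval narrow enough
    }

    public static void main(String[] args) {
        // A deterministic examinee: correct exactly when difficulty <= 600.
        System.out.println("estimated ability = " + run(d -> d <= 600));
    }
}
```

With this deterministic examinee, the estimate converges to within one unit of the true ability in about ten questions, well under the 35-question cap.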
If a previous test session exists, it is resumable. The result is generated dynamically, and we show a dynamic performance report. Also, the system is designed to be independent of the database architecture used: the item pool can be calibrated in any database format, and we just use a database interface. Next, I move on to a small demonstration of the web portal. This is the front-end module of the computerized adaptive testing web portal that we have created. It first checks whether the client already has a test session; if he wants to resume it, he can, and otherwise he can start a new session. When he starts a new session, he is shown this front end, where he enters his details, which are validated by JavaScript. When he presses the continue button, he is directed to the test customization page, where he can select which topics he wants in the test. At the back end, this module is responsible for creating the session for that student: it stores the relevant information, creates the relevant objects, and stores in the session the objects that will be used later for computing the ability. When the examinee presses next, the computerized adaptive test begins. This is the basic instruction page, which lists the instructions for the computerized adaptive testing environment we are going to test him in. It is also the second part of the session creation module.
It creates the basic user logs that will be required for item calibration and for storing the students' responses, so that they can be used later in the result display module to generate the performance report. When he presses continue, this is the basic test arena that we present to him. Each question is presented with four options; it is an MCQ-type test. The estimated ability is shown as well. Computerized adaptive testing scales the examinee's ability and the item difficulty on the same scale, so if the estimated ability is x, he will be presented with a question of difficulty level x, because they are calibrated on the same parameter range. The student's information is visible, and the student is given two options: to skip a question or to submit an answer. The skip-question functionality allows the student to skip at most three questions without any penalty; after that he is not allowed to skip. When he submits, his response is evaluated and we dynamically show the change in his estimated ability: whenever he answers a question correctly, his estimated ability rises, and whenever he answers wrongly, it falls. For demonstration purposes, we limited the test length to six, and this is the final report that gets generated. This is the result display module, in which the user log created by the response analysis module is used to prepare the test report. We show the entire test summary: the number of questions presented to him, the number answered correctly, and his estimated ability.
We also show a question-by-question summary: each question presented to him, the correct response, his response, and the ability estimated by our algorithm at that point in time. We dynamically map this onto a performance chart, and we also show a graph of the deviation in the standard error. This is the key quantity in the termination criterion and will be taken up in the next module, the simulation module. The simulation module was the third basic objective: experimentation, testing and simulation of the computerized adaptive testing model. We have fully tested this model, and to explain further, I hand over the mic to Akip.

For the computerized adaptive testing, we have opted to implement the Rasch one-parameter logistic (1PL) model, which assumes that the item difficulty and the ability of the user are on the same scale. It also assumes that the answer to a particular question is either correct or incorrect; it does not give weight to the individual options. To calculate the next ability estimate, we have used an interpolation formula, and according to that formula we pick the best-fitting item from the database. The basic assumptions for this algorithm to work are, first, that there is a very large back-end database that never falls short of the required number of questions, and second, that the database is properly calibrated, with no error in the calibration of individual items. Our simulation results focus on two aspects: the initial ability and the termination criterion. First, I will explain the termination criterion. The test terminates on either of two conditions: the maximum length has been reached, or the standard error has fallen below a certain level.
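For reference, the 1PL (Rasch) response probability mentioned here can be written down directly. This sketch uses the standard logit scale; the group's 0-to-800 scale would be a linear rescaling of it, and the names are ours.

```java
// The one-parameter logistic (Rasch) model: the probability of a correct
// response depends only on the gap between ability theta and difficulty b.
public class Rasch {
    /** P(correct) = 1 / (1 + e^-(theta - b)). */
    static double probability(double theta, double b) {
        return 1.0 / (1.0 + Math.exp(-(theta - b)));
    }

    /** Fisher information of an item under 1PL: I = P(1 - P). */
    static double itemInformation(double theta, double b) {
        double p = probability(theta, b);
        return p * (1.0 - p);
    }

    public static void main(String[] args) {
        // Ability equal to difficulty gives a 50% chance of success,
        // and that is also where the item is most informative.
        System.out.println(probability(0, 0));      // 0.5
        System.out.println(itemInformation(0, 0));  // 0.25
    }
}
```

The information term is why CAT presents items near the current estimate: a question whose difficulty matches the examinee's ability tells the algorithm the most.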
To fix the maximum length of the test paper, we assumed our scale runs from 0 to 800. Starting from an initial ability of 400, it takes about 15 questions to reach the top of the scale at 800, and another 15 questions to correctly estimate the ability there, so in total we need about 30 questions for the test to end. On the safe side, we set the maximum test length to 35 and require the standard error to be less than 0.5. Moving on to the initial ability estimate: since we initially have no idea about the examinee's ability, we assume it to be the midpoint of our range, which is 400 in our case. In an ideal situation, the true ability and the estimated ability should fall on a straight line; in our case, the points are clustered around that line, with a maximum error of about minus 10. In the second graph, we look at the number of questions required to end the test. The minimum number of questions is required at the midpoint, and it increases on either side of the scale. To mitigate this problem, we added a second option: we ask the user to estimate his own starting ability. If he guesses his ability correctly, the number of questions required is uniform over the entire range, and the test ends within 15 to 20 questions, a vast improvement over the previous model. But if the user guesses his ability wrongly, the system's performance degrades and more than 40 questions are needed, while our system is designed for a maximum of 35 questions. To mitigate this, we assume that if the user answers the opening questions incorrectly, he has no real idea of his starting ability; the system then restarts with the initial ability of 400, and we can end the test within the required number of questions.
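The two stopping rules just described, a maximum of 35 questions or a standard error below 0.5, can be sketched as follows, using the usual 1PL standard error: one over the square root of the accumulated item information. The 0.5 cutoff is taken at face value from the talk and applied here on the logit scale; the names are illustrative.

```java
// Termination criteria for an adaptive test: a hard length cap plus a
// precision cap based on the standard error of the ability estimate.
public class Termination {
    static final int MAX_QUESTIONS = 35;   // maximum test length
    static final double SE_CUTOFF = 0.5;   // standard-error threshold

    /** 1PL item information at ability theta for an item of difficulty b. */
    static double itemInfo(double theta, double b) {
        double p = 1.0 / (1.0 + Math.exp(-(theta - b)));
        return p * (1.0 - p);
    }

    /** SE(theta) = 1 / sqrt(sum of information over administered items). */
    static double standardError(double theta, double[] difficulties) {
        double info = 0;
        for (double b : difficulties) info += itemInfo(theta, b);
        return 1.0 / Math.sqrt(info);
    }

    /** Stop when either limit is reached. */
    static boolean shouldStop(int asked, double se) {
        return asked >= MAX_QUESTIONS || se < SE_CUTOFF;
    }

    public static void main(String[] args) {
        // Sixteen perfectly matched items (b = theta = 0) drive the
        // standard error down to exactly 0.5.
        System.out.println(standardError(0, new double[16]));  // 0.5
    }
}
```

Because well-matched items contribute the most information, an examinee who starts near his true ability reaches the precision cutoff sooner, which is exactly the effect seen in the simulation graphs.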
Finally, the conclusions we draw from the simulation: first, if the examinee makes a nervous start and makes mistakes at the beginning, he may not be able to reach his correct ability, because the system allows only a fixed maximum number of questions. Second, the selection criterion is that the user should be given a question of appropriate difficulty, neither higher nor lower, because a mismatch does not improve the system's performance. We have seen that our modification of the algorithm is helpful in some cases while it degrades performance in others. Finally, we have also run the simulation for 8 lakh students. These are the challenges we faced in the implementation of our website. We needed a system that is tolerant of connection losses, so we created a back-end session and a log for every user; if a connection is lost, the system can resume from that point. Our website is independent of the back-end database, which was a huge milestone for us. Proposing a modification to the existing algorithm was another difficulty, and creating a test environment for 8 lakh students was a very difficult task for us. Now I move on to our next objective, designing a prototype for standardized testing, which will be explained by Vinod.

Basically, our aim here was to build a fully customized test that takes the user's specifications as input and presents him with questions that are as close as possible to those specifications. The user may be a student who wants to generate a standard test paper on a particular set of topics, or an examiner who wants to generate a standard test paper from a particular set of specifications. Now I move on to the key features. Our modules are database independent; that is, we are not concerned with the implementation of the database.
Next, we provide a fully customized test in which the user can specify the duration of the test, the topics he wants to be tested on, the number of questions, and the difficulty range for each topic. Moreover, we also provide a topic tree that presents a list of the topics in the database from which he can choose. Next, I move on to the demonstration, where I explain the various modules we have used. The first is the specification module, which deals with the front end. Here the user provides specifications such as the duration of the test, the topic selection, the minimum and maximum difficulty, and the number of questions. The topic tree implementation, as you can all see, presents a hierarchy of the topics in the database. He has the flexibility to choose a topic at any level: for example, he can select physics, in which case he will be presented with questions from physics and all its subtopics, or he may select a subtopic of physics, such as electricity, in which case he will be presented with questions specific to electricity and its subtopics. Various validation checks on the input fields are also implemented here; for example, the user may not enter a negative number in the number-of-questions field. The user is free to add as many topics as he desires, specifying for each topic the number of questions and the minimum and maximum difficulty. As soon as he presses the submit button, his specifications are transferred to the next module, the fetching module.
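The topic-tree behaviour described here, where selecting a topic also pulls in all of its subtopics, can be sketched with a small recursive structure. The class and field names are illustrative, not the project's.

```java
// A topic node holds its own questions plus child topics; selecting a node
// gathers questions from its entire subtree.
import java.util.ArrayList;
import java.util.List;

public class TopicTree {
    final String name;
    final List<TopicTree> children = new ArrayList<>();
    final List<String> questions = new ArrayList<>();

    TopicTree(String name) { this.name = name; }

    /** Adds and returns a subtopic node. */
    TopicTree child(String childName) {
        TopicTree c = new TopicTree(childName);
        children.add(c);
        return c;
    }

    /** Questions tagged with this topic plus, recursively, all subtopics. */
    List<String> allQuestions() {
        List<String> out = new ArrayList<>(questions);
        for (TopicTree c : children) out.addAll(c.allQuestions());
        return out;
    }

    public static void main(String[] args) {
        TopicTree physics = new TopicTree("physics");
        TopicTree electricity = physics.child("electricity");
        physics.questions.add("P1");
        electricity.questions.add("E1");
        electricity.questions.add("E2");
        // Selecting physics pulls in electricity's questions too.
        System.out.println(physics.allQuestions());      // [P1, E1, E2]
        System.out.println(electricity.allQuestions());  // [E1, E2]
    }
}
```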
The fetching module is the one that actually interacts with the interface that implements the database. Here we provide a list of suggestions, together with the specifications the user gave and the specifications that have been met. As you can see, he asked for 20 questions on the first topic, but the database has only 10 questions in that difficulty range. This is where our algorithm comes into play: we expand the range by a percentage of the minimum and maximum difficulty, and we are able to provide four more questions from the new range. The remaining six questions from the first topic are then distributed among the remaining topics, so that the full quota of questions is met. If the user is satisfied with these specifications, he can proceed; he is also given a list of suggestions from topics at the same level, with the number of questions available in that difficulty range. If not, he can discard them and change the specifications. Next, he proceeds to the basic test page, where he is presented with the questions that have been fetched. Here the presentation module comes into play. It is here that we implement a timer for the test, which runs according to the duration the user specified at the beginning. The user is allowed to skip any number of questions he desires; he can leave them unattempted. This module also records the user's responses and transfers all this information to the next module, the result display module. As you can all see, we display questions from the topics the user selected, with difficulties in the range he specified.
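The range-expansion step described here can be sketched as follows. The `Pool` interface is a stand-in for the project's database interface, and the expansion percentage and names are assumptions of ours, not the project's actual code.

```java
// When a topic's difficulty range cannot supply the requested number of
// questions, widen the range by a fraction of its width and retry; any
// remaining shortfall is left for redistribution to the other topics.
public class Fetcher {
    /** Stand-in for the database interface: questions available in [lo, hi]. */
    interface Pool {
        int countInRange(double lo, double hi);
    }

    /** Returns how many of `wanted` questions this topic can supply. */
    static int fetch(Pool pool, double lo, double hi, int wanted, double pct) {
        int have = pool.countInRange(lo, hi);
        if (have >= wanted) return wanted;          // original range suffices
        double pad = (hi - lo) * pct;               // expand on both sides
        have = pool.countInRange(lo - pad, hi + pad);
        return Math.min(have, wanted);              // shortfall goes elsewhere
    }

    public static void main(String[] args) {
        // A toy pool: 10 questions in the requested range, 14 once expanded,
        // matching the 20-asked / 10-found / 4-more example in the talk.
        Pool pool = (lo, hi) -> (hi - lo) > 100 ? 14 : 10;
        int got = fetch(pool, 300, 400, 20, 0.2);
        System.out.println("supplied " + got + ", shortfall " + (20 - got));
        // supplied 14, shortfall 6
    }
}
```

The six-question shortfall in the worked example is exactly what the module then spreads across the user's remaining topics.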
The test ends as soon as the timer expires, or the user can manually click the end-test button. Next is the result display module, where we give him the test score and the list of all the questions that were presented, along with the correct response and the user's response for each question. Now I hand over to Rohit to explain the limitations.

Good afternoon, everyone. I am Rohit. I will explain the testing we have done on the software. We built a dummy database to test the software fully: 30 topics arranged hierarchically in a topic-tree implementation, each consisting of a thousand questions. So the dummy database has 30,000 questions, and we have fully tested the software on it. Moving on to the advantages of the standard testing software: as Abhinav already mentioned, there are two kinds of users, the student and the examiner, and each has advantages from his own point of view. For the student, it can help him test his knowledge of a specific topic. Say he wants to test his knowledge of the topic of sound: he can select the topic, select the difficulty range, take the test, and assess his knowledge of that topic. From the examiner's point of view, it can help in setting standard test papers; for example, CBSE board papers can be set by specifying the topics and the difficulty range wanted in the paper, and the paper will be presented to the students. The other advantage of the standard testing software is that we provide a review-and-change-of-specifications option, in which our algorithm tries to meet the user's specifications as closely as possible.
If the user is satisfied with our suggestions, he may proceed to generate the test paper; in any case, a test paper will be generated. If the user is not satisfied, he can change the test specifications according to our suggestions. Moving on to the challenges and limitations. While building the software we faced certain challenges. The major one was coming up with an intelligent algorithm that meets the user's specifications as closely as possible. There may be two kinds of users, one liberal and one strict, who wants the difficulty in a given range only; so we tried to develop an intelligent algorithm that matches the specifications as closely as possible. The other major challenge was developing a database-independent system, one that works with whatever kind of database is used. Everything has its pros and cons, and so does our system. One limitation is that it is not ensured that the user's specifications will be fully satisfied every time; we only try to match them as closely as possible. The second limitation is that we do not keep track of the user: when he logs in later, we have no record of his past activities on our site. Moving on to what we have learned through our summer internship program. First of all, we learned how to work as a team: various people come up with their own new ideas, and we club them together into one intelligent idea. Second, time management was very important for all of us, and we learned how to manage our time successfully. Working in a professional environment was a major experience that we gained from our summer internship program.
We also gained exposure to object-oriented programming concepts. Until now we had only read in books about object orientation, reusability and modular programming; now we have learned how to implement them in real terms. We also gained exposure to new programming languages such as JavaScript and JSP to make our web software come alive. So this was the standard testing software part. Thank you; if you have any queries, our team is here.

There is something that I would like to share with all of you. I don't know whether you have heard the announcement, but yes, I did mention that we have an MOU with edX and we will be using edX for running our first course. Our ambition is to have about a lakh of students in the first offering, and about a million students for one course that we will offer next year. It should be very obvious that with such large numbers of students, conventional testing will not work. One problem with testing is that it has always been used by the educational system for evaluation; it has never been used as an aid to learning. This adaptive testing actually permits, for the first time, the testing process itself to affect learning in a positive way, and that is why I personally regard this effort as very critical. It is not complete in any sense, but I think work of this nature will continue in educational technology research, and going forward this is what would be most useful. Basically, such tools are what is required to create the true Ekalavyas of the future, who can learn on their own using the knowledge content we have created. So, you are describing adaptive testing. There is a corresponding idea of adaptive learning, where if I do not answer the questions properly, I am given additional material, additional explanation and additional examples to study, and then I am asked those questions again. So what would it take to make that part of your own structure?
Sir, we have maintained a system log for every user, so we keep track of whatever a user has not answered correctly, and at the end we suggest to him whatever is available in the database: these activities can teach you this, and we present him with that.

So is it possible that, instead of analyzing these logs at the end, after each test you look at the wrongly answered questions, and in your own database introduce additional tables that direct the student to read some material, solve some problems, and so on? It should be possible.

Yes, sir, we are storing the response after each question.

How complex is your database? How many tables are there?

Sir, we have not implemented the database; the system is independent of the database. We have just created a dummy database to test the system.

I see. So it could be any database at the back.

Yes, sir.

But the method, I mean, ordinarily the method would use JDBC to connect to a database, right?

Yes, sir. We have created an object, and that object fetches the questions through a function, but that part is independent of the website.

So there is a back end.

Yes, we have created a dummy database to test it.

Using what? What software?

MySQL.

No, so that's what I was asking: that means your JDBC calls and so on are complete as far as your methods are concerned.

Yes, sir.

So only that back-end database remains.

Yes, sir. That will not be very complex; we have made some simple tables.

Anyway, let's give them a big hand. Thank you so much.