just an estimate so that I can plan for that activity. We are very eager to start using that. Incidentally, the chips have finally arrived, 2,500 of them, though rather late; we will be building up those prototypes shortly. So, any clue from the Clicker team as to their estimate? We can't, of course, afford to have 10 or 20 programmers working on that, but we would have a small team working full time. So what is the estimate of how long it would take, since you people have worked on this? I understand that you are able to take some independently recorded voice files, break them into packets, and send them across. Is that correct? So what is the estimate, how long will it take? Can we build a prototype within 2 months? Should be possible? 2 months. Okay. Thank you. Next, please.

Good evening, one and all. Proceeding with our next project: our project title is Development of a Multimedia Integration Toolkit (MITK) for eLearning. To start with, what is MITK? MITK is a platform-independent application in which you integrate multimedia content of all kinds to publish an interactive lecture for eLearning. It is an open-source knowledge initiative: all the tools and libraries we use in this project are open source. The reason for this is the target audience, which consists of teaching professionals and students, so the audience is very large. Now, why MITK? If you take any subject, say in an engineering stream, the information for that subject can be in any format: video, audio, or any of several document formats. So an application is required that is compatible with all these formats.
The previous version of MITK had limited features: displaying only imported images, playing a single video at a time, and synchronizing that video with the images. We are enhancing this module by adding features such as a document converter, an interactive timeline, a dual video display, VLC and VLMC integration, a proxy preview, a dictionary, and presentation templates with a slide show.

Our first module is the conversion of documents into image formats. An image viewer is available in JavaFX, but a document viewer is not; you cannot directly view a document using JavaFX. So we use Java to convert every type of document into an image format and integrate that into our JavaFX module. During conversion the quality is retained and there is no loss of information; the size is reduced to approximately 10% of the original. In the data flow, the input is a document compatible with OpenOffice (ODT, ODP, PPT, DOC, TXT, etc.); it is first converted into PDF format using OpenOffice, and then into a sequence of images, each image showing a single page of the PDF.

The next module is the interactive timeline. Here we display all the multimedia content on an interactive timeline, with features such as zoom in and zoom out to expand or compress the timeline. We have also added functionality such as a slider, a separator line at the end of each thumbnail, and a preview of each slide. As you can see, the violet line is the slider, which moves at a speed depending on the zoom level of the timeline. If you press the zoom-in button on the left, it expands the timeline of the video.
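The document-conversion flow described earlier (office document → PDF via OpenOffice → one image per page) can be sketched as command construction for the headless tools involved. This is a sketch under assumptions: it uses the standard `soffice --headless --convert-to` invocation and poppler's `pdftoppm`, which are common choices for this pipeline but are not confirmed as what the team actually called.

```python
def conversion_commands(doc_path, out_dir):
    """Build the two shell commands for the document -> PDF -> page-images
    pipeline. Tool names and flags are assumptions (soffice / pdftoppm),
    not the project's confirmed implementation."""
    stem = doc_path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
    # Step 1: headless office suite converts ODT/ODP/PPT/DOC/TXT to PDF.
    to_pdf = ["soffice", "--headless", "--convert-to", "pdf",
              "--outdir", out_dir, doc_path]
    # Step 2: pdftoppm renders one PNG per PDF page (stem-page-1.png, ...).
    to_images = ["pdftoppm", "-png",
                 f"{out_dir}/{stem}.pdf", f"{out_dir}/{stem}-page"]
    return to_pdf, to_images
```

Each returned list can be passed to `subprocess.run` as-is; keeping the commands as argument lists avoids shell-quoting problems with file names.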
Upon expanding the timeline, the slider's speed changes correspondingly. As you can see, introduction.avi is the title of the video being played, and below it are thumbnails of the slides; these thumbnails can also be short videos. A line separates each pair of thumbnails, and by moving that line you can change the start and end of each video or slide.

The third module is the dual video display and its synchronization. This feature comes in when a teacher wants to explain a particular slide in detail, so it enhances the classroom experience. Its functions: it imports slides and video lectures, and provides a thumbnail view of all the imported slides and videos, so that on clicking each thumbnail a preview of that slide or video is displayed; slide and timing details are maintained in a table. Once the sequence is set, you can preview the slides you have loaded. On the left side is the table listing all the videos and slides you have loaded for the secondary video, which we call the right-pad video. Below that, on the left you can see the thumbnails of the images and on the right the thumbnails of the short videos, and in the center is a preview of the selected image. Once you have set the sequence of all the images, pressing the Set button lets you view the right-pad video, that is, the images in the sequence you have set.
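The timeline behavior described above (zoom-dependent slider speed, and a draggable separator that simultaneously changes one segment's end and the next segment's start) can be sketched as a small model. The class and method names here are illustrative, not the actual MITK classes.

```python
class Timeline:
    """Minimal model of the interactive timeline: ordered (title, start, end)
    segments, a zoom-dependent time-to-pixel mapping, and movable separator
    lines between adjacent thumbnails."""

    def __init__(self, segments, px_per_sec=10.0):
        self.segments = list(segments)   # [(title, start_s, end_s), ...]
        self.zoom = 1.0
        self.px_per_sec = px_per_sec

    def zoom_in(self, factor=2.0):
        # Stretching the timeline makes the slider cross pixels faster
        # for the same playback rate.
        self.zoom *= factor

    def slider_x(self, t_seconds):
        """Pixel position of the slider at playback time t."""
        return t_seconds * self.px_per_sec * self.zoom

    def move_separator(self, i, new_t):
        """Drag the line between segments i and i+1: changes the end of one
        slide and the start of the next, keeping them contiguous."""
        title_a, a0, _ = self.segments[i]
        title_b, _, b1 = self.segments[i + 1]
        new_t = max(a0, min(new_t, b1))  # clamp inside the two segments
        self.segments[i] = (title_a, a0, new_t)
        self.segments[i + 1] = (title_b, new_t, b1)
```

The clamp in `move_separator` is the design point: a separator can never be dragged past its neighbours, so segments stay non-overlapping and gap-free.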
And this is how it looks: on the right side is the main video, and on the left is the right-pad video. At the bottom is a common progress bar; you can seek to any position and the images and the video change correspondingly. To continue with the next part, I will call upon my team member.

I am Gaurav Singh, and this is the application we prepared: keyboard event handling during a slide presentation. We used Python to write the script for this application. There are some things I would like to mention. The application runs in the background, which means that even if focus is not in the Python IDLE, the application still performs its task. We also made it window-specific: when focus goes to an OpenOffice.org Impress or Microsoft PowerPoint window, it starts detecting keys. It detects the hotkeys used during the presentation, that is, the keys used to change slides. Finally, it generates an XML file containing the window name, the slide number, the start time, and the duration of each slide in that window. This is the output: the window name, then the slide number, then the start time and the slide duration. One thing the user should keep in mind while using this application is that the Home key and the Escape key should each be pressed only once throughout the whole presentation: the Home key is pressed to start the timer, when the user should be on slide number one, and the Escape key exits the application. These are the uses of the application: it is part of our main project, the proxy module. The statistics help the user synchronize the slides with the concerned videos or segments of the main video. A future scope is keeping track of a remote user's activity.
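The log described above (window name, slide number, start time, duration per slide, driven by Home / slide-change / Escape keys) can be sketched as a pure function from a recorded key sequence to XML. The element and attribute names are our guess at the schema, not the tool's actual output format, and real key capture (which the team did with a background hook) is out of scope here.

```python
from xml.etree import ElementTree as ET

def events_to_xml(window, events):
    """Convert a list of (key, timestamp_seconds) events into the slide-timing
    XML described in the talk. Home starts the timer on slide 1, Right/Left
    change slides, Escape ends the session. Schema is illustrative."""
    root = ET.Element("presentation", window=window)
    slide, start = 1, None
    for key, t in events:
        if key == "Home":
            start = t                    # timer starts; user is on slide 1
        elif start is None:
            continue                     # ignore keys before Home
        elif key in ("Right", "Left", "Escape"):
            e = ET.SubElement(root, "slide", number=str(slide))
            e.set("start", f"{start:.1f}")
            e.set("duration", f"{t - start:.1f}")
            if key == "Escape":
                break                    # Escape closes the last slide
            slide += 1 if key == "Right" else -1
            start = t
    return ET.tostring(root, encoding="unicode")
```

Because each slide's record is closed by the *next* key press, the durations are exactly the gaps between hotkeys, which is what the synchronization step downstream needs.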
If a remote user is using the system and pressing keys, all the activity, along with the window name, is stored on the system. A third use is detecting the keyboard activity of malware programs: some malware can create a virtual keyboard in the system, and if it uses it, the user can detect this malware in a similar fashion.

This is our ongoing project, the VideoLAN Movie Creator (VLMC). VideoLAN is a project that develops software for playing video and other media formats. We are supposed to write code for the video cutter, that is, to cut video segments from the main video and insert them into another video to make a new project, such as lectures, so that teachers and professors do not need to re-record their voices for a new project. The first version of VLMC was released last month; VLMC is pre-alpha software and is not ready for prime time, with many key features simply missing or buggy. VLMC is cross-platform, non-linear video editing software based on the VLC media player. So we started studying the libraries of the VLC media player. The core of VLC is a library, libvlccore, which manages all the module layers, the codecs, the playlist, and all the low-level functions in VLC; for example, it manages the synchronization between audio and video. On top of libvlccore there is libvlc, which allows external application builders to access all the features of the core. These are the modules used in VLC; modules offer features best suited to a particular environment or file type. What we tried was to extend the functionality of VLC using that library I mentioned, libvlc, and we added some code. We solved many of the errors, but some errors remain to be fixed.
So we are leaving behind our code, its dependencies, and our documentation; I hope that will help the coming interns continue work on VLMC. Now I would like to call Ashwini.

Good evening to all. I am Ashwini, and I will present the proxy preview. After the timeline editing, the recording, and the multimedia editing of the lecture are completed, the proxy preview gives the output of the whole thing. Its features are the media player, the slide viewer, the preview, the slide show animation, the description pane, custom themes, and a search box. As you can see, in the left corner we have the video playing, and in the right corner the slide viewer. Below that we have a description pane, and below the video a preview, which has a search box, a new feature we added. The media player is the tool used to play the video for the lecture being prepared; it has a tooltip attached to the progress bar which gives the description of the slide at any point in the video. The slide viewer displays a slide with any of the animations you select: rotation, fade, display shelf, or translation. Below the slide viewer is a description box which gives extra information about the slide. Next is the preview implementation, which is basically a hierarchical view of the information: the preview has nodes, and under each node are children, which are the slides, with a tooltip attached that gives the image of the slide, the time, the location, and the type. A search box has been added: you type the name you are searching for, and the box directly searches for that image.
Animated slide show: this provides options for the user to view the slides in different ways. Here are three samples: one for rotation, one for translation, and one for the display shelf. This is the output the user finally sees: a video, an image, a description box, a preview, and the theme, which can be changed. We have eight themes and four animations for the slides. Thank you.

Implementation of a dictionary using C++ and Java. Just one second; just one question that I had. When you make a provision for two video clips to be played simultaneously, how do you expect the interaction to work, especially if both clips have audio? Any machine on which you play a lesson is likely to have only one audio jack; in any case, the audio out from a machine is usually channelized on a single channel. Now, if you have two video clips playing inside the created module, how do you expect the user to use it? Or is it expected that one of the video modules will be silent? Is that what is expected? So effectively it will be useful for the following: for example, instead of a slide, writing on a piece of paper, which is actually captured as a video and transmitted; the act of writing, where the voice is not relevant, can be captured like that.

Implementation of a dictionary using C++ and Java. What is the need for a dictionary? If a user in a remote place is viewing a lecture, there is a good chance he might not understand a word, because he lacks the vocabulary. In that case he can use the dictionary, a small box at the top corner of the window, to look up the word. Why C++ and Java? In C++, file handling is easy; in Java, the GUI was pretty good using NetBeans. How efficient is the dictionary tool?
You can type in, say, 400 characters and it gives the answer in less than a second. The database given to us in open source had many errors; one word had two entries where only one was intended. The final product is the one below, which is well categorized. That is the search box where you type a word, and a pop-up window appears giving the meaning of the word. The algorithm is a slight variation of the hashing technique: there are 150,000 words, and searches are reduced to six comparisons, which is about three times faster than a binary search, and you do not have to scan the file for the word either. Once the meaning of the word comes up, you have two options: you can go back to the search box, or open the manual dictionary in case, say, the word "implementing" is not there but you know the word "implement" will have a meaning. If you cannot figure that out, you can manually search for it in the dictionary.

Good evening, everybody. Besides the main project explained so far, we have developed this presentation tool on the JavaFX platform. The main features that set our presentation tool apart from existing presentation tools are: number one, it is platform independent, for the simple reason that the programming languages we used are JavaFX Script and Java; it runs very well on Windows as well as Linux. Number two, it is an open-source application. Number three, the application can open not only projects created with this application but also projects originally created with other presentation tools such as Microsoft PowerPoint. And hence, as point number four states, any document can be opened as a slide here; for example, a .txt document or any other word-processing document can be imported as a slide. Now I would like to call upon Neha; she will explain the functionality of this application.
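The "six comparisons" figure above is consistent with a bucketed scheme: hash the word into enough buckets that each holds at most 64 sorted entries, so a binary search within a bucket costs at most log2(64) = 6 comparisons, versus about 17 for a binary search over all 150,000 words (roughly the claimed 3x). The sketch below is our reading of that "slight variation of hashing", in Python rather than the team's C++/Java; class and method names are illustrative.

```python
from bisect import bisect_left

class HashedDictionary:
    """Bucketed dictionary lookup: hash into small buckets, then binary
    search within the bucket. Illustrative reconstruction, not the
    project's actual data structure."""

    def __init__(self, entries, max_bucket=64):
        # Enough buckets that the average bucket stays under max_bucket.
        n_buckets = max(1, len(entries) // max_bucket + 1)
        self.buckets = [[] for _ in range(n_buckets)]
        for word, meaning in entries.items():
            self.buckets[hash(word) % n_buckets].append((word, meaning))
        for b in self.buckets:
            b.sort()                      # sorted, so bisect works per bucket

    def lookup(self, word):
        b = self.buckets[hash(word) % len(self.buckets)]
        i = bisect_left(b, (word,))       # ~6 comparisons for a 64-entry bucket
        if i < len(b) and b[i][0] == word:
            return b[i][1]
        return None                       # caller falls back to the manual dictionary
```

Returning `None` on a miss mirrors the described fallback: the user is sent to the manual dictionary to look up a related root word like "implement".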
This is the main application. When a user runs our application, you get this main screen. Here we have a menu bar with options such as File, View, Slideshow, and Import. I will start with Import. When you click on Import, you get a browse window where you can select any document from your hard disk, such as a PPT or PDF file or any set of images. After being selected, they are displayed as thumbnails, and we also have a scroll bar here. When you click on any of the thumbnails, the image is enlarged and shown here. We also have three buttons. The first is Delete, used to delete any of the slides from the sequence; the other two are Shift Right and Shift Left, which, as the names imply, are used to shift the slides right or left. After doing all these rearrangements, we can say...

Excuse me, once again a request. Everyone's mind is extremely tired, and even a small amount of noise gets amplified in a hall like this, and there is no opportunity to concentrate and follow what is going on. So please listen carefully; after all, this is material that you and your colleagues have done. Okay, thank you.

After doing all the rearrangements, we can save the sequence just by going to File and clicking Save to our desired location. It is saved as a project on the hard disk, and we can open this project again just by going to File and clicking Open. In File we also have the option Create Slide. Here we have two types of templates: a plain template, in which you are free to design by yourself, and a design template, which has a heading, content, and a place to insert images. You can resize the images just by dragging them, or resize them manually. Now, going back to the main screen, we come to View.
In View, we have some themes just to change the background and the button styles. Then, under Slide Show, we have five types of animation: regular, heading, rotating, display shelf, and photo flip. That is all about our application. Now she will tell us more.

What we have described is the functionality developed so far; I will talk about the future scope. There are some aspects which, if worked upon, can greatly enhance this application. For example, right now there is a limitation on the number of slides you can import; it depends on the size of the documents you are importing as slides, as well as the configuration of the system in question. If this is improved, it will be very useful. Secondly, editing of slides: right now, whichever slide you create is saved as an image and has to be explicitly imported by the user as a slide to be used in a slide show, and once a slide is created, it cannot be edited. One possible solution to this problem is to save the slides as JavaFX objects. As the third point says, right now whatever saving we do in this application is done with an XML file, using Java file handling; JavaFX file handling currently does not give us a way to save and read its objects. If, in future, JavaFX provides such facilities, it would greatly improve this application. Other than this, opening multiple projects at a time, GUI improvements, and other feature additions, especially in creating slides, are the other areas that can be worked upon. Any queries are most welcome.

Taking Microsoft PowerPoint: if you are working on one project and you open a new project in another window, you can make another project.
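The XML-based project save mentioned above, an ordered slide sequence written out with ordinary file handling, could look roughly like the sketch below. The schema (a `project` element with indexed `slide` children) is an assumption for illustration; the tool's real file format was not shown.

```python
from xml.etree import ElementTree as ET

def save_project(slide_paths, path=None):
    """Serialize the slide sequence as XML, the way the tool's Save
    option is described. Schema is illustrative."""
    root = ET.Element("project")
    for i, src in enumerate(slide_paths, 1):
        # Store the position explicitly so reordering survives round trips.
        ET.SubElement(root, "slide", index=str(i), src=src)
    xml = ET.tostring(root, encoding="unicode")
    if path:
        with open(path, "w") as f:
            f.write(xml)
    return xml

def load_project(xml):
    """Read the sequence back, ordered by the stored index."""
    root = ET.fromstring(xml)
    slides = sorted(root, key=lambda e: int(e.get("index")))
    return [e.get("src") for e in slides]
```

Storing only image paths is also why, as noted above, slides cannot be edited after creation: the XML records the rendered image, not the slide's components.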
Right now in our application you can only work on one project at a time: you import the slides and make one presentation; you cannot simultaneously open another project in another window. What would be the advantage of working simultaneously on two projects? That is just extra functionality that could be added. Yes, but where would I use it? The only case I can think of is when I want to select some portions of one project and import them into another, or get some resources, whether video resources or slide resources, from one project into another.

And that brings me to the question I had in mind. While you have done excellent work in extending the functionality of the tool, I still do not see any backend database. Consequently, while your tool will permit people to, say, edit their one-hour lecture and prepare things appropriately, suppose I have 5,000 hours of lectures, something like 100,000 slides, and a large number of small video snippets, out of which I would like to create groups of things that belong to the same concept being explained anywhere. In an educational setting, for example, and I think I have mentioned this before: if I have some lectures by Professor Sudarshan on databases, there may be components where he describes the design of a relational schema or a new algorithm for implementing, say, transaction control. There could be another set of lectures on databases by another expert somewhere else; say Dr. Sheshadri in Bangalore has given some talks, and I have all of these in my repository. Now, if a learner wishes to find out which learning resources are available on transaction processing, I should be able to extract all the component snippets. So the unit is not necessarily a one-hour lecture; the unit of breaking up could even be a 10-minute segment.
Now, these kinds of cross-linkages ideally should be implemented in an object-oriented database. But object-oriented databases have not worked out well, so people use conventional relational databases and extend them with whatever object properties are needed for the traversal, etc. But your team has not thought about storing your work in the form of an organized database, right? You are still storing XML files. Okay.

Oh, Parag is here. Sorry. Sir, I would like to point out that this entire presentation is being displayed using a JavaFX application. This is not a PowerPoint presentation; these slides are being shown using the application they have developed. Oh, this is a JavaFX presentation? Good that you pointed that out. Wonderful. The last question I have in mind is: how far away are we from releasing this formally in open source? Because a release should mean that we have proper documentation, user documentation, technical documentation, and the source code organized in the style of SourceForge. We would like to release it as early as possible, because I think, in spite of the lacunae, there is enough functionality that people can start using, right? Yes, sir. But unfortunately, I would like to apologize: the presentation being shown right now is actually on Microsoft PowerPoint. The presentation tool we developed is there on the laptop, but there is some technical problem with the projector, so it is not being projected. I can show you a demonstration of the presentation tool on the laptop. That's okay, I believe you; that's not a problem.

But this is a generic request to all of you; at least most of you are here tomorrow. Usually I like to go back and collect information even about the summer interns from last year who worked on Clicker. Eventually, as I mentioned earlier, we will be releasing all of it in open source.
And apart from the documentation and other things, we would like to acknowledge the people who have worked on and contributed to this project. Somebody may have contributed a little, somebody more. Traditionally, we include a brief description of the people, who they are, et cetera. But since we are in a digital age, if you can just sit in front of a webcam, put in a photograph of yourself, and write a few lines about yourself, and if each group prepares as a document the description of its work that you were asked for anyway, then we can incorporate that document whenever we release the material. So if you could use some time tomorrow to do that, it would be very nice, because our documentation, particularly in acknowledging people's work, has been very poor in India, and we at IIT are also not very good at it. So let's start from here.

Good evening, everyone. We just saw the commendable work the JavaFX and Clicker software teams have done. Now, we are the extensible educational systems team, or rather the Blender group, as we call it, and we would like to show what we have contributed to the Blender software. You all know that Blender is a 3D graphics application used for many purposes: UV unwrapping, texturing, and so on, but its most important applications are animation, making animated movies, and of course games. We are here to show the educational animations we have implemented, with the GUI our group has made. Blender works with Python, Python being a very good, high-level programming language to work with. Blender and Python come together in documented form in the Blender Python API, whose modules and packages provide many functions used for building additional tools. We have made a GUI for users, for non-technical users basically.
This GUI contains a password window, which takes the password for the MySQL application, and a main window, which contains the list of all blend files present on the current system. There is also a search engine, a special feature to extract whichever version of a blend file we currently need. We have the objects window, which shows all the declared modifiable objects of the blend file and all the attributes of those objects, and we have an XML file import facility to import all the attributes of the objects. We store all the object attribute values in an XML file, because Python has an XML parser with start element, end element, and character data handler functions; these are used for the start tag, the end tag, and for collecting the data. The data is converted into a list, which we use to push the values back into the blend file. We used the Blender API documentation to re-render the whole blend file according to the user's modified attributes; for this purpose we used some of the IPO modules and the render modules already available in the Blender API. The best part is that we have a provision to add a logic file: if each animation has a unique logic file, we can include it in this system and coordinate with it, so it can work for each unique animation as well. One more important feature: we apply the changes over a set of frames. For an animation, the user gives a start and an end frame, and the changes are reflected across all the existing frames from start to end; and this range can be rendered as well.
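The expat-style parsing described above, start element, end element, and character data handlers collecting attribute values into a list, can be sketched with Python's `xml.parsers.expat` directly. The element names (`object`, attribute tags like `locX`) are our guess at the file layout, purely for illustration.

```python
import xml.parsers.expat

def parse_attributes(xml_text):
    """Collect (object_name, attribute, value) triples from the attributes
    XML using the three expat handlers mentioned in the talk. Element
    names are illustrative."""
    rows = []
    ctx = {"obj": None, "attr": None, "buf": ""}
    p = xml.parsers.expat.ParserCreate()

    def start(name, attrs):              # StartElementHandler: open tags
        if name == "object":
            ctx["obj"] = attrs.get("name")
        elif ctx["obj"] is not None:
            ctx["attr"], ctx["buf"] = name, ""

    def data(text):                      # CharacterDataHandler: value text
        if ctx["attr"] is not None:
            ctx["buf"] += text

    def end(name):                       # EndElementHandler: close tags
        if name == ctx["attr"]:
            rows.append((ctx["obj"], name, ctx["buf"].strip()))
            ctx["attr"] = None
        elif name == "object":
            ctx["obj"] = None

    p.StartElementHandler = start
    p.CharacterDataHandler = data
    p.EndElementHandler = end
    p.Parse(xml_text, True)
    return rows
```

The resulting list of triples is exactly the "list form" the talk mentions, ready to be applied to the blend file's objects.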
In a sense, the basic problem with the Blender application right now is that there is no button that can directly render a blend file to an ordinary video format. So here we have an application where the frame range can be specified, the start and end frame of each blend file; if you want one minute of the animation, you give the corresponding frames and it directly renders the AVI file for that blend file. So we have a video version of the blend file which can be viewed anywhere with any media player. The last part is the saving of the modified blend file, for database purposes. We save the blend file in a versioned format: if the name of our file is ABC.blend, the saved modified version would be ABC-1.blend. We did this because we want version control: if someone wants to access or change a blend file, he can see whether that change has already been implemented in one of the versions; if not, he can create a new version of the blend file. This reduces the work of the person who wants to access it. That is what we have done, in brief, and now we would like to show you our project working. I call upon Anish. Thanks.

Basically, the aim of our project was to allow the user to change an animation dynamically as he watches it. Suppose he has a math animation showing that 3 plus 4 equals 7, but now he wants an animation showing that 4 plus 4 equals 8. Traditionally, he would have to go to the animator and ask for a new animation. What we have done is give the user a GUI where he can change the values 3 and 4 to 4 and 4, and the result shown changes accordingly. But there should be a limit and a control over what the user can give as input.
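The frame-range rendering and versioned-save steps above can be sketched as plain helper functions. The render command uses Blender's standard background flags (`-b` background, `-s`/`-e` frame range, `-o` output path, `-a` render animation); whether the team invoked Blender this way or through the Python API is not stated, so treat this as one plausible route.

```python
def render_command(blend_path, start_frame, end_frame, out_stem):
    """Build a headless Blender invocation that renders the given frame
    range of a blend file to video files named from out_stem."""
    # Flag order matters: -s/-e/-o must appear before -a.
    return ["blender", "-b", blend_path,
            "-s", str(start_frame), "-e", str(end_frame),
            "-o", out_stem, "-a"]

def next_version(filename):
    """Versioned-save naming from the talk: ABC.blend -> ABC-1.blend,
    ABC-1.blend -> ABC-2.blend."""
    stem, ext = filename.rsplit(".", 1)
    base, sep, n = stem.rpartition("-")
    if sep and n.isdigit():
        return f"{base}-{int(n) + 1}.{ext}"
    return f"{stem}-1.{ext}"
```

Keeping every modification as a new `-N` file is what lets a later user check whether a change already exists in some version before creating another one.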
So what we have done is make a GUI for the animator, so that he can put conditions and limits on the range of values the user can enter in the user-facing GUI. Let us show the GUI we have made for the animator. This is the animation showing that 3 plus 4 equals 7. First we input the blend file description, what the blend file is all about: here, that it is a math-description blend file. Move it towards the left. Now we select the objects which the user will be allowed to change when he receives the GUI: we select 3, 4, and plus. That is not 7; shift to the last frame, and now we also select 7. Click Select; it shows all the objects, the four objects we have selected. Now we try to change the parameters of each object. Suppose we click the first object, Font.004. We now have options to set the range for each attribute of that object: the location in the x, y, and z directions, rotation, size, etc. If it is a text object, we have the option to say whether the text of that object can be changed or not; to allow the user to change the text, we click Activate, and when the XML file is created, a text-changeable tag is added to it for that object. Now we set the range of location x: the minimum value is set as 1 and the maximum as 3. The current value is about 2.5, but the minimum can be 1 and the maximum 3. We can do the same for all attributes. Now we save the frame: all these changes were made in a particular frame, so we first save the frame changes with Save Frame. Then we can change the frame and make changes in that frame as well.
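The animator's constraint step above, a min/max range per attribute plus a text-changeable flag, and the enforcement of those limits on the user's side, can be sketched as follows. The XML tag and attribute names are our guess at the generated schema, for illustration only.

```python
from xml.etree import ElementTree as ET

def make_range_xml(obj_name, ranges, text_changeable=False):
    """Emit the per-object constraint XML the animator GUI is described as
    generating: one range element per attribute, plus a text-changeable
    tag when Activate is clicked. Schema is illustrative."""
    obj = ET.Element("object", name=obj_name)
    for attr, (lo, hi) in ranges.items():
        ET.SubElement(obj, "range", attr=attr, min=str(lo), max=str(hi))
    if text_changeable:
        ET.SubElement(obj, "text_changeable")
    return ET.tostring(obj, encoding="unicode")

def clamp_input(value, lo, hi):
    """Enforce the animator's limits on the value the end user enters,
    e.g. keeping location x between 1 and 3 in the demo."""
    return max(lo, min(value, hi))
```

This split is the design point of the demo: the animator authors the constraints once, and the user-facing GUI only ever applies values after clamping them into the allowed range.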