So, basically I would like to thank Pathak Sir for giving us the opportunity to develop this project; we learned a lot about development, work ethic, and working in a team. I would like to introduce my teammates: Pooja, Archana, Sonu, Prathmesh, Sudhanshu, Prathik and Prashant.

Our project was to design a fingerprint optical assembly for scanning. Currently, the Aakash tablet is used for Aadhaar authentication: a fingerprint is taken, and using that fingerprint the system can tell whether the person's Aadhaar ID matches or not. At present this is done by connecting a Futronic fingerprint device to the Aakash tablet through an OTG cable and scanning the fingerprint with that device. In this project we designed our own optical assembly, modified some software and developed some of our own, and now we scan the fingerprint on our own optical assembly and use it for the authentication purpose. The previous assembly cost something like 4000 rupees; our current assembly costs approximately 180 rupees, though with some further modifications it will be a little more costly than it currently is. Now I would like Sudhanshu to continue.

Hello, I am Sudhanshu Verma from NIT Nagpur, and I worked on the optical assembly of this project. We are using the Aakash tablet's own camera to capture the image for authentication of Aadhaar card users. This is our optical assembly, and it consists of three parts. The first part is the clamp.
The clamp is used to hold the optical assembly steady on the tablet, and it also acts as a support for the spacer. The spacer maintains a definite distance between the fingerprint impression and the camera of the Aakash tablet. The third part is the optical part, in which we have used acrylic with LEDs on its sides, placed so that they illuminate the acrylic uniformly through total internal reflection. We used black acrylic to cover the whole setup so that no outside light interferes with the fingerprint impression, a transparent acrylic sheet for taking the fingerprint impression, and a 4.5-volt power source to drive the LEDs.

Next is the FTIR principle, which is how we get the difference between the ridges and valleys of our fingers. FTIR stands for frustrated total internal reflection: the light travels through the whole of the acrylic by total internal reflection, so the whole sheet is illuminated. Wherever a ridge touches the acrylic, the total internal reflection is frustrated and the light gets absorbed, and hence we are able to capture the ridge there; the valleys sit a little above the acrylic, in the air, so the light there gets scattered and we get a white patch. That is it, and now Pooja will continue.

Hello everyone, I am Pooja Dev from NIT Nagpur.
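As a small aside, the total-internal-reflection condition behind the FTIR setup described above can be checked with Snell's law. This is only an illustrative sketch: the refractive index of 1.49 is a typical textbook value for acrylic (PMMA), not a value measured in the project.

```python
import math

# Light injected by the edge LEDs stays trapped inside the acrylic sheet
# as long as it strikes the acrylic-air surface at an angle (from the
# normal) greater than the critical angle given by Snell's law:
#   sin(theta_c) = n_air / n_acrylic
n_acrylic = 1.49  # assumed typical value for PMMA, not a measured one
n_air = 1.0

critical_angle = math.degrees(math.asin(n_air / n_acrylic))
print(f"critical angle ~ {critical_angle:.1f} degrees")  # ~42 degrees
```

Rays steeper than this stay trapped and light the sheet; a finger ridge touching the surface locally defeats the condition, which is exactly the frustration that makes the ridge visible to the camera.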
After the image is captured, it goes through a series of processing steps. Initially, as my friend Sudhanshu explained, the image gets captured; after that it is grayscaled, which converts the RGB image into a gray image. After that the image is cropped, so that the background is removed and only the fingerprint gets processed. After that the image is resized to get a clearer image. Then adaptive histogram equalization is done, in which the contrast is spread out so that the clarity of the image is increased. After that, sharpening is done; image sharpening basically brings out the ridges in the fingerprint. After sharpening, thresholding is done: the grayscaled image gets turned into a binarized image, an image containing only white pixels and black pixels. After that, thinning is done, so that the thick ridges get reduced to one-pixel thickness. Finally, the enhanced binarized image is sent over to the server for authentication. That is the basic workflow.

Live finger detection is a technique that detects whether the finger belongs to a live person or is a spoof: a fingerprint scanner can be fooled by using a gelatin fingerprint or a silicone fingerprint. This problem can be solved either in hardware or in software. My friend Prathamesh implemented live finger detection using a software approach. It makes use of pore detection and perspiration: when you place the finger on the scanner, perspiration changes the grayscale of the image, and this change in grayscale determines whether the finger is of a live person or not. Unfortunately it requires a very high-resolution camera, and the camera of the Aakash is proving to be a bit of a problem, so we have been unable to implement it in our model. Now I will ask Archana to continue.
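The first step of the pipeline described above, grayscaling, can be sketched as follows. The project used OpenCV on the tablet; this pure-Python version with the standard ITU-R BT.601 luma weights is only illustrative.

```python
# Convert an RGB frame to grayscale using the standard BT.601 luma
# weights (0.299 R + 0.587 G + 0.114 B), the same weighted sum that
# common image libraries apply by default.
def rgb_to_gray(image):
    """image: list of rows, each row a list of (r, g, b) tuples in 0-255."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in image
    ]

tiny = [[(255, 0, 0), (0, 255, 0)],
        [(0, 0, 255), (255, 255, 255)]]
print(rgb_to_gray(tiny))  # [[76, 150], [29, 255]]
```

Every later stage (equalization, sharpening, thresholding, thinning) then operates on this single-channel image.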
My name is Archana and I am going to talk about the next steps in our algorithm. First we do adaptive histogram equalization. In this step you increase the contrast of the image, so that you can differentiate between the ridges and the valleys more clearly. What is done is you take the histogram of the image and spread out the values, so that the high-intensity values are made even higher and the lower ones even lower. The basic algorithm is: first compute the histogram of the image by counting the frequency of each intensity value; then calculate the CDF, the cumulative distribution function, of the histogram. That CDF is the transformation function used to map the input pixels to the output pixels. This is the image we took from the Aakash tablet before the contrast was increased; you can see that its histogram is not very equalized and has peaks. So we stretch out the histogram so that the contrast increases.

After histogram equalization we do image thresholding. Thresholding is used to convert a grayscale image into a black-and-white binary image. The algorithm we followed is Otsu's algorithm. In this, first you once again make the histogram, from which you can find the probability of each intensity level. Then, to find the threshold, for each candidate intensity value you compute the weight, the mean, and the variance of the two classes it would create, and from those the within-class variance; whichever intensity value gives the minimum within-class variance is chosen as the threshold.
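The Otsu search just described, trying every threshold and keeping the one that minimises within-class variance, can be sketched as follows. This is a minimal illustrative version, not the project's actual code, which used OpenCV.

```python
# Minimal Otsu's method: for each candidate threshold t, split the
# histogram into two classes, compute each class's weight, mean and
# variance, and keep the t with the smallest within-class variance.
def otsu_threshold(pixels):
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, float("inf")
    for t in range(1, 256):
        w0 = sum(hist[:t])          # weight (pixel count) of class 0
        w1 = total - w0             # weight of class 1
        if w0 == 0 or w1 == 0:
            continue
        mean0 = sum(i * hist[i] for i in range(t)) / w0
        mean1 = sum(i * hist[i] for i in range(t, 256)) / w1
        var0 = sum(hist[i] * (i - mean0) ** 2 for i in range(t)) / w0
        var1 = sum(hist[i] * (i - mean1) ** 2 for i in range(t, 256)) / w1
        within = (w0 * var0 + w1 * var1) / total
        if within < best_var:
            best_var, best_t = within, t
    return best_t

# Two well-separated intensity clusters: the threshold lands between them.
sample = [10, 12, 11, 13, 200, 202, 201, 199]
print(otsu_threshold(sample))
```

For a fingerprint this puts the dark ridge pixels in one class and the bright background in the other, which is what produces the clean binary image shown in the next slide.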
When we applied Otsu's algorithm: before thresholding, this was our grayscale image, and you can see that the values on the histogram vary across the grayscale range; after applying thresholding we get a binarized black-and-white image whose values are only pure black and pure white. Sonu is going to explain the next part.

Good afternoon everybody. I will be talking about image sharpening. When we sharpen an image, we enhance its details: details that were not visible to the eye before become clear after sharpening. Features like hair strands or facial features, for example, become much clearer. When you sharpen a fingerprint, the ridges become clearer, so you can identify the ridges in the fingerprint. The algorithm we used is Laplacian-kernel-based convolution; the Laplacian kernel is a standard matrix used for sharpening. First we take the input image, then we apply the convolution to it, and we get the sharpened image. Here we can see that before sharpening the ridges are not very clear in the fingerprint, but after sharpening you can easily identify them. Also, in the histogram we can see that after sharpening the histogram is a little more spread out, meaning more pixels carry the details of the image.

Next I will talk about image thinning. In image thinning we reduce the thickness of the strokes in the image by reducing the number of pixels. This is done because a thinned image occupies less memory, and it is easier to process because it contains only a few pixels. When we thin the fingerprint, we reduce the thickness of the ridges to one-pixel thickness.
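The Laplacian-kernel sharpening described above can be sketched with a 3x3 convolution. The project used OpenCV's filtering on the tablet; this pure-Python version, using the common "identity minus Laplacian" sharpening kernel, is only illustrative.

```python
# Sharpening by convolution with a standard 3x3 kernel: the centre weight
# of 5 keeps the original pixel plus its (negated) Laplacian, boosting
# edges such as fingerprint ridges.  Results are clamped to 0-255.
KERNEL = [
    [ 0, -1,  0],
    [-1,  5, -1],
    [ 0, -1,  0],
]

def sharpen(gray):
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]          # border pixels left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(
                KERNEL[dy + 1][dx + 1] * gray[y + dy][x + dx]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
            )
            out[y][x] = max(0, min(255, acc))
    return out

flat = [[100] * 3 for _ in range(3)]
print(sharpen(flat)[1][1])  # flat regions are unchanged: 100
```

Because the kernel weights sum to 1, uniform areas pass through untouched while any local intensity jump, such as a ridge edge, is exaggerated.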
Ridges are thinned to one-pixel thickness because it is easier to extract details like minutiae points from a thinned ridge, and these minutiae points are used for fingerprint recognition, matching and so on. In the picture we can see the original input image, and after thinning we get one-pixel-thick ridges. Now I will call Pooja to talk about the future scope.

Firstly, the optical assembly can be enhanced by using infrared LEDs or surface-mount (SMD) LEDs. Secondly, techniques like polarizing filters and macro lenses can be used to obtain a better image of the fingerprint. Apart from that, a problem we faced was that we could not send the image to the server because we did not have access, so authentication using the enhanced image could not be done; that can be implemented in future.

The problems we faced while building this assembly were, firstly, finding the distance at which the image is focused and clear; for that, the viewing angle of the camera was calculated and the focal length was determined through various experiments. The second problem was that we initially planned to use a glass sheet for capturing the image; however, glass has a green tint which affects the quality of the image, so we replaced the glass with an acrylic sheet. The third was illumination: the light was not being distributed properly in the acrylic sheet. To solve this, the number of LEDs was varied and an optimum number, four, was found. The fourth problem was deciding the workflow of the image-enhancement process. As I explained earlier, the image goes through a number of processing algorithms; we first prototyped in Scilab, tried different permutations, decided which ordering gives the best result, and finalized the process as shown. The fifth was implementing it on the Aakash tablet.
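The first problem above, finding the spacer height at which the whole fingerprint fits in the frame, comes down to simple viewing-angle geometry. The numbers below (a 50-degree viewing angle and a 30 mm finger width) are illustrative assumptions, not the project's measured values.

```python
import math

# Given the camera's viewing angle, the minimum distance at which an
# object of a given width still fits inside the frame follows from
# right-triangle geometry:
#   distance = (object_width / 2) / tan(viewing_angle / 2)
viewing_angle_deg = 50.0   # assumed viewing angle, for illustration only
finger_width_mm = 30.0     # assumed fingerprint area width

half_angle = math.radians(viewing_angle_deg / 2)
min_distance_mm = (finger_width_mm / 2) / math.tan(half_angle)
print(f"minimum spacer height ~ {min_distance_mm:.1f} mm")
```

In practice the spacer must also satisfy the camera's minimum focus distance, which is why the team had to determine the final height experimentally rather than from geometry alone.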
For that we used OpenCV. OpenCV is open-source software and it has a dedicated platform for Android, so we developed the code using OpenCV and then finally implemented it on the Aakash tablet.

The things we learned in this project were, basically: image processing using OpenCV; secondly, using Scilab, which as I mentioned earlier is also open-source software; thirdly, this was the first time we were studying image processing, and we studied several algorithms to do so; fourthly, Android application development; fifthly, creating a hardware assembly, where, as I explained before, certain problems were faced and we learned to overcome them; and finally, how to document a project: we made the SRS, the SDD, and the project report.

Apart from the main project we had a side project, in which some Android applications were made. The first is an Aakash dictionary. The second is an application that provides formulas in algebra and geometry. The third gives the history of India. The fourth plots graphs. The fifth is Incredible India, which gives information about the various states of India. The sixth is a periodic table, giving information about the various elements. The seventh is consumer protection, covering the various rules for consumer protection. The eighth is salt analysis.

On a concluding note, I would like to thank Pathak sir for providing us with this wonderful opportunity to work on Aadhaar authentication. I would also like to thank Karman Aages sir for guiding us throughout the process, and Prakash sir and Nare sir for helping us build the optical assembly. I would also like to thank our mentors who guided us through all the sticky spots, and last but not least, all my team members for contributing so much to the project.

Finally, we would like to show a demonstration of our application. The first image is a reference image of a finger that we took with a 3-megapixel camera.
It clearly shows a better result than the image we are going to take from the tablet's VGA camera. We just have to scan; we have to show the demo. This is the result from the VGA camera on the Aakash tablet, processed using the different techniques: thinning, thresholding, and so on.

One question: have you actually tried this application against the Aadhaar server? No, sir. Why? But I had specifically said... where is Rajesh? Who are the mentors? They had a problem, actually. The idea was that even if they get a rejection, the fact that an end-to-end application has been built can be proven only after you connect to Aadhaar. So why was that not done? They approached, but they could not get the links properly. Sir, we were not able to contact the person who was at the testing server. In the affordable solutions lab? Anybody from ASL here? Sir, we met Rajesh sir, but he was not able to make contact. We had to open up the test server to get the data, but the person who had opened it up earlier for testing was not available. Oh, at the other end? Yes, at the other end. So we went ahead directly and tried finding the different links we wanted to access.

Because even if this camera did not work, you have made the same application run on another Android tablet, right? And you have tested that there you get much better clarity. Now, do you have a suggestion on the specification of the camera resolution? Camera resolution is important, but what is more important is that the finger should be focused upon. Focused upon? Focused upon. For doing that we had ordered some lenses, but by the time they arrived it was too late for us to implement them. The lenses? Yes, we had ordered some lenses. So to whom have you transferred this technology so that the work can be continued further? We have mentioned it in the report for further work. The report is with the person here who has read your report; if he has not read it by now, it is unlikely that he will read it over the next two months.
What about your software engineers? Will that person be able to carry this forward? I had personally promised Nandan that we will make sure it works. They are currently doing it, as I said, using an external device, and we are very keen to use this instead. I wanted a clearer recommendation from you regarding both the focal requirement and the camera resolution. So let me ask the question the other way round: are you sure that, using those additional lenses that have not yet been obtained, the present resolution of the Aakash camera will be adequate? I think it would be adequate; the VGA camera should work. That is the problem: you have to demonstrate whatever you claim. Now, what is the camera with which you have successfully got a good image, even without extra optics? It is a Nokia 5530, a 3.2-megapixel camera. 3.2 megapixel? Yes sir. That is a lot of pixels; that camera's cost may be equal to the cost of the Aakash tablet itself, or even more than that. You were supposed to build an affordable solution: a poor man's solution, not a rich man's solution. That is where the work on the lenses comes into place.

We can regard it this way: a lot of the preliminary work has been achieved. Yes. The dimensions of the attachments and so on have been worked out. The only part that is left is the work with the optics. Yes sir. So, Rajesh, will you please take the help of Prashant Prakash Naidya and take this forward? What is the time frame? These people are taking two months. One and a half months. You will be the responsible person. One and a half months; so that is 15th August, then. Slightly less than one and a half months. I would like to publicly announce on Independence Day that an affordable solution exists for all of them. Sure. Don't miss Independence Day. Which Independence Day? What has been the cost of the components for the attachment?
The current assembly we have designed costs something like 180 rupees. 180? Yes. But with those additional lenses? For the lenses we ordered, we had to order a whole camera just to get the lens out, and that camera cost 600 rupees. But manufacturing just those simple lenses would take at most 200 rupees, so the total would be 380 or 400 rupees, something like that. That would be ten times less than the existing solution. So 400, still three figures. Currently it is 180; adding the cost of the lens at 200, it would be 380. 380. 380. I had suggested that it should be between 250 and 400 rupees; do you think that is possible? Yes, it is possible.

There might be a lot of doubts, as you can see, but that is natural. Let me tell you, the task given to them was not an easy one. They had to actually study basic optics, which all of you would have studied at some time or the other, but I am sure, like them, you would have forgotten most of it. What is total internal reflection? What is refraction? All of you are very clear on those concepts, meaning you all remember the words; that is not good enough. So, thank you very much; let's give them a big hand.

I would like to take a two-minute break to introduce two very important people. One of them is my colleague, whom you might have seen whenever he has made an appearance here: Professor Kannan Moudgalya, the coordinator of all national mission projects here. He is single-handedly responsible for convincing me to take up the Aakash project when there were some problems, and of course the Talk to a Teacher programme was conceived jointly by us, under which we trained a large number of teachers. Both these projects are considered fairly successful, and that is the reason why I am given a free hand to get good students from all over the country to work like this.
The other person is almost the atma, the soul, of the Aakash project in the national mission; actually there are two such souls, and one of them is not here. That person is Mr. Sinha, who was the then joint secretary and is now a secretary in the Bihar state government; he dreamt of both the mission and the project. The person who has worked shoulder to shoulder with him right from the inception of the mission and of all these projects is Professor Pradeep Verma. Apart from handling these projects nationally, he also acts as our direct advisor, and very fortunately, whenever I request him to come over here and advise us and examine things, he has always come. So thank you, Professor Verma. Thank you, Professor Kannan.