Good evening, everyone. I am Sagar Satpati from NIT Rourkela, here with my teammates: Anchal Singh from NIT Hamirpur, Animesh from NIT Uttarakhand, and Soumya from NIT Rourkela.

A brief outline of the contents: an overview, the software and hardware specifications, and then the documentation details.

We have built a 3D scanner. As we all know, a 3D scanner is used to analyze a real-world 3D object and capture its data so that we can produce a digitized version of it. Coming to the technologies involved, we focused on a few objectives: the hardware has to be easily available; construction of 3D surface points from 2D images; use of image processing; a user-friendly software interface; and generation of a 3D file that can be used in any third-party software.

Why? Have you already scanned some of these?
We have already scanned some of these objects, sir.
No, you are going to scan.
Sir, for that we need to switch to Linux; the scanning runs on Linux.
So scan in Linux and show it to me.
Sir, we have a recorded video of it that we can show.

Before we give the software specs, let us quickly go through the technology used. We scanned this container. The second step was to get the useful contours from the scanned image, for which we applied some image-processing techniques. From the raw image we extracted the data and stored the coordinates in a TXT file like this one: these are the X coordinates and these are the Y coordinates. Sir, this is the final image. We are not claiming that it is complete; it is not complete yet, and we are still facing problems in the conversion algorithm, which has to be developed further.

Sir, these are the problems we are currently facing. This is the object we showed you, and this is the scanning process. For real-time scanning we have to switch over to Linux, so we recorded a video to show you.
Sir, shall we show it now?
After the presentation.
Okay, sir, we will show you the demonstration then. Sir, we are not claiming that the final image will look exactly like this.
Don't claim anything.
Okay, it will produce something like what we showed you.
Yes, we will see.
Okay, sir. Sir, meanwhile, there is something else we want to show you.
No, I don't want to see anything beyond the project.
Sir, we are facing this problem: when the object has a 90-degree edge, we have trouble capturing it. Covering such edges is something we want to implement in future.
It's fine; I want to see what you do.
Okay, sir, we will show you.
It will be like a cylinder; it will not look like an inhaler.
Yes, sir, it may be.

To capture a 360-degree view of the object, we constructed a platform, placed the object on it, and controlled it with a stepper motor. The code sends serial data to the microcontroller, and the camera captures different views of the object in steps. The stepper motor we used takes 48 steps to cover 360 degrees, so we get 48 frames for a full 360-degree view; a sketch of this capture loop follows below. The image on the left shows a frame as the camera captures it, but we do not require every part of the frame, only the outline of the laser line.
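A minimal sketch of such a capture loop, assuming a hypothetical one-byte serial command ('s') that advances the stepper one step, a pyserial connection on /dev/ttyACM0, and an OpenCV webcam at index 0 (the actual command protocol, port, and timings are not specified in the talk):

    import os
    import time
    import cv2
    import serial

    STEPS = 48  # 48 stepper steps cover 360 degrees (7.5 degrees per step)

    arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1)  # assumed port and baud rate
    camera = cv2.VideoCapture(0)                              # assumed webcam index
    time.sleep(2)  # the Arduino resets when the serial port opens; give it time
    os.makedirs('frames', exist_ok=True)

    for step in range(STEPS):
        arduino.write(b's')        # hypothetical command: rotate the platform one step
        time.sleep(0.5)            # let the platform settle before capturing
        ok, frame = camera.read()  # grab this view of the object
        if ok:
            cv2.imwrite('frames/frame_%02d.png' % step, frame)

    camera.release()
    arduino.close()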
So, after some image processing, we were able to extract the red component and generate the image shown on the right; a sketch of this extraction appears at the end of this part. We store 48 such images on the local disk. Along with them, we also store the coordinates of the points on the contour of the white line, which we use later. This is an example TXT file: the first row holds the X coordinates and the second row the Y coordinates.

Next is the main part of our project, the conversion algorithm; a sketch of it is also given below. Up to this point we have the X coordinate, the Y coordinate, and the frame in which the point is present; say that frame's rotation angle is phi. From X and Y we obtain the radius of the point from the center, r = sqrt(X^2 + Y^2), and the angle theta of the point from the X axis of that frame, theta = arctan(Y / X). In this way we obtain the spherical coordinates of the point in (r, theta, phi) form.

The next step is to create the 3D object file. We chose the polygon file format, a .ply file, which works with third-party applications such as Blender and MeshLab. To store the coordinates in that file we need the Cartesian system, so we converted the spherical coordinates back into Cartesian coordinates and stored them in the PLY file, following the format's rules: we first defined the headers. This is an example PLY file we created after scanning.

Next was plotting the point cloud. This is the object we attempted to scan, and this was the output: the point cloud, just the points plotted in 3D space, and this is how it looked. I admit it is not exactly similar to the object, because the algorithm has many flaws and we still need to develop it. This is the surface plot; it uses an algorithm called triangulation to create the faces automatically, via a built-in matplotlib function that we used directly.

Over to Anchal.

This is the main window that pops up after running our code. It has several menus and submenus. In the File menu there is an Exit submenu to close the main window, and in Edit there is a Settings submenu, whose screenshot is shown on the next slide. Settings lists the Arduinos and webcams connected to the system so that the user can select among them.

Coming to the hardware, we used the Arduino Uno, a microcontroller board based on the ATmega328. Next is the stepper motor: its stride angle is 7.5 degrees per frame, for a total of 48 steps. We used a motor driver, since the microcontroller cannot supply the high current needed to drive the motor directly, and an SMPS for the power supply. In the circuit connections, pins 8, 9, 10, and 11 are the digital I/O pins we used. This is the hardware view: we used two lasers, L1 and L2, shown here. L1 is the reference laser and L2 is used to detect the outline of the object.

Now we are going to show our recorded video. It is just a recording of how the object is scanned from different views.
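A minimal sketch of the kind of red-line extraction described above, assuming the laser shows up as a red-dominant stripe and taking the mean column per image row as the line center (the actual thresholds and pipeline in the project may differ):

    import os
    import numpy as np
    import cv2

    frame = cv2.imread('frames/frame_00.png')
    b, g, r = cv2.split(frame)

    # Keep pixels where the red channel clearly dominates blue and green
    # (the threshold of 60 is an assumption, not the project's value).
    mask = r.astype(int) - np.maximum(b, g).astype(int) > 60

    xs, ys = [], []
    for row in range(mask.shape[0]):
        cols = np.where(mask[row])[0]
        if cols.size:                    # center of the laser line in this row
            xs.append(int(cols.mean()))
            ys.append(row)

    # Store the contour as two rows, X coordinates then Y coordinates,
    # matching the TXT layout described above.
    os.makedirs('contours', exist_ok=True)
    with open('contours/frame_00.txt', 'w') as f:
        f.write(' '.join(map(str, xs)) + '\n')
        f.write(' '.join(map(str, ys)) + '\n')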
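And a sketch of the conversion and PLY-writing steps as described: the frame index gives phi, r = sqrt(x^2 + y^2) and theta = arctan(y/x) give the other two spherical coordinates, and the points are converted back to Cartesian for the vertex list (the header follows the standard ASCII PLY layout; file paths are assumed):

    import math

    STEPS, STRIDE = 48, 7.5  # 48 frames, 7.5 degrees apart
    points = []

    for i in range(STEPS):
        phi = math.radians(i * STRIDE)  # rotation angle of this frame
        with open('contours/frame_%02d.txt' % i) as f:
            xs = [float(v) for v in f.readline().split()]
            ys = [float(v) for v in f.readline().split()]
        for x, y in zip(xs, ys):
            r = math.hypot(x, y)      # radius from the center, sqrt(x^2 + y^2)
            theta = math.atan2(y, x)  # angle from the frame's X axis
            # Spherical (r, theta, phi) back to Cartesian for the PLY file.
            points.append((r * math.sin(theta) * math.cos(phi),
                           r * math.sin(theta) * math.sin(phi),
                           r * math.cos(theta)))

    with open('scan.ply', 'w') as f:
        f.write('ply\nformat ascii 1.0\n')
        f.write('element vertex %d\n' % len(points))
        f.write('property float x\nproperty float y\nproperty float z\n')
        f.write('end_header\n')
        for px, py, pz in points:
            f.write('%f %f %f\n' % (px, py, pz))

The surface plot mentioned above can then be reproduced with matplotlib's built-in triangulation, for example by calling ax.plot_trisurf(...) on the point coordinates over a 3D axis.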
Now we will focus on the applications of our project. First is industrial design: we can do inspection and detection for any object we want to scan. Next is healthcare and medicine: we can generate 3D scans of limbs and prosthetics, whatever medical science requires. Next is archaeology: we can scan artifacts and other important items we want to study. And next is education: the project can be used for 3D-rendering studies. Next slide, please.

Now, discussing the issues we face with the present design. As Sir asked us to scan his inhaler and we were unable to, there are certain issues we want to explain. First is the inefficient conversion algorithm: currently, the object to be scanned must not have any 90-degree or pointy edges. To remove this limitation we could, in future, use two cameras to estimate depth, because at present we cannot analyze pointy edges. The next problem is poor 3D resolution. The scanned contour contains many points, which causes resolution problems and a processing lag. For that we have two approaches. First, as Soumya described for the PLY file, where we have a range of points we can take their average and use a single point in its place. Second is a reduction mechanism: we find one representative point as the mean and use it. A sketch of this averaging appears at the end of this part.

Possible future work starts with the construction of faces. As you saw in the plot, the object was not perfectly scanned, partly because we have not constructed faces. For faces we can use a process called triangulation; all our code is in Python, and we can implement it in future, but we need more time. Next is the GUI platform we showed you, which can be developed to be more interactive. And last is platform independence: we developed on Linux, and we can port it to more platforms if we want.

Now we want to present the product analysis we have done so far. These are the benchmarks we set for our hardware and software. On the hardware side we considered the steps taken by the stepper motor, the stride angle, the speed of the stepper motor, and the voltage and current ranges. On the software side, as I discussed before, the GUI platform needs more development, so we measured the seconds taken to load the main window, the time for the complete scanning process, and so on, together with the system requirements: we currently run it on a Core i5 processor, and this tells us what would be needed if it is deployed in future.

This slide shows the responsiveness of our various processes, starting with the initial window loading. We also rated the RGB settings on a scale of ten; they score nine because they work well. Next is the responsiveness of matplotlib: we import many libraries, so they have to be coordinated for the software to run fast, and we rated matplotlib on the same scale. The system requirement is an Intel Core i5 at the clock speed we have listed. These are our reviews: we evaluated ourselves, giving scores out of 10, and we tested on our own systems.
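A minimal sketch of the averaging-based reduction described above, collapsing every group of k consecutive contour points into their mean (the group size is an assumption):

    def reduce_points(points, k=4):
        """Replace each run of k consecutive (x, y, z) points with their average."""
        reduced = []
        for i in range(0, len(points), k):
            group = points[i:i + k]
            reduced.append(tuple(sum(c) / len(group) for c in zip(*group)))
        return reduced

    # Example: a contour of 40,000 points shrinks to 10,000 with k=4.
    # sparse = reduce_points(dense, k=4)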
I have one basic problem: where was the requirement of real time? I see "responsiveness" here. I can collect 100 or 200 images and generate the 3D image the next day, so what is the question of responsiveness?
Responsiveness here means that we have built a software window with various widgets; for example, we can change the range of the RGB values.
I expect he has understood the question, which is whether responsiveness is an important parameter for this device.
Obviously, sir: when we make a change, we expect that as soon as we apply it, it becomes visible on the image. By responsiveness we also mean the processing lag we mentioned: as the device scans, we want to watch the scan on the software screen. Sir, it may not be strictly necessary, but we have implemented it.
Okay. The other important point: 48 is a very small number of frames.
Yes, sir.
It is not accurate; with 48 you cannot get the accuracy and resolution. You need at least...
Actually, sir, the value is fixed by the stride angle of the stepper; with whatever stepper you get in the market, that is what you may have.
One more thing: what distance did you use from the camera to the object?
Sir, we have not used any precise algorithm to calculate the distance.
That matters, because your accuracy again depends on the focal point, the lens used, and the distance between your object and your lens. Why are there two lasers?
One is for reference and the other captures the edges of the object.
And the separation between the two line lasers: was that taken by trial and error, or is there a calculation behind it?
No, sir. We capture the coordinates of both laser lines, and we simply subtract the X coordinate of the first laser from that of the second. We have not tested that process finally, because the separation is visible and quite large.
Many times certain things will not fall within the resolution you have chosen: the object passes the first laser and does not reach the second, so you get nothing in between, for example if the object is smaller or the separation is too coarse.
For that, sir, we have kept it generic; we have to adapt it ourselves, it cannot adapt by itself.
Okay, thanks. Anything else you want to add? Thank you.
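A minimal sketch of the two-laser subtraction described in this exchange, assuming both laser contours have already been extracted row by row (the function and variable names are illustrative, not the project's code):

    # Per-row displacement between the object laser (L2) and the reference
    # laser (L1) for one frame.
    def laser_offsets(ref_xs, obj_xs):
        """Subtract the reference laser's X coordinate from the object laser's,
        row by row, to get the profile displacement for that frame."""
        return [x2 - x1 for x1, x2 in zip(ref_xs, obj_xs)]

    # Example: laser_offsets(ref_xs=[100, 101], obj_xs=[140, 143]) -> [40, 42]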