My words fly up, my thoughts remain below; words without thoughts never to heaven go. Team Garuda, presenting quadcopter-based navigation using the Akash tablet with onboard image processing, mentored by Mr. Sandesh Pati, Mr. Metri Mordekar, and Mr. Shiva Kiran Kodali. The group is managed by Mr. Rajesh Kushalkar. Let's start with the presentation.

So our topic is quadcopter navigation using the Akash tablet with onboard image processing. First of all, I want to thank our mentors for all their support and help throughout the project. Now I will introduce our team: this is Aditya, this is Ankit, this is Yogendra and Suji, and myself, Shubham.

Now the main question is: what is a quadcopter? A quadcopter is simply a four-rotor helicopter. I will list some features of ours. First, video streaming: we have an onboard camera which provides real-time video to the ground control station. Next, a door detection algorithm running on a Raspberry Pi single-board computer using OpenCV libraries. Next, GPS tracking: an onboard GPS sensor provides the quadcopter's current location, which helps in navigating from one waypoint to another. Apart from that, we have a ground control station running on the Akash tablet; it not only receives parameters and sensor data from the quadcopter but also has an RC control to fly the quadcopter manually.

Now for the project overview. The main board, a Crius Pro board, has an ATmega2560 microcontroller and onboard sensors: gyroscope, accelerometer, compass, and an altitude (pressure) sensor. The main board provides PWM signals to the ESCs, and the ESCs drive the brushless DC motors. Apart from that, we have a GPS sensor.
As I said earlier, it provides the location of the quadcopter, and these parameters are sent over the MAVLink protocol through a Wi-Fi module to the Akash tablet. And we have a Raspberry Pi, a single-board computer, doing the image processing; the live stream is sent through a Wi-Fi dongle to the Akash tablet.

Now I will explain how this quadcopter moves, the basic mechanics. First is yaw. We have a very simple animation of an airplane here: rotation of the quadcopter about the z axis is known as yaw, so changing the heading of the quadcopter from one direction to another is just yaw. Next is roll: rotation about the x axis is known as roll, and this movement provides it. Next is pitch, which tilts the quadcopter forward or backward. This is pitch.

Now I will explain how to implement these three controls. First, yaw control. In yaw control, motor 1 rotates at its maximum speed, providing torque in the anticlockwise direction; by Newton's third law there is a reaction torque from the air which turns the quadcopter clockwise. It works similarly for the anticlockwise direction.

Now roll and pitch. Here motor 1 runs at a higher rpm and motor 1′ at a lower speed, so the side with the faster motor lifts and the quadcopter tilts. For the opposite direction, the speeds of 1 and 1′ are simply interchanged.

Now we have pitched the quadcopter, and our task is to keep its height. At a level attitude the thrust is purely vertical, but once the quadcopter is pitched, the thrust splits into horizontal and vertical components, so we need extra thrust to keep it moving upwards. Next is altitude control.
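The vertical/horizontal split of the thrust under pitch can be put in one line: only the cos(θ) component of the thrust points up, so holding altitude requires scaling the hover thrust by 1/cos(θ). A minimal sketch, with illustrative numbers:

```python
import math

def thrust_to_hold_altitude(hover_thrust_n, tilt_deg):
    """Total thrust needed so its vertical component still equals
    the level-hover thrust, after tilting by tilt_deg degrees."""
    return hover_thrust_n / math.cos(math.radians(tilt_deg))

print(thrust_to_hold_altitude(10.0, 0))             # 10.0 N at level hover
print(round(thrust_to_hold_altitude(10.0, 30), 2))  # 11.55 N at 30 deg tilt
```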
Since motors 1 and 1′, and 2 and 2′, rotate at the same speed but in opposite directions, the torques generated by the two pairs cancel each other out, leaving a net thrust that moves the quadcopter upwards. Similarly, we reduce the rpm of all four motors to bring it down.

Now, some of the elements that are essential for flying this quadcopter, starting with the propellers. We use two pairs of propellers, one clockwise and one anticlockwise. It is essential to balance the length and the pitch of these propellers: the greater the length and pitch, the more thrust is produced, but the more power is consumed. To get a reasonable flight time, we have to trade off power consumption against propeller length and pitch.

Next is the electronic speed controller (ESC). The ESC drives the brushless DC motors; the control signal to these ESCs is a 50 Hz to 400 Hz PWM signal with pulse widths from 1000 to 2000 microseconds.

Now we move to the sensor part of our quadcopter, starting with the accelerometer and gyroscope. What is an accelerometer? It gives the acceleration along all three axes, and from these accelerations, with some simple trigonometry, one can work out the orientation. But the main problem is vibration: during flight a lot of vibration is generated, so we cannot get a stable orientation estimate from the accelerometer alone. To fix that, we attach a gyroscope. But the gyroscope has a drift problem: even when the quadcopter is stationary, it reports non-zero values. To eliminate this, we use algorithms, explained further on, that reduce the noise. With that we get Δx, Δy, and Δz, the translations along all three axes, and the orientations.
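The yaw/roll/pitch/altitude mechanics above can be condensed into a motor mixer that outputs the 1000-2000 µs ESC pulse widths just described. This is a sketch only: the "+" motor layout, the signs, and the pulse mapping are assumptions for illustration, not the flight board's actual mixer.

```python
def to_pulse_us(cmd, lo=1000, hi=2000):
    """Map a normalised command in [0, 1] to the ESC pulse-width range."""
    cmd = min(max(cmd, 0.0), 1.0)
    return int(round(lo + cmd * (hi - lo)))

def mix(throttle, roll, pitch, yaw):
    """throttle in [0, 1]; roll/pitch/yaw commands in [-1, 1].
    Front/back spin one way and left/right the other, so adding yaw to
    one pair and subtracting it from the other leaves a net torque,
    while pitch and roll split their own motor pairs."""
    front = throttle + pitch + yaw
    back  = throttle - pitch + yaw
    left  = throttle + roll  - yaw
    right = throttle - roll  - yaw
    return tuple(to_pulse_us(m) for m in (front, back, left, right))

print(mix(0.5, 0.0, 0.0, 0.0))  # hover: (1500, 1500, 1500, 1500)
print(mix(0.5, 0.0, 0.1, 0.0))  # pitch: front speeds up, back slows down
```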
Now we need to know which direction the quadcopter is currently facing. This is provided by the magnetometer, which measures the magnetic field along all three axes, X, Y, and Z. Next is the pressure sensor: it measures the pressure at a given altitude, so from it we can easily calculate the current height of the quadcopter. Together these make a 10-axis IMU, giving the translation along all three axes, the orientation, the heading of the quadcopter, and its altitude.

As I said earlier, we use a GPS sensor to provide the quadcopter's current location. GPS uses trilateration: the intersection of four spheres gives a unique location for the quadcopter. For transmitting the data, it uses the NMEA protocol. Now I will hand over the mic to Aditya to talk about sensor fusion.

Thank you, Shubham. I am here to explain sensor fusion. Difficult term, isn't it? Just go by the literal meaning: sensor fusion means fusing the data from two or more sensors. Which two sensors? One is the gyroscope and one is the accelerometer. And the main question is why we need to fuse them. The reason is that the accelerometer is noisy; its data has vibration in it. If we plot accelerometer data against time, the green curve is the accelerometer data, and you will see spikes in it. The disadvantage of the gyroscope is that it has a time-varying drift: the blue (or red) curve is the gyroscope data, and it has a non-zero, time-varying drift. Why do we need the accelerometer and gyroscope data at all? Because we need the orientation of the quadcopter, whether it is inclined this way or that. One tilt here and the quadcopter can end up hundreds of kilometers off course.
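The two readouts described above, heading from the magnetometer and height from the pressure sensor, are each one formula. A sketch, assuming a level sensor (tilt compensation omitted) and the standard-atmosphere barometric model; the sample readings are made up:

```python
import math

def heading_deg(mx, my):
    """Heading from the horizontal magnetic-field components,
    assuming the sensor is level (tilt compensation omitted)."""
    return math.degrees(math.atan2(my, mx)) % 360.0

def baro_altitude_m(pressure_pa, sea_level_pa=101325.0):
    """International barometric formula, rearranged for altitude."""
    return 44330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))

print(round(heading_deg(0.0, 1.0), 1))      # 90.0 (field along +y)
print(round(baro_altitude_m(101325.0), 1))  # 0.0 m at sea-level pressure
print(round(baro_altitude_m(90000.0)))      # roughly 990 m at ~90 kPa
```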
So that is a very important point. Now I am going to tell you about a filter that is apparently difficult but functionally simple: the complementary filter. I said the accelerometer data is noisy, it has vibrations. What does that mean? It is reliable over the long term, not over the short term. Gyroscope data has a time-varying drift, which means it is reliable over the short term but not over the long term. Time is the inverse of frequency, and hence we pass the accelerometer data through a low-pass filter and the gyroscope data through a high-pass filter. This is known as a complementary filter, and it is the algorithm used in this quadcopter to obtain orientation.

It is very simple, but don't underestimate its simplicity. You must have seen India's PSLV launch some days back. You will be surprised to know that the algorithm used in the PSLV rests on the same mathematics as the one used in this quadcopter. The same mathematics, dear friends.

Someone asked how we got the mathematics. Well, there is open-source code available for that, and you have to tinker with it; we have used the same open-source code, without modification. There are many methods: this complementary filter is a very simple one, and not very accurate. They use the Kalman filter, the extended Kalman filter, and things like that; we have used the simple one. Not accurate enough, that is, for the specifications of spaceflight: if you are going to the moon, you can't land on the stars.

So, DCM, the direction cosine matrix. In layman's language, in the common man's language, it is just a third-order complementary filter. But I want to say that this is a very important project. You know why? Because quadcopters are everywhere nowadays. So this was the orientation algorithm used by us; there is open-source code available for it.
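The complementary filter just described fits in one line of code: integrate the gyro rate (trusted short-term, high-pass) and blend in the accelerometer angle (trusted long-term, low-pass). A minimal sketch with illustrative numbers; the blend factor alpha is an assumption:

```python
def complementary(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter step: alpha weights the gyro-integrated estimate,
    (1 - alpha) weights the accelerometer's absolute angle."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Start from a wrong 5-degree estimate. The gyro reports no rotation and
# the (noisy but unbiased) accelerometer reads 0 degrees; the estimate
# decays toward the accelerometer's long-term truth instead of drifting.
angle = 5.0
for _ in range(100):
    angle = complementary(angle, gyro_rate=0.0, accel_angle=0.0, dt=0.01)
print(round(angle, 2))  # ~0.66 after 100 steps (5 * 0.98**100)
```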
It implements the DCM and the complementary filter: ArduPilot. Yes, it is an Arduino-compatible board, which is why it was given the name ArduPilot.

Next, autonomous navigation. We have divided navigation into two parts, external and internal, because in external navigation GPS is necessary, and when you come into a closed, cluttered environment like this one, GPS access is lost. What then? The quadcopter would be lost and could not go back, so we have to implement some algorithm for that. First, external navigation: GPS is necessary, and waypoints can be set from our application on the Akash tablet; a demo will be shown.

Next slide: internal navigation. For internal navigation we are using a door detection algorithm, which we have implemented here. This door we have detected; the quadcopter would go through it, but we can't give a demo, it is too risky in here. Sonars will also be used. We have not implemented the sonars as of now because of paucity of time, but they can be used to create a depth map of the room so that the quadcopter can navigate.

Next slide. This is the door detection algorithm in use, and the door here has been detected; we have marked its centroid. In future we plan to design a PID controller so that the quadcopter can fly through the centroid; we have to map the image centroid to real-world coordinates.

Someone asked: suppose the door has glass, will it still detect the door? Sonar. We are implementing sonar for that; that is why the sonar is there, to get a depth map of the room. We have not implemented that algorithm yet, sir, because of paucity of time.

The door detection algorithm, then. Previous slide. Well, we used machine learning: we did Haar training, a Haar cascade classifier. And we also tried LBP features.
The result with LBP was slightly faster, but the FPS was too low for real-time use. We also tried feature matching with SURF, but that was even slower than Haar. So in the end we settled on the geometric properties of the door: the door is a quadrilateral, so we detect that it is a quadrilateral, with a limit on its area, and the door is open.

You are using OpenCV, correct? Yes, this is OpenCV. Suppose I hold up a poster with a door photograph printed on it, what will happen? That case will be covered by the sonar and the PID control algorithm.

Now Suji will explain PID. We move to PID control and PID tuning. PID is an acronym for proportional, integral, and derivative, the mathematical terms. We have PID controllers on our board, and we tune them through our ground control station, that is, the Akash tablet. Why do we need PID tuning? For a smooth and stable flight of the quadcopter: we want to get the optimum of its desired behaviour, whether in terms of its altitude, its tilt, or its angle. And how does PID work in the quadcopter? It reads the data from the sensors and the input it gets from the ground control station, compares the two to get the error, and according to the error it applies a correction.

Next, please. We have three parts of PID tuning: the proportional, the integral, and the derivative. The proportional part is the immediate correction: it applies a correction directly proportional to the error. As we can see here, the actual altitude is very far from the target one, so according to the error, that is the difference, it generates a correction so that our quad reaches the desired altitude. Next, please. Then the integral part, the over-time correction.
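As an aside, the geometric door test mentioned above (a quadrilateral with an area limit, whose centroid we track) can be sketched in pure Python. The thresholds are illustrative assumptions; in the real pipeline, OpenCV contour approximation would supply the vertex lists:

```python
def shoelace_area(points):
    """Polygon area via the shoelace formula; points = [(x, y), ...]."""
    s = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def centroid(points):
    """Vertex centroid, the point a controller would steer through."""
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

def is_door_candidate(approx, min_area=5000.0, max_area=200000.0):
    """approx: vertices after polygon approximation of a contour.
    A door candidate is a quadrilateral with a plausible pixel area."""
    return len(approx) == 4 and min_area <= shoelace_area(approx) <= max_area

door = [(0, 0), (100, 0), (100, 300), (0, 300)]  # upright 100x300 px shape
print(is_door_candidate(door))   # True
print(centroid(door))            # (50.0, 150.0)
print(is_door_candidate([(0, 0), (100, 0), (50, 100)]))  # triangle: False
```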
Suppose our quad is quite close to the desired altitude, but changing the P parameter is of no help. Then we can use the integral term: it accumulates the error over time and applies a correction according to that cumulative error, so that we can eliminate the steady-state error. The next one is the derivative part. Suppose the correction term is increasing or decreasing at a very rapid rate and we want to slow that down, otherwise the quadcopter is going to overshoot the desired value; then we use derivative control, and we simply tune the D gain. The PID gains are very important, because without proper tuning the quad may be very wobbly, it may even topple over, or it may become too sluggish.

Now we move to the telemetry section, the communication part. We are using MAVLink as the communication protocol: it is a protocol for communication between our ground control station, on the Akash tablet, and the quadcopter. Through MAVLink we can set the different flight parameters and receive the various sensor data.

Coming to the interfaces of our app. The first is the RC interface, which lets the user control the quadcopter with a virtual radio controller. The user can use the navigation menu to switch between the different activities, and another menu to switch between the different flight modes we have. The other interfaces include the planning interface and the flight-data interface. In the planning interface, the user can set the waypoints that the quadcopter will cover. The flight-data interface has a HUD, a head-up display, which shows the flight data of the quadcopter, and a second part that keeps track of the waypoints that have been covered.
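The three corrections just described, immediate (P), cumulative (I), and rate-damping (D), combine into one loop step. A sketch with illustrative gains and an altitude example, not the tuned flight values:

```python
class PID:
    """One combined P+I+D correction per update."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt                   # over-time (I) term
        derivative = (error - self.prev_error) / dt   # rate-of-change (D) term
        self.prev_error = error
        return (self.kp * error             # immediate correction
                + self.ki * self.integral   # removes steady-state error
                + self.kd * derivative)     # damps overshoot

pid = PID(kp=0.8, ki=0.1, kd=0.2)
out = pid.update(setpoint=10.0, measured=4.0, dt=0.1)
# error = 6, integral = 0.6, derivative = 60 (it jumps on the first step)
print(round(out, 2))  # 0.8*6 + 0.1*0.6 + 0.2*60 = 16.86
```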
Other interfaces include the charts interface and the camera feed. In the charts interface, whatever sensor data we receive from the quadcopter is plotted on graphs. In the camera feed, we get live video streaming from our Raspberry Pi camera to the Akash tablet. Next. Now I will hand over to Aditya to tell you more.

I will cover what we would take back from Eklavya. The first thing is the orientation algorithms of a quadcopter, which I explained. Then the aerodynamics and mechanics of a quadcopter: our physics was weak, so we learnt it again here. Configuring a wireless camera: we have a Raspberry Pi camera, the Pi's own small 5-megapixel camera module, and a USB camera interfaced with the Raspberry Pi; for safety, we also interfaced and configured a Cisco wireless camera. Next, we explored image processing techniques in detail: we used OpenCV and tried out different algorithms. Next, streaming the video feed of a USB webcam and the Raspberry Pi camera over the local area network through MJPG-streamer; that will be shown in the demo. And a new way of thinking and questioning things. And last, a new communication protocol called MAVLink.

A challenge we faced: implementing Haar training for door detection. This was a very big challenge, because Haar training usually takes somewhere between four days and a week, and that too on a database of only 500 images. Okay, next slide. Sir, wind up. Next slide.

Further prospects: making Garuda autonomous using the GPS sensor, designing a PID controller for internal navigation, face recognition in the feed received at the Akash end, and recording the video feed on the Akash tablet and processing it. The last two could not be done because we were using the Android package for OpenCV, which only works with a hardware camera, and we were receiving a network video feed, not a hardware camera feed.
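The MJPG-streamer setup mentioned above typically looks like the following; this is a hedged sketch, as the device path, resolution, port, and web root are assumptions, not necessarily the team's exact flags:

```shell
# Serve a USB webcam / Pi camera feed over the LAN with MJPG-streamer:
# input_uvc.so grabs frames from the V4L2 device, output_http.so serves them.
mjpg_streamer -i "input_uvc.so -d /dev/video0 -r 640x480 -f 15" \
              -o "output_http.so -p 8080 -w /usr/local/share/mjpg-streamer/www"
# The stream is then viewable from the tablet's browser at
# http://<pi-address>:8080/?action=stream
```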
So that was not implementable. These are the references, and now we'll show the demo video. Can I have the lights off, please? Thank you.