Hi everyone, my name is Chinmay and I've been working on Android for the last three years. I've worked on Android phones at a major manufacturer, and chances are that if you've used one of those phones, some of my code has been running on it. And if something didn't work right — well, you know whom to blame. So today I'll be talking about sensors on Android, and I'll split the talk into four parts. First we'll have a brief introduction to the sensors. Okay, let's have a show of hands: how many of you have actually worked on Android or used Android? Okay, all of you have used Android. And how many of you know about sensors? Oh, okay. So we can keep the introduction part brief. Then we'll go to the lower levels: I'll talk a little bit about the kernel drivers, how they integrate into the phone, and the sensor framework as such. And then we'll move on to app development — the things you need to keep in mind, and the kind of gotchas which I personally ran into. Okay, so, sensors. As you can see, there is a variety of sensors available on Android, and they are classified based on the actual hardware present on the device. There are two groups. One is the real sensors: for every sensor listed there, a hardware component exists which maps directly to that sensor. If you take the accelerometer, there is a chip on the device which actually measures acceleration and acts as the accelerometer sensor. Besides the accelerometer there is a compass which measures the magnetic field, and likewise pressure, temperature and humidity. And then there is a set of virtual sensors which are nothing but wrappers around the existing real sensors. Linear acceleration, gravity, rotation and orientation are nothing but combinations of the data obtained from the accelerometer and the gyroscope: the data is combined, there is a little bit of math going on in the framework, and then it is reported. So it's the same underlying hardware which is responsible for multiple sensors — the accelerometer and gyroscope together produce what gets reported as the linear acceleration and gravity sensors. And then there is the proximity sensor. There is sometimes dedicated hardware for proximity detection, but to reduce cost and keep the size down, what manufacturers usually do is use the light sensor itself, which has a proximity sensing function. So the proximity sensor is really a virtual sensor, and it's the light sensor hardware which actually does the work. And there are some sensor types which are deprecated — I'd advise you not to use those, and we'll see why in the later slides. Okay, and here are a few apps, most of them games, that all rely on sensors as an input device. You might have played a few of these, so you know how it feels to use a sensor-driven application.
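About that "little bit of math" behind the virtual sensors: a minimal sketch of the idea, assuming a simple first-order low-pass filter. The real framework math is more involved (it can also fold in the gyroscope), and the names and the ALPHA constant here are illustrative only:

```c
#include <stdio.h>

#define ALPHA 0.8f                /* how much of the old gravity estimate to keep */

static float gravity[3];          /* slowly varying estimate of gravity */

/* feed one raw accelerometer sample, get the "linear acceleration"
 * virtual-sensor value out */
static void on_accel(const float raw[3], float linear[3])
{
    for (int i = 0; i < 3; i++) {
        gravity[i] = ALPHA * gravity[i] + (1.0f - ALPHA) * raw[i]; /* low-pass */
        linear[i]  = raw[i] - gravity[i];                          /* high-pass */
    }
}

int main(void)
{
    float raw[3] = { 0.3f, 0.1f, 9.9f };   /* device roughly flat and still */
    float lin[3];
    on_accel(raw, lin);
    printf("linear: %.2f %.2f %.2f\n", lin[0], lin[1], lin[2]);
    return 0;
}
```

One real hardware sample stream, two reported sensors: the low-passed part is the gravity sensor, the remainder is linear acceleration.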
Sensors as such are available as part of the standard Android framework. So what are the different use cases of sensors? We can make a classification based on what kind of input the user is expected to give and what kind of application is receiving it. Basically we have four combinations, and three of them need to be handled by the application developer. The fourth one, auto-rotate, is a special case of passive input and is provided by Android itself: you just enable auto-rotate in the settings, and all applications which do not specify a fixed orientation will be automatically rotated as you turn your device. Let's quickly go through the individual use cases. First, active input: the application in the foreground is consuming the input from the sensors, and the application is designed in such a way that it expects the user to actively turn the device. That would be something like a simple racing game, or the second one on the right, which is a screenshot of a ball labyrinth game: you guide a ball through the maze, starting at one point and ending at the final destination. It's very much like one of those physical labyrinth toys with a little metal ball bearing which you tilt to guide the ball through — and that is now an application available on your Android phone. The second use case, which most sensor apps have to deal with, is passive input. Here the app is designed in such a way that it doesn't expect the user to do anything actively. But because the sensors are working and the hardware is present, you get sensor events anyway; the application can register for them and be smarter without requiring the user to do anything. Based on how the device is placed, some useful input is extracted and used to improve the app. You can see from the screenshots that the one on the right is clearly the better app. If you do "get directions" on your PC, where there is no sensor input available, it can just show you a route. But if you do the same "get directions" in the maps application on your Android phone, the application shows the map depending on your location, which comes from GPS, as well as your heading. The compass is used: the app knows you are pointing north, say, and it rotates the map so that you get a better view, a better idea. Rather than an abstract point A to point B, you can see everything relative to yourself — okay, I'm standing here, the map is oriented this way — and you can match what's on the map to what you see in front of you. So this map-rotation feature can be handled without the user doing anything; they are not expected to shake the phone. It's a smart, intuitive behavior: you just hold the device, and the app uses the sensors to find out where the device is pointing and rotates the map properly. And to be precise, when I say orientation here I really mean the compass heading — the way the map is displayed on the device — not the screen orientation.
Okay, the third class is the active-passive use case. Say you have something like a media player — the default music app which comes with stock Android will support this, but there are also apps like Winamp or Rockbox which let you play music in the background, and while you're at it, they also support gestures: you can shake to go to the next song or the previous song. So even though the application is not in the foreground — you're reading an ebook, say, not actively using the player — you suddenly decide you need to focus on something and want to pause. You need not close your current activity, go back, launch the music app and press pause. You can just do a shake. The music application is a service running in the background, and whenever you do one of these gestures, it's not the foreground application, not the active application, which receives it — it's the passive application in the background. So even though the action is active, it is handled by a passive application. That's the third use case. And the fourth use case is a very common one, and it's not really the app developer's job: since it's such a common case, Android itself handles it. The user enables auto-rotate in the settings, and whenever the device is rotated, the framework rotates the view accordingly. Here we're seeing just two of the orientations; on most current Android devices you get full 360-degree rotation — portrait, landscape, reverse portrait and reverse landscape. All four rotations are supported, and in any case almost all devices support at least three. Okay, so now a quick introduction to the sensors as hardware. As you can see, these are the internals. Even as an app developer you need not focus much on this — an application developer really doesn't need to know this much — but knowing it never hurts. An accelerometer sensor is basically a tiny MEMS device: there is a tiny proof mass inside, and based on how it moves, the supporting structure is compressed or deflected, and that generates the voltage readings which the device reports. The advantage of such a device is that it does not vary with time: do a shake today and the same shake two years from now with the same intensity, and you are going to get the exact same values. The problem with accelerometers, the way they are currently designed, is that the response is not that great. There is a certain amount of lag which is inherent to the device — it's got nothing to do with how fast your device is or how fast the framework is running; it's just that the accelerometer IC itself takes some time to report the acceleration values. So that's the accelerometer. The gyroscope, on the other hand, is quite a versatile device. Where the accelerometer gives you the three linear accelerations along the x, y and z axes, the gyroscope gives you angular velocity — how fast the device is actually rotating.
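Since the gyroscope only reports angular velocity, getting an actual angle out of it means integrating over time. A toy sketch of that, with hypothetical names — and it also shows where the drift we discuss next comes from:

```c
static float angle_z;                 /* accumulated heading, radians */

/* omega_z: angular velocity about z in rad/s; dt: seconds since last sample */
static void on_gyro(float omega_z, float dt)
{
    /* any tiny residual bias hiding in omega_z is integrated too,
     * so angle_z slowly drifts even when the device is perfectly still */
    angle_z += omega_z * dt;
}
```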
So the advantage of the gyroscope is that it has a faster response than an accelerometer, but the problem is that it has no auto-calibration. Over time you need some other reference to keep it calibrated. A given rotation might give you certain values today, but after you use it for a couple of days, the same rotation at the same velocity should give you the same angular velocity — and it doesn't. That's the problem with the gyroscope, and later on we'll see how the accelerometer and gyroscope can actually be used to correct each other. That's the sensor fusion part, which I'll be talking about later. Similarly for the magnetometer — that's the compass device. It basically measures the amount of magnetic flux passing through a unit area, and uses that to figure out the heading. It's very similar to an actual analog compass, except there is no analog needle involved; it's a digital version. This compass is very critical: neither the accelerometer nor the gyroscope can detect you placing the device flat on a table and rotating it. The accelerometer cannot detect any such rotation, because all it sees is a constant gravity reading along the z axis — it sees no variation, and that's about it. Whereas with the compass, hold the device flat and rotate it, and the magnetic field heading through the device changes according to its position relative to the earth's magnetic field. And that helps us figure out the yaw, or azimuth, which is nothing but the angle in degrees from north. The problem with the magnetometer is that you can move some other electronic device right next to your phone and it will mess up all the readings — it's not just the earth's magnetic field; any magnetic field will be picked up by the magnetometer. And then one more very commonly used sensor is the light sensor, which as you can see is placed close to the earpiece speaker, and there is a reason for this. As I said, the same light sensor hardware is used for proximity detection as well. In early testing, the reports were that whenever you were actively on a call you would be holding the phone to your ear, and your cheek would touch the screen and activate buttons — all such issues. Once that was discovered, the common way to solve it was to place the light sensor next to the earpiece and use it for proximity detection. There's an IR LED in the center, as you can see here, and when the sensor detects that some object — your ear — is close by, and you are in a use case where the touch screen is not supposed to be used, the touches are ignored and the screen is blanked. So that's one very common use of the light sensor.
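Before we move on to the framework, here is the gist of that accelerometer-gyroscope correction, written as a classic complementary filter. This is a minimal sketch of the idea, not what the framework actually ships; the blend constant K and the function names are illustrative:

```c
#include <math.h>

#define K 0.98f     /* trust the gyro short-term, the accelerometer long-term */

static float pitch; /* estimated tilt about the x axis, radians */

static void fuse(float gyro_x,        /* rad/s, from the gyroscope     */
                 float ay, float az,  /* m/s^2, from the accelerometer */
                 float dt)            /* seconds since the last sample */
{
    /* tilt implied by gravity alone: slow and noisy, but it never drifts */
    float accel_pitch = atan2f(ay, az);

    /* integrate the fast gyro, then nudge the result toward the reference */
    pitch = K * (pitch + gyro_x * dt) + (1.0f - K) * accel_pitch;
}
```

The gyro term gives the fast response, and the small accelerometer term continuously bleeds the accumulated drift back out — each sensor covering the other's weakness.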
So moving on, this is part two, where I'll be speaking about the Android sensor framework. As you can see, there are all these different layers. There is the Android application, which the end user interacts with. Below it is the Android framework, which is part of standard Android. Then there is the Android sensor HAL: the device manufacturer, or whoever is porting Android, is responsible for writing that. Below that are the kernel drivers, and below those the actual hardware devices. The control flow goes all the way down from the app: the app requests sensor data, the framework responds — I have the following sensors, which one do you want — and eventually the hardware instructions go out. The basic responsibility of the HAL is this: there is a standard Android framework which is common across all devices, and there are kernel drivers which are different on every device. So the HAL is the device manufacturer's responsibility — whatever drivers they use, whatever hardware they use, whatever rates it supports, the HAL is supposed to encapsulate all that and present a standard sensor API to the framework, so that we need not modify the framework at all. If you are bringing up a new device, there is no real need to touch the framework; the framework expects certain APIs to be exposed, which the HAL has to provide, and the HAL in turn talks to the kernel drivers. The Android sensor HAL lives in user space — it's a shared-object library sitting in /system/lib/hardware. So that's the basic overview. Okay, now, let's start from the bottom. Say you have a new device in your hand; you know it has certain sensors available on it, and you have a kernel source tree for Android. So how do you start? Where do you start? The first thing is to start with the kernel: check whether an existing driver is already available for your particular sensor. This, for example, is the kernel tree for one of the major phones out there — it's open source. So first you figure out whether an existing driver is available for your part. If there is, you just need to enable it. If there isn't, you might want to contact the vendor for one, or, if you can get the datasheet, write it yourself. We'll get into the details of the important things you need to do when you're writing and integrating your sensor driver into the kernel. And the second thing after that is the user-space HAL; getting that right also helps a lot. Okay, the kernel part. How many of you are fluent with the Linux kernel as well? Okay. These are a couple of things which apply specifically to sensor drivers, though they're general concepts you can use in other drivers as well. While writing the sensor driver, the focus is on getting a very quick response — you don't want the user to be playing a game and have the device seem to hang; it feels like it has stopped responding when the events come in late. On the other hand, you don't want to poll so fast, or enable such high-frequency interrupts, that the CPU ends up servicing the sensor interrupts instead of doing the rendering and everything else.
For that, workqueues are a very good solution — if you are going to implement a sensor driver, I would recommend you go ahead and use workqueues. Implementing them is pretty easy: you define a workqueue in your driver, initialize it, and use it whenever you have to poll or activate your sensors. Here's how the whole thing works. Whenever an application needs sensor data, it has to register a sensor listener with the framework; the runtime conveys that registration down, and the framework talks to your kernel driver saying, okay boss, somebody has requested sensor data, you have to start sending it. Conversely, when no app is actively using your sensors, the framework will immediately tell you that there are no longer any listeners registered — no app wants sensor data — so you can quickly shut down all the related sensor hardware and the power consumption drops. So it's on demand: the driver turns the hardware on on demand, and the sensor devices aren't powered at all times. What you do is initialize a workqueue and a work_struct. The workqueue basically maps to a separate kernel thread, so as long as no event listener is registered, that thread just sits idle. As soon as somebody registers a listener and the sensor framework gives the kernel the go-ahead — there is a request, start pumping data — all you have to do is call queue_delayed_work(). queue_delayed_work() and queue_work() are the APIs for adding work to a workqueue; they take the workqueue, a work_struct, and, for the delayed variant, a delay. So queue_delayed_work() is a kind of deferred handler. As long as there is no sensor listener registered, the workqueue stays empty; as soon as a listener is registered, you queue one work item. The work_struct is essentially a pointer to the function which will read the hardware and report the data up — that's the handler function linked to the work_struct — plus there is that timeout. As some of you who develop applications might be aware, when you register a sensor listener there is a delay, or polling interval, associated with it. There are four delays: UI, game, fastest and normal — that's basically how often you want events coming to your app. On this hardware it ranged from roughly 60 milliseconds for the game delay up to 200 milliseconds at the slow end: register with the normal polling interval and you get about one event every 200 milliseconds. So that information, passed from the application all the way down to the kernel, is handled here: how frequently you schedule your work is determined by that delay. And inside the work handler, as soon as you report your data, you schedule the work once again: you check some flag which tells you whether polling is still required — whether one more poll is needed after this one — and the handler just keeps rescheduling itself.
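Here is a minimal sketch of that pattern, in the style of a Linux sensor driver. Everything prefixed mysensor_ is a placeholder; the bus-specific read, error handling and probe/remove plumbing are omitted:

```c
#include <linux/workqueue.h>
#include <linux/input.h>
#include <linux/jiffies.h>

static struct workqueue_struct *mysensor_wq;
static struct delayed_work      mysensor_work;
static struct input_dev        *mysensor_input;
static bool                     mysensor_enabled;
static unsigned long            poll_ms = 60;   /* delay handed down by the HAL */

static void mysensor_read_xyz(int *x, int *y, int *z);  /* bus-specific, not shown */

/* the function the work_struct points at: read the IC, push an input event */
static void mysensor_poll_fn(struct work_struct *work)
{
	int x, y, z;

	mysensor_read_xyz(&x, &y, &z);

	input_report_abs(mysensor_input, ABS_X, x);
	input_report_abs(mysensor_input, ABS_Y, y);
	input_report_abs(mysensor_input, ABS_Z, z);
	input_sync(mysensor_input);

	/* reschedule ourselves for as long as a listener is registered */
	if (mysensor_enabled)
		queue_delayed_work(mysensor_wq, &mysensor_work,
				   msecs_to_jiffies(poll_ms));
}

/* HAL enabled the sensor: a listener registered somewhere up the stack */
static void mysensor_enable(void)
{
	mysensor_enabled = true;
	queue_delayed_work(mysensor_wq, &mysensor_work, 0);
}

/* last listener gone: stop polling, then it is safe to power the IC down */
static void mysensor_disable(void)
{
	mysensor_enabled = false;
	cancel_delayed_work_sync(&mysensor_work);
}

static int mysensor_init(void)
{
	mysensor_wq = create_singlethread_workqueue("mysensor");
	INIT_DELAYED_WORK(&mysensor_work, mysensor_poll_fn);
	return 0;
}
```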
And once whichever listener was registered is released — say the app is done — a call will come down to you saying all the listeners are unregistered and you can now power down your hardware. At that point you flush the workqueue, cancel any pending work, and then you are free to go ahead and power things down. Okay, the next thing. Workqueues are one tool you can use, and then there is another thing about sensors: as I was saying, you need to be aggressive about reducing the power consumption on the device, so you need to be very aggressive about when you actually keep the hardware on. Instead of relying only on the standard suspend callback, you can use Android's early suspend, which is invoked just before the device goes into suspend — as a kernel driver you get that callback before the actual suspend happens. As a kernel driver writer you might also be interested in which input events you need to expose; the different sensors map to different input event codes, and the list of input events is in include/linux/input.h in the kernel tree. Then there are the board files of the specific device, which hold the device-specific configuration, and one interesting thing to note here is that the orientation of the device and the orientation of the sensor itself may not match. The sensor IC need not be mounted squarely at the front of the board — it might be rotated, stuck in some corner, or on the rear of the board. So naturally the axes do not map directly, and for this there is the so-called platform data, which is available to each individual device driver. When you implement your driver, it's a good idea to have this axis-remap capability present in the platform data, because it's specific to the device and not to the driver: the same driver can go into multiple devices, but the platform data lives with the device, in the board file.
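A sketch of that platform-data idea: the board file describes how the chip is mounted, and the driver applies that mapping before reporting events. The field names here are illustrative, not a standard kernel API:

```c
/* shared header between the board file and the driver */
struct mysensor_platform_data {
	int map_x, map_y, map_z;      /* which raw axis feeds device x, y, z */
	int sign_x, sign_y, sign_z;   /* +1 or -1 */
};

/* board file: on this board the chip is rotated 90 degrees in the plane */
static struct mysensor_platform_data board_accel_pdata = {
	.map_x = 1, .sign_x = -1,     /* device x = -raw y */
	.map_y = 0, .sign_y =  1,     /* device y = +raw x */
	.map_z = 2, .sign_z =  1,     /* device z = +raw z */
};

/* driver: remap before handing the values to input_report_abs() */
static void remap_axes(const struct mysensor_platform_data *pd,
		       const int raw[3], int out[3])
{
	out[0] = pd->sign_x * raw[pd->map_x];
	out[1] = pd->sign_y * raw[pd->map_y];
	out[2] = pd->sign_z * raw[pd->map_z];
}
```

Ship the same driver on a different board, and only the platform data changes.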
For the HAL there is a short link to a reference implementation, based on the Android Gingerbread release, for the Google Nexus S. You can always go through it, and if you are going to write an Android sensor HAL from scratch, that's the code you should take and modify. The sensor HAL code as such lives under device/<vendor>/<board>/libsensors in the Android tree, and when Android is built it generates a sensors.<board>.so, which ends up in /system/lib/hardware. So the things to do when you are going to modify your sensor HAL: first, update the sensor list. Once you get access to libsensors — that's the code base for the entire sensor HAL, and I won't go through each of the individual files for now; you can always look at it via that link — you update the sensors.cpp file, which contains the list of all the sensors supported, to list the sensors you are planning to support on your device. Then there is SensorBase.cpp: that contains the base class, and you derive your own sensor class from it for each individual sensor. If you are going to implement an accelerometer, you derive from SensorBase.cpp / SensorBase.h and create your own AccelerometerSensor.cpp. Take the gyroscope, for example: you derive a class from the base, and the constructor is practically the only thing you need to modify — you set the type of the sensor, the ID of the sensor, which says what kind of sensor it is, and the name, which is nothing but the input device name you registered when you created your kernel driver. And then there are the events which you obtain from the kernel driver and pass on to the framework. All this code is openly available at the link I shared; you can basically go ahead, copy that libsensors into your Android tree, make the appropriate modifications, and you should be able to get your sensors up, provided the kernel driver is in place. Okay, so that's the basic sensor bring-up.
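To give a feel for what that sensors.cpp list boils down to, here is a sketch using the C types from hardware/sensors.h, assuming the Gingerbread-era fields (newer HAL versions add more, e.g. minDelay). The part name and numbers are made up; the real ones come from your sensor's datasheet:

```c
#include <hardware/sensors.h>

static const struct sensor_t sSensorList[] = {
    { .name       = "ACME 3-axis Accelerometer",  /* hypothetical part */
      .vendor     = "ACME",
      .version    = 1,
      .handle     = 1,                 /* unique id, echoed back in events */
      .type       = SENSOR_TYPE_ACCELEROMETER,
      .maxRange   = 2.0f * 9.81f,      /* +/- 2 g, reported in m/s^2 */
      .resolution = 9.81f / 1024.0f,   /* one LSB */
      .power      = 0.25f },           /* mA drawn while enabled */
};

static int sensors__get_sensors_list(struct sensors_module_t *module,
                                     struct sensor_t const **list)
{
    *list = sSensorList;
    return sizeof(sSensorList) / sizeof(sSensorList[0]);
}
```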
There is one more thing I'd like to talk about: sensor fusion. There is a company, InvenSense, which does this. The idea behind sensor fusion is that different sensors have different limitations — none of them is complete on its own — but by combining them you get faster data samples, less noise in the samples, more accurate data, and you can enable advanced features like gestures. Think of something like character recognition: you write in the air with your device, and the strokes come up to the framework as gestures. Such things can be done, and are actually being done, but the default Android API doesn't allow for them, and it's very hard for an Android application developer to do it alone: you have to register multiple sensors, write all the algorithms to correlate the data — a very complex thing — and in addition write your own custom library to do all that, which is again hard. So this company, InvenSense, has come along and done the hard work: they provide it as a shared library, you just link your application against it, and you get access to all the advanced sensors, and the individual sensors themselves get better readings. Okay, now the last part: app development on Android. You might want to note down these links — they are good, live examples of Android applications developed from scratch to the end, complete walkthroughs. I'll be sharing the slides later, so you don't need to copy everything down. For the app developer there are a couple of things to keep in mind. First, the polling rate. You might be tempted to register your listener with the fastest polling rate, but that's a poor way to do it, because of the lower-level implementation — remember the workqueue implementation we saw above. If you register the listener with fastest, the kernel queues that work with a zero delay, which basically means that whenever the CPU is free, it will go ahead and try to read the sensor, poll the hardware, and do it all again. You simply don't need polling that fast, and it will start interfering with the active application. Most games, and even ordinary applications, register with the game polling frequency, which here is roughly 60 milliseconds — about 16 samples per second. If you are running a game, that means you get an input sample roughly once every two frames, which is pretty decent, and you still get a very quick response in the UI.
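As a sketch of what sane-rate registration looks like, here is the flow using the NDK's C sensor API (the Java SensorManager flow is the direct equivalent). The 60 ms figure matches the game-rate discussion above; passing 0 would be "fastest", which is exactly what you want to avoid:

```c
#include <android/sensor.h>
#include <android/looper.h>

#define GAME_RATE_US (60 * 1000)   /* ~16 samples/sec: one per couple frames */

static ASensorEventQueue *queue;
static const ASensor     *accel;

void start_sensing(ALooper *looper)
{
    ASensorManager *mgr = ASensorManager_getInstance();

    accel = ASensorManager_getDefaultSensor(mgr, ASENSOR_TYPE_ACCELEROMETER);
    queue = ASensorManager_createEventQueue(mgr, looper, 1 /* ident */,
                                            NULL, NULL);

    ASensorEventQueue_enableSensor(queue, accel);
    /* ask for one event every 60 ms, not "as fast as the CPU allows" */
    ASensorEventQueue_setEventRate(queue, accel, GAME_RATE_US);
}

/* the onPause() counterpart: unregister so the hardware can power down */
void stop_sensing(void)
{
    ASensorEventQueue_disableSensor(queue, accel);
}
```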
Next, calibration. There are two kinds of calibration the application developer has to do. The first is zero calibration, which is basically this: the default assumption is that the device is flat, but say the user is going to play a racing game on an Android phone — the natural position to hold it in is tilted like this, not flat. So you might want a single activity at the beginning of the app which says, okay, initiating calibration: hold the device, turn left, turn right, you're good. What you do there is ask the user to hold the device in their default position, and whatever acceleration values you get that are not zero, you use as offsets. You effectively remap the axes and use that pose as the default position, and whenever the device turns relative to it you get accurate events, rather than everything being measured against the factory-default flat orientation. The second thing you need to take care of is axis calibration. By axis calibration I mean this: devices do not follow one standard — there is a huge amount of fragmentation in Android. Certain devices are widescreen, certain devices are portrait; you have tablets and mobile phones, and there is no fixed way of knowing whether your app will be running in natural portrait or natural landscape. So if your app is going to support rotation, it's good to use the remapCoordinateSystem() API — if you go through the SensorManager documentation you will see the problem I'm talking about. The last thing I'll talk about now is wakelocks: when you register your listener, you can take a wakelock so that your device does not go into suspend or a low-power mode as long as your sensors are in use. It's recommended, and advised, that you use wakelocks in your applications so the framework knows the device is still needed.

Q: What's the impact of sensors on power usage — for example, haptic feedback?
A: Not the haptic feedback — the vibrator is not part of the sensors. By sensors I mean the accelerometer, gyroscope, light sensor and so on. Of all of them, the gyroscope is the most power-hungry piece of hardware — think of navigation, for example.

Q: So how do I optimize for that?
A: That's exactly what I'm describing here: register your listener at the last possible moment. Don't do it in onCreate() or onStart(); do it in onResume(), and unregister the listener in onPause(). Then whenever your app goes into one of the inactive states, the sensor listener is automatically unregistered and the message goes down to the kernel saying it's okay to power down the hardware responsible for the sensors. It's fine for the hardware to be switched on and off multiple times; what's not fine is keeping the sensor ICs powered when nobody is using them. And I've seen a few games where, if you're sitting and playing, it's fine — but say you're lying down, holding the phone practically inverted. That's where zero calibration comes in: whenever your app is launched, you need to decide what the default orientation is going to be. While the splash screen is up you can tell the user: get into your default position — if you are going to play lying down, lie down and hold it — and you take your samples there. I'm assuming a car racing game here; if your game has other motions, you can also ask the user to go to the extremes: this is the default, tap the screen; go to one extreme, tap the screen; go to the other extreme, tap the screen. When you get these three samples you use them as offsets, and whenever you get any sensor event you subtract the offsets, and you get the proper relative orientation.
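The zero-calibration idea, as a minimal sketch — capture the resting pose as an offset, then subtract it from every event. The names are illustrative, and a full solution would remap the axes rather than just subtract:

```c
static float zero_offset[3];

/* call while the splash screen says "hold the device in your play position" */
void capture_zero_pose(const float sample[3])
{
    for (int i = 0; i < 3; i++)
        zero_offset[i] = sample[i];
}

/* apply to every subsequent sensor event before the game logic sees it */
void relative_to_zero(const float raw[3], float out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = raw[i] - zero_offset[i];
}
```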
Q: What happens if you tilt forward or backward? The values become the opposite of what you'd get sitting up.
A: Yes, but if you do a zero calibration — if you recalibrate the axes themselves — then all the values you use in your app after subtracting the offsets will be relative to the new axes. There is a very easy way to do this: you can remap the coordinate axes, which is what I was saying — use that API and you get a very good zero calibration. There are a couple of other approaches too; I can't go into much detail here, but I'll be posting all the detailed technical material on those links, so you might want to subscribe to that feed.

Q: What about location without GPS?
A: The device has a compass sensor on board, and there is geomagnetic data available: at any point in the world, at any given time, the earth's magnetic field is a known number. So if you measure the magnetic field right now, right here, and say, okay, this is my field value, then using that huge geolocation table I can figure out what coordinates I am at. That's the gist of it. It's not really that accurate, because you might have some other interference around, but if you can somehow guarantee an interference-free zone, then you can actually get a fairly accurate longitude, latitude and altitude fix without using GPS.

Q: Is that table part of Android?
A: No, it's not in Android; it's an openly available geolocation dataset. For a given time and a given magnetic field value, it gives you a latitude and longitude. The slides will be available, and you can contact me at any of these forums — you can always drop in there.

Q: Is that magnetic field unique for each device?
A: No, it's unique to the location, not the device: as long as you are on earth, whatever device you use there, that's how you can figure out the device's location. That's the beauty of it — the geolocation data says that if you give me a time and a magnetic field, that field exists at that time only at this place. It's a bit like AGPS: one of the things AGPS uses is cell tower information — based on the tower, some data is sent to the servers, and the servers tell the device, don't search for all of the satellites, look only for these. There's a database of every single tower and of which satellites would be visible there at that particular time. It's a long shot, though, and accuracy is one of the problems, because magnetic interference is a huge problem — devices are never designed to be fully magnetically shielded. But there is potential here: GPS takes a lot of time to get a fix, and this kind of data could accelerate it, like a hot start.

Okay, we're running out of time, so we'll take the rest of the questions after the session — you can always get in touch with Chinmay. Thank you so much, Chinmay, for coming over and giving this talk.