My name is Kashif, I work at Directi, and I'm going to walk through some of my experiences over the last year of developing Android apps. Honestly, most people who have done professional Android development for a while will find a lot of this obvious; this talk probably works better for somebody who has just started. Before I begin, a little about our company, so you have some context about what app we're building. I work for Directi, like I mentioned, and we do some really exciting work; I'm working with the part of the business that does communication software. We think communication is largely broken, and we don't think there are good communication products on mobile devices — that's the part I've got. We write server components, desktop clients, web clients, all of that. We also have a stall here, and we're running an offer where you could walk away with an iPad 2, no strings attached — come by, you don't have to wait for me to catch you. Okay, the product I'm working on is called Talk.to. Talk.to is communication software, and we currently build it for Android, iPhone, BlackBerry, the desktop, and the web. The joke in the company is that we want to cover all platforms: if we can't reach you through any of these, we'll deliver the message in person — but somehow we'll deliver the message. We cover all kinds of services — Facebook, Gmail, Yahoo, MSN — all kinds of channels. Not all of this is completely developed; lots of it is still under development.
I'm going to talk to you specifically about our Android chat application and my experiences while building it. Here is a screenshot of the application. A stable version is available on the Android Market, and an updated version is going to be out soon. We also have versions available on the web and the desktop. The first thing I want to talk about — and I gave a whole talk on this subject at a barcamp, I think, last year — is that for good mobile apps you want to go native, not use things like Titanium or PhoneGap. That's the basic point. The reasons you don't want HTML5 or web-based frameworks: they're usually not complete — they cover some 80% of use cases, and the widget or utility you need is exactly the one they don't implement, and you'll find it missing. They're usually not very performant; there was a talk earlier today where the gentleman highlighted the same point. They're especially bad with things like scrolling, and we've had a lot of trouble with that — we spent a lot of time trying to make things fast instead of building the product. And across all of these solutions, Titanium and the rest, the code quality is horrendous; so if you ever have to open them up and modify something, it's a pain. I have that talk up on Slideshare if you're interested in more detail on why this route sucks. The other thing I want to talk about is debugging. When you come to Android and start with the official sources, there's the dev guide, and there are lots of great videos.
Most of them will tell you that you must use DDMS. But typically we found, once we actually started developing, that people fire up DDMS once a month, or close to a release. You should really use DDMS far more often than that. How many of you are doing Android programming right now? Yeah — and how many of you have just started? Okay, great. So most of the audience is doing Android programming, and I think about half of you have just started. DDMS, obviously, will tell you what objects are created in your application and what your heap — the process heap, essentially — looks like. But importantly, it will also show you thread leaks. You'll find objects that don't get collected because you're keeping a reference somewhere, and all the DDMS documentation talks about that. But you'll also find thread leaks. You'll find that there are some threads which just never die, because they're waiting on some lock that you haven't released, and you don't realize it because that part of the application isn't what you're working on — it's not what you're testing. If you use DDMS often to debug, you'll find those threads, kill them, and your app will become a lot snappier. Another thing: if you're an Eclipse user, you get MAT for free, and you start off on the right tool. But some IntelliJ users ask: what is MAT? MAT is the Memory Analyzer Tool — it takes the heap dump that DDMS produces and turns it into information you can actually use. When I first googled for a tool to analyze heap dumps, I found jhat. I used jhat for a really long time, and jhat, frankly, sucks compared to MAT. So even if you're an IntelliJ user, note that there's a standalone build of MAT available.
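To make the thread-leak point concrete, here's a plain-Java sketch (nothing Android-specific; the names are mine) of the kind of thread DDMS keeps showing you: it blocks forever on a monitor that nobody will ever notify, so it never dies.

```java
public class LeakedThreadDemo {
    private static final Object lock = new Object();

    public static Thread startWorker() {
        Thread worker = new Thread(() -> {
            synchronized (lock) {
                try {
                    // Waits for a notify() that no code path ever issues:
                    // the thread stays alive for the life of the process.
                    lock.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "leaked-worker");
        worker.setDaemon(true); // daemon only so this demo JVM can still exit
        worker.start();
        return worker;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = startWorker();
        Thread.sleep(200);
        // Long after its "work" should be done, the thread is still alive --
        // exactly what you'd spot in the DDMS thread view.
        System.out.println("still alive: " + t.isAlive());
    }
}
```

In DDMS this shows up as a thread stuck in `Object.wait()` that persists across screens; the fix is always the same — find the lock that's never released, or give the thread a shutdown path.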
You should use MAT instead of jhat. One really good reason is the dominator graph. When you're debugging and trying to find objects you've left behind, the dominator graph helps you measure the impact, in memory terms, of the object you're studying: it tells you how much memory that object is taking, plus how much memory everything reachable only through its references is taking — in other words, how much memory would be released if that object were collected. The dominator graph is not available in jhat, and I find it extremely useful. Another thing I noticed: when you start using Traceview, you'll realize startMethodTracing has two signatures. One takes just the file name you want the trace written to. How many of you are familiar with Traceview? Okay, very few hands, so I should explain what it is. Traceview is a profiler that ships with the Android SDK. You run it by calling a function in your code — you have to go and add this call — and as that code executes, the profiler records the method calls being executed in the emulator or on your device, then graphs them and shows you how much time various activities are taking. You should use the profiler — use the profiler, again — and Traceview is the profiler for Android. Now, when using it, most people — or at least we did — call startMethodTracing with just a file name, and then realize the trace never grows beyond 8MB. It just doesn't go further, and for a while that stopped us from looking deeper. There's an alternative signature in which you can extend the trace buffer from 8MB to something like 100MB, and that's when it becomes useful. 8MB is not really useful.
8MB is only a few seconds of running around. Another excellent tool we found while developing for Android is ACRA. ACRA helps you submit error reports — crash reports — from the device when an error occurs. For any uncaught exception, you can configure ACRA — you basically drop in the jar and add a few lines to your manifest — to submit the report either to a Google form or to your own custom endpoint. Every time an exception occurs while some user is using your app, you get the report: you get to know what device it was, what the stack trace was, and a lot of other things. You can just go through the stack traces and debug. Most people don't end up using ACRA, and I'm kind of surprised by that. It's available at code.google.com/p/acra. It's completely automated, and it's extremely customizable beyond simple automatic crash reporting. A third thing we found useful, especially in cases where somebody is using your app and you're not there to debug: whenever somebody internally in the company told us "I'm facing a problem", we'd tell them to go to a particular screen and long-press our logo on an item. They get a hidden menu and can send debug logs. We use ACRA to post those debug logs along with the state of the machine — you can tell ACRA "I want to know the state of this variable" and have it post all of that. That's extremely useful, and not something you'd typically think of. If you do it, it becomes much easier to debug most situations. [Audience question] No, it's for one application. I don't know if you can use one setup for multiple applications — I haven't tried — but you configure it per application, and it's mostly a matter of configuration.
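The "drop the jar and add some XML lines" part amounts to roughly this much manifest configuration (an ACRA-4.x-era sketch; `TalkApplication` is a hypothetical name for your `Application` subclass where ACRA is initialized):

```xml
<!-- ACRA needs to post reports over the network -->
<uses-permission android:name="android.permission.INTERNET"/>

<!-- Point the manifest at the Application subclass that initializes ACRA -->
<application android:name=".TalkApplication">
    ...
</application>
```

The destination (Google form key or custom endpoint) and any extra fields to capture are set on that `Application` class rather than in the manifest.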
No, we don't need to, because ACRA has an API for explicit error reporting. So you can catch exceptions yourself and report them through ACRA instead of letting the app crash — useful for errors that are worth knowing about but not fatal. And in those debug logs you send ACRA your log data, which is usually enough to debug with. The next point I thought was really important: those of us who come from developing for desktops or servers really don't think too much about memory, at least not constantly. But Android, especially on smaller, low-end devices, turns out to be memory-constrained. So whenever you're programming on Android, the thing you want to keep in mind is memory — and there are a lot of habits that work against you here. What tends to happen when you write desktop apps is that you don't mind storing state — things you've already fetched or already computed, it's all there in memory. All that state takes memory, and on Android, when your app crosses a certain threshold of allocated heap, or the system needs memory, your process becomes a candidate to be killed — and that hurts the user experience. So try to store only the bare minimum state. It's not something you do consciously on the desktop, but here: store the bare minimum, not everything you could possibly need.
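For state you can always recompute or reload, soft references (which come up next with the avatar example) let the VM reclaim it under memory pressure instead of you holding it forever. A minimal plain-Java sketch — the class and method names are mine, and on Android the loader would typically re-decode an image from disk:

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Cache whose entries the GC may silently drop when memory runs short. */
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();
    private final Function<K, V> loader; // how to rebuild a dropped value

    public SoftCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get(); // null if collected
        if (value == null) {
            value = loader.apply(key);              // e.g. re-decode from disk
            map.put(key, new SoftReference<>(value));
        }
        return value;
    }
}
```

An avatar cache would key on the contact id and hold the decoded bitmap as the value; only the rows currently on screen end up strongly referenced by their views, so everything scrolled off is fair game for collection.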
The other thing: we like using statics — for singletons and things like that — and what ends up happening is that statics don't get collected. A static doesn't get collected until its class is collected, the class isn't collected until its class loader is collected, and that basically never happens. So anything you put in a static stays in memory. Things you should not put in a static: Context objects, anything from the UI framework — those will be leaked. And if you are keeping something in a static for some reason, remember to go over all the code paths and null it out when you're done, because otherwise it just sits there forever.

Soft references are something you typically don't use in desktop code. Java has soft and weak references in addition to regular ones, and what soft references do for you is that whenever the VM is running short of space, softly-referenced objects are collected first. For example, in the screenshot earlier there was a roster, and it had avatars for users. The reference to an avatar bitmap is a great candidate for a soft reference, because only the avatars currently visible need to be in memory — the hundreds that might follow them don't. If you use soft references, then as you scroll, the adapter asks for the image to be rendered, you fetch it from disk or wherever, and show it; as it scrolls off, it's only softly referenced, and it gets collected when memory is required again. Not something you'd typically do on a desktop client — so do use soft references.

Garbage collection is a very interesting area, especially on Android. One of the things I'm doing right now — I'm not quite finished — is writing a script that analyzes logcat output. As you use your application, logcat carries garbage-collection lines — GC_CONCURRENT, GC_FOR_MALLOC, whatever — tagged with your process ID. The idea is to trap all that GC output, figure out which screen or activity I'm in by augmenting the output with my own logs, and see where garbage collection spikes. Wherever it spikes, that's a place where we're creating too many objects and doing other things we shouldn't. Why is this a problem? Because through Android 2.2, garbage collection is completely stop-the-world: while garbage is being collected, your app is not responding — your process simply isn't running. So, for example, I find in some areas we create far too many objects just to render a screen, and the GC churn shows it. From 2.3 onwards there's partially concurrent collection — some of the full collection is made concurrent — which is slightly better. But GC still slows you down, and it's not something you watch when doing desktop development; you're not sitting there asking "what's my garbage collection doing?" You should, when developing for Android.

Related: modern JIT compilers and VMs make object allocation fast, so "when in doubt, make an object" is the philosophy we all follow. That's great for desktop; it's not so great for Android, because you have limited memory and GC pauses. It might be better to write your code in a fashion where you reuse objects instead of recreating them — useful, and counter-intuitive.

Also, there's a callback, onLowMemory(): it's a method the Android OS calls, whenever it's running short of memory, on services and applications that are candidates to be killed. If, in that callback, you release memory — bitmaps, caches, things like that — there's a likelihood your process will not be killed. People don't actually use it much; it's not that easy to use well, and typically you won't have too much to release anyway, but you can release bitmaps and the like.

Memory tip number four: don't use services. Even if you think you need one — "we're a chat client, it needs to run constantly for messages to come in" — ask instead: how can you write a chat client that dies after a message is handled and somehow wakes up when the next one arrives? It's possible on Android. Services are less likely to be killed than your activities, but a process that isn't running at all has nothing to be killed — there's no downside. We haven't completely migrated to a service-less design yet, because there are some cases we're still working on, but the idea is to use C2DM — the push technology Google provides — and AlarmManager, which is also available to you in Android. These two let the system tell you when your process needs to come to life to process something. A quick example: a typical chat client would keep running with a socket open, receiving messages as they arrive on that socket and processing them. What I'm suggesting instead: once the user has what they came for, the process can be killed. You register with the Google push service, and you write your server component so that whenever it gets a message, it raises a push for your phone through that service. The push arrives on the phone, Android delivers it to you and activates whatever little part of your app is needed — if it's a message, you show it, create a notification, whatever. And if you need to wake up periodically, not based on some event out on the network, use AlarmManager: it can wake you up and call whatever you need on a periodic basis. So it is possible not to use a service at all, and it's best if you don't — it makes your application extremely robust.

Next: you can't focus too much on snazzy devices. When we order phones for testing, we order all the phones that are current in the market and cool, and they all have better specs than what our users are typically using. You need to focus on ordinary devices. A couple of things happened to us here. Because we had good phones — the Samsung Galaxy S II, for example, that's the phone I had — I tested the app, it ran beautifully, and I said "this is too easy", and we released. Then I found a lot of users with that phone using the app and thought, wow, a lot of people actually use the S2. But that's not what happened. What actually happened was that the people who didn't have an S2 left our app rather quickly, because it only ran well on those high-end devices. We'd fooled ourselves into thinking "only S2-class devices matter, we seem to run alright, we're fine." The reason this is important is that Android is a diverse platform; if you're going to make an app available on it, you have to be very conscious of the choices you're making.

For example, the Android emulator is a true emulator. Contrast this with iOS: Apple ships a simulator, which actually runs your app natively on your Mac's CPU at full speed. That's why iOS apps look prettier in demos — if you saw the apps demoed today, they were a lot smoother, and the reason is they were running on the full CPU. The Android emulator, by contrast, emulates the device's instructions using a portion of your CPU, and it turns out to be slower than even most of the low-end devices currently in the market. So your performance on the emulator might look a little sad, and it can be far from real performance either way. There's no problem with the emulator as a functional tool — use it — but to judge performance you need devices, and you need low-end devices, in the various resolutions and DPIs out there. Your layouts should scale well across all the screen sizes: whether it's a small device with a keyboard on it — it works, your design works on that — or a big S2-class screen. And you don't have to support everything: you can consciously decide to support only devices above or below some bar. It's just a decision — one that most of us don't even think about most of the time, but you need to do this kind of thinking when you're releasing for Android, because anything from a 7,000-rupee Galaxy Y to a 30,000-rupee flagship could be running your app.

On customization: our designers would ask us, "can you do this? can you do that? can you make it look like this, like that?" and we'd say, "wait, we'll check and get back to you" — and checking each of those questions took a few days. Let me save you those days: almost everything — no, actually, everything we have come across — is customizable. Android allows you to customize it; all you have to do is the work. So when you're developing for Android, tell your designers they can do pretty much anything they want — you could even ship a custom keyboard. If you want to customize everything, that's completely possible.

Databases: databases are generally slow here. SQLite on a smaller, lower-end Android device is very slow — actually very slow. You see this really clearly when you're not doing it right. When we started, one of our biggest performance problems was scrolling. And our CEO is sitting right there with some obnoxious number of people that he knows — some 8,000, 10,000 of them — and he wants them all on his roster. We don't know that many people, so we were pretty happy testing with a couple of thousand contacts. Then we tell our CEO: "Bhavin, you should test the app, try it out." The first thing Bhavin does is pick the lowest-end device he can find — an HTC Legend; that's how obnoxious he is — and he runs the app like that.
And the app doesn't run well — from what I heard, the word was "pathetic". The reason: he had the largest roster, the device was low-end, and we had been building for medium and high-end devices without actually thinking about size and scalability. So we had things to fix. One thing we were doing wrong was treating SQLite like SQLite on a desktop: we need to store an item, we write it; we need it, we read it. We improved performance drastically just by doing stuff in batches. Strictly speaking, batched writes are not possible in SQLite — there's no true batch insert. But there's a lovely utility, documented somewhere — I'm not sure how prominently — an insert helper in DatabaseUtils for database inserts. What it gives you is multiple inserts with the statement compiled once and reused. The inserts are still single — it's not a batch insert — but it's noticeably better than individual writes. Another thing we learned: if there is anything happening on the UI, please don't be doing anything on the database. This is not intuitive — you don't naturally think "this is what I should be doing" — but it is what you should be doing. When anything is happening on the UI, do nothing else: don't run other threads, don't do other work. On the lower-end devices, that's what's going to give you performance. If you start writing to the database, even on a different thread, you will notice an occasional jerk on the UI. So don't do that. Write your code in a fashion where work gets queued and deferred, and do that work later — when that activity is gone, when not much is happening on the UI, or when the user is working somewhere else.
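The "queue work, write later" idea reduces, in plain Java, to a buffer like this (the names are mine; on Android the flush body would run on a background thread inside a single transaction, through a compiled statement or the insert helper):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Collects rows while the UI is busy; writes them in one go when flushed. */
public class WriteBuffer<T> {
    private final List<T> pending = new ArrayList<>();
    private final Consumer<List<T>> flusher; // e.g. one SQLite transaction

    public WriteBuffer(Consumer<List<T>> flusher) {
        this.flusher = flusher;
    }

    public synchronized void add(T row) {
        pending.add(row);         // cheap: no disk I/O while scrolling
    }

    /** Returns how many rows were written. Call when the UI goes quiet. */
    public synchronized int flush() {
        if (pending.isEmpty()) return 0;
        List<T> batch = new ArrayList<>(pending);
        pending.clear();
        flusher.accept(batch);    // single batched write
        return batch.size();
    }
}
```

In our avatar case, `add()` would be called as images arrive over the network and `flush()` once the list stops scrolling.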
To be clear, I'm saying avoid services if you can; the concerns about the app being started and staying registered can all be taken care of without one. Now, someone here pointed out a tension between two points I raised: "don't keep too much stuff in memory" versus "batch things up". If you're going to batch, you are going to keep stuff in memory. So you have to use some heuristic, based on your own app, to figure out how much to batch and how much not to. Both pulls are real, in my experience; it's a balancing act. Question: I would assume that in a multithreaded app, while the write is blocked on disk, why should it impact my processing on the UI side? Why does it hurt UI performance? — The question is: if another thread is doing the write, why should the UI suffer? It turns out, on low-end Android devices, that even if another thread is doing it, the CPU gets utilized quite a bit, or the I/O channel gets saturated, and you don't get the throughput you want. On a desktop you never even notice: you put work on a different thread, and your CPU and disk are fast enough to hide multiple threads from you. But you end up seeing it on Android. Here's where we first hit it. When we started doing batched writes, we would fetch people's avatars and write them — we thought we'd batch them up and write them in one go. So you ask for eight contacts' avatars, you get the eight, you write them — and the user is still scrolling on the screen while your batched write runs, and you could see jerks. That shouldn't happen if it's on a different thread.
On a desktop, you wouldn't think that could happen. But we saw jerks, and it was not at all obvious to us what the hell was happening. So we experimented with the batching and tried to figure it out, and we realized that when a write to disk coincides with heavy UI work — work that is fully CPU-intensive — you get a problem. Question: So you're assuming your UI thread actually needs 100% of the CPU at that point? Because the write operation needs only a little CPU before it hits disk, at which point no CPU is required. — Scrolling is very CPU-intensive, despite the GPU. — Correct, but it still doesn't need 100%. — The background services running on Android are all consuming some amount of CPU doing their own stuff too. So yes: if you're reaching a fairly high point of CPU saturation, you start seeing this kind of thing. Again, this is on low-end devices. You will not see this on an S2 — you'll sail right past it. Try a Samsung Galaxy S, the first one, and you can see these issues. So when the user is interacting with the UI, that's not a good time to be doing your database writes. The user interacting with the UI is what I care about — so defer the database writes for later. For example, batch up what the user's scrolling has requested, and the moment the user is done scrolling, go ahead and write it. Don't do it on a different thread at the same time. — In effect, what you're saying is: while UI processing is going on, don't do any other CPU activity? — Exactly. It has nothing to do with writing to the database as such: don't do any CPU-heavy activity that could starve the UI thread.
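The scroll-aware deferral we ended up with can be factored as a pausable background worker; a plain-Java sketch under my own names — on Android you'd call pause() and resume() from a scroll listener on the list:

```java
import java.util.concurrent.LinkedBlockingQueue;

/** Single background worker that can be held off while the UI is busy. */
public class PausableWorker {
    private final LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final Object gate = new Object();
    private volatile boolean paused = false;

    public PausableWorker() {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    Runnable job = queue.take();
                    synchronized (gate) {
                        while (paused) gate.wait(); // sleep while scrolling
                    }
                    job.run();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "db-writer");
        t.setDaemon(true);
        t.start();
    }

    public void submit(Runnable job) { queue.add(job); }

    public void pause() { paused = true; }               // user started scrolling

    public void resume() {                               // user stopped scrolling
        synchronized (gate) { paused = false; gate.notifyAll(); }
    }
}
```

This is the "slightly ugly" shape I mention below: the UI explicitly gates the background work, rather than trusting the scheduler to keep the two out of each other's way.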
— So you end up with quite a complicated pattern, right? I mean, the moment someone starts scrolling, you have to notify or pause your background work, maybe cancel a task; the moment they stop scrolling, you wake it up again. — Yeah, yeah. And sure, most CPU activity is brief, and these are the easy cases. Essentially: when something is happening on the UI, you don't want to be doing too many other things. Defer them until the UI has stopped doing what it's doing, and then go ahead and do them. Does that clarify it? And yes — we do have some slightly ugly code doing this. We're not pleased about it, but it helps performance, and what matters is that the user feels the app is snappy. We really believe fast-and-snappy is a feature, even in this age — people use Chrome for no real reason other than it being really fast. So we want to put that kind of thing into the app. — [Question about interactive search: do you defer the indexing while the user is typing?] — I can't hear you clearly; please hold the questions to the end and I'll answer that then. Another thing I recommend knowing: we all know mobile networking behaves badly, but it really behaves badly. A couple of things I learned just today — there's a gentleman here who works, I think, on a call-processing team or something, and they obviously build for Android.
He pointed me to some research in the spirit of what Steve Souders — the guy who worked at Yahoo on making websites fast — brought about. The gist: the radio is expensive. When you're on GPRS, your data comes through the radio, and after about 10 seconds of inactivity, in order to save battery, the device demotes your radio connection to a dormant state — it turns down the power, is one way to think of it, and consumes less. But then, if at the 11th or 12th second, just after the demotion, the user wants to use the app and do something, you pay a power spike when the radio link is brought back up. That spike is fairly large when the link is promoted from dormant; then it settles at a level and stays there for a while, and if it sits idle at that level for another 10 seconds or so, it's demoted further — and bringing it back from there costs another spike. So if you're in a chat screen — the user is actually chatting with someone and the conversation is live — send a ping packet or something to keep the network alive so the radio doesn't get demoted. That's one use of this information. The other is that once you start looking at things like this, you realize where your power consumption is going. One thing to do to reduce power consumption is multiplex: don't open many sockets; amortize whatever sockets you have. For example, if you have a socket that talks to your server, you can substitute at least the incoming path with Google's C2DM push service. The way that works is that Google keeps one socket open on your device all the time, shared across multiple applications. If you use that socket and don't create another one, you're not spending extra battery.
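The keep-alive ping for an open chat screen is just a small periodic task. A sketch, with my own names and interval — in practice you'd tune the period to sit under the carrier's demotion timer, and stop it the moment the screen goes away:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

/** Sends a tiny ping while a chat is on screen so the radio isn't demoted. */
public class KeepAlive {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "keepalive");
                t.setDaemon(true);
                return t;
            });
    private final Runnable ping;   // e.g. write one whitespace byte to the socket
    private ScheduledFuture<?> task;

    public KeepAlive(Runnable ping) { this.ping = ping; }

    /** Call when the chat screen becomes visible. */
    public synchronized void start(long periodMillis) {
        if (task == null) {
            task = timer.scheduleAtFixedRate(ping, periodMillis, periodMillis,
                                             TimeUnit.MILLISECONDS);
        }
    }

    /** Call when the user leaves the screen -- let the radio sleep again. */
    public synchronized void stop() {
        if (task != null) { task.cancel(false); task = null; }
    }
}
```

Note the trade: pinging burns a little battery continuously in exchange for avoiding the promotion spike mid-conversation, which is why you only do it while the user is actively chatting.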
And you can only do that if you're using push, so you should all use push. If you still need a socket of your own, you should still multiplex. I'll give you a use case. Say a user has multiple accounts in our application. We could create one socket per account to our XMPP server and chat over each. But the better thing to do on mobile devices is to take one socket and put all of those streams, three or four or however many accounts they have open, onto that one socket, and just add an attribute to your XML to indicate which account a message belongs to. Don't use multiple sockets. Again, this is not something you'd think of when you're doing desktop programming.

Also, prefer native TCP. We tend to build things up in hard and complicated ways, but TCP is fast and simple, and you don't have to write any plumbing for it; the support is right there in the language. Don't build layers on top of TCP; just use it plainly. It's fast and it works well.

The last set of tips I have is around monitoring and similar things. A couple of tools we found very useful. One caveat first: Google Analytics, while it claims to be tailored towards Android and iPhone analytics, is basically useless, so don't spend time on it. I think somebody is going to give a whole talk about the tool I'm recommending, which is called Flurry. It gives you a lot of information, and I would recommend that you use it. It has an SDK for Android, so you barely have to write any code; you drop it in and it sends the data for you. You can monitor custom events, like: are people swiping through my app? I monitor that, and I see that about 0.01% of people swipe through my app, so I can take that information back to my UX people and say we need to do something about it. All of this matters because you're not standing there watching while users use your app. So Flurry is very useful.
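Going back to the multiplexing advice above (one socket, many accounts), a minimal sketch looks like this. The framing, an `account` attribute on a made-up `<message>` element, is purely illustrative; a real XMPP deployment would define its own extension, and the server would demultiplex by that attribute.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

// Sketch: several chat accounts share one stream instead of one socket each,
// so a single TCP connection (and a single radio wake-up) serves them all.
// Class name and XML framing are hypothetical.
class MultiplexedWriter {
    private final Writer out;

    MultiplexedWriter(OutputStream stream) {
        this.out = new OutputStreamWriter(stream, StandardCharsets.UTF_8);
    }

    // Tag each message with its account id and write it to the shared stream.
    void send(String accountId, String body) throws IOException {
        out.write("<message account=\"" + accountId + "\">" + body + "</message>");
        out.flush();
    }
}
```

The battery win is that the radio serves one connection's traffic in one burst, rather than waking up separately for each account's socket.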
Another excellent tool which isn't talked about much, and this again ships with the SDK, is MonkeyRunner. How many of you have heard of MonkeyRunner? Good, that's about 40% of the audience. All of you should be using it. MonkeyRunner allows you to automate, or emulate, a user using your app. Basically, through the ADB bridge you're able to send commands and make the app behave in a certain fashion. You can use that as a basis for building automated test tools like Robotium, but that's just one kind of use. If you can automate your flow through an app, you can do a lot of interesting things. So do look at MonkeyRunner.

That's what I had; I hope some of you found it useful. Are there any questions I can answer? You can also reach me at my Twitter or email address. And I hugely recommend that you come by our stall; we have a lot of openings, we have a lot of bright people, and we do some exciting work.

A question from the audience: there is actually a limit on C2DM, 200,000 messages per day, and you have to apply if you cross that limit. So how do you replace a persistent service with it? You've got to be fairly brave to rely on that. Well, I don't know if that number is accurate; I had heard 100,000, and I haven't found a published figure. The questioner says it's on the Google site, updated to 200,000 now. Right, so there is a limit. It's not absolutely real time either; there are problems with C2DM. For example, if there are four apps using C2DM and one of them has a message, it may not send it immediately; it sometimes waits for other apps to also have messages and then sends them in one go. They bundle things, they delay things. Yes, that's all true, and you should consider it. I'm only saying it's possible.
I mean, C2DM lets you avoid a persistent service in cases where you'd typically think you need one. The questioner says: for a chat app, you cannot push the chats themselves using C2DM. No, no, you don't push the chat. You push a notification that there's a chat pending on the server, and then you open a socket and fetch the message. But yes, it will affect real-timeness; that's one of the reasons we don't rely on it completely, because of the user flow. Still, it is possible. And you may not be writing a chat service at all; if you're not, you may not need a persistent service either. Somewhere between that and not managing a connection yourself, you should find a solution. Any other questions?

Next question: you said that while a UI operation is happening, it's good to avoid database access. One use case is interactive search: the user keeps typing text and you search the database on each keystroke. Have you tried anything like that? Because that's pretty much UI plus database at once. We have search like that, but we haven't had to optimize it, so my honest answer is no, I haven't tried exactly that. We do have incremental search, and we haven't applied that deferral optimization there because we haven't found it to be a performance problem. Is it fine as it is? Yes, it works fine, no issues. It's a local search: you're searching the contacts you have on your roster, and in that particular use case it's already fairly performant.

I think the earlier statement was that if the UI operation being performed is intensive, then it's preferable not to do database work at the same time. Updating a list with four or five items isn't an intensive UI operation. Scrolling is intensive; scrolling through 8,000 people in a roster on an HTC Legend is intensive UI. That's where this really matters.
And in that case, for example, the contact roster is kept completely in memory, because we needed that. So the search is very cheap, there's no database work at all, and therefore that particular optimization has no value there. Any other questions?

A comment from the audience: it was a great talk, and I think the most contentious thing you said was the UI-and-database issue. Can you say any more about database optimization, anything else you might offer? To be honest, not much, because there isn't much we've done beyond what I've already described. Nothing I can think of that would be a useful takeaway. The questioner continues: all this about the UI being a problem while you're using the database might just go away once you optimize the database. Are you talking about optimizing the schema? No, more like keeping transactions in memory using the journal mode, or turning on asynchronous rather than synchronous mode, or using indexing, for example. Yes, in this particular case there was a case for indexing; there's always a case for indexing for performance. But I didn't find that to be the problem, because our issue was data coming in from the network, which means writing into the database, not so much reading from it, and indexing helps on reads. The write path is significantly slower than the read path. So beyond what I've described, no; maybe there are optimizations there.

It's 3:15 and we have to take the break, so we'll take one more question. This one is about Android multithreading. I have seen AsyncTasks as well as Handlers. What I have observed is this: I run an AsyncTask in an activity, that activity completes, and I move to another activity where I start another AsyncTask.
I have already cancelled the AsyncTask in the first activity, but when I move to the other activity and start another AsyncTask, the previous one is not closed; it is still running. And the same thing happens with Handlers: I use a Handler in one activity, move to another activity and use another Handler, and even though I stopped the first Handler, messages still reach the Handler that has already been stopped. I'll open the question to everyone, but on AsyncTask: there is no guarantee that when you cancel it, it will stop immediately. There is another API, isCancelled(), which returns a boolean telling you whether your cancellation request has taken effect, and your background work needs to check that function. Handlers may be in the same situation; I don't remember the details of Handlers, so maybe somebody else can answer that part, but for AsyncTask I can.

An audience member adds: I would avoid AsyncTask altogether. The thread pool it uses only has one thread in it, so with AsyncTask you can essentially do only one thing in the background, and everything else queues up behind it. I'll repeat that, because I don't know if everyone heard: he is recommending not using AsyncTask at all, not just because of the boilerplate inherent in using it, but also because AsyncTasks are run by a single-threaded pool, which means you can only run one AsyncTask at a time. What he does instead is create an unbounded thread pool using Executors, run the work with that, and then use a Handler to shove results back onto the UI thread.

All right, that's the end of this talk. Thank you for being here. Now we have a break, and the next session will start at 3:45. Thank you.
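The Executors-plus-Handler alternative mentioned in that last answer can be sketched as follows. On Android, the "post back" step would be `handler.post(runnable)` with a Handler bound to the main Looper; here a plain callback interface stands in so the sketch runs on any JVM, and all names are made up for illustration.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: a cached (effectively unbounded) thread pool for background work,
// plus a callback for delivering the result. Unlike a serial AsyncTask
// executor, this pool runs many tasks concurrently. Hypothetical class.
class BackgroundRunner {
    interface UiCallback<T> {
        void onResult(T result);
    }

    private final ExecutorService pool = Executors.newCachedThreadPool();

    <T> void run(Callable<T> work, UiCallback<T> callback) {
        pool.submit(() -> {
            try {
                T result = work.call();
                // On Android: uiHandler.post(() -> callback.onResult(result));
                callback.onResult(result);
            } catch (Exception e) {
                e.printStackTrace(); // a real app would surface the error
            }
        });
    }

    void shutdown() {
        pool.shutdown();
    }
}
```

Cancellation remains the caller's job here, just as with AsyncTask: long-running work should still check an interrupted or cancelled flag, since submitting to a pool doesn't make the work stoppable by itself.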