Good afternoon and welcome to Camera 2.0. Before we get started, I just wanted to get a feel for the audience. How many people here work with the camera APIs? How many people have tried to bring up a new image sensor on hardware? And what are you looking for, any particular interests? Is anybody here totally new to cameras? Excellent. So we'll try to cover it so that we give you the background first and then deep-dive into Camera 2.0, which is the new hardware interface layer.

Before we start, welcome once again. I'm Balwinder Kaur and I work at Aptina. I've been there about a year and a half, and I've been doing Android since 2009; I was at T-Mobile before that. I have a lot of background in mobile software and apps, but not so much in camera until I came to Aptina. I used to think, oh, it's this tiny little thing and it takes a picture, but I'm quite amazed at everything that goes into it and how complex it is. Aptina builds image sensors; we have our image co-processor, and we also have our own Android camera stack. Some of the things we'll be talking about are experiences from bringing up our own sensors. And I have Ashutosh, who's flown in from India just to be here with us today.

Hello, good afternoon, everybody. Welcome again to this talk. I'm Ashutosh; I've been working with Aptina and Android for the last three years. Prior to that I was at Samsung Electronics, where I worked on a range of multimedia devices. So yes, we will be talking about Camera 2.0. Thanks, Ashutosh.

Okay, so for our agenda: we'll first talk about the camera use cases. Since this is an embedded Android summit, we're not going to focus too much on the API and SDK side, but we'll go briefly over what the different classes are and what's available to an application developer. Then we'll focus a little on the camera service. With Jelly Bean 4.2, the camera service got re-architected in a big way. There are still no implications at the application layer or in the APIs that are available, but we will talk about what the new architecture is. After that, I'll hand it over to Ashutosh, and he's going to cover everything that's below the HAL: the new interface, the architecture of the Camera 2.0 HAL and what it looks like, and the device drivers. Then we'll talk about some of the challenges we typically run into whenever we bring up a new image sensor on a new platform. Lastly, we'll talk about some of the emerging trends in this industry and where we think the future is going. Hopefully we'll have time left for Q&A at the end.

So what are the prominent use cases? Very simply, once you have a camera on a device, you need to be able to do a live preview, a viewfinder basically. The preview path typically goes straight from the image sensor to the display subsystem; it doesn't come back up to the application. But sometimes developers want to do things with it, so there's also a use case for delivering a copy of each preview frame back up to the application. Then we have the ability to capture a frame, which is the most common use case of an image sensor. I'll also talk a little about the emerging field of embedded computer vision, where image sensors are used not just for getting images but also as a means of getting information, especially for context-aware phones.
Finally, we have video recording of a camera stream. Then there are the secondary use cases. Think of everything a point-and-shoot camera gives you for finer control: what are the different scenes available, can I set the scene, is it a sports scene, is it a night scene, can I put filters on it? A new feature that came out with Ice Cream Sandwich was the ability to take a snapshot during video recording. Then we have the different event callbacks: the shutter was clicked, so if the application developer wants to do something special at that moment, they can; focus was achieved; and so on. These are the callbacks the camera subsystem provides to the application developer. And finally there are the information-related use cases. As of today there are very few classes and very little metadata provided to the application developer. The most common is the Face class, which will tell you how many faces were detected in the scene, what the confidence level is that it is a face, where the eyes are, where the mouth is. So, metadata about an image, but like I said, it's very limited. With 4.2 there's a lot of plumbing that has been done, and a lot of emphasis on being able to provide metadata right up to the application layer.

There are limitations with this camera API. If you look at any of the Android phones, HTC, Samsung, you'll see their camera applications do a lot of things there are no public APIs for. For example, burst-mode photography, which is very common: you could have continuous burst, where you hold the shutter button and it keeps taking pictures until you release it, or a timed burst, where one press of the shutter takes, say, three shots in succession. There's no support for panoramic shots. There's no frame metadata available: if a frame was taken, what exposure was used, what focus was used? None of that information is available right now. And there's no per-frame control of the camera.

To back this up a little bit, behind this whole Camera 2.0 idea there's a lot of research that was done at Stanford; you can search for FCam or Camera 2.0, and they describe a lot of the use cases for what they would like to do with a camera. One example is HDR, high dynamic range. I'm sure you've taken pictures where something in the shade comes out unclear while something in direct sunlight is blown out. For high dynamic range, you take multiple shots at different exposures and then stitch them together. Same idea with flash/no-flash: you could have a scene where a person is sitting in the dark with lights in the background. You want the flash on the person, but you also want those lights behind them, so you combine a flash frame and a no-flash frame. There's a whole range of applications that can only be achieved with per-frame control: for this frame I want the exposure to be this, I want the gain to be this.
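Just to make the per-frame control idea concrete, here is a tiny sketch. To be clear, none of these names are a real Android or FCam API; every identifier here is invented purely to illustrate what "one set of settings per frame" means:

```c
#include <stdint.h>

/* Hypothetical sketch (NOT a real Android API): what FCam-style
 * per-frame control might look like. All names here are invented. */
struct frame_request {
    int64_t exposure_ns;   /* exposure time for this one frame */
    int     iso_gain;      /* sensor gain for this one frame   */
    int     flash_on;      /* fire the flash for this frame?   */
};

/* An HDR burst: three requests at different exposures, queued
 * back-to-back so the sensor applies one per frame. */
struct frame_request hdr_burst[3] = {
    { .exposure_ns =  1000000, .iso_gain = 100, .flash_on = 0 }, /* under  */
    { .exposure_ns =  4000000, .iso_gain = 100, .flash_on = 0 }, /* normal */
    { .exposure_ns = 16000000, .iso_gain = 100, .flash_on = 0 }, /* over   */
};
/* enqueue_requests(camera, hdr_burst, 3);  <- invented call */
```

The three frames would then be merged into one HDR image; the point is that each frame carries its own settings, which the function-based Camera 1 API simply cannot express.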
Then, whatever you get back from the camera subsystem today comes through three callbacks: a postview callback, a JPEG callback, and a raw callback. I still haven't seen any device where you can actually get the raw callback; I could be mistaken, but I haven't seen one, so it's not very common. JPEG is the one that is common, but you can do a lot more if you can get the raw information from an image sensor. So all of these things are still missing.

Now, just to give a little overview of what APIs we currently have: there are six classes. Of these, the most interesting ones are Camera and Camera.Parameters. There are eight callback interfaces, and the one at the bottom is in yellow because it was the latest one, introduced with 4.2; the rest are all from 4.1 and earlier. Camera gives you the ability to open a camera, close a camera, access the camera controls, set up a preview, and of course take a picture. The Camera.Parameters class is a huge class with a lot of API methods. To help make sense of it, there are three categories that most of the APIs fall into. First is the mandatory feature set, with methods typically named like getSupportedPreviewSizes() and getSupportedPreviewFormats(); they have "supported" in the name. Every camera on an Android system needs to provide certain mandatory features, and there are gets and sets for those. Then there is the optional feature set, things like whether video stabilization is available. Now, just because the APIs exist doesn't necessarily mean the whole stack is functional: you could have hardware that doesn't support a feature, or hardware that supports it but where the plumbing hasn't been done in the camera HAL, so it isn't available to the end user. These are runtime calls, so an application developer would typically ask at runtime, is this feature supported, and if it is, enable whatever they want from the UI perspective. Finally, there's also a dumb pipe available, and we use this a lot internally to provide our own features. For example, I mentioned that burst is not available, but we have our own burst extension, and we provide other missing features the same way. The dumb pipe is basically just string parameters: you can query the system for what strings are available, but it's very OEM-dependent. Somebody really has to know the implementation of that camera HAL; it will not be a generic solution. And then there are the rest of the camera classes, which I already mentioned briefly.

Now, moving on to the Camera 2 internals. One of the first big disclaimers as we move into this section: the only documentation we have is the Android open source code itself, picking things up by trying to figure out what the design is, looking at the method names and the comments that are there. There's no document that defines all of this, so that's one thing to be cautious of. Second, right now there are no APIs exposed at the SDK level, so the next version of Android could very well change some of these. Just something to keep in mind in case you're going to go back and refer to this part later. When 4.2 got released and I first started looking at it, I saw all these references to Camera 2: there was a camera2.h, there were a lot of classes, and I started wondering which application was using them. It turns out there is no application using these features as of now, not even the closed-source Photo Sphere app that Google released with 4.2.
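Before we dig into Camera 2, a quick aside to make that Camera 1.x dumb pipe concrete. At the HAL boundary the parameters travel as one flattened "key=value;key=value" string, and the HAL fishes its vendor extensions out of it. This is just a sketch: the key name and the configure_burst() helper are made up for illustration, though the const char * string itself is how camera.h's set_parameters really works:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: pulling a vendor extension out of the flattened Camera 1.x
 * parameter string, which arrives in the HAL via
 * camera_device_ops_t::set_parameters(dev, const char *parms).
 * "aptina-burst-count" is an illustrative, made-up key. */
static int get_int_param(const char *parms, const char *key, int *out)
{
    char needle[64];
    snprintf(needle, sizeof(needle), "%s=", key);
    const char *p = strstr(parms, needle);
    if (!p)
        return -1;                        /* key not present in the pipe */
    *out = atoi(p + strlen(needle));
    return 0;
}

/* Usage inside the HAL's set_parameters():
 *   int burst;
 *   if (get_int_param(parms, "aptina-burst-count", &burst) == 0)
 *       configure_burst(burst);          // vendor-specific plumbing (made up)
 */
```

This is exactly why the dumb pipe is OEM-dependent: the application has to know the magic strings the HAL happens to parse.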
The HAL implementation that came with it, that was open-sourced, was Samsung Exynos 5 based, and we'll talk about it later in the talk. And this reference to android.hardware.ProCamera actually came from one of the comments there, which indicates what they're trying to do with Camera 2: provide very fine-grained control.

Very briefly, I'm sure everybody in this room has seen this layer-cake diagram, so I'll just focus on the camera HAL and where it sits. It's that little piece in the hardware abstraction layer. Android provides its definition, and we and all the other OEMs provide our own implementations based on the application processor. That's the camera subsystem. The top part, as you can see, is hardware-independent; the bottom is very closely tied to the hardware, which in this case mostly means the application processor, the image co-processor, and the camera hardware, the image sensor itself.

Now the process view. The camera service resides within the media server. Whenever a new application wants a camera, it makes a request over the binder interface, and if it has all the permissions, the camera service grants it access. It does this by creating a camera hardware object, which then makes system calls into the kernel to actually communicate with the hardware. There's also a communication path from the image sensor to SurfaceFlinger, the display system, for preview purposes. If there's a second application and it wants access to another camera object, that is typically granted; however, at a given time a given camera can be accessed by only one application. So from the application development perspective, to be a good citizen, whenever you go into the paused state you should release all your handles to the camera.

Inside the camera application, the only interesting thing here is what's below the JNI. One thing that hasn't changed between Camera 1 and Camera 2 is that for any frame information, a copy is always made from native space to the application space. I think it's partly a security thing. Consider the burst use case: say we have an 8-megapixel sensor and we want two seconds of burst. This becomes a problem, because even if you compress each picture down to, say, a 2 MB JPEG, you still have to copy every one of them to return them to the application, and at 30 frames a second that's a huge amount of memory copying going on. So this again highlights one of the limitations. I still haven't seen a solution to this problem within the re-architecture, but maybe something will come up in the next version of Android. The JNI layer also holds references to the different objects, and the callbacks are made through it.

Now, moving on to the camera service. Everything in the middle are the IBinder interfaces. The camera service is libcameraservice.so. camera.h is the hardware interface that whoever is providing the camera HAL will implement. This changes with 4.2: libcameraservice can now talk to two different interfaces, camera.h and camera2.h, and a given device can have implementations of camera.h or camera2.h.
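Here's a rough sketch of the kind of thing the camera service does to find out which interface each camera speaks, using the hw_get_module() mechanism from libhardware. Treat the exact fields as approximate, since camera_common.h has evolved across releases; the overall shape, though, is how HAL modules are loaded:

```c
#include <hardware/hardware.h>
#include <hardware/camera_common.h>
#include <stdio.h>

/* Sketch of roughly what the camera service does at startup: load the
 * vendor's camera HAL module and enumerate the cameras it provides.
 * Error handling trimmed for brevity. */
int dump_cameras(void)
{
    const hw_module_t *mod;
    if (hw_get_module(CAMERA_HARDWARE_MODULE_ID, &mod) != 0)
        return -1;                       /* no camera HAL on this build */

    const camera_module_t *cam = (const camera_module_t *)mod;
    int n = cam->get_number_of_cameras();
    for (int i = 0; i < n; i++) {
        struct camera_info info;
        cam->get_camera_info(i, &info);
        /* device_version tells the service which interface this camera
         * speaks: a 1.0 value routes it down the camera.h path, a 2.0
         * value down the camera2.h path. */
        printf("camera %d: facing=%d version=0x%x\n",
               i, info.facing, (unsigned)info.device_version);
    }
    return 0;
}
```

This is also where the Camera2Client/Camera2Device glue we're about to discuss gets selected on the service side.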
Now, there are some glue classes we discovered, Camera2Client and Camera2Device, which basically enable libcameraservice to talk to both camera.h and camera2.h. Let's try to understand a little more about the difference between the two. What the camera service does, I think I've covered. One of the basic differences between Camera 1 and Camera 2 is that in Camera 1, everything was function-based. The whole view of the image sensor and the camera subsystem was function-based: take a picture, get the preview, set a mode, and so on. That has changed to an entirely stream-based perspective. There are different streams available at any given time: a preview stream, a capture stream, a callback stream, a ZSL stream (ZSL stands for zero shutter lag), a recording stream. It's all stream-based, so there's a steady stream of information available from the sensor, and it becomes much more dynamic: you can insert things, access things, reprocess them. The Camera2Client sits on top of the HAL and has different processors that run; these talk to the different streams, and Ashutosh will go over this in more detail when we get to the camera HAL implementation.

The other thing is that there's a big focus on metadata now, of two kinds. First, static metadata: what are the capabilities of this camera? You don't have to open an instance of the camera to get that information; the system can be queried statically. Second, there's now an attempt to minimize copies, because of all the sensors available on a device, compared to the accelerometer or the gyroscope, the camera is by far the most memory-intensive. So there's an attempt to minimize the copies within the native subsystem. The copy that happens from native to the application, for the Dalvik virtual machine, still exists, but within the native side the number of memory copies is being reduced.

On metadata: there are two kinds, static and per-frame. If you open up the tags file (I have all the links at the end, in a couple of slides, with the path where you can open the file and actually look), you'll see the whole list of tags. The ones marked with INFO are the static ones; typically you'll find pairs like ANDROID_FLASH and ANDROID_FLASH_INFO, and the ones without INFO are per-frame. So for this frame, when it was taken, what was the flash setting? That comes bundled back with the frame information. And for all the OEMs in the room, there is provision for vendor-specific tags, so if you want to do something to differentiate your stack, this is available. Earlier, the camera parameters pipe was the way to push settings down to control the image sensor; the vendor-specific tags are, again, another dumb provision, but now for providing metadata back up to the application layer. So it's become a two-way street instead of one-way. These are all the directories; the only thing highlighted in yellow is where the paths changed from Ice Cream Sandwich to Jelly Bean, and the metadata file I was mentioning is the last line on the slide.
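Before I take questions, let me make the static/per-frame split concrete with a tiny sketch using the camera_metadata helpers from system/media/camera. The exact tag name is illustrative, since the tag list varies by release, but the pattern of looking an entry up in a metadata blob is the real mechanism:

```c
#include <system/camera_metadata.h>

/* Sketch: pulling one static property out of a camera_metadata blob.
 * The static metadata comes back with the camera info, without opening
 * the device. Treat the tag name as illustrative; the available tags
 * differ between Android versions. */
int flash_available(camera_metadata_t *static_meta)
{
    camera_metadata_entry_t entry;
    if (find_camera_metadata_entry(static_meta,
                                   ANDROID_FLASH_INFO_AVAILABLE,
                                   &entry) != 0)
        return 0;                 /* tag absent: assume no flash unit */
    return entry.data.u8[0];      /* nonzero if the camera has a flash */
}
```

Per-frame tags are read the same way, except the blob arrives bundled with each frame rather than queried up front.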
So, are there any questions? Sure, I'll be emailing the slides out right after this, yes. And feel free to get my contact information, and we can communicate if there's any other information you'd like in the future. With that, I'd like to hand it over to Ashutosh. Thanks.

Hello, hi, good afternoon, everybody. I'll be talking about the camera HAL in particular and all the layers underneath it: the camera drivers, the different configurations for the drivers, and then we'll see some of the challenges you'll face while bringing up your HAL. So this is the typical camera stack in the Android framework; you must have seen it at some point. This block is what's known as the camera HAL, the hardware abstraction layer. Every vendor has their own implementation of this abstraction layer, which basically depends on the underlying hardware: the kind of SoC they're using and the kind of camera hardware they're using. We'll first talk about the changes it has undergone from ICS to JB, and then we'll move on to the camera driver.

This is the typical functionality of the camera HAL. As I've already said, it's very specific to the camera hardware platform and implemented by the vendors, so every vendor has their own proprietary HAL. What it provides is basically a mapping from the service calls mandated by Google onto the driver functions; it gets the functionality out of the driver. Ice Cream Sandwich uses camera.h, and Jelly Bean, and probably the versions after Jelly Bean, will use camera2.h. The camera HAL also talks to the camera driver, and there can be multiple flavors of driver available. Some of the most popular are V4L2 and OpenMAX, and on top of that vendors can have their own proprietary driver implementations, in case they don't want to expose the functionality. The HAL communicates with the driver through file I/O calls, the standard Linux I/O calls, to talk to the camera device.

This is the functional camera HAL diagram, how the camera HAL looked up until ICS and the previous versions. The major functionality of the camera HAL is, first, to manage memory: it should know what kind of memory it's dealing with and what kind of memory the camera hardware needs from it. Then it has to manage the display surface, in the sense that the ultimate consumers of the camera buffers are the display surfaces, so it has to manage the equilibrium between the display and the camera. It needs to respond to the events it gets from the driver, and it also needs to generate events for the application layer. And ultimately it has to manage the camera: this is the camera manager block, which talks to the camera driver.
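To picture what "function-based" means in code, here is a trimmed sketch of a HAL 1.x ops table from camera.h. The hal_* functions stand in for a vendor's implementations and their bodies are omitted; the signatures follow the Jelly Bean header as best I can reconstruct them, so treat them as approximate:

```c
#include <hardware/camera.h>

/* Vendor implementations; bodies omitted in this sketch. */
static int  hal_set_preview_window(struct camera_device *dev,
                                   struct preview_stream_ops *window);
static int  hal_start_preview(struct camera_device *dev);
static void hal_stop_preview(struct camera_device *dev);
static int  hal_auto_focus(struct camera_device *dev);
static int  hal_take_picture(struct camera_device *dev);
static int  hal_set_parameters(struct camera_device *dev, const char *parms);
static char *hal_get_parameters(struct camera_device *dev);
static void hal_release(struct camera_device *dev);

/* The function-based Camera HAL 1.x surface: one entry point per use
 * case (a trimmed subset of camera_device_ops_t). */
static camera_device_ops_t hal1_ops = {
    .set_preview_window = hal_set_preview_window, /* where preview frames go  */
    .start_preview      = hal_start_preview,
    .stop_preview       = hal_stop_preview,
    .auto_focus         = hal_auto_focus,         /* completion via callback  */
    .take_picture       = hal_take_picture,       /* shutter/raw/JPEG events  */
    .set_parameters     = hal_set_parameters,     /* the flattened string pipe */
    .get_parameters     = hal_get_parameters,
    .release            = hal_release,
};
```

Keep this table in mind; the contrast with the stream-based world we're about to look at is the whole point of the re-architecture.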
And this is how it looks with Jelly Bean. As Balwinder already talked about, previously the HAL was more function-based: the aim was to get a certain kind of output. If you want a preview, you say start preview; if you want an image capture, you do take picture, and so on. From Jelly Bean onwards, they're not viewing it as a point-and-shoot camera anymore; they're viewing it as streams, and a stream can be of multiple types: there can be a preview stream, a capture stream, a post-view stream, a metadata stream, and potentially all of them running together at the same time. Metadata, as Balwinder has already mentioned, is getting much more focus nowadays. It's the extra information associated with the image, and it typically opens up new horizons for creating some cool apps. Typical metadata would be face information, or interest-point information you want to give out; it can be stream-specific or camera-specific. The third thing they've introduced is reprocessing of streams. The first two blocks we talked about deal with the live camera stream coming out of the sensor; in the case of a reprocess stream, we're talking about a stream that's already in your memory, which you hand back to do some further processing on. And the stream manager is the component that talks to the camera driver here: its job is to manage the buffers, what kind of memory is required, and how the frames come out of the camera driver.

Just to see who does what: the camera HAL initializes all the different blocks we've talked about, the stream manager and the rest, and dispatches the calls it gets from the camera service to the respective blocks. The stream manager handles the streaming events, gets its own buffers, manages its own memory, talks to the camera hardware, and manages the state machine from stream-on to stream-off. The metadata handler's job is to acquire the per-shot metadata, get interest points or whatever else is required, convert it to the Android format, and plumb it back up to the application layer. The reprocess stream manager sets up and manages the reprocess streams. That's pretty much the functionality.

So what has changed since camera HAL 1.0? Most of the camera HAL 1.0 functionality has moved up into the camera service, libcameraservice; you can see Camera2Device and those pieces now live in the service. Image metadata has gained importance: there's a dedicated handler created specifically to handle the metadata and open up new horizons for applications. Reprocessing has been introduced: processing an already-captured image stream. And as I said before, this new HAL is stream-based; it generalizes the stream rather than working function by function. Camera HAL 1.0 had start-preview and take-picture kinds of functions, whereas in camera HAL 2.0 you'll see allocate-stream, start-streaming, and stop-streaming kinds of functions. Any given stream can be a preview stream or a capture stream, and some or all of them may be running together. So this is again the same diagram; that's the story of what changed from camera HAL 1.0 to 2.0.
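And here is what that stream-based surface looks like at the camera2.h boundary. This is a hedged sketch: the allocate_stream signature is reconstructed from the Jelly Bean header as best I can, so treat the parameter list as approximate, and stream_ops (the buffer dequeue/enqueue hooks) is assumed to already exist:

```c
#include <hardware/camera2.h>

/* Sketch: "everything is a stream" at the camera2.h boundary. Instead
 * of startPreview()/takePicture(), the service asks the HAL to
 * allocate streams and then drives them with capture requests. */
int setup_preview_stream(const camera2_device_t *dev,
                         const camera2_stream_ops_t *stream_ops)
{
    uint32_t stream_id, actual_format, usage, max_buffers;

    /* One 1280x720 stream; the HAL reports back the format it really
     * produces and how many buffers it needs in flight. */
    int err = dev->ops->allocate_stream(dev,
                                        1280, 720,
                                        HAL_PIXEL_FORMAT_YCrCb_420_SP,
                                        stream_ops,
                                        &stream_id, &actual_format,
                                        &usage, &max_buffers);
    if (err != 0)
        return err;

    /* Frames are then produced by submitting capture requests whose
     * metadata names this stream_id as an output target; the same
     * mechanism serves preview, recording, ZSL, and still capture. */
    return 0;
}
```

Notice there's nothing preview-specific in the call itself; whether this stream is a viewfinder or a capture path is decided by the requests that target it.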
Now I'll talk about the driver, some of the popular implementations of it, and then we'll see the challenges. What does the driver offer to the upper layers? As you can see, there are multiple companies involved in building the camera hardware, the image sensor and everything else, so the camera driver presents a standardized interface for the HAL and the layers above it to access the camera hardware. The image-specific, camera-hardware-specific attributes are handled at the lower layer of the driver. So the driver has two parts: one is very generalized and exposes the API to the top layer, and one is very, very specific to the imaging hardware and takes care of the actual camera hardware. Right now there are multiple types of sensors available. Some are raw sensors, which give out a bare image that needs to be processed through an ISP. The ISP models are interesting: ISPs are offered by the OEMs and platform vendors, or there are smart sensors with the ISP on the chip, so you get a fully processed image and you don't need the platform ISP that comes with the application processor. This difference is handled at the driver level: whether you bypass the ISP, or use it to process the bare image and get a YUV image or similar out of your system.

For Android, Video4Linux2 (V4L2) is used in many implementations. It's been around for many years, and recently it's been undergoing changes to accommodate new interest and to allow finer control over the hardware blocks as the hardware gets more complex. OpenMAX is another one that's getting popular and is also being used to control camera hardware.

This is the V4L2 kernel-level block diagram. What it offers the top layer is basically a generalized way to access the device; it supports ioctl dispatch, and that's the controlling interface for V4L2. It handles buffer management and it also controls the camera hardware. When we talk about buffer management, what it does depends on the camera hardware's requirements: it allocates the memory, physically contiguous memory if that's what your camera requires. It manages the buffers: you create a buffer pool and you keep reusing it. The driver fills a buffer and gives it to the HAL for processing; once the HAL is done with it, the HAL queues it back to the driver. We call this the queue/dequeue mechanism. The driver maintains the various states of the buffer, because a buffer goes through multiple stages before it's ready for consumption. Apart from that, it also manages the camera hardware: it has the infrastructure to do device discovery, as with most Linux devices, and device initialization. It talks to the device over I2C to get device-specific parameters and to do specialized settings on particular registers. It also takes care of power management: when you switch the power off, it takes the sensor into low-power states, standby modes or a complete power-off state, depending on your design. And it enables and disables image streaming, which is how you get the stream out of the camera.
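To ground this, here's a minimal sketch of the very first thing a camera HAL typically does with a V4L2 node: open it and check that it can actually capture and stream. The device path is an assumption; on a real board it depends on how the driver registered itself:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

/* Sketch: open a V4L2 capture node and verify its capabilities. */
int open_camera_node(const char *node /* e.g. "/dev/video0" */)
{
    int fd = open(node, O_RDWR);
    if (fd < 0)
        return -1;

    struct v4l2_capability cap;
    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0 ||
        !(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) ||
        !(cap.capabilities & V4L2_CAP_STREAMING)) {
        close(fd);
        return -1;   /* not a streaming capture device */
    }
    printf("driver=%s card=%s\n", cap.driver, cap.card);
    return fd;       /* all later ioctls go through this fd */
}
```

Everything else, formats, buffers, streaming, is driven through ioctls on this same file descriptor, which is what "ioctl dispatch" on the slide refers to.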
Now, some of the important resources that V4L2, or any camera driver for that matter, has to manage. First, memory: as I've told you, memory can be allocated at the driver level if you need physically contiguous memory, or there is more intelligent hardware available now that spares you from using physically contiguous memory, because it's very expensive. So it depends on the kind of hardware you're using. Then it needs to support interrupts, such as frame-start, frame-finish, and autofocus interrupts; there are a number of them, and you can choose which ones to service. The essential ones are frame-finish, where your frame is ready for consumption, and the focus events: what stage of focusing you're at and whether focus has completed.

Camera hardware control: the camera hardware is normally connected on peripheral buses such as I2C and SPI. I2C is the most popular one; it's still used to control the majority of camera hardware. SPI is the faster alternative. And there are GPIOs, which are typically used for the reset pins and the standby interfaces of the camera hardware. Then there's sensor power management. As I've said, power management for peripherals is of utmost importance in any handheld device, so based on device usage the driver puts the sensor into low-power and ultra-low-power modes whenever that's appropriate.

Now more about buffer management. In V4L2, one or more buffers are supported. As I've told you, the buffers can be allocated in driver space or, depending on the hardware, you can use buffers allocated in user space. Buffers are kept in a circular queue, so you keep reusing them. Starting and stopping the streaming is what starts and stops the process of filling the buffers; we call it stream-on. The buffer at the head of the circular queue is taken out and filled with camera data; once it's filled, the camera HAL dequeues it, processes it, and queues the buffer back to the driver so it becomes available again for acquiring data. And when you're done with imaging, when you want to switch off the camera, you just call the stream-off command: it stops the streaming and releases all the buffers it's currently holding.

This is the call sequence you follow in the V4L2 framework to get a preview out. Some of the calls are mandatory, some are optional, so let's concentrate on the important ones. The first thing is set format (VIDIOC_S_FMT): here you set the image format and size, what color format you're dealing with and what the frame sizes will be. Then cropping: if you want to crop the image, you set the crop parameters; it's optional, you can leave it out. Then you do request buffers (VIDIOC_REQBUFS): you tell V4L2 how many buffers you'll be using and what kind of buffer memory the driver should use, and based on that it tells you whether your request can be satisfied. Request buffers you have to do; it's mandatory.
Suppose the buffers are allocated in kernel space, in the driver, and you want their attributes in user space for your programming: then you do query buffer (VIDIOC_QUERYBUF), which returns the buffer characteristics. And since the buffers are allocated in the kernel, you'll want to map them to be able to use them in user space, so that's why you do an mmap here. Finally, once you have all the buffer details, you need to queue the buffers explicitly to the driver, so you call VIDIOC_QBUF for each one. Once you've queued all of them, you do stream-on (VIDIOC_STREAMON), which starts the streaming process; it enables the receiver. Once you're done with the streaming, done with the camera, you just call stream-off (VIDIOC_STREAMOFF): it stops the streaming and releases all the buffers the driver holds.
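Pulling that whole call sequence together, here's a compact sketch with error handling trimmed for brevity; the resolution, pixel format, and buffer count are arbitrary choices for illustration:

```c
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

#define NUM_BUFS 4

/* Sketch of the V4L2 capture sequence just described:
 * S_FMT -> REQBUFS -> QUERYBUF/mmap -> QBUF -> STREAMON ->
 * DQBUF/QBUF loop -> STREAMOFF. */
void capture_frames(int fd)
{
    /* 1. Set format: color format and frame size. */
    struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
    fmt.fmt.pix.width       = 1280;
    fmt.fmt.pix.height      = 720;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    /* 2. Request driver-allocated (mmap) buffers. */
    struct v4l2_requestbuffers req = {
        .count  = NUM_BUFS,
        .type   = V4L2_BUF_TYPE_VIDEO_CAPTURE,
        .memory = V4L2_MEMORY_MMAP,
    };
    ioctl(fd, VIDIOC_REQBUFS, &req);

    /* 3. Query each buffer and map it into user space. */
    void  *mem[NUM_BUFS];
    size_t len[NUM_BUFS];
    for (unsigned i = 0; i < req.count; i++) {
        struct v4l2_buffer buf = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
                                   .memory = V4L2_MEMORY_MMAP, .index = i };
        ioctl(fd, VIDIOC_QUERYBUF, &buf);
        len[i] = buf.length;
        mem[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, buf.m.offset);

        /* 4. Queue every buffer so the driver can start filling them. */
        ioctl(fd, VIDIOC_QBUF, &buf);
    }

    /* 5. Stream on: the capture engine starts filling the queue. */
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);

    /* 6. The queue/dequeue loop: take a filled buffer, consume it,
     * hand it back. A real HAL runs this on its streaming thread. */
    for (int frame = 0; frame < 100; frame++) {
        struct v4l2_buffer buf = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
                                   .memory = V4L2_MEMORY_MMAP };
        ioctl(fd, VIDIOC_DQBUF, &buf);   /* blocks until a frame is ready */
        /* ... process mem[buf.index], len[buf.index] bytes ... */
        ioctl(fd, VIDIOC_QBUF, &buf);    /* recycle the buffer */
    }

    /* 7. Stream off: stop capture and release the queued buffers. */
    ioctl(fd, VIDIOC_STREAMOFF, &type);
    for (unsigned i = 0; i < req.count; i++)
        munmap(mem[i], len[i]);
}
```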
Okay, so as I've told you, this V4L2 framework is quite old; it's eight or nine years old, I guess. When it was first built, a camera was only expected to do preview and some captures, but today a camera is supposed to do much more than a viewfinder and image capture. Moreover, the imaging hardware, the ISPs, are getting smarter and smarter, with more and more IP blocks being added. And to get the maximum out of the camera, users are demanding finer control, because there are multiple paths the data can take: you can decide which blocks your data passes through to get the desired use case done. Keeping this in mind, the V4L2 community has come up with the media controller architecture. It's designed to support dynamically reconfigurable hardware blocks; the connections have to exist in the hardware. There can be situations where one block can take input from multiple sources and output to multiple sinks, so it helps you create your own pipeline with a particular source and a particular sink, and it allows much greater programmer control. It introduces the notions of entities, pads, and links: your hardware block is the entity, and the pads are how an entity talks to the outside world. There are input (sink) pads and output (source) pads, based on which devices a block can be connected to in the hardware. A block may have many pads, but in a given pipeline only one input and one output pad is active.

Here's an example, taken from the OMAP3 ISP, which is a fairly simple one. There's the sensor, an MT9P031; it's enumerated as an entity and has a source pad attached to it. The CCDC is the parallel image interface of the OMAP3, and the sensor is connected to it: the sensor has the source pad and the CCDC has the sink pad, so the data coming out of the sensor goes into the CCDC. Now there's a choice you can make. If you don't want to do any processing, you can program your pipeline so the data flows from the sensor to the CCDC and then to the video0 node, which is nothing but your memory, so directly to memory. That's one configuration you can program. But suppose you also want to do resizing or other processing; there are a lot of blocks, and I've just taken the resizer as an example. Then you can choose the alternate path, where the CCDC acts as a source, feeds the resizer, and the resizer output finally lands in memory. There's a term for this, the entity graph: the user has the entity graph of all the connections between blocks that are possible, and picks a pipeline based on the use case. For burst-mode photography you might want one path, for other shots another. It all depends on the use case, but it gives you that final control. A small sketch of programming one of those links follows.
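This sketch enables one link in a media controller graph, roughly the sensor-to-CCDC connection from the OMAP3 example. The entity IDs are placeholders; a real HAL discovers them first with MEDIA_IOC_ENUM_ENTITIES, and the media node path depends on the platform:

```c
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/media.h>

/* Sketch: enable one link in the media controller graph, e.g. routing
 * the sensor's source pad into the CCDC's sink pad. Entity IDs are
 * placeholders discovered via MEDIA_IOC_ENUM_ENTITIES in real code. */
int connect_sensor_to_ccdc(int sensor_entity, int ccdc_entity)
{
    int fd = open("/dev/media0", O_RDWR);
    if (fd < 0)
        return -1;

    struct media_link_desc link;
    memset(&link, 0, sizeof(link));
    link.source.entity = sensor_entity;   /* MT9P031 in the example      */
    link.source.index  = 0;               /* its source pad              */
    link.sink.entity   = ccdc_entity;     /* the parallel-interface CCDC */
    link.sink.index    = 0;               /* its sink pad                */
    link.flags         = MEDIA_LNK_FL_ENABLED;

    int err = ioctl(fd, MEDIA_IOC_SETUP_LINK, &link);
    close(fd);
    return err;   /* point the sink at the resizer instead to build the
                   * alternate pipeline */
}
```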
So, when you bring up the camera HAL, or when you write your own, let's talk about some of the things you need to take care of. First and foremost is memory management. The camera HAL is the one talking to the driver, getting the buffers, and handing them to the display for consumption, so it has to be aware of the kind of memory the camera hardware requires and the kind the display requires, and if there's no match, it needs to do a copy or something similar. Second, the various implementations of the camera driver: as we've discussed, V4L2 is one way your camera driver can be implemented, OpenMAX is another, and some vendors have their own proprietary ones. This matters because in a scenario with multiple cameras, front and back, one of them might be implemented with V4L2 while the other uses a proprietary or OpenMAX driver; the camera HAL needs to be aware of that. Third, color format conversion: the sensor may give out one color format while the display expects another, so the camera HAL may need to do color conversion, using either a hardware block or a software path. Fourth, buffer synchronization: as I've told you, the ultimate consumer of the camera buffers is the display surface, and normally the buffers are shared to save memory, so the camera HAL needs to manage them in a synchronized manner to avoid overruns and underruns. Then, support for advanced features: a basic camera HAL offering only what Android mandates will support the bare minimum, so if you want differentiating features, you have to support them in your camera HAL, either through the Android extension mechanism or by implementing the whole interface yourself.

So, are all camera HALs equal? The answer is no. In what ways do they differ? Supported features depend on the hardware capabilities: a lot of functionality can be enabled only if your hardware supports it. They differ in how the camera HAL is implemented; there may be a need to integrate third-party IP to get certain functionality done, so it all depends on how you program it. Then reliability, and finally how easily you can add extensions to the Android feature set to get your other features in. With that, I think that's all I have to say; Balwinder will talk about some of the latest trends in cameras, and then we'll take questions. I guess we're running short on time.

Yes, we have a little bit of time, and we did cover most of what we're seeing. Some of the trends: computer vision applications are getting a lot of interest, object tracking, gesture recognition, augmented reality. Computational photography, where people do the things I mentioned, HDR, flash/no-flash, different focus techniques, so the output there will be even better image quality on devices. 3D imaging is another area that gains a lot of interest from time to time, where you use multiple cameras to create a 3D image. And yes, I did mention this: something I found this morning, where Google's VP Vic Gundotra posted on Google+ last night that the next Nexus will have an insanely great camera, "just you wait and see." What does it mean? I guess we'll just wait and see, because that's as much information as I have. And we're ready to take any questions you may have.