Good afternoon. My name is Gwendal Grignou, and I'm going to talk about more sensors: not the fingerprint sensor, but the smaller ones, like the accelerometer, gyroscope, magnetometer, and light sensor, that we find in Chromebooks.

This slide shows the current implementation, and there are boxes everywhere. The sensors are at the bottom. They are connected to the embedded controller (EC), and for each different type of sensor the EC has a driver. The EC then exposes a common interface over different types of bus to the kernel. At the top of the cros_ec stack sit the cros_ec sensor drivers, which present the different sensors, accelerometers, gyroscopes, and so on, through the IIO sysfs interface, plus another driver, the cros_ec sensor ring.

The use case for the sensors is twofold. They were introduced first with convertibles; the first one was Samus in 2015. Chrome used them to calculate the lid angle and the rotation, landscape or portrait. The second block was added when we developed the Pixel C, which runs Android on the Chrome OS kernel. There we have a sensor HAL that accesses all the sensors. The ring driver is a requirement for the cros_ec sensor HAL, for which we chose a push model, where events flow naturally from the sensors through the EC up to the HAL. Chrome, which is a little more ancient, uses a pull model and gathers events from time to time.

Another interesting thing is that the cros_ec HAL has an extra sensor coming from Chrome, a private sensor. Chrome decides to calculate the orientation, because orientation is not just whether the screen is in portrait: if you have an external monitor, the rotation changes as well. So Android, instead of using the accelerometer to calculate the rotation itself, uses that sensor. Those are the blocks that cover the current implementation.

Now let's go a little bit inside how the sensors work in Chrome OS, with a nice timing diagram. The life of sensor data starts on the upper right. Most sensors will raise an interrupt to the EC when a sensor event is available, and the EC gathers the event. Then, when the EC feels it's time to send events to the host, it raises an interrupt. It might not be for every event: we support batch mode, which is required by Android, though very few applications actually use batch mode in ARC++.

That interrupt is shared; it is not just for the sensors, so the cros_ec stack needs to know the source of the interrupt. That's where "get next event" comes in. Sensors piggyback on the keyboard subsystem, using what we call the MKBP protocol, and the EC sends some information about why the interrupt was raised. The cros_ec stack uses the notifier API to ask everyone: we have an event, who wants to take care of it? And the sensor driver will say: OK, that's a sensor event, I'm taking care of it. We could carry extra data in that event, but usually we don't, because piggybacking on the keyboard subsystem gives us only about 13 bytes. So we still have to fetch the sensor data itself from the EC, and there may be more than one sample, a batch, or several sensors may have triggered the interrupt. Then, for each sensor, we fill a buffer.
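As a rough illustration of that last step, here is a minimal sketch, in the style of an IIO kernel driver, of pushing one sample into an IIO buffer. The helper and scan layout are illustrative, not the actual cros_ec driver code:

```c
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>

/*
 * Illustrative only: push one accelerometer sample into the IIO buffer
 * that user space consumes through /dev/iio:deviceX. The scan layout
 * must match the scan elements the driver declared: three 16-bit axes,
 * then an 8-byte-aligned 64-bit timestamp.
 */
static void push_sample(struct iio_dev *indio_dev,
			s16 x, s16 y, s16 z, s64 timestamp)
{
	struct {
		s16 channels[3];
		s64 ts __aligned(8);
	} scan = { .channels = { x, y, z } };

	iio_push_to_buffers_with_timestamp(indio_dev, &scan, timestamp);
}
```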
We fill a data structure and push it into that buffer; user space picks it up from the corresponding IIO device file and reads the sensor information from there. So that's how it works. It's not that difficult, but it is challenging, and I'm going to present one particular challenge: the time-domain transition. The EC works in its own time domain, the AP has its own time domain, and they need to be synchronized.

Let me use some variables. In a perfect world, if A is the time the sensor produces the event in the EC time domain, B is the time the EC interrupts the host, and C is the time the interrupt is received by the host, then the timestamp of the sensor data in the AP time domain is A + (C - B). That should work. Unfortunately, it doesn't work very well. A sensor usually generates events spread evenly; it might not be exactly the data rate you asked for, and there may be some variation due to temperature or other factors, but it is pretty constant. The interrupt, though, happens when it happens: the time between the EC raising the interrupt and the host receiving it is not constant. As a consequence, the sample times in the AP time domain, call them A', will not be evenly spaced.

That was fine at the beginning, for games or for applications not requiring precise timestamps. But for applications that do, in particular ARCore, it doesn't work anymore. When we decided to support ARCore, we had to do something. So we added a median filter. The idea of a median filter is: imagine you get a latency spike on the interrupt from time to time. Assuming spikes don't happen too often and a spike is shorter than the depth of your filter, the median filter can weed it out. That's the theory, anyway. What we found was that it worked fine on ARM, but not at all on Intel. Why? Because there the EC doesn't use a real interrupt; it uses a message-based notification through ACPI. When the EC wants to raise an interrupt, it sends a message that is processed by the ACPI interpreter in the kernel, and the ACPI interpreter then calls our driver. The problem is the timestamp: we only see the bottom half of the interrupt handler, and that's not a good source for timestamping. There is too much variation, and even with the median filter we were not able to clean up the timestamps. So at the eleventh hour we added a requirement for ARCore: if you want ARCore, you need a real interrupt line between the EC and the AP. On the newer platforms that helped a lot, and it let us pass the CTS tests, which was good.

Talking about ARCore, we added another kind of sensor. The camera is slightly modified to expose its vsync as an output on a GPIO to the EC, and we have a very simple driver that converts that GPIO into a sensor and presents the information through IIO as a counter. It's super trivial, actually: each time the GPIO changes, the counter is incremented, and the value goes out through the IIO interface. But the beauty is that the counter and the gyroscope are in the same time domain, with the same error, the same imprecision, but the same domain.
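To make the time-domain arithmetic above concrete, here is a minimal sketch of the conversion with a median filter over the interrupt latency. The filter depth and history handling are illustrative choices, not the actual driver code:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define FILTER_DEPTH 9	/* illustrative; must exceed the width of a spike */

static int cmp_i64(const void *a, const void *b)
{
	int64_t x = *(const int64_t *)a, y = *(const int64_t *)b;

	return (x > y) - (x < y);
}

/* Median of the last FILTER_DEPTH interrupt latencies (C - B). */
static int64_t median_latency(const int64_t hist[FILTER_DEPTH])
{
	int64_t sorted[FILTER_DEPTH];

	memcpy(sorted, hist, sizeof(sorted));
	qsort(sorted, FILTER_DEPTH, sizeof(sorted[0]), cmp_i64);
	return sorted[FILTER_DEPTH / 2];
}

/*
 * a: event time in the EC time domain
 * b: time the EC raised the interrupt (EC time domain)
 * c: time the AP received the interrupt (AP time domain)
 *
 * The ideal AP timestamp is a + (c - b); taking the median of the
 * recent (c - b) latencies weeds out occasional spikes. hist is
 * assumed to be pre-filled with earlier latency samples.
 */
int64_t ap_timestamp(int64_t a, int64_t b, int64_t c,
		     int64_t hist[FILTER_DEPTH], unsigned int *pos)
{
	hist[*pos] = c - b;
	*pos = (*pos + 1) % FILTER_DEPTH;
	return a + median_latency(hist);
}
```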
Because they share a time domain, we can actually merge the camera information with the gyroscope information, which is critical for ARCore and also useful for the camera if you want to do image stabilization, for instance.

Another issue we solved on the sensor side: on Kevin, we were using at least 2% of the CPU just gathering accelerometer data. There was some imprecision in the driver, and we were reading too much data from the SPI bus. But the main problem is: why do you need to gather sensor data at all when you are idle, when your device is sitting on a desk? Especially given that the EC is already calculating the lid angle. We need that anyway so that the device does not wake up on a key press while it is suspended in tablet mode. And there is already an interface between the EC and the input subsystem for user space to listen to the current tablet mode: an input event switch. There is already an event switch for tablet mode. So we tied everything together. The EC calculates the lid angle, and we present it as an IIO angle device, which, I mean, nobody uses, but at least it's there; it simply exposes what the EC is calculating. Chrome now relies on the tablet mode switch to go into tablet mode or desktop mode, and the tablet mode switch is tied to the lid angle calculation. This way, on a recent Chromebook with kernel 4.14 or above, Chrome is not wasting time calculating the lid angle or polling in desktop mode; it doesn't poll the accelerometer anymore. In tablet mode, we still read the accelerometer to calculate the orientation, portrait or landscape. And that's kind of a waste: we could have a similar event-based mechanism, but for portrait versus landscape we don't have a proper subsystem. It's not input, it's not IIO, so we need to find something else, and so far we haven't found anything. That's what we have today.

On the sensor side, we actually need to do some work for the near future, and that is ARCVM. The current solution assumes the sensor HAL can access the sysfs interface, and with ARCVM that doesn't work at all, so we need to change it. What we also need to change is the cros_ec sensor ring, which was introduced for Android and was a mistake. I took a design from another tablet manufacturer, and in retrospect that was a mistake because it is not upstreamable. Every kernel release, we have to port this driver over and over; it's a waste of time and it doesn't make any sense. And another issue, as you have seen before: Android gets all the sensors via the push model, where events flow to ARC, whereas the other clients use a pull model and have to ask for the data. That's not a very nice design.

So here is what we're going to do. The cros_ec sensor ring is not upstreamable, so we're going to get rid of it. For the inefficient access outside ARC++, we're going to add a service in user space, and this service will be VM compatible so that we can use it with ARCVM. This is not rocket science: if you look at the HID stack, it already has this concept of a hub between the driver and the sensors that spreads out the events, which in our case come from the EC.
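With that hub model, each sensor gets its own IIO device node. As a minimal sketch of what reading one of them directly looks like, assuming a hypothetical /dev/iio:device0 accelerometer with x/y/z and timestamp scan elements enabled:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical node; the actual index depends on probe order. */
#define ACCEL_DEV "/dev/iio:device0"

/* Must match the enabled scan elements: x/y/z plus aligned timestamp. */
struct accel_scan {
	int16_t x, y, z;
	int16_t pad;
	int64_t ts;
};

int main(void)
{
	struct accel_scan scan;
	int fd = open(ACCEL_DEV, O_RDONLY);

	if (fd < 0)
		return 1;
	/*
	 * A blocking read returns complete scans once the buffer has been
	 * enabled through sysfs (scan_elements/*_en and buffer/enable).
	 */
	while (read(fd, &scan, sizeof(scan)) == sizeof(scan))
		printf("%d %d %d @ %lld\n", scan.x, scan.y, scan.z,
		       (long long)scan.ts);
	close(fd);
	return 0;
}
```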
And then, instead of listening to one file, you have to listen to all the per-sensor IIO device files. That works. Obviously, if you just do that, you have a problem: only one application can listen to an IIO device, so Chrome cannot listen to the accelerometer anymore. How do we fix that? We need a user-space component, and the de facto standard for accessing IIO sensors from user space is libiio.

libiio has different backends. The most used one is the local backend, which via a C interface gives access to the IIO sysfs attributes and device files. And if we use the iiod server, we actually get more backends: what the architects of the IIO subsystem and the libiio authors designed is a way to access the sensors remotely. There are already existing backends: one over USB, using a serial profile, and one over a network socket using TCP/IP, where iiod listens on a port and you can connect to it, with the client using libiio. The API the client uses is largely common: the API of the network or USB backend is a subset of the local API, but it is still usable. If you write your application properly, it doesn't matter which backend you are using; you access the sensors the same way. For example, an application using the network backend goes over the network link; iiod receives the packets, and iiod itself uses libiio with the local backend to access the IIO devices through the kernel. Using libiio allows us to reuse the code that has already been written to access IIO devices over the network: we get a well-known protocol to talk to the sensors remotely, and we will be able to upstream our changes, which gives us extra pairs of eyes to review the changes we have to make to libiio.

So what does the block diagram look like now? Every client, and now we can think of more than one client, like Chrome, powerd, or the camera HAL, can use libiio via a new backend, the socket interface. Android can use sockets when running in the container, or vsock when running in a VM, and connect to the IIO service, which in turn uses libiio with the local backend to access the sensors.

I mentioned an IIO service; why not use iiod directly? Because we need a beefed-up server for the sensors. That's why we're going to rewrite iiod in C++. One big change is how we send sensor events to the clients. Normally, if several clients connect to iiod and sensor events arrive, iiod broadcasts them to everyone. In our case, if Android asks for a sensor at 150 Hz and Chrome asks at a much lower frequency, you don't want to overwhelm Chrome with too many sensor events, so you have to decimate. Sensors usually support a limited set of frequencies; let's imagine one that supports only 125 Hz and 250 Hz, while Android asks for 150 Hz. To meet that requirement, we set up the sensor to run at 250 Hz. But we don't want to send events to Chrome at 250 Hz, so we forward only every 13th event to Chrome. Chrome then sees the sensor at about 250 / 13 = 19.23 Hz, which meets Chrome's requirement without overwhelming the Chrome process; a sketch of this decimation follows below. Another thing we want is for the IIO service to be configurable from Chrome, for instance to enable or disable sensor access for virtual machines, or to tune the service.
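Here is that decimation sketch, with illustrative structures rather than the real service code:

```c
/*
 * Illustrative per-client decimation in an iioservice-like daemon:
 * the sensor runs at the highest requested rate, and slower clients
 * are forwarded only one sample out of every N.
 */
struct client {
	double requested_hz;
	unsigned int decimation;	/* forward 1 sample in every N */
	unsigned int count;
};

void configure(struct client *c, double sensor_hz)
{
	/*
	 * e.g. sensor at 250 Hz and a client asking for ~20 Hz gives
	 * N = 13, so the client sees 250 / 13 = 19.23 Hz.
	 */
	c->decimation = (unsigned int)(sensor_hz / c->requested_hz + 0.5);
	if (c->decimation == 0)
		c->decimation = 1;
	c->count = 0;
}

int should_forward(struct client *c)
{
	return c->count++ % c->decimation == 0;
}
```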
As I said on the slide before, we also need to add a new backend to libiio, a socket-based one. It is very similar to the existing network backend, but over Unix domain sockets or vsock; a transport sketch comes at the end of this section. And yes, when you see this architecture, it can look scary, because it is very different from what we have today, so the migration will be an interesting question. But we hope to be ready for ARCVM. So what we're going to do first is a little change: we keep the ring driver, but we introduce libiio in the clients, only with the local backend. Then we add the IIO service just for Chrome, not for ARC++ yet. And when we are ready, we go to the final implementation, and then we are ready to move to ARCVM. That's the work we are doing right now.

If we look at the future work, when everything is done, what is nice with the IIO service is that we will be able to support guest OSes with very little modification, actually no modification. Take Windows. Windows relies on the HID protocol, which, as I mentioned before with input, started as a subset of USB but now stands on its own, and HID defines how sensor information can be reported to the host. So we create a virtual I2C controller, which we present to the VM. On top of the virtual I2C controller, we create a virtual HID over I2C device, which reports a collection of HID sensors. Windows will load its standard drivers for those devices, the HID over I2C driver and the HID sensor driver. On our side, we connect that to libiio: we take the requests coming from the Windows VM, translate the HID requests into libiio requests, and go to the service, which translates them into sysfs calls, which the cros_ec sensor hub driver translates into requests to the EC, which finally talks to the real sensor. That still needs investigation on our side, because it is a lot of translation and we have to pay attention to the timing, but it is very enticing to be able to run a pure, unmodified guest OS and reuse the existing code to access the EC. So yeah, that's the future, the future future.
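To close, here is a minimal sketch of the transport such a Unix-domain libiio backend would sit on. The socket path is hypothetical, and the wire protocol on top would be the same iiod protocol the network backend already speaks:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Hypothetical path; the real one is up to the IIO service design. */
#define IIO_SERVICE_SOCK "/run/iioservice/socket"

/*
 * Connect to the IIO service over a Unix domain socket: the same
 * stream semantics as the TCP backend, just a different family
 * (and AF_VSOCK instead of AF_UNIX for the VM case).
 */
int connect_iio_service(void)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;
	strncpy(addr.sun_path, IIO_SERVICE_SOCK, sizeof(addr.sun_path) - 1);
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}
```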