Hello, everyone. Can you hear me? OK, I'm going to start right away. Well, this is a presentation on peripheral IO. It's a part of Android Things, and it's the second part of a two-part series, the first of which was presented by my colleagues Geetha and Anisha earlier today. It was a prerequisite, but if you weren't there, I'm going to spend a couple of slides, just two minutes at the beginning, going over what Android Things is, and then deep dive into peripheral IO right away. My name is Sanrio Alvarez. I'm an engineer with Intel, mainly focusing on enabling Intel's starter boards on Android Things. The agenda is very small today, so I probably won't take all the time here. At the end, you can come over and I'll show you the boards and how the connections are made. So we'll go through an introduction to Android Things and what the differences are. We'll see what the Things Support Library is made of, user drivers being one part, peripheral IO being the other, and what APIs are available to application developers. I'll also introduce a couple of open source libraries, UPM and MRAA. They are part of the Intel IoT DevKit, and we are planning to leverage what's available there for Android Things developers. I'll take questions at the end, along with takeaway links, and if I have time, I'll also go through those links so you can ask me questions then. If you've heard the news, Android Things was released back in November last year as a developer preview, and three boards made the cut: Intel's Edison, the Raspberry Pi 3, and NXP's Pico and Argon. Intel's Edison is available on three different kinds of boards: the Arduino, the SparkFun blocks, and the mini breakout. I have those boards here. Two weeks ago, DP2 came out, and the Joule was released at that time. It is the most powerful of the lot; with its computing power, it's mainly aimed at autonomous robots and IoT applications that require edge processing.
Now, if you're not familiar with starter boards, they are just boards with the GPIOs and buses exposed so that you can connect sensors and actuators, and communicate with and configure them. Brief introduction: Google is tackling the IoT market with Android Things, while leveraging as much of the work they've done on Android as possible. Although it is Android-like, there are differences, as certain things are not available, like system apps and content providers. The framework APIs are a subset: activities and maybe location are there, but not all of them. Common intents like contacts and telephony are not available. The download manager is not available, so you can't install APKs using Google Play. There are also no APIs for input and authentication; keep in mind that Android Things is also meant for devices without displays, so there's no user input there, for now at least. It's just DP2. The rest of it stays the same. The Linux kernel is the same kernel port used on Android devices; with very little or no modification, we've been able to use it on our Intel boards. Certain native C libraries are available. I know I've mentioned OpenMAX; maybe it didn't make the cut for DP2, but graphics did. We did a lot of work there, and it is coming, if it hasn't already. The Android runtime is available. So it's very much like Android, but with a few things missing. Keep in mind, if you're looking for content providers and things like that, you have to make sure they're available. This is a simple stack. Well, the thing I'm going to talk about is the Things Support Library, the orange block there. What it does is extend the framework to communicate and integrate with hardware that is not available on mobile SoCs. It's also streamlined for single-application use, which means at the end of boot-up, you don't have the home screen and you can't go from one application to another.
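The single-application, launch-at-boot behavior just mentioned is declared in the app's manifest. As a sketch, this is roughly what it looked like in the developer previews (the `IOT_LAUNCHER` category comes from the DP-era docs and may change in later releases):

```xml
<activity android:name=".MainActivity">
    <!-- Normal launcher entry, handy during development -->
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.LAUNCHER"/>
    </intent-filter>
    <!-- IOT_LAUNCHER makes this the activity started at the end of boot-up -->
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.IOT_LAUNCHER"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>
```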
It's a subset of what the window manager can do. So at the end, you have to make sure you have the IoT launcher in your application, and that's the application that will be launched at the end of boot-up. Mainly, that is the application that is meant for that particular device, for example, a toaster: just one application at the end of boot-up. All right, let's focus on the Things Support Library and break it down. It's made of two parts: the peripheral IO APIs and the user driver APIs. The peripheral IO APIs basically help you communicate with a device using standard protocols like UART, SPI, I2C, et cetera. The user drivers are an extension of the framework, and they help applications inject hardware events into the framework so that other apps can register and act on them. For example, if you have a GPS module and you're connecting it to your device, and you write a user driver for it, you can make it part of the framework, and another application can use that to gather location data through the user driver APIs. That is one part of it: the user driver communicating with the applications. The user driver also needs to communicate with the GPS module, and that it does via PIO. So PIO provides the UART APIs, the SPI APIs, and the other low-level APIs. So to get the communication flow right: the application registers for events using the user driver APIs, and the user driver communicates with the device using the PIO APIs. What are the benefits of having user drivers in the framework? Well, yes; take the GPS module, for example. For a GPS module, you might have UART communication, so the user driver that you write uses the UART APIs that are provided by PIO. Now, the UART APIs in PIO, and I'll come to this later in the slides, have to talk to the Linux sysfs/TTY UART nodes. And that's what PIO really does: it manages communication with the sysfs nodes. If it's GPIO, it's /sys/class/gpio.
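The GPS example above can be sketched with the user driver APIs from the developer preview. The class and method names below (`UserDriverManager`, `GpsDriver`, `reportLocation`) are from memory of the DP2-era `com.google.android.things.userdriver` package and should be checked against the current docs; the UART parsing is elided:

```java
import android.location.Location;
import com.google.android.things.userdriver.GpsDriver;
import com.google.android.things.userdriver.UserDriverManager;

public class MyGpsService {
    private GpsDriver mDriver;

    public void start() {
        // Register the driver: location fixes we report become visible to
        // any app using the standard framework location APIs.
        mDriver = new GpsDriver();
        UserDriverManager.getManager().registerGpsDriver(mDriver);
    }

    // Called whenever the UART code (via PIO) has parsed a fix
    // from the GPS module's NMEA output.
    public void onFixParsed(double lat, double lon) {
        Location location = new Location("gps");
        location.setLatitude(lat);
        location.setLongitude(lon);
        mDriver.reportLocation(location);
    }

    public void stop() {
        UserDriverManager.getManager().unregisterGpsDriver();
    }
}
```

Note that because the driver only sees PIO's UART API, the same driver works unchanged whichever board the module is wired to.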
If it's PWM, it's /sys/class/pwm; if it's SPI, /dev/spidev, and so on. And that is what it really does. So it's a thin layer, but there are advantages, which I'll come to, of why PIO makes a difference. You could write directly to the sysfs node, right? But there are advantages to having this layer, and it abstracts quite a few things; I'll come to it. But before that, user drivers: what are the advantages of having a user driver? Easy integration. It's part of the framework, so you know what APIs to expect and to write, and since it's part of the framework, applications can call those APIs and use the driver. It is also reusable. What that means is, if there is a user driver available online somewhere and it's not part of your framework, you can pull it, install it in the framework, and reuse it in your application. Now, who is this beneficial to? Mostly the sensor manufacturers: they could write a user driver for their sensor and put it somewhere, on GitHub maybe. What that gives them is just one implementation of the user driver; they don't have to take care of whether it's running on an Intel platform or an ARM platform. All of that is abstracted by the peripheral IO layer, so they just have to have one implementation. You also get the benefit of portability. If you're connecting the GPS module on UART1 of a Raspberry Pi 3, and you have a user driver and an app for it, you can remove the module and put it on UART1 of an Edison, and without any changes, you can run the same app and the same user driver and you'll be able to communicate with it. That's the benefit of having a user driver in the framework. So how does UART1 on one platform map to UART1 on another platform? How's it done? It's done using the Peripheral Manager client. This is the architecture of it. Now, the Peripheral Manager client has a central class, and that class exports methods for all the peripherals that it supports, like GPIO, I2C, et cetera.
Now, these are the only five protocols that are supported right now on Android Things. Each protocol has a manager and a shim driver layer associated with it, not to be confused with the user drivers; it's just a layer that communicates with the Linux sysfs. Also, all of this is above the SELinux layer, so on Android devices and Things devices, you cannot go to the shell and just start configuring things on production devices. You have to go through the Peripheral Manager client, and it gives you that security. Also, when an application asks for a GPIO on one pin, this architecture creates a connection between the physical pin and the app, so it provides serialization and mutual exclusion of the resource. Why is that necessary? On a machine just running Linux, multiple processes can go and write to a GPIO, and if the pin also has I2C on it, a different process can come along and configure that. There is no safe way of managing which process is doing what on the sysfs node. The Peripheral Manager client gives you that benefit: it maintains that connection, and unless that connection is released, it does not allow a different process to configure that particular pin for something else. So it provides mutual exclusion. This layer also abstracts the low-level C or shell interface, and you can write applications in Java and C++. NDK support is now available, so you can write C++ applications and Java applications to communicate with low-level hardware. Another major part of playing with these devices is the pin muxing, and each platform has its own different configurations. If you want to get a GPIO on an Edison or a Joule, you have to look at its data sheet, look at its layout, and work out what you need to do. All of that is abstracted in the device HAL. That device HAL is provided by the vendor; we wrote a device HAL for Intel's Edison and for Intel's Joule separately. And the PinMux Manager is a set of standardized APIs.
It calls into the device HAL, and every time you want a GPIO, it configures it and hides all that complexity from the app developer. That's mainly the benefit of having this structure. Now, suppose you have a board and its layout. This is the Intel Edison's Arduino layout, and you have the layout of the pins and you want to blink an LED. You look at what GPIOs are available. Now, you see, some GPIOs have multiple functions, so they have to be multiplexed between different functions; some of them don't. But you don't need to worry about it. What you do is just ask for a particular GPIO. I don't know if it's clear, but I think it is 13: IO13 is where the LED is powered through, and the ground is connected right next to it. Now, when you're writing the application, all you have to do is get a Peripheral Manager Service instance. It gives you a lot of APIs for all the peripherals. For GPIO, you can get the GPIO list, so all the GPIOs that are available. Since you know the LED is connected on IO13, you ask for it using the manager's open-GPIO call, and when you do that with the GPIO name IO13, it does the muxing underneath, and it also forms a connection to the GPIO, so others cannot use it for I2C, SPI, et cetera. Those are the methods that are available right now for GPIO: you can set the direction, set the active type, get the value, set the value. Also, if the GPIO is capable of interrupts, you can set the trigger on an edge and register for a GPIO callback. In the case of the LED, you would set the direction to out, and then by setting the value true and false, you could toggle it. Here's a pressure sensor that's connected to the Arduino. It's I2C-based, so you look for the I2C bus, and it's this same bus here. There are two signals coming out, SCL and SDA. You connect to those, and you power it using the five volts, I think, here. So that's all you need to figure out.
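To recap the GPIO part of this walkthrough, a minimal blink sketch might look like this. The pin name "IO13" is the Edison Arduino board's LED pin from the slide (names differ per board), and the API shown is the DP2-era Java peripheral IO API:

```java
import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import com.google.android.things.pio.Gpio;
import com.google.android.things.pio.PeripheralManagerService;
import java.io.IOException;

public class BlinkActivity extends Activity {
    private Gpio mLed;
    private final Handler mHandler = new Handler();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        PeripheralManagerService manager = new PeripheralManagerService();
        try {
            // Opening the pin does the muxing underneath and claims it:
            // no other process can reconfigure it until we close it.
            mLed = manager.openGpio("IO13");
            mLed.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
            mHandler.post(mBlink);
        } catch (IOException e) {
            // Pin is busy or the name is unknown on this board
        }
    }

    private final Runnable mBlink = new Runnable() {
        @Override
        public void run() {
            try {
                mLed.setValue(!mLed.getValue()); // toggle the LED
                mHandler.postDelayed(this, 1000);
            } catch (IOException e) { /* elided for the sketch */ }
        }
    };
}
```

Porting this to another board is just a matter of changing the pin name string; the device HAL handles the rest.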
You also need to check where this I2C device is: what bus it's on and what its address is. Once you know that, and that's all you need to know, you can give that information as the device name and the address, and the open-I2C-device call will create a connection for you. You can get a list of the I2C buses that are available on the device and read and write to it with the normal I2C APIs. It's a similar thing with SPI. Full-duplex and half-duplex communication are both available right now. And on different boards, when you are writing to it, always do the SPI bus list first. Sometimes what happens is, even if something is wrong with the SPI while Android Things is running, it might still show you that SPI1 is available; the bus list makes sure, because it actually goes and communicates with the device, so you know what's on it. Then you choose the device that you want to connect to, configure it, and so on. UART is also available, and you can configure it. There are, I think, two UARTs available on the Edison as well as the Joule. And if you have the console connected, sometimes the kernel logs can end up coming out on this device, so make sure you're configuring it the right way. And there's PWM. Any questions on this code? Yes. So, DP1 supported just Java applications; DP2 provided NDK support. The libraries that I'm going to talk about, UPM and MRAA, are actually written in C and C++, so we were waiting for the NDK support, so you could write C and C++ native code. In fact, the peripheral IO manager is written in C; it's just that in DP1 there were no APIs available for it. Not anymore. Android Things has the Java framework available, but with the NDK support you can also write applications that are not in Java. Yes. OK. So UPM, Useful Packages and Modules, is a collection of sensor and actuator libraries. This effort was started, I think, a couple of years ago, maybe three, and Intel was leading it.
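The I2C and UART flows just described might look roughly like this in the DP2-era Java API. The bus names "I2C1" and "UART1" and the 0x77 address and register numbers are illustrative only; check the bus lists and your sensor's data sheet for the real values:

```java
import com.google.android.things.pio.I2cDevice;
import com.google.android.things.pio.PeripheralManagerService;
import com.google.android.things.pio.UartDevice;
import java.io.IOException;

public class SensorIo {
    public static void readSensor() throws IOException {
        PeripheralManagerService manager = new PeripheralManagerService();

        // I2C: enumerate the buses, then open by bus name + device address.
        System.out.println(manager.getI2cBusList());
        I2cDevice sensor = manager.openI2cDevice("I2C1", 0x77);
        byte id = sensor.readRegByte(0xD0);      // e.g. a chip-ID register
        byte[] data = new byte[6];
        sensor.readRegBuffer(0xF7, data, data.length);
        sensor.close();                          // release the claim on the device

        // UART: open, configure the framing, then read/write raw bytes.
        UartDevice uart = manager.openUartDevice("UART1");
        uart.setBaudrate(9600);
        uart.setDataSize(8);
        uart.setParity(UartDevice.PARITY_NONE);
        uart.setStopBits(1);
        byte[] buf = new byte[64];
        int count = uart.read(buf, buf.length);
        uart.close();
    }
}
```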
There's large support in the community for multiple sensors. Some of them are just maker sensors; right now, they're going into industrial protocols and sensors. So there's open source support available for this. Now, we are planning to integrate this, and we're very close to it, to make it available on Android Things. These are all the benefits you get: you can write applications in C, C++, Java, all different languages. You can write three lines of code and have the sensor data available. All these protocols are supported, but because we are on Android Things, only part of them are available right now. It supports multiple sensors on multiple OSes, with over 250 different maker and industrial sensors. So I don't have an architecture slide for this, but the way UPM and MRAA used to work before Android Things is that you had an MRAA layer that acted the same way peripheral IO does, on an Ubuntu system, for example, and you had UPM on top of it. So MRAA would provide you with GPIO, analog, and all of those, and the UPM sensor libraries would provide what are now user drivers on Android Things. What's happened now is, since peripheral IO is available, MRAA is just a shim that calls into the peripheral IO APIs, and you get all the goodness that's available in UPM on Android Things. Well, these are the landing pages. I'd like to open a couple of them and show you how to go about it and where to find information. Let me go to that; I'll come back to it. The first link is basically Google's landing page for Android Things. There are two sections to it, hardware and SDK. The hardware section provides you with all the boards that are presently supported. This is also where you get the system image downloads; if you were at the previous presentation, Anisha mentioned that. And these are the two links where you can find what the device pinout looks like.
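The "three lines of code" claim about UPM looks something like this. The package and class names below follow UPM's SWIG-generated Java binding conventions, and the sensor (a BMP280 pressure sensor on I2C bus 0) and constructor arguments are illustrative assumptions; the exact names vary per sensor:

```java
public class UpmExample {
    public static void main(String[] args) {
        // Hypothetical shape of a UPM Java binding call; the native UPM/MRAA
        // library (which now shims onto peripheral IO) must be loaded first.
        upm_bmp280.BMP280 sensor = new upm_bmp280.BMP280(0 /* I2C bus */);
        sensor.update();                        // sample the sensor
        float tempC = sensor.getTemperature();  // three lines to sensor data
        System.out.println("Temperature: " + tempC + " C");
    }
}
```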
So you don't have to go and look at the data sheet. All you need to do is look at the Arduino pinout and the SparkFun pinout; it just shows you what's available. It's the same thing for the Joule and the rest of them. In the SDK part, the main things I'd like to point out are all the APIs that are supported. The native PIO just came in two weeks ago. And there are user drivers: these are drivers written by Google, and they are available on their GitHub. The contrib drivers are the user drivers that are available right now, and there's a bunch of them, maybe 10. But if we were to utilize UPM and all those sensors became available, then there would be 250 of them here. And I think we will come out with a white paper shortly on how to integrate these natively, because this is all C/C++ code, or even through Java, so you can just pull the AAR and start using it. So those are the links. There's also a link showing how different these boards are; these are the boards available. The rest of them are pretty much similar, but the Joule is much more powerful. Also, TensorFlow was just introduced on the Raspberry Pi, but soon, I think, you could run it on the Joule and have edge processing there. So those are the things that are coming up. There's a lot happening very quickly; I've tried to give fairly high-level information on peripheral IO because they are changing things underneath it very quickly. The Joule was the new one that's already released, but we are already working on another one. Yes, I think I have. I just did it today. Yeah. I can see it. You can see it. I'll check it again. All right, any more questions? Feel free to come and ask me any questions on the boards: how to connect them, how to play with them. Also, there is a demo going on upstairs, I think until 1 p.m., and there's a smart home kit that is running Android Things.
So that's only up until 1 p.m. All right, that's that.