Cool. All right. Take your seats. I know you're excited. The time for the State of AGL: Plumbing and Services has arrived. Just to start off, I'll introduce ourselves. I'm Scott Murray. I've been using Linux since 1996. It started off as a hobby, and then I was lucky enough to turn it into a career; I've been doing embedded Linux since 2000, with a couple of brief breaks to do other things. Today I'm a principal software engineer at Konsulko. And over to Matt. I'm Matt Porter. I've been a Linux user and developer since way back in '92, believe it or not, as a university student. Embedded Linux became my full-time job back in '99, and I'm currently the CTO of Konsulko Group.

A little syllabus for our class today. We're going to go through an overview of AGL and a little bit about the release history. Keep in mind that, through the magic of Linux Foundation scheduling, this is essentially a follow-on to Walt Miner's talk about AGL this morning; we're going to go more in-depth into the APIs and some of the guts. We'll talk a little bit about current and planned features at that level. We'll give you an overview of the build system and how things are organized, then some of the plumbing components — we borrow that term from the Linux Plumbers Conference for the ecosystem middleware, if you will, that we have in the Linux community. Then we'll look at the application framework in a little depth and get into the meat, which is the APIs, or bindings. Finally we'll look at the roadmap and talk about how you can get involved.

All right, overview. Automotive Grade Linux, if you didn't hear Walt Miner's talk, is an embedded Linux distribution targeting IVI and now ADAS products. Historically it was IVI, and the scope is expanding to cover what's needed in real products. It's based on the OpenEmbedded build system and the Yocto Project Poky reference distro.
One of the key things is a well-defined application framework for developing applications, to make that a lot simpler than the fairly random set of libraries we might work with on traditional Linux on a daily basis — and, of course, an SDK to go along with it. The goal is to provide a secure, sandboxed application runtime environment and a uniform set of APIs meeting developer needs. The purpose of all this is to provide a common base for real products in the automotive market.

All right, a little history of where AGL has come from and the features associated with the releases. You'll see these lovely cute fish-themed names. The first release was Agile Albacore, all the way back in 2016. It had some basic things like a MOST driver for audio and a few demo apps. Then Brilliant Blowfish in July 2016 — you'll see this trend of a release every six months. That's when the first version of the application framework came in, along with some audio routing functionality that came by way of the GENIVI Audio Manager framework. Charming Chinook brought in the first application framework bindings — actual APIs exposed to application developers in a well-defined way — for Bluetooth, Wi-Fi, and radio; you can see those are the sort of commodity things you need in a vehicle. The cross-compilation SDK first became available there, plus a lot of additional BSP support: it was about the time that interest was picking up and a lot more people wanted their boards supported. Fast-forward to Daring Dab, July this year. That brought an enhanced version of the application framework — it's what we're working against these days as developers — and some additional bindings, things that would strike you as necessary in a vehicle as well. Also Smart Device Link came in, which is another open-sourced application framework.
The next one is going to be Electric Eel. That's what we're working on in the master branch of AGL for any new development now, and it's targeted for January 2018. So right now we've got an application framework, some core APIs, audio routing, and some demo applications. What's in tree today is a set of QML-based applications, but as you may have heard in Walt's talk, the intention here is to be UI-independent. That's really a hard requirement in this market: everything needs to be UI-independent. We'll talk a little about that when we get into why and how we're doing these APIs. In the future we're going to have an audio API, media player API, window manager, homescreen, storage API, all these things. You'll see trends with these that map onto the types of APIs you see in popular mobile operating system ecosystems. We'll talk about that when we get into the meat of it. So I'll let Scott talk about the build system.

Yeah, so the build system and distribution organization aren't dramatically surprising to anyone who's familiar with OpenEmbedded and Yocto Project reference distributions. AGL is based on the Yocto Project Poky distribution and uses a set of layers on top of that. We've got OE-Core, which is the actual OpenEmbedded core — that's the basis of Poky. A bunch of the meta-openembedded layers are used to provide various utilities and daemons; that includes things like meta-networking, meta-perl, and meta-python, many of which are prerequisites for other things. The security aspect of AGL is provided by a couple of layers that are currently carried in the meta-intel-iot-security repository; we get the layers that give us SMACK support and Cynara from there. Those are the backbone of the security mechanism used by AGL. Then the actual meta-agl layers are contained in a repository together.
There's one for the app framework, which we've been talking about and which I'll discuss further as we go along. There's a BSP fix-up layer where we have some tweaks to the various vendor BSPs, or open-source BSPs, to smooth the rough patches over as we go along in our upgrade cycle. There's the actual distro layer for meta-agl, which has the distro configuration. And there's an IVI common layer, which contains a set of package groups that group sets of functionality together, so you can select them to provide different in-vehicle infotainment features in your final configuration. Matt mentioned that the demos are based on Qt and QML, so there's meta-qt5. Then, of course, the meta-agl-demo layer contains the demo applications that you get when you build a demo image of AGL. And the all-important BSP layers themselves get you board support. There are a dozen boards or more supported out of the box, using layers such as meta-freescale, meta-renesas for R-Car Gen3 — the current focus of AGL for board support is the Renesas R-Car M3 ultra-low-cost board or the H3 board — and meta-ti for the TI boards. There's also Raspberry Pi and a handful of i.MX6 boards, so there's definitely a lot of flexibility there. With this nice layer stack you can control the feature set and easily switch in another board. I'll hand over to Matt to talk about some of the plumbing and the services that are built on top of it.

Yeah, so Scott covered the layout of how these layers are separated. Just to highlight some of the key plumbing components in the distribution: it's systemd-based. When you run an application, each application is a service, so we're making use of systemd's ability to help us sandbox things.
One of the things coming in the future — and again, Walt already trumped us by mentioning it — is that AGL is considering moving to systemd's new dynamic users feature. On the audio side, we're using ALSA and PulseAudio. No big surprises there; that will continue to be a theme throughout this talk. You're going to see a lot of familiar faces. The one you may not be familiar with is the GENIVI Audio Manager. That's a project from the GENIVI organization that we've been using for policy-driven audio routing. It works in conjunction with PulseAudio — there's a PulseAudio plugin. If you've ever worked with the corking plugins in PulseAudio, they have very fixed, parameter-based policies. The Audio Manager instead lets you write a complex XML-based policy: there's a big rules engine that operates with a router module that plugs into PulseAudio and allows you to do dynamic policies for corking and so forth. On the graphics side, it's purely a Wayland/Weston architecture. The key differentiator from what those of you running Weston on your desktop, say on Fedora now, will know is that it's using ivi-shell. One of the unique things about ivi-shell for in-vehicle infotainment is that it works in conjunction with the layer manager. A unique requirement of the automotive industry is to be able to separate aspects of an application into layers. The great example people always use is a nav app: the actual back-end engine and the graphics rendering may be a completely different engine from the UI decorations. So they might have one layer, which maps to a surface in Wayland, exposing the graphics of the map being rendered with turn-by-turn directions, but have the decorations and controls on another layer. A proprietary piece might do the map rendering, and the rest is the UI. That's what ivi-shell is for.
Bluetooth: BlueZ 5. Location services: we're using gpsd and GeoClue — we'll talk more later about how those are used in depth. Telephony: oFono. Networking: ConnMan and wpa_supplicant. Looks a lot like a desktop distro, right? Yeah, all these familiar friends. All right, I'll let Scott introduce the application framework.

So Matt just described the actual plumbing. The application framework is what AGL uses to contain that and expose it in a controlled way, with declared interfaces that people can write against, so they don't have to worry about hitting the low-level interfaces of these different daemons. So what is the application framework? It provides a sandboxed application runtime environment, as Matt has already said, and it implements a complete application lifecycle covering install, startup, and potentially upgrade. Using systemd, cgroups, SMACK, and Cynara, it provides a secure runtime environment, and there's also a Cynara-enabled D-Bus daemon that's used to enforce some aspects of the Cynara security policies. This is all controlled through a WebSocket interface to the bindings that define the APIs — that's how applications actually talk to each other and to the underlying implementations of the application APIs. There's a W3C widget specification that's being used by AGL for packaging applications, and in the configuration of a widget, applications declare which binding APIs they require and which they provide. That's how things are connected together and work as a system. There are a couple of links on that slide to a lot more information about the widgets and a high-level description of the application framework. There's quite a bit of documentation on the docs.automotivelinux.org site.
If you have any more interest in this, there's quite a bit of material there for you to dig into and do some deep diving. So, binding overview. The bindings exist to abstract the UI from the back-end implementation. As Matt already alluded to, this allows you to replace the UI with your own custom one or switch to a different UI toolkit. The existing demo apps use QML and Qt, but you could do an HTML5 UI, or a completely native toolkit of your own choice — the mechanism allows that, and it lets you reuse your entire back end. The mechanism also gives you fine-grained security control over what applications are able to talk to, providing levels of access control so that applications can't talk to parts of the API you don't expect them to. That's done with the SMACK and Cynara mechanisms. The end goal is to provide a complete and consistent API. We want people to be able to develop apps for AGL and know that they'll keep working going forward — or at least that there will be versioning of the binding APIs, so they can easily upgrade and have their apps work on the different AGL-compatible platforms that people build with the AGL distribution. Once again, if you're looking for information, there's quite a bit of documentation on how the bindings work.

Just a quick blurb on how bindings are put together. The actual implementation of a binding is a shared library. There's a basic API for the bindings themselves, and you provide this information when you register a binding: your implementation provides a name; a list of binding verbs, which are the actions the binding supports; and the implementation of the verbs and events — the actual back-end logic that implements the verbs.
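Real bindings are native shared libraries, but the registration shape just described — a name plus a table of verbs mapped to handler callbacks — can be sketched language-neutrally like this (all names here are invented for illustration, not the actual binding API):

```python
def ping(args):
    """A trivial verb handler: echo back what it was called with."""
    return {"status": "success", "response": {"pong": args}}

# A binding, conceptually: a name plus a verb table mapping
# verb names to the callbacks that implement them.
binding = {
    "name": "example",
    "verbs": {"ping": ping},
}

def call(binding, verb, args):
    """Dispatch a verb call the way the framework routes requests."""
    handler = binding["verbs"].get(verb)
    if handler is None:
        return {"status": "error", "info": "unknown verb"}
    return handler(args)

result = call(binding, "ping", {"n": 1})
```

The point of the table is that the framework can route any incoming request to the right handler purely by name, which is why verbs show up as strings on the wire.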
As well, there's a pre-init and an init, to give you a couple of levels of initialization. Init is what happens when an application connects; pre-init happens when you're starting up and the binding gets loaded. There's now a specification, in the newer version 2 of the binding framework, that allows you to describe the API with OpenAPI, which allows some degree of introspection: your application could take that description, parse it, and work out what the binding exposes. There's a textual description as well. There's also some event-handling machinery related to tracing and profiling of the binding API — there's an extra callback in there for that. And version 2 adds a "no concurrency" flag. The binding APIs are pretty much fully asynchronous, except that if you set no concurrency, verb calls into the API will be serialized for that application. That simplifies application development in some situations: if you're writing a very simple app, you might not want to worry about a lot of asynchronous programming. In general, though, this isn't recommended. You should be prepared to have your app receive a whole bunch of asynchronous events, because that's the world we live in today — pretty much anything complicated at all has to handle asynchronous behavior, and by default that's the behavior the binding APIs expose.

To continue: the application and binding packaging, in the widget format, includes an XML config file which gives a description of the application — the name, the author, the license, and so on — and lists the permissions that the package requires, the bindings it requires, and the bindings it provides.
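Conceptually, a widget's config.xml looks something like the following. This is a hand-written illustration based on the W3C widget format the talk mentions, not copied from an actual AGL package, so treat the feature and parameter names as placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="http://www.w3.org/ns/widgets" id="example-app" version="0.1">
  <name>Example App</name>
  <!-- Author, license, and entry point for the application -->
  <author>Example Author</author>
  <license>APL2.0</license>
  <content src="index.html" type="text/html"/>
  <!-- Placeholder: declares a binding API this app requires -->
  <feature name="urn:AGL:widget:required-api">
    <param name="example-binding" value="ws"/>
  </feature>
</widget>
```

The important part is the declaration of required and provided APIs: that is what the framework reads to connect applications to bindings.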
When an application is started by the application framework, the framework spawns an afb-daemon instance, which loads and initializes the bindings that the application declares it requires and provides, then executes the application, passing port number and authentication token arguments to the spawned application so it can communicate with the binding daemon. It's important to remember that, with the architecture of the binding framework, every instance of a binding is separate: if one application loads a binding and another application loads the same binding, there are two separate instances — basically two separate loads of the shared library. So if a binding provides access to a shared resource, you do have to worry about concurrency control; you'll have to implement a mechanism for that in the binding, potentially with things like D-Bus or other IPC. This is a very quick run-through of application bindings. There's a lot more detail about this and how you actually implement a binding to expose functionality for application use, and, once again, quite a bit of documentation on the docs site.

And a quick run-through of how the bindings are used in an app. The interface is through HTTP requests or a WebSocket, and it's all done with JSON, which is pretty commonly used now in web application development. Here's an example of what a sample request looks like: in this case it's looking to set the temperature on the driver's side to 16 degrees, and the HVAC set operation is a particular binding verb. The request and response are both JSON, so you get an elaborate response back that you can decode — you might get a more detailed error message in the JSON format, and it can be structured. As well, you can subscribe and unsubscribe to events, and the events also arrive through the WebSocket as JSON, but asynchronously.
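To make the request/response shape concrete, here is a small sketch of building and decoding such JSON payloads. The exact wire framing afb-daemon uses over the WebSocket differs, and the field names here are invented, so treat this purely as an illustration of the pattern:

```python
import json

def make_request(api, verb, args):
    """Build a JSON request payload for a hypothetical api/verb call."""
    return json.dumps({"api": api, "verb": verb, "args": args})

def parse_response(raw):
    """Decode a JSON response and pull out a structured status plus body."""
    msg = json.loads(raw)
    status = msg.get("request", {}).get("status", "unknown")
    return status, msg.get("response")

# Hypothetical HVAC call: set the driver-side temperature to 16 degrees.
req = make_request("hvac", "set", {"LeftTemperature": 16})
status, body = parse_response('{"request": {"status": "success"}, "response": {}}')
```

Because both directions are plain JSON, the same decode path handles call responses and asynchronously arriving events.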
Once again, there's a lot more detail available. Based on this mechanism, there are now quite a few APIs available in AGL. Matt's going to run through quickly — given we're running out of time — what bindings are available today, with a quick blurb on each one.

You can see this big list of what's upstream already, and you can imagine there's lots more to go; we've talked a little about where things need to go, and you can imagine some of it. We'll go through these pretty quickly. First I want to point out that if you're paying careful attention, you'll realize that all we're doing, at the end of the day, is a glorified wrapper around common libraries — though it's always the details that matter. We're building a shared library that does JSON-based WebSocket transactions and talks to some sort of middleware that we've exposed or have in the base image. So we have all the ones I just showed that are upstream right now; some of them are still works in progress, like everything in the world. And then there are the work-in-progress bindings: audio bindings are still being worked on and planned to come in, along with new homescreen and window manager bindings — a new revision of those — and CAN bindings.

All right, let's talk about the main application-framework binding. This is kind of the root of all the goodness, if you will. This binding manages the application lifecycle: in any cohesive application framework you have some basic operations you need to be able to do on a third-party set of applications — install, uninstall, start, terminate, pause, resume, all those kinds of things. Very straightforward. If we look at the binding APIs, you're going to see this concept of verbs. The verbs are the calls we can make; you'll see this throughout all these examples, and you'll see some definite patterns.
This binding is a wrapper, in this case, around the AGL-specific application framework. To accomplish all those things, you make calls through JSON over the WebSocket to the application framework: you can get the state of something, install or uninstall packages, and so forth. There are no events here. As Scott mentioned, things work asynchronously, and the responses are asynchronous, but there aren't any events defined for these — they're all call/response-type actions.

Let's jump into something that's more of a fundamental connectivity thing. What we have today — and this is still going through some iterations as we mature and have more things depending on it — is the Bluetooth binding. It does exactly what you'd expect: device discovery, pairing, connection settings. We also have needs that are more at a use-case level: you get in the vehicle with multiple phones paired, two of them present in the vehicle, and it needs to know which one to connect to. So you need a device priority list exposed, and there are verbs to manipulate and pull from that; I'll explain later how that's used by another binding. AVRCP controls are managed through the Bluetooth binding — that's their home for now. Things can get reorganized; like any open source project, we're going to continue to evolve, and we may split some things out into a separate binding. Media metadata and position tracking are also housed here. For future work, we need to do some cleanup in this binding. And this is what it looks like — no real surprises. Everything I said is there: you can deal with the rfkill interface through the power verb, start and stop discovery, and you see the connection verbs. When I mention cleanup: there are a number of places where the older bindings don't follow the model of a single verb with a getter/setter-type API.
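As an aside, the device-priority idea mentioned above can be sketched very simply — this is purely an illustration with invented names, not the binding's actual logic:

```python
def pick_device(priority_list, present_devices):
    """Return the highest-priority paired device that is currently
    present in the vehicle, or None if none of them are around.
    priority_list is ordered most- to least-preferred."""
    present = set(present_devices)
    for addr in priority_list:
        if addr in present:
            return addr
    return None

# Two paired phones are present; the first one on the priority list wins.
choice = pick_device(
    ["AA:00:00:00:00:01", "AA:00:00:00:00:02"],
    {"AA:00:00:00:00:02", "AA:00:00:00:00:01"},
)
```

Exposing the list through verbs means an app (or another binding) can reorder it, and the connection logic stays a simple first-match walk.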
So you'll see some of those anomalies in the Bluetooth and Wi-Fi bindings; that's why I mentioned some cleanup is necessary. But when you clean up, you also have to fix the apps. This is one of those closer-to-production bindings dealing with real I/O that have events: we need to be able to process an event, while running an app, that a device just showed up. What if Bluetooth was off on your phone when you got in the car? The app needs an event when that device reappears so it can go and connect. All the good connectivity-type bindings are event-driven like this.

So, the Wi-Fi binding — again, no surprises. And I should mention that the Bluetooth binding's big dependency is obviously BlueZ; that's what it wraps, via the D-Bus API. The Wi-Fi binding discovers Wi-Fi access points, connects and disconnects, can handle WPA2 passkey input — that's all done through D-Bus — and gathers status. It also manages the network connection, so right now it's kind of a Wi-Fi binding and a network-manager binding at once. For future work, it needs a little cleanup; it probably needs to be split into a network-bearer-management-type binding plus providers, where Wi-Fi is one and WAN might be another, for example — that's the logical separation going forward. The API looks a bit like this. Again, this is one where scan could become a single getter/setter verb after we clean it up a little. One thing I haven't mentioned is that on every binding that has events, you'll see a subscribe and an unsubscribe. The client can be either an application or another binding — you have the ability to stack bindings. You have these shared libraries, and one binding can stack on another: if we had a network-bearer-management binding, it could depend on the Wi-Fi binding and the WAN binding, for example.
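The subscribe/unsubscribe pattern that keeps coming up can be sketched in miniature — again, just an illustration of the event-dispatch idea, not AGL code:

```python
class EventHub:
    """Minimal subscribe/unsubscribe event dispatcher, mimicking the
    pattern the bindings expose over their WebSocket interface."""
    def __init__(self):
        self.subscribers = {}          # event name -> list of callbacks

    def subscribe(self, event, callback):
        self.subscribers.setdefault(event, []).append(callback)

    def unsubscribe(self, event, callback):
        self.subscribers.get(event, []).remove(callback)

    def emit(self, event, data):
        # Deliver asynchronously-arriving data to every subscriber.
        for cb in self.subscribers.get(event, []):
            cb(data)

hub = EventHub()
seen = []
hub.subscribe("station_found", seen.append)
hub.emit("station_found", {"frequency": 101900000})
```

The client on the other end of a subscription can just as easily be another binding as an application, which is exactly how a stacked bearer-management binding would consume Wi-Fi events.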
And so a network-bearer-management binding might subscribe to those events for the network list and so forth to manage Wi-Fi access points.

The radio binding — this is a radio tuner binding, conventional old-school over-the-air radio, believe it or not. We've got to have that, right? Right now it's based on the RTL-SDR code, and there are a number of features specific to the current demo apps that aren't super relevant here. The important thing is that it supports an RTL-SDR dongle, mostly because there aren't a lot of good commodity AM/FM tuners with interfaces you can easily get at, so we use that to drive development. In the future it'd be nice to add support for real tuner hardware on some of the actual automotive platforms, plus metadata: RDS support and HD Radio tuner support are obvious paths to go. Right now we just do AM/FM. The API looks like this. Again, you'll see subscribe and unsubscribe later in the verb list; we use that same model throughout all the APIs. Say we start a scan with scan start: that's asynchronous — depending on where you are, it could take five seconds to scan through and maybe not even find anything. So it's event-driven: you get a station-found event back when it finds a station, and then you can update your UI and so forth. Everything else is pretty much straightforward.

Moving on to the telephony binding. This was one of the earlier bindings that's what we'd call stackable — it's a client of another binding. It does Bluetooth HFP support and does what you'd expect: originate and answer calls, get status on a call — maybe the remote party hung up — and get information like the CLIP and COLP number identification. It depends on oFono, BlueZ, and PulseAudio; oFono is the actual voice call agent, working in conjunction with BlueZ and PulseAudio.
We have some more features we're working on for this: being able to send dial tones in-call, call waiting, hold and forwarding, and then voice modem support for those WAN modems that support voice calls. This is the current API: we can dial, hang up, answer, and then everything is event-driven — we have to be able to get the event that an incoming call is there, so we can pop up either an answer or a decline button. When you're doing a phone app, like the one we've had to modify, those are the types of things you handle in the app, the way Scott was showing, by processing these events through the API.

There's a media scanner binding, and that back-ends onto LightMediaScanner — again, more common middleware from the Linux world. That binding simply scans removable media, keeps a database we can access, and raises events when new data is available. It's a very simple one: you just subscribe to the events, and it lets you know when media is added or removed, so you can sort that into your playlists and do what you need to do in your specific media player application.

There's a new media player binding. This is in very early development, but because everything's done incrementally and in the open, we have an early version of it upstream; it's simply playback and control, and it depends on GStreamer. Up until now — well, now it's being integrated into the demo media player app — all of our media playback in the actual demo layer, the meta-agl-demo apps, was done with the QML media player: a QMediaPlayer object doing all the playback. We're decoupling those things, part of this theme of abstracting the UI from the actual back end. In the future we'll add video playback to this binding. It's very simple: you can set up a playlist for it, and you can get the current state of the playlist and modify it.
You can get metadata, and then you subscribe to events: as the playlist changes, you get an event, and you get metadata events that tell you the position and duration of a track as it plays, because you need that to update the position in the UI in a visual sense.

OK, and then the next set: we have a whole bunch of location-based services. Simple wrappers, again. The GPS binding just wraps gpsd, so it's the exact same set of GNSS data you'd get from the gpsd protocol: latitude, longitude, altitude, speed, and time. These ones are very simple: you have a simple getter, you can subscribe to events, and then you get that same data on the location event. So you have the ability to poll for it, but most usage models dictate that you'll subscribe and just let it give you regular updates of the location data.

A follow-on to this one is the GeoClue binding. You might detect a little overlap here if you know what GeoClue does. GeoClue provides location data as well — almost the same set — and it also adds heading data. But what's important about GeoClue is that it expands the realm of providers of location data to what modern systems require. We don't always have a fix on enough satellites; we might be inside a building. GeoClue can gather location data from Wi-Fi access point databases, 3GPP cell tower information, GeoIP databases, and GPS as well — so there's a little overlap with the standalone GPS binding there. It has almost the exact same API — well, it is the exact same except for the actual parameters, and you also get a heading out of it, depending on how well GeoClue can determine that. This was added to better support location services the way most modern mobile operating systems do. And on top of that, there's another stacked binding: a geofence binding.
A critical part of modern mobile location-services APIs is the ability to add a bounding box and track entry and exit events for that box. There's also the concept of a dwell status: dwelling means that, based on a timeout after entering a bounding box, you generate a special event. The use case that drives this — you may see it in your favorite mobile operating system — is something noticing that you've arrived at home and triggering some behavior, typically based on a timeout. So a dwell indicator might be set at 10 minutes, and then you get an event, so you can policy-drive that with this API through these interfaces. The way it works is you take min and max latitude and longitude and add a fence with those parameters, so you've got a bounding box. You can remove a fence, list the fences, and set the dwell transition time before that event happens — and, of course, subscribe and unsubscribe to the events. A geofence event will tell you "I've entered" or "I've exited" one of these fences, or "we just hit dwell," and it tells you which fence and so forth. One probable piece of future work: right now the dwell transition time is fixed across all the fences, and we'll probably add per-fence dwell transition timing as one of the big things.

OK, let's talk about next steps quickly. In addition to some of the cleanups and feature changes Matt just described for the bindings, the roadmap includes additions such as Bluetooth PBAP support, which would basically add a contacts database from your phone through the Bluetooth binding, so the telephony app can bring up the name of a caller and do the typical caller ID you get on a mobile device today — actually have that as part of the AGL API. Also: complete the media player binding, actually start integrating it with the demo apps, and add video support.
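Going back to the geofence binding for a moment, the bounding-box and dwell logic described above can be sketched like this — invented names, just an illustration of the concept, not the binding's implementation:

```python
class Geofence:
    """Track entry, exit, and dwell for one lat/lon bounding box."""
    def __init__(self, min_lat, max_lat, min_lon, max_lon, dwell_secs=600):
        self.box = (min_lat, max_lat, min_lon, max_lon)
        self.dwell_secs = dwell_secs
        self.entered_at = None         # timestamp of the last entry, if inside
        self.dwell_sent = False

    def contains(self, lat, lon):
        min_lat, max_lat, min_lon, max_lon = self.box
        return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

    def update(self, lat, lon, now):
        """Feed a position fix; return 'enter', 'exit', 'dwell', or None."""
        inside = self.contains(lat, lon)
        if inside and self.entered_at is None:
            self.entered_at = now
            self.dwell_sent = False
            return "enter"
        if not inside and self.entered_at is not None:
            self.entered_at = None
            return "exit"
        if inside and not self.dwell_sent and now - self.entered_at >= self.dwell_secs:
            self.dwell_sent = True
            return "dwell"
        return None

# Arrive inside the box at t=0; the dwell event fires once the
# 600-second transition time has elapsed.
home = Geofence(45.0, 45.1, -122.1, -122.0, dwell_secs=600)
events = [home.update(45.05, -122.05, t) for t in (0, 300, 700)]
```

Making `dwell_secs` a per-instance value is exactly the per-fence dwell timing mentioned as future work; today the binding uses one shared transition time.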
Basically, get that working, hopefully for early next year. It's a very big ask that comes up very commonly about AGL: actually having video playback as part of the API. Adding speech recognition and text-to-speech bindings has also come up recently, at the AGL member meeting last week. There are some new member companies that provide the basic libraries for this, as well as a couple of big open-source projects. So: actually building an API interface for that, having a binding you could use in a demo app or as part of your product to do the very common things you see in a car today — incoming text being translated into speech, and vice versa. That's a feature that's really required in AGL going forward. Matt mentioned the WAN support: actually doing a WAN binding, and doing the work to integrate voice calls if the modem has voice support. And, as we mentioned, the audio bindings. This would be work to build a first-class interface for audio in AGL that application developers can work against, and — very likely — to pull out the Audio Manager as it stands today, or refactor it into something that works a bit better with the AGL application framework. That's hopefully coming quite soon, and applications can start to convert over and use that API. And there's a new homescreen and window manager binding that's going to allow much more sophisticated homescreen behavior than some of the existing demo apps. There have been some external demos with quite fancy multi-screen support; that's going to become a first-class citizen of the AGL API, hopefully very soon, and we'll hopefully see much more sophisticated demos upstream.

And so, if you want to get involved — do you want to take over? So, the community has a lot of support channels: an IRC channel on Freenode, and the mailing list.
There's a weekly developer call that anybody can join if they want to take part, pose a question, or ask about an issue they have. There's an open JIRA and Gerrit: if you have a Linux Foundation ID, which anyone can create, you can file a JIRA issue against AGL if you have a feature request or a bug, and if you actually want to upload a change, you can do that through Gerrit. And of course, we've mentioned the docs site. There's also a nice wiki that has some getting-started guides and information about the releases — feel free to check those out and find out a lot more about AGL; there are a couple of nice links there to the getting-started material. And come join us: like every open source project, we desperately need more developers. We're lonely — come join us on IRC and such. So, we're over time, of course. Real quick: Konsulko Group is hiring. We're looking for engineers; come talk to us — a gratuitous ad here. But more importantly, come to the technology showcase tomorrow evening: we're demoing two AGL platforms with some of these bindings and apps, so you can actually see the thing for real instead of us talking about it. Thanks. Thank you. We can take questions outside — yeah, if you have questions, we'll be outside.