Let's try this again. That's better. So we basically decided to focus on integrating with the web app, both inside the vehicle itself, running on the center console touchscreen, and also on the back-end web app. We also do CAN bus management to actually control the climate control system and set fan speed, temperature, et cetera. And we obviously do the telematics link. We deployed one of our telematics servers on a Linux Foundation box in the cloud to establish communication between the web side of things and the in-vehicle side of things.

So, just briefly going through what we actually did: everything north of Exoport is inside the vehicle; everything south of Exoport is on the Linux Foundation box on the web. The communication link itself was a quick and dirty one. It's a standard 4G hotspot setup, no magic at all. We did not do any kind of integrated 3G dongle or anything like that inside the telematics box in the vehicle, because we were short of time, and this was the easiest thing to do.

So we have two main use cases here. In the in-vehicle web app, developed by Timo — coming up soon as well, by Cymbio — you set the fan speed, for example. We send a JSON-RPC command over HTTP to our Exosense device running inside the vehicle, which forwards it to a demo app. And this is a couple of hundred lines of code; it's very, very small and compact. The demo app emits a CAN message to the HVAC system to set the fan speed itself. It also forwards the command to the Exosense server, which forwards it to the back-end web app to update the interface that you're running on your iPad or your laptop or whatever. And the converse is also true: you can go from the back-end web app, set the fan speed there, and it goes over JSON-RPC to the Exosense server. Since the vehicle is always online, we immediately forward it to the vehicle, emit the CAN frame, and also update the in-vehicle user interface.
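The fan-speed round trip described above can be sketched as a plain JSON-RPC-over-HTTP call. The method name `hvac.set_fan_speed` and the parameter names here are hypothetical — the talk does not give the actual Exosense RPC interface — but the envelope is standard JSON-RPC 2.0:

```python
import json

def make_fan_speed_request(speed, request_id=1):
    """Build a JSON-RPC 2.0 request to set the HVAC fan speed.

    The method and parameter names are illustrative; the real
    Exosense RPC interface is not shown in the talk.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "hvac.set_fan_speed",   # hypothetical method name
        "params": {"speed": speed},       # e.g. a fan step 0..7
    })

# The in-vehicle web app would POST this to the local Exosense device,
# e.g. with urllib.request, and the device forwards it to the demo app:
#   urllib.request.urlopen("http://localhost:8080/rpc", data=req.encode())
req = make_fan_speed_request(3)
print(req)
```

The same envelope works in the reverse direction, from the back-end web app to the Exosense server, which is what keeps the two user interfaces in sync.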
So what we did was, again, focus on the core telematics side: integrating between the two different user interfaces that we run in parallel, and making sure that the commands sent by those user interfaces were forwarded correctly to the CAN bus and the HVAC system. Matt Jones provided us with the CAN database, which was actually only a single frame and a bunch of bit fields. We both listen for and emit CAN frames there, and it was maybe six or seven hours of work to get that up and running.

So basically, what we do at Feuerlabs — to move on a bit, and why we're approaching AGL and JLR — is manage connected devices on an industrial scale. We do device management, we do traffic management. We route traffic to devices in hard environments such as telematics, where links may or may not be available, there may be multipath issues, there may be cost constraints on how much data you can send in a month, et cetera. Our product suite is divided into two parts. We have the Exosense device stack, which you just saw in action on the previous slide, which is MPLv2 open source and available for download and use. And we have the Exosense server itself, which is basically the device and traffic manager. Now, I have to be honest with you and say the Exosense server is closed source, and is licensed by us on a binary basis or hosted by us. But the communication protocol between the Exosense device and the Exosense server, called Exoport, is open source, and you're free to implement it yourself, and free to implement your own server should you be unhappy with us. We think that's a dumb idea, because our server is much, much better, but you're free to do so. And what we want to do as a company, especially in this crowd, is lower the barrier of entry to setting up your vehicle or your connected device.
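A CAN database that is a single frame plus bit fields means the demo app mostly packs and unpacks bytes. Here is a minimal sketch with an invented bit layout (the actual JLR frame layout is not given in the talk): fan speed in the low four bits of byte 0, target temperature in half-degrees in byte 1.

```python
def pack_hvac_frame(fan_speed, temp_c):
    """Pack HVAC settings into an 8-byte CAN payload.

    The bit layout is invented for illustration:
      byte 0, bits 0-3: fan speed (0-15)
      byte 1:           target temperature in half-degrees Celsius
    """
    data = bytearray(8)
    data[0] = fan_speed & 0x0F
    data[1] = int(temp_c * 2) & 0xFF
    return bytes(data)

def unpack_hvac_frame(data):
    """Decode the same invented layout back into a dict."""
    return {"fan_speed": data[0] & 0x0F, "temp_c": data[1] / 2}

payload = pack_hvac_frame(fan_speed=3, temp_c=21.5)
# On Linux the frame would go out over SocketCAN (socket.AF_CAN);
# here we just round-trip the payload.
print(unpack_hvac_frame(payload))
```

Both directions are needed because the stack listens to the bus as well as emitting on it.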
We want to be able to provide you with reference hardware, running Tizen or whatever you want, so you can download our software stack, set up a demo account on our free server, and be up and running — able to touch your device and push software to it over the air — within a day after you receive the reference hardware. Or, if you run on a VirtualBox, just provision it, set it up, connect it, and get going.

So if we look at the Exosense device stack itself: it's Mozilla Public License v2, and we have a bunch of reference hardware. The build is done using Yocto, which either produces RPMs that go straight into Tizen, or produces complete images for whatever Yocto hardware support you have. And we have a few reference hardware platforms we like, from Invest among others, that actually have built-in CAN chipsets on them, which makes them very easy for us to integrate. They're also automotive rated, which means we can take this one step beyond the demo when deployed in pilots as well. Even if we use Erlang as our core language — I'll get back to that — we can support pretty much any language through a D-Bus interface that we have, so that your local application, running in a telematics device in Tizen, for example, can receive RPC calls from the server and can access a local configuration repository from the server. These applications can also be upgraded over the air from the server. We provide full security through authentication and bidirectionally encrypted RPCs, using OpenSSL or similar. This depends a bit on what communication link you're using; if you're using SMS, for example, which is a standard case, you need to tweak that a bit.

So if we look at the overview — this is a bit outdated, so some of the terminology has changed since. On your right side, you have the Exosense server, and you have your server application, which is usually a web server or similar, that communicates with the JSON router internally through JSON-RPC.
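On the device side, the "any language over D-Bus" idea boils down to the local application registering handlers for RPC methods arriving from the server. A toy dispatcher, with plain function calls standing in for the real D-Bus transport and with invented method names:

```python
import json

class RpcDispatcher:
    """Toy dispatcher showing how a device application might register
    handlers for RPC calls arriving from the server. In the real stack
    this transport is D-Bus; here it is a plain function call."""

    def __init__(self):
        self.handlers = {}

    def register(self, method, fn):
        self.handlers[method] = fn

    def dispatch(self, raw):
        req = json.loads(raw)
        result = self.handlers[req["method"]](**req.get("params", {}))
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

d = RpcDispatcher()
d.register("hvac.set_fan_speed", lambda speed: {"ok": True, "speed": speed})
resp = d.dispatch(
    '{"jsonrpc": "2.0", "id": 7, "method": "hvac.set_fan_speed", "params": {"speed": 2}}'
)
print(resp)
```

The point of the D-Bus layer is exactly this decoupling: the stack stays in Erlang while the handler can be C, Python, or anything with a D-Bus binding.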
Now, the JSON router can then forward RPC commands to the Exosense device, which forwards them to the device application. And it's full store-and-forward on the server side: if the device is not available right now, the server holds the command until the device becomes available, and then forwards it. We can also monitor data — which I'll get back to — through probes. We also have full configuration management, so that you, from a central point, can configure and manage a couple of million devices without any major issues. There's obviously overhead with that many devices, but the capability is definitely there. And that means you don't have to worry about local configuration and how you're going to configure one specific device out there. We can push out the configuration data and make it accessible from different configuration systems — if you need to set up your Apache server or something else, we can basically export config data locally. We have data acquisition modules for A/D converters, GPIOs, et cetera, so we can sample. We obviously have CAN bus integration. And your device app has full access to all these components.

And if we dive into the stack — again, this is outdated; some things have been added, some things have not been implemented yet. At the bottom, we have the Linux kernel with our device drivers and file systems. We run Erlang at the bottom, and there's a long, separate seminar on why that's a good idea. And then we have basically a bunch of components, and when you build the Exosense device stack, you can specify exactly what you need in order to minimize your footprint, so you don't have a lot of bloat. We have a graphics manager, which some people may want if it's a more embedded graphics setup, a weaker system that can't run a full HTML5 stack. Configuration management, I mentioned. We have NMEA 0183. We have CAN bus support through SocketCAN, and also an actual CANopen implementation, open source as well, which we believe is a first.
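The store-and-forward behaviour described above is simple to sketch: commands for an offline device are queued on the server and flushed, in order, when the device reconnects. This is an illustrative model, not the Exosense implementation:

```python
from collections import deque

class StoreAndForward:
    """Sketch of server-side store-and-forward: commands addressed to
    an offline device are queued and flushed when it reconnects."""

    def __init__(self):
        self.online = False
        self.queue = deque()
        self.delivered = []      # stands in for actual transmission

    def send(self, command):
        if self.online:
            self.delivered.append(command)
        else:
            self.queue.append(command)

    def device_connected(self):
        self.online = True
        while self.queue:        # flush in arrival order
            self.delivered.append(self.queue.popleft())

s = StoreAndForward()
s.send("set_fan_speed 3")        # device offline: queued
s.send("set_temp 21.5")
s.device_connected()             # both commands go out now
print(s.delivered)
```

For telematics links that come and go, this queueing is what lets the user interfaces stay fire-and-forget.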
We have the full back-end server communication with encryption and everything, data acquisition, I2C (which we haven't implemented yet), RPC management, and monitoring. And on top of this, you can write your application either in Erlang or, through D-Bus access, in any language you want — C, Python, or whatever you fancy.

Briefly, the server stack. Since this is not open source, this is the only slide. It's the device and traffic manager. We also do package and configuration management, so you can upload packages to a repository on your server and push them out to the devices over the air, make sure they get installed correctly, and also update the internal database so you know what is running on which devices. We can handle mixed assets: if you have several different device types, with different hardware revisions, vendors, protocols, and communication facets, we handle that as well. We know what your data plan is, so you can avoid getting dinged by overage charges if you have to push out a big firmware upgrade, for example. And all the interaction with this goes through a JSON-RPC admin interface — RPC calls to be forwarded to devices, et cetera. So it's a single point of interface.

Now I'm going to make an example of how to use this with a recall case study, which is based on a real-world recall that was done a couple of years ago — and I think, if you remember back, you know what I'm talking about. Basically, we're looking at a case study where you have a deployed fleet of vehicles — a million vehicles, for example — and suddenly you start to get intermittent reports of failures; brake failures, for example: "I pressed the brakes and nothing happened," or "I hit the throttle and nothing happened." You cannot recreate it. Somebody crashed, you got the vehicle in, and all the logs are clear — no error codes, no events, nothing out of the ordinary. So you don't really know what's going on, and you don't really know if it's software, hardware, or mechanical. And worst of all, you have uncertainty.
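The data-plan awareness mentioned above amounts to a budgeting check before a large over-the-air push. A minimal sketch, with an invented cost model and a safety reserve for normal traffic (the real server's accounting is not described in the talk):

```python
def plan_allows(push_bytes, used_bytes, plan_bytes, reserve=0.1):
    """Decide whether a firmware push fits within a device's monthly
    data plan, keeping a fractional reserve for routine traffic.
    The model and numbers are illustrative only."""
    budget = plan_bytes * (1 - reserve) - used_bytes
    return push_bytes <= budget

# 50 MB plan, 10 MB already used, 10% reserve: a 30 MB firmware image
# still fits, a 40 MB image does not.
print(plan_allows(30e6, 10e6, 50e6))
print(plan_allows(40e6, 10e6, 50e6))
```

In a mixed-asset fleet this check would run per device type, since plans and link costs differ between vendors and carriers.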
You don't know what to tell the press when they start to see this and start to call you, saying, hey, what's going on? There are like five people who claim that the brakes didn't work. And you have very little data to go on. You get the occasional crashed vehicle into your lab, you disassemble it, but you can't find anything wrong — everything looks OK. And there's obviously huge pressure on engineering in this case: how are we going to solve this and get it out?

So what we can do, if this happens — let's say that you have an issue with the TPS, the throttle position sensor — is design a monitor package, or probe, that sniffs passively on the CAN bus, waiting for an anomalous situation to occur, such as both the brake pedal and the throttle pedal being depressed at the same time, which is a typical case; that normally never happens. When the probe detects this inside the vehicle, it starts recording everything — just a big debug dump of all the CAN data — dumping it into local storage for later transmission. You design that probe, you drop it into the telematics server, the Exosense server, and you transmit it to the vehicle over the air while the owner is driving. It now silently sits and monitors the CAN bus in read-only mode, looking for this anomalous situation where both the brake pedal and the throttle are depressed at the same time. When that happens, we have a fault detection, and the telematics device, through its probe, starts to record everything that's going on. Thirty or forty seconds later, or when everything is released, it immediately calls home and says: vehicle VIN 1234 had this issue; here's all the CAN data from all the sensors as the issue was happening. And now you suddenly have two megs of data from a situation that definitely should not happen.
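The probe logic above — watch passively, keep a rolling window, trip on the brake-plus-throttle anomaly — can be sketched in a few lines. Field names, window size, and thresholds are invented; the real probes are deployed as packages into the Exosense stack:

```python
from collections import deque

class BrakeThrottleProbe:
    """Sketch of a read-only CAN probe: keep a rolling window of recent
    samples and flag when brake and throttle are pressed together.
    Signal names and thresholds are invented for illustration."""

    def __init__(self, window=100):
        self.history = deque(maxlen=window)   # rolling pre-fault context
        self.fault = False

    def on_sample(self, sample):
        self.history.append(sample)
        # The anomaly: both pedals depressed at once.
        if sample["brake"] > 0 and sample["throttle"] > 0:
            self.fault = True

    def report(self, vin):
        # In the real system this dump goes to local storage, then to
        # the server when a communication link is available.
        return {"vin": vin, "samples": list(self.history)}

p = BrakeThrottleProbe(window=3)
for s in [{"brake": 0, "throttle": 40}, {"brake": 80, "throttle": 35}]:
    p.on_sample(s)
print(p.fault)
```

Keeping the window *before* the trigger matters: the samples leading up to the fault are usually the interesting part of the dump.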
That's a very, very strong indicator that this is an actual fault scenario caught in the wild as it happened, and you can now analyze it. So if we look at the conclusion: this basically shortens the root-cause location time. The OEM can immediately say: we saw this, we're a bit worried about it, and we decided to push out a probe to 10% of our vehicles — 100,000 vehicles. Within a week or so — that's the kind of timeframe we estimate — we're going to catch it. And as soon as you have the data, you can see whether this is a mechanical issue or a software issue. If it's software, that's even better, because we can now design a fix and send it out over the air, and we don't have to do a recall. This can all be done over the air, if it's software only.

So this is what we can provide through AGL. We're obviously trying to establish ourselves here as a company — we're a young startup, et cetera — and we thought that one of the better avenues was to give out this device stack as open source and actually enable this level of functionality for anybody who wants it. We obviously hope, on the back end, that somebody is going to license our server technology as well, because we still have to meet payroll somehow. So we thought this was the right venue to push this technology out there and get as much exposure as possible, and the JLR demo that we did just now is the first, very small step in this. And that's pretty much it — I guess I can take a few brief questions now.

Yes? Yes. So we work both within and outside AGL. What I told Matt is that we'll have a look at the Automotive Message Broker. It's a normalizer, basically, with a pluggable back end — CAN bus, Ethernet, or whatever you have. It reads sensor data and frames from that — it could also be an A/D converter — and normalizes it into standardized messages, which have yet to be standardized.
So Matt is working on that, which is needed, because we need a common terminology to decide that this is actually RPM, or torque, or center diff temp, or something like that. What I told Matt is that we're happy with that and we'll integrate with it; in the AGL case, we'll basically drop our own CAN handling from the mix when we build for AGL. We're not tied at the hip to our own technology — we integrate with whatever we find.

We are fairly communication agnostic, so we take whatever channels we have. Again, this may overlap a bit with ConnMan and other connectivity components, but we have a full network detection system in place inside the Exosense device stack that detects when Wi-Fi LANs come on board, or 3G connections or PPP links come up, and then we start using them. We signal that back to the application and say: hey, you now have an IP link; we have these three links to choose between; you can now start transmitting data — and by the way, this is how much data you have left this month before you have to start prioritizing your traffic. So if a PPP link comes up, we can use that as well. We obviously probably need to integrate with lower-level netlink drivers — or the PPP handling needs to integrate with netlink — for that to work, but the framework is in place to handle that as well. So we try to be as communication agnostic as possible. We can even do old AMPS-style burst technology — if you have an analog link, we can go that far back if need be. I hope not. OK. Thank you very much for your time.
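The link-selection behaviour described above — detect available links, report remaining budget, let traffic prefer the cheapest usable one — can be sketched as a small policy function. The cost model and field names are invented for illustration:

```python
def pick_link(links):
    """Choose a transmission link from those currently detected,
    preferring the cheapest one that is up and has data budget left.
    The fields and cost model are illustrative, not the Exosense API."""
    usable = [l for l in links if l["up"] and l["bytes_left"] > 0]
    return min(usable, key=lambda l: l["cost_per_mb"], default=None)

# Three detected links: free Wi-Fi wins while it is up; if only the
# metered 3G link were up, traffic would fall back to it.
links = [
    {"name": "wifi", "up": True,  "cost_per_mb": 0.0,  "bytes_left": 10e9},
    {"name": "3g",   "up": True,  "cost_per_mb": 0.05, "bytes_left": 200e6},
    {"name": "sms",  "up": False, "cost_per_mb": 1.0,  "bytes_left": 1e6},
]
print(pick_link(links)["name"])
```

In practice the application would re-run a selection like this whenever the network detection layer signals that a link came up or went down.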