Okay, yeah. So, hi, my name is Thomas Keller. I work in the Samsung Open Source Group, and I'm here today to talk about JerryScript. I'll start by talking a little bit about JerryScript, what it does, how it works, and so on. Then I'll show you a quick demo, which is already set up on the table in front, and then we'll have time for a couple of questions. So, what is JerryScript? JerryScript is a really lightweight JavaScript engine. We developed it from scratch with the goal of having an engine that can run on really resource-constrained microcontrollers. I think 32 KB of RAM is what you need to do something that is not just a hello world; for a hello world, you currently need 3 KB of RAM. So that's basically the bare minimum of memory you need on the device. As I said, it was originally developed by Samsung, but now we have a small community around it, with various different companies contributing. It's an open source project, released under the Apache License 2.0, and you can find it on GitHub. One of the first questions we usually get when we talk about JerryScript is: why do you even want to run JavaScript on a microcontroller? Why not just use C? Our motivation is that JavaScript is such a popular language, and it's really easy to use and to learn. There are so many developers out there, and we want to give them a way to develop for small, low-end IoT devices in the language they're used to, with the tools they're used to. That's our core motivation for doing this. The other thing is that in this segment, small connected IoT devices running on a microcontroller, you typically don't run performance-critical code. It's more like control tasks: polling some sensors or sending some network messages.
And so there you can also, at least to a certain extent, get away with the inherent performance overhead a JavaScript engine always has. The other thing, or the hope, is that with JavaScript being a higher-level language than C, you can be more productive: you can write code faster, develop your application faster, and get your products shipped faster. Another interesting aspect is that with JavaScript it's really easy to load code over the web. You could, for example, have a small microcontroller running a really simple, lightweight web server, connect to it with a web browser, enter some JavaScript code, have it executed live on the device, and maybe interact with some peripherals connected to the microcontroller. Especially for prototyping, I think that would be an interesting use case. That's very easy to do with JavaScript, but if you wanted to do it in C, it would be much more complicated. So the dynamic nature of JavaScript helps a bit there. Now, a couple of the key characteristics of JerryScript, to give you a better idea. The single most important optimization goal is a low memory footprint, because that's typically the resource you are most constrained on on these devices. We also care a lot about performance and about the code size of the engine itself, but a lot of the optimizations are really targeted at keeping memory usage as low as possible. That's also why we don't do any just-in-time compilation; we just don't have enough space for that. Instead, we have a classic interpreter which executes JavaScript bytecode. To achieve the low memory footprint, we do various different things. One is that we have a very compact object representation.
So all the data structures the engine needs to represent JavaScript objects are optimized to be as compact as possible. We also do things like pointer compression: internally, on our heap, we use 16-bit pointers, even though we typically execute on a 32-bit host. That way, especially for pointer-heavy programs, we save a lot of memory, because our pointers are essentially half the size they would regularly be. Obviously you pay a price: on every memory access you have to compress or decompress the pointer. But from what we've seen, on a constrained device the trade-off still pays off overall. For people who don't want to use compressed pointers, we also have an option to turn them off, which also gives you a larger address space. Right now, with pointer compression turned on, you can only address 512 KB of memory, which is just fine for most of the really small devices. But if you have a device with 2 megabytes and you really want to use all of it, you can turn compression off. In terms of translating the sources into the bytecode we actually execute, we try to be as lightweight as possible. While we are parsing, we are already creating the bytecode instructions, and we don't have any intermediate representations in between. We don't even construct an AST, an abstract syntax tree; we go straight from parsing to bytecode. At the very core of the engine is JerryScript's compact bytecode, and that's also one of the key features that makes JerryScript successful. We have, I think, two or three hundred different bytecode operations that represent common constructs in JavaScript. We don't execute them directly; we decompose them into up to five atomic operations, which are much simpler, and those are implemented by the interpreter.
And this whole setup gives us a very compact representation at the bytecode level. A couple more things about JerryScript. It's written in pure C99, and we really try to keep it that way and not use any newer extensions, just to make it as portable as possible, so that as long as your platform has a C99-compliant compiler, you can just build it and it will work fine. The source code is currently at 91,000 lines; we're getting close to 100,000 lines there. And the code size, the size of the JerryScript binary itself, is at 133 KB right now on Thumb-2. That is with the full profile, i.e. the whole language standard. We also offer a minimal profile where some of the features are dropped, and then you can even get below 100 KB. This number is important because it's essentially the amount of flash you need on the device to get JerryScript running, and it's also, in a way, the overhead in terms of flash memory that you pay for using JavaScript versus a native C application. One important thing to mention is that JerryScript implements the full ECMAScript 5.1 standard, and we have the corresponding Test262 results, so this really works: we pass the conformance test suite. Another thing is our C API: if you have an existing application and you want to add some scripting capabilities to it, you can use the C API for that. Or the other way around: if you have a JavaScript application and you want to invoke some native code, you can also do that through the C API. Another feature is the bytecode snapshot feature, which allows you to pre-compile your JavaScript sources into the compact bytecode format, and you can then deploy just the bytecode rather than the sources. This has a couple of advantages. One is that startup is a bit faster, because you don't need to parse the code again.
You can do that offline, essentially, although the difference is usually almost not even noticeable. The other benefit is that if you pre-compile the bytecode, you can offload it into flash memory. This is very useful: say you have some library code written in JavaScript that doesn't change very often; you can pre-compile it to the compact bytecode, put it into flash memory, and execute it directly from flash. That way you reduce the pressure on main memory. So that's quite a nice feature. Portability is also very important; we try to be as portable as possible so JerryScript can run on all different platforms and boards. The engine is designed to be fully self-contained, so we don't even have dependencies on the C standard library. We have our own really small C library with just the essential functions in there. Because of that, you can also run it bare metal; you don't need an operating system or any runtime support from one. Out of the box we support a couple of different boards. The first one we supported was the STM32F4, but we have a couple more. For example the Arduino 101, an x86-based board; Intel contributed that port and maintains it. Then we have the Freedom board from NXP and the Photon board, and a bit more about that in a couple of slides. We also have an experimental port for the ESP8266. In terms of real-time operating systems, we have support for NuttX, Zephyr, Mbed OS, and RIOT. And if you want, you can run it on a desktop operating system as well. That's particularly useful if you want to debug an issue in JerryScript: on the desktop you usually have better debugging capabilities than on a small microcontroller, especially things like Valgrind or AddressSanitizer.
If you want to track down memory corruption bugs, that's much easier to do on the desktop than on the small devices. So, just to give you an idea of what kind of hardware we're targeting: the Photon board is essentially a $20 Wi-Fi-enabled IoT board. It has a Cortex-M3 clocked at 120 MHz, one megabyte of flash, and 128 KB of RAM. On that board you can already do quite a lot with JerryScript; at 128 KB you can run a substantial amount of JavaScript. I'm also using the Photon in the demo, so I can show it to you there as well. To give you an idea of how little memory it consumes in practice, I want to show some slides with measurement results. This should be readable, I guess. This is memory consumption for the SunSpider benchmark, and we are comparing JerryScript against Duktape. Duktape is another open source, lightweight JavaScript engine, so in a way it's a competitor to JerryScript. What you can see here is that the red bars are the memory consumption of Duktape and the blue bars represent JerryScript's memory consumption, and we are fairly consistently, significantly below what Duktape consumes. That was not always the case; we spent pretty much the whole last year on optimization, both for memory consumption and for performance, as you will see on the next slide. Right now we are doing significantly better, and if you look at cases like the date-format benchmark here, we are easily an order of magnitude better. Performance-wise it looks quite similar. The difference is not as big as on the memory consumption side, and there's even one benchmark here where we are pretty close. But on average we are about two times faster than Duktape. So you can see we spent a lot of time on that. So, demo.
So I have set up a small demo here, and I'll explain a little bit what devices are there. Essentially it's a multiplayer Pong implementation, the very classic game. We have two devices: a Raspberry Pi here and a Photon board, and each of them is connected to an LED matrix attached via I²C. You can see that there already. All of the code that's running is JavaScript. On the Pi we run just Linux with JerryScript on top of it, and on the Photon we run JerryScript on top of RIOT. We're using RIOT because for the communication we're using 6LoWPAN, and RIOT has a very good stack for that; that's why we're going with RIOT here. The other thing to mention is that each of the devices can be controlled by a human player, which is why we have the keypad on the Pi and switches on the Photon. But you can also run it in AI mode, where a computer opponent is playing. So maybe I'll just show it to you, if it still works. Yeah, now both devices are in AI mode. You can see the AI is not perfect, so sometimes one device wins. This is the Photon board; you can see it's really small. The flashing LED means that whenever there's a 6LoWPAN packet being transferred, the LED gets toggled. Since the board only has Wi-Fi built in, we have to use an 802.15.4 transceiver for the communication. That one is just a regular openlabs 802.15.4 shield, the same one we're using on the Pi, and it's hooked up over SPI; that actually works quite well. And this is just a regular Pi. So maybe I'll just start playing myself. Lost. Okay, I need to practice more. And you see, the ball essentially passes over the network, and it's very smooth. So obviously, since it's... okay, now I'll turn the AI mode on again.
So obviously, since 6LoWPAN is a lossy protocol, it's maybe not the perfect protocol for this kind of use case, but it works quite well in practice, actually. And yeah, I can play a little bit more here. That's pretty much it from my side. Thank you. Okay, so it looks like we still have seven minutes for questions. Yeah, yes. After your talk last year, Peter and I tried JerryScript. We tried to make an OpenWrt package; I think we got to the point where we tried the MIPS architecture... Right. MIPS, that was from there. Yeah, I don't know what the status is with it. And the second thing: we tried to contribute very simple things, like fixes to the README. The pull request for a simple change, I think it was two or three lines of changes, turned into a horrible discussion. One thing, and I think I will mention it in my talk this evening: he was very frustrated about this merging process, which turned into some kind of lengthy debate among multiple employees of Samsung in Hungary. So, yeah, I think we saw a problem with your contribution policy there. I mean, it was a simple example of how to run it. So I would encourage you to work on that. Yeah, yeah. I'll just repeat the question. The short summary: he tried to contribute some small changes to JerryScript and it didn't work as smoothly as expected. Essentially, I guess that was still very much at the beginning of the year, right? We have improved a lot, I think, on the overall contribution process and how things get merged. I think at that time we didn't even have any contribution guidelines. If you look at it now, you can see that we actually have quite good throughput, and also the time between a pull request being sent and getting merged is short. Obviously, it depends on the size of the change, but if you did the same again now, it would probably be merged in a day or two. So, yeah, right.
So the other question was that JerryScript didn't work on MIPS. That's pretty much still the case. We haven't really seen any interest in MIPS-specific boards, so at least from the Samsung side we're not working on that, and from the community we haven't seen any further steps in that direction either, but we would obviously be open to it. I tried because I wanted to have JerryScript on OpenWrt, for all the different architectures used in routers and chips. Right, yeah. I think on ARM it was working fine, ARM9, ARM7; x86 obviously works fine. But MIPS was really a blocker. Yeah, we don't really test on MIPS, or even build there. Okay, next question. Who was it, I don't know, maybe you? So the question was whether we did any memory consumption or performance measurements against the established desktop engines like V8, SpiderMonkey, or JavaScriptCore. We did some measurements on memory consumption a while ago, and there the minimum footprint you typically have is something like eight megabytes, so there's no way you can scale that down. Performance-wise, I think we are at least, I would say, a hundred times slower or something like that. So on the desktop I would only recommend using JerryScript if performance is not your primary goal. Okay, next question, yeah. One of the nice things about JavaScript is the ecosystem. Do you find that you're able to use things like various different npm packages, or do the requirements of the device mean a lot of custom code? So the question was that JavaScript is a lot about the ecosystem, and whether it's easy for us to use existing code, like npm packages and things like that. Essentially, we've been focusing on getting the engine to a level where it's competitive against the other engines out there.
We also have a framework called IoT.js, which is essentially a lightweight version of Node.js running on top of JerryScript. But development there wasn't going very strongly last year, so I would say it's still in the early stages. A framework which is a bit more mature is Zephyr.js: Intel developed, basically, a JS API for Zephyr, and they've been working on that quite a lot over the last year, so that definitely works. But I think it's still early days in terms of JavaScript frameworks targeted at these really low-end devices. Okay, one more minute, one minute forty. Yeah, one more question. So, I've known that before. For me the question is: what exactly is the community setup at the moment? What is the governance process? Are you part of a foundation? Are you transferring it into a foundation? Do you currently have other participants? Okay, so the question was about the governance of the project and how the project is organized. In September last year we moved JerryScript over to the JS Foundation. The JS Foundation is a relatively new foundation covering all the different JavaScript projects, and it came out of the jQuery Foundation. In terms of contributors, we have people from various different companies. Intel is a strong contributor. Pebble was a big user of JerryScript; unfortunately they got acquired by Fitbit, but they were actually using JerryScript in production on their smartwatches, so that was quite interesting. In terms of governance, we are growing: right now a lot of the core maintainers are still employed by Samsung, but we are slowly appointing more people from other companies and diversifying the community. And the code, I mean, it's Apache 2.0, and all the IP has been transferred to the JS Foundation.
So this is not really a Samsung project anymore. Okay. I think we're done. Do I have time for one more? Okay, sure. Is there any plan to cover ECMAScript 6? Yeah, so the question is whether there are any plans to work on ES6 features. We have actually started already, or not Samsung, but people from Intel have contributed some early ES6 features. We are certainly open to contributions in that area, and we've also been working on support for promises. So that's definitely something we're interested in, and the community seems to be quite interested in it as well. Okay. Thanks a lot. Thanks.