The engine itself has a base footprint of around 3 kilobytes, so as long as you have at least this amount of memory, the engine technically should work. I would say that you still need at least 32 or 64 kilobytes of RAM to do something that goes beyond Hello World. The engine was originally developed from scratch by Samsung. Now it's an open source project, and we have users from various companies. It has been released under the Apache License 2.0, so you can actually build your own products on top of it. Usually one of the first questions that comes up when we talk about this is: why do we even want to run JavaScript on microcontrollers? Our motivation for doing this is that we really want to enable web developers to have a way of developing software for low-end devices in the language they're used to, with the tools they're used to. Not everyone wants to write C code, so for those people we want to give them an option, especially considering how many JavaScript developers are out there and how popular the language is. That's what we're aiming for. The other thing is that, if you think about it, on the microcontroller segment the type of code that you run is typically not CPU bound. It's more focused on control tasks, things like polling a sensor. So if you use a JavaScript engine there, then at least to a certain extent you can also get away with the inherent performance overhead of the engine. And the overall goal here is that by working in a higher-level language like JavaScript you can develop and prototype something faster, and that way ultimately shorten your time to market. Another interesting thing about JavaScript is that it's very easy to load code dynamically over the network, which is much harder to do if you're shipping native code.
And also from a security perspective, between the code that you're actually executing and the operating system, you still have the virtual machine in between. So you get some sort of sandboxing. Obviously JavaScript engines also have bugs, and JerryScript is certainly no exception there, but it still gives you an option. You probably shouldn't rely on this for really security-critical stuff, but if your constraints allow it, and you just want some flexibility by adding scripting support to your IoT devices, then this definitely can be an option. OK, so some more background on JerryScript. The engine is a little more than two years old now; the development started in June 2014. Pretty much all of the development for the first year happened behind closed doors, until we open-sourced it in June last year. JerryScript had its first bigger milestone in August last year, when it finally passed 100% of the ECMAScript conformance test suite. Then we had another big step forward in January this year, when we landed a completely rewritten compact bytecode implementation. That was at the very core of the engine, changing a lot of the internals, and it was a big multi-month effort to get implemented; we landed it earlier this year. For the last couple of months we were mostly spending our time on optimizing the engine, implementing new optimizations to reduce memory consumption and increase performance. The end result of that, essentially, was the JerryScript 1.0 release, which we released in early September. And now we're shifting our focus a little bit, because right now the engine is quite competitive if we compare it against the other ones in terms of memory consumption, performance, and code size. So now we're shifting our focus more towards increasing the usability of JerryScript.
So basically, making it really easy for web developers to use, and creating the tooling around it. I'll say a bit more about that later. A couple of key characteristics of JerryScript: the single most important optimization goal is a low memory footprint, because that's the resource we are most constrained by on these devices. We are also doing only interpretation in the engine; there's just not enough memory available for things like just-in-time compilation. To reduce the footprint, we have a very compact object representation, and I'll show a bit more about that later. Also to reduce the overall memory footprint, we are using compressed pointers: all the internal pointers on the heap are only 16 bits wide, even though we're typically executing on a 32-bit architecture. In terms of translation, we go straight from JavaScript sources to bytecode; there's no intermediate representation in between, like an abstract syntax tree. We do this, again, to be very memory efficient. And at the very core of the engine is the compact bytecode implementation, a bytecode format really optimized for having a footprint that is as low as possible. I'll also show you some more details about that. Portability is very important for JerryScript, so we designed the engine in a way that is very portable. It's completely self-contained; you can build it in freestanding mode, so it has its own very small C library with just the essential functions that we need. And if you want, you can also run it bare metal just fine; that works. In terms of hardware support, we support the STM32F4. That was the first board we used with JerryScript, and it's still the reference platform. But we also support other devices like the Arduino 101 or the Freedom K64F, and we have experimental support for the ESP8266 as well.
In terms of operating systems, we support NuttX, Zephyr, mbed OS, and RIOT, so quite some coverage there. And you can run JerryScript on the desktop just fine as well. That's useful if you want to use tools like Valgrind, or if you want to debug an issue with all the comfort that you have on your desktop system. A couple more things: JerryScript is written in C99, and we spend a great deal of effort to keep it a pure C99 code base. We are not using any compiler-specific extensions, so technically, as long as your compiler is C99 compliant, you should be able to get a successful build of JerryScript on your platform. In terms of code size, we are at about 84,000 lines of code right now, and if you compile JerryScript for ARM Thumb-2, the binary size is 156 kilobytes. That's built with GCC in LTO mode; LTO stands for link-time optimization, which essentially lets the compiler optimize across different compilation units and view it as a whole program. And one important thing to mention is that the engine really implements all of the ECMAScript 5.1 standard. This is not just a subset; this is really the full specification, and as I said earlier, we are also passing the respective conformance tests, so this really works. We also have a C API: if you want to embed JerryScript into your own C application, we have an API for that. It also works the other way, if you want to implement native code and call it from JavaScript directly; that also works through the C API. And then we have the snapshot feature, which essentially allows you to precompile JavaScript source code into bytecode. The advantage here is that you can take the bytecode and offload it into flash, so you reduce the pressure on the main memory. It's especially useful if you have a lot of JavaScript library code which is not changing very frequently.
Precompiling also helps to reduce peak memory consumption, because the peak often actually happens during parsing; if the code is already bytecode, you skip that parsing peak. A couple more pieces of information about the STM32F4: it hosts an ARM Cortex-M4F microcontroller at roughly 170 megahertz, and it has 192 kilobytes of RAM. So this is already one of the larger microcontrollers; I think the current maximum that you can find on the market is around 256 kilobytes of RAM, and after that it really goes up to 8 or 32 megabytes already. So yeah, this is already one of the bigger ones. It also has one megabyte of flash memory. We've also been looking into the Particle Photon; that's another interesting piece of hardware we've recently ported JerryScript to. The work has not been upstreamed yet, but we're going to do that in the next couple of weeks. This board is much more interesting if you really want to use it for some project, because the STM32F4 board is an evaluation board; you don't really want to put that somewhere in your house with some sensors attached, because it's just too big. But the Photon board is really small, and it's also very cheap: it costs like $19 if you buy it in the US, and it has things like Wi-Fi already integrated. In terms of hardware specs it's a little bit weaker than the STM32F4, but with 128 kilobytes of RAM and a Cortex-M3 it's still doing quite well, and you can run even bigger stuff on that with JerryScript. OK, so here's just a quick example of the C API, so you can get an idea of what this looks like. This is just a very simple hello world. Essentially, we define a script as a string, and then with a single function call we can execute it. The API will do all the work for us: set up the engine, allocate the memory, and so on. So it's very easy to use if you just want to execute some script. This is a slightly more involved example.
Here, we're still doing just a hello world, but with two statements, and we are evaluating each of them individually. So we need to do a little bit more work: first we initialize the engine, then we eval the first line, do some memory management, then eval the second line, and finally clean up the engine. It's slightly more involved, but still a fairly low-overhead API, and it works quite well. OK, so that was a first look at JerryScript. Now let's talk a little bit more about the internals. This is a picture of the high-level design, the architecture of JerryScript. We have a parser, which takes the JavaScript sources and produces the compact bytecode from them. There's also the literal storage; that's the component that handles all the literals, and since we care a lot about memory consumption, it does things like making sure that we don't allocate the same literal many times. Then we have the runtime, which has the interpreter at its core; the interpreter is responsible for executing all the different bytecode instructions. Around that we have basically all the runtime support that we need to implement the JavaScript specification, and we have a garbage collector as well for the memory management. OK. The parser is heavily optimized for low memory consumption, and our canonical test case is running it on the 95 kilobytes of JavaScript sources of IoT.js. IoT.js is another open source project which was started at Samsung, which essentially is a lightweight version of Node.js; it gives you the APIs to make the whole package of JerryScript and IoT.js useful. Parsing those 95 kilobytes of sources consumes about 41 kilobytes of memory, and here we also have the breakdown of how much the individual parts consume.
In that breakdown, you can see that around 13 kilobytes are just the pure bytecode. Then we have another 10 kilobytes of references into the literal storage, and in the literal storage itself we have another 12 kilobytes of data, all of it literals. And then we need another seven kilobytes for parser temporaries. That memory is really only needed during the parsing; once the sources have been parsed, all of that memory is freed again. So we are at about 35 kilobytes in total. That means if you pre-compile the 95 kilobytes, you end up with 35 kilobytes that you can store in flash, and that already reduces the peak memory consumption. As I mentioned earlier, the bytecode is generated directly; we have no intermediate representation, we go straight from the sources to the bytecode. The parser is a standard handwritten recursive descent parser. One thing to note here is that for the recursion we are not relying on the compiler-generated runtime stack. Instead, we have our own very compact structure, essentially just a byte array, which we use to track the recursion, because that's more compact and consumes less memory. Compact bytecode: a typical instruction has one or two bytes for the opcode, and then a variable number of arguments; essentially, it's a variable-length bytecode. We have 306 opcodes defined right now, which sounds like a lot when you hear it for the first time, but a lot of the operations there are just variations of the same operation. As an example, we have an opcode which handles the expression where you reference a property of an object, all in one opcode. And to execute that opcode, we actually decode the compact bytecode instruction into a sequence of multiple different atomic instructions.
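To make that idea concrete, here's a toy sketch of expanding a compound instruction into atomic steps at execution time. The opcodes, encoding, and names are invented for illustration; this is not JerryScript's real instruction set.

```c
#include <stdint.h>
#include <stddef.h>

/* One "compound" opcode stands for a common pattern (load an object,
   load a property name, read the property) and is expanded into
   atomic steps only when it is executed. */
enum {
  OP_LOAD_REG  = 0x01, /* atomic: push register <arg>                */
  OP_LOAD_LIT  = 0x02, /* atomic: push literal <arg>                 */
  OP_GET_PROP  = 0x03, /* atomic: pop name and object, push property */
  CBC_PROP_GET = 0x80  /* compound: all three steps in one opcode    */
};

/* Expand the instruction at *pc into up to three atomic ops.
   Returns how many atomic ops were produced and advances *pc past
   the opcode and its variable-length arguments. */
static size_t decode_insn(const uint8_t *code, size_t *pc, uint8_t out[3])
{
  uint8_t op = code[(*pc)++];
  if (op == CBC_PROP_GET) {
    out[0] = OP_LOAD_REG; /* object register is code[*pc]      */
    out[1] = OP_LOAD_LIT; /* property literal is code[*pc + 1] */
    out[2] = OP_GET_PROP;
    *pc += 2;             /* skip the two argument bytes       */
    return 3;
  }
  /* in this sketch, every atomic op carries one argument byte */
  out[0] = op;
  *pc += 1;
  return 1;
}
```

One compact instruction standing in for three stored atomic instructions is exactly where the code-density win comes from.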
That way, by having these compound compact bytecode instructions covering all the common constructs, we get a better code density. We have other examples, like a method call with two arguments, or incrementing a variable. By not storing those things already decomposed into the atomic operations, we can save a lot of space. So the compact bytecode really was a big step up in terms of code density over our previous implementation, which just stored the atomic operations directly. The interpreter is responsible for executing all the code. It's using both a stack and registers: the stack is used for temporary values, and registers are used for local variables. As I mentioned earlier, the interpreter decodes each compact bytecode instruction and translates it into a sequence of up to three atomic instructions, and then it has the implementations for the different atomic opcodes and executes them. Compressed pointers: as I mentioned already, all our compressed pointers are 16 bits wide, and we let them point to 8-byte-aligned objects. On a 32-bit system, this already allows us to save half of the pointer memory. And if you take the maximum number that you can address with 16 bits and multiply it by 8, because our objects are 8-byte aligned, you end up with a total heap space of half a megabyte. That's usually enough for embedded devices. But for people who want to use JerryScript for something bigger, where they are already running into that half-megabyte limit, you can also disable pointer compression; then you get access to the full 32-bit address space and can essentially have a four-gigabyte heap just fine. Then values: JavaScript is a dynamically typed language, so all values carry type information; the type is not associated with the variables, but with the values instead. And our standard representation for JavaScript values is a 32-bit-wide encoding.
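Before moving on to values, the compressed-pointer arithmetic just described can be sketched like this. The toy heap and helper names are invented for illustration, not JerryScript's internals:

```c
#include <stdint.h>

/* 16-bit compressed pointers over an 8-byte-aligned heap: offsets
   are stored divided by 8, so 16 bits can address
   65536 * 8 bytes = 512 KB of heap. */
#define CHUNK 8u

static uint64_t heap_words[512]; /* 4 KB demo heap, naturally 8-byte aligned */

static uint16_t compress_ptr(const void *p)
{
  uintptr_t off = (uintptr_t)((const uint8_t *)p - (const uint8_t *)heap_words);
  return (uint16_t)(off / CHUNK); /* every heap object is 8-byte aligned */
}

static void *decompress_ptr(uint16_t cp)
{
  return (uint8_t *)heap_words + (uintptr_t)cp * CHUNK;
}
```

Storing the divided offset instead of a raw pointer is what turns the 16-bit limit into a half-megabyte heap rather than a 64-kilobyte one.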
In this 32-bit value encoding, we have a small two-bit type field for the specific type of the value. It can be a primitive value, like a Boolean, or null, or undefined, or it can be something more complex, like a number, a string, or an object. For an object, we essentially store the pointer in the value field, and since our objects are 8-byte aligned, we can still store the pointer directly; we don't have to use pointer compression there, and that helps performance a bit. Strings: a string descriptor is 8 bytes long and has a couple of fields. We have a reference count; we have a type field, since we have several different types of strings; we have a hash to optimize frequent operations on the strings; and we have a 32-bit value field. One of the types is just a regular string of characters. Then we have short strings, where we store the string in the value field itself. And then we also have magic strings: a mechanism where you can register frequently used strings with the engine, and then you basically don't have to create the string yourself anymore, you just pass around an index into that table. Especially for large strings, where you might otherwise generate a lot of copies, this also helps the memory consumption quite a bit. And number representation: the default in ECMAScript is double precision, so if you want an ECMAScript-compliant implementation, you have to use double-precision values for all the numbers. But we are also offering a mode where we use single precision instead. Obviously, that's not ECMAScript compliant anymore, but it's helpful for people who really want to get maximum performance and who are willing to make the trade-off of losing precision over gaining some performance.
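A minimal sketch of the tagged 32-bit value described above, with invented tag numbers and names; the real encoding differs in detail:

```c
#include <stdint.h>

/* A 32-bit tagged value with a two-bit type field in the low bits.
   Because heap objects are 8-byte aligned, the low bits of an
   object's address are zero, so the tag can live there and the
   address can still be stored directly. */
typedef uint32_t jvalue;

enum { TAG_SIMPLE = 0, TAG_NUMBER = 1, TAG_STRING = 2, TAG_OBJECT = 3 };
#define TAG_MASK 0x3u

static jvalue make_object_value(uint32_t addr) /* addr is 8-byte aligned */
{
  return addr | TAG_OBJECT; /* low bits are free, so the tag fits there */
}

static unsigned value_tag(jvalue v)   { return v & TAG_MASK; }
static uint32_t object_addr(jvalue v) { return v & ~TAG_MASK; }
```

Note that an actual number doesn't fit inline in 32 bits, which is where the single- versus double-precision storage choice above comes in.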
Single precision is especially interesting in the microcontroller segment with devices like the Cortex-M4F, where you have native hardware support for single precision but no native support for double precision. So yeah, that can make quite a performance difference if you have a microcontroller with native support. Objects: this is essentially what an object descriptor looks like. Again, we have a reference count, and then we have several compressed pointers; you can see the total size of the descriptor is just 64 bits. One pointer is just pointing to the next object; that's used by the garbage collector. Then we have a pointer to the property list, which points to all the properties that belong to the object. And one thing to mention is that in JavaScript, all functions are objects, so we have object descriptors for the functions as well. OK, so now let's have a look at some performance numbers. This is the SunSpider benchmark, looking at memory consumption, and we are comparing JerryScript 1.0 against Duktape 1.5.1. Duktape is another open source lightweight JavaScript engine; it's a little bit older than JerryScript. Essentially, here we see the results for all the different benchmarks that are part of the SunSpider benchmark suite; the red line is Duktape, and the blue line is JerryScript. As you can see, if you look across the chart, JerryScript is basically outperforming Duktape on all of the benchmarks. Typically it's consuming half or less of what Duktape consumes, and there are some extreme cases where JerryScript consumes 20 times less memory than Duktape. Duktape is also optimized for a low memory footprint, but it might not be as optimized for the really low-end microcontrollers; some work has started in that direction, so Duktape will probably also shrink a little bit here.
And you can see that Duktape's lowest is around 97 kilobytes, I think. There are probably also some options you can use to cut the Duktape footprint a bit further, but this is really just the default configuration of both engines. OK, so performance-wise, we have a similar picture: JerryScript is better than Duktape on pretty much all of the benchmarks except one, where Duktape is slightly faster. And typically JerryScript is about two times faster; Duktape takes about twice as long for most of the benchmarks here. OK, so that's performance. Now I want to show you a quick demo. This is basically an implementation of the classic Pong game. We are running it on two devices: a Raspberry Pi and an STM32F4. Each of the devices has an LED matrix connected via I2C, and the idea is that we have a single shared display, with each of the devices controlling half of it. The devices are connected via Ethernet, and essentially we have a simple client-server system where the server is running on the STM32 on top of JerryScript and IoT.js. All of this is implemented as a Node.js module, and on the Linux side, on the Raspberry Pi, we actually run it on top of V8; the logic for the game runs there. The human player is using a USB keypad to do all the input. I have a short video of that that I can show you right now. OK, let's see. So here's the demo. You can see here's the Raspberry Pi, and the STM32F4; each of them has an LED matrix connected here, and the networking connection is here. And here we have the human player. You can see the paddle on the right is controlled by the microcontroller, so that's all JavaScript running on the microcontroller. And you see it's actually very smooth: the ball essentially passes over the network between the devices, and you don't really notice any lag. OK. So that's the Pong demo. Then I have another demo: the JerryScript 6LoWPAN demo.
This is a demo we just developed. It's essentially very similar to the Pong demo, except that it also supports Tetris, and it's really multiplayer, so there's no AI involved. The key difference is that instead of the STM32F4 we're running it on a Photon board, and we're running it on top of RIOT rather than NuttX. That gives us the ability to have, instead of an Ethernet connection, a 6LoWPAN connection between the two devices. 6LoWPAN is IPv6 on top of the IEEE 802.15.4 low-power wireless standard. So essentially, you can run the Photon as a battery-powered device, and because it's using all the low-power protocols, it won't drain your battery immediately. I don't have a video of this demo, but we will show it in the technical showcase session on Wednesday. So if you drop by our table there, you get a chance to see it live in action, and you can try it yourself. That's the 6LoWPAN demo. Future work: we have a couple more ideas of what we want to do in terms of optimization. We've done a lot of optimization work already, but there are still a couple more things that we want to do. Our main focus for the next year, though, will be to work on things like debugging support. JerryScript doesn't have a debugger right now, so that's definitely something we want to add. We also want to add some more tooling around JerryScript, like a memory profiler, because developers really need some assistance there. You can't just write JavaScript however you want if you're targeting low-end devices like that; you need to be a bit more conscious about your memory allocation patterns. So creating tooling which helps with that is also on our roadmap. We're also thinking about implementing some of the ES6 features, but we haven't really decided what to focus on there specifically. And obviously, there are new boards being released all the time.
So making sure we support the latest boards is also something we're constantly working on. OK, so in summary: JerryScript really shows that the approach is working, that you can run JavaScript on really small microcontrollers. JerryScript is getting fairly mature now. Pebble is using it in production already: they run it on their Pebble smartwatches, so it has been deployed to hundreds of thousands of devices, and they've just released an SDK for third-party developers so they can develop watch faces for the Pebble watches in JavaScript rather than C. So it definitely is getting more and more widespread. And using JavaScript really helps you to prototype something faster on a low-end device. JerryScript has a small but active community, and if you want to find out more about it, please have a look at jerryscript.net. We're always looking for bug reports and feedback; if you give it a try and run into any issues, just let us know, and we'll try to fix it. So that's it from my side. Thank you very much. So I think we still have time for questions. Any questions? Yes? [Audience question about running multiple isolated contexts, e.g. two nodes that should be isolated from each other.] So we just recently added support for having a separate global context. Previously, there were a lot of global variables and so on, and we have now isolated all of that into a single object. So now you can indeed run several, well, I'm not sure if "instances" is the right word, but we certainly have some abilities to manage separate contexts there. [Audience question about limiting script execution, e.g. to a fixed number of instructions.] We don't have that right now. I think someone else was also requesting it just recently on GitHub, but that's certainly something that can be implemented very easily: you could add an additional hook in the interpreter and then specify some limit there.
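The hook could look roughly like this in a toy interpreter loop. The opcodes and names are invented; this is a sketch of the idea, not JerryScript code:

```c
#include <stdint.h>
#include <stddef.h>

/* The dispatch loop carries one extra counter and bails out once a
   configured number of instructions has been executed. */
enum { OP_NOP = 0, OP_HALT = 1 };

typedef enum { RUN_DONE, RUN_LIMIT_HIT } run_result;

static run_result run(const uint8_t *code, size_t len, uint32_t max_insns)
{
  uint32_t executed = 0;
  for (size_t pc = 0; pc < len; ++pc) {
    if (++executed > max_insns)
      return RUN_LIMIT_HIT;      /* the "additional hook" */
    switch (code[pc]) {
      case OP_HALT:
        return RUN_DONE;
      default:                   /* OP_NOP and friends: nothing to do */
        break;
    }
  }
  return RUN_DONE;
}
```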
And I mean, there's already some interpreter overhead involved anyway, so if you carry one more counter there, you probably won't really notice a big performance impact. OK. Any other questions? Yes. [Audience question about the timeline for the debugger and tooling work.] Yeah, we're still in the planning phase for the next year, but that's something we really want to focus on. I would guess it will take at least, I don't know, six months to get something that's usable. Maybe we'll be done earlier, but basically, next year is all about that. Any further questions? Yes. [Audience question: is there a dedicated integer type?] No. We really just have the JavaScript number type, which is floating point by default. We do use integers internally, because a lot of the calculations are obviously just integer calculations, so we optimize that a little bit, but we don't expose any additional integer types to the user. That all happens transparently in the engine itself. [Audience comment, partly inaudible, about interfacing with devices.] Yeah, that's a good point. Any other questions? [Audience question about adding custom opcodes.] So we're not exposing that to the user, but certainly as a developer you could just modify the engine, and if you see certain patterns in your code, you can add an opcode for them. Essentially, adding a new opcode is just decomposing it into the existing atomic operations, so it's fairly easy to do, I would say. Any other questions? Yes. [Audience question about a framework on top of the engine.] IoT.js? Yeah, exactly. Essentially, what JerryScript provides is just the implementation of the JavaScript engine. That's very important, but then you need to have something on top of that, right? For accessing, I don't know, GPIO and your network protocols and so on. IoT.js has been aiming to be a lightweight version of Node.js, to provide some basic infrastructure for that. But the project itself has not been very active this year; there was a lot of work last year, but this year it has been progressing quite slowly.
But we are also in the process of reviving it and doing more work on that again; if you look at the repository, you can see that there have already been a couple of commits in the last couple of months. So that's also something we are going to look into, to have a framework on top of JerryScript which helps people actually use it. That's also part of the whole usability story, making it easy for developers, but there's certainly still some work left to do there. OK. Any further questions? OK, I guess we're done then. Thanks again.