So hey, everyone, my name is Alejandro. Today we'll be talking about the past, present, and future of JavaScript engines. You can find me on social media at A0bieto. Before we continue, I do apologize if we go too fast over these decades; it's a lot of content to get through in these slides. But feel free to reach out, send me questions, or find me right after the talk, and we can dig into whatever question you might have.

We will start in the year 1958, with one of the earliest published just-in-time compilers, attributed to the programming language Lisp. In the 1960s, there weren't many virtual machines around. A note here: by virtual machines, in this talk I will refer to the type of virtual machine that translates from one instruction set to another. There are other types of virtual machines that virtualize operating systems and hardware; we won't cover those today. One of the benefits of introducing virtual machines around this time was that part of your implementation could stay the same. In this case, the transformation from source code to an intermediate representation would stay the same across different CPU architectures. And remember that around this time there was no monopoly, no CPU architecture that was predominant over the others. There were several CPU architectures around, and you would have to write your own compiler for a programming language for each of them. This type of virtual machine allowed you to write only the transformation from intermediate representation to machine code on a case-by-case basis.

In the 70s, Pascal was invented, an imperative programming language, and it started to gain some attention as well. This overly simplistic diagram shows how a virtual machine looked at that time. It wasn't conceived that virtual machines could be as fast as compiled programming languages like C. In this decade, Smalltalk was also created, a language designed specifically for educational purposes, authored by Alan Kay. Smalltalk code was also executed in a virtual machine, and it is an object-oriented programming language. Its VM was the subject of a few enhancements over time, like just-in-time compilation later in the 80s.

Another important thing happening during the 80s was the programming language Self, created in 1986 by David Ungar and Randall Smith, who went on to pioneer some of the concepts that we will see in the virtual machine space. Two key concepts were born with Self: on one side, adaptive optimization, and on the other, generational garbage collection. These two concepts transcended Self, were later applied to the JVM, and are present in all the major browsers we have today, with some differences. Around this time started what we will call the first wave of adaptive optimization. In this first wave, a profiler marked some of the functions to be sent to a just-in-time compiler. In the example on the left, say you are executing a function 1,000 times; the profiler could identify that function as a target for optimization, to be run through a compiler.

In the 90s, there were further improvements to existing runtimes like Smalltalk and Self, with Smalltalk actually being used in production systems. But the most iconic advance, from my perspective, was the invention of these programming languages.
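Before we look at those languages, a quick aside: here is a minimal sketch, in JavaScript, of what that first-wave idea boils down to. Everything here, the `profiled` helper, the threshold, the log message, is invented for illustration; in a real engine the counter lives inside the VM, not in user code.

```js
// Toy first-wave adaptive optimization: count invocations, and once a
// function crosses a threshold, mark it "hot" and hand it to a (pretend)
// just-in-time compiler. Purely illustrative; no real engine API is used.

const HOT_THRESHOLD = 1_000;

function profiled(name, fn) {
  let calls = 0;
  let compiled = false;
  return function (...args) {
    calls += 1;
    if (!compiled && calls >= HOT_THRESHOLD) {
      compiled = true; // in a real engine: emit optimized machine code here
      console.log(`${name} is hot after ${calls} calls; sending it to the JIT`);
    }
    return fn(...args);
  };
}

const double = profiled("double", (x) => x + x);
for (let i = 0; i < 2_000; i++) double(i); // logs once, at call 1,000
```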
Python, a bit older than the other two, was born at the beginning of the 90s, while both Java and JavaScript were created in 1995. At the time, JavaScript was not yet called JavaScript, which is kind of weird. This surge of innovation was not just a coincidence: personal computers, which previously were very expensive, started to become more affordable. From this graph, we can see that the slope starts to get pronounced around the 90s and climbs all the way up to 2005 or 2010. During this period began the second wave of adaptive optimization, which added multiple levels to what we saw before by having multiple just-in-time compilers. Say, as in the previous case, a simple just-in-time compiler applied to a function running maybe 1,000 times, and another, optimizing compiler for functions that run over 50,000 or 100,000 times.

In the 2000s, we saw the web's first big steps. Multiple browsers were released, like Firefox in 2002, and Opera changed to a free-to-use model. I actually remember using Opera. It was a nice browser: fast JavaScript execution, fast tab startup as well. Who else here remembers using Opera? OK, it wasn't that bad. So the web pushed JavaScript engines forward around this time. You could do all sorts of things on the web now, like log into chat rooms and talk with hundreds of people simultaneously, browse your emails, or navigate through maps in your browser. For those who are looking at me like, "What are you talking about, mate? That's not something new": trust me, it was new; it started around this time. It is fair to say that, by that time, JavaScript engines were not catching up with other virtual machines like the JVM, the Java virtual machine. If you remember the graph of personal computer growth, it was growing rapidly. But at the beginning of the decade, the percentage of the world's population with access to the internet was still very low: 5%.

Now we will go through a couple of diagrams of how JavaScript engines looked around that time. JavaScriptCore is the engine for Safari, and in this overly simplistic diagram we see an interpreter, which would take the JavaScript and run it, with no code being generated whatsoever at that time, and no optimization either. SpiderMonkey, around that time, also had an interpreter, but it additionally had an optimizing compiler that would target those hot paths we were talking about, when a function was being executed many times. And V8, created around 2008, I think around when Chrome was first released, didn't have an interpreter initially. It had two compilers: one for compiling the JavaScript as fast as possible right away, and an optimizing compiler for those functions being executed many times.

By this time, we are also stepping into what is called the third wave of adaptive optimization, where VMs would profile the current execution not only to identify which paths are hot and optimize them, but also to learn how to optimize them. In textbooks, this is called online feedback-directed optimization. These strategies were first implemented in the HotSpot JVM and would later be implemented in JavaScript engines. Of all the periods that we will see today, this one is in a way the easiest of them all to describe, because we have all been through that decade.
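To make that third-wave idea concrete before we dive in, here is a toy sketch of feedback-directed speculation, again in plain JavaScript. The `makeSpeculativeAdd` helper and its threshold are made up for illustration; real engines record this feedback per call site, inside the generated code.

```js
// Toy online feedback-directed optimization: record the operand types seen
// at a call site, speculate once they look stable, and "deoptimize" (fall
// back to the generic path) when the speculation is violated.

function makeSpeculativeAdd() {
  let seenOnlyNumbers = true;
  let specialized = false;
  let calls = 0;

  return function add(a, b) {
    calls += 1;
    if (typeof a !== "number" || typeof b !== "number") {
      if (specialized) {
        specialized = false; // bailout: the speculation was violated
        console.log(`deopt after ${calls} calls: non-number operands`);
      }
      seenOnlyNumbers = false;
      return a + b;          // generic path: could be string concat, etc.
    }
    if (seenOnlyNumbers && !specialized && calls >= 100) {
      specialized = true;    // pretend we emitted a number-only fast path
      console.log(`specialized for numbers after ${calls} calls`);
    }
    return a + b;            // a real engine's fast path skips type checks
  };
}

const add = makeSpeculativeAdd();
for (let i = 0; i < 200; i++) add(i, i); // triggers the specialization
add("a", "b");                           // triggers the deopt
```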
At the same time, though, the amount of things happening during this decade is so high that it's very hard to keep track of them all. If you remember the graph of personal computers, around 2010 is where it starts to stagnate. But we still see an increase in internet users: in 10 years, it actually grew from 5% to 28%, outgrowing, I believe, every projection people had around that time. The number of websites also grew, from fewer than 200 million to almost 800 million or more by 2014 or 2015.

What we are seeing here is a graph that groups smartphones by amount of memory, dated 2017. The dark gray bars represent devices with 1 gigabyte of memory; next, the yellow bars are devices with 2 gigabytes; and the light gray bars are devices with 3 gigabytes. I'm not sure if you can make out the labels below the graph, but in developing countries like Argentina, Brazil, Colombia, Nigeria, and South Africa, devices with just 1 gigabyte come first. In the rest of the countries listed here, devices with 2 gigabytes rank first, and by a great distance compared with the number of devices with 3 gigabytes of memory or more.

Now, if we dig a little deeper into some of the trade-offs that just-in-time compilers have to deal with, we will find that as we apply more optimizations, the generated code starts to grow, and internal representations get more complicated. You need to keep track of more things: whether you inline a function, hoist something out of a loop, whatever optimization you do, it often results in increased memory usage. The second trade-off compilers have to deal with is the latency introduced into execution time. The best optimizations available introduce a higher latency compared to not applying any optimizations at all. On this scale, at the extreme left, we find interpreters, because they don't require any code generation and only need to parse the code before starting up.

If we go through the engines during this decade, we will see a couple of changes and improvements in them. SpiderMonkey, the engine for Firefox, went through a couple of changes in just six or seven years. They moved from one compiler to two compilers, and later on they changed the optimizing one, in this case to IonMonkey, which is the layout they are using today. The interesting thing about SpiderMonkey is that bailouts, or deoptimizations, would go back to the baseline just-in-time compiler instead of going all the way back to the interpreter. ChakraCore, the former Microsoft Edge engine, is more difficult to describe over time because they only open sourced the virtual machine in 2016. By that time, they had an interpreter and two compilers, one for simple optimizations and one offering the complex optimizations. But I can assume they also had their share of experimentation before deciding on this final architecture that we see here. JavaScriptCore is one of the engines that changed as well during this time, though not drastically. They introduced a second compiler around 2009 or 2010, the DFG, which stands for Data Flow Graph just-in-time compiler; they also have a simple just-in-time compiler and an interpreter. By 2016 they had switched to having three compilers, adding FTL, which stands for Faster Than Light, a type-inferring and type-specializing compiler.
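All of the engine layouts above share this tiered shape, so here is a toy state machine of it, including the SpiderMonkey detail that a bailout lands in the baseline JIT rather than back in the interpreter. The `FunctionState` class, the tier names, and the thresholds are all invented for illustration.

```js
// Toy tier state machine: interpreter -> baseline JIT -> optimizing JIT,
// with bailouts falling back to the baseline tier instead of the
// interpreter. Illustrative only; not how any real engine is driven.

class FunctionState {
  constructor(name) {
    this.name = name;
    this.tier = "interpreter";
    this.calls = 0;
  }
  call() {
    this.calls += 1;
    if (this.tier === "interpreter" && this.calls >= 1_000) {
      this.promote("baseline");
    } else if (this.tier === "baseline" && this.calls >= 100_000) {
      this.promote("optimizing");
    }
  }
  promote(tier) {
    this.tier = tier;
    console.log(`${this.name}: promoted to ${tier} at call ${this.calls}`);
  }
  bailout(reason) {
    // Speculation failed in optimized code: fall back to baseline,
    // not all the way down to the interpreter.
    if (this.tier === "optimizing") {
      this.tier = "baseline";
      console.log(`${this.name}: bailout (${reason}), back to baseline`);
    }
  }
}

const f = new FunctionState("hotLoop");
for (let i = 0; i < 150_000; i++) f.call();
f.bailout("unexpected operand type");
```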
All of the heaviest optimizations are targeted by that new compiler. V8 is one of the most interesting cases because, from my perspective, it went through the most radical changes over time. By 2016, they still hadn't added an interpreter: they had a parser that would output an abstract syntax tree and compile it as fast as possible with a baseline compiler, with Crankshaft as the optimizing compiler. On deoptimization, the code would go back to the baseline just-in-time compiler. If we were to place those two on the scale we had before, Crankshaft would be closer to the right, and the baseline compiler to the left, but not the extreme left, because compiling code still has some latency; there is still some startup overhead. By 2017, V8 had switched completely to a new architecture. Of course, they didn't just throw a switch overnight. They introduced changes progressively, did all the testing they needed to, and once they felt confident with the new architecture, they changed from having two compilers to having one interpreter, in this diagram called Ignition, plus TurboFan, which replaced what previously was Crankshaft. TurboFan would sit closer to the right if we graphed it on the scale, and compared with the previous setup, the Ignition interpreter stands further to the left: it needs less startup time before JavaScript can run.

We haven't talked much about garbage collection and the engines' approaches to it. Nowadays, going through all the major browsers, we can say that V8 and SpiderMonkey use a precise strategy, so they know exactly which allocations they made during execution, while JavaScriptCore and ChakraCore take a conservative approach, which looks at memory positions and estimates whether they point to something that must be kept alive or can be collected. But they all use a generational strategy, the one introduced by Self in the 80s.

This extensive gain in users and market penetration across multiple devices also came with a whole set of new problems. JavaScript and JavaScript engines started to become a target for vulnerabilities that would allow attackers to extract data or gain code execution on multiple devices. The previous photo was actually from a TV. For this year alone, bounties were granted: 60,000 US dollars for an integer overflow vulnerability on an Amazon Echo, and another around 20,000 US dollars for another integer overflow in a TV that would allow attackers to get a reverse shell. And that's just this year, at this one event; I know Pwn2Own also had an event in Vancouver, and they awarded other bounties as well.

So, for what's next, a few caveats before we move on. These are all my observations, and keep in mind that I'm not a subject matter expert in this field; I don't work on JavaScript engines on a daily basis. My goal is to share with you all a developer's point of view on how things could look in the future. Just-in-time compilers will keep getting better, but I don't see them overcoming the constraints we talked about before, on memory specifically. Because of how we write software, we are drawn to making short-lived allocations, which is one of the fundamental pillars of generational garbage collection, the strategy that all JavaScript engines use today.
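To see why that pillar holds, here is a small JavaScript example of the allocation pattern just described: most objects die young, which is exactly what a generational collector exploits by scanning a small young space frequently and promoting only the rare survivors. The `summarize` function is a made-up example, not engine code.

```js
// Most allocations here die young: each iteration creates a temporary
// object that becomes garbage immediately, while only a tiny summary
// object survives the function. A generational GC collects the young
// space cheaply instead of walking the whole heap every time.

function summarize(points) {
  let sum = 0;
  for (const p of points) {
    // Short-lived allocation: `scaled` is unreachable after this
    // iteration, so it never needs to leave the young generation.
    const scaled = { x: p.x * 2, y: p.y * 2 };
    sum += scaled.x + scaled.y;
  }
  return { sum }; // the only long-lived survivor
}

const points = Array.from({ length: 1_000 }, (_, i) => ({ x: i, y: i }));
console.log(summarize(points).sum);
```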
Interpreters, on the other hand, as the tool with the lowest startup latency that we have discussed today, will continue to have their place as long as we don't find something faster for startup times. One of the things I certainly hope to see more of, and I do believe we are on the right track for getting there, is better collaboration between vendors working on JavaScript engines and other parts of the web ecosystem, like tooling projects. This year there was a blog post written by the V8 team on how JSON.parse could be between 10 and 50 percent faster than parsing the equivalent plain JavaScript object literal. And it is not something the V8 team was encouraging us developers to go change in our repositories, but rather something for tools like Webpack to include in their bundling logic. It's also interesting because this is not Chrome-specific; it would yield performance improvements in other browsers like Firefox or Safari.

We have also seen innovation happening in the field of JavaScript engines with very specific goals, like the Hermes engine, designed for React Native applications, or JerryScript and Duktape, engines built specifically for very low-resource devices. And I believe that could be a good approach, or at least an interesting one to explore, with Node. An option for Node could be having a different set of flags or configuration values to let the engine know it is running in a long-lifespan context. I do know there are flags to, say, modify V8's memory limits, but that doesn't actually change how the engine manages memory internally.

This stat is from last June, and we can see that many of the efforts that have been in play have had an enormous effect in helping people get access to resources online. I consider that a win-win: as we keep this number going up, services and companies will be able to sell to more people, and people will be able to reach more resources, knowledge, and other services online as well. I hope this rundown helped to give an idea of some of the decisions that were being made during all these decades, and that it covered some of the fundamental concepts for how we got where we are today. If you want to go and see the slides or the resources I used for this talk, you can find them all at this link. If you have any follow-up questions, you can reach out to me online or in the hallway. That's it, thank you for your time.