Okay, so welcome to my session about GraalVM and its support for tools. I work for Oracle Labs, on the debugger support in GraalVM. The following is for informational purposes only; you should not make any decision based on it. First I will talk about GraalVM and the polyglot world associated with it. I will explain the Truffle AST interpreter, which is an essential part of GraalVM for languages. Then I will describe instrumentation for tools, how tools can plug into GraalVM. And then we'll have some demos where I will show how GraalVM works and what the tooling looks like. GraalVM consists of several components. The core component of GraalVM is the just-in-time compiler, an advanced, very efficient compiler that replaces C2 in the JDK. Then it contains the Truffle framework, which is an interpretation framework for ASTs. Languages are written as AST interpreters, and they provide the AST nodes to the Truffle framework, where they are efficiently partially evaluated and compiled. GraalVM provides JavaScript integration and an LLVM bitcode interpreter by default, but you can install other languages as well, like R, Ruby, and Python. The next component of GraalVM is the SDK API. It's for embedders: if you would like to embed GraalVM into your existing application, you would use the SDK API, which allows you to work with GraalVM, run guest code on it, and so on. Then it contains a JVM runtime, which adds the Java libraries, the garbage collector, and things like that. And one part of GraalVM is the ahead-of-time compiler. It's a native image generator which lets you compile bytecode to a native image, so you can create a native application from your Java application. The advantage is that it's also possible to compile language interpreters. That means you can create a native runtime that will execute JavaScript sources, or other language sources, or all of these combined. The picture shows what I was talking about.
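As a rough illustration of the SDK API mentioned above, a host Java application can embed a guest language roughly like this. This is a minimal sketch using the `org.graalvm.polyglot` package; it assumes it runs on a GraalVM runtime with the JavaScript language installed.

```java
import org.graalvm.polyglot.Context;

public class EmbedExample {
    // Evaluate a snippet of guest JavaScript and return the result as an int.
    static int evalJs(String code) {
        // Context manages the guest-language engine; try-with-resources closes it.
        try (Context context = Context.create("js")) {
            return context.eval("js", code).asInt();
        }
    }

    public static void main(String[] args) {
        // Guest JavaScript running inside the host Java application.
        System.out.println(evalJs("6 * 7"));
    }
}
```

The same `Context` mechanism can evaluate R, Ruby, or Python sources once those languages are installed.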
You have a just-in-time compiler that compiles the languages that compile to bytecode, like Java, Scala, and Kotlin. Then you have interpreters for a bunch of other languages. All of that is processed by the GraalVM JIT compiler. And it's possible to embed all of that into OpenJDK, for instance, where you can replace C2 with the GraalVM JIT compiler. We also provide a Node.js implementation where V8 is replaced by GraalVM, which allows you to run all the other languages in Node.js as well. We are adding GraalVM into the Oracle database, it's possible to add it into MySQL, or it can be run as a standalone application. Now, what is the Truffle interpreter? All the language implementations are written as AST interpreters. That means there is a unified AST representation in Truffle nodes, and all you need to provide for a language integration is an interpreter that translates the language source into the Truffle AST. Truffle then takes care of the specialization of nodes, partial evaluation, and compilation. This has the advantage that there is very low overhead in language interpretation, because the AST representation is the same for all languages. That means you can seamlessly execute one language from another, as the AST is just one for all of them. Also, there is instrumentation support built into Truffle, and it works at the AST level. That means that when you install an instrument, it plugs into the AST and can be specialized and partially evaluated together with the code. The instrumentation essentially becomes part of the code, and that means there is nearly zero instrumentation overhead, because it's all compiled together. Tools are attached as node wrappers to the AST. Again, we see that in the picture. Initially, the AST nodes are uninitialized. Then, depending on the data flowing through these nodes, the nodes get specialized for strings, for doubles, for integers, for whatever types are there, or there is some generic representation.
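The node specialization just described can be sketched in plain Java. All class names here are invented for illustration, this is not the real Truffle API: a generic node rewrites itself into a type-specialized node based on the first values that actually flow through it.

```java
// Toy AST nodes illustrating Truffle-style self-specialization.
abstract class ToyNode {
    abstract Object execute();
}

class ToyConst extends ToyNode {
    final Object value;
    ToyConst(Object value) { this.value = value; }
    Object execute() { return value; }
}

// Integer-specialized addition.
class ToyIntAdd extends ToyNode {
    final ToyNode left, right;
    ToyIntAdd(ToyNode l, ToyNode r) { left = l; right = r; }
    Object execute() { return (Integer) left.execute() + (Integer) right.execute(); }
}

// String-specialized concatenation.
class ToyConcat extends ToyNode {
    final ToyNode left, right;
    ToyConcat(ToyNode l, ToyNode r) { left = l; right = r; }
    Object execute() { return String.valueOf(left.execute()) + right.execute(); }
}

// Generic "+" node: on first execution it inspects the operand types and
// rewrites itself into a specialized node, as Truffle does on the real AST.
class ToyAdd extends ToyNode {
    final ToyNode left, right;
    ToyNode specialized;
    ToyAdd(ToyNode l, ToyNode r) { left = l; right = r; }
    Object execute() {
        if (specialized == null) {
            Object l = left.execute(), r = right.execute();
            specialized = (l instanceof Integer && r instanceof Integer)
                    ? new ToyIntAdd(left, right)
                    : new ToyConcat(left, right);
        }
        // Re-executes the children; fine here because they are constants.
        return specialized.execute();
    }
}
```

Once the tree stops rewriting itself, a partial evaluator can compile the now-stable specialized nodes together, which is the stabilization step described next.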
After that stabilizes, Truffle has created the specialized nodes, and then everything is partially evaluated and compiled together. Here is how tools are plugged into that. Consider a node. The node is replaced by a so-called wrapper node that delegates to the original node and also to a probe node, which is what the instruments attach to. Several subscription nodes are attached to that probe, and those delegate to client nodes provided by the instrumentation, by the tool that you want to plug in. So for instance, if you have a debugger and you submit a breakpoint, you find the node which represents that part of the code and you create a node that represents the breakpoint. The breakpoint gets a notification when the node starts executing. When the execution of that node finishes, it gets the return value, and when an exception is thrown from the node execution, it intercepts the exception as well. This, for instance, allows very fast conditional breakpoints, because the condition can be part of the AST and it's executed together with the code, so there is near-zero overhead for the condition execution. The important thing about GraalVM tools is that they are language agnostic. They attach to the Truffle AST and they should not care about the particular language the node came from. How does a tool find out where it should attach, or what the nodes are about, when it does not know the language? Nodes have tags. When a node represents a function, it's tagged with a root tag. When a node represents a statement in the language, there is a statement tag for that, and similarly for calls, expressions, and so on. This is how tools find out what a node represents. Nodes also have source section information associated with them: the source file they came from, and line and column numbers. So when you are going to submit a breakpoint on some line in some source file, you find the right node from this information.
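The wrapper/probe scheme can be sketched like this, with hypothetical interfaces rather than the real Truffle instrumentation API: the wrapper surrounds the original node and reports enter, return, and exception events to attached listeners, which is exactly the shape a breakpoint hooks into.

```java
import java.util.ArrayList;
import java.util.List;

interface ExecNode {
    Object execute();
}

// What a tool attaches: callbacks for the three events described above.
interface ExecListener {
    void onEnter();
    void onReturnValue(Object value);
    void onReturnExceptional(RuntimeException ex);
}

// The probe holds the attached listeners and fans events out to them.
class Probe {
    final List<ExecListener> listeners = new ArrayList<>();
    void attach(ExecListener l) { listeners.add(l); }
}

// The wrapper replaces the original node in the AST, delegates to it,
// and notifies the probe around the delegated execution.
class Wrapper implements ExecNode {
    final ExecNode delegate;
    final Probe probe;
    Wrapper(ExecNode delegate, Probe probe) {
        this.delegate = delegate;
        this.probe = probe;
    }
    public Object execute() {
        for (ExecListener l : probe.listeners) l.onEnter();
        try {
            Object result = delegate.execute();
            for (ExecListener l : probe.listeners) l.onReturnValue(result);
            return result;
        } catch (RuntimeException ex) {
            for (ExecListener l : probe.listeners) l.onReturnExceptional(ex);
            throw ex;
        }
    }
}
```

A conditional breakpoint would then be a listener whose `onEnter` evaluates the condition; because the wrapper is compiled together with the surrounding code, the check carries near-zero overhead.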
And this is always language agnostic. When an instrument decides where to register, it needs to define which tag and which source section it should attach to. This is declared in advance, and when a matching node is created, the instrument is notified, the wrapper node is created around that node, and the instrumentation is attached. In order to see how this works, I'll show you some demos. We start with cross-language debugging in Chrome. I have prepared a few files: in JavaScript, in Python, in R, and in Ruby. We can run all of that on GraalVM. I'll show you the run script. We use GraalVM RC11 and we run Node.js with the inspect option, which will break at the first statement. We'll attach Chrome to that, so we can use chrome://inspect, and here we see GraalVM available to attach. Do you see that? So we are suspended at the first statement here. It's a call to a weather method. This is a very simple sample application that computes a weather regression model from some cities. We have a database of cities and their temperatures, some artificial database, and we are going to create a regression model out of that. It's started from Node.js. It's a JavaScript application, but computing a regression model is not very convenient in JavaScript, so we are using R for that. So we submit a breakpoint where the model is created. Maybe first I can show you what this JavaScript file is loading. If we step into it, it's loading a Ruby file. It's also loading an R file, and there is some Python written here as a string. We can continue to the breakpoint where the model is actually computed. It takes some time to load the model and initialize the languages and so forth. When the breakpoint is hit, we can step into it, and we are in an R source. Here you can see on the call stack that I came from JavaScript, and the JavaScript just calls the R function written here.
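The registration step above can be pictured as a small filter object. The names here are invented for this sketch, including the file name: the instrument declares up front which tag and which source location it cares about, and the runtime wraps only the nodes that match.

```java
// Hypothetical sketch of declarative instrument registration.
enum ToyTag { ROOT, STATEMENT, CALL, EXPRESSION }

// Source section info carried by every node: file, line (columns omitted here).
class ToySourceSection {
    final String file;
    final int line;
    ToySourceSection(String file, int line) { this.file = file; this.line = line; }
}

// What an instrument declares in advance: a tag plus a source position.
class ToyFilter {
    final ToyTag tag;
    final String file;
    final int line;
    ToyFilter(ToyTag tag, String file, int line) {
        this.tag = tag; this.file = file; this.line = line;
    }
    // The runtime checks each created node against the declared filter;
    // on a match, it creates the wrapper and attaches the instrumentation.
    boolean matches(ToyTag nodeTag, ToySourceSection section) {
        return tag == nodeTag && file.equals(section.file) && line == section.line;
    }
}
```

A line breakpoint is then just a filter on the statement tag at a given file and line, with no language-specific knowledge involved.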
If you look at the arguments, it gets some functions which actually came from JavaScript, which called this R function. So we can step through some R code for a while, and eventually we get to the place where the JavaScript function is going to be executed. When we step into that, we are back in JavaScript, and we see on the call stack that JavaScript called into R, R executed, and that called into JavaScript again. So the call stack shows you the whole stack across the languages. We can go back to R. Here you can see the local variables. Chrome Inspector fortunately has no problem showing you R variables here. And if we get back to JavaScript, we see the return value here. It's an R list, which is essentially an array, but it came from R as an R list. So JavaScript gets some value from R, but it knows from the generic representation that it's an array. It has no problem finding out the length of that R list, because it's an array: it knows that it has some size, JavaScript knows that length is a property of an array, and it returns the size of that array. In a similar way, we can step into other languages. We have Ruby here, so we can step into Ruby. Again, we can debug the Ruby code here in the Chrome Inspector. No matter what language is there, the tool is able to work with it. Here we can step into the Python script and again debug the Python script, including local variables and everything. When we are back here in the JavaScript file, we can look at the local variables. For instance, here we see a function which came from Python and which has the Python description. We have a weather model from Ruby. We have cities from Java, a city service Java object here, and there is a create-model function from R. So we have all the languages available here, all the languages interoperating, and you can debug all of them in one place. That's all from Chrome Inspector. The next demo is going to be a preliminary implementation of LSP.
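The `length` lookup just described works because foreign values answer generic questions about themselves. Here is a sketch with invented interfaces; the real interop protocol in Truffle is richer than this.

```java
// Generic, language-agnostic view of a foreign value (hypothetical interface).
interface ForeignValue {
    boolean hasArrayElements();
    long arraySize();
    Object readArrayElement(long index);
}

// An R list exposing itself as a generic array-like value.
class ForeignRList implements ForeignValue {
    final double[] elements;
    ForeignRList(double... elements) { this.elements = elements; }
    public boolean hasArrayElements() { return true; }
    public long arraySize() { return elements.length; }
    public Object readArrayElement(long index) { return elements[(int) index]; }
}

class JsLength {
    // What a JavaScript `.length` read does, conceptually: no R-specific
    // knowledge, just the generic "is it an array, how big is it" questions.
    static long readLength(ForeignValue value) {
        if (value.hasArrayElements()) {
            return value.arraySize();
        }
        throw new UnsupportedOperationException("value has no length");
    }
}
```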
We have a preliminary integration with Visual Studio Code. So I will start an LSP server here. So you're starting the VM with an LSP server component now, is that correct? Yeah, this is an LSP instrumentation which is soon going to be part of GraalVM. Currently it's an experimental version, and it's provided for the so-called simple language. We have a testing language for GraalVM so that we do not need to deal with all the complexities of the other languages; it's something to test tools with and to showcase how GraalVM works. So at least you can be sure there are no existing language servers for this simple language. It's served by this Graal LSP server, and now I will go to the VS Code client and run a sample application in Visual Studio Code. We'll run this LSP client here, the LSP client plugin. And it should open, yeah. Here it opens a sample SL file. It's written in the simple language, and it has a connection to the LSP server. For instance, if you select some variable name, it finds all the locations where the variable is read and where it's written to. This is provided by the language: the nodes are tagged as reading or writing some variable, and from that the LSP integration finds out the locations of variables and whether they are read or written to. There is also code completion, so you can see the local variables and the other things that you can call at that location. So this is a preliminary version of the LSP server. And the final demo is in NetBeans. Yeah, the NetBeans GraalVM integration. Here I have NetBeans 10. It's the latest version, released last December. And it has built-in support for GraalVM debugging. I will run the demo that you already saw. It's here. But this time I will not run it with the Chrome Inspector option but with standard debugging options. So it's listening on port 8000, and I will attach the Java debugger to that. And yeah, it does something.
In order to see what it does, we pause it. Here in the main thread you can see that it's doing something with Truffle nodes, so it's executing something on GraalVM. For GraalVM debugging we have a special icon here, relatively new, that toggles pausing in GraalVM scripts. When you toggle that, it will suspend as soon as it finds out that some script is executing in GraalVM. So we can continue the execution and wait for this action to find that out. And we are at the beginning, where the R script is executed. Here on the stack we see the JavaScript file, and we can do the very same debugging that you saw in Chrome Inspector here in NetBeans. Yeah, we can continue to the breakpoint. We can step into an R function, and here you see the local variables and everything. The advantage is that this is the Java debugger. That means you can switch from the script view to the Java view, and here in the stack trace you will see all the nodes that are executed and that actually interpret the language. That helps language implementers see how the nodes they wrote are executed, and it provides insight into how it works inside. You can also see the probe nodes here, the tools integration. Here the probe node was entered, and there is a stepping node associated with it that comes from the debugger. The node was entered, so we get an on-enter event. We can step, and it will suspend on a step. So that's all from me. If you have some questions, yeah, you are first. What is the expected way to learn Truffle? Because I read everything on the GraalVM website and I still don't understand half of it. You need to read some random papers about Truffle to understand how it works inside. Do you want to write some comprehensive tutorial or something? We are still working on documentation. And unfortunately, probably the best way to start is to look at the simple language implementation and just see how... It's simple, really. Yeah, yeah. Well, sort of, yes.
Compared to other language implementations, it's very simple. But we are working on the documentation, definitely, yes. Okay, I can help you with this. Okay, you are welcome. A question about the Chrome demo, which was very interesting: I'm wondering how you can show all the different types from R, from JS, from... you actually had an icon with C++. How does all this look fine in the Chrome debugger, which expects, I think, JavaScript? Yeah, fortunately it's quite tolerant, because it also expects TypeScript or CoffeeScript or other languages. So they apparently do not have anything specific to JavaScript that would blow up if they get another language. This is why we can do that. Dictionaries, Python dictionaries, they are very different. Yes. Ruby hashes, the keys, they are actually... Yeah, every object from every language provides its own to-string implementation. So we take the string representation from the language, the way the language sees that object. But it's still language agnostic in terms of tooling. Yeah. Do I understand correctly that this compiler is faster than the Java just-in-time compiler? Are there any plans to improve the Java runtime to get as fast as...? There is a project to move the Graal just-in-time compiler into OpenJDK. In many situations, yes, not in all of them, but in probably the vast majority of cases it's faster. Yeah. This is sort of the plan, so we'll see how it works. Yep. In JavaScript, there are doubles... You should repeat the question. Yeah, I'm sorry. The question was how we deal with types. For instance, in JavaScript there are doubles, but in Python there are other types. JavaScript has big integers now, so JavaScript can convert such a value to a big integer.
The idea is that there are some primitive types which flow freely through the languages, and there are object representations: when a language gets an object from some foreign language, it delegates operations on that object to the foreign language. So when it needs to find out a property of some object, it asks the object to give it that property, and it's up to the Python code to search the object layout and provide that property. So if a value doesn't fit a primitive representation, if it's too big for instance, the operations on it can be delegated to methods on that object, and JavaScript delegates that to Python. Yeah, definitely, the languages need to be updated to the latest versions. For instance, in JavaScript or in Node.js, when there are some new features added, we need to keep up with that. But it's not that hard, because all we need to upgrade is the interpreter, not the runtime or other things, just the interpreter, and that's it. That allows us to bring new JavaScript features faster than V8, for instance. There are some libraries distributed with the languages. For instance, in R we have many libraries redistributed, and regarding Python, I'm not really sure, but we definitely work with NumPy, or have some integration with NumPy. So yeah, these languages are not in a final state yet. JavaScript and LLVM are final, complete, but the other languages are in progress. They are mostly complete, but not 100%. Can you change the instrumentation and regenerate the tree? Let's say you modify a function now. Yeah, the question was, when there is a probe, whether it's possible to replace it when the code is modified. Yes, the instruments can freely attach and detach; we then transfer back to interpreter mode, change the node layout, change the AST, and after that stabilizes, it's compiled again. So yeah, there is just a deoptimization at that location, and the nodes are changed as they need to be.
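The delegation answer above can be sketched in the same spirit, again with invented interfaces: the language that owns the object, not the caller, resolves the member read.

```java
import java.util.HashMap;
import java.util.Map;

// Generic message a language sends to a foreign object (hypothetical interface).
interface ForeignObject {
    Object readMember(String name);
}

// The Python side owns the lookup: it searches its own object layout.
class ToyPythonObject implements ForeignObject {
    final Map<String, Object> attributes = new HashMap<>();
    public Object readMember(String name) {
        if (!attributes.containsKey(name)) {
            throw new IllegalArgumentException("AttributeError: " + name);
        }
        return attributes.get(name);
    }
}

class JsPropertyRead {
    // What JavaScript does for `obj.prop` on a foreign object: it does not
    // inspect the layout itself, it sends a read-member message and lets
    // the owning language answer.
    static Object read(ForeignObject obj, String name) {
        return obj.readMember(name);
    }
}
```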
For instance, if you remove a breakpoint, the node that represents the breakpoint is removed. The node structure changes, so it's transferred to interpreter mode, and after it stabilizes, it's partially evaluated again and then compiled again.