All right, so before I get started, how many people have experience with Haskell? And supposing you had Haskell on the JVM, what benefits do you think you could get? I just wanna hear a round of opinions, anybody? Yeah, exactly. So an issue with putting Haskell into an existing project is that it compiles to native code. So in order to make it meld well with the rest of your system, you have to use something like Apache Thrift or some other inter-process communication method to communicate with it, and you lose some efficiency in that communication. So what I wanted to do is remove that communication latency. Instead of communicating through some inter-process communication, you just rewrite the entire Haskell runtime on the JVM. So now, once we have Haskell on the JVM, what new possibilities does that give us? One of the biggest problems in Haskell is the fact that there's no cross-platform GUI library. And the reason for that is that it's very hard to write native GUI apps: you'll have something for Linux, you'll have something for Windows, but you don't have one uniform thing that works across all of them. But Java gives us cross-platform solutions, like JavaFX, and also Swing and all those. Since JavaFX is the most recent one, I've been using that. Another thing you can do is build an FRP library on top of that. How many of you have heard of FRP? So yeah, it'll give us a nice cross-platform FRP framework, and it'll solve that problem we've been having in Haskell for a long time. And you can also build games if you have both of those. Another cool thing is being able to program Spark jobs and Hadoop jobs inside of this. Normally you'd be confined to Java, and obviously Java is error-prone and has so many side effects, as we talked about this morning. So GHCVM is this embeddable lazy functional language you can use on demand on the JVM.
So just a couple weeks back, I implemented a feature to export a Haskell function to Java. Let's say you have a Haskell function that takes an integer and returns another integer, say something like the factorial function, just for example. What that feature allows you to do is export that and call it as a normal Java function. The generated function initializes the runtime system of GHCVM, does all the conversions necessary to convert from Java types to Haskell types, and then converts back from the Haskell type to a Java type. Okay, so another cool thing we get access to on the JVM that I don't think people have tried that much is hot code reloading. People familiar with Clojure would probably be familiar with that. It's still something that's not well researched, but it's something you can try now, because the JVM has this inbuilt dynamism: you can load classes at runtime and create classes at runtime. So hot code reloading would be several times easier to implement on the JVM as opposed to natively, like in GHC. And yeah, the biggest point: you get access to the entire Java ecosystem from within Haskell. So what's GHCVM? It's a Haskell-to-JVM compiler that supports GHC Haskell, specifically 7.10.3. That's almost the second-most-recent version: right now we're at GHC 8, and the version just before that is GHC 7.10.3. So this supports all those modern features, like type families and all those things. We've already attracted some contributors. Brian McKenna, the guy who gave the talk this morning, has been contributing very nice bug fixes. And Alois Cochard, who gave the talk on machines this morning, provided the foundation for the library I use in Haskell to generate the bytecode. And then Sibi, he's right here.
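To give a feel for that export feature, here's a rough, hedged sketch in Java of the kind of wrapper such a feature might generate. Every name here (`RuntimeSystem`, `HaskellInt`, `evalFactorial`) is invented for illustration, and the actual Haskell evaluation is replaced by a plain Java loop just so the sketch runs on its own; the real generated code would call into GHCVM's internals.

```java
// Hypothetical sketch of a generated Java wrapper for an exported
// Haskell function `factorial :: Int -> Int`. All names are invented.
public class ExportSketch {
    // Stand-in for GHCVM's runtime-system initialization.
    static boolean rtsInitialized = false;
    static void initRuntime() { rtsInitialized = true; }

    // Stand-in for the boxed Haskell Int the runtime would work with.
    static final class HaskellInt {
        final int value;
        HaskellInt(int value) { this.value = value; }
    }

    // Stub for entering the compiled Haskell code; here just a loop.
    static HaskellInt evalFactorial(HaskellInt n) {
        int acc = 1;
        for (int i = 2; i <= n.value; i++) acc *= i;
        return new HaskellInt(acc);
    }

    // The exported entry point a Java caller would see.
    public static int factorial(int n) {
        initRuntime();                          // 1. initialize the RTS
        HaskellInt arg = new HaskellInt(n);     // 2. Java int -> Haskell Int
        HaskellInt result = evalFactorial(arg); // 3. run the Haskell function
        return result.value;                    // 4. Haskell Int -> Java int
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}
```

The point of the sketch is just the shape: initialize the runtime once, convert the arguments in, run, convert the result back out.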
So he's been helping out maintaining the infrastructure and everything. And Christopher Wells, he's not here. Okay, so the main goal of this project is to maintain compatibility with GHC Haskell. The reason I say GHC Haskell is that there's a competing implementation to what I'm talking about: a thing called Frege, I don't know if you guys have heard of it. That is a subset of Haskell that compiles to the JVM. So that's why I emphasize GHC here, to emphasize that this is the Haskell that GHC compiles. So that's a major goal. And the reason why that's a major goal is that not only do you get access to the Java ecosystem, you get access to Hackage, the Haskell ecosystem, as well. So this project allows us to use two ecosystems simultaneously, which is pretty cool. Another thing: I've explored Frege just a little bit, and I noticed that the foreign function interface, the way you interact with Java methods inside of Haskell, is a bit complicated. So I have introduced some new ways to make it easier. And then another goal: for those who know about GHC, GHC is a 24, 25 year old research project that's actually gotten into industry, which is pretty cool; it's in a unique position that way. But a problem there is that they emphasize implementing more advanced type system features more than trying to optimize what they already have and make it faster. So there's a gap there, and I wanted to fill that gap as well. I wanted to make this completely industry oriented, meaning I felt that whatever GHC 7.10.3 had to offer was pretty good; you can write really cool programs with that. So I wanted to take that as a base and then extend it from there. I'll discuss my future plans in that aspect at the end of the talk. So now, a comparison. GHCVM is compatible with Hackage.
I'll give a link at the end of the talk where I show all the packages that are compatible right now. Frege is only somewhat compatible with Hackage. I won't say it's completely incompatible, but because it doesn't have all the features of GHC that are used in the libraries on Hackage, you can't actually compile them out of the box. But now you can, with GHCVM. You can also interact with Java libraries. In GHC, you technically can interact with Java libraries through JNI and such; one company has done that, interacting with Spark using JNI, but it's very difficult to do. You can do it, but you'd almost have to be an expert. With GHCVM, you can just easily pull in any Java library you want. So all of these implementations have a basic-level type system: parametric polymorphism, type classes, those basic features that have been there for a long time. And GHCVM, as I said, because it supports 7.10.3, also has access to all the advanced features that have been coming out in the past couple of years. As of now, we don't have support for Template Haskell, because that requires the implementation of the interpreter, the equivalent of GHCi for GHCVM. That's not done yet; that's a to-do item. Another major difference is the way concurrency is handled; I'll mention that in some time. So how did it start? I started playing with Haskell about four years back, when I was trying to solve simple mathematical problems on Project Euler. I'd start in Python and the solution wouldn't come out for some reason; it took forever to run. Then I just re-implemented it in Haskell, which I was learning at the time, and suddenly it worked. So that was when I got the "oh wow, this is magic" moment. So how many of you actually think Haskell is magic? Today I'll discuss the execution of a simple program so you get to see how much work is actually going on to make that magic work.
So then I took a detour to do lots of Android programming. That's when I got access to the Java ecosystem, and I understood what all the problems in the Java ecosystem are. I was also a hobbyist Haskeller; I would read all these articles. The thing is, Haskell has so many features that it's almost overwhelming. You feel like you have to learn everything, but later, as I moved on, I learned you actually don't need all those advanced features. You really just need a basic core. So about one year ago I reached my saturation point with Java. I was like, I really just want to work in Haskell. I've also worked in lots of startups, developing the back ends and working on Android apps, so I had to use Java; I had no option back then. But I was starting to get the idea: okay, I really want to use Haskell for mobile and all these other things. That's where I got the idea that I need to somehow get Haskell on the JVM. Back then, though, what I wanted to do was create a DSL that cross-compiles to iOS, Android, and Windows. I don't even know if that's worth it, but that was the goal back then: I just wanted to create a DSL, not directly compile to Java. But this is how it turned out. Another major thing that influenced GHCVM was my time with Clojure. I spent almost six months working in Clojure. It was a great language, and what I loved about Clojure was the ease of being able to import Java functions and everything. The only thing I missed in Clojure was the type system; everything else was amazing. So once I reached my saturation point with Clojure as well, I figured, okay, maybe I should start getting down to this Haskell-to-JVM compiler. So what I did was start reading lots of research papers. Compiling Haskell to the JVM is not a new idea.
It's been researched and people have written papers on it, back in the 1990s and 2000s. So there were papers, but they weren't completely conclusive. They all determined that it's inefficient to do this, that it's not worthwhile. And I agree that back in those times Java was still very primitive; I don't think it even had a proper JIT compiler back then. So it was extremely inefficient to implement Haskell on the JVM back then. But Java has come a long way since; it's been a decade and a half since those papers were written, and lots of developments have been made. Now Java is used everywhere. So I felt, okay, maybe this is the right time to work on something like this. Another major motivation was that I wanted to use Haskell in my startups, and one of the things I didn't want happening was to suddenly hit a bug in production, a compiler bug, and then have to wait on the GHC people to fix it. As I think I mentioned just now, GHC is a volunteer project; there are no people committed full time to maintain it. That's actually a bit of an insecurity for me, and I'm pretty sure for a lot of other people who actually want to use it in production. So I felt the best solution, if I wanted to actually use Haskell, was to learn GHC completely, so that if any problem ever came up I'd be able to deal with it. I'm sure not everybody comes to that solution, but yeah. I think I talked about the lack of libraries: Hackage is developing pretty well, there are lots of libraries, but for the libraries we need for day-to-day use there's still a lot of work to do. And documentation is a huge problem as well. A lot of people, because they say types are descriptive, won't even bother documenting. Types are descriptive, but they don't subsume documentation. And then there was Cabal.
Actually, one of the reasons why it took me so long to even write one line of Haskell code was Cabal. I was just trying to compile sample programs, and what kept happening was I would get a dependency resolution failure, and then I'd be like, why do I need to waste my time with this? I have so many other things going on. After Stack came out, that's when I started becoming more active in Haskell. So now I'll talk about how GHCVM is architected and how the components work together. GHCVM consists of many components. The top half is all from GHC; I haven't done that, that's been already done. It starts with the driver. The driver is the one that manages the compilation of the different components. I had to modify it to also compile Java files; as of now you can actually compile both Java files and Haskell files with GHCVM. Once the driver determines what the component to compile is, it sends it to the next part of the pipeline, which is the parser, type checker, and optimizer. That part I won't discuss, because I haven't done much work on it; that's already been done by very smart people, and it works very well actually. That's why Haskell is pretty fast these days. The output of that entire process is a thing called STG code. I'll explain that in a couple of slides; for now we'll just leave it as the output of GHC. And then the code generator: this is what I had to do completely from scratch, using the GHC code generator as an inspiration. What the code generator does is convert this STG code, this low-level intermediate representation of Haskell, to a class file. Actually not just one; it'll convert to many, because of how the implementation works. And then it'll wrap them all up into a jar. So almost every Haskell module you write in a project will compile to a jar file.
And then it also takes care of, at the end, when you want just one single uber jar, linking all of them together into one giant self-contained jar that contains the runtime system, the RTS, as well. I also wanted to take a minute to explain how CabalVM works. CabalVM is a fork of Cabal that's patched to work with GHCVM. One of the things I had to do in order to maintain compatibility with Hackage was patch some libraries, because a lot of the core libraries in Haskell use C functions. In order to make those work on the JVM, one option is to go through JNI, which is probably not a portable solution. So instead, I just rewrote all those C FFI calls, as I call them, into Java. Now I need to somehow tell Cabal that these changes are there, because obviously I can't submit the patches upstream; they're unrelated to normal GHC, they're specific to this. So rather than interfering with the normal GHC process, I decided to make my own repository that contains a set of patches. These patches are pretty small; the diffs will be maybe 20 to 200 lines, depending on how many C FFI calls are being used. So what CabalVM does is consult this repository, and if you're trying to install a package which has a patch in this repository, it'll automatically patch it and then build the patched one. Using this method, I was able to actually get access to a good chunk of Hackage. As of now, we've compiled a good number of packages, maybe close to 20 or 30. So it's pretty nice. So GHCVM-Hackage is this layer between Hackage and CabalVM that acts as a filter. Okay, so now I'll discuss the runtime system, and I'll explain it by showing a very trivial Haskell program and how it works. Before we get into that, I wanna take some time to explain lazy evaluation. How many people are completely comfortable with lazy evaluation?
Like, you understand all the quirks and everything? Okay, good. So this will be useful to a lot of people then. Okay, so this is a very trivial way to distinguish between lazy evaluation and strict evaluation. According to Wikipedia, lazy evaluation is an evaluation strategy which delays the evaluation of an expression until its value is needed, and which also avoids repeated evaluations. That second part, "which also avoids repeated evaluations," is probably one of the trickiest parts of implementing laziness on the JVM. In general, actually. So here there are two code samples, one in Haskell and one in Java; I tried to make them as similar as possible. In Java, there's no generic undefined-like thing, so I had to improvise. What's happening in the Haskell one is you have a list of two elements, including undefined. For those who don't know, undefined is a value which, if you evaluate it, will crash your program. You can think of it as the result of a function that throws an exception or something. The somewhat similar thing in Java is: I initialize a null string and I'm calling, okay, it's not getLength, it's actually length, my bad. Basically I'm trying to get the length of a string which is null. In Java, this would actually crash right away with a NullPointerException if you executed it. Why? Because Java evaluates every expression. So this val.length() will get evaluated right away, and because val is null, you'll get a NullPointerException, done. It doesn't go any further. But the Haskell example will actually run, because Haskell is lazy. So what does lazy mean? Lazy means that evaluation is delayed. This two and this undefined are not evaluated until they're needed for some purpose. So head, what does head do? It evaluates the list and gives you the first element. But let's assume this is the whole program.
Let's say the whole program was printing the head. You never care about the second element, right? So why should the program crash if you never actually use the second element? That's a capability you get from lazy evaluation, and because of lazy evaluation you can do lots of nice tricks. So now, I mentioned STG code before. STG stands for Spineless Tagless G-machine. This is an abstract reduction machine described in a research paper by Simon Peyton Jones a while ago. This machine is the reason why Haskell is as efficient as it is today. So it sounds fancy, STG code, but it's actually just a reduced form of Haskell where all you have are cases and lets, to keep it very simple. One thing that's very tricky to do with Haskell is to actually execute it in your head. With a C program, you can just go through it line by line and be like, okay, this is happening, then this is happening. With a Haskell program you can't do the exact same thing. So I wanna help you guys visualize it a bit better; this isn't perfect, but it's a bit better. STG code is also in A-normal form, which just means everything is simple: you can't have complicated expressions, everything is bound to some variable. Other parts of STG code are function applications, primitive operations, and literals. Primitive operations are the things that you can't implement in Haskell, that have to be implemented at the runtime system level. An example would be integer addition. The runtime actually has no idea how to add integers. The only things the runtime knows how to do are delay evaluations, unbox things, and box things; it doesn't know how to do the little operations. So you have to use primitive operations for those. So I'll introduce a very simple example.
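To make that Haskell/Java contrast concrete, here's a small runnable Java sketch of the same idea. Only the idea is from the talk; the code itself is my own illustration: by wrapping each element in a `Supplier`, evaluation is delayed, so the "undefined" second element never crashes the program as long as only the head is demanded.

```java
import java.util.List;
import java.util.function.Supplier;

public class LazyHead {
    // head: demand only the first delayed element of the list.
    static int head(List<Supplier<Integer>> xs) {
        return xs.get(0).get();
    }

    // [2, undefined] as a list of delayed computations.
    static List<Supplier<Integer>> sample() {
        return List.of(
            () -> 2,
            () -> { throw new RuntimeException("Prelude.undefined"); });
    }

    public static void main(String[] args) {
        // Only the head is ever forced, so this prints 2 instead of crashing.
        System.out.println(head(sample())); // prints 2
    }
}
```

Strict Java evaluates `val.length()` the moment it's reached; here, wrapping values in `Supplier` postpones every computation until `.get()` is called, which is exactly the "delay evaluation until its value is needed" half of the Wikipedia definition.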
So the final program we'll compile is take 2 (map (*2) [1..5]). This looks deceptively simple. The answer is that it should return the list [2, 4]. But what exactly goes on in getting this to evaluate is the interesting question. First let's take a look at the definition of map; these definitions are a bit simplified. So everybody's familiar with map, right? It's the bread and butter of functional programming. It's a function that takes another function and transforms a list of one type of elements into a list of another type of elements. The standard definition, if you were to write it yourself, goes by case analysis: a list has only two possibilities, it can either be empty or it can be broken down into two parts. If it's empty, then obviously there's nothing to transform, so you return an empty list. If the list has some elements, you destructure it, apply the function to the first element, and then recursively apply map to the rest of the list. So that's the normal Haskell implementation you would write. And this is roughly what it looks like when you compile it to STG. As I said, STG is just a simpler version of Haskell that has only cases and lets; the example you see here is equivalent to this. Case, you can think of as the primitive in the RTS that says: evaluate whatever is inside the case. In this case, first you want to evaluate the list; you want to see whether it's empty or not. If it's empty, return empty. Otherwise, now you see a let. When you see a let in STG code, you should think of allocation. Anytime you have a let, it's allocating a new thing called a thunk.
So a thunk is a suspended computation: an expression that hasn't been evaluated yet and eventually needs to be. I could have shown you actual STG code; there's an option in GHC, -ddump-stg, that will give you STG code, but it's ridiculously hard to read because it uses all sorts of randomly generated names. So I did a manual compilation for you guys, so it's easier to see what's going on. So what this does is create a thunk and then construct a new list with the transformed element plus the thunk. Now let's look at take. Take is a teeny bit more complicated, because you have an extra check for the base case. In the base case, when n is zero, when there's nothing to take, you should return the empty list. In the false case, if it's not zero, then you evaluate the list and break it up; if it's empty, again return empty. If it does have some elements, then recursively call take with n minus one on the rest of the list, and then reconstruct the final thing. So here you're doing two allocations: you're allocating a box for n' to store the value of n minus one, and you're allocating a box for xs''. So now you can see where the allocations are happening and where the evaluations are happening. That's what STG code shows you. What's that? Ah, there is an allocation actually; there's an implicit allocation when you construct any data type. That's also allocated, my bad. Actually, this is not correct; I should have added an extra let. Thank you for catching that. That was improper STG. So there we go, this is how it's supposed to be. So yeah, there are two allocations here, and then you bind them together. So there are two ways to allocate in STG: one is to use a let statement, and the other is to construct some Haskell data type.
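To make those lets-as-allocations concrete, here's a small self-contained Java sketch of a lazy cons list, roughly in the spirit of what a runtime like this has to do. The class names (`LList`, `Cons`, `Nil`) are my own invention, not GHCVM's actual runtime classes; each delayed head and tail plays the role of a thunk allocated by a let.

```java
import java.util.ArrayList;
import java.util.function.IntUnaryOperator;
import java.util.function.Supplier;

// A miniature lazy cons list mirroring the STG definitions of map and take.
public class LazyList {
    interface LList { }
    static final class Nil implements LList { }
    static final class Cons implements LList {
        final Supplier<Integer> head;  // the element, itself delayed
        final Supplier<LList> tail;    // the rest of the list, delayed
        Cons(Supplier<Integer> head, Supplier<LList> tail) {
            this.head = head; this.tail = tail;
        }
    }

    // map f xs: the transformed head and the recursive call are both
    // wrapped in suspensions, like the lets in the STG code.
    static LList map(IntUnaryOperator f, LList xs) {
        if (xs instanceof Nil) return new Nil();
        Cons c = (Cons) xs;
        return new Cons(() -> f.applyAsInt(c.head.get()),
                        () -> map(f, c.tail.get()));
    }

    // take n xs: checks the base case first, then forces just enough list.
    static LList take(int n, LList xs) {
        if (n == 0) return new Nil();
        if (xs instanceof Nil) return new Nil();
        Cons c = (Cons) xs;
        return new Cons(c.head, () -> take(n - 1, c.tail.get()));
    }

    // enumFromTo lo hi, i.e. [lo..hi], also produced lazily.
    static LList enumFromTo(int lo, int hi) {
        if (lo > hi) return new Nil();
        return new Cons(() -> lo, () -> enumFromTo(lo + 1, hi));
    }

    // Force the whole list to a strict ArrayList, like printing it would.
    static ArrayList<Integer> force(LList xs) {
        ArrayList<Integer> out = new ArrayList<>();
        while (xs instanceof Cons) {
            Cons c = (Cons) xs;
            out.add(c.head.get());
            xs = c.tail.get();
        }
        return out;
    }

    public static void main(String[] args) {
        // take 2 (map (*2) [1..5])
        LList result = take(2, map(x -> x * 2, enumFromTo(1, 5)));
        System.out.println(force(result)); // prints [2, 4]
    }
}
```

Notice that the elements 3, 4, and 5 of the enumeration are never multiplied: `take` stops demanding the list after two cells, which is exactly the behaviour the STG walkthrough describes.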
So yeah, similarly in take you have two allocations and then an allocation at the end. You can think of it like that. When I speak of allocation, I mean new data is being created, some RAM is being used; that's exactly what I mean. The reason why I'm explaining all this at such a low level is so you guys understand how to debug space leaks at some point, because you'll understand how to indirectly see when there's a space leak. Neil Mitchell recently gave a good talk about detecting space leaks; I'm basically giving the background for that talk, what you need to know to understand it. So that's the program. Now we've broken down map and take into their lowest form. Before we get into the runtime part, let's take a look at what the evaluation looks like. The specific example we have here is take 2 (map (*2) [1..5]). For those of you who know Haskell, there's a lot of syntactic sugar here; for instance, [1..5] is actually not a list, it's a function call. For now we'll ignore all those extra details and keep it simple. So let's look at the definition of take. The first thing take does is evaluate whether the expression n is equal to zero. So you can substitute, as Brian mentioned this morning: referential transparency. This substitution I'm doing right now, you can't reliably do in other languages, but you can do it in Haskell. Here you're directly replacing take with its definition, which means you're comparing two to zero. Obviously that's false. Again, for the double equals there's a subtlety because it's a type class function, so there's actually more going on, but I'm ignoring all that for now. Just assume it's a direct function that gives you the result.
So two equals zero is false, right? If the evaluation returns false, you go to this branch. Now you have something more complicated: the xs in take is now the map expression. So now I have to evaluate that, get the result, and then remember what to do with the result afterwards. Anytime you have to remember something, you have to store it somewhere, right? So we store the continuation, the thing that says what to do after we get the result of this map expression, as a stack frame. I'll explain that in just a bit. That's what this plus-zero means: it means I create a new stack frame with a zero index. Now let's say you got the result of map. The result of map will look something like this: (*2) 1 : map f [2..5]. What that means is it has stripped off the first element, but notice it hasn't actually applied the function to it yet; this (*2) is still there. This is the thing with lazy evaluation: absolutely nothing gets evaluated until the very last moment. And that one fact is why you get space leaks. There are times where you want this (*2) 1 to get evaluated right away instead of being carried through all the way down. So observe that that's there, and then map f over [2..5], the rest of the list. So that gives you the value of the map. Now it can be deconstructed: you see the colon, right? That means it has two parts, a head and a tail. So we deconstruct it. This x is now (*2) applied to 1, and this xs' is map applied to the rest of the list. So now what happens? We do two allocations. We allocate a slot for n', to store the n minus one that gets passed into take again.
And then a slot for the recursive call to take. So now this returns a newly constructed list, and if you continue this process, eventually you'll get [2, 4]. Another thing: this expression by itself does nothing, because nothing is forcing it to evaluate. You have to assume that you're printing it. When you print something, you need to evaluate everything so you know what to print, and that forces the evaluation of everything. So that's the basic idea of how lazy evaluation works. Now we'll get into a new degree of detail. Let me take some time to explain how the GHCVM runtime system works. What it does is generate a main method for you, a classic Java main method, which does the initialization of the runtime system and then goes into the Haskell world. You see this border? It's intentional; it's to show that there are two worlds here: there's Java and there's Haskell. So right off the bat you initialize the runtime system, and then control gets sent to the Haskell world. By the way, everything I'll be explaining about how the runtime system works is exactly how it works in GHC, so whatever you learn here you can apply to any development you do in GHC. So what are the different components of the runtime system? If you see here, you see core. Core refers to a processor core of the underlying computer. You can have a computer with multiple cores, right? On top of each core you'll have an abstraction called a task. A task is just a data structure in the runtime system that has a one-to-one correspondence with an operating system thread running on some physical processor; it's a way of abstracting over the processor at the runtime system level. And then you have a thing called a capability.
A capability is again another data structure; it keeps track of all the Haskell expressions that need to be evaluated and all the threads. For those of you who are familiar, Haskell threads are green threads, which means they're very lightweight and you can create millions of them, similar to Erlang processes. In order for that to work, you need a capability to manage these threads. The job of a capability is to keep all this information: what are all the threads that have to run, what code do those threads have to run, and when should one thread stop executing and be context-switched so another can go. A capability runs one green thread at a time. Capabilities and tasks are also in one-to-one correspondence, so if you have n cores, you'll have n capabilities as well; actually, you can have fewer, depending on how you configure the RTS. So the next thing below: as I said, capabilities organize what are called thread state objects, or TSOs. Those are the threads, as I mentioned. And inside those threads, you'll have the stack of the thread. This stack, as I mentioned before, you need for doing lazy evaluation. The stack tells you what to do next. Anytime you're doing an evaluation, say you've gotten to the point at which you can start deconstructing that list; it's in weak head normal form, that's what it's called. When it gets to that point, you want to know what to do after that, right? So a stack frame is like the memory that remembers what to do after you've finished evaluating whatever you have to evaluate. There are actually many other kinds of frames: there are frames for exceptions, to be able to catch exceptions and such. There's lots of other stuff, but I won't get into any of that. We'll only be discussing one type of frame today, which is the update frame.
This is the frame that takes care of the part of lazy evaluation where it says "which also avoids repeated evaluations." How do we do that? We need to remember the result, right? We need to overwrite the thunk, as I mentioned before, with the value that it evaluated to. That way, if the thunk is used in multiple locations, you just use the evaluated value rather than evaluating it again, which would go against lazy evaluation. So here's the basic hierarchy. When you have a Haskell function, it compiles down to a subclass of StgClosure. StgClosure is like the superclass of any object used inside the GHCVM runtime. These are the four main subclasses, the categories of Haskell objects in the runtime. StgFun is a Haskell function. StgPAP is a partial application, a partially applied function. And StgThunk, as I mentioned, is a suspended expression that hasn't been evaluated yet. The thunk contains code that evaluates the expression when it's entered, and it also contains code to overwrite itself when it's done. Once the thunk overwrites itself, it'll be one of StgFun, StgPAP, or StgConstructor. StgConstructor, or the version I put here, is the superclass of all Haskell user-defined data types: when you have lists, when you have Bools, everything, the superclass of those types is StgConstructor. So StgThunk will eventually evaluate to one of those three. Okay, if there's only five minutes left, I probably won't have time to show the rest. In that case, did you guys at least get something from how lazy evaluation works? Any questions about that? A thunk would evaluate to one of those three? Yes, a thunk is a suspended expression that has to evaluate to something, right? A function is made out of thunks? A function can return a function.
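Coming back to the update frame for a moment: that evaluate-once-then-overwrite-yourself behaviour can be sketched in Java like this. This is my own illustration with invented names, not GHCVM's actual StgThunk, but it shows the mechanism the update frame exists to implement.

```java
import java.util.function.Supplier;

// A self-updating thunk: the first forcing runs the suspended computation,
// then the thunk "overwrites" itself with the result, so every later
// forcing reuses the value instead of recomputing it.
public class UpdatingThunk<T> implements Supplier<T> {
    private Supplier<T> compute;  // null once we've been updated
    private T value;
    static int evaluations = 0;   // only here to observe sharing in the demo

    public UpdatingThunk(Supplier<T> compute) { this.compute = compute; }

    @Override
    public T get() {
        if (compute != null) {     // still a thunk: enter it
            value = compute.get(); // evaluate the suspended expression
            compute = null;        // overwrite ourselves with the value
        }
        return value;              // already evaluated: just return it
    }

    public static void main(String[] args) {
        UpdatingThunk<Integer> t = new UpdatingThunk<>(() -> {
            evaluations++;         // count how often the body really runs
            return 6 * 7;
        });
        // The thunk is demanded twice but evaluated only once:
        int a = t.get();
        int b = t.get();
        System.out.println(a + " " + b + " evaluations=" + evaluations);
        // prints: 42 42 evaluations=1
    }
}
```

That single `compute = null` line is the moral equivalent of the update frame popping off the stack and scribbling the result over the thunk, so that sharing the thunk in multiple places never repeats work.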
So this thunk will evaluate to a function. Yeah. In that case your outer function is a thunk, and it will evaluate to an StgFun. So if you write something like head or map, would that be an StgFun or would that be an StgThunk? So let's go back to expressions — I'll classify these now. Map itself is an StgFun. But you see this expression here, take 2 of this map? This entire thing is a thunk. And this sub-expression, map times two, that's also a thunk. So a thunk can be composed of other thunks. A function doesn't evaluate to more thunks; it creates thunks. So when you execute that function, it creates more thunks that eventually have to be evaluated. Good? Okay. So I'm out of time, so I'm gonna skip to the end. The other stuff I had prepared was where I hand-compiled these two functions to what they look like in GHCVM. Unfortunately, I don't have time for that, so I'll just upload it and you can probably find it on my GitHub. So now I just wanna spend the last few minutes talking about where this is going. First, I wanna introduce TypeLead, which is a startup I'm working on. I want to commercialize the work that's going on here; I wanna provide commercial support for GHCVM. But another announcement I wanna make is that I actually want to create a new language entirely, branched off of GHC 7.10.3, focused on adding all the features necessary to get this into industry. So one thing — I had a discussion with some startup where they described a problem with records, Haskell records, things like that. Any major problems that are actually acting as a barrier to adoption, those are the things we'll be working on. So one of the solutions to that record problem is row polymorphism — I think, Brian, you've implemented this, haven't you, in Roy? And we also have a focus on enterprise libraries.
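The "a thunk can be composed of other thunks" point can be shown with a small Java sketch of a lazy list, where each cell's tail is itself a suspended computation. This is an illustration of the idea, not how GHCVM actually compiles lists:

```java
import java.util.function.IntUnaryOperator;
import java.util.function.Supplier;

// A lazy cons list: the head is a value, the tail is a thunk
// (a Supplier) for the rest of the list.
class LazyList {
    final int head;
    final Supplier<LazyList> tail;
    LazyList(int head, Supplier<LazyList> tail) { this.head = head; this.tail = tail; }

    // An infinite list n, n+1, n+2, ... — only usable because it's lazy.
    static LazyList from(int n) { return new LazyList(n, () -> from(n + 1)); }

    // map doesn't traverse the list: it builds a new cell whose tail is
    // another thunk, so thunks end up nested inside thunks.
    static LazyList map(IntUnaryOperator f, LazyList xs) {
        return new LazyList(f.applyAsInt(xs.head), () -> map(f, xs.tail.get()));
    }

    // take forces only as many cells as it needs.
    static int[] take(int n, LazyList xs) {
        int[] out = new int[n];
        for (int i = 0; i < n; i++) { out[i] = xs.head; xs = xs.tail.get(); }
        return out;
    }
}
```

So take 2 of map times-two over an infinite list terminates: only the first couple of cells ever get forced, and each forcing creates the next thunk rather than evaluating the whole list.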
So as I mentioned before, Hackage is great and it has lots of libraries, but there are still a lot of gaps in terms of proper database access libraries for all the different databases, and so on. In this company, we also want to create really nice courses to teach this language to people. So you guys can think of this as, what do you say, the implementation of Haskell that's completely focused on trying to get into industry. I also would really love some contributors. So right now — you know what, I'll show a quick demo. This is an FFI example; this is how the FFI looks. FFI is the foreign function interface. This is how you'd call Java methods. So here you declare — you create a Haskell data type that corresponds to a Java type. Here you declare, okay, this Haskell Collection type corresponds to java.util.Collection, and so on. And these are functions to import certain methods — for example, if you want to import the add method and the get method and so on. So I'm sorry for being quick, but I don't have time. This is a simple example. What it does: I've created a monadic interface to interact with Java functions using a thing called the Java monad. So in the Java monad, the first argument will be — so in Java, you have the concept of this, right? The object you're talking about right now. I've managed to find a way to embed that same concept inside of Haskell using monads. So you can actually call methods on an object without actually mentioning the object itself, because that's threaded through the monad. You can generally think of the Java monad as a state monad, but it's a very special monad that's recognized by the compiler, which will do optimizations to make it at least somewhat efficient.
So what this does is: it takes a lazy list, converts that to an ArrayList in Java, and then reads from that ArrayList and prints stuff out. It's just to show you can create an ArrayList from GHCVM and then come back. So let me run this real quick. As you see, it printed out exactly what it's supposed to. What the program did was it called populateArray 10. What that does is create an ArrayList that contains the numbers from zero to 10, and then it prints out the values in the ArrayList — it reads from the ArrayList and prints them back, all within Haskell. The key part here is that you're never actually writing Java. The only part where you're calling into Java is when you want to read from the ArrayList; obviously you can't do that in Haskell directly. But the point is you never had to write a line of Java code. All of this is within Haskell only, and it's in a nice typed monadic interface. If you look, what I did was I created Integer objects wrapping the Haskell integers, and then I multiplied them by five. So when you multiply the numbers from zero to 10 by five, you'll get a result like this: zero, five, 10, and so on. So yeah — if anybody's interested in contributing to this and making it evolve: we need help porting packages and libraries, and we need help writing pure wrappers around important Java libraries like JDBC. JDBC is a biggie that's used a lot. And then we also need a Java FFI generator. You saw the FFI I was just showing — it looks pretty complicated, right? But a lot of this can be machine-derived, meaning a program can just read a Java class file and figure out how to generate these signatures. So it shouldn't be something you have to write yourself, right?
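For comparison, here is roughly what the demo computes, written as plain Java (the method name populateArray follows the demo; the exact Haskell source isn't shown here). In the GHCVM version, these same ArrayList calls are driven from Haskell through the Java monad instead of from Java code:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java version of the demo: build an ArrayList holding the
// numbers 0..n, each multiplied by five, then read it back.
class PopulateDemo {
    static List<Integer> populateArray(int n) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i <= n; i++)
            list.add(i * 5);   // wrap the number and scale it, as in the demo
        return list;
    }

    public static void main(String[] args) {
        // Prints 0, 5, 10, ..., 50 — the output the talk describes.
        for (int v : populateArray(10))
            System.out.println(v);
    }
}
```

The point of the demo is that none of this Java needs to be written by hand: the typed monadic interface in Haskell drives the same ArrayList construction and reads.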
So it'd be cool to have some program that can just take a list of all the methods you want to import and generate the signatures for you. And then IDE support. Work is being done on this, but it's been delayed. There's a plugin called HaskForce, a plugin for IntelliJ that works with Haskell. I'm working with the maintainer of that to get this working for GHCVM as well, so you have a nice IDE to work with. So that's it, and you can contact me through any of these methods. For industrial usage, a lot of stuff is missing from regular plain old GHC, and you're trying to fill in the gaps. Do you have a quick list you can share right now of what you feel those gaps are? So I actually don't. My work so far has been just trying to get this thing to work properly; figuring out where all the gaps are is what I'll have to work on over the next month or so. I think you're referring to — so you've already started something about this, about how to standardize web application development in Haskell. I wanna do something similar to that for all the major things, like being able to create data pipelines involving Hadoop and Spark without actually touching Hadoop and Spark, from within Haskell. So there are many ideas I have, but right now it's not completely planned out yet. I sort of agree with that. I've been focusing on the web app side of the gaps, right? I'm sure there are gaps in other things as well. But why try to fill in those gaps by porting GHC to Java, by moving to the JVM? So once you're on the JVM, you have a ton of libraries already available to you. So once the Java FFI is in place, it'll be easier to speed up the process, essentially. In GHC, in the best case, you have to write C functions, which not everybody these days is comfortable doing. But everybody these days can easily write Java functions, right?
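The core idea behind such an FFI generator can be sketched with Java reflection: a class already carries every method's name, parameter types, and return type, which is all the information needed to machine-generate import signatures. (A real generator would read class files directly rather than reflecting at runtime; MethodLister and its Haskell-flavored output format are just illustrative.)

```java
import java.lang.reflect.Method;

// Sketch: given a class and a method name, produce a Haskell-ish
// type signature from the method's reflected parameter and return types.
class MethodLister {
    static String describe(Class<?> cls, String methodName) {
        for (Method m : cls.getMethods())
            if (m.getName().equals(methodName) && m.getParameterCount() == 1)
                return m.getName() + " : "
                     + m.getParameterTypes()[0].getSimpleName()
                     + " -> " + m.getReturnType().getSimpleName();
        return null; // no one-argument method with that name
    }
}
```

Running this against java.util.Collection's add method yields "add : Object -> boolean" (the type parameter erases to Object), showing that a tool could enumerate whatever methods you list and emit the foreign import declarations for you.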
Java's taught everywhere; people are familiar with writing Java functions. So it'll be much easier in that respect. Yeah, that's the point. But eventually you need to do side effects, right? And you need to call — so you said you shouldn't have to write another language, right? If you look at the core libraries of GHC Haskell, it's all really, really low-level stuff. I've been porting them, so I know. It's very, very low-level stuff, like copying memory from one place to another. Very low-level things are happening, and they call C functions for that. So I want to make that easier: instead of C, Java. You can do that same low-level stuff in Java now. And on hot code reloading — yeah, I don't have an answer to that question, right? I haven't thought it through completely, but I have given it some basic thought. The main thing is: the types are just a way to protect yourself from shooting yourself in the foot at compile time. When you get to the runtime phase, the types don't exist anymore. They're not there. All you have are the StgClosure objects I told you about — you just have those, that's it. So you don't have any notion of types there. So that's something we have to think about, but I would think that as long as the type hasn't changed when you reload the function — as long as it's the same and you change just the implementation — I don't think it will make a huge difference. Again, that's just what I think; it might not be true. Thank you.