Let's see about that. Okay. I'm going to talk about open problems in cross-compilation, especially for Haskell on ARM and iOS. A bit of background: I work for a German software company, and we produce a mobile banking app for the German market, and that app only runs on iOS. So that's basically my day job.

Why would you want to use Haskell on ARM anyway? A few of you probably have a Raspberry Pi at home, or some network-attached storage that also has an ARM CPU in it, or an iPhone, iPad, or some other ARM-based device, and you might want to have fun with your device and run some code on it. So you might want to use Haskell, and that's where you need to compile for ARM. Maybe you also want your iPhone or iPad to communicate with a backend service, and you want to share code and have the same language on the front end and the back end. In that case you might also be interested in using Haskell on ARM.

Why would you want to cross-compile? Well, those of you who have a Raspberry Pi probably know that it's not the fastest machine. Other ARM devices are often also not the fastest machines, because ARM is mostly used in low-powered devices. And some ARM devices are locked down: the iPhone is one of the most restricted devices, so getting the Haskell toolchain to actually run on the iPhone is probably not possible unless you jailbreak your phone, in which case you're probably not able to use the latest iOS versions, which might complicate your problems. Other things, like the network storage I have at home, run a custom operating system based on Linux, and getting the Haskell toolchain running there is probably also not that easy. So in those cases you'd probably say, okay, cross-compiling might be a good idea, especially if your main machine is a lot more powerful than your target device.
To understand cross-compilation a bit better, let's look at the stages that get built when you build GHC from source. First of all, you have a bootstrap compiler on your system, which targets the system it's running on, and with it you build stage 1. You can decide which target stage 1 is supposed to produce code for, but usually host and target are the same: you build stage 1 on your host, for your host, and then you use that stage 1 to build stage 2, the final GHC compiler you're actually going to use. The stage 1 compiler has a few limitations: it doesn't allow dynamic loading of libraries, it doesn't allow Template Haskell, and it has a few other restrictions. Stage 2 obviously does allow all that, because it's a full-blown GHC that even comes with GHCi linked in.

For building Haskell for iOS, you usually use GHC's LLVM backend. GHC has basically two big backends: one is the native code generator, and the other is the LLVM backend. If you're building for iOS, you want to end up with fat libraries: you want an i386 slice, because the simulator runs on your Intel machine and basically pretends to be an i386 architecture, and you want an ARM slice, which is used when you're building for your device. That's what they call fat libraries.

So what kind of restrictions do we have? So far there is no ARM64 support in GHC 7.8, and you must disable dead code stripping. Now, who knows what dead code stripping is in this regard? Because that's what you end up having to switch off in Xcode if you're using Haskell for iOS.
Xcode's linker basically strips all code that it thinks is dead, and the Haskell code coming out of LLVM ends up looking like dead code to the linker, so it's going to strip everything. Luckily, that one was fixed, and the ARM64 one was fixed as well. That leaves me with one big issue, and that's Template Haskell.

Template Haskell is an extension of GHC, so, I think, is there any other Haskell compiler that supports Template Haskell? Okay, yeah. So, Template Haskell. I tried to put its uses into three categories. One: you're using it for a kind of macro expansion at compile time. This is perfectly safe; you're just using Haskell to generate Haskell at compile time. Two: you might also expand architecture-dependent values. If you run something like `lift (maxBound :: Int)` in a splice, assuming Template Haskell runs on your host while you're targeting your target, you would get the maxBound of your host system, which might not be what you want. And then there's another big thing in Template Haskell: it allows you to run arbitrary IO. Even once we have working Template Haskell for cross-compilation, that's going to be a question of how to implement correctly. For example, if you're embedding the current Git revision with the git-embed library, you probably want the revision of the tree where you're compiling the source, so you basically want that splice to run on your host system and not on your target. I just wanted to make this point because it will help in understanding the next few slides.

So, there are some solutions to this. There's the EvilSplicer, or the ZeroTH approach. The EvilSplicer is from the git-annex project, built to get git-annex to work on Android devices.
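To make those categories concrete, here is a small sketch (the module and value names are mine, not from the talk) showing both an architecture-dependent splice and a splice doing IO at compile time:

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Main where

import Language.Haskell.TH (runIO, stringE)
import Language.Haskell.TH.Syntax (lift)
import System.Directory (getCurrentDirectory)

-- Category 2: an architecture-dependent value.  The splice runs at
-- compile time on the machine doing the compiling, so when
-- cross-compiling this bakes in the *host's* maxBound (e.g. 2^63-1 on
-- a 64-bit host), even if the target is a 32-bit ARM device.
hostMaxBound :: Int
hostMaxBound = $(lift (maxBound :: Int))

-- Category 3: arbitrary IO at compile time.  Here we embed the
-- directory the compiler ran in; embedding a Git revision works the
-- same way.  This is IO you almost certainly want to run on the host,
-- where the source tree lives, not on the target device.
buildDir :: String
buildDir = $(runIO getCurrentDirectory >>= stringE)

main :: IO ()
main = do
  print hostMaxBound
  putStrLn buildDir
```

When host and target are the same machine, `hostMaxBound` simply equals `maxBound`; the mismatch only appears once the compiling machine and the running machine differ.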
It basically uses the -ddump-splices flag of GHC to dump the splices, compiles them, and then kind of pastes them back in. The ZeroTH approach is very similar: it comes with a binary that opens a Haskell file, looks for top-level splices, compiles them, and produces another file where the splices are expanded. Both are, at least to me, kind of hacky. Then there's the restricted Template Haskell idea, which means restricting Template Haskell to allow nothing but macro expansions, because those you could do on the host. But I guess the most common use of IO, at least as I understood it, is to include blobs of data in your program: in that case you want arbitrary IO to read in a file of data points and have them put into your program at that point. So, well, IO.

You could also do complete stage 2 cross-compilation, which would mean building a full-blown GHC for your low-power device and then compiling on that device. Anyone who has tried that on the Raspberry Pi knows it takes quite some time.

Now there's another approach. A popular cross-compiler in the Haskell community is actually GHCJS, because it compiles on your host, just to JavaScript. The good thing is that target and host are mostly still the same machine; it's just a different language. And they came up with this idea of out-of-process Template Haskell. It was pioneered by Luite Stegeman, the GHCJS developer. If I remember the story correctly, they were discussing one evening how to support Template Haskell in GHCJS, and ended up coming up with this idea: okay, we're basically just going to take the Template Haskell evaluation part out of the compiler and put it on the target, and then the compiler and the target communicate, and the target executes the splices.
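As a sketch of what those splice-dumping tools work with: compiling a file like the following with -ddump-splices makes GHC print the code each top-level splice expanded to, and that printed expansion is what the EvilSplicer/ZeroTH approach captures, compiles, and pastes back into the source. The example itself is mine, not from the talk:

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Main where

import Language.Haskell.TH

-- A pure "macro expansion" splice: build the selector \(x, _, _) -> x
-- at compile time.  This is the safe category: no IO and no
-- architecture-dependent values, so expanding it on the host and
-- splicing the resulting code back into the source is sound.
fst3 :: (a, b, c) -> a
fst3 = $(do
  x <- newName "x"
  lamE [tupP [varP x, wildP, wildP]] (varE x))

main :: IO ()
main = print (fst3 (42 :: Int, 'a', True))
```

With -ddump-splices, GHC reports that the splice expanded to a lambda equivalent to `\(x, _, _) -> x`, which is ordinary Haskell with no Template Haskell left in it.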
And that means you need some kind of Template Haskell runner that you install on your target, and you need to ship the splices and compiled code from your compiler to the runner, execute them there, and get the results back. Why do you need compiled code? Because your splices can depend on code you just previously compiled, or on a library that's also part of your previously compiled modules. Now, that means the runner must be able to load code somehow: you need some way to get your newly compiled Haskell code onto the runner. If you have dynamic loading of libraries, that's easy, if it works. If it doesn't work, what you could try is to embed the runner and the code you just compiled into a new runner, put that onto your target, and launch it, but then you probably need to make sure you persist the runner's state in between.

Another obstacle is that GHC doesn't really allow you to do any Template Haskell if you're compiling with a stage 1 compiler, because there are lots of checks that say: oh, there's Template Haskell and we're stage 1, and then it just blows up and says stop, this isn't going to work. So what you usually end up doing is looking for all the GHCI #ifdefs in the GHC source code and selectively enabling those paths.

And then you also need some way to communicate with the runner. For GHCJS that's pretty simple, because GHCJS can launch a Node.js instance on its own system and start the runner there. If you're trying to do this on iOS, there are two problems. The first is that the dynamic libraries GHC produces cannot be loaded on iOS so far, which is basically an issue in the compilation pipeline. The other is that the compiler doesn't really have any control over your device. But in principle, being able to load dynamic code is enough if you start the runner yourself and then compile.
So that's the whole idea, which sadly doesn't work yet, but that's where I want to go at some point. Okay, next slide. These are the ultimate goals for me, my personal wish list. The first is a better plugin interface, because currently the plugin interface basically only allows type-checker plugins for GHC, but the whole idea could be extended to have hooks in many different parts of the compilation pipeline. There is some resistance to adding those hooks, which is kind of sad.

[Audience] Actually, type-checker plugins, you can register those, and you can transform the Core.

Yeah, but for this stuff that's not enough; you need a few more hooks earlier in the compilation pipeline. So that's what I wish for: a better plugin interface that allows more hooks, to better control the whole compilation flow. And I wish that at some point we have full Template Haskell support in GHC when cross-compiling. As I said, especially the IO part is going to be hard to decide how to do properly, because you most likely want to execute the IO actions on your host and not on your target. And if we had full Template Haskell support, we could use things like language-c-inline for Objective-C. If anyone hasn't seen that, it's pretty neat, because you can basically just embed Objective-C as a quasi-quoted part of your Haskell code. And then the last point I really wish for is a multi-target GHC, which means you have one GHC binary where you can say which architectures you want executables for, and then you basically get what you want.

[Audience question] I hardly understood you, I'm sorry. Okay. Yeah, and that's basically it. So thank you for listening, and time should be okay, right? Yeah.

[Audience] I think that with stage 2 compilation you can get the full power of Template Haskell. So couldn't you do dynamic binary emulation for the ARM architecture, say on x86?
[Audience] And run your ARM GHC.

You mean whether you could start an emulator on your host system to emulate the system you want?

[Audience] Yes, like QEMU; you can do binfmt registration for ARM binaries.

That you could potentially do, but at that point you'd also have to have the runner inside your QEMU, right? You'd still have to do the compilation in your emulator, which would be similar to having the runner there anyway, because the runner is basically the compiler part that runs on your target.

[Audience] I mean, you don't need a separate runner, right? Because you have the full GHC for ARM in that case.

Okay, but then what would you do? Basically compile everything in the emulator? So you're basically running a full stage 2 compiler on an ARM emulator.

[Audience] Compile the compiler.

Yes, you could do that. It will probably be a little slower.

[Audience] Sure, of course, that's the cost of ARM emulation on your host machine.

For me, this out-of-process Template Haskell idea is pretty attractive, because it means I don't need an emulator for any architecture I might want to support at any point, and one could potentially even share this with the GHCJS team.

[Audience] So the runner is basically your emulator?

The runner, yes, in a sense. Okay. So, the runner I have so far basically uses HTTP as the underlying protocol to transport things to the runner, and it's just an iOS application you start. Then you communicate with it over HTTP, which means I just pass GHC the IP address, or a name that gets looked up, and GHC knows what to do. For me that isn't much of a hassle, because when you're developing for iOS you always have your phone with you anyway. But as I said, that doesn't work yet, because I can't even load dynamic libraries. Okay, more questions, or should we continue with the next talk and take questions afterwards? Okay.