Okay, hi. I'm Robert Virding. I come from Erlang Solutions, a company that does training, consulting and support in and around the languages of the Erlang ecosystem. I'll try to explain a little bit about what this is and how it came about, and why it looks like it does. So yeah, I originally worked for Ericsson. This is a long, long time ago. They had a computer science lab where I worked. They also had, and I suppose still have, a switch called the AXE, which was a very successful switch, but it required a lot of effort to develop and maintain. One of the things we were supposed to look at in the lab was how we could make programming of these types of applications easier and more efficient, while still keeping the same characteristics. So what was, and still is, the problem domain we were looking at? The type of problem we were trying to solve? These ten points come from a thesis written by my boss there, Bjarne Däcker, and they define the problem domain for these types of applications. There are different levels of interest, you might say. The most interesting points, the ones marked in red here, are quite general. And if you look at all the points, there's absolutely nothing about telecoms in that list. The telecom bit was easy. Making the switch do something, sending it a few commands to set up a connection between two users, that was easy. Making the telephones ring wasn't a problem. We did a lot of experiments in the lab. We had our own small exchange which we had hacked so we could connect to it and control it from our machine. We had the first Unix machine in Ericsson, and we could control the switch from it. We basically programmed telecom applications in every language and every system we could run on our machine.
I think we made about 30 or 40 different versions of that. And again, the telecoms was not the difficulty; this was the most difficult bit. Some of the points here are very general. So yes, we have a large number of concurrent activities. We were thinking in those days of switches with maybe hundreds of thousands of connections to them, maybe tens of thousands of calls going on. So there's a lot of stuff going on at the same time. You have timing constraints: things must occur at certain times, or be completed within a certain time. You have distribution. Further down, the last one, we have fault tolerance. The system must be able to survive errors, both in the hardware and the software. And to do that properly, safely, you need a distributed system. You need at least two computers if you want to make a fault-tolerant system, so you need support for distribution. A telephone switch should never go down; that's one of its basic, fundamental requirements. So it needs to operate over many years, and you also need to do software maintenance on the system while it's running. You cannot take the system down to do that type of thing. And again, there is fault tolerance for hardware errors. So this was the problem domain, the type of problem we were looking at. And the interesting thing here is that this is not just telecom. As I said, there's nothing specifically telecom in here. These are very general problems, especially the red ones I marked, which are very common in many systems that have these requirements. This is the type of problem we were trying to solve. We realised that the telecom bit was the easy bit; this was the difficult bit. How do I do this? So, a few reflections before we go on. We were not out to implement a functional language. This might be the wrong place to say that, but we weren't out to implement a functional language.
We became functional as our development of the Erlang language and the system around it evolved. We found this was the best way to go about it. Actually, we started off in Prolog, which is very different, and we migrated. We were also not out to implement the Actor model. We read later that Erlang implemented the Actor model, so we went and found papers on the Actor model and said, yes, it does implement the Actor model. But we had not heard of the Actor model when we were doing this. We arrived at this as the best way of solving the problem. We were trying to solve the problem; that's what it was all about. That was the whole goal. A few other reflections as well. Being out to solve a problem made the development of the language and the system very focused, because we knew what we were trying to do. Our goal was not to do it in a specific way, but to make it work. And that meant we had a very clear set of criteria for what should go into the language and what should go into the system. Was it useful? If we came up with a fantastic new feature, was it actually useful for solving the problem or not? Did it or did it not help build the system? We came up with a number of ideas we thought were fantastic, but they just weren't useful, so they went. We came up with a number of bad ideas as well. The language and the system evolved to solve the problem. So we were developing the language and, at the same time, the way of working when using it to build systems. We knew the type of problems the system should handle and how we should try to solve them, and we developed the language and the ways of using its features at the same time. So we were working upwards and downwards at the same time.
That's why, when you look at the language, you'll find some things are extremely easy to do, because that's what it was designed to do. In a functional language, calling a function is easy, right? Because that's what it's supposed to do. Here, doing a lot of these things is easy because that's what it's supposed to do. Erlang and the system around it were designed to solve this type of problem, and in the language and in OTP, which I'll explain later, there's direct support for doing these types of things. This is how it evolved. As we kept on working, our ideas about how to solve the problem evolved as the language and our system ideas evolved at the same time. So where did we finally end up? The first principle, you might say, was that we need lightweight concurrency. There are a lot of things going on in the system at the same time, and we have to be able to handle all of them very efficiently. So we based everything on our concept of processes. We must be able to have a large number of processes in the system. In those days we were thinking around hundreds of thousands; now there are Erlang systems running millions of processes in one system. So we have this need for a large amount of concurrency, and it must be lightweight: it must be fast to create processes, it must be fast to do context switching, and inter-process communication must be fast, because everything is based around the processes. We need asynchronous communication. One consequence of the timing constraints is that the system must never block, and as soon as you start doing synchronous stuff, you block. So we had to make certain we never blocked, which means you need asynchronous communication as the basis. We need process isolation, because things are going to go wrong, and therefore processes must be able to die. And we need a basis for handling errors.
Again, this gets back to the fact that we assume you are going to get errors, so you must be able to handle errors while the system is running. You need provision for that. You also need support for continuous evolution of the system. The problem is not loading code into a running system; the problem is loading code into a running system while it's actually doing things. There are many systems today where you can have a shell or something where you load code in and keep going, or start again, whatever, but we had to be able to do this while the system was actually running. We had a few more conclusions. We arrived at: okay, we need a high-level language to get real benefits. We were comparing with C, Ada and other languages as well, and we found we needed a high-level language to do it. The language should be simple, and the way the system works should be simple. What I mean by simple in this case is that there should be a small number of basic principles everything is built on. That's not easy; it's difficult to work these things out. It's easy to just throw in new ideas the whole time and make the system much more complex, but the basic system should be very simple. If you get it right, then you have a powerful language and a powerful system, because then you can build whatever you need on top of that. You can build all the functionality you need on top and don't have to bake it in and make it difficult. In this sense, small is good. And the language should be simple to understand and to program in. I don't know if we succeeded in the last bit, but yeah. Another thing we found out: we should provide tools for building systems, not solutions.
Because one of the things we noticed when we were developing our language and system, with a user group we were working together with, was that often when we tried to provide a solution to help them solve their problem, we got it wrong. We just misunderstood the problem. So we found the basic idea was to provide tools for the users to build systems themselves, because they know exactly what they want much better than we could design for them. But we could provide the right set of tools for them to build what they were trying to do. Yeah, this is actually me at work. Sorry, it doesn't look it, but it is. So what am I doing? Well, playing with a train set. If you look at the back here, that thing there is actually a small switch. That's the switch we had in our lab which we tested things against. Everything we were doing, we were running against that switch and testing that it actually worked; we could make telephone calls on it. And we were going to take part in a trade show, and we thought, well, presenting that box like that is pretty uninteresting. There aren't even any blinking lights on it, at most a few small red lights showing it's running. So we wondered how we could attract people, and we decided we would make a train set, with a train track and a train running on it, controlled by Erlang, of course. That's me sitting down there trying to program the system. I probably went slightly overboard at the end, because we had a complete ATC system for it. You could run trains and make sure they never collided, and things like this. You could book a train path from one point to another in conjunction with others, and the train would move, and all these things. It was a lot of fun. But it was all written in Erlang, it all worked, and it was fault tolerant. If something went wrong, everything just stopped.
There were no collisions or anything like that. So yes, we did that. So where did we end up? Now we're going to look a bit more at where we got to, and how this gets back to what we're talking about, the ecosystem. What we ended up with is the Erlang language. The sequential language is a simple functional language; if you look at it, there's nothing really complex about it. It has a slightly different syntax. Most functional languages have different syntaxes; we just picked one that looks different from everything else. It's safe. We're comparing to low-level languages here, for example: there are no pointer errors and things like this. It's reasonably high level. It was then, and still is in many ways. It's dynamically typed; the whole system is dynamically typed. And there are no user-defined data types. You have a fixed set of data types you have to use, and that's it. I can talk later, if we have time or if you ask me afterwards, about why, but there was a specific reason for doing this. It also has a bunch of typical features of functional languages: all data is immutable, we have immutable variables (they aren't really variables), we use pattern matching for everything, and there are no built-in loops or anything like that; recursion rules. Again, this is just very common to functional languages; there's nothing special about it. The sequential language is quite straightforward. I'm not going to show examples of it, but I just want to mention one feature I think is fantastic: binaries. We have a data type called a binary. It's an array of bytes, which is completely uninteresting, except the interface can be very nice. I can treat it not just as bytes; I can treat it as bit fields, integers, floats and things like this. And I can write down a structure as a pattern which I can then use both to build a binary and to match against one. So this binary, which is valid syntax, describes an IPv4 packet header in one go.
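As a sketch, the pattern being described looks roughly like this in Erlang bit syntax (module and field names here are my own; the sizes follow the talk):

```erlang
%% Sketch: pulling an IPv4 header apart with one binary pattern.
-module(ipv4).
-export([decode/1]).

decode(<<Version:4, IHL:4, SrvType:8, TotLen:16,
         ID:16, Flags:3, FragOff:13,
         TTL:8, Proto:8, HdrChkSum:16,
         SrcIP:32, DestIP:32, Rest/binary>>) ->
    %% One match decomposes the whole 20-byte header; Rest is the payload.
    {Version, IHL, SrvType, TotLen, ID, Flags, FragOff,
     TTL, Proto, HdrChkSum, SrcIP, DestIP, Rest}.
```

The same `<<...>>` expression used on the right-hand side of a match constructs the packet from the field values instead.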
So there's a 4-bit version. There's a 4-bit header length. There's an 8-bit service type. A 16-bit total length. A 16-bit ID. Three flag bits, which I've only ever seen to be 0, but tell me if I'm wrong here. There's a 13-bit fragment offset, an 8-bit time-to-live, an 8-bit protocol. There's a 16-bit header checksum. We've got source IP and destination IP of 32 bits each. And then you've got the rest, which is the payload. This binary description describes that packet in one go. And I can use it both to build a packet, where on the right-hand side I use this construct, give the values of the fields, and it builds the packet for me, and in a pattern to match against a packet: I can get a packet in and pull the whole thing apart in one go. So it's taking pattern matching to another level. I don't know if other languages have got this far; it's a shame if they haven't, because they should. This just shows one thing we were actually very actively using, allowing Erlang to talk with other parts of the system, using this to communicate. Yeah. So that was the sequential world. Now the concurrency. We found concurrency to be fundamental. For our type of problem, concurrency was so fundamental that it is not something you put in a library on top; it is something you bake into the basic system. So there's basic support for concurrency in the language itself and in the virtual machine, the BEAM. As Mike Williams, one of the other co-developers of the language, said, there are three basic properties you need for building really concurrent systems: you need to be able to create processes quickly, you need to be able to do very fast context switches, because things are going to be happening all the time, and the time to send messages between processes must be very short.
If you want to make a truly concurrent system, you have to be able to do these things quickly; the performance is dominated by these points. That's why we put a lot of effort into this, and that's why, for example, we don't use operating system processes or operating system threads: they're just too heavy for what we're doing. It's just not an option. The concurrency model, apart from that, is very simple. Again, it's a simple scheme. It's based on lightweight processes, and they are truly lightweight: you can have millions of processes running in your system, and there are products doing that. It's all based on asynchronous message passing. That's the only way of communicating between processes. You need this for non-blocking: the system must never block, and the only way to really guarantee that is by keeping everything, or as much as possible, asynchronous; as soon as something is synchronous, it can block. It's a very basic mechanism, and it's very cheap. If you need more complex mechanisms, you build them on top of the asynchronous communication. We have a selective receive that allows us to choose which messages we want to look at at a certain point in time; other messages sent to the process just stay in its mailbox for later. Which means we avoid a combinatorial explosion from having to handle every message everywhere. Processes are isolated, so they can quite happily crash without taking down other processes. And there is no global data in the system at all; we only have local data. Again, it's all about the concurrency, the fault tolerance and the scalability. So those are the basic fundamentals of the concurrency model, and it actually isn't more complex than that when you look at it. It might be more complex to use, because it's a completely different way of thinking, but the basic premise is easy.
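A minimal sketch of the primitives just described: spawning a lightweight process, sending it a message asynchronously with `!`, and using a selective `receive` to match only the reply we care about (module and message shapes here are illustrative, my own):

```erlang
%% Sketch of the concurrency primitives: spawn, !, selective receive.
-module(echo).
-export([start/0]).

start() ->
    Pid = spawn(fun loop/0),   % create a lightweight process
    Pid ! {self(), hello},     % asynchronous message send; never blocks
    receive
        %% Selective receive: match only replies from Pid.
        %% Anything else stays in the mailbox for later.
        {Pid, Reply} -> Reply
    end.

loop() ->
    receive
        {From, Msg} -> From ! {self(), Msg}, loop()
    end.
```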
And it's the same thing with the error handling. The basic premise behind the error handling is that errors will always occur. You will always get errors in your system. You can try as hard as you can to make it error-free, but you will always get errors. It might be software errors, it might be hardware errors, someone sends you the wrong type of data, or whatever it might be. So the goal is to make sure the system never goes down. That is the basic goal of the whole error handling: the system must never go down. Parts of it may crash and burn, and they will, but the system as a whole must never go down. In the context of a telephone switch: yes, you might lose a call occasionally, but the switch itself will never crash. So when things go wrong, the system must survive. That means robust systems must always be aware of errors. You must always think: things are going to go wrong, what am I going to do about it? But I do want to avoid writing error-checking code everywhere. For one thing, that's very verbose, it's a lot of effort, and it's also very easy to get wrong if I have to check everything everywhere and work out everything that could happen. We want to be able to handle processes crashing, because they will crash, and we want a mechanism that interacts well with the process communication. So what we do is just let things crash. When a process goes wrong in the system, we let it crash and let the system clean up afterwards and keep on going. Crashing one process will not take down the system. The error handling mechanism, again, is very simple. It's process based, so we happily crash processes; I'm talking Erlang processes now. We link processes together, and when something goes wrong, the crashing process sends exit signals to the processes it is linked to, and they will crash too. It takes them all down.
We can take down a group of processes working together; they'll just crash as a whole. You do, however, need to be able to monitor processes. So there's a mechanism in the system we call trapping exits, so that a process can monitor others: when it gets an exit signal, it's converted to a message, and it can see that the other process died, do things, and clean up after it. That is the basic fundamentals of the error handling mechanism. It works very well together with the processes; they sort of fit together. You can say that this way of handling concurrency and the error handling are two sides of the Erlang concurrency model. They fit together and work together. There's a lot about error handling here because it was very fundamental. So how do you use this to build robust systems? We're not saying fault-free systems; we're talking robust systems, things that can survive errors. First, you need to ensure that necessary functionality is always available. There will be some parts of the system which always have to be available for the system to be running, so even if you get errors there, they still have to survive. And the system has to be able to clean up when things go wrong. Yes, we might crash processes, but we might have to clean up after a process so the system keeps working. And, to be really strict, you need at least two machines, so we need some fundamental form of distribution. For keeping necessary functionality available, we use something we call supervision trees. We build trees of processes: supervisor processes that monitor and manage their children. And if a child process dies, the supervisor knows what to do, how to restart it if it's to be restarted. And supervisors can supervise other supervisors, and so on.
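Links and trapping exits, as described above, can be sketched like this (module name and exit reason are my own):

```erlang
%% Sketch of linking to a process and trapping its exit.
-module(watcher).
-export([start/0]).

start() ->
    process_flag(trap_exit, true),              % exit signals become messages
    Pid = spawn_link(fun() -> exit(boom) end),  % linked child that crashes
    receive
        {'EXIT', Pid, Reason} ->
            %% Without trap_exit, the signal would have crashed us too.
            {child_died, Reason}                % clean up and carry on
    end.
```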
And using this mechanism you can build functionality that will always be there. If something in a supervision tree, say some form of server or service, crashes, then the supervisor will know how to restart it and keep going, so it'll still be there. Not everything will be in a tree, because some things we can just let crash. But we also need to be able to monitor clients: servers need to be able to monitor clients, processes working together need to monitor each other, and groups of co-worker processes might need to die together, so when one crashes, the others are taken down. We can use the error handling primitives to do that. We do use the error handling primitives to build the supervision trees to make fault-tolerant systems, and to clean up and monitor what's going on. So the mechanisms we arrived at are extremely simple, but they are the right base to build this on top of. It's the same with the concurrency model: it is very simple, but it's a suitable base for building any form of concurrency on top of it. Yeah, so this gets to the OTP bit. OTP stands for Open Telecom Platform. So what is it? It's a set of design patterns for building concurrent fault-tolerant systems: how do I use these primitives in the language to build this type of system? For example, supervision trees are supported in OTP. It's a set of what we call behaviours that implement the design patterns and allow you to plug your specifics into them. And the behaviours are extensible: five default behaviours come with the system, but there's nothing stopping you extending that with new behaviours, depending on what you want the system to do, and systems do that. It's also a set of libraries; well, you need those for programming, and it's quite a large set of libraries.
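The supervision trees he describes are what the OTP `supervisor` behaviour implements. A minimal sketch (the child module `my_server` and the restart limits are my own illustrative choices):

```erlang
%% Sketch of an OTP supervisor restarting a crashed worker.
-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% Restart a crashed child, up to 5 times in 10 seconds;
    %% beyond that the supervisor itself gives up and crashes,
    %% escalating the error up the tree.
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    Child = #{id      => my_server,
              start   => {my_server, start_link, []},
              restart => permanent},
    {ok, {SupFlags, [Child]}}.
```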
And it's also a set of support tools for building systems and building releases of systems. That is what OTP is. Its basic design patterns reflect the type of systems we were thinking of when we were designing the language. So the language and the systems fit together, and OTP is a way of implementing that type of system. The whole thing fits together, which, if you look at it, makes something extremely simple to do. The important thing to realise is that there's absolutely nothing about telecoms in OTP. If you look inside OTP you will not find anything telecom-specific; I think the closest is an ASN.1 compiler. It's all about building this type of system: how do I build large-scale, concurrent, fault-tolerant, scalable systems? I can just point out that the type of systems you build with Erlang tend to be very operating-system-like. You will have a large number of processes coexisting and working together; some provide a set of services, and then you have things using the services. And there's very seldom a central thread of execution. It's not like you're running one thread of execution and maybe starting a few things in parallel; you've got all these processes running concurrently, doing stuff together, working together. That's very much how the system looks. At most, if you want a central thread, there's something which starts the whole thing up; then it all just runs, with lots of little processes running. Every time you need a concurrent activity, for example, you will start a process for it, it will manage that activity, and when it's done, it will die. You just keep doing this the whole time. So there's no central thread at all in the system. It's a different way of thinking. I like to say Erlang is not so much a language with concurrency as a system with a language. You're building systems all the time.
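To illustrate plugging your specifics into a behaviour, as OTP design patterns were described above: the behaviour module supplies the generic server loop, and you supply only the callbacks. A sketch (module name, state, and message are my own):

```erlang
%% Sketch: a counter server built on the gen_server behaviour.
-module(counter).
-behaviour(gen_server).
-export([start_link/0, bump/0]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, 0, []).

bump() ->
    gen_server:call(?MODULE, bump).   % synchronous request, built on async sends

%% The behaviour handles process creation, the receive loop, and
%% system messages; these callbacks are the plugged-in specifics.
init(N) -> {ok, N}.

handle_call(bump, _From, N) -> {reply, N + 1, N + 1}.

handle_cast(_Msg, N) -> {noreply, N}.
```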
So now we've gone to the top level; now we'll go down to the bottom. What is the BEAM? We're almost done now. It's a virtual machine for Erlang, which says very little and yet says very much. That's all it is; that's what it's designed to do. And because of that, a lot of the properties of the system and of the language are, of course, built into the BEAM. It has support for the lightweight, massive concurrency: it's the one that handles all the processes for you, it sets up processes, and it does the asynchronous communication. It has the process isolation. It has the primitives for doing the error handling. All this is baked into the machine. It has support for the continuous evolution of systems through dynamic code handling. It has support for soft real time. It didn't originally, but now it has transparent multi-core support, so when you start up Erlang, it will happily use all the cores on your system, and it will dynamically load balance between all the cores and so on for you automatically. And it has a lot of interface mechanisms to the outside world. From the language design point of view, you seldom see these things in the language. You see them from Erlang as, say, a function to create a process and some way of sending and receiving messages. That's about it; everything else is under there for you. This is what it's designed to do. It has a few other features, more on the functional side of things. It's the one that supports the immutable data; immutability goes all the way down, baked into the base of the machine. If you try to mutate data, I honestly don't know what will happen; it will not work properly, anyway. It's the one that has only a predefined set of data types. And it has built-in support for pattern matching.
It has support for the functional language side of things. It has a specific model for handling code. And it's the one that doesn't have global data. So all these features of the language have their basic support built into the BEAM, the virtual machine. This is nothing strange; that's generally what any virtual machine does. It's designed to run a specific something. The JVM is designed to run Java; the BEAM is designed to run Erlang, and it supports all these things. As I said before, the reason it has support, for example, for the concurrency and the error handling is that we considered these so fundamental to our problem that we baked the primitives in at the very lowest level. This is not a library. So now we've looked at everything. We've gone from the very top, OTP; we've looked at Erlang and its principles; we've looked at the BEAM below that supports it. So now we end up at the ecosystem. What is what we call the Erlang ecosystem? Well, it's the set of languages running on top of the BEAM, Erlang and OTP. That's what it is. So there's not just one language; there's not just Erlang. There's a set of other languages that use this, and if someone decides to implement new languages for it, it can be extended as well. The thing is that these languages, if they follow the rules, can easily and openly interact with and support the other languages in the ecosystem. Which means the system as a whole becomes much more powerful than any one language on it. You can interchange and you can mix and so on. That is what the ecosystem is. It also means, for example, that you will never make the wrong choice of language, because it will work together with everything else running on the ecosystem. You can write your system mixing the languages, because they interact with each other. There's just no problem.
There's no problem calling functions in one language from another language, because it's well defined how to do that. It makes the whole system much more powerful, and it provides a lot of features that all the languages then get. For example, there are primitives in the BEAM and in OTP for communicating with other languages; there are a number of different ways of doing it. One example is that we can talk with a JVM, and because the ecosystem can, any language running on it can as well. So we can interact with a JVM; we can send work to it. We have a slightly special back door here: Erjang. On the JVM there's actually an Erlang system implemented and running. It's a very full, complete one; it's just slightly lacking what it needs to become an actual product, which is a shame, but it is very good. It means you can run distributed Erlang: you can run one Erlang node on the JVM and another on the Erlang system, and they can communicate with each other and work together if you want. Which is quite fun. So I just want to wind up by talking a little bit about extending the system: new skins for the old ceremony. That, by the way, is a very good CD, New Skin for the Old Ceremony. It's a bit old now, but it's very good; like all Leonard Cohen's music, very depressing, but very good. So yeah, new skins. We want to extend the system, so we'll look at a couple of different cases here. We can have languages that still keep the basic Erlang execution model and data types, but give you a new syntax and a different packaging. Two examples of that are Elixir and LFE, which is Lisp Flavoured Erlang, and which is missing a parenthesis. Yes, I know. It's a shame. Both of those work at the base level. But you can add other languages as well. There's actually a Lua implementation running on top of the Erlang system too.
We interact with other things, and there's a Prolog as well. The way they work is that you have the basic system: the virtual machine at the bottom, Erlang, and OTP, the whole set of libraries. And you can take your new language and put it on top, using all the features in there. You can add your new libraries and interact with the existing libraries. This way you can put a new skin on it, which can look and feel very different, but at the base level it's not. And the thickness of the skin will sort of tell you how well you interact with the system. One example, of course, is Elixir, which is relatively new but becoming very popular. In its own words, it's a dynamic, functional language designed for building scalable and maintainable applications. Yes, that is what the whole ecosystem is all about. It's influenced by Ruby, but it's not Ruby; it looks Ruby-ish, but it's definitely not Ruby. It has metaprogramming capabilities using macros, so you can do things with it. And it has many libraries and interfaces; they've been standardised, rewritten and extended with new features as well. For example, they're using the OTP behaviour functionality by adding new behaviours, which is nothing strange, because that's what OTP allows you to do. It also comes with an extensive set of build tools. So this is one language, and I would say it's quite a thin skin, because when you get down to the base, it still has the same execution and memory model as Erlang. Which means the interaction with Erlang is very tight. Another one, of course, is LFE, Lisp Flavoured Erlang. That's for those of us who like parentheses. It's a Lisp syntax front-end; well, it's more than that these days. It's actually a real Lisp with all the features you would expect to have in a Lisp, all the Lisp goodies. It's a truly homoiconic language with real macros.
Macros as God intended them to be. It seamlessly interacts with the rest of Erlang/OTP, and it has a very small core language. So again, I would class this as a thin skin.

An example of another language on top is Lua: there's a Lua implementation, Luerl, running in the ecosystem. Again, quoting Lua's own description: it's "an efficient, lightweight, embeddable scripting language" that "supports procedural programming, object-oriented programming, functional programming, data-driven programming, and data description". Basically everything. One thing I like about Lua is that it's a very small language. They've managed to keep it simple by choosing the right primitives, and you can do all those things on top by using the primitives in the right way. So philosophically I like it very much.

We implement all of Lua 5.2. For example, we implement shared mutable global data, which is something the VM and OTP are not supposed to do, so we implement it on top. And we implement Lua's handling of code, which again is different from the handling of code in the ecosystem. There are a few other things as well that just don't fit very well. That means this is really quite a thick skin, but it works, and it can interact with everything. If anyone's interested, I can show a system later where we run small spaceships whose logic is programmed in Lua, running on top of the Erlang system, so each spaceship is an Erlang process. Just to show that the interaction does work. So this is quite a thick skin, to be honest, mainly because of the shared mutable global data: that's where the main problem is, it just doesn't exist, so I have to implement it on top. But it works.
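To make the thick-skin approach concrete, here is a rough sketch of embedding Lua via Luerl from Erlang. The exact return shapes of the Luerl API have varied between versions, so treat this as indicative rather than exact:

```erlang
%% A fresh Lua state is just a plain, immutable Erlang term:
St0 = luerl:init(),

%% Run a chunk of Lua against that state. The return shape differs by
%% Luerl version (e.g. recent releases return {ok, Result, NewState}),
%% so check the version you are using:
Res = luerl:do("return 1 + 2", St0),
```

Since the Lua state is an ordinary Erlang term, each Erlang process can carry its own state, which is how every "spaceship" in the demo can be an ordinary Erlang process running its own Lua program.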
And it's the same thing with the Prolog, which I'm not going to take up here; that also works. It again has a different memory model and execution model, which you have to implement on top, and there are other things as well. So this is what the ecosystem is all about: these languages running on top of the Erlang BEAM, interacting and coexisting, using the basic features and extending them, to give you a different feel for what's going on. Yeah, that's that. So, any questions? Do we still have a bit of time left? Yeah, we still have time. Any questions? Yeah.

Okay, the question was: why not user-defined data types? It basically has to do with the dynamic code handling. The code handling is such that at any time you can load, redefine, or reload any module you want while the system is running. In that sense there is no concept of a closed system. Which means that if I were to have a user-defined data type, there's no guarantee that I don't redefine it in one module while another module is still using the old version, or something like that. It just doesn't work. You might be able to do it these days, I don't know, you'd probably know better; we could not come up with a way of doing it then. The very dynamic nature of the system means you cannot have the concept of something which is user-defined and global.

So if you're writing a thin skin, you fake it. For example, a classic one: Erlang has something called records, which are just syntactic sugar over tuples, which are predefined. Elixir has structs, which are based on maps, another of the predefined data types. If you want to run at the low level, the native level, that's how you do it: you fake it.
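The record example can be made concrete: a record is purely compile-time sugar over a tagged tuple, so at runtime there is no new data type at all.

```erlang
-record(point, {x, y}).

demo() ->
    P = #point{x = 1, y = 2},
    %% At runtime the record *is* the tagged tuple {point, 1, 2}:
    {point, 1, 2} = P,
    %% Field access compiles down to element/2 on that tuple:
    X = P#point.x,
    X.
```

Elixir structs play the same trick one level up: a struct is an ordinary map carrying a `__struct__` key, so again nothing new is added to the VM's set of data types.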
You make something that looks like something else. If you want to run something like Lua, then you implement a whole memory model: you implement a system which implements that memory model. For Lua, I had to implement shared global mutable data on top of the system. That's how you have to do it.

Then a question from the audience: Erlang has a mechanism that lets you upgrade at a very fine level. But what if you're running on a cluster of nodes? There's another way to do it, right? Which is just, one by one, take a node down, upgrade it, bring it back up, and then the fact that the cluster as a whole is supposed to work in a fault-tolerant way means it should just work. I can see all kinds of arguments in principle for why you don't want to conflate an upgrade with nodes, but in practice, in your experience of how Erlang is deployed now, is that fine-grained in-service upgrade really used?

It's used in some cases, but often it is as you say: if you're running multiple machines or multiple nodes, you'll generally do a rolling upgrade and take nodes down one at a time. That's the more standard method. But as I said, there are cases where people are actually doing dynamic upgrades of a single node, and that's fine. If you have multiple nodes, you still run into the problem with user-defined data types, because if you're doing rolling upgrades and I change a data type on one node, it's suddenly completely inconsistent with all the other nodes. You still have to do a lot of work to get around that. (I'm guessing you can get around it.) Yeah, I didn't say it was easy; it's possible. And I can say there is support for doing it in OTP. That's one of the facilities you get for handling releases: you can define what an upgrade is supposed to do, and it will do it for you, if you get it right.
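The OTP support mentioned here is the release-handling machinery: appup/relup files describe the upgrade steps, and each process gets a chance to convert its state when its module is replaced. A minimal sketch of the gen_server callback involved; the state layouts are invented for illustration:

```erlang
%% Called by the release handler when this module is upgraded in place,
%% so the process can migrate its live state to the new layout.
code_change(_OldVsn, OldState, _Extra) ->
    case OldState of
        {state, Count} ->
            %% old two-element state: the new version adds a field
            {ok, {state, Count, []}};
        AlreadyNew ->
            {ok, AlreadyNew}
    end.
```

This is the fine-grained, in-service path; a rolling upgrade sidesteps it by restarting whole nodes instead.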
But yes, most people using it will do rolling upgrades instead.

Well, it depends which company you went to. At Ericsson there are existing systems programmed in a language called PLEX; there's still an AXE at the bottom somewhere, probably still programmed in PLEX, which was a very simple language that supported these features. This, by the way, gave us a very nice target, because if PLEX and the AXE could do it, we had to be able to do the same thing. Well, achieve the same effect. A lot of such languages were written in C, and I'm guessing some were written in assembler as well. What they're written in today I have absolutely no idea; hopefully they've moved higher up.

Another question: I read an article a few months ago where the author was suggesting that if there's a precondition for your program, say a file has to be present, and for some reason that file is not present, then in Java or other languages you have a try/catch block. Here, you just let it crash.

It's based on processes. Be very careful: when I'm saying "let it crash", I'm not saying let the system crash. I'm saying let one small part of the system crash internally. There was a discussion on Stack Overflow a couple of years ago about the Erlang let-it-crash philosophy, and one guy there was getting extremely worked up, because he said: how can you just let your system crash? If my system crashes, it costs me a few hundred thousand dollars every time. So how can you do that? And he had, of course, missed that we're not talking about the whole system; we're talking about small, internal parts of the system. The way you handle this is you design the system in such a way that when something crashes, other parts of the system know what to do: how to clean up after it, and how to restart things when necessary.
That's why the let-it-crash philosophy sounds much worse than it is. As I was saying, the worst thing that can happen is that the whole system goes down; we can accept things going wrong internally, and we might lose a few things for a few minutes around them. Another, perhaps better, way of describing this is the "error kernel": you have a central part of the system which can handle errors everywhere else in a reasonable way. If that answers your question. So yeah, it sounds much worse than it is. It's great marketing, but... yeah.

Next question: Erlang has this property of hot loading code. Is it a feature of the BEAM or a feature of Erlang? And if it's a feature of the BEAM, is it possible to use it from Lua as well?

It's a feature of the BEAM. Or rather, the basic mechanism is implemented in the BEAM, and there's a library on top which uses it. The trouble, or issue, with Lua is that Lua has a different way of working with code; it doesn't work the same way as the Erlang system. Which means that if you want to be completely Lua compliant, you have to look past the Erlang code handling and do it yourself, basically. So you can't use the basic mechanism in Erlang to handle Lua code; you have to do it yourself.

Wait, we've got one more here. No, you have to think about this. When you design the architecture of your system, you have to go through and think: okay, I have this class of processes; if one of these crashes, what do I do? If this crashes, what do I do? You have to structure your system around that. Once you've done that, then they can crash, because you've arranged the system around them so it will handle it. But you do have to think ahead, not just about what can go wrong, but about where things can go wrong and what to do about it when it happens. Then I can let it crash, because I know the rest of the system will clean up.
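That design process is exactly what OTP supervisors capture: you declare up front what to do when a class of processes crashes. A minimal sketch, where the worker module `demo_worker` is hypothetical:

```erlang
-module(demo_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% If the worker crashes, restart just that worker (one_for_one);
    %% give up only if it crashes more than 5 times in 10 seconds.
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    Worker = #{id      => demo_worker,
               start   => {demo_worker, start_link, []},
               restart => permanent},
    {ok, {SupFlags, [Worker]}}.
```

The worker itself contains no error handling for this; the "what do I do if it crashes" decision lives entirely in the supervisor, which is the point of the philosophy.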
Yeah, okay, I think that's all we have time for. Okay, thank you again very much.