Yeah, Erlang: The Movie. If you look at the movie, notice, I'm the one who fixes the bug, right? Joe and Mike are just sitting there complaining about it. I'm the one who actually fixes the bug. Yeah, that is Mars, from the picture from the Curiosity rover. I haven't got Erlang on Mars yet, but it's a goal. So yeah, okay. The Erlang rationale. I'm going to try and explain a little why Erlang, and the things around Erlang, look like they do. That's the goal, anyway. So we have to start with: what's a rationale? It's the fundamental reasons, the basis, an exposition of principles or reasons. Just trying to explain why things look like they do. And the question is, why should we bother having one? One simple reason is it helps users understand how and why they should use things. You can see this concept, this construct, was designed to do this, which makes it easier to understand when you should use it. It helps the language designers, if you're going to extend the language, keep track of what's going on; it helps the implementers, of course, so they can see which features we consider interesting; and it helps people wishing to extend the language. I can say now, one very serious error we made is that we never wrote down why things look like they do. We had long discussions about this. We discussed things for years, literally, and we arrived at a solution and we thought it was good, but we never wrote down why things look like they do. After all, we worked with these things, we thought they were self-evident, right? How could anyone not do it this way? They're stupid if they do it some other way, right? Not everyone agreed. So I would say it's a recommendation to José and the Elixir team: when you make a decision for something, describe why you've made that decision, why you've done this and not something else, right?
It will help people a lot much later. And one more thing, and I've found this very important: we were talking to people later on who we considered to be in the know, and they had not realized why we did some things, what the reasoning behind them was, why things looked like they did. So I think it's a very important issue. Yeah, I'd forgotten one slide: concurrency and parallelism. So our language originally is all about concurrency. So, from my point of view, what's the difference between concurrency and parallelism? If you go and look at definitions, many people just equate them. I don't. I take another view, which is not mine, I've taken it from others. Concurrency, that's a feature of the problem or of your solution. The problem has lots of things going on, more or less at the same time; therefore I want a concurrent way of describing the solution to that problem. Parallelism, that's what your underlying system provides for you. You might have one core, you might have 100 cores, you might have 100 machines or 1000 machines. That's the parallelism. And they're two different things. So you can run a concurrent language on top of a single-processor system, which is what our language did for the first 10, 15 years, and you'll still get all the concurrency, but you won't have parallelism. You can run on a parallel system and you'll get parallelism and concurrency. So yeah, that's just it. So this was the problem we were trying to solve. Ericsson had, well, a switch called the AXE, they still have it. It was, and is, a very successful product; it made Ericsson a ton of money. But how do you make programming and maintaining these applications easier? It was a difficult thing to maintain and work with and extend. But you had the requirement of keeping the characteristics.
It could not change how it behaved: exactly the same characteristics of the system. And a bit about the problem domain here. This is from a thesis that, well, the boss at the lab, Bjarne Däcker, wrote about the problem domain we were looking at. If you look through some of them: you need the concept of time in the system. Things will happen at a certain time; we should take a certain time to do things. They need to be distributed. We'll get back to that more later, but if you want to make a truly fault-tolerant system, you need at least two computers. However good your system is on one computer, when someone pulls the plug on that, it's gone, right? You need at least two computers, so you need distribution. We were controlling hardware; it was a very large software system with quite complex functionality in it. So I've got a big system, I make a change here, and it affects a lot of other things I probably hadn't thought about originally. It should be in continuous operation over many years. And this is the cruncher, really. Typically in those days, a telephone switch could be something the customer would be using for 20 years, right? And it should not go down very often. If the thing crashed, that would have cost Ericsson a lot of money; it should just not crash. And the maintenance: you should be able to do everything while the system is running, code upgrades and what have you. Quality requirements, yes. Fault tolerance: again, the system should not go down if something goes wrong. You should accept the fact that things will go wrong, and design the system in such a way that it keeps on going anyway. And the last one is a large number of concurrent activities. Even in those days, and we're talking here late 80s.
We were thinking of switches which might have hundreds of thousands of connections, maybe tens of thousands of calls going on at the same time, plus all the things the switch is doing anyway. That's why I thought it was quite fun, reading a couple of years ago about the C10K problem that came up and which was supposed to be a real cruncher. For us, C10K was always trivial, right? If our system could not manage more than C10K, it just wasn't interesting. I think they've got to C100K now, haven't they? Or C1M or something like that. But that's the level of concurrency we were thinking of. And some reflections around this which I think are important to point out. We were not trying to implement a functional language. The reason it is functional is that it became functional as we were working with it. We were not trying to implement the Actor model. We actually hadn't heard of the Actor model while we were developing it. I know the Actor model is from about the same time; I'm not saying we stole it, we just hadn't heard of it at all. Someone came along afterwards and said, you're implementing the Actor model. So you go out and look at the paper on the Actor model and say, oh yeah, we are. But that was not a goal, right? What we were trying to do was solve the problem. We had this problem, and we were trying to make a system to solve it. And I want to point out here, it's not just the language. The language is part of the system; the system is the whole we were trying to work on. And this actually had one benefit: it made the development of the language and the system very focused. We had our problem, right? So what do we need to be able to solve this problem?
And we were lucky to have, from very early on, a small user group who were much more knowledgeable about the actual application than we were. Well, Joe and I knew quite little, well, sorry, let me go back. Erlang was originally started by Joe Armstrong, and a lot of the reason it looks like it does is that he was working on a Prolog system, doing a set of rules for implementing telecoms. And he says I'm number two. I don't know, I can't remember from those days, but I'll trust him on that one. And the third person was Mike Williams. Joe and I were not very knowledgeable about telecoms or the internals of telecoms; I could make a telephone call, and that was about it. Mike had done quite a lot of telecoms programming, so he knew what it was about. But we had this user group. And every time we came up with a new feature, we could give it to them and say, is this useful? Is this useful for your problem? And they would come back and say, yes, this idea is very good, we can use this. Or they would come back and say, no, this is totally useless, we can't use it. Which actually happened quite a few times. So yeah, that made our focus very narrow. It was about solving the problem. It also meant we avoided a problem some languages and systems have, which is that the designers have a lot of very good ideas and you find a lot of things coming into the system, and maybe each one in itself is good, but the whole becomes too much. We could avoid that problem by asking: is this feature useful? Yes, then we put it in. No, we just don't put it in. Now, I think Francesco said that probably Joe and I were more interested in adding new fun features into the language than Mike was. He was the sort of sensible person here. But yeah, that's one reason why, if you look at the language and the system, it's very basic. There are a few other reasons for that as well, but that's why. So, where we ended up.
So the language developed iteratively with our ideas, their feedback, etc., etc. And we arrived at a number of first principles, I'll call them. Lightweight concurrency: it must handle a large number of processes, literally from the beginning of our thinking, hundreds of thousands of processes, and process creation, context switching and communication must be fast. From our point of view, using the concurrency was a fundamental design factor. If you're coming from an OO world, you start thinking about the classes; if you're coming from the concurrency-oriented world, you start thinking about which processes the system has. Asynchronous communication: this is what the application needed, and we thought it was a much better base to build things on. You need process isolation: what happens in one process must not directly or indirectly affect what happens in another process. And this gets back to error handling. The system must be able to detect and handle errors. That was not an option; if it couldn't do that, it was uninteresting. You need continuous evolution of the system, which in this case referred to being able to upgrade the system with new code while the system was running. Some other principles as well. We needed a high-level language to get real benefits. In those days, people were considering languages like C, Pascal, Ada; we needed something much higher level. And the language should be simple. Simple in the sense that you need a small number of basic principles, and if you get those right, you can build everything on top of them. There need to be a small number of them, and they should be quite powerful; then it's good. So small, in this case, is good.
You want to avoid the case where you've got a bunch of features in the language, then you find you need something else but you can't do it with the existing features, so you have to add a new feature, and then you get a sort of creeping featurism and the whole thing builds up. And the last point is what we found out the hard way: provide tools for building the system, not solutions. Most of the times we tried to provide solutions, we got the problem wrong and the solution wasn't usable. I can give some examples later on of things like that. So yeah, these are some of the basic principles, and they reflect very much into the Erlang language and the Erlang system. So, getting on a bit. We had the sequential language, and we wanted a functional language. It is a functional language. It has a different syntax; most functional languages have a different syntax. I don't think you'll find a functional language that looks like something with C-style curly braces. But it's a very simple language. If you look at the Erlang syntax, you'll find it is much simpler than most other languages. It might be different, but it is a very simple and very consistent syntax. Safe: well yeah, no pointer errors, of course. That's a good way of crashing a system. Reasonably high level: it was then, and it still is, actually. It's dynamically typed; I'll talk a bit more about that later. And the same with the last point: there are no user-defined types in it. That was not by accident, that was by design. Dynamically typed: I think someone said it was because we couldn't implement a type checker. Well, we implemented a compiler, we implemented the language implementation. If we had wanted a type checker, we could have done that as well; we taught ourselves all the other bits, so we could have done that too. There are a lot of papers on it. Yes, we had quite a lot of different backgrounds.
Joe had been working before with the Swedish space agency, designing systems for controlling satellites, written in Fortran. Mike had worked a long time in telecoms; he was a very proficient C programmer. Before I started on Erlang, I'd done quite a lot of work with Lisp. Well, C, Pascal, Lisp, both implementing and using. The same thing with logic languages: using and implementing logic languages, especially concurrent logic languages. So we had a very broad base, and we could have done a type system; it wouldn't have been a problem. I like dynamically typed languages. So, about concurrency. Lightweight concurrency, yes: millions of processes are possible. And to do that, I'll say green processes here, I don't know if that's the current word for it, you can't use operating system processes. They just won't let you do that; it's just too much memory. So you can have millions of Erlang processes, and I think WhatsApp is the best example of that, the best example I know of anyway. They came out two years ago and said they were running two million concurrent TCP connections on one machine using Erlang. That would mean at least two million Erlang processes. So it's possible to do it, and it works. We use processes for everything. There is no global state. So we use processes for concurrency, yes, and also for managing state and any form of resource. And processes are isolated, so you can quite happily crash them without affecting anything else. And there is no global data in the system. Really, there isn't any. So one of the first principles behind things is, with Erlang things, I'm not going to call them objects because that means too much to many people: there are two basic types of things in Erlang. We have immutable data structures, that's it: Erlang terms, they're all immutable. And we have processes. And if you look in the system, there really are only those two different types of things. The ETS table?
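As a minimal sketch of that lightweight-concurrency claim (the module name and numbers here are mine, not from the talk), this spawns a large number of processes, each of which reports back to its parent and dies:

```erlang
%% Sketch: spawning many lightweight Erlang processes.
-module(many).
-export([run/1]).

run(N) ->
    Parent = self(),
    %% Each process just sends one message back to its parent and exits.
    Pids = [spawn(fun() -> Parent ! {done, self()} end)
            || _ <- lists:seq(1, N)],
    %% Collect exactly one {done, Pid} message per spawned process.
    [receive {done, Pid} -> ok end || Pid <- Pids],
    length(Pids).
```

Calling `many:run(100000).` spawns a hundred thousand processes and completes in well under a second on ordinary hardware, which is the point: process creation and communication are cheap enough to use freely.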
Yes, there are ETS tables, someone was mentioning that. The table itself is a mutable data structure, but all the data in it is normal, standard, immutable Erlang data. So I can go in and put new things in the table, but I can't modify the actual data in the table. It's all immutable data. And actually, if you look at how an ETS table is implemented and how it works, it's very process-like. You could actually implement an ETS table with a process, I've done it once, and get exactly the same functionality. So from our point of view, what's a process? It's something which obeys the process semantics. Duh. It's parallel, independent execution: the processes are independent and they run independently of each other. They communicate using asynchronous message passing, and you have links and monitors for error detection and handling. So if it's a process, I can link to it and I'll find out when it crashes. And they obey exit signals: if it's a process and I send it an exit signal, it will die. That's what a process is. And how you actually implement it is, from our point of view, completely irrelevant. So a bit more here. Everything runs in a process; you cannot run anything outside a process. And all processes are equal. Erlang is a very egalitarian system. All processes are equal; there are no special processes, nothing. There's nothing like what you sometimes get the feel of in other systems with concurrency, where you have a central thread of execution which might start other bits and pieces. There is no central thread of execution. So, for example, the Erlang shell is just a process like anything else. That's why you can start up lots of shells if you want to. And there's no process hierarchy: it's a flat process space. And again, as I said before, processes are used for many things: managing state, concurrency, etc., etc. Again, process communication: everything is by messages.
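Those process semantics, links and monitors reporting a death, can be sketched in a few lines (module name is mine):

```erlang
%% Sketch: a monitor turns another process's death into an ordinary
%% message, which is how error detection fits the message-passing model.
-module(sem).
-export([watch/0]).

watch() ->
    %% spawn_monitor atomically spawns and monitors, so we are
    %% guaranteed a 'DOWN' message even if the process dies immediately.
    {Pid, Ref} = spawn_monitor(fun() -> exit(boom) end),
    receive
        {'DOWN', Ref, process, Pid, Reason} -> Reason
    after 1000 ->
        timeout
    end.
```

`sem:watch()` returns `boom`: the watcher learns the exit reason without any special error-handling machinery, just a receive.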
There is no backdoor communication method between processes. If I want to tell a process something, I have to send it a message. And all message passing is asynchronous. Okay, a BIF here: that's the Erlang term for a built-in function. They're the ones that are basically all in the module erlang. I know Elixir has split those out and moved them into different modules, amongst others Process and Kernel. But they're what's built into the actual system. And all the BIFs around processes and communication are asynchronous. All they do is check arguments, then they just send things off and it happens. And there is actually one exception, or half exception, to that: that's when you send to a registered name. You're actually, in a synchronous fashion, checking that the registered name exists before you send to it. But the actual sending is then asynchronous. A very nice feature of this is that it works with distribution; I'll talk a bit more about that later. If you want to do synchronous stuff with distribution, it's difficult. The underlying mechanism does not properly support it. If I send a message to another machine, it's very difficult to keep track of whether it actually arrives or not. Okay, maybe on a local network you could, but here we're thinking wide. So it's very difficult to do that. And we mostly got that right. But we actually ended up in one case where people hadn't realized that what we were thinking of here was asynchronous. If you look at the definition of the link function, it's not asynchronous any more. Originally it was: you tried to link to a process, the process wasn't there, you got sent back a signal, asynchronously again, saying it wasn't there. But now they've put some synchronicity in there. That was just one case where they hadn't realized how we were thinking, since it wasn't written down. Communicating with the outside world: ports. Well, there are two methods. There are ports, which make the outside world look like a process.
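The registered-name case looks like this in practice (names here are mine, for illustration): the name is looked up when you send, but the send itself is still just an asynchronous message.

```erlang
%% Sketch: registering a process and sending to it by name. The name
%% lookup is the synchronous "half exception"; the send is asynchronous.
-module(reg_demo).
-export([start/0]).

start() ->
    register(echo, spawn(fun loop/0)),
    echo ! {self(), hello},            %% name checked, then async send
    receive
        {echo_reply, Msg} -> Msg
    end.

loop() ->
    receive
        {From, Msg} -> From ! {echo_reply, Msg}, loop()
    end.
```

If `echo` were not registered, the `!` would raise a `badarg` error at the sender, which is exactly the synchronous check being described.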
They obey process semantics. You send messages to them, you link to them. They fit very nicely with the Erlang way of thinking. We use ports to talk to hardware in the general sense. So when you open a TCP connection, you're actually opening a port to the outside world. And on the inside, a port needs to talk to someone. If something comes in from the outside world, it's going to become a message, and it has to be sent somewhere, so the port has the concept of a connected process, the process that owns the port. Anything that comes in from the outside world will be sent to that connected process. There are a number of BIFs now working on ports, port_command and a few others as well. They didn't exist originally, and if you don't want to use them, you can still just send messages to the port and get exactly the same behaviour. Actually, the feature, or misfeature, of having the port BIFs makes the implementation more difficult. Because now the implementation has to handle both asynchronous messages and BIFs which should also behave asynchronously, at the same time, so the implementation has become a bit more complex. And honestly, we had a long discussion about whether we should have a data type called a port at all, or just make them the same as pids, processes. So ports still really bake in this internal thing here. I'll get back to this in a moment. So, error handling. The basis here is, of course, that errors will always occur. No matter how good you are, no matter how fantastic your system is, no matter how much help you have in your system, you will always get errors. And I don't agree with people who say that if we had a fantastic type system we wouldn't get errors. You can always get errors. That's easy: bad input, something goes wrong in the system, something I have no control over does something to my system, so I get errors. That's just a fact of life. And again, if you want to build a robust system, you have to handle errors in it, or it will go down.
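A small sketch of a port making the outside world look like a process (this assumes a Unix-like system where an `echo` command exists; the module name is mine): the process that opens the port becomes its connected process and receives the external program's output as ordinary messages.

```erlang
%% Sketch: an external program wrapped in a port. Output from the
%% program arrives as {Port, {data, ...}} messages to the connected
%% process, i.e. the opener.
-module(port_demo).
-export([run/0]).

run() ->
    Port = open_port({spawn, "echo hello"}, [binary, exit_status]),
    receive
        {Port, {data, Data}} -> Data
    after 2000 ->
        timeout
    end.
```

On a Unix system `port_demo:run()` returns the program's output as a binary; the same message-receive pattern is used whether the port wraps a pipe, a driver or a socket.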
So from our point of view, the system must never go down. Parts of it might crash and burn, you'll get errors, coding errors, hardware errors, whatever, but the system must never go down. Okay, this was for our type of system. We're thinking here of telecoms-type systems, so yes, if something goes wrong you might lose a call, but the switch itself will keep on going. And that, of course, may vary, or will vary, depending on your type of application. So the basis here is that you have to sit down and think, and work out what happens when something goes wrong. What should my system do when something goes wrong? That's what Greg was mentioning yesterday in his talk. You have this thing and it forces you to sit down and consider what should happen when something goes wrong. Should I let it crash? Should I try and save state so I can go back, etc.? It forces you to think about that problem. From our point of view: crash it, let the system go on. Yeah, Francesco, when he flew over, had his "let it crash" t-shirt on, right? I don't think anyone reacted to it, but never mind. Pardon? So yeah, on the other hand, this is what a system must do: it must be able to detect the error, contain the effect of the error, and handle the error and recover from it. And you, designing your system, must decide what handling and recovering from it means for you. So yeah, robust systems must always be aware of errors, but you don't want to write error-checking code everywhere. It bloats the code, you're always going to get it wrong, I've been there, done that, or you just don't do it because you don't want to do it. The classic Unix programmer: every system call, just ignore the return values anyway, right? So what we want to do is avoid writing error-checking code everywhere, and we want to be able to handle processes crashing among cooperating processes. So the idea is that the general case is you'll have a bunch of processes working together doing something.
It might be a connection, it might be a telephone call, it might be something else depending on what you're doing, and usually if one of those processes crashes, the other processes really can't do anything sensible, so you might as well crash them anyway, right? And you want it to interact well with process communication. It must fit in with the standard communication mechanisms, otherwise it becomes all strange. So the basic philosophy, yeah: let it crash. If something goes wrong in a process, don't handle the error cases; program for the correct case, which is nice and easy, and if something goes wrong, crash the process. Someone else will clean up after you, right? It's a very nice idea. And the error handling is process-based, yes. If one process crashes, then all cooperating processes, in this case processes which are linked together, should crash as well. And a system process can then monitor processes, actions, tasks, jobs, whatever you want to call them, and when they go down and crash, that system process can clean up after them, because it knows what to do. It can reset stuff, it can reset hardware, reset resources, whatever it might be. It might restart them; maybe these processes should always be running, so the system will then restart them. Sometimes, however, it can be quite reasonable to handle errors locally. If I'm doing something and I get an error here and I can do a sensible thing, then of course you do it. But don't be scared of crashing processes. Everything I've said here applies to Elixir as well; I think most of my presentation applies to Elixir, apart from about three slides that don't. We're not scared of errors. We're not scared of processes crashing. We know what to do. So, modules and code and code loading. You only have compiled code in the system. That's it. The module is the unit of code handling.
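That division into crashing workers and a system process that cleans up can be sketched directly with links and `trap_exit` (module name is mine; this is the mechanism supervisors are built on, not OTP's actual supervisor code):

```erlang
%% Sketch: a "system process" traps exits, links to a worker, and is
%% told, as a message, when the worker crashes.
-module(crash_demo).
-export([run/0]).

run() ->
    process_flag(trap_exit, true),            %% exit signals become messages
    Pid = spawn_link(fun() -> exit(badness) end),
    receive
        {'EXIT', Pid, Reason} -> {worker_died, Reason}
    after 1000 ->
        timeout
    end.
```

Without `trap_exit`, the linked watcher would simply crash along with the worker, which is exactly the "crash all cooperating processes" behaviour described above; trapping exits is what turns a process into one that cleans up instead.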
It is both the unit of compilation and the unit of code loading. So you load a module. If you delete a module, you delete code; you delete the module, there's no way around that. You can have multiple versions of modules, two currently. I don't know exactly why we chose two originally, but we did. And this means there are no inter-module dependencies at all in the system. That means you can quite happily delete and reload modules on a per-module basis, and you know you'll not do anything funny to other modules. They might try to call you, and in your new code you might not have the function there, or the module might have been deleted, but there are no dependencies between the modules. And all functions belong to a module, and again, all modules are equal. Sometimes you'll see in documentation things like "system modules", but there's nothing special about a system module. They're just modules; the only difference is who wrote them. And there's no module hierarchy. It's a flat module space. And yes, this causes problems with module naming. Elixir has one way of handling it: first implicitly prefixing the module name with Elixir, then allowing dotted module names. That's just a way of getting around the flat module space. Yeah, there are quite a few things that were missing from the very early Erlang systems, the systems actually used to write the first products. Well, code handling, that was there very, very early, while we were still running on top of the Prolog system, because then we were just using Prolog's code handling, but as soon as we started writing our own emulator, about 1990, we started doing code handling. Binaries took a while before they got in; for the first couple of years we didn't have binaries at all. So when you were talking to the outside world, it was lists of bytes you were sending backwards and forwards. We didn't have ETS tables either; they also came in later. Funs were much later.
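The two-versions rule is what makes hot code upgrade of a running process work. A small sketch (module name is mine): the convention is that a long-lived loop makes a fully qualified call to itself, because `?MODULE:loop/1` always jumps to the current version of the module, while a plain local `loop(State)` stays in the old one.

```erlang
%% Sketch: a server loop written so that loading a new version of this
%% module upgrades the running process at the next message.
-module(upgr).
-export([start/0, loop/1]).

start() -> spawn(fun() -> loop(0) end).

loop(State) ->
    receive
        {get, From} -> From ! {value, State},
                       ?MODULE:loop(State);   %% jumps to current code
        {set, New}  -> ?MODULE:loop(New);
        stop        -> ok
    end.
```

A process still executing the old version when a third version is loaded is killed, which is the practical consequence of keeping only two versions.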
OTP came about 1995, 1996, and NIFs are very recent. It worked anyway. But a lot of these things, which you take for granted today and which are fantastic, just didn't exist. Distribution: loosely coupled nodes. We were thinking very dynamically here, so nodes could come and go. It wasn't like you set up a predefined system with a fixed set of nodes in it; it was very dynamic. And it's completely transparent if you want it to be. So you can use all the communication mechanisms and the error handling across distributed nodes transparently if you want to. Sometimes you do, sometimes you don't. And having everything asynchronous, the communication and error handling mechanisms, actually made it much easier to implement distribution. Except for this one feature of sending to a registered name on another node. I think Klacke, Claes Wikström, who did the actual distribution, said something like there were four different error cases you had to handle just to get that relatively simple feature working properly. So synchronous stuff is a pain. Yeah, OTP. OTP of course wasn't the first attempt at writing systems with Erlang. The first products tackled all these problems, and we had thought a lot about how you build a system. That's one reason why some of the features look like they do: so you could use them to build systems. So most of the concepts you find in OTP existed before. The first product attacked these problems and built a system called BOS, the Basic Operating System, which solved them. But it was for them, for their product; it wasn't generic. So when other products came along, you needed a generic system for doing this. A lot of these ideas, supervision trees and linking and how you build things together, they all existed in BOS; they existed before BOS, when we were doing things anyway.
So OTP is a large set of libraries, basically all the libraries are part of OTP, plus a set of rules and design patterns for building robust systems. That's the goal: how can I make a system which is robust and fault-tolerant, etc., etc.? That's the whole goal of the language. Generic behaviours, patterns, tools, supervision trees. It has the concept of an application, which is actually a very bad name, because it doesn't at all mean what most people think of when they hear the word application. So if you're looking at OTP and it talks about applications, think of components; that's more like what it is. And each application will have its own supervision tree, using supervisors, for the code in it, and you build a supervision tree to keep track of when processes die and, more specifically, whether you need to restart them. If you don't need to restart them, you don't have to put them in a supervision tree; you can, of course, but you don't have to. From the systems point of view, systems built with Erlang tend to be very OS-like. If you start up your typical operating system, whether it's Unix or Windows or whatever it might be, you'll find a large number of processes running in the system which provide services in a generic sense. They're just there, talking to the system. That's very much what an Erlang system is like: you'll find the system provides you with a lot of services, whatever they might be. There is very seldom a central thread of execution; very seldom do you have one thread of execution which might start further processes as it goes. That's typically not the way they look. You might have something which starts processes, for example if you're doing a web server, you'll have one process which sits and receives connection requests, but then it will start another process to handle that connection and go back again.
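The generic behaviours mentioned above look like this in the smallest case (module name is mine): `gen_server` supplies the generic server loop, message handling and OTP conventions, and the module only supplies the callbacks.

```erlang
%% Minimal gen_server sketch: a counter. The behaviour owns the loop;
%% we only fill in what is specific to this server.
-module(counter).
-behaviour(gen_server).
-export([start_link/0, incr/0, value/0]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() -> gen_server:start_link({local, ?MODULE}, ?MODULE, 0, []).

incr()  -> gen_server:cast(?MODULE, incr).     %% asynchronous request
value() -> gen_server:call(?MODULE, value).    %% synchronous request

init(N)                      -> {ok, N}.
handle_call(value, _From, N) -> {reply, N, N}.
handle_cast(incr, N)         -> {noreply, N + 1}.
```

In a real system a server like this would be started by a supervisor inside an application's supervision tree rather than called directly, which is exactly the "application as component" structure being described.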
Most likely it will not sit and monitor that process or anything like that. So it's very operating-system-like. Again, getting back to the fact that you can have lots of shells: of course you can run lots of shells in your operating system as well. You can do the same thing here. So there's seldom a central thread of execution, and that reflects again into the basic primitives that exist in OTP and in the Erlang system. It's not often, well, it can happen of course, that when I'm doing my thing I'll start up something else to run in parallel and then get the value back from it. I might send requests to other services, of course, most likely I will, but they're already there, right? They're already there and running. So, just some examples. The system I/O is process-based, of course; everything's process-based here. It's built around the concept of an I/O server. An I/O server is something which on one side talks to a device, whatever that might be, and on the other side talks to the Erlang system, and it's the interface between them. So it allows me, from my code's side, to have generic I/O functions: I can do an I/O write to something, it will go to an I/O server, and the I/O server knows exactly what the hardware or the outside world expects from this write.
Back in those days, the bad old days, you would for example have file systems where files were a set of lines of a predefined length. The IO server would handle that: I could write lines of any length and it would handle the padding or whatever might be needed. And the other way around: I've got my generic IO server which knows how to talk to a device, and into that I can plug IO functions. So I can have one device and do lots of different IO to it. I don't have to read only Erlang terms, or lines, or characters; I can freely mix between these things, because I've separated the functionality instead of merging it all in one place. I found out later that the concept we used for doing input was rediscovered in the Haskell world, where they called it an iteratee, but it's exactly the same thing as how the input side of the IO servers works.

It uses concepts called process groups and group leaders. There actually are process groups in Erlang (again, we're talking about the Erlang system here), but they're much simpler: all the processes in a group just have the same group leader, and the group leader has no idea which processes are in its group. That, for example, is how the front end is used. When you start up an Erlang system you get a shell; if you press Ctrl-G in the shell you'll get back to this thing I call the user driver, and that's got a small interface where you can, for example, start new shells, which is what José showed, and switch between shells. So I can have multiple shells running, or multiple other things; they don't have to be shells, they can be any program, any system that you start with a start function. They will each get a group leader, all the default IO for each of those jobs will go the same way, and I can then use this user driver to select which one of these jobs I'm looking at.
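The group-leader mechanism can be seen directly: a spawned process inherits its parent's group leader, which is why all default IO from one job ends up in the same place. A minimal sketch:

```erlang
%% Every process has a group leader, to which its default IO is
%% routed. A spawned process inherits the spawner's group leader.
Parent = self(),
spawn(fun() -> Parent ! {gl, erlang:group_leader()} end),
receive
    {gl, GL} ->
        %% The child reports the same group leader we have.
        GL =:= erlang:group_leader()
end.
```

(`erlang:group_leader/2` is what lets the system re-point a process's IO elsewhere, which is how each job's output can be routed separately.)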
Typically the problem is that if you look at most systems running on Unix, you start up a lot of things in parallel, they're all doing IO, and your screen is just a big mess; this allows you to choose. That's just something in the system; again it shows the operating-system thinking behind it, and it's one of those things which is not really documented anywhere.

So, from the general case to some more specific things, which I think most people here have found already. Pattern matching is great, it's fantastic, we use it everywhere; you'll find the same thing in Elixir, it's used for controlling things, for doing everything. And we had the goal, which we managed to fulfil, that the syntax for a constructor, how you construct data, and the syntax for a pattern, for pulling it apart, are the same. On the right-hand side it builds things, on the left-hand side it pulls things apart, just to make it easier. That works well.

Guards were added because sometimes you can't express everything in a pattern. I might have something which I want to be an integer, bigger than 10 but less than 20; it's impossible to put that in a pattern. I have seen patterns, not Erlang patterns but patterns in other languages, where you can put this type of information in the pattern, but I think it became very unreadable; I found this way much easier. Guards are tests, they're not expressions (the naming, "guard test", is slightly unclear here); they're tests simply because they behave differently on failure. When an expression fails you get something like an exception, a process crash. When a guard test fails, all that happens is the guard itself fails and you go on to the next option. So if I try to add two atoms together in an expression, I'll get a badarith error; if I try to add two atoms together in a guard, the guard just fails. Being able to use expression-style building in guards is a good thing, but it's made the
difference between them less clear.

So, some Erlang stuff. Variables bind just once. Yes, I know; I like it, and if you want to complain about it, don't complain to me. What we actually got wrong, I think, is the variable scoping in Erlang. In the body of a function clause there is, well, no scope, or one scope, however you want to look at it: a variable there is the same variable everywhere, there is no scoping for it, and that can be confusing, to be honest. So if you're coming from the Erlang side and you get an "unsafe variable" error from the compiler, it means you're trying to use a variable in a strange way: because of the scoping rule you might think you're getting a new variable, but you're not. And this affects, well, not the scoping, but the pattern matching: if you use an already bound variable in a pattern, it's an implicit test, not a rebind. In Elixir it's an automatic rebind unless you've prefixed it with the caret (^), the pin operator, in which case it means you're testing against the value. The equals operator was originally a simple assignment, but then we found we needed something more; sometimes you need a pattern, so it was extended: on the left-hand side you have a pattern, so it's typically a right-to-left kind of thing.

And we added records, which have taken a lot of flak. I think, after semicolons, commas and dots, the record syntax is the thing most people complain about if you go on the net. They were there to solve a problem. Our first user group, who were actually building a product with the language, said they wanted named fields in tuples: they were using tuples as aggregates but wanted named fields, which is a perfectly reasonable request. But they wanted exactly the same efficiency as the normal tuple operations, otherwise they would not use them. This meant we couldn't do anything dynamic without being slow, so it had to be compile time. So it's a
compile-time feature, and the lack of typing means you have to include the record name in the record operations; that's just a fact of life. It also means, from the Erlang point of view, that I cannot use the syntax X#person.name = "Robert" to set the name field of the record in X, because what that syntax means is extracting the name field from X and matching "Robert" against it, so I'm going to get an error.

Then we've got the Erlang if. We're almost done; this is, let's say after semicolons, commas and dots, the thing people complain about. It's a hack, to be honest. Originally there were only functions; we didn't have case. We added case, but sometimes, in the pattern on the left-hand side of a case clause, I wasn't actually interested in the pattern matching, I just wanted the guard tests. So you got people writing things like that on the left-hand side, which, however you look at it, is not very nice; it's not beautiful. So then we did a simple hack: we just removed the pattern-matching bit, kept the guard tests, and called it if instead. That went into the compiler; it was syntactically easy, a quick fix, and since it only used guards it was very simple. The result was that it wasn't used very much, so we didn't really think more about it, but it does have its limitations.

I'm almost done now. Getting back to the problem domain: we were thinking from the telecom side, but if you look at these features, these requirements of the problem, you'll find they are actually quite common. Most systems today, say web systems or any form of servers, have these requirements on them, if you really start questioning the people who want them. Most of the time they don't realise they want this, to be honest; they'll say "yeah, we want lots of throughput". But you want to do lots of things at the same time? Yes, of
course. A lot of these kinds of requirements apply. And there's one I forgot to put in there: low latency. That of course was a definite requirement, part of the timing requirements, for example low latency in the system. These are things people take for granted: they'll complain when you don't provide them, but they won't be in the requirements list. So most of these things are not just telecoms, and that's the reason why Erlang, and Elixir in this case, is actually useful today: it provides a solution to a lot of these problems, and most other systems don't. You'll find a lot of other systems that don't provide these things. Go, for example, provides concurrency, but it can't handle errors in the same way; you just can't do the error handling, so you cannot use it to build a fault-tolerant system, or only with great difficulty.

Another feature is that the Erlang concurrency model, the way it does communication and error handling, scales. The fact that everything is asynchronous and that processes don't share state scales, and non-sharing of state actually makes many things more efficient. You get this quite funny situation with, for example, cache coherency: yes, the system provides cache coherency, but the further away your cores are from each other, the more expensive the coherency becomes. You get it, but you pay for it, even though you don't see it. And in some cases having separate copies is actually more efficient, because looking at my local copy, the local version, is much faster than looking at someone else's version. So a lot of the standard thinking about building systems breaks down when you get into concurrent systems, and especially when you get into parallel systems, into really high-scale parallelism. The model scales; that's another reason why it works.

And just as a final pat on the back here to say how
good we are: these are some companies using Erlang, and I mean using Erlang seriously, in products. WhatsApp of course is the big one, but a lot of the other ones are using Erlang as well. Ericsson: it's used in some of their products and used internally for testing environments. Heroku, Basho of course with Riak. And, although I haven't got a slide for it, there are quite a few things implemented in Erlang which people use even though they're not interested in Erlang. For example Riak: I'd say a lot of Riak users don't care that it's written in Erlang, because they're not going to use an Erlang interface anyway. Same thing with CouchDB, part of which is implemented in Erlang: they don't care, they're not going to see the Erlang anyway; they can, but they don't have to. RabbitMQ, same thing; ejabberd, same thing. So yeah, that's it. Okay, that's about done. Yes.
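To close, the language features discussed above (constructor/pattern symmetry, guards, bound variables in patterns, records, and if) can be collected into one small illustrative module; the `person` record and all function names here are invented for the example:

```erlang
-module(rationale_examples).
-export([classify/1, set_name/2, same/2, sign/1]).

%% Records are a compile-time feature; the record name must be
%% given in every record operation.
-record(person, {name, age}).

%% Guards express tests a pattern cannot. A failing guard does
%% not raise an error; the clause just fails and the next one
%% is tried.
classify(X) when is_integer(X), X > 10, X < 20 -> in_range;
classify(_) -> out_of_range.

%% Setting a field uses the update syntax; X#person.name on its
%% own is field extraction, not something you can assign to.
set_name(P, Name) -> P#person{name = Name}.

%% A bound variable repeated in a pattern is an implicit
%% equality test (in Elixir this needs the ^ pin operator).
same(X, X) -> true;
same(_, _) -> false.

%% The Erlang if: clauses of guard tests only, no patterns.
sign(X) ->
    if X > 0 -> positive;
       X < 0 -> negative;
       true  -> zero
    end.
```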