Okay, so we're going to be looking at a few spaceships. We're going to look at a number of different things here: running Erlang together with other languages, and also looking at Observer. I don't know how many people have used Observer; some have. One of the things the Erlang system is very good at is introspection, and Observer does some of that introspection work for you. So we'll start up the system. What I'm running in this shell is actually a separate node, and I'm going to run Observer against that node: Observer in one Erlang node, looking at the node doing all the ships. So we start up Observer. I'm not going to make the font any bigger, because this is all I'm going to type. Now I've popped up Observer, and here it is, looking at itself, not doing anything. For example, we have load charts: we can look at the schedulers, and we see there is no load on the Observer node. Now, back in this shell, I've started up the Erlang system, and I'm going to run the spaceships. I'll start the system with a universe of 400 by 400, and I'll run 2000 ships. We start up the universe there, and there's nothing in it yet. But now I do a start run. The 75 here is the tick time for each ship: basically how often a ship updates its own state, every 75 milliseconds. Now we start, and suddenly all these ships pop up. Can you see them?
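Everything Observer displays is read from the VM through ordinary BIFs, so the same numbers can be pulled from any Erlang shell. A minimal sketch (the module name `vm_stats` is mine, not from the talk; the BIFs themselves are standard):

```erlang
%% vm_stats.erl - reading the same counters Observer displays,
%% plus the live scheduler change shown later in the talk.
-module(vm_stats).
-export([snapshot/0, set_schedulers/1]).

%% Observer's "System" tab reads these same BIFs.
snapshot() ->
    #{processes  => erlang:system_info(process_count),     % live processes
      schedulers => erlang:system_info(schedulers_online), % one per core by default
      memory     => erlang:memory(total)}.                 % total bytes held by the VM

%% erlang:system_flag/2 returns the previous value, so the change
%% can be undone; N must be between 1 and system_info(schedulers).
set_schedulers(N) ->
    erlang:system_flag(schedulers_online, N).
```

Calling `vm_stats:set_schedulers(4)` on an 8-scheduler node is exactly the live change demonstrated later in this talk.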
We're running 2000 spaceships in our universe, and these spaceships are very naive. If you look at them, they go in a straight line until they hit the edge of the universe, where they bounce, and some follow down along the edge of the universe. Now, how is this system written? The drawing uses SDL graphics, very simple, basic usage, and I didn't write that part. The logic is written in Lua: I have an implementation of Lua in Erlang, and each of these ships runs its own little Lua machine. Now I can go back to Observer here and say: I'm looking at Observer itself, but I want to look at the other nodes. I can see which nodes there are, and I pick the simulation node. Now Observer is getting all this information from the simulation, and we suddenly see that the schedulers are doing quite a lot of work, and that they are nicely load balanced. Unfortunately I can't make that panel bigger, but we can also see the memory usage, and most of it is in the processes. Exactly how the memory allocation works I cannot explain; literally, I cannot explain it. There are only a few people in the world who know the insides of that. So, what can I do now? This code is written in Erlang, so I can reload it; I can change the code. For a certain number of ships I can say: now you should run some other type of code. We can change the logic. I can do set ships: take, say, the first 700 and make them something called a run ship. The original one was the default ship. A run ship goes in a straight line, but when it starts getting near the edge of the universe, it changes its logic. If we start this, you'll see some of them starting to run free. Can you see that? There are some green ships; you can watch those and see the different behaviour, which is the new program. We saw the
spike on loading the code, and now we see a memory increase as well. And again, the thing here is that each ship is a separate Erlang process. If we go back and look, we can see the system is running 2049 processes. 2000 of those are the ships; of the other 49, most are system processes (a basic Erlang system runs about 28 processes) and a few are mine. Very simply, when a ship changes position we ask where it is; everything is scanned every 20 milliseconds. We can do a lot more things here. We can make some ships timid ships. They're frightened; they're very timid. We set this up and make these timid ships, and you'll find they have another colour, slightly yellowish. A timid ship goes in a straight line, but if there's a ship just in front of it, it turns and goes back the other way. I can show you the Lua code later. They also bounce. If we go back to our Observer here, to the load charts, we see another spike where I was loading the new versions, and the load has increased. And of course this is a game, so what you have to do is kill them. Attack ships go in a straight line, and when they catch a ship, they kill it. Let's make attack ships here; we'll just make 100, I don't want too many, because otherwise everything dies. Now they turn red. Unfortunately I don't have a sound interface; every time a ship dies it goes boom, we have sound. We can also see when a ship is shooting another ship: the yellow squares here are ships exploding. So we see ships dying, and back in Observer we can see the load going down, because ships are dying. If I look at the system tab, we see the number of processes; we've killed about 800 ships already. I can do a lot more fun things. So if we go back to the load charts: we have this concept of schedulers, basically one scheduler per core. And I can change it; I can
say how many I want to run. So we call erlang:system_flag, and the flag I want to change is schedulers_online. I'm going to say: I don't want to run 8 schedulers anymore, run 4 schedulers. We see 4 schedulers are now shut down, and all their work has been moved over to the other 4 schedulers; we saw the load increase on those and decrease on the others. I'm changing this while the system is running. I haven't deleted the schedulers; I've just said: do the work of 8 schedulers on 4 schedulers. And even though ships are dying, we still see everything pretty well load balanced; the BEAM makes quite a lot of effort to keep the load balanced. We also saw some strange things down here, where schedulers were dropping out: occasionally, when the load gets low enough, the VM decides it doesn't actually need to run all these schedulers and puts some of them to sleep. If we go back to look, we see... no, we don't catch any sleeping schedulers right now; that demo doesn't always cooperate. So, I want to stop this and start it up again to show one other thing. For those who are interested, I can show you the Lua code afterwards. We'll start this up again, and I'll set it to 100 ships. Okay, so: what was happening before, with the timid ships and the attack ships, especially the attack ships, was that one ship would look at the universe and then send another ship a zap signal, a zap message. That would hit the ship, and the ship would explode and die. It was all just asynchronous message sending. There are other versions of the behaviour which communicate in other ways, and we can try one of those. What would be nice is if we could make the ships fly in flocks. These are all running independently; maybe we can get them cooperating. So we try that. They all turn orange, and after a while we notice things
happening in our system. Some of the ships are stuck. Why are they stopping? Well, if we wait we can watch a few more get stuck. They are doing synchronous communication: each ship, when it finds a ship in front of it, sends it a message saying, tell me your position and your speed, and then it sits and waits for the reply. So now we have ships sending synchronous messages to each other. And of course, when they start talking to each other, one ship will be asking another ship for its speed while that ship is asking someone else, who in turn is waiting on the first one, and their engines stop. This is a very simple example of the danger of doing synchronous message passing for remote procedure calls, and it is extremely easy to run into. How do we get around it? The easiest way is just not to wait: instead of asking a ship "what is your speed?" and blocking, a ship should just say "here is my speed" and carry on. Then, if we look at the load here... The system will eventually go up on GitHub; it needs a bit of cleaning up first. So, what am I doing with this? The logic is implemented in Lua, and that was one of the purposes of doing this. You can implement the logic either in Erlang or in Lua; I had a Lua implementation in Erlang and I wanted to test it. I was giving a talk, and I wanted to demo two things: Luerl, of course, and also that this is another way of writing classic gaming code. Classic gaming code has one central loop that runs through all the objects; it has got a bit better now, but it's still not very parallel. This is an alternative way of writing it: each of these objects has its own little loop and communicates with the others. There is no central loop in the system. As I've said, I have a ship behaviour, almost done now, so you can plug in Lua or Elixir or whatever into the system. [Audience] How much can you do in Erlang? Quite a lot. Okay, great.
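The mutual-wait failure described above is easy to reproduce with plain processes. This is a minimal sketch, not the talk's actual ship code: each "ship" does a blocking ask and selectively waits for the reply, so two ships asking each other at the same moment both hang until the timeout.

```erlang
-module(deadlock_demo).
-export([run/0]).

%% A synchronous "RPC": send a request, then selectively wait for the
%% reply, ignoring any requests that arrive in the meantime. If two
%% ships do this to each other at once, neither ever replies; the
%% timeout stands in for the stuck engines in the demo.
ask_speed(Other) ->
    Other ! {ask, self()},
    receive
        {speed, S} -> {ok, S}
    after 500 ->
        timeout
    end.

ship(Parent) ->
    receive
        {go, Other} -> Parent ! {self(), ask_speed(Other)}
    end.

run() ->
    Parent = self(),
    A = spawn(fun() -> ship(Parent) end),
    B = spawn(fun() -> ship(Parent) end),
    A ! {go, B},
    B ! {go, A},
    %% both come back as timeout: a textbook synchronous-call deadlock
    [receive {P, R} -> R end || P <- [A, B]].
```

The fix the talk describes is to stop blocking: each ship just broadcasts `{speed, S}` to its neighbours and keeps moving, so there is no reply to wait for.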
I'll just cue myself up. That's fantastic. Hi, I'm an architect at redBus. We have now been running an Erlang system in production for three months. This is not directly related to that project, but it is a very small portion of an idea we've had. In our production system there are certain APIs which you want to generate on the fly, and you want to run them in a jailed environment, and you want to be able to remove them, add them, change them at will; you don't want to redeploy code. There are many ways of doing this, but this is one small way, just to demonstrate what Erlang is capable of: how Erlang can compile code on the fly, load it into memory, store it, bring it back again, recompile, and do all that magic. You can have as many functions as you want; you can even tie them together; you can do a lot of introspection, all that power. Plus you can also look at the complexity of a function, how long it took to execute. You can do all sorts of things which I think are useful, and we would definitely want to try that first in staging and then go to production; but this is a sneak peek of what we want to achieve. So this is just a WebSocket interface into Erlang, using Cowboy, and there are a couple of commands. Some functions are already defined; if you ask something new, a question for which no function is defined, it doesn't know about it. So why don't we define it, and then ask the same question again. It's actually querying an external source and formatting the answer. It's a simple function, but you're defining it on the fly. Even if I shut down the virtual machine and start it up again, it gets the function back from a database, so you can load it up again; it can even be shared across your cluster machines. Likewise... I'll just skip a couple of these, they're much the same if you've got the context. So what if a function is inside
something, calling things it doesn't understand yet? Let's define a function which makes another function call, and try that again. So now it reports the call it doesn't understand when it gets invoked. Just stop me if I'm running too fast; at the end I'll show you all these screenshots. Now it understands putting HTML within it. You can do this as many times as you want, with as many functions as you want. Basically it's the concept of defining functions on the fly, and you can also delete a function, do whatever, because it's all powered by Erlang; there's hardly anything extra I had to do for this. The biggest advantage of this approach is running certain functions which we need on a daily basis. We have a lot of data stores, and we query a number of them in unique ways, for debugging, recon, or whatever it is. We don't want to pollute the platform code that was compiled in. So if you want to define functions on the fly, on production, in a safe manner, then this provides that kind of workflow. That's pretty much it; it was actually very fast, but it just ran through what was captured. [Q&A] It works because Erlang gives you all the constructs to identify what is unknown and what is known; it is able to prompt about what it doesn't understand, and once you define it, it starts making sense. Yes, these are compiled: they are loaded into memory, compiled in memory, and then they run. The anonymous functions are eventually stored in a data store, a key-value store. Good that the internet is holding up: it's actually going over the internet to save them, so you could potentially have another application which also picks up all the learning. Technically, if I were running this in production, one of my colleagues could have called that function too; the learnings are shared. So if you want that function to run in another application, you can get it in any of the applications with which it's
kind of shared. Because this is for logic; it's not meant to be fast. The logic is supposed to be something you add ad hoc. We get a lot of requirements all the time, but we don't want to implement them all in the production codebase, because they're just not right for a real-life environment, and with Erlang we can do this instead. If I show you the debug screen, you can look at the functions that are being invoked. I don't know if you can see this: you can see a local function call, let me highlight that. Is that clear enough? Erlang gives you very nice functionality for seeing which function calls are being invoked and whether they exist or not, and there is custom code written to intercept that. Then you can either allow those functions to be executed, or deny them and say: no, you take too much time, or you're not allowed to access certain parameters. And with distribution you can do many more things, because if you can run one function, you can do anything.

Hello everyone. I want to give a quick talk on repeatable builds in Erlang. The idea came today while I was listening to one of the talks, where a few things were mentioned about rebar 3 being better than rebar 2, in that you can get somewhat repeatable builds; that reminded me of a problem I was solving very recently, less than a week ago. So the title of my talk is "Repeatable builds in Erlang". Or actually, no: it is "Repeatable builds for the really, really paranoid". We have a problem in Erlang: when we build a release, we have no guarantee that when we repeat the build we get the same exact thing again, mostly because of a few problems. Before I go into that: I work as a developer, and I am also a system administrator on and off. These days I am mostly working in Erlang, building my own API gateway. Back to the problem: generate a
release, given some Erlang code base, with a guarantee that you will get the exact same release at any point of time in the future when you repeat the build. By the exact same release, I mean functionally the same. It seems a very simple problem; people coming to Erlang ask why this is difficult. But how many of you have actually deployed Erlang, and what build tools were you using? Not the release tools, the build tools you used to build your project before releasing. Previously there were erlang.mk, then rebar 2, rebar 3; these days Mix, which can also compile Erlang projects. We have a lot of tools at our disposal, and barring some other esoteric or rarely used ones, I am going to say that every one of them has an issue, mostly because of a few things. Erlang has a very serious dependency problem: we still use dependencies as source dependencies, and we refer to branches. Now imagine my surprise when, just a couple of weeks ago, I had to deploy a project and one of my dependencies somewhere had an update that I had not paid attention to in the last few months, and everything broke. Things were working just fine, and they broke exactly when I was demoing in front of a client. I got really frustrated and set out to solve this. So let us look (are these slides visible?) at how erlang.mk fetches and builds its dependencies. It goes to the first dependency and recursively tries to build it. If it has a Makefile, it calls the Makefile; the Makefile might be calling rebar 2 or 3, which erlang.mk typically patches; and if it is an erlang.mk project, it gets another deps folder under it, and so on, recursively. This is similar to how Node.js works: we have node_modules, where each module has its own node_modules, and so on. But there is a very big difference between the two. In Node.js you can have the same module, with different versions, loaded into the same process. It is not like that in Erlang land. You have
a global module namespace: given a module name, you can have only one version of it loaded (barring hot code upgrades, and I am not going there). But when you are building like this, you can have a single project folder with multiple versions of the same application being brought in. When you make a release, do you have a guarantee of which version is going to get used? And why are we even fetching multiple versions when they cannot be used simultaneously? That is the problem with erlang.mk, and it is clearly spelled out in its documentation. Now, rebar 2 was kind of good: it had this beautiful option to prefer pre-built libs. If you manually fetch every single dependency, build it, and add it to your system by putting it on the ERL_LIBS path, then, with that option set, rebar 2 will not try to fetch and use its own copies of the dependencies; it will use the ones from ERL_LIBS on the system. Which means that, doing things one by one very carefully, you can have repeatable builds. But as we saw this morning, rebar 2 has its problems, and people are moving quickly to rebar 3, which provides no such option: rebar 3 wants to manage its dependencies by itself, and it does not give us anything seriously helpful here. In explaining why rebar 3 does what it does, there is a very beautiful explanation: it considers dependency versions to be informational. Because semantic versioning is something cool, something very recent; it was not there when Erlang came around. Erlang is ancient; it is not old, it is positively ancient. Back then, people relied on each other to keep their code compatible. No longer: everybody develops their own project, everybody moves fast and breaks things, eventually. So semantic versioning is out the window.
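The rebar 2 workflow described above can be sketched as a few shell commands. The variable name `REBAR_DEPS_PREFER_LIBS` and the paths here are from memory and should be treated as assumptions, not as a verbatim recipe from the talk:

```shell
# Pre-build every dependency yourself, once, into a libs directory
# (each as name-version/ebin), instead of letting rebar fetch them.
export ERL_LIBS=/opt/erlang-deps/lib      # e.g. contains cowboy-1.0.0/ebin

# Tell rebar 2 to use the libraries found on ERL_LIBS instead of
# fetching source dependencies into deps/.
export REBAR_DEPS_PREFER_LIBS=1

rebar compile                             # builds against the pinned libs only
```

Done carefully, this is the "one by one" repeatable-build workflow the speaker credits to rebar 2; rebar 3 dropped this escape hatch in favour of managing dependencies itself.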
People use source dependencies, and if you have a problem with some dependency deep inside, you have to start forking things, and so on; not a viable option. And not everybody subscribes to the same versioning conventions: git tagging is not prevalent, and even then, there is no guarantee that somebody does not go back and update their git tags or whatever. We have Hex these days. Hex is kind of nice, because once you publish a version, the Hex registry prevents you from publishing to that same version again; you have to increment the version or something. But again, people do not pin to specific Hex versions; people use semantic version ranges, and there is no guarantee that you get the same thing if the developer misses some update. So even with Hex, we have a problem that has not been solved. Semantic versioning does not work, but we introduced rebar.lock, we introduced mix.lock. These are very recent, and they are a step in the right direction, in that there is some reference the tools will try to repeat when building. But that only works when using the same tools: if any of your dependencies internally use erlang.mk or so on, it kind of works, using plugins and bridges, but it is not really working. And to add to all of this, I recently had an issue where a Dialyzer PLT built with one version of Erlang was not usable by a Dialyzer running on a different version of Erlang. This is an unholy mess. So, on to the recipe. I am using something called Nix. For those who have not heard of it, it is a purely functional package manager, and it allows you to do reproducible builds. Nix also has its own language, in which we write expressions that define a derivation: the logic for how to build something. The interesting part is that you can bring along the entire closure. So I will skip over to here: this is the project I did recently (I could not load the exact Nix file because of an internet issue). What I am doing here is that I have taken every single dependency and locked it down by its
content hash, and used some helpers defined within the Nix ecosystem to wrap up building with erlang.mk, rebar 3, or Mix, to ensure that I get the same repeatable thing. This is particularly interesting because even if I repeat the build 10 years down the line, it is guaranteed that I get the same result. I am locking not just the versions of the dependencies; I am locking the Erlang compiler. I will go one step further: I am locking the GCC that was used to build the Erlang compiler, and the hash of the source code that was used to build Erlang. And one step further still; it is turtles all the way down. So from a very minimal bootstrapping base, I have a complete guarantee that a decade down the line I will be able to repeat everything, by building GCC first, then Erlang, then Elixir, then each and every dependency. And I do not care if GitHub is down or hex.pm is down: the contents are verified by their content hash, so if I have them cached somewhere, I can drop them in, and the system is smart enough to substitute them. In fact, the rebar 3 used here has been patched so that it cannot make any network calls. It is a simple patch made by the Nix ecosystem: if you try to use this rebar 3 to build something and it makes an HTTP call, the entire build fails, saying that you are violating the policies. Otherwise it would not be airtight, because network responses are unreliable. So this is just something I wanted to share.
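A dependency pinned by content hash, as described above, might look roughly like the Nix expression below. This is a sketch only: the package, version, and hash are illustrative and are not taken from the speaker's actual project.

```nix
# Sketch: one Erlang dependency locked down by content hash.
# If the fetched source does not match sha256, the build fails,
# which is what makes the result repeatable a decade later.
{ stdenv, fetchFromGitHub, erlang }:

stdenv.mkDerivation {
  name = "cowboy-1.0.4";                  # illustrative package and version
  src = fetchFromGitHub {
    owner  = "ninenines";
    repo   = "cowboy";
    rev    = "1.0.4";                     # an exact tag, never a branch
    sha256 = "0000000000000000000000000000000000000000000000000000";
                                          # placeholder; use the real hash
  };
  buildInputs  = [ erlang ];              # the pinned Erlang, itself a derivation
  buildPhase   = "make";                  # an erlang.mk project in this sketch
  installPhase = "mkdir -p $out && cp -r ebin $out";
}
```

Because `erlang` here is itself a derivation, pinning it transitively pins the compiler, the GCC that built it, and so on: the "turtles all the way down" closure the talk describes.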