Okay, so we're going to be looking at a few spaceships. We're going to look at a number of different things here: running Erlang together with other languages, and also looking at observer. How many people have used observer? Some have. One of the things the Erlang system is very good at is introspection, and observer does some of that introspection work for you, and a lot more. So we will start up the system. What I'm running in this shell is actually a separate node: I'll run observer in one node, looking at the node doing all the ships. So we can start up the server. I'm not going to make this any bigger, because nothing I'm typing right now does anything interesting. So now I've popped up observer. It's looking at itself, and it's not doing anything yet; for example, we have the load charts. Now if we go back to this window here, I'll start up the ships. I tell my system to start: I'm going to have a space, a universe, of 400 by 400, and I'll run 2000 ships. Now we start up the universe. There's nothing in it yet, but if I now do a start run, I'm starting the simulation, and 75 here is the tick time for each step. So now we're running 2000 spaceships, and these spaceships are very naive: they go in a straight line. Now, this system uses the SDL graphics library, which is doing the very simple drawing, and the logic is written in Lua. I have an implementation of Lua in Erlang, and each of these ships runs in its own process. So now I can go back to observer here. It's looking at this node, and I now want to look at the other node; I can choose which node I want to look at, so I pick the simulation node.
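The one-process-per-ship structure the speaker describes might look like this in plain Erlang. This is only a sketch of the shape of it: the real demo drives the ship logic from Lua via Luerl, and all module, function, and message names here are invented for illustration.

```erlang
%% Sketch only: each ship is its own process with its own receive loop.
-module(ship).
-export([start/1, step/1, loop/1]).

%% Pure movement rule: naive straight line, wrapping in a 400-wide universe.
step({X, Y}) -> {(X + 1) rem 400, Y}.

%% One Erlang process per ship.
start(Pos) -> spawn(?MODULE, loop, [Pos]).

loop(Pos) ->
    receive
        {tick, From} ->                 % the simulation tick (75 ms in the demo)
            New = step(Pos),
            From ! {pos, self(), New},  % report the new position
            loop(New);
        stop ->
            ok
    end.
```

With 2000 ships you simply spawn 2000 of these loops; the BEAM handles that many processes without effort.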
Observer is now getting its data from the simulation node, and suddenly we see things. I can't, unfortunately, make that bigger, but we can see here, for example, the memory usage, and most of it is in the processes. How the memory allocation works, I cannot explain; I mean literally, I cannot explain it. There are only a few people in the world who know how the memory allocators work inside the runtime system. So what I can do now: this code is, of course, written in Lua, so I can reload it; I can change the code. What I can do for a certain number of ships is say that they should now run some other type of code; we can change the logic. So I can do set ships: we can take, say, the first 700 and make them something called a run ship. The original one was the default ship. What a run ship does is it still goes in a straight line, but when it starts getting near the edge of the universe, it changes its logic and turns around near the edge. And if we start these, you'll see some of them are starting to turn. Can you see that?
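One way the "set ships" switch could work, sketched in Erlang rather than the demo's Lua: each ship carries its movement function as data, and a control message swaps it in at run time. All names here are invented, not the demo's real API.

```erlang
%% Sketch: swap a running ship's logic without restarting it.
-module(ship_swap).
-export([default_logic/1, run_logic/1, loop/2]).

default_logic({X, Y}) -> {X + 1, Y}.            % plain straight line

run_logic({X, Y}) when X >= 390 -> {X, Y + 1};  % change course near the edge
run_logic({X, Y})               -> {X + 1, Y}.

loop(Pos, Logic) ->
    receive
        tick                -> loop(Logic(Pos), Logic);
        {set_logic, NewFun} -> loop(Pos, NewFun);   % "set ships" in miniature
        stop                -> ok
    end.
```

Sending `{set_logic, fun ship_swap:run_logic/1}` to the first 700 ship processes is then all the "upgrade" amounts to.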
There are some green ships now. You can look at those, and they behave differently: that's the new program. We saw the spike on loading the code, and now we see there's a memory increase as well. And again, the thing here is that each ship is a separate Erlang process. So if we go back and look here, we can see in the system that we are running 2049 processes. 2000 of those are the actual ships. As for the other 49 processes: there are a few system processes (the basic Erlang system runs about 28 when you start it up), and I'm running a few of my own. And it's very simple: when a ship changes position, it just writes it into a table. We can do a lot more things here. We can make some ships into timid ships. They're frightened; they're very timid. So we can set these, and we can make these timid ships; you'll find they have another colour, slightly yellowish. What a timid ship does is it keeps going in a straight line, but if there's a ship just in front of it, it flips and goes back the other way. I can show you the Lua code afterwards. They also bounce. And if we go back and look at our server here, our load charts, we see there's another spike where I was loading that code. This is all running in Lua. Of course, this is a game, and what you have to do in a game is kill things. So we can make attack ships. Attack ships also go in a straight line, but when they get a ship immediately in front of them, they kill it. I don't want to make too many, because otherwise they all die. Now, they turn red. Unfortunately I don't have sound in this demo; every time a ship dies it goes boom, we have sound. We can also see the zaps: a zap means a ship is shooting another ship, and the little yellow squares you see here are ships exploding. So we see ships dying, and if we go back and look at our observer here, we can see that the load is going down, of course, because ships are dying.
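The "ship writes its position into a table" mechanism is a natural fit for ETS. A minimal sketch, with an invented table layout:

```erlang
%% Sketch: ships publish positions into a shared ETS table; the
%% renderer just reads the whole table each frame.
-module(ship_table).
-export([new/0, publish/2, positions/1]).

new() ->
    ets:new(ships, [set, public]).      % one row per ship

publish(Tab, Pos) ->
    ets:insert(Tab, {self(), Pos}).     % key = ship pid, value = {X,Y}

positions(Tab) ->
    ets:tab2list(Tab).
```

Because the table is `public`, every ship process can write its own row concurrently, and the drawing loop never has to message the ships at all.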
If I look at the system here, we'll see the number of processes has gone down: we've almost killed a thousand ships; we've killed 800 ships. I can do a lot more fun things with the Erlang system. So let's go back to the load charts. We have this concept of schedulers; basically there's one scheduler per core. I can change this: I can say how many I want to run. So we can call erlang, hold on, system_flag, and the flag I want to change is schedulers_online. The load is dropping, so I don't want to run eight schedulers anymore; I can now run four schedulers. And it sorts everything out correctly: we see four schedulers are now shut down, and all the work has been moved over to the other four schedulers. We saw some load increase here, and then a decrease. So I'm changing this while the system is running. I haven't deleted the schedulers; I've just said: move all the work of four schedulers onto the other four schedulers. That's basically what this is. We still see, even though processes, ships, are dying, that it's keeping everything pretty well load balanced; it makes quite a lot of effort to give you load balance. We also see down here that the work on the schedulers is dropping. What it tries to do occasionally, when the load gets too far down, is say: well, I don't need to run all these schedulers; maybe I can stop a few of the schedulers online. If we go back and look here, we'll see we don't have all the schedulers running. So, these are some of the things. Now I want to stop this and start up again to show one other thing, and for those who are interested, I can show you the Lua code afterwards. Now we'll start this up again; I'll set this to 100 ships. Okay. So, what was happening before with the timid ships and the attack ships, especially the attack ships, was this: a ship would look at the universe, find the ship in front of it, and send it a zap signal, a zap message, which would kill the ship, and the ship would then die, explode and die.
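The scheduler trick uses a standard BEAM API. Roughly what gets typed in the shell is this, capped at the machine's actual scheduler count so the snippet is safe anywhere:

```erlang
%% Reduce the number of online schedulers at run time, then restore it.
Before = erlang:system_info(schedulers_online),
N = min(4, erlang:system_info(schedulers)),
erlang:system_flag(schedulers_online, N),      % work migrates, nothing restarts
N = erlang:system_info(schedulers_online),
erlang:system_flag(schedulers_online, Before). % put it back afterwards
```

Nothing is destroyed when you lower the number: the idle schedulers are parked, and their run queues are drained onto the ones still online.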
It was all just one-way message sending. You can do other things as well; there are other versions of the ships which also communicate. But we can try another one. What would be nice is if we could make the ships fly in flocks. These are all running independently; maybe we can get them to cooperate. So we can try that: there is a flock ship, and we can set some ships to it. After a while we notice things happening in our system: some of the ships are stuck. Why are they stopping? Well, we can start a few more if we want to. They are doing synchronous communication. Each ship, when it finds a ship in front of it, sends it a message and says: tell me your position and your speed. Then it sits and waits for the reply. So now we have ships sending synchronous messages to each other, and of course when they start talking with each other, one ship will be sending a message to another ship asking for its speed, while that ship is asking someone else for the same thing. So this is a very simple example of the danger of doing synchronous message passing, a remote procedure call: you can deadlock. And of course it is extremely simple to see how we could go about fixing it: the easiest way is just not to wait. So instead of asking, you should send an asynchronous message and say: here is my position; send me yours. And we can see, looking at the load here, that the system will eventually get going again; it just needs a bit of cleaning up. So, what I'm doing now: the logic is implemented in Lua, and that was one of the purposes of doing this. I can implement the logic running on the Erlang system either in Erlang or in C or something like that; I had a Lua implementation and I wanted to test it. I was giving a talk, and I wanted to demo two things: demoing Luerl, of course, and also demoing that this is another way of writing gaming code. Classic gaming code has one central loop that runs through all the objects. It's got a bit better now, but it's still very much sequential, and this shows an alternative way of writing it.
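The deadlock and its fix, in Erlang terms (the demo's ships do this from Lua; names here are invented): the synchronous ask blocks in a receive, so two ships asking each other hang until a timeout, while the asynchronous version never waits.

```erlang
%% Sketch of the flocking bug and its fix.
-module(flock).
-export([ask_pos/2, tell_pos/2]).

%% Synchronous: send a query, then block waiting for the reply.
%% If the neighbour is doing the same to us, both sit here: deadlock.
ask_pos(Neighbour, Timeout) ->
    Neighbour ! {get_pos, self()},
    receive
        {pos, Neighbour, Pos} -> {ok, Pos}
    after Timeout ->
        timeout
    end.

%% Asynchronous fix: just push our own position and keep flying.
tell_pos(Neighbour, MyPos) ->
    Neighbour ! {pos, self(), MyPos},
    ok.
```

With `tell_pos/2` every ship broadcasts and nobody waits, so a cycle of mutual questions can never freeze the flock.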
Each of these objects, each of these things, has its own little loop, and it's only communicating with the others; there's no central loop in the system. And as I've said, I have a ship behaviour almost done now, which means you can plug other languages, Lua or Elixir, into the system. Anyone else?

How much can you do in one line? Quite a lot. I'll just introduce myself first: my name is Roger Hui. So the question, Robert's question, was: how much can you do in one line? This is what I can do. This uses what we call an operator: depending on the data you give it, it does different things. So for example, you can sort numbers, or you can sort characters. To explain it further: omega is the right argument, the thing that you want to sort. If it has one item or less, then you're done; sorting zero or one items is trivial. Otherwise, you go into this part. What it does is it selects, with what's called an operand, a filtering function, the items where the comparison with the pivot is negative, and then it recursively sorts those. In the middle part it's just selecting the items equal to the pivot, and then it recursively sorts the items where the comparison is positive. So here I'm comparing with the signum of the difference between the items and the pivot. Of course you can do the same thing in other ways. Now I'm going to show you a slightly different quicksort, so you can see it. What Q1 does is this: instead of catenating the results, it keeps the structure of the arguments it encounters. So at each level there's a triplet. The first part of the triplet is the items less than the pivot, the middle one is the items equal to the pivot, and the third is the items greater than the pivot, recursively. So if I do this again, you'll get a different structure, because it chooses the pivot at random; this time the pivot is 4. That is it for quicksort. If you want to see more, there's a paper called "A History of APL in 50 Functions".
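For readers who don't speak APL, here is the same algorithm transliterated into Erlang: random pivot, three-way split by comparison with the pivot, recurse on the outer parts, catenate. This is my sketch, not code from the talk.

```erlang
%% Quicksort in the shape of the APL one-liner: random pivot,
%% three-way partition (less / equal / greater), recurse, catenate.
-module(qs).
-export([sort/1]).

sort(L) when length(L) =< 1 -> L;   % one item or less: already sorted
sort(L) ->
    Pivot   = lists:nth(rand:uniform(length(L)), L),   % random pivot
    Less    = [X || X <- L, X < Pivot],
    Equal   = [X || X <- L, X =:= Pivot],
    Greater = [X || X <- L, X > Pivot],
    sort(Less) ++ Equal ++ sort(Greater).
```

The `Equal` part never needs recursing, which is exactly the middle element of the APL triplet; and because Erlang compares any terms, the same function sorts numbers or characters, mirroring the "plug-in comparison" point from the talk.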
When I was here last year I talked about this, and it's now done, so I'll give you a quick look. We have been running it in production for three months now. This is not specifically, directly related to our project, but it is a very small portion of what we thought we would be able to do. In our production system there are certain APIs which you want to generate ad hoc, run in a contained environment, and take down at will; you don't want to deploy code for them. There are many ways of doing it, but this is one small way, and it also demonstrates what Erlang is capable of. A lot of people who are new want to see how Erlang can actually compile code on the fly and load it into memory. You can actually store it and bring it back again, as many functions as you want; you can even tie them together. You can do a lot of introspection, all that power. Plus you can also look at the complexity of a function: you can find out the number of reductions it took while executing the function. You can do all sorts of things which I think are useful, and we would definitely want to try them first in staging and then go to production. This is a sneak peek of what we want to achieve. It is just a WebSocket interface into Erlang, and there are a couple of commands which it understands. These are the functions that are defined. If you ask something new, say a question that needs a function like corenlp which is not defined, it actually doesn't know about it. So why don't we define it, and then ask the same question again? Now it's actually querying Stanford CoreNLP and getting all the results back as JSON. It's a simple function, but you're defining it on the fly. Even if I shut down the virtual machine and start it up again, it gets the function from a database, so you can load it back up. It can even be shared across nodes; likewise you can just keep a couple of them, if you've got the context here.
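The "compile code on the fly" part rests on standard OTP machinery. The simplest form evaluates a source string directly; a production system would more likely use `compile:forms/2` plus `code:load_binary/3` to get real compiled modules, but this shows the idea.

```erlang
%% Sketch: turn a string of Erlang source into a value (or a fun) at run time.
-module(on_the_fly).
-export([eval/1]).

eval(Src) ->                                         % e.g. "1 + 2."
    {ok, Tokens, _} = erl_scan:string(Src),          % scan
    {ok, Exprs} = erl_parse:parse_exprs(Tokens),     % parse
    {value, Value, _Bs} = erl_eval:exprs(Exprs, []), % evaluate
    Value.
```

Because the input is just a string, it can come over a WebSocket, be saved to a database, and be re-evaluated after a restart, which is exactly the persistence trick described above.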
So, what if a function is calling other functions? It doesn't know how to fetch a page. Let's try fetching a website or something: let's define a function which makes another function call, and try that again. This time it's reporting that it doesn't understand what the function it calls might be when it gets invoked. At the end I'll show you all of this in screenshots. Now it understands, and it's putting HTML within it. Since we're running under time: you can call as many functions as you want. Basically it's the concept of defining functions on the fly, and you can actually delete the functions too, whatever you like, because it's all powered by Erlang; there's hardly anything extra to do for this. The biggest advantage of this approach would be to run certain functions which we need on a daily basis. We have a lot of data stores, and we want to query a number of them in unique ways, for debugging, recon, or whatever it is, and we don't want to pollute the platform that was compiled in. So you want to define things on the fly, on production, in a safe manner; that's the kind of workflow. It was actually very fast; it just ran through what was captured. Basically, because the evaluator is able to identify all the constructs, it knows what is known and what is unknown, so it is able to prompt for what it doesn't understand. And once you define something, these are compiled: they are loaded into memory, compiled in memory, so you get the functions, like anonymous functions, eventually stored in a data store, like a key-value store. Good that the internet is loading up: it's actually going over the internet, saving it, so you can potentially have another application which is also getting all the learning. So technically, if I were running this in production, my colleagues would have got that function as well; the learnings are basically shared. So if you want a function to be available in an application, you have to define it in a central store.
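The "in a safe manner" part implies checking calls before running them. A minimal sketch of such a gate, using `erlang:function_exported/3` plus a whitelist; the whitelist contents are mine, purely illustrative, and the talk's real interception code is surely more elaborate.

```erlang
%% Sketch: only run user-supplied calls that are whitelisted and
%% actually exist at run time.
-module(guarded).
-export([call/3]).

-define(ALLOWED, [{lists, reverse, 1}, {lists, sort, 1}]).

call(M, F, Args) ->
    A = length(Args),
    Known = erlang:function_exported(M, F, A),   % does it exist right now?
    case lists:member({M, F, A}, ?ALLOWED) andalso Known of
        true  -> {ok, apply(M, F, Args)};
        false -> {error, not_authorized}
    end.
</imports>
```

Anything not on the list, say `os:cmd/1`, is refused before it ever runs, which matters doubly on a distributed node where one allowed call can reach the whole cluster.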
For a cluster of machines, you can then get it from any of those applications; it's kind of shared. And what's shared here is the logic; it's not state being passed around. We get a lot of requirements all the time, but we don't want to implement them in production, because they are just not right yet. So we need a safe environment, and with Erlang we can actually have one. If I show you the debug screen, you can look at the functions that are being invoked. I don't know if you can see this; you can see a local function call if I highlight it. Is that clear enough? Erlang gives you very nice functionality for looking at which function calls are being invoked and checking at run time whether they exist or not. And there is custom code written to intercept that, so you can either allow those functions to be executed, or deny them and say: no, you have taken too much time, or you are not allowed or authorized to call certain functions. And with distribution you have to guard this, because if you can run one function, you can do anything.

Hello everyone, I am Pani Mahesh, and I want to give a quick talk on repeatable builds in Erlang. The idea came today when I was listening to one of today's talks, where a few things were mentioned about rebar 3 being better than rebar 2, in that you can get somewhat repeatable builds, and that reminded me of a problem I was solving very recently, less than a week ago. So the title of my talk is "Repeatable builds in Erlang land for the paranoid". Actually, no, it is "Repeatable builds for the really paranoid". Or not really: it's "Repeatable builds for the really, really paranoid". We have a problem in Erlang land: when we build a release, we don't get the same exact thing again, mostly because we have a few problems. Before I go into those, a quick word about myself: I work as a developer.
I have also ended up as a system administrator, and everything in between. Mostly these days I am working on Erlang, building my own API gateway. The problem is this: generate a release, given some Erlang code base, with some guarantee that you will be getting the exact same release at any point of time in the future when you repeat the build. And by the exact same release I mean functionally identical. It's a very simple problem, and people coming from other languages might be laughing at it. But how many of you have actually deployed Erlang, and what were the build tools you were using to build your project before releasing? Previously there were erlc and Emakefiles; then there was erlang.mk; then rebar 2 and rebar 3. These days Mix can also compile Erlang projects. We have a lot of tools at our disposal. Barring some other esoteric or rarely used tools, I am going to say that every one of them has an issue, and mostly it's that Erlang has a very serious dependency problem. We still use source dependencies, and source dependencies keep referring to further source dependencies, and we refer to branches. Now imagine my surprise when, just a couple of weeks ago, I had to deploy a project, and one of my dependencies somewhere had had an update that I had not paid attention to in the last few months, and everything broke. Things were working just fine, and they broke exactly when I was demoing in front of a client. I got really frustrated and set out to solve this. So, I will be referring to some of these; I hope they are visible. Let's look at how erlang.mk fetches and builds its dependencies. It will first go to the first dependency and recursively try to build it. If it has a Makefile, it calls the Makefile; the Makefile might be calling rebar 2 or 3, or else erlang.mk typically patches it. And if it's an erlang.mk project, it gets another deps folder under it, and the deps get fetched there and combined. This is similar to how Node.js works: we have node_modules, where each module has its own node_modules, and so on.
But there is a very big difference between these. In Node.js you can have the same module with different versions loaded into the same Node.js process. It's not the same in Erlang land: you have global module names, so you can have only one version of the code (barring hot code upgrades; I'm not going there). But when building here, you can potentially have a single build folder, a single project folder, where multiple versions of the same application are being brought in. When you make a release, do you have a guarantee of exactly which version is going to get used? And why, again, are we pulling in different versions when building, when we cannot have them used simultaneously when running? That's the problem with Erlang, and it's clearly spelled out in the documentation. Now, rebar 2 was kind of good. It had this beautiful option: if you manually get every single dependency and add it to your system's ERL_LIBS, the environment variable, by setting the path, then while using rebar 2 to compile, with that environment variable set, it will not try to fetch its own dependencies; it will use the dependencies from ERL_LIBS on the system. Which means that, by doing it one by one very carefully, you can have repeatable builds. But as we saw in the morning, it has its problems, and people are moving quickly to rebar 3. Unfortunately, rebar 3 does not provide any such option: rebar 3 wants to manage its dependencies by itself, and it does not give us anything seriously helpful here. And in explaining why rebar 3 does what it does, we have a very beautiful statement: it considers versions in dependency management to be informational. Semantic versioning is something cool, and something very recent; it was not there when Erlang came around. Erlang is ancient. It's not old; it's positively ancient. Back then, people relied on each other to keep their code compatible with each other. But no longer: everybody develops
their own project; everybody moves fast and breaks things, eventually. So semantic versioning is out the window. People use source dependencies, and if you have a problem with some dependency deep inside, you'll have to start forking it, and everything that depends on it. Not a viable option. And not everybody subscribes to the same versioning scheme: Git tagging is not prevalent, and even then, there is no guarantee that somebody does not go back and update their Git tags or whatever. We have Hex these days. Hex is kind of nice: Hex at least prevents you from publishing to the same version with different code; you'll have to increment the version or something. But again, people do not pin to specific Hex versions; people use semantic version ranges, and there is no guarantee that you get the same thing if the developer misses some update. So even with Hex, we have a problem that has not been solved. Automatic versioning does not work. But we now have rebar.lock, and Mix introduced mix.lock. These are very recent, and they are a step in the right direction, in that they store some references that they will try to repeat when building. But they use the same tools, and if any of your dependencies internally are using erlang.mk or so on, it only kind of works, through plugins and bridges. I recently had an issue where BEAM files compiled using one version of Erlang were not usable by a Dialyzer running on a different version of Erlang. It's an unholy mess. So, on to the recipe. I am using something called Nix. For those people who haven't heard of it: it's a fairly recent one, and what it gives you is reproducible builds. It has its own language, in which you define expressions that define a derivation, the logic of how to build something. The interesting part is that you can lock down the entire closure. So I will skip over to the project that I have done recently; I could not load the exact Nix file because of an issue.
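For contrast, the strictest you can be with stock tooling is to pin everything by hand in rebar.config: exact Hex versions and exact git commits, never branches or ranges. A sketch (the package names and version are examples; the ref is a placeholder for a full commit sha):

```erlang
{deps, [
    %% exact Hex version, not a range like "~> 2.0"
    {cowboy, "2.9.0"},
    %% exact commit, not {branch, "master"} or a tag that can move
    {jsx, {git, "https://github.com/talentdeficit/jsx.git",
           {ref, "<full-commit-sha>"}}}
]}.
```

Even this only covers the dependencies themselves; the speaker's complaint is that nothing here pins the compiler or the toolchain, which is where the Nix approach comes in.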
What I am doing here is that I have taken every single dependency, locked it down by its content hash, and used some helpers that are defined within the Nix ecosystem to wrap up building with erlang.mk, rebar 3, or Mix, to ensure that I get the same repeatable build. This is particularly interesting because even if I repeat the same thing ten years down the line, it is guaranteed that I get the same thing, because I am locking not just the versions of the dependencies: I am locking the Erlang compiler. I will go one step further: I will lock the GCC that was used to build the Erlang compiler, and the hash of the source code that was used to build Erlang. And one step further still: it's turtles all the way down, to a bootstrapping base. I have a complete guarantee that, even a decade down the line, I will be able to repeat everything, by building GCC first, then Erlang, then Elixir, then every dependency. And I don't care if GitHub is down or hex.pm is down, as long as I have these contents cached, because the contents are verified by their content hash; if I have them some way, I can use them. Every one of these tools has been patched to disallow any network calls. It's a simple patch made by the Nix ecosystem: if you try using rebar 3 to build something and it makes an HTTP call, the entire build fails, saying that you are violating the policies. It's not airtight, because the response of the network is unreliable, but this is just something that I wanted to share.