That's kind of scary. All right, is that too loud in the back, or okay? If it works for you all, it works for me. All right, let's kick on through. We'll go ahead and start now, because we've got "Elixir by the Bellyful." There is a whole lot of content packed into this talk, so I'll do my best to make it through. I've got my stretchy pants on; I hope you all do too. You've just finished up your meal, and so here we go. So here's me on Twitter, there's #FnConf17, and if any of you are new to Elixir, this is the way you tell the Elixir community what you're up to: you say #myelixirstatus. That's the way you say hi. So let's begin. When we come to a new language: out of the room, how many of you are Elixir developers already? Okay. How many Erlangers in here? Okay, a lot of Erlangers, that's awesome, and some people that are brand new to it. All right, perfect. Okay. So any time we come to a new stack, there are all sorts of questions. There are things we know we don't know about, and then there are other things we don't even know that we don't know. In this whole process we're out there scouting, we're out there looking, and we're looking for where the dangers are, where the sea monsters are, where the pitfalls are, and so on. But we're also looking for where the fun is. There may be things out there that are fun, that are good, that we just don't know about yet. So in this talk we're going to try to hit as many of those things as possible. We've got this kind of list here; these are the big, high-level things that I've been asked about specifically. I'm curious: is there something on here you don't know a whole lot about? Show of hands, is there something on here that maybe you haven't bumped into yet? Okay.
All right, good. We'll jump on in then. To begin, we'll start with the history of how we got to this whole Elixir thing. We have José Valim, who was a core member of the Ruby on Rails team. He was facing serious challenges around concurrency, around scalability, around performance in Ruby. But that's not specific to Ruby; there are a lot of people having those kinds of problems in other stacks too. José was in this interesting position, being a core member of that team, and he's always curious, always out there scouting. In 2011 he was reading Seven Languages in Seven Weeks by Bruce Tate. Have any of you read that book? Okay, great. When José grabbed the book, it lit him on fire when he got to the Erlang chapter. He said: okay, every problem that I'm having day-to-day, it looks like these guys are solving those problems. So that's good. Question: why is it that he didn't just become an Erlang developer? Well, he really did become an Erlang developer. He's probably written more Erlang code than any of us, except for maybe Francesco or Robert, who are in here. Outside of that, he's written a devil of a lot of Erlang code. So why is it that he didn't just move over and become an Erlang developer?
Some people might think syntax, and that's really not the right answer. That isn't why he did this. It had to do with these other bits, and we'll be talking about a lot of those, but it comes down to the toolchain and the community. Rather than jumping ship and leaving that community, he thought it would make a lot more sense to bring that community along. The other thing: why didn't he just steal the good parts of Erlang, the things he was missing, and bring those ideas back over to Ruby? We'll see why that didn't happen, and why in 2014, after three years of work, it's v1.0 and he brings this whole community onto the Erlang VM, into the ecosystem. Francesco is taking a picture. Alrighty. So the big question comes down to: why Erlang? We can start off with that one here, on this old crusty phone that some people over here recognize. At Ericsson, the Swedish telecom giant, they had problems back in the '80s that most companies are just now having. They had to deal with massive concurrency, with high availability, all these sort of web-scale things. We have all sorts of buzzwords now for the things they were dealing with back then. So we move on to what happened at Ericsson. You've got this group here: Joe, Robert, and Mike. They were in the Computer Science Lab, and they were focused on these problems of telecoms, the problems that Ericsson was dealing with. What do you do to have high availability? What do you do to make sure that when you've got a hundred thousand phone calls, you don't do something that just drops a hundred thousand phone calls? Because if you remember, before cell phones came along, phones were reliable. People expected them to work; the standards have changed since. But back then they were expected to work, and if they didn't, it was bad. So here are these guys, solving these general problems in telecom.
So the first one: high availability. Well, Ericsson had money, and they bought carrier-grade equipment. Carrier grade is this idea of five nines. They'd spent the money on that, but still: even if you have perfect hardware, things go wrong. Sometimes the perfect hardware fails, sometimes the environment gets weird, sometimes your code messes up. You have all these problems. So if you want high availability, you really need to be able to tolerate faults. That was the number one goal of what these guys were working on: how do we build fault-tolerant software? They had models they were using, the research that had come before, and all this, but they knew it was essential, when you had a fault, to be able to recover from it. And if you have a hundred thousand phone calls you're dealing with, you have this problem of concurrency, and you need to be able to have things fail in isolation. Because if you have one big process holding all hundred thousand phone calls, and you've done something jacked up in your code, or there's some environmental weirdness, and it leaks, or you try/catch in a weird spot, the whole thing falls down. You've lost all your calls. Terrible, terrible. So you need this isolation between things, to have them fail independently, and also, if one thing fails, to have something else bring it back before anyone notices. That's really nice. And then you've got a big code base, and you want a really nice concurrency model. Who here thinks threading and thinks: oh, this is going to be easy?
You know, no one raises their hand on that, because it's not going to be easy. But here the goal was to make sure that concurrent programming was easy. In Erlang and Elixir we code sequentially, top to bottom; everything is just me, and I don't have to think about the outside except through message passing. So: concurrency, win. Okay, so you've got your box. It's running, it's perfect, everything's great, the code's great, you're on good hardware, you've got concurrency down, you've got this fault-tolerance thing built up. Except you don't really have your fault-tolerance thing built up, because what happens when the server catches on fire? All your calls are down again. So we have to have distribution on top of that. This came a little bit later, but the primitives that would allow distribution were there from the beginning. You have multiple servers, one falls down, you keep on churning. So that's the space we're in. And on the maiden voyage of Erlang going out into the world, this is where we replace our three nines with a whole lot of nines, because the AXD301 switch, on its maiden voyage out to British Telecom, the lore and the legend is that it hit nine nines of uptime out there. About 30 milliseconds of downtime a year is what that number means, which is absolutely stunning, and it wasn't expected. This is used the whole world over, and after it was open-sourced, people that were building things that also needed to be reliable jumped onto the stack, because there weren't alternatives, and there still are not alternatives. So this is an interesting thing. Why is it? How does it even happen? Why is this nine-nines thing going on? It comes down to this, the most important thing: the Erlang VM. And this is why José didn't rip things off and just bring them over to the Ruby community. So, the Erlang VM. What is it?
If we look here, we've got, for context, over here the operating system that it might run on: Windows, Linux, Mac. And we've got this stack. We have the Erlang Runtime System, with the Erlang VM in the bits there, and up on top of that we have OTP, and then Elixir, Erlang, and LFE (Lisp Flavored Erlang). Notice something funny about this. A lot of times when you see a stack like this, you see the runtime sitting up on top of the operating system. This thing is way down in the belly of this box on the bottom here, and that's because Erlang is an operating system. That's an interesting thing for us to get our heads around, and we're going to be talking about it as we go forward. So you've got this operating system at the bottom, and on top of it you have these patterns that get cooked into OTP, which the gang was working on from the '80s onward: patterns for building reliable software on top of the primitives exposed by the Erlang VM. And up on top we have these languages that all compile down to the BEAM, the binary format. This is part of why companies don't come along and just beat Erlang at being Erlang: this VM was built for the things it does really well, and it's had a lot of time by a lot of really smart people put into it. This is why José didn't just, you know, Ruby it up. And there's an interesting thing over here; look at the "none." I've got a pointer here, I can do this, I guess. Yeah, there's "none." That's pretty neat. With "none," we can have bare-metal Erlang, and so here we have a slide showing Zerg, which is Erlang on Xen. What this thing is: they have taken a version of Erlang and targeted it bare-metal against the Xen hypervisor. What we're seeing in this demo screen is a request coming in. This is from my browser; I did a screen capture of this. I send a request to this site, and they spin up a brand-new instance on EC2.
They booted into Erlang, bare-metal Erlang on the Xen hypervisor. They load up a web server, they take my request, they service my request, they do all the Django-template stuff in there, and then they send the results back with the timings and so on. This all happens in 0.3 seconds, and then the machine is gone and dead and shut down. All of that in 0.3 seconds. Absolutely stunning. You'll also see these other interesting things at the Erlang User Conference; I believe the fellows behind GRiSP have their talks recorded there, so you can go check out the bare-metal bits. And if you're into this whole thing, go off and check out the Nerves project. Super cool. Okay, so: operating system. We're talking about this world of the BEAM being an OS. What does it even mean to be an OS? You have to think about process management, you have to think about interrupts, memory management, file I/O, network I/O, and so on. And what is it your code has to think about? What is it that C, C#, JavaScript, and so on think about? Their job is to eat brains: brains, brains, brains. Their whole job is to eat as much core as they can. They've got their zombie hands and they're pawing at the core, and in the process the OS knows it cannot trust your code. It hates your code. It knows that you're out to get it, that you can't keep your zombie grabby hands off the core. So it can't trust you, and it's doing all this locking down of memory every time it grabs the talking stick away from you. It's a hostile relationship. As we move over to the Erlang VM, we have something that's a lot healthier.
We've got a cooperative system here, and this cooperative system is possible not because it knows that you're all Erlang and Elixir developers and you're good people (you are), but that's not why it trusts you. It trusts you because it knows you can't do anything wrong. It set the constraints in the language, and the language is completely cool with that, because it's getting something good out of the bargain: all this stuff we were talking about, the fault tolerance, the concurrency, and the distribution. Nothing can cheat; there's no sort of halfway of doing things here. As a result, we have massive improvements over what you would get with, say, thread context switching, where your C code wants to grab core, grab core, grab core, the OS is saying smack, smack, smack, and you burn tens of thousands of CPU operations every time you have a thread context switch. We don't have that here, and we'll see why in a bit. So our first bit of Elixir is going to be inside the shell: Interactive Elixir. This is the most powerful tool in the Elixir toolbox, IEx. We come in, we see Erlang/OTP, we see Interactive Elixir, and we're going to type in self. We get a process ID back. This is the process ID of our REPL, of our interactive shell. It's processes all the way down. Everything in Elixir: you're not sometimes in a process, you're always in a process, and you're always talking to other processes. You don't flip into some special actor mode. Okay, so, concurrency. There's an interesting thing here. It's sometimes hard to get your head around what the word means, because it gets misused a lot, but concurrency is not equal to parallelism.
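As a quick aside, that IEx moment can be reproduced as a plain script too; this is just a sketch, and the actual pid numbers will differ per session:

```elixir
# The calling code always lives inside a process; self/0 returns its pid.
pid = self()

IO.inspect(pid)                  # e.g. #PID<0.105.0> in a fresh IEx session
IO.inspect(is_pid(pid))          # true: even the REPL is just a process
IO.inspect(Process.alive?(pid))  # true
```

There is no "actor mode" to opt into; any code you run is already owned by some process.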
These are different concepts. Concurrency is about the structure of lots of things at once, and parallelism is about the execution of many things at once. That's an interesting thing to keep in mind in the context of when Erlang was created. The Erlang VM was built in '86, when they didn't have to think about multicore; it just wasn't a problem. You were happy to have a single core, and so that wasn't why the concurrency was there. We've already talked a bit about the why. But there's this interesting line here: we go from '86 to '06, twenty years. In '06 is when they said: hey, let's flip on multicore. They updated the Erlang VM, hooked up the schedulers, and said: why don't we run a scheduler on each core? And an interesting thing happened: good Erlang code that had been written before could now run on two cores, and it ran twice as fast; on four cores, four times as fast. In 2011, I think, there was a study where up to 40 cores they saw linear scaling, which is just mind-boggling. All this stuff was just built in from '86. It's a moonshot. It's absolutely stunning. It just makes you want to hug the guys. Okay, so now we're going to look at the idea of the actor model, which you may have heard of from other stacks, but here: processes. The props of every actor, of every process; these are the things they all come with. Each Erlang VM process, each Elixir process, has its own dedicated, isolated memory. You get one kilobyte, or on 64-bit, two kilobytes, allocated to start off with, and it can grow following a Fibonacci series as you need more and more storage. So you take more messages.
You're doing things, you're holding more state, but this memory is yours, and no one can go in and wiggle your state and change it. No one can even reach in there and read your state. That's really powerful. And there's a dedicated stack and heap per process, which ends up being a pretty nice thing if you're a garbage collector. You've got this functional programming language where things are immutable: you set a value, you can't change it. You have immutable values, and no one sticking their paws in your memory. So as a GC, it's a pretty good place to be a garbage collector. Did they just do this as a trick, to see if they could do it, because they're awesome? Well, no, they did it really for that fault-tolerance thing we were looking at earlier. Because look at what happens on the JVM: you have a server humming along, servicing requests, just busy, busy, busy, and everything's great. Memory is starting to build up a little bit, but you're all right, and then you need to do GCs, but you keep putting that off because you're still pretty busy and you need to service these requests. And then you have a stop-the-world GC, and then things aren't so great; things queue, and the whole world gets to be a nasty place. Here instead we have these dedicated garbage collectors per process. Think about these tiny little deterministic GCs happening on each process. That's pretty sweet. No stop-the-world GCs there.
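That isolation story can be seen in a few lines. In this sketch, a spawned process crashes hard, and the crash is completely contained; nothing it did could ever have reached our memory in the first place:

```elixir
# Each process owns its heap; a crash over there cannot corrupt state over here.
us = self()
our_state = %{active_calls: 100_000}

# This process dies alone. The VM prints an error report, and that's all.
spawn(fn -> raise "environmental weirdness" end)

Process.sleep(100)
IO.inspect(Process.alive?(us))      # true: we never even noticed
IO.inspect(our_state.active_calls)  # 100000: our state was never reachable
```

With `spawn_link` instead of `spawn` the crash would have propagated to us, which is exactly the link/monitor machinery coming up next.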
Okay, we move on to this other prop that they all have, and this is the mailbox. It's a good thing they have a mailbox, because it's the only way a process can talk to the outside world. You can send a message to another process, and that other process can wait; it doesn't have to do anything with the message. It just shows up in its mailbox, it's there, and whenever they feel like it, they can fall into a receive block and catch that message. And it's a good thing that there's this message passing, because it's the only way you can talk to the outside world, and I really mean the only way. You might think: well, sure, I can talk to the file system, I can call the File module and do this. You're doing message passing there too, because what you're doing when you do file I/O is saying: File module, I'm going to talk to your client API, and I'm going to ask the client API to go read a file or write a file. That then talks to a gen server, which on the back is another process, and you're really just sending messages to that other process. An interesting thing: often in other FP communities there's a lot of talk around side effects, and you don't hear that mentioned too much on the Erlang side or the Elixir side, but we have a really good story there, and it's this: if you're doing any sort of I/O, you are doing something in a similar vein to what happens with the IO monad in the Haskell community, those things that are known to provide safety. What we do instead on the Erlang VM is we have port drivers. Any time we're talking to the outside world, that's all happening in C code. It's a port driver; that C code looks to the rest of us as if it were just an Erlang process out there, but it's where we're poking state on the network, wiggling things on the network, reading things in the file system, wiggling things on the console.
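Before moving on: the send-then-receive dance just described can be sketched in a few lines. The message sits in the child's mailbox until the child decides to fall into its receive block, and the reply comes back the same way:

```elixir
# Mailbox sketch: send/2 drops a message into the target's mailbox;
# the target picks it up only when it enters a receive block.
parent = self()

child =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)  # reply to whoever pinged us
    end
  end)

send(child, {:ping, parent})

reply =
  receive do
    :pong -> :pong
  after
    1_000 -> :timeout  # don't block the demo forever
  end

IO.inspect(reply)  # :pong
```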
That's where our I/O happens. So much for those side effects. Next, links and monitors. This is another bit of gear that's on each process. We can set up a link, which is a bidirectional death pact. This is: if you die, I'm going with you; I can't make it alone. That's the relationship you have with this other process. The two of them are ganged together; they're Thelma and Louise, going off the bluff together. Another way of doing this would be through monitors. This is unidirectional. It's more like reading the obituaries. It's not that you don't care; it's: I want to know that you died, but I'm just not going with you. That's what monitors are about. We'll see both of these primitives used as we build up on top of this. All right, process scheduling on the Erlang VM. We have a single CPU core, a single scheduler, and three processes up here. The way this works, rather than the thread context switching and the slapping of hands and all that, is that instead we have a sane model, this trusted model, where the scheduler is going to give each one of these processes 2,000 whacks at the core. These whacks we're going to call reductions, and each one of them you can roughly think of as a function call. So we go through here.
This process gets its 2,000 immediately; we move the talking stick to the second process, churn churn churn, then the third process. And there's almost zero cost between these, because it's not like anything had to be shut down or pinned; it's just: I'm processing this, I'm moving out of this, and I'm back to the first. Here's a visualization that shows the same thing, but this is much prettier, and it's actually a PowerPoint animation, which is just crazy. And if we have two cores, we get two schedulers, and we have that same sort of thing happening, and so on. Another idea that builds on links and monitors is supervision. We talked earlier about what happens when something goes wrong: if it dies, we want to be able to bring it back. This is kind of how that works. We have a worker; something weird goes on in the environment; it crashes; the supervisor brings it back. And assuming it was just a Heisenbug, just something in the environment, everything will churn back: that process is restarted to a known-good state. On top of that idea we build up supervision trees and so on, which we'll talk about in just a bit. Okay, say we have three processes here, one core, one scheduler. This process is getting its turn, this one's getting its turn, and then this one falls into a receive block and says: I'm done with what I was doing, I'm just waiting for a message. Anyone have a message for me? No, there's no message in my mailbox. So at this point he says: okay, I'm going to block now, forever and ever, waiting for a message. So did we just ruin the whole show? Did this whole Erlang thing just turn out to be not such a great idea after all, because this guy's blocking? Is it killing everything?
No. This is not Node.js. This is the Erlang VM, and we're Elixir, we're alchemists. What happens instead is this: the process falls into its receive block, and the Erlang VM, our operating system, knows that this process doesn't have a message, so it takes it out of the scheduler rotation, moves it over to the side, and lets it go to sleep, and we move on to servicing this other one. At the point that the Erlang VM knows it has a message, it'll bring it back into the scheduler rotation. So think about this: you can have hundreds of thousands of processes running, and your core might be sitting at zero or one percent, because they're all in a receive block, just waiting for something to be sent to them. This is a common thing, and it's such a strange thing to see. You've got 60,000 processes going, and the cores are just sleepy, and it's beautiful. Okay, let us move on and talk about how we get processes onto different cores, into different schedulers. We're talking about this game of balancing and compaction here. Okay, so we're going along. We're busy.
We're busy; we're not that busy. So what we're going to do is work-stealing: let this guy go be sleepy and save power. Or we have the opposite problem over here: this one's getting a little too much loaded up on him, and so this other scheduler says: I don't have anything to do, and he's going to work-steal and migrate these over. This happens just as part of the whole rotation; it's a deterministic pattern that just happens, and it's all done by the Erlang VM for us. We don't have to think about this, but it's really cool to know it's all happening for us, and how some of this stuff works. Maybe you came in as an alchemist, you're playing around, you came from the Ruby community or somewhere like that, but you didn't understand this; maybe this gives you a firmer footing now, and you feel more comfortable on the Erlang VM. From these things we get massive concurrency, preemptive multitasking, soft real-time behavior, and low latency over raw throughput. That last one is really interesting, because I don't know of other languages that value low latency over raw throughput. We're not so concerned about how fast this one process over here can calculate pi; we want to make sure everyone gets an even, deterministic grab at the stick, and no one gets to hog. Okay. There's a line from Mike Williams where he says the performance of a concurrent language is predicated on three things: context-switching time, message-passing time, and the time to create a process. So here's a little demo that basically proves Mike Williams's point about fast concurrency, that Elixir has it. Okay. We have this thing that's going to come in and say "live a full life," and to live a full life we're going to take some number of generations that are going to come after you, and we're going to take the original runner's process ID. We're going to take that and we're going to say: okay.
I'm going to spawn a brand-new process. It's the same function I'm in, live_a_full_life, but I'm going to pass it how many more, minus one, and I'm going to thread through that initial runner pid. So we've spawned that brand-new process, we have created a child, and then we're going to send the child an :ok message. Then we're going to go down and fall into a receive block, waiting forever and ever until someone sends us an :ok message, which is of course going to be our parent, right? Because we send our own child the :ok message. So this stack shows the message-passing time; it shows all the bits from what Mike was talking about, the context-switching time. Let's see what this looks like. We do this with a million, and on this laptop I timed it at 1.1 seconds, or 1.2 seconds rounded. Very, very fast. So if you thought it might be a little heavy to bring up another process for something: it's not too heavy. You can go ahead and just bring up another process. If you can bring up a million of them that go through this whole chain, it's good. All right, OTP. We're going to talk about safety and fault tolerance here. Gen servers are where you do your work as an alchemist, and gen servers, a lot of people come to them and say: I use this thing, but I don't completely understand it. So we're going to take a little bit of a dive to try to remedy that. We're looking here at a gen server we're going to call ChunkoWorker, and really it's just a counter. We're going to bump a counter. We have our one client API function on here, bump_level; it takes a pid, and you tell it how much to bump by, and that's all it does. This is maybe still a little thick to read if you're new to Elixir, so let's get another view of this for just a second. Let's see what we actually have here. We have code.
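Going back a step, the live-a-full-life spawn chain just described might look roughly like this. This is my reconstruction from the talk, not the speaker's exact slide code, and I've used 100,000 generations here rather than the million from the demo so it runs quickly anywhere:

```elixir
defmodule FullLife do
  # Each generation spawns a child, sends it :ok, then waits for the :ok
  # its own parent sent; the last generation reports back to the runner.
  # This exercises process creation, message passing, and context switching.
  def live_a_full_life(0, runner_pid), do: send(runner_pid, :done)

  def live_a_full_life(generations_left, runner_pid) do
    child = spawn(fn -> live_a_full_life(generations_left - 1, runner_pid) end)
    send(child, :ok)

    receive do
      :ok -> :ok  # the :ok our parent sent us
    end
  end
end

runner = self()

{micros, :done} =
  :timer.tc(fn ->
    first = spawn(fn -> FullLife.live_a_full_life(100_000, runner) end)
    send(first, :ok)

    receive do
      :done -> :done
    end
  end)

IO.puts("100k-process chain in #{micros / 1_000_000} s")
```

The talk's figure of roughly 1.2 seconds for a million processes is the speaker's measurement on his laptop; your numbers will vary.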
We have this module, the ChunkoWorker, but we also have some other code: the code that's in IEx. We're going to call that our client in this case, just the shell. And then back in the back we have GenServer, and behind it gen_server and gen. The ones at the bottom are the ones that Robert and Joe and Mike and Francesco built, and GenServer is the wrapper that Elixir puts on top to take away some of the pain. Those guys weren't as focused on dev joy; they were focused on servers that ran forever. What José brought in was the dev-joy side of things, and so we have GenServer here. So let's look at this module, stacked on the left. The left is our code, and the thing on the right is our process. These are different concepts. You can have a single module and spawn a million processes off that one module. You can also have dozens and dozens of modules all running in the space of one process. These are separate concerns, separate ideas entirely. So let's map through here. We're in our module; we're going to start our ChunkoWorker. That start is going to call start_link on GenServer; our start_link then goes back to the callback function inside our ChunkoWorker. And you see over here we have two processes: this one over here, the initial one, and when we called over to GenServer.start_link, we ended up with this other one here. As we chunk on through, our init sets the initial state for the thing, and then we return back to the caller. So with that view of the messaging that was going on, now let's look at it with the code, and let's look at it a little bit bigger.
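Put together as real code, that counter gen server might look roughly like this. This is my reconstruction from the talk's description; the names ChunkoWorker and bump_level are as I heard them, not guaranteed verbatim from the slides:

```elixir
defmodule ChunkoWorker do
  use GenServer

  ## Client API: these functions run in the caller's process.
  def start_link(initial_level \\ 0) do
    GenServer.start_link(__MODULE__, initial_level)
  end

  # Ask the worker at `pid` to bump its counter by `amount`.
  def bump_level(pid, amount) do
    GenServer.call(pid, {:bump, amount})
  end

  ## Server callbacks: these run in the worker's own process.
  @impl true
  def init(initial_level), do: {:ok, initial_level}

  @impl true
  def handle_call({:bump, amount}, _from, level) do
    new_level = level + amount
    {:reply, new_level, new_level}  # reply to the caller, loop with new state
  end
end

{:ok, pid} = ChunkoWorker.start_link()
IO.inspect(ChunkoWorker.bump_level(pid, 5))  # 5
IO.inspect(ChunkoWorker.bump_level(pid, 3))  # 8
```

Note the split the talk keeps pointing at: bump_level executes in the client process, while handle_call executes in the server process; GenServer.call is the message passing between the two.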
Okay, so we have our bump_level, our API, and we've got start_link, so our gen server is just sitting here in the server loop that's forever and ever going to tail-call itself. It's a loop: it does some things and it calls back into itself, and that's how all the servers work. Here we fall into our receive block. We're just waiting for someone to do something. Our client comes along and says: GenServer.call, bump by an amount. We receive that message, and GenServer dispatches it back to your callback, where you do a handle_call. You do your awesome business logic of bumping the amount up, and at the end of it you return your answer back to the caller, and you update with your new state. So that comes back: we send our reply, GenServer sends the reply back to the caller, updates the state, and we fall back into our loop again. So that was two or three angles, the butcher's view, of a gen server. Other things in OTP: we have applications, and this is where we bring up a group of things together that need to be started together and stopped together; their life cycle is shared. We can have many applications that orchestrate back and forth; this application can depend on this one, and so on. This is our unit of things coming up and down. Our applications will generally have a top-level supervisor that makes sure everything down below gets watched and brought back if it needs to be brought back. We might have a supervisor at this top level that's supervising a gen server; maybe it's also supervising another supervisor, so if that supervisor crashes, it'll be able to bring it back. And that supervisor might be watching a whole series of gen servers over here, and we can have different strategies on how these things live. It might be that if one of these crashes, we kill all of its siblings; all of them go down. Or maybe we just
bring that one back, and so there are different strategies you can hook into your supervisors. So: in Elixir we code the happy path. We don't code all the edges; we code the happy path. The same concept, with less manager-friendly marketing, was phrased "let it crash" in the Erlang world; the Elixir group usually says "code the happy path." It's the same exact idea. This is really fun, because this is why it's so easy: we're coding sequentially, and all this concurrency is nice and easy because we're just going top to bottom. Another idea that's taken seriously on the Erlang VM is "no masters." Over here on the left, we look at what happens if we have two components in series that are both at three nines of reliability. When we put them in series, they actually become less reliable. It's funny, if you go back to the Microsoft DNA architecture, that was kind of what they prescribed: we have this thing that does this job, and when you're done with this job, you pass it off to some other tier, and it does its job, and so on down. Well, at that point, if any of the three layers fails, you're fully down. Really crappy advice. What we have over here instead is generally a series of peers, and when we do that, we get much better numbers out of it.
We take the same reliability components, we put them in parallel, and we basically get to double our nines. So 17.5 hours of downtime a year versus 31 seconds is quite a difference there.

Okay, the language. So we've got this language, a super accessible, productive language. On the accessible part, let's look real quickly at this. If you're new, it really is worthwhile going to the website, going to the guides, and going from chapter one to twenty-two, and you'll actually know the language; you'll understand all of the parts of it. And this is unusual: with most languages you cannot go to their website and learn the language. It's ridiculous how good this is; they've done such a great job.

And the meetups: go there. Who here is in the Bangalore meetup for Elixir? All right, there are some in the back. Yay, awesome.

And the community is really warm, so you'll see this sort of Ruby-hug kind of thing that comes into the Erlang VM, which is really kind of fun. You do a pull request, and you'll very likely see something from José where you get a merge,
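The series-versus-parallel arithmetic behind those numbers is easy to check. A quick back-of-the-envelope sketch, assuming "three nines" means 99.9% availability:

```elixir
# Two components, each 99.9% available.
seconds_per_year = 365 * 24 * 60 * 60
a = 0.999

# In series, both must be up, so availabilities multiply:
series = a * a
# In parallel (peers), both must be down to fail,
# so unavailabilities multiply:
parallel = 1 - (1 - a) * (1 - a)

series_downtime = (1 - series) * seconds_per_year / 3600
parallel_downtime = (1 - parallel) * seconds_per_year

IO.puts("series:   ~#{Float.round(series_downtime, 1)} hours down per year")
IO.puts("parallel: ~#{Float.round(parallel_downtime, 1)} seconds down per year")
```

That works out to roughly 17.5 hours a year for the series arrangement and about 31.5 seconds for the peers, matching the figures in the talk.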
a thank-you, and a bunch of multicolored hearts. It's just hilarious. It's just such a great, warm community, and he's a great steward of the language.

So inside of the language, again, this ecosystem is about being accessible, about being productive. We have Hex, all these packages out there, and the quality of them is really good. One of the things about Hex packages, too, is that they don't just disappear on you; your five-line left-pad thing doesn't just go away on you.

So we have Plug here, and we look at the docs over on the left. Just the quality of this: you have all these examples of the usage and so on. Really, really nice. So it's approachable and productive, with modern tooling, and modern tooling now means the shell, the terminal. Inside of IEx, if we type Enum.c and hit tab, we get an autocomplete list of all the things on Enum that start with c. And if we ask for help, we see it inline, right in the middle of IEx, showing us Enum.count and the way it works.

And here's the code that was behind Enum.count, and what we see up at the top isn't just documentation: it's a doctest. So if I come along to do a pull request and I somehow goof up Enum.count, well, it won't ever get released to production, because this documentation caught the problem and would fail the build. And also, if you read the docs, you know they work, because again, they passed the build. That was stolen from Python, and so many things were stolen, stolen, stolen. It's beautiful. They didn't just borrow; they stole.

And Mix: this is stolen from Leiningen, from the Clojure community. This replaces decades of IDEs right here, Mix, and we have this pluggable system of tasks where we can build out everything, and we get rid of all that junky tooling that we suffered through for two decades. So we can scaffold out new projects.
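A doctest looks something like this sketch; the module and function names are mine, not the Enum source, but the mechanism is the same:

```elixir
defmodule MyMath do
  @doc """
  Doubles a number.

  The iex> lines below are doctests: ExUnit executes them and
  fails the build if the shown result no longer matches.

      iex> MyMath.double(21)
      42
  """
  def double(n), do: n * 2
end

# In the project's test file, a single line pulls the examples in:
#
#     defmodule MyMathTest do
#       use ExUnit.Case
#       doctest MyMath
#     end
```

So the example in the docs and the test are the same artifact, which is why docs that build are docs that work.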
We can build docs, we can run tests, and so on. So we scaffold out a new project: we scaffold out a new fizzbuzz, we go through it, build it, we have our tests; it scaffolded out the whole deal for us. And we can say `code .` if we have Visual Studio Code, and we have a nice editor that plays nice with Elixir. The whole tooling is just so easy and quick to go through. Just beautiful. Expressive dev joy.

So now we're going to move on to pattern matching. This is a thing that Erlang shines at, and Elixir shines at it too. So {a, b, c}: we're going to destructure the thing on the right. We take this three-tuple of apple, banana, cherry, and we pattern match on the left, so b is going to be banana.

We can do similar sorts of things with lists. We bind list to the thing on the right, [1, 2, 3], and we do a pattern match in a case expression. We say `case list do`, and we try to pattern match against [11, 12, 13]. This is not going to match, because 11 is not 1 and 12 is not 2, and so on, so we fall through. The tuple {1, 2, 3}? Nope, not going to match, because our list is not the same thing as a tuple; three elements with the same values, but these are different data structures. All right, is this one going to match? [1, x, 3]: that's going to match, and we capture the 2 into our x. Okay, and the next one would also match if the one before it hadn't, but we're greedy and only the first match wins. And this last one would have matched the 1 against the head, and t would be bound to the tail, which is [2, 3] here. Okay.

And this is nice right in the middle of IEx: we can do these multi-line things with case expressions. Really cool.

Okay, digit separators: just pure candy. There's just nothing but candy here. It wasn't needed;
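The case walkthrough above, in runnable form; the literal patterns are the ones from the talk, and the clause results are my own labels:

```elixir
list = [1, 2, 3]

result =
  case list do
    [11, 12, 13] -> :wrong_values    # values don't line up
    {1, 2, 3}    -> :tuple           # right values, wrong shape
    [1, x, 3]    -> {:matched, x}    # first matching clause wins
    [1 | t]      -> {:tail, t}       # would match, but never reached
  end

result  # => {:matched, 2}
```

Only the first clause that matches runs, which is why `[1 | t]` never fires even though it would bind t to [2, 3].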
it's just added in, and it's just nice. And there are all sorts of things the Elixir community brought in like this. One thing Erlang was always blasted for was being so horrible at strings, because people didn't really understand that they weren't called strings: they're binaries. Anyway, the name change is the big thing, but on top of it there are a lot of really great libraries and modules built up around string handling, with UTF-8 all the way. And here we have a combination of two things: we have UTF-8, and we also have an if expression that behaves the way anyone from a non-Erlang background would expect an if to behave. We have if and else; we could also have just an if without the else, and we wouldn't crash if the if didn't happen.

All right, anonymous functions. We use a lot of anonymous functions, and there are multiple ways of doing this. We're going to define this variable odd? — the idiom here is question marks for booleans — and we say fn x: so the anonymous function takes x and passes it into rem(x, 2) != 0. This is basically going to be our odd function, right? We end our function. Okay, and we can call that with 5, with 8. All right. Then there's the shorthand syntax for doing the same thing with the ampersand capture: instead of having x, y, z, we'd have &1, &2, and so on, and we just match ordinally like that.
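The odd? function both ways, as a sketch:

```elixir
# The question-mark-for-boolean idiom, as a plain anonymous function.
odd? = fn x -> rem(x, 2) != 0 end

odd?.(5)  # => true
odd?.(8)  # => false

# The same thing with the & capture shorthand; &1 is the first argument.
odd2? = &(rem(&1, 2) != 0)
odd2?.(5) # => true
```

Note the `.` before the parentheses: that's how Elixir calls anonymous functions, as opposed to named ones.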
So rem(&1, 2): &1 gives us our first argument. Same exact thing here.

Okay, one other way of hooking up functions: because we have multiple function heads in Erlang and Elixir, and we can do this inside of a fn as well, we can say area = fn with a square-and-width head, a rectangle-width-and-height head, and a circle-radius head. When we call this, we say area, we pass in a square and a 3, and get 9: our width times our width. A rectangle: 2 times 4, and so on. So we've had multiple function heads here, matching inside the anonymous function, which is really neat.

So, functional programming: A goes to B. We transform an input to an output, and that's so common that we have an operator built in, the pipe forward, that lets us take the expression on the left and pipe it in as the first argument to the expression on the right. We'll see that in action here with our odd function. We define that, and we take the numbers from one to a hundred thousand and pipe them forward into Enum.map — so our first argument is actually our enumerable coming in; our second argument is our anonymous function of our argument times three — we pipe that forward into a filter, filter just for our odds, and sum those things to get our big number here.

Let's also do something a little bit different: instead of Enum.map, Enum.filter, sum, we do Stream.map, Stream.filter, and then sum, and get the same answer. So what's going on here? We have the Enumerable protocol. Protocols are like interfaces in C# or Java and so on: it's a shape, and implementations get created around it. And with these two implementations, Enum and Stream, we can visually see what they're doing here: Enum is about batch processing, chunk chunk chunk, and Stream is more like a one-piece flow.

Okay, we're going to do some maps here really quickly: name, Brian; beardy — well, I'm not that beardy, I don't have a bird flying out of my beard right now,
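The eager-versus-lazy pipeline can be sketched like this, with the same numbers as the talk: one to a hundred thousand, times three, keep the odds, sum.

```elixir
odd? = &(rem(&1, 2) != 0)

# Eager: each Enum call materializes a whole intermediate list
# before the next step runs (batch processing, chunk chunk chunk).
eager =
  1..100_000
  |> Enum.map(&(&1 * 3))
  |> Enum.filter(odd?)
  |> Enum.sum()

# Lazy: Stream just composes the steps; nothing runs until a
# terminal Enum call pulls elements through, one piece at a time.
lazy =
  1..100_000
  |> Stream.map(&(&1 * 3))
  |> Stream.filter(odd?)
  |> Enum.sum()

eager == lazy  # => true; both are 7_500_000_000
```

Same answer either way; the difference is whether intermediate collections exist in memory or elements flow through one at a time.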
so I'm going to say false. And so there's me. I say person, and I can access that property through an indexer, person[:name]. I can also access it through candy — Elixir is full of candy like this — so we can say person.name. We can build a new map based on this old map and get the new map back out. But if we do something like wake: true in that update syntax, wake is not a key in the old map, so it bombs. If we really want to do that, we can go ahead and put our new thing in with Map.put, and we get wake: true, beardy: false, name: Brian.

Okay, if we want a little bit more tightness than that, we have the idea of structs. We define a struct, and these have defaults: our name is blank, beardy is false. We go through and we see our code there in our defmodule, see our defaults, same kind of stuff. But here, if we use a key that isn't defined: boom. Quite unlike JavaScript, right? Because we get this check here.

Okay, macros. I almost feel bad showing this, but I'm going to go ahead and show it. Macros are dangerous, but in a certain funny way. So, result = fridge_check: we call off into the fridge and we get our temperatures, 34 and so on. These are all Fahrenheit — sorry, I forgot to convert those — so this is cold, but not super cold; it's above freezing. So fridge_check: we get our :ok and our list of readings, and we take that result and try to get the maximum temperature for our readings, and boom: protocol Enumerable not implemented for our :ok-and-list.
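A sketch of the map and struct behavior just described; the Person module and the wake key follow the talk's example, and the exact values are my reconstruction:

```elixir
# A plain map.
person = %{name: "Brian", beardy: false}

person[:name]   # indexer access => "Brian"
person.name     # dot-access candy => "Brian"

# The update syntax builds a new map, but only for existing keys:
older = %{person | beardy: true}
# %{person | wake: true}  # KeyError: :wake is not a key in person

# To add a genuinely new key, use Map.put instead:
with_wake = Map.put(person, :wake, true)

# A struct: defaults plus a fixed, compile-time-checked key set.
defmodule Person do
  defstruct name: "", beardy: false
end

me = %Person{name: "Brian"}
# %Person{wake: true}  # CompileError: unknown key :wake for Person
```

That compile-time check on struct keys is the "tightness" a plain map can't give you.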
It's like, okay: the :ok is there. This is the idiom of tagging a tuple, and our tagged tuple came back with :ok. We got our result, but the list was inside it, and we popped the whole thing through, so Enum whines. Well, we could be grown-ups and do it like this: we could match on :ok, capture temps out of the result, and then, like grown-ups, take temps and then Enum.max. Which is a completely reasonable thing to do. Or we could be wild-eyed punks and build a macro. So let's build a macro.

We're going to create this thing called bang pipe. It's a macro that defines this bang-forward-pipe operator, that guy there. To get your head around what's actually happening, I'm going to mask off some bits. We import our BangPipe, and we do a fridge_check, then bang-pipe it into Enum.max. What it's going to do: we're in the body of this thing, the unquote(left), and we pipe forward, and we pipe forward into unquote(right). So what is this? This unquote is going to replace that with fridge_check, because that's the expression on the left. It takes the fridge_check and pipes it into this case expression: if it matches against {:ok, value}, we get the value; if it didn't have an :ok, we just return the value as-is. Then we pipe that forward into the expression on the right, which is Enum.max, and we get our value.
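A sketch of the bang-pipe macro. One caveat I'm adding: Elixir only allows infix definitions for a fixed set of operators, and a literal `!>` is not among them, so this sketch uses `~>` in its place; the fridge_check/0 readings are made up:

```elixir
defmodule BangPipe do
  # left ~> right: unwrap an {:ok, value} tuple, then pipe forward.
  defmacro left ~> right do
    quote do
      unquote(left)
      |> case do
        {:ok, value} -> value
        value -> value
      end
      |> unquote(right)
    end
  end
end

defmodule Fridge do
  import BangPipe

  # Pretend sensor readings, in Fahrenheit.
  def fridge_check, do: {:ok, [34, 35, 36, 34]}

  def max_temp do
    # Expands to: fridge_check() |> case do ... end |> Enum.max()
    fridge_check() ~> Enum.max()
  end
end

Fridge.max_temp()  # => 36
```

The macro rewrites the call site at compile time, which is exactly why it's powerful and exactly why the talk immediately warns against it.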
Okay, this is really a bad idea, so don't do this. It's a silly example, but it lets you see the power of macros and how they can change the language, and a lot of Elixir is macros. So there's some guidance on when you write macros. You write macros when you're José. You can also write macros when José asks you to fly his sleigh tonight. And then you can also write them when you realize that not having the macro is going to make your team more miserable than having the macro. Does it carry its weight? That's really the question.

Wow, okay. All right, this happens, and I need to find my mouse. We're going to have to skip through this pattern-matching section, so I'll put it into the talk for tomorrow. Okay. So that was more than a bellyful, because we're out of time; we're past our bellyful limit, and that's after lunch, even.

So the closing here is really looking at these guys on the left and just being thankful for this VM and this stack, for the OTP that they built, because it was built in a special time. This isn't going to get built again. Google's not off building the Erlang VM; Microsoft's not going to build an Erlang VM. The Erlang VM is out there, it's the one that's going to be out there, and if you want this stuff, it's the place to be. And on the right, a lot of thanks to José for having the wisdom to see what was here, and then having the compassion to bring this whole community of really bright people along with him. The Erlang community is really thankful for that, the Elixir people are thankful for that, and so it's a group of a lot of people that are thankful.

And at this spot I'd say: we've got this great stack, we've got this great language, so go off and use it in your work, have fun, and build good software. With that, we're out of time, but thank you all. Hit me with questions on Twitter and just around the conference, but thank you all for being here.