Thank you for coming. My name is Brian Cardarella; I'm the CEO of DockYard. If you missed the opening keynote, we're one of the primary sponsors of the Phoenix project and we're a consultancy: we can help you with building out your Elixir and Phoenix applications, training, staff augmentation, and code audits of your apps. My talk today is on code spelunking for knowledge and profit, and I have to make a confession: I cannot promise profit out of this presentation, but I do hope to deliver some knowledge.

So Elixir has great documentation, because it treats documentation as a first-class citizen. Having this kind of documentation across so many libraries this early in a language's ecosystem is pretty rare. Historically, many open source communities have had to be shamed into adding documentation, or improving it to the point that it's actually usable. Elixir, however, made this a priority very early on. But some people don't learn well through documentation, and I am one of those people. So I figure: why spend five minutes reading the freaking manual when I can spend five days reading the freaking code? That's just me, but I suspect others are the same way. What I'd like to share with you today is some of my experience diving into many popular Elixir libraries over the past year and what I've learned from that. Some of the topics we'll be going into are Ecto's integration tests, Phoenix view inference, Phoenix template compilation, some things I've added to Elixir myself (the pop_in function and test module attributes), and finally we'll take a look at Elixir guards and special forms.

One of our first client projects in Phoenix was actually rebuilding an existing application. We were rewriting the front end with Ember and also replacing a Node.js backend with Phoenix. The pre-existing app had been around for about a year and already had a database of considerable size.
At this point, Ecto and Phoenix didn't have any way of pulling in pre-existing schemas. This was something we needed, because if we were going to run our test suite against a pre-existing database, we couldn't start the migrations from zero, and I didn't feel like going back and writing migrations for a year-old database. What I wanted was a way for us to pull in a dump of the existing database and have that be the starting point of the test suite. So I had to look into how to go about adding this to Ecto. I come from a Rails background, so I figured it would be easy enough.

What I first learned, when I went and ran Ecto's test suite with a plain mix test, was that it was not actually running the whole suite. I had been writing Elixir for a little while at this point and was familiar with the regular test directory, but looking at Ecto's own source code, I realized there was an entire other directory called integration_test that does not run through a normal mix test. Taking a look at the Travis script, I could see that several different test commands are being run. Right below mix test, we see mix test.adapters, with some environment variables being set to run it. This gives us a clue about how to run Ecto's entire test suite, but how does it work, and why?

Let's take a look at Ecto's mix.exs file to understand this. As you can see here (and I don't know how well these screenshots show up on the big screen), there's a key in the project function called aliases, the purpose of which should be obvious. An alias maps a given mix task to a list of other tasks, and those tasks can themselves be aliases. An alias can also execute a function; in the case of test.adapters, Ecto is using a function to run both the Postgres and MySQL integration test suites.
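Putting those pieces together, the pattern can be sketched as a minimal mix.exs. This is a simplified, illustrative version, not Ecto's exact code; the app, module, and adapter names are placeholders.

```elixir
# Sketch of the aliases / test_paths pattern from Ecto's mix.exs
# (simplified and illustrative, not Ecto's exact code).
defmodule MyLib.Mixfile do
  use Mix.Project

  @adapters [:pg, :mysql]

  def project do
    [
      app: :my_lib,
      version: "0.0.1",
      # Where test files live depends on the current Mix env:
      test_paths: test_paths(Mix.env()),
      aliases: [
        "test.all": ["test", "test.adapters"],
        # An alias can also point at a function:
        "test.adapters": &test_adapters/1
      ]
    ]
  end

  def application, do: []

  # Re-run `mix test` once per adapter, with MIX_ENV set accordingly.
  defp test_adapters(args) do
    for env <- @adapters, do: env_run(env, args)
  end

  defp env_run(env, args) do
    {_, res} =
      System.cmd("mix", ["test" | args],
        into: IO.binstream(:stdio, :line),
        env: [{"MIX_ENV", to_string(env)}]
      )

    if res > 0, do: System.at_exit(fn _ -> exit({:shutdown, 1}) end)
  end

  # For the adapter envs, tests come from integration_test/;
  # the catch-all clause falls back to the normal test/ directory.
  defp test_paths(env) when env in [:pg, :mysql], do: ["integration_test"]
  defp test_paths(_), do: ["test"]
end
```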
So let's keep diving down into the env_run function. At first this looks confusing: it appears that values are being set for the Mix env, so how does this actually run anything? For one run it will be pg and for the other it will be mysql; there's a module attribute towards the top of the mix.exs file that sets the adapters you can see in the test.adapters function right above. What this is actually doing is iterating through that list and running a system command that kicks off a new mix task, but with the MIX_ENV environment variable set to either pg or mysql. Okay, that's interesting, but how does that actually run the files in integration_test? We have to go back up to the top of mix.exs to answer this. There's a key in there called test_paths. test_paths instructs Elixir on where the test files live, and it's passed the current Mix env. The definition ends up being a function that pattern matches on the env: if it's pg or mysql, return integration_test; for anything else, via a catch-all clause, return the normal test path. That was pretty interesting: we learned some neat tricks we can bring back to our own projects just by editing our mix.exs file. And all of it was very obvious and easy to discover once you start poking around a little bit.

In Elixir we hear the saying "explicit is better than implicit." This means we want our code to be obvious and intention-revealing, which can result in more verbose code. However, all rules are meant to be broken where it makes sense, and in Phoenix, the controller's render function will assume the corresponding view module based on the calling controller module. This convenience is an acceptable violation of explicit over implicit. Let's see how this works.
Here we have a plug that adds to the connection the matching view and layout, based on the current controller module. If we look into the second function, Phoenix.Controller.__view__, we can see that all it's doing is taking the controller module name, parsing it out, appending "View," and converting it back into an atom. (For those of you who don't know: modules are just atoms.) What was interesting for me, when I looked at this a while back, was that I was unfamiliar with Phoenix.Naming. I saw unsuffix and thought, okay, let's take a look at that; what does it do? Well, it turns out Phoenix.Naming has a whole bunch of convenience functions that I had been re-implementing in most of my applications all the time: for example, underscore, or camelize, or humanize. The number of times I've written functionality like this when it already existed, and I could have just reached into Phoenix to use it, is a little embarrassing. But now, after looking at the code and diving in, I can save myself a whole bunch of trouble. These are pretty awesome tools to add to my tool belt.

I have several Elixir libraries that I've been publishing, one of which has to rely upon external, non-Elixir files. When the contents of those files change, or if we add, delete, or move a file, we want to kick off recompilation. We want to instruct Elixir: next time you compile, maybe the actual source code didn't change, but because a dependency file of that source changed, force recompilation anyway. The reason is that the library reads these files and loads their data into a module attribute. However, by default, Elixir does not track non-Elixir files when determining what needs to be recompiled.
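The view-inference trick described above boils down to string manipulation on module names. Here's a self-contained sketch in the spirit of Phoenix.Controller.__view__ and Phoenix.Naming.unsuffix; the Naming module and view_for function are illustrative names, not Phoenix's actual code.

```elixir
# Sketch: deriving a view module from a controller module, in the
# spirit of Phoenix.Controller.__view__/1 and Phoenix.Naming.unsuffix/2.
# Module and function names here are illustrative.
defmodule Naming do
  # Drop a trailing suffix from a module's string form:
  # unsuffix("MyApp.UserController", "Controller") => "MyApp.User"
  def unsuffix(value, suffix) do
    string = to_string(value)
    suffix_size = byte_size(suffix)
    prefix_size = byte_size(string) - suffix_size

    case string do
      <<prefix::binary-size(prefix_size), ^suffix::binary>> -> prefix
      _ -> string
    end
  end

  # Modules are just atoms, so mapping a controller module to its view
  # module is plain string manipulation plus String.to_atom.
  def view_for(controller) do
    controller
    |> Atom.to_string()
    |> unsuffix("Controller")
    |> Kernel.<>("View")
    |> String.to_atom()
  end
end

Naming.view_for(MyApp.UserController)
# => MyApp.UserView
```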
In this case, these data files are being read at compile time and the data is stored in a module attribute, as I said. But how do we get Elixir to recompile our module when a data file is added or its contents change? Well, there's a really great example of how this works in Phoenix itself: templates. Template files, the EEx files, are not Elixir files, but Phoenix is still watching them; if you go and edit a template, the next time you load the page it's recompiled and you see the result.

There are two halves to this. The first half is a module attribute called external_resource. With @external_resource, we give it a path to a specific file and say: this module has this external resource. This attribute is registered with accumulate: true. By default, module attributes just overwrite: set the same attribute again and the old value is gone. But if an attribute is declared with accumulate: true, then each time you set it again, the new value is appended onto a list. All this to say: when you give @external_resource a path, and that file's contents change, then the next time compilation runs, our consuming module will also be recompiled, despite the fact that its own source code may not have changed.

However, this only covers one of the use cases we need to cover. The others are adding a new file our consuming module may be interested in, deleting a file, and moving a file. To understand those, we have to take a look at this line right here. If you've looked at your Phoenix project's mix.exs, you may have seen it before; it's right in the project function. "Compilers" may look pretty advanced, but I hope to distill this down, because it's actually pretty simple.
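The first half, @external_resource, looks like this in practice; the file name here is made up for illustration.

```elixir
# Sketch: baking an external (non-Elixir) file into a module at compile
# time, and telling the compiler to watch it. The file name is illustrative.
File.write!("greeting.txt", "hello from a data file")

defmodule Greeting do
  # @external_resource accumulates, so a module can declare several
  # watched files. When any of them changes, the compiler recompiles
  # this module even though its own source did not change.
  @external_resource "greeting.txt"
  @data File.read!("greeting.txt")

  def data, do: @data
end
```

After compilation, Greeting.data/0 returns the file's contents even if the file is later deleted; the data was captured at compile time.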
We can instruct Elixir to add additional compilers, with some control over how each compiler operates. Phoenix's compiler, and really all of them, are just mix tasks, and we can take a look at the contents of this one right here. It's a little long to read, so I'll give you the TL;DR of what's happening. The run function is invoked for each compiler, and it calls a touch function. The touch function goes out and determines whether the hash of the list of file paths under a watched directory has changed. If we add a new file to a directory, we expect the list of file paths, and therefore the hash of that list, to change. If the hash doesn't change, recompilation doesn't kick off, at least for this compiler; if it does change, compilation kicks off. And that's pretty much it. There is a little bit of additional code in the Phoenix template module to support this, like the hashing function, but beyond that, we've simply added a new compiler that Elixir will run. You can go look at this mix task on your own, and probably implement the same thing in your own applications if you need it. It's really powerful and really, really simple to do; it took me about ten minutes to incorporate this approach into my library for the recompilation I needed.

If you're not familiar with Kernel.get_in, here's the primer. When working with deeply nested data, you may find yourself chaining multiple Map.get calls to walk down to your nested value. get_in lets you save a lot of time by passing a path to walk: something like this here, we can simply rewrite as this. And that's great. But before Elixir 1.2, there was no simple way of reaching deep into a map or keyword list and deleting a value.
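The core of that compiler trick, hashing the path list of a watched directory, can be sketched without any of the mix task plumbing. The module, manifest file name, and directory below are all illustrative, not Phoenix's code.

```elixir
# Sketch of the idea behind Phoenix's :phoenix compiler: hash the list
# of file paths under a watched directory; if the hash differs from the
# one recorded last time, signal that recompilation is needed.
# Module, manifest, and paths here are illustrative.
defmodule WatchDir do
  @manifest ".watch_manifest"

  def stale?(dir) do
    current = hash(dir)

    case File.read(@manifest) do
      # Same file list as last time: nothing to recompile.
      {:ok, ^current} ->
        false

      # First run, or the path list changed: record it and recompile.
      _ ->
        File.write!(@manifest, current)
        true
    end
  end

  # Hash the sorted list of paths, so adds, deletes, and moves all
  # change the hash, even when no file contents changed.
  defp hash(dir) do
    dir
    |> Path.join("**")
    |> Path.wildcard()
    |> Enum.sort()
    |> :erlang.md5()
    |> Base.encode16()
  end
end
```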
The idea is that we have something fairly deeply nested, and we want to delete one of the keys within that nesting. I had implemented this on several projects as a recursive walk: go down, delete the key, then pass the value back up and reassign it to the parent. So I figured, okay, this is something I've implemented several times. I don't want to extract it into a three-line library, because we are not the JavaScript community; I should probably try to contribute it back to Elixir core. This was the first time I was going to attempt to contribute something to a language.

The solution was not as straightforward as my implementation in my own applications, because it has to support many different use cases in the language itself. In fact, when we use these Kernel functions, get_in, put_in, update_in, etc., each step of the path we're walking may produce another map, a keyword list, or something else. So heading into the Kernel module to see how get_in is implemented got pretty interesting, and a little more complex than I had hoped, but I wasn't daunted. The existing functions already there were get_in, update_in, and put_in. These functions rely upon the Access module while walking the path: because a given value along the path may be a map or a keyword list, we need different access behavior for each, and the Access module is what enables this. The path is walked and the parent value is passed in; through some simple pattern matching in Access.fetch, we walk the path. When the list of path segments is passed in, each segment is essentially passed to fetch along with the current value, and fetch pattern matches on the value's type. For a map, as you can see on line 243, it uses :maps.find, relying on the Erlang function rather than an Elixir one; same thing on line 246 if it's a list.
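Here's what the primer above looks like in code: chained Map.get calls versus get_in, and the pop_in call that eventually landed in Elixir 1.2.

```elixir
data = %{user: %{profile: %{name: "Jane", city: "Boston"}}}

# Instead of walking down manually with chained Map.get calls:
name = data |> Map.get(:user) |> Map.get(:profile) |> Map.get(:name)

# we can pass get_in a path to walk:
name = get_in(data, [:user, :profile, :name])

# And pop_in (added in Elixir 1.2) deletes deep inside the nesting,
# returning both the popped value and the updated structure:
{city, data} = pop_in(data, [:user, :profile, :city])
```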
This allows us to walk the structure very, very easily. After a few revisions and a rename to pop_in, this functionality was accepted into Elixir 1.2. As I said earlier, this was my first commit to a language, and I was able to do it within about a week, without ever having committed anything to a language before, all by leveraging the existing code within Elixir's own source.

Next is test module attributes, which is something else I've added to Elixir. You may be familiar with @tag in your test files. It's a nice way to annotate your test functions with specific values, and those values get reset on every single test. But if you've played around with this and other module attributes, you'll have noticed the other module attributes are not reset on every test. I had a library I was writing where I wanted an API that relied on something like @tag, but under a different name, and I also wanted those module attributes cleaned up after every single test. Here's how I went about adding that to ExUnit.

Let's start with how Elixir implements the module attribute cleanup. When we look at ExUnit.Case, we see in this function that Module.delete_attribute is called with the module (our test module, in this case) and :tag. This is defined within the on-definition hook. So when is that hook invoked? Within the test macro, we can see down here in the quoted code being emitted that the hook runs for every definition. So now we know that for every single test being defined, the deletion of that module attribute takes place. This should be pretty easy to extend. The solution I ended up implementing was adding an API to ExUnit.Case called register_attribute.
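The register_attribute API just mentioned can be used like this; @fixtures is an illustrative attribute name, not something special to ExUnit.

```elixir
ExUnit.start(autorun: false)

defmodule RegisteredAttributeTest do
  use ExUnit.Case

  # Tell ExUnit to treat @fixtures like @tag: picked up per test
  # (under context.registered) and reset after each one.
  # The attribute name here is illustrative.
  ExUnit.Case.register_attribute(__MODULE__, :fixtures)

  @fixtures [:bar, :baz]
  test "sees the attribute set right above it", context do
    assert context.registered.fixtures == [:bar, :baz]
  end

  test "the attribute was reset before this test", context do
    assert context.registered.fixtures == nil
  end
end

result = ExUnit.run()
```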
This function tells ExUnit.Case that there's a new module attribute that should be respected by your test suite: defined per test and cleaned up after each test case. So if you're interested in module attributes that operate and behave this way, you can call ExUnit.Case.register_attribute, and then you have access to more than just @tag. Everything registered this way, unlike @tag, is considered special and put on the test context: @foobar right here, for example, ends up on the context object under registered.

During my time diving into the Elixir source code and adding some functionality to it, I did some more poking around, and I had the opportunity to answer a few questions I'd had for myself. The first was about guard statements. I think a common question a lot of Elixir devs ask themselves fairly quickly is: okay, these guard statements are pretty nice, but I have this other function and I'd like to use it as a guard; why can't I do that? If you take a look at one guard function, say is_binary in this case, you can see a note on line 286 that says this function is inlined by the compiler. Okay, why is this function inlined, and how can I inline my own functions, or can I even do that? Towards the top of this module, in the documentation, we can see that some flagging is occurring. If we do a project-wide search of the Elixir source just for "inline," we're pointed towards a file called elixir_rewrite. elixir_rewrite simply checks whether a given function is allowed, flagged, for use in guard statements. On line 77, we see pattern matching on, for those who aren't familiar with Erlang code, is_binary there; that's an atom at that point.
It doesn't have any special syntax around it the way Elixir does, but that is an atom: in Erlang, is_binary there is just an atom. Anyway, this is instructing the compiler that the function is_binary is allowed to be a guard. And at the bottom we can see the catch-all, the pattern-matching part of it, saying that anything else we pass in is not an allowable guard. So now we understand why our own functions can't be used. If I had to venture a guess, I would say this was done because Elixir cannot guarantee that the function you want to use would be available at compile time, which could lead to compilation errors; but that's a total guess. I suppose you could go compile your own version of Elixir and add your function in there, but I don't necessarily recommend that.

Next, special forms. I recently wanted to do something like this: a macro that could take an arbitrary number of bare, comma-separated values being passed in, and compile down to a list like this. Elixir's own quoting system supports this style, so it could theoretically be possible. However, a spike proved it wasn't, because of course a macro receives each comma-delimited value as a separate argument. But there are other constructs within Elixir that violate this policy; how do they do it? To find out, I took a look at with. with is an example of a macro that allows non-standard syntax, and it's defined within the module Kernel.SpecialForms. That was new to me; I had never really looked through that module before, because I did not read the freaking manual. A quick look at it showed many other macro definitions that also support non-standard syntax.
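For example, with takes a comma-separated series of clauses, syntax that an ordinary macro could not accept as bare arguments, because a macro would receive each comma-delimited value as a separate argument:

```elixir
# `with` is defined in Kernel.SpecialForms and accepts a comma-separated
# series of <- clauses followed by a do block.
opts = %{width: 10, height: 4}

area =
  with {:ok, w} <- Map.fetch(opts, :width),
       {:ok, h} <- Map.fetch(opts, :height) do
    w * h
  end
```

If any clause fails to match, with short-circuits and returns the non-matching value instead of running the do block.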
And Elixir has some Erlang code that declares these functions as "special," with a catch-all at the bottom saying everything else is not. So again, if you wanted to go recompile Elixir and add in your own function as a special form, you could override this; but this shows you why you're not able to do it out of the gate with a standard Elixir build.

These are some small examples taken from my own experience in Elixir development. I hope I've been able to impress upon you two ideas. One: functional programming makes reading and diving into code very straightforward. Figuring out how something works doesn't require understanding application state, which is very powerful, because I don't have to boot up the application or open up a console session to understand what the state of the application is at a given point. I don't have to look too hard at the tests to see how things are being set up; I can just assume data in, data out, and it's very easy for me to see where the next piece of code is jumping off to. And two, and this is incredibly powerful: a lot of Elixir itself is written in Elixir. In fact, nearly 88% of the Elixir source code is written in Elixir, according to the GitHub numbers, and I'd say it's probably even higher in practice, because a lot of the Erlang code is in tests and in the tokenizer and parser. So in most cases, what you're going to be interested in will be available in Elixir code: syntax and code you should already be familiar with. That said, if you're coming to Elixir from something other than Erlang, you're probably not too psyched about Erlang syntax. For me personally, I looked into Erlang years ago, and the blocker really was the syntax. It's kind of a ridiculous reason, but that's what it was for me.
However, through Elixir I've really had the opportunity to start learning Erlang, and I think that once you learn some of the high-level concepts through Elixir, the Erlang syntax doesn't seem as daunting. You don't have to go out and become an Erlang expert, but you should become comfortable enough to poke around in it. So go out and build this community, and thank you very much. I ended early intentionally, because I know that lunch is next. Do we have any questions? Yes, all right.

[In response to a question] So specifically, what I was trying to do was this: I'm writing a fixture library, and the fixtures macro is used before your tests. Rather than doing something like fixtures with a list, sorry, let me just turn on mirroring, I can keep going bigger: fixtures [:bar, :baz]. I didn't like that; it felt like too much ceremony around it. Because it was a macro, I wanted to write fixtures bar, baz, something like that, where one call could pass a single value and another could pass bar, baz. But if I defined the macro as fixtures(arg1, arg2, keywords), it's just going to take each comma-delimited argument passed in as a separate argument on the macro definition. The special form functions, however, appear to violate that policy. If you look at the definitions of those functions within Elixir, most of them take just one argument, even though several different things can be passed into them, and then it's up to the parsing inside the function to decide what to do. Anything else? Okay, thank you.