So, my name is Jason Voegele. I work at Basho, where I'm an Erlang developer and I work on the Riak database there. Maybe you've heard of that. Pretty cool stuff. Today I'm going to be talking about Dialyzer, which is a type checking tool for Erlang and Elixir. And I'm going to lead into that topic by first talking about technology holy wars. These are those incredibly productive online discussions that usually result in someone being compared to Hitler in some way or another. If you're old enough to have been on Usenet back in the day, maybe you participated in, or at least witnessed, some of these discussions. I guess it's moved to Reddit nowadays, but the granddaddy of them all, of course, is the old Emacs versus vi holy war, or Vim. You've got people on both ends of the spectrum here, really strong adherents to both of these tools. I'm a Vim guy. I love my Vim. Yes, and I know there are many Emacs people in the crowd too, so no offense. So you hear the loud people on either side of this extreme, but you've also got some people in the middle who are trying to take the best of both worlds. They've created things like Viper and Evil and, more recently, Spacemacs, which try to give you the best of both worlds: the nice modal editing power of Vim with the nice runtime and Lispy environment of Emacs. So not everybody is on the extremes of this issue. Another one that used to be an argument was the old tabs versus spaces debate. How do you format the source code for your programs? If anybody watches the TV show Silicon Valley, you may remember an episode that hinged on this debate. Pretty funny. In any case, this battle has actually been won. I think spaces pretty much won out. I don't think anybody really formats their code intentionally with tabs anymore. So we have a clear victor there, I think. And then another one I remember participating in back in the day was the old KDE versus GNOME debate.
If you used the Linux desktop environment, these were your two big choices. There were some other niche players, but it basically boiled down to those two. And I guess the sad fact is that this argument kind of became irrelevant, because the year of Linux on the desktop never happened, despite being promised every single year. And I'm sorry, there are probably some Linux adherents out in the room, and I mean no offense. Now, in programming circles, one of the big debates is the old static versus dynamic typing debate. People still argue about this one to this day, and there are lots of strong arguments on both sides. Do you want a strong static type system where the compiler checks everything ahead of time for you? Or do you want the looseness and flexibility of a dynamic type system, where you have the freedom to code as you like and not worry about appeasing your compiler? Last year I was at Strange Loop and saw a talk by Gary Bernhardt that kind of crystallized the differences between the two extreme positions here. His talk was called Ideology, and it was about the ideological reasons underlying a lot of our beliefs. In terms of static and dynamic typing, he classified people as either type bigots or test bigots. He said that type bigots believe that correctness comes exclusively from categories, and categories is just another word for types, right? So these are maybe your Haskell programmers or ML programmers, who you've maybe heard say, well, it compiles, ship it, it works, right? On the other hand, you have the test bigots, who believe that correctness comes exclusively from examples, meaning your unit tests. You have tests that prove the code behaves correctly, at least given these example input conditions. These might be your Ruby programmers. I'm sure we've got a lot of current and former Ruby programmers in this room, and it probably also includes a lot of us Elixir developers, too.
But like I said, these are the extreme, polar-opposite viewpoints, and today I want to talk about a middle-of-the-road approach, where we compromise and maybe get the best of both the static and dynamic typing worlds. And that's in the form of gradual typing. Gradual typing is a type system where some, but not all, of your variables and functions can have type declarations associated with them. It's up to you as the programmer to decide which ones it makes sense to do that for. Now, why might you want to do this? To combine the benefits of static typing, such as having a tool or a compiler that checks your program for correctness and consistency of usage ahead of time, before it ever runs. And maybe even more important than that, having type declarations in your code can make it that much easier for other developers to read. On the other hand, there are some drawbacks and burdens to static typing systems, such as the fact that you have to get all of your types correct upfront. You have to have all of this figured out in your head and translated into a form the compiler can check before you can get any real work done. That's a big burden that can hinder productivity a lot. With gradual typing, since you don't have to get all the types correct upfront, you can use a more incremental, laissez-faire development approach. You can still do your prototyping and figure out the shape of your code before you really have to nail down all the details of the types to please your compiler. And of course, with gradual typing, duck typing is still an option. I'm sure a lot of us here have utilized duck typing for its advantages in the past, and gradual typing doesn't take that option away from you. So that's gradual typing, which is what I'm going to be talking about today. But first I want to take a quick detour into the history of type checking in Erlang.
We know that Elixir is built on the BEAM, and Erlang is a strong foundation for Elixir, so I just want to give a little historical overview of how we got to where we are today. Erlang, like Elixir, is a dynamically typed language. But I'm going to argue that it's also a strongly typed language. Different people mean different things when they say strongly or weakly typed. All I mean is that it doesn't allow you to combine types in incompatible ways. Like the example up here: you cannot add the number five to the string containing the digit two. That's an error. It's not some nonsense like you get in JavaScript, where you can add an array to a number and get something, and depending on which order you put them in, it might be something else. That's what I mean when I say it's a strongly typed system. In Erlang, as in Elixir, you have pattern matching and guard sequences, and these do provide some rudimentary form of type checking for your programs. But it's important to note that this happens at runtime, not compile time. You actually have to execute your code in order to have the type checks done for you. There's no sense of a type check at compile time. Now, over the years there have been several attempts at adding a true static type system to Erlang. The most famous was in 1997, by Simon Marlow and Philip Wadler of Haskell fame. That up there is Wadler in his Lambda Man costume. They spent a year or two trying to build a strict static type system into Erlang, and it was ultimately a failure for a couple of different reasons. One was that Erlang was already a dynamically typed language, and they were trying to bolt a static type system on top of it. The type system they came up with could not express all the types that are important to us in an Erlang program, such as processes and messages between processes.
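To make that concrete, here's a minimal sketch of what those runtime checks look like in Elixir. The Area module is my own illustration, not from the slides:

```elixir
# 5 + "2" raises ArithmeticError at runtime: + only works on numbers.

defmodule Area do
  # The is_number/1 guard is a runtime type check: a bad argument
  # raises FunctionClauseError when the call executes, not at compile time.
  def square(side) when is_number(side), do: side * side
end

Area.square(3)      # => 9
# Area.square("3")  # raises FunctionClauseError at runtime
```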
But more important than that was the fact that by this time, 1997, Erlang had already been around for ten or twelve years. There were already really large systems that had been running in production for a long time, and running flawlessly. We know about Erlang's reputation for building robust systems. And what happened was, they came along with this grafted-on type system, and it threw all kinds of errors at programmers, who said, well, my program has been running flawlessly all this time, and now this type checker is telling me everything's wrong. So clearly something wasn't right there. And it never really took off, because it wasn't an aid to the programmer; it was really just a hindrance. But not everything was a failure, because we learned a lot in that process, and what it ultimately resulted in was a tool called Dialyzer. The name I won't try to defend; it's a bit of a stretch. What it is, is a static analysis tool that performs type checking. As a static analysis tool, it's not part of the compiler and it's not part of the runtime. It's a separate tool, maybe like FindBugs or lint if you've used one of those, that you run separately. But it performs type analysis on your code and will print warnings and errors that you might need to address. It features type inference, so it will analyze your code and try to determine what types of values you're using based on your usage patterns. But you can also insert optional type specifications for your functions and variables, and that's the idea of gradual typing coming into play. You add those as needed. Now, unlike the Marlow and Wadler type system, which was a very strict, pessimistic system, the authors of Dialyzer decided they wanted an optimistic type checking system, based on a concept called success typing. So what is that? The authors of Dialyzer wrote a paper describing the type model, and there's a whole bunch of words in there that will tell you what success typing is.
So if you're the academic type, you want to look up that paper. There's a lot more math and cryptic vocabulary in there, if that's your thing. But basically it boils down to this: it's an optimistic system, meaning that it never cries wolf. If Dialyzer emits an error, it means there's really something wrong with your code. It's optimistic in the sense that if it cannot prove that your code is definitely wrong, it's going to assume you knew what you were doing and just move on silently without flagging an error. That's in stark contrast to other type systems, which are very strict and know better than you do. Dialyzer is the opposite. Okay, so we've been talking a lot about history and Erlang, and you might be thinking, we're at ElixirConf, why aren't we hearing more about Elixir? Well, I mentioned before that I'm an Erlang programmer, and like Clint Eastwood here, we tend to be a bunch of grumpy old men. We resent all you Elixir programmers with your shiny new tools. But the real reason, of course, is that in order to understand the things Dialyzer can do for you, it's important to understand the historical context out of which it grew, and to understand why it uses an optimistic model rather than the pessimistic, strict model that most people are used to in type systems. So from here on out, we're going to be talking strictly about Elixir. You can forget all the history if you want. I mentioned before that Dialyzer does type inference. So even if you don't add any type specs to your code, you can get a lot of value out of using the tool, because it will analyze your code and try to determine what types of values you're using. It can actually flag inconsistent usages of variables even if you've never added a type spec to your code at all. Let's take a quick look at how that might work. Here we have a few different Elixir functions. The first one is an add function.
It simply takes in two parameters, X and Y, and adds them together using the plus operator. Now, in Elixir, as in Erlang, the plus operator is defined by the language to mean addition of two numbers. There is no operator overloading. It's not used for string concatenation or anything like that, like you might see in other languages. Which means that if you see a plus sign in your code, the things on either side of it must be numbers, either integers or floating point numbers. Using that language rule, Dialyzer is able to infer the types involved in this function. I've added what Dialyzer infers the types to be as a comment below. So we've got numbers as your two parameters, and the result of the function is also a number. The next function, divide, is very similar, except rather than adding the two numbers together, it's doing a division. And again, the language definition states that this is a valid operation only on numbers, but it also says that the result is always a floating point number. Even if you divide two integers, the result of division is always a float. So Dialyzer is able to infer a slightly more specific type signature for this one. You can see that it infers you're returning a float from this function. Okay, one more, slightly trickier, example of type inference at work. Here we've got an and function defined. Let's pretend that Elixir didn't already have a built-in and operator, and we wanted to define our own and function to do a logical and. This is a very reasonable way to do it. We're providing three different function heads and doing pattern matching on the arguments passed in to determine whether the result should be true or false. The first one: by definition, false and anything else is false. That's what the first function head tells us. The second one says any value and'ed together with false is also false.
And it's only when you pass in true as both arguments that the result is true. Okay, that makes sense, but what does type inference do with this? Well, we know that the result is always going to be a boolean, because in all three variants of this function we're returning either true or false, which are both boolean values. But if you look at the way the function is defined, we've got these wildcards; the underscore there says we can pass any value in that position, because we're not really using it. So we could pass false and the number 42 to this and function, and it would work perfectly fine. That's perfectly valid usage of this function, but it kind of throws a wrench into the type inference mechanism, because it can no longer be certain that you're only passing boolean values to this function. So it throws its hands up and says, well, you can pass any type of argument to this function. But maybe that's not our intent. Maybe we want to write our and function this way, but we really want to insist that only boolean values be passed to it. Well, that's where type specs come into play. In Elixir, you can add a type spec to any function just by prefixing it with a line that says @spec, the name of the function, then the types of the parameters, and finally the result type of the function. So let's skip down to that and function we were just talking about. Here we've added a type spec to the and function, and all we did was say that we want to insist that any arguments passed to this and function must be boolean values. Now, this doesn't change the behavior of the function in any way. The function is still defined as it was previously. But since we've added a type spec, when we run Dialyzer across our code, it will look at any invocations of this and function, and if any invocation involves anything other than a boolean value, it will flag an error. So that's how you add type specs to functions.
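Reconstructed from my description of those slides, the three functions might look like this. Note that I've named the logical-and function both/2 here, since and is a reserved operator in Elixir; the inferred types are shown as comments, and the @spec is the one we just added:

```elixir
defmodule Inference do
  # Dialyzer infers: add(number(), number()) :: number()
  def add(x, y), do: x + y

  # Dialyzer infers: divide(number(), number()) :: float()
  # (/ always returns a float, even for two integers)
  def divide(x, y), do: x / y

  # Without a spec, inference only knows the result is boolean();
  # the wildcard heads mean any() is accepted for the arguments.
  # The @spec narrows that to booleans at Dialyzer time.
  @spec both(boolean(), boolean()) :: boolean()
  def both(false, _), do: false
  def both(_, false), do: false
  def both(true, true), do: true
end
```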
What are the things that you can put inside your type specs? Well, there's a wide variety of them. You've got all your basic built-in types. We've already seen booleans. You've got types for characters, binary strings, PIDs, ports, references; the things that the original Marlow and Wadler type system could not represent are represented in Dialyzer. You can even include literal values in your type specs. So if you have a function that always returns :ok, you can incorporate that into your type spec. Or if you have a function that always returns the number 42, that could be part of your type spec for that function. And you've also got a range of numeric types that you can incorporate into your type specs. You've got, of course, integers and floats. You've also got a kind of supertype of those, number, which refers to anything that is either an integer or a float. Or you can be more specific about the range of values you want for your numbers. You can say you only want positive integers, negative integers, or non-negative integers, or you can even use a range of integers. Say you have a set of functions that deals with calendar dates, and you want to say they accept months; you might encode that as a range from one to twelve. And the type system will enforce that insofar as it can. Of course, it doesn't know, if you're pulling a value out of a database, what that value might be. But if it can tell that you're using an integer value that's not in the range, it will flag that as a type error. And by the way, I know I'm flying through these really quickly, and I don't expect you to remember all the details of what these type specs look like. I just want to give you a sense of what's possible, and then if you want to use them, you can look up the documentation for the details. Your type specs can also include lists and tuples.
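As a quick sketch of literal and ranged types (the function names here are mine, but the month range follows the example I just gave):

```elixir
defmodule Calendarish do
  # A literal atom as the return type: this function always returns :ok.
  @spec touch() :: :ok
  def touch(), do: :ok

  # A range type for months; Dialyzer flags call sites it can prove
  # pass an integer outside 1..12 (ignoring leap years here).
  @spec days_in(1..12) :: 28..31
  def days_in(2), do: 28
  def days_in(m) when m in [4, 6, 9, 11], do: 30
  def days_in(_m), do: 31
end
```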
You can either use the keyword list to represent a list, or just the square bracket notation, like a list literal in Elixir. If you want to say that your list expects a specific type of value to be contained therein, you just specify that in your type spec. And you can even say that you want the list to be non-empty, either using the nonempty_list keyword or the square brackets with the three dots in there. That's actually the real notation, those are three dots, and it says that the list must be non-empty. You can also express tuples in your type specs, either using the keyword tuple or the curly brace syntax. And if you want to say what the types of the individual elements of the tuple are, the syntax for doing so is right there. We just say that the first element must be an atom, the second must be a binary, et cetera. Type specs can also represent maps, and structs, and even compound collection types. There's a whole range of ways of representing a map in a type spec, depending on how specific you want to be about what you expect the keys and values in your maps to look like. The simplest case is to just say map, or use the map literal syntax, the percent sign followed by curly braces. But you can also say that this map should have this particular key, and the value associated with that key should be of this specific type. And you can do the same thing with structs. If you have a function that expects a particular struct as an argument, the syntax for expressing that in your type spec is the same as a struct literal in Elixir. And you can combine all these things in arbitrarily complex ways. If you want a list of tuples of maps, or whatever nesting you want, it maps to the Elixir literal syntax one to one. And of course, Elixir is a functional language, which means that functions are first-class citizens in the language.
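To give a flavor of those collection specs in one place, here's a small sketch; the module and function names are my own illustrations:

```elixir
defmodule Shapes do
  # A non-empty list of {atom, binary} tuples in, a plain list of atoms out.
  @spec keys([{atom(), binary()}, ...]) :: [atom()]
  def keys(pairs), do: Enum.map(pairs, fn {k, _v} -> k end)

  # A map that must have a :name key whose value is a string.
  @spec name_of(%{required(:name) => String.t()}) :: String.t()
  def name_of(%{name: name}), do: name
end
```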
You can pass them as arguments to functions or return them as results from other functions, and of course you can represent them in your type specs. The syntax for doing so is basically the list of argument types, followed by an arrow, and then the result type you expect for that function. So if you have a callback function and you want to provide a type spec so users know what the shape of the function they pass in should be, you can use the function type to do that. Okay, so that's basically all of the built-in types. You can see it's a pretty expressive type system even just using the built-ins, but to be truly useful you have to be able to define your own custom types. I just wanted to show an example of how that works with this module here. It's called playing cards, and it represents a standard deck of playing cards. We have a few types defined in this module. You define your own custom type using the @type syntax: you give your type a name, a double colon, and then the definition of what that type is. Let's look at the first one. It's a type representing the suit of a card, and what we're saying here is that the type suit must be one of the four atoms: spades, hearts, diamonds, or clubs. That vertical bar is sort of an OR operator there. This is basically an algebraic data type, if you're into type theory. So that's how we represent our suit. Of course, a card also has a value: a number in the range of two to ten, or one of the atoms jack, queen, king, or ace. This is how we're deciding to represent our cards. We've got a type for a suit and a type for a value, and now that we've got those two things defined, we can define the type for a card itself. Here we're representing that as a tuple, where the first element of the tuple is of type suit, which we defined above, and the second element is of type value, which was also defined above.
And the final type example is a deck of cards, which is just a list of cards. We use the list type syntax to say that decks are lists, and that the elements contained therein should be cards. And the three-dot notation, of course, means that the list should not be empty. Okay, so those are our type definitions for playing cards. Let's see how they're used in type specs for functions. We have a function called suit here, and the intent is that it takes a card as an argument, extracts the suit component of the card, and returns it. We're just using simple destructuring pattern matching to do that. And you can see the type spec describes what I just said: the argument should be of type card, and the return type is of type suit. That all works fine, but let's jump down and take a look at this function called broken. It's called broken for a reason. It's calling the suit function with what looks like a card, but if you look closely, you might notice that the elements inside the tuple are flip-flopped compared to our type definition for what a card is. The type spec for a card says it's a tuple with a suit as the first element and a value as the second element. But the way we're calling the suit function in broken has the value first and the suit second. And this will actually be flagged as an error by Dialyzer, right? I said that Dialyzer is an optimistic system; it's not going to flag any errors unless it can definitively prove that there's an error in the code. And this is one case in which it can do so. It says there's no way this could possibly unify with the type specs that we provided for our function. So it's nice that it catches that error for you at compile time, or at least at Dialyzer time. Okay, now I'm going to jump into a demo or two.
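Putting the playing-cards discussion together, here's a reconstruction of the module as described; the details are my best rendering of the slides:

```elixir
defmodule PlayingCards do
  @type suit :: :spades | :hearts | :diamonds | :clubs
  @type value :: 2..10 | :jack | :queen | :king | :ace
  @type card :: {suit(), value()}
  @type deck :: [card(), ...]

  @spec suit(card()) :: suit()
  def suit({suit, _value}), do: suit

  # Runs fine at runtime, but Dialyzer flags it: the tuple elements
  # are flipped relative to the card() type.
  @spec broken() :: suit()
  def broken(), do: suit({:ace, :spades})
end
```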
In the interest of time, I'm probably going to skip the function composition demo, but at least we'll get to take a look at another example of using type specs in Elixir, and also at how you use the Dialyzer tool in your Elixir projects, all right? So here I've got a stack module, and this is just your basic bread-and-butter last-in, first-out stack. Before looking at the type specs for this, let's do a quick overview of the functions provided by this stack module. We've got a new function, a push function, and a pop function; your standard stack operations. And you can see by looking at the implementations that they all use lists as their representation of the stack. That's a very natural way of doing things. If you were doing this in an object-oriented language such as Ruby, you would define your stack as a class, and this list would be sort of an internal instance variable of that class. Now, we don't have such facilities in functional languages like Elixir, so what we end up doing is passing the data representation around to functions. That works fine, but it does limit your ability to provide encapsulation. It means that other parts of the code, outside your stack module, might use the stack as if it were a list. And if you decide to change your representation of the stack in the future, you'd have to worry about all that other code out there that you might break by doing so. So what we would like to have here is the ability to keep our implementation details encapsulated in the stack module and still be able to use it in a convenient manner. And the way to do that is to use this concept of an opaque type. We've got this @opaque declaration here, and it's almost just an alias for @type. It's a way to define a new type, and the type we're defining here is called t.
Since it's defined in the Stack module, its fully qualified name is, of course, Stack.t; it's a common Elixir convention to name it that. And we're defining that type to be a list of values. Value, up here, is just an alias for the any type. We don't really care what types you're putting into the stack. We just want to say that the stack is represented as a list of those values, all right? And now that we've got this opaque type, all of our type specs can refer to it. So, for example, the new function says that it returns a Stack.t, and the push and pop functions say that they take Stack.t types as their arguments, and also as part of their return values, okay? We use it just like any other type declaration. But let's see how it differs from a regular type declaration by looking at example usage of the stack. Here I've got another module called stack user that just uses the Stack module we just looked at, all right? Here we're creating a new stack by calling Stack.new, and then, through our pipeline here, we're pushing three elements onto that stack. So after this pipeline, the stack will be the list three, two, one, all right? Here's an example usage of the pop function. Pop takes a stack as an argument, and it returns the top value of that stack combined with the rest of the stack, as a tuple. This is all just standard usage of the Stack module. Let's see how the opaque type comes into play. On this line here, line 13, we are pattern matching, kind of destructuring, the stack variable, which is a list. So this should work; we're pattern matching. This syntax will grab the top element of the stack and put it in top, and the rest of the stack will be referred to by the rest variable there.
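Put together, a minimal reconstruction of the Stack module and the usage we've been walking through might look like this; the misuse at the end is the pattern match in question:

```elixir
defmodule Stack do
  @type value :: any()
  @opaque t :: [value()]

  @spec new() :: t()
  def new(), do: []

  @spec push(t(), value()) :: t()
  def push(stack, item), do: [item | stack]

  @spec pop(t()) :: {value(), t()}
  def pop([top | rest]), do: {top, rest}
end

defmodule StackUser do
  def demo do
    stack = Stack.new() |> Stack.push(1) |> Stack.push(2) |> Stack.push(3)
    {top, _rest} = Stack.pop(stack)
    top
  end

  # Valid Elixir, but Dialyzer flags this match: it breaks the
  # opaqueness of Stack.t by treating the stack as a raw list.
  def peek_directly(stack) do
    [top | _rest] = stack
    top
  end
end
```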
That all works perfectly fine. It's valid Elixir code, and nothing will happen at runtime or compile time if you choose to use the stack this way, because a stack is a list, remember, as we saw it was implemented. However, because we declared the stack type as opaque, Dialyzer will actually flag this as an error. So we can use Dialyzer as a tool to help us enforce the encapsulation of our data types in the modules that own them. Let's see what that looks like when we actually use it. I'm going to show Dialyzer in action in an Elixir project here. We've got an Elixir project, and of course we generated it with mix. Since Dialyzer is a command line tool that ships with Erlang, if you have Elixir installed, you already have Dialyzer installed on your system. But it's a lot more convenient to use it through this mix plugin called Dialyxir. It's available on GitHub; you just put it in the dependencies for your project, and you'll be able to use it. What it does is add a couple of mix tasks so that you can use Dialyzer in your mix-based projects. Let's take a look at how that works. Okay, so the first time you run Dialyzer, you have to run this task called dialyzer.plt. PLT stands for Persistent Lookup Table, and what it basically is, is a cache database, where Dialyzer will go in and analyze and scan all of the built-in Erlang and Elixir library code. Those are really big code bases, and it can take a long time to scan through and analyze all of that for type information. So you only have to do this once to build that database, and it takes maybe five or ten minutes, depending on how fast your machine is. You have to do that once when you first run Dialyzer, and once each time you upgrade Elixir on your system, which is a little bit unfortunate. But at least the mix task is friendly enough to tell you when you need to do it.
So you don't need to remember; it'll yell at you if you haven't done it. But every time you upgrade Elixir, you have to run this dialyzer.plt task to update that database. I've already done that, so I'm not going to make you sit here and watch it run for five or ten minutes. Now we'll jump into the dialyzer task itself. This is the one that analyzes your code and flags any potential type errors. Dialyzer, as a tool, understands Erlang source code and BEAM files. It does not understand Elixir source code, all right? So having it integrated into mix means that your code will first be compiled down to BEAM files, and then Dialyzer can transparently use those BEAM files as its input. That's what's happening behind the scenes here: the code is first compiled to the BEAM, and then the analysis runs on those BEAM files. And what comes out here: remember, in our example code we had a line that was using the stack as if it were a list. And since we declared the stack type as opaque, we said we didn't want that. So this is the kind of error you'll get out of Dialyzer in this case. We're actually getting two errors here. The first one says has no local return. That's kind of a generic message that means this function had a problem, and details are to follow. So you can usually ignore the has no local return message and look at the one immediately below it, which is usually the real cause of the error. And this one is a little bit tricky, but it says the attempt to match a term of type Stack.t against this pattern breaks the opaqueness of the term. Believe it or not, this is one of the friendlier Dialyzer error messages. If you've used Dialyzer before, you've probably been quite bewildered. But this one is actually pretty specific about what's going on here.
It tells you what line the problem is occurring on, and the nature of the problem, which is that this pattern here is using the type in a way that breaks the opaqueness you had declared for it. Makes sense? Okay, as I said, in the interest of time I'm going to skip past the other demo I had in place, which was about function composition, and just bring things to a close here. There are a lot of features in Dialyzer that I didn't want to take the time to go into in depth here. But if you do use Dialyzer, and I suggest you do, these are some things you might want to look up on your own time. You've got overloading of type specs. In Elixir, you can have multiple function heads for a function, differentiated based on guard sequences or pattern matching, and the type spec syntax allows you to express that in your type signatures as well. That's what overloading of type specs refers to. You've also got parameterized and polymorphic types. If you've done some C++ template metaprogramming, this might mean something to you; if not, maybe don't worry about it. Type variables, tagged tuples, and type specs for structs are some other things you might want to check out. So you can actually add type specs to your structs as well. But just to bring things to a close, I'm going to offer my personal assessment of what Dialyzer brings to the table, both positive and negative. First and foremost, I want to say that I do believe gradual typing is really a good compromise, compared to people banging heads over whether you should use static typing or dynamic typing. They both have their benefits and drawbacks, and gradual typing allows you to incorporate them into your own development style in a nice, easy, and effective way. And without a doubt, type specs make the code easier to read. Maybe you even noticed in my small toy examples: just having those type specs there tells you what a function returns.
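For the curious, here's a hedged sketch of what two of those features look like; the Extras module and its names are mine, not from the talk:

```elixir
defmodule Extras do
  # A parameterized (polymorphic) type: result(t) is generic in t.
  @type result(t) :: {:ok, t} | {:error, term()}

  # Overloaded specs: one @spec clause per distinct usage pattern.
  @spec parse(integer()) :: result(integer())
  @spec parse(binary()) :: result(integer())
  def parse(n) when is_integer(n), do: {:ok, n}

  def parse(s) when is_binary(s) do
    case Integer.parse(s) do
      {n, ""} -> {:ok, n}
      _ -> {:error, :badarg}
    end
  end
end
```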
You don't even have to read the documentation to know that this thing returns a tuple and the first element is a list, or whatever the case may be. So it makes code a lot easier to read and maintain. And it really does find real errors in your code. At Basho, it's kind of our policy to put type specs on all of our public functions at least, and on as many functions as we can, really. And it has definitely been the case that Dialyzer has flagged errors in my code that my unit tests failed to find. So it can really surprise you in the kinds of errors it can find in its analysis. On the other hand, it can really surprise you with the things it doesn't find, too. It finds some real errors, but it can't find all real errors, and that's just due to the optimistic approach that they chose for this tool. We talked about the reasons they chose that approach and why those are good reasons for the Erlang and Elixir universe. And also, as we alluded to before, the error messages can sometimes be a little bit tricky to read, but you kind of get used to that. You start to notice the little quirks in the output and you're able to figure out what it's trying to tell you. And if it is telling you something, you know it's a real problem that you should definitely address. And finally, I think the Elixir integration is a little bit lacking. Dialyzer is a tool that came out of the Erlang universe and is very well integrated into that workflow, and just by virtue of building on top of that, Elixir gets to take advantage of it. But there are some integration points that could be improved. I mentioned that type specs for structs are a thing you can do, but they're very verbose and repetitive: you're basically repeating your struct definition but inserting type specs instead of default values. And even worse is the fact that, since that information is not compiled into the beam file, it's basically just documentation. The Dialyzer tool will never see it.
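The struct verbosity complaint looks like this in practice (a minimal sketch with hypothetical field names): the @type line repeats the defstruct line almost verbatim, swapping default values for types.

```elixir
defmodule User do
  # The struct definition with default values...
  defstruct name: "", age: 0

  # ...and the type spec, listing every field again with a type
  # instead of a default. Per the talk, at the time this was treated
  # as documentation rather than something the Dialyzer tool checked.
  @type t :: %User{name: String.t(), age: non_neg_integer}
end
```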
So you can add type specs to your structs, but it's really just for documentation purposes at this point. The other unfortunate consequence of the fact that it works on beam files rather than Elixir source code is that if you have Elixir script files, the .exs files like your unit tests and maybe many other things, they can't participate in the Dialyzer game, because they don't get compiled to beam files. So those are some slight disadvantages to Dialyzer as it stands today. But that said, there's really no downside to using Dialyzer. Even though some things could be improved, it's definitely going to help your project to incorporate type specs into your code and Dialyzer into your workflow. So I would encourage you to use it despite the fact that it's not a perfect tool. And that's my assessment of it, and that's all the material I have. I just want to say thank you for your time, and I'll open it up for questions if we have a few minutes for that. Thank you for this, it was a great talk. Thanks. So I've worked in Node, so I haven't used Dialyzer, but I've been looking into it, and I've also been looking into Credo. Do you have any experience with that? I do not, unfortunately. I don't know the differences between the two that well. Sorry. All right, thank you. Do you have any tips for overlapping domain errors? Overlapping domain errors. Can you expand on that a little bit? Sometimes that's a message that Dialyzer gives, and it means that somehow the types you're using have some overlap somewhere. Yeah, so it's hard to answer in general. Usually it means that you're saying something you probably didn't mean to say: the type you're specifying is, I believe, a supertype of the type that Dialyzer inferred. So you're probably doing yourself a disservice by writing a type spec with a less specific type declaration than what the type inference mechanism inferred on its own. That's my understanding of it.
The other thing is, I've weirdly fixed them by moving them all up to be together, instead of being separate @specs, and I was wondering why Dialyzer cared about the difference. Yeah, I'd have to see a specific example of that; I can't answer that off the top of my head, sorry. But yes, I think there are some bugs, or maybe misfeatures, in the implementation. I've come across cases where just expressing the types differently, like you said, using the OR syntax by putting the ORs in the function type spec itself rather than in a separate named type spec, had different results. And I don't know the reasons for that. It might just be an implementation misfeature, or there might be something I don't understand causing it. But I've noticed the same unfortunate behavior: different ways of expressing what should be the same idea lead to different results. And going back to the holy wars, there's another package just called Dialyze that also does the same stuff. But it saves the PLTs in your _build directory, so it's nice if you're switching between Erlang versions, because it keeps separate copies for each version of Erlang and Elixir. That's good to know. I'll look into that. Thank you. Is there any work being done, or that has already been done, with compiler optimization and this type spec information? I don't believe so. Now, it does require some cooperation on the part of the compiler just to have the type specs be interpretable. But even though those type specs are parsed by the compiler, the compiler currently doesn't do anything with them; the analysis is done by the separate Dialyzer tool. That's not to say that nothing could be done with it, but I'm not aware of any work being done in that direction. It would be a fun project, though. Is there a way to view the specs that Dialyzer infers for a function, or better yet, to auto-generate them? Almost. Well, the answer is yes and no.
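The "same idea, different results" point is about spellings like these two. This is a hypothetical example of my own: a union hidden behind a named @type, versus the same union written inline in the @spec.

```elixir
defmodule Parse do
  # Style 1: the union lives behind a named type...
  @type result :: {:ok, integer} | {:error, :invalid}

  @spec from_string(String.t()) :: result
  def from_string(s) do
    case Integer.parse(s) do
      {n, ""} -> {:ok, n}
      _ -> {:error, :invalid}
    end
  end

  # Style 2: the same union written inline with |. Semantically these
  # should be identical, but as discussed above, Dialyzer's output has
  # been observed to differ between the two spellings.
  @spec from_string2(String.t()) :: {:ok, integer} | {:error, :invalid}
  def from_string2(s), do: from_string(s)
end
```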
The answer is yes if you're working in Erlang, no if you're working in Elixir. There's a tool called Typer that ships with Erlang, and it will analyze your Erlang source code and spit out the type inferences as type specs in Erlang syntax, which is slightly different from the Elixir syntax. If you have Erlang source code, you can use that. But it doesn't operate on beam files, unfortunately, so you can't use it for Elixir. Quick question: if you define a type, let's say T is any, and then you have a function and you say it's T to list of T, is T going to be the same in both cases, or is it really just an alias for any? All right, so this is a very good and deep question. I was struggling with this one myself. Basically you're asking whether the two instances of T are constrained to be the same type, and unfortunately, in the current implementation, not really, no. If you do a Google search for Elixir parameterized polymorphic types, you'll probably come across the question I asked on Stack Overflow about this, and you'll get a much more in-depth answer, but it's not interpreted as strictly as it should be, I think. Where would one find more information about Dialyzer in general? Like, there's Learn You Some Erlang and Erlang in Anger; are there any other resources? Yeah, so Learn You Some Erlang is a really good reference, at least if you're comfortable enough with Erlang to be able to read the Erlang type specs. The Elixir documentation includes a typespec section, but they moved it outside of the main module documentation and into a separate section of the docs that doesn't seem to be picked up by Google. So it's in the Elixir docs somewhere, but I can't point you directly to where, because they recently moved it. That covers the syntax for type specs and all the available options there.
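For the "is T the same in both cases" question, Elixir's spec syntax does have a way to declare a type variable, shown in this minimal example of my own; whether Dialyzer enforces the constraint strictly is, as answered above, another matter.

```elixir
defmodule Pairs do
  # `when a: var` declares `a` as a type variable: in principle both
  # occurrences of `a` denote the same type, though the current
  # Dialyzer implementation doesn't enforce that strictly.
  @spec duplicate(a) :: [a] when a: var
  def duplicate(x), do: [x, x]
end
```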
It doesn't really get into the Dialyzer tool itself so much, but it will tell you what you can do with type specs, including polymorphic types and all the things I kind of had to gloss over here. They're available in the Elixir documentation, but you might have to be a little diligent in searching for them. Anyone else? So Dialyzer is strictly a static analysis tool, right? It doesn't do anything at runtime. Correct. When the types are defined, is there any ordinality to the definition? Like when you had two through ten, Jack, Queen, King, Ace, is there any sort of analysis it does around the ordering, like you can have Ace, two through ten, or...? So it can only analyze what it knows at the time it's running, right? I hope I'm answering the question you're asking here, but if the data being passed into these functions is outside of its domain of knowledge, it can't do anything with that, so it assumes that you know what you're doing. And that only happens at runtime anyway, so, yeah. Yeah, correct. Any advice for running Dialyzer on just some of one's code? Like, we have a Phoenix project where Phoenix just does not like Dialyzer, so are there any tips on how to set things up to skip all the Phoenix code? So to run Dialyzer on a subset of your code, you would probably not use that Dialyzer mix task that I mentioned. What you would probably do is run the dialyzer command line tool itself; it supports arguments for running against a particular directory or against specific beam files or whatnot. But as I mentioned, you first have to compile your Elixir code down to beam files, then run the dialyzer tool against the resulting beam files, and from there you can constrain it to whichever files you want. I had dialyzer running on Erlang 18.3, but when I upgraded to 19, there were a whole lot more new errors and stuff to fix. Do you know what changed?
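That subset workflow might look something like this (a sketch, not from the talk; the app name and the _build path are typical mix defaults, and it assumes you've already built a PLT):

```shell
# Compile the project so the .beam files exist under _build.
mix compile

# Point the dialyzer CLI at just your own app's compiled beams,
# skipping deps like Phoenix. With no --plt argument it uses the
# default PLT in $HOME/.dialyzer_plt.
dialyzer _build/dev/lib/my_app/ebin
```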
No, I do not, because I work at Basho and we're still on Erlang R16, so I can't help there. Thank you so much. Thanks, everyone.