Welcome everyone to the session with Evie Ciobanu. She's going to be talking to us about writing great tests with Haskell. So without me blathering on any longer, Evie, it's over to you.

Thank you. Hey, everyone. Before I start, I want to add a very, very small disclaimer: I'm definitely not an expert. I don't publish papers, and I'm not a verification expert or anything like that. I just have some experience I want to share, and I think we can do better. The title is a bit of an exaggeration, because I've seen this sort of test before; it's not necessarily new. I just don't know of any other talk or presentation about it, so hopefully it will be interesting for everyone.

OK, so the overview: I want to start with a short tour of software verification. I know a lot of you probably know that world already, but just in case there are areas you might not be aware of, I'll mention them. I'm not going to go into any of them in depth other than unit tests, as you can probably see from the overview. I just want to give you a quick feel of: hey, if you haven't heard about something here, maybe ask me or look it up, because there are a lot of approaches, and I strongly believe they complement each other a lot. And feel free to ask questions during the presentation; I'll take breaks and answer them as they come in.

OK, so we're going to start with a very brief and very incomplete overview of software verification — from my perspective, so I'm probably missing things, and seen through the lens of Haskell and FP quite a bit.

First off, I think we can all agree that writing software is really, really complex. We rarely see a program without bugs; even trivial programs often have them. Given that we can write a bug into a 10, 20, or 100 line program, it's quite understandable that there will be quite a few of them in a 10,000, 100,000, or million line application. And of course, we do our best not to write bugs. We do code reviews, where we have people read code; that's one way we avoid adding bugs. And don't get me wrong, this list is not ordered from bad to good or from important to unimportant. I think all of these go well together, and each spots a somewhat different class of problems. So: code reviews are great.

Then we also have best practices. Maybe there's a style document or a coding guidelines document somewhere that we point to, especially when doing code reviews or when starting on a new code base, and we say: hey, we attempt to do this or that in the code base, just to avoid some problems. This is not verification in the sense of "we know this is a bug if we do this"; it's more that we've learned that doing things in a certain way makes it easier to reason about code, or to read it later.
And then, of course, since we're at a functional programming conference and I'm a Haskeller, I strongly believe that using functional languages, and particularly strongly typed functional languages, helps — at least me, and probably a lot of other folks — in reasoning about code. We get some guarantees, and that keeps us from creating at least some classes of bugs.

Then there are tools that can analyze code. By tools I'm mostly thinking of things like linters — HLint for Haskell, or anything that reads your code and heuristically tries to figure out that you've done something wrong. There are also things in between the language and the tools: for example, enabling all warnings, which is not the default in Haskell. Incomplete pattern match warnings sit somewhere in between the language itself and a separate analysis tool, depending on how you view these things.

Then we can start thinking: what if I run the code? Just manually run the program once to see what happens — does it do what we expect? That's quite valuable. There are classes of errors that you'll never see, or that are very hard to see, unless you actually run it. Everything might look perfect, but you forgot to connect main to the actual program and it does nothing. That's an extreme example, of course, but that sort of thing is very hard to test statically, and it's a lot easier to just run it once and see what happens.

Then we get to the part everybody thinks of when they think of testing, or automated verification — especially folks in industry: writing unit, property, and integration tests. That's the meat of testing as far as software practice goes today. I'm not going to go into detail here, because we'll come back to it later.

Then, writing documentation is also a way of verifying code that a lot of people don't think about, because it's not checked automatically; it's closer to a code review. When you add documentation, you explain things, and you give the reviewer a chance to double-check their understanding. They have some understanding from reading the code, and that might change while reading the documentation: "wait, the code did something different from what the documentation said — let me read that again." So writing documentation is a great way to decrease the number of bugs, because you get one more artifact to check against.

Then — and from here on we're moving more into software verification as computer science — we can sketch out proofs on paper. We can say: well, we know all of the cases, and we want to prove on paper, maybe not very formally, that some property holds — that we can never go into an infinite loop, or that we always produce well-behaved output for our inputs, or whatever.
And again, this is very close to writing documentation, but it's a very specific, more formal sort of documentation, if you will.

OK, then we can use model checking. There's a lot of software that lets us create a model of our program and verify that it has certain properties. Now, this will not guarantee that our code is correct: the model checker tells us "OK, this model is correct and does what we think it does", but then it's our responsibility to manually verify that the model and the software correspond — that the Haskell (or whatever) code we write actually follows the model's logic. But it's a good technique, and it's not very expensive.

Next we have symbolic execution, which kind of brings things together. It's very hard to do, and very few frameworks do it, because it's a lot of work. What it requires is a formal description of your target programming language — say Haskell, in this case — written in some specification language. Then, in the same language, you describe what you expect the program to do, feed in your program as well, and see if everything matches: the tool symbolically executes your program through the language semantics and checks it against the thing you're trying to prove. This turns out to be very complicated for non-trivial programs and for more complicated programming languages, so it's not something we often do in the Haskell world — although hopefully at some point we'll be able to, I don't know.

And then there are other kinds of automatic verification. I'm not going to enumerate what those could be; I'll mention a few examples later. For me, these are the main ones, though I'm sure there are other ways to verify.

OK, so there are quite a few things here — let me go back for a second and group them into categories, just to make it easier to reason about this list. One obvious group, for me, is mentally reasoning about code: here we can put code reviews, sets of best practices, documentation, even sketching out proofs. We read things; we don't formally verify anything, and it's not automated in any way. We just read code, think about it, and say: OK, this might be a problem, that's probably not a problem, and so on. Then there's the group of trying examples out, which ranges from running things manually to automating the running of examples — the trio of unit, property, and integration tests. And then there's using tools to reason about code: here we can put the compiler, in case the language is statically checked or has static types, plus linters and all the rest — model checking, symbolic execution, formal verification.
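Since I keep pointing at compiler checks like incomplete pattern match warnings, here's a tiny sketch of what I mean. The code is a made-up example, and the warning text is paraphrased from memory:

```haskell
{-# OPTIONS_GHC -Wall #-}
module Describe where

-- With -Wincomplete-patterns (implied by -Wall), GHC warns that this
-- match misses the Nothing case, instead of waiting for a runtime crash.
-- Roughly:
--
--   warning: [-Wincomplete-patterns]
--       Pattern match(es) are non-exhaustive
--       In an equation for 'describe': Patterns not matched: Nothing
describe :: Maybe Int -> String
describe (Just n) = "got " ++ show n
```

Without the warning enabled, `describe Nothing` compiles fine and crashes at runtime — which is exactly the class of bug this check removes.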
Okay, and now I'm going to slowly dive into how and why I think these complement each other well, because I'm never going to say that one of these is better than another. I think they work very well together.

When we mentally reason about code — at least for me — I don't try to be a compiler or an interpreter when I do a code review. I don't try to make sure the types check, because that's what GHC already does; I'm not better at that than GHC and never will be. What I try to do is apply mental heuristics while reading code: from experience, I know what kinds of things we often get wrong as programmers — in programming in general, in FP in particular, and in our own code base in particular. The more experience you have with each of those layers, the better you'll be at these heuristics. So sometimes, just by reading the code, you can figure out problems that tools might not easily find, or not find at all — especially style-level concerns like "this might be a problem later", which are very, very hard to encode as specific tool rules. So it's a great way to improve code quality. You can find big-picture problems, because we're better at taking a step back and looking at the whole. And sometimes you can detect edge cases, if you're really familiar with the system or the problem domain — as programmers we usually want to think about edge cases. But not always; it depends on a lot of things, in my experience.

OK, then trying examples out can sometimes surprise us. We've all run something and not believed the result: "wait, what is going on?" That happens quite often in the life of a program, so it's a great way to find problems very early. Your code base should have a simple way to run things and try simple examples out; I think that's very useful. You usually get quick feedback — you can just run it and see what happens, though of course some programs will be slower than others. It also increases our confidence: if it works, we at least know that the happy path, or at least some cases, work. Of course, you'll never be able to guarantee the absence of bugs by just trying a few examples out.

Tooling itself we can split into several subcategories, if you will. Some tools give us guarantees — for example static typing, or other checks like the one I mentioned earlier: any GHC warning that can be enabled, like incomplete pattern match warnings. That's a pretty strong guarantee, because in a lot of other languages you can definitely have a lot of bugs from exactly this class — dynamic typing, or no incomplete-pattern checks.
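As a small, concrete taste of that kind of guarantee, here's a sketch using Data.List.NonEmpty from base — nothing hypothetical here; the type simply makes the "empty list" failure mode unrepresentable:

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

-- head on [a] can crash at runtime on []; NE.head on NonEmpty a cannot,
-- because the empty case does not exist in the type.
firstUser :: NonEmpty String -> String
firstUser = NE.head

demo :: String
demo = firstUser ("ada" :| ["grace", "barbara"])
```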
So this is definitely some level of guarantee, although of course it will never guarantee zero bugs.

Then there are tools that enforce best practices or heuristics. That can be anything from running a code formatter — which for me falls under best practices, because we want a code base that feels like one whole thing rather than differing from module to module — to HLint or other tools that use heuristics to find or suggest things. And then there are tools used to formally prove properties of our code; we'll see some examples of tools that work with Haskell or with components of it. These usually give you stronger guarantees than the others, and more customizable guarantees, if you will.

OK, so let me quickly go through these subcategories just to make sure we're clear on what they are. Static typing, as I said, gives us some guarantees about the code. Languages with more advanced type systems — Haskell, or even beyond Haskell, dependently typed languages like Agda or Idris — give us opportunities to encode a lot of advanced invariants in the type system, checked by the compiler, which protects us from certain classes of bugs — relatively surely, at least, since we can still make errors at the type level. Of course, some invariants are easy to encode and others are a lot more complicated, so there should be some judgment: "OK, this is too complex for what we want to do; this should probably stay a manual check rather than a strong type-level check." You usually get a quick feedback loop, though compilation times can be a problem: more complex type-level machinery usually translates to worse compile times, so be careful, because you can definitely reach a point where it's unbearable. And even with dependently typed languages there's a limit to what we can encode, and heavy encoding makes programming, and making changes to the system, a lot harder. So sometimes it's literally just impractical to push some properties into correct-by-construction invariants or otherwise into the type system.

OK, then linters and other style checkers are a great way to find a common set of problems — or rather, not necessarily "find", even though that's what I wrote, because they rarely say "this is a bug". More often they say "hey, this might be a problem", or "this repository prefers this style over that style, for these reasons". So they're a great way to avoid a certain type of problem. And one thing I think is important: the sooner linters are introduced to a code base, the better, because on a very large code base adoption is often annoying and high effort, and you end up with a huge PR that changes the whole code and needs a freeze. So it's better to start clean if possible — or rather, to start as soon as possible.
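To make the linter point concrete, here's the kind of suggestion HLint gives. The "Use concatMap" hint is a real, standard HLint hint; the surrounding code is just a made-up example:

```haskell
module Report where

-- Given this definition...
reportLines :: [[String]] -> [String]
reportLines sections = concat (map reverse sections)

-- ...running `hlint` over the file suggests, roughly:
--
--   Warning: Use concatMap
--   Found:   concat (map reverse sections)
--   Perhaps: concatMap reverse sections
```

Note that it says "perhaps": it's a suggestion, not a verdict, which is exactly the heuristic flavor I mean.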
You also usually get a quick feedback loop from linters, because they're fast: they don't have to prove anything, run through lots of examples, or even run the code — they just analyze it statically. So they're reasonably fast; I'll even say very fast.

Okay, and then formal proofs are a way to prove things in a rigorous way. You can be sure that if you formally prove something, it will always hold. Of course, you can be proving the wrong thing, or you can assume something that's wrong and then the whole proof doesn't stand — so again, you can never be 100% sure — but it helps a lot. There are tools that can formally prove at least parts of programs. In the Haskell world we have Liquid Haskell, which, if you haven't looked at it, I highly recommend you do — it's a great project and very interesting from the verification perspective. Another approach I've seen used, when talking about formal verification in the world of FP and Haskell, is using dependently typed languages to model part of the code, or even to do translations, though that's a bit more complicated. The languages usually used by people close to Haskell are Coq, Idris, Agda, and more recently Lean, to prove various properties of the model. It also helps that languages like these — especially Idris and Agda — are close to Haskell in syntax, so it's slightly easier to follow the model and write that same code in Haskell. There are even tools to automatically translate between a dependently typed language and Haskell, although they don't produce the best code and it's sometimes very, very slow. Agda to Haskell, at least, used to insert a lot of unsafe coercions — I haven't looked in a while — which makes the optimizer essentially worthless, so the result is very slow. But then again, if you have a very small core of things that you need a formal proof of, that might be reasonable: just a small part of the code, translated directly.

Okay, so we're done with the incomplete overview of software verification. Now we're going to zoom into software testing, then unit tests, then the main thing I want to say about unit tests. If there are any questions about the big overview, feel free to put them in the Q&A and I'll take a moment to answer them; for now, I'll continue with testing.

Okay, so testing specifically. Why do we write tests? Well, they're easy to write. Compared to formal proofs, writing a test is a lot easier, and there's a lot more material to learn from — material at a level most software engineers can easily follow, with lots of examples and courses and so on. It's pretty much instrumenting your code, or parts of it, to execute, and we understand how to do that. So it's easy, and it's cheap for us to write. It also provides immediate value: you don't have to think hard up front about what property you want to prove, the way you do with formal proofs.
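To show how cheap that immediate value is, here's a minimal sketch of a first test using HUnit — whose internals we'll look inside shortly. runTestTT is HUnit's simple terminal runner; the test itself is deliberately trivial:

```haskell
import Test.HUnit

-- Immediate value: minutes to write, no property-hunting required.
main :: IO ()
main = do
  _counts <- runTestTT $ TestLabel "reverse a small list" $ TestCase $
    assertEqual "reverse" [3, 2, 1] (reverse [1, 2, 3 :: Int])
  pure ()
```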
You don't have to work out what properties your subsystem has, or anything like that. You get immediate value: you write the test and you know this part works as you expect.

Then, of course, writing tests can act as additional, executable documentation. Whatever you do to verify something, you have to repeat yourself — that's the core thing I think we need to understand about verification. We have the code, and then we repeat ourselves, through tests, through proofs, or through documentation, to double-check that these things are in accord. So tests also act as documentation: hey, if a test says something should pass, that gives you more information about that thing. You write once and run many times, which is great. And you can — and definitely should — add your tests to CI, to make sure future code doesn't accidentally break them. As I've been hinting, I think testing is a very practical middle ground between manually running your program and formal proofs — which is not to say either of those is useless or doesn't have its place. It's just often a very good middle ground.

Okay, so we hinted at this earlier, but we have three big categories of tests, and I'll present them in the order I think works best.

I'll start with integration tests, where you usually have manual inputs: a specific scenario you're testing. You'll very rarely automatically generate integration tests, because they're slow — we usually don't want to run hundreds of generated samples of the same integration test, because that will take forever, or at least a long time with some software. And by accuracy, what I mean is: if a test fails, how quickly and how easily can I pinpoint the actual offending code? Since an integration test exercises a big subsystem, or even your whole system, its accuracy is pretty low. You might have an idea of what's wrong depending on the error you're getting, but it's often the lowest accuracy you can get from a software test.

Then we have property tests, where inputs are generated and execution is usually fast, because we're testing a small module, a small function, a small subset of our code. Even though we run a lot of examples, hopefully each one is quick. And the accuracy is high: because we're testing a small part of the code, if a property test fails, we know which input it failed for. Some property testing libraries have shrinking mechanisms: if a large example fails, the library tries to shrink it to the smallest example that still fails, which is great, because we all like a minimal reproduction case. So the accuracy is usually quite high.

And then we have unit tests, where we manually craft inputs. They have the fastest execution, because it's just a few tests — nothing is automatically trying inputs until something fails.
And again, the accuracy is high — if it's an actual unit test — because it covers a small area of the code.

Okay, so integration tests. As I said, they're often easy to write and to understand by non-programmers, because we usually exercise the public, quote-unquote, API of the thing we're testing: we call an API, pass in a file or some console flags, or whatever, and that's easier to understand for product or domain experts. Again, they're slow, which is not great. And the setup can sometimes be complex: if you need databases or other services or a specific environment, it can be quite a pain to set things up. And when they fail, finding the offending code can be tricky — sometimes it takes days or more. Sometimes you just know something is wrong and essentially have to debug everything to find out what. That's better than not knowing you have a problem, but it's frustrating, because it doesn't always help you pinpoint it.

Okay, so property tests. You can encode high-level properties rather than individual test cases, which is usually a stronger statement about your program — but the downside is that you have to figure out what those properties are, and that's not always trivial. They usually run reasonably fast. You can find unexpected bugs, because a well-written generator can produce examples you might not think about, or might have a hard time coming up with. A lot depends on how well you define the generators, though, and you have to be very careful: property tests are not proofs. Even if you have a property that reads like a proof — "this property always holds" — and it passes every time, that doesn't mean it always holds, because your generator might not be producing the right inputs, because you haven't thought about them. So they can give you a false sense of security sometimes.

Okay, so that's software testing broadly. I'm going to quickly go through unit tests, and then we can look at what I'm proposing. As I said, unit tests usually run reasonably fast. They should test a single thing — whether that's a function, a module, or an interface, something that makes sense in your code base — because if they fail, we want to be able to pinpoint the problem. We should attempt to isolate the thing under test, although there's a whole debate here on how much to isolate: should we mock everything, or leave the rest of our code as is? I'm not going to get into that — there's a whole bunch of other folks talking about it. I'll just say that what "isolate" means depends on your preferences, your code base, and your actual program. It should be easy to handcraft interesting examples for obvious edge cases — and, as I said, you might still miss some of them, because, well, that's what happens. And one thing to be careful about: even if all your unit tests pass, that definitely doesn't mean you have no bugs, because you can miss non-obvious edge cases and other interesting inputs.
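And that gap is exactly where property tests complement unit tests. Here's a minimal sketch of the generate-and-shrink workflow I described, using QuickCheck's real API with the classic reverse property as the example:

```haskell
import Test.QuickCheck

-- QuickCheck generates random lists; if a case fails, it shrinks the
-- input toward a minimal counterexample before reporting it.
prop_reverseRoundTrip :: [Int] -> Bool
prop_reverseRoundTrip xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseRoundTrip
-- Typical output: +++ OK, passed 100 tests.
```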
Now, for whoever's not familiar with how HUnit works: I actually didn't know some of this myself, because I'd never looked at HUnit's source code, and I found it interesting, so I felt like sharing it. HUnit, the unit testing library for Haskell, has a core data type called Test. Test has a constructor called TestCase, which lets you wrap an assertion — and Assertion is just IO (). It's literally a type alias for IO (). Then there's the TestList constructor, which tells us we can have a tree of tests — a literally arbitrarily deep tree — purely for the sake of grouping tests together and providing a hierarchy so tests display in a reasonable way. And the last thing we need to display tests properly is to give them tags, or labels, or names: that's TestLabel. And that's the whole type: TestLabel is how we name things, TestList is the hierarchy, and the assertions are the actual test cases.

So a tree looks something like this: you can have a label at the top saying "root" or "top level" or whatever. Of course you're not going to write them literally like this; I'm just trying to give you an idea of what the data type looks like, in case an example is easier to read than the type definition. You can have labels with cases under them, then sublists of sublists, and so on — or, in a different format, you can view it as a tree: the top-level test, then some checks, and others.

Okay, and then how does HUnit show errors? Through this HUnitFailure type, which has a location and a reason. It's an exception — that's why Assertion uses IO: failures are thrown and caught as IO exceptions. That's how a unit test fails. The reason we see in HUnitFailure can be either a plain failure — Reason, just a message — or ExpectedButGot, where the Maybe String is an optional extra message to show, and the other two strings are the expected and actual values. assertFailure, given an error message string, produces an IO a that throws: the deepseq is there to make sure the message itself doesn't throw an exception while being evaluated, and then it calls throwIO with an HUnitFailure carrying the location and the reason.

Then there's assertEqual: unless actual equals expected, it deepseqs everything — again, to make sure there are no exceptions hiding in the messages or the values — and throws with the other constructor, ExpectedButGot. The messages are pretty much just show on the values, and the preface message becomes Nothing if it's an empty string, otherwise it's included. This is just for whoever's curious; it's not that important — I just thought the representation as IO exceptions was interesting.
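To recap, here's roughly what those HUnit pieces look like. This is paraphrased from memory of HUnit's source — the real definitions live in Test.HUnit and Test.HUnit.Lang and derive a few classes — so treat it as a sketch:

```haskell
import GHC.Stack (SrcLoc)

-- The whole testing vocabulary: a case, a grouping tree, and names.
type Assertion = IO ()

data Test
  = TestCase  Assertion      -- an actual test body
  | TestList  [Test]         -- grouping: an arbitrarily deep tree
  | TestLabel String Test    -- a name attached to a subtree

-- Failures are thrown as IO exceptions carrying a location and a reason.
data HUnitFailure = HUnitFailure (Maybe SrcLoc) FailureReason

data FailureReason
  = Reason String                                -- plain "failed, because..."
  | ExpectedButGot (Maybe String) String String  -- preface, expected, actual
```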
Oftentimes we'll see the arrange, act, assert pattern in unit tests, and it looks something like this tree: this is the top level, this is the next level, and these are the actual tests — those are all TestLabels in our test tree. Then we usually have this sort of arrange step, where maybe we flush the database and then create a user using some API. Then we actually run the thing we're testing — for example, get the count of users — and we assert that the result is one: that we have exactly one user after we've cleaned up the database and added one. Which is a reasonable way to test things, probably, for this imaginary API.

Okay, actually that's a good time to take a question. The question is: why do we use deepseq to fully evaluate all arguments before throwing an error? The reason is that we want to know exactly where the error happens. If an exception is thrown while evaluating the arguments, we want to throw that exception — those are more important — and only if everything evaluates properly do we throw the test failure exception.

Okay, so again: why unit tests? As I said, they're very easy to write. They complement property tests very well, because you don't have to worry too much about generators and whether they generate the interesting cases. You can just write the interesting cases manually as unit tests, and let the generators produce the complex cases you might not think about. And I think we can do a better job at making unit tests readable and maintainable. So what do we want to get out of unit tests? We want to gain confidence in the implementation. We want to allow reviewers to double-check their understanding while reading our code. And we want to give more context to people reading the code later — context for when we're adding a change, and context for later. And, in the same category, we want changes made later not to break what we introduce now.

Okay, so going back to the example from earlier — this is the same example as before. Let's say we fold this first test away, in this comment here. Then we add a new test that creates a user by email, and we assume that works much like creating a user by name. Then we add another test — this one's folded too, right here — asserting that when we add no users, the count is zero. And then maybe one more where we add two users and expect the count to be two. As you can see — imagine everything unfolded — this would be around 50 lines of code, and pretty hard to scan visually, because there's a lot of noise in between the tests. We said we want to gain confidence, we'd like reviewers to double-check their understanding, and so on — but that's hard to do when things are hard to read. And unfortunately, a lot of the tests I've seen in code bases are like that: things are just written down, and people give less importance to tests and how they're formatted. But we can easily notice there's a lot of repetition, so let's start by trying to remove some of it.
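For reference, here's a sketch of the kind of repetitive "before" code I mean. The helpers — flushDB, createUserByName, createUserByEmail, getUserCount — are hypothetical stand-ins for the imaginary user API; a real suite would implement them against a database or HTTP service:

```haskell
import Test.HUnit

-- Hypothetical stand-ins for the imaginary user API.
flushDB :: IO ()
flushDB = error "hypothetical stub"

createUserByName, createUserByEmail :: String -> IO ()
createUserByName  _ = error "hypothetical stub"
createUserByEmail _ = error "hypothetical stub"

getUserCount :: IO Int
getUserCount = error "hypothetical stub"

countTests :: Test
countTests = TestLabel "user count" $ TestList
  [ TestLabel "one user after adding by name" $ TestCase $ do
      flushDB                          -- arrange
      createUserByName "Ada"
      n <- getUserCount                -- act
      assertEqual "count" 1 n          -- assert
  , TestLabel "zero users after flush" $ TestCase $ do
      flushDB
      n <- getUserCount
      assertEqual "count" 0 n
  , TestLabel "two users after adding two" $ TestCase $ do
      flushDB
      createUserByName "Ada"
      createUserByEmail "grace@example.com"
      n <- getUserCount
      assertEqual "count" 2 n
  ]
```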
So, one of the repetitions: we have a similar arrange block, a similar act block, and the same flow in every test. Let's factor that out. We can have this runTest helper, which takes a description, the arrange action, the act action, and the expectation, and fills everything in, rather than us having to do it manually. A call then looks like: runTest, the description, then on the next line what to do in the arrange part, then the actual action under test, then the expected result. With that we can rewrite the whole thing — and of course it's reverse, because reverse is always the answer. Now we can fit two examples on this slide rather than a single example, and it's easier to read, because the examples sit right on top of each other. It's a lot easier to get an idea of: hey, what are we testing, and what's expected? It's just information — no syntax, no Haskell to get in your way. I like this a lot, and basically this is the main idea; the rest of the examples are just slight improvements on it.

You can easily see that instead of these tuples, we can have a data type — a record — where we name all of the inputs. Then our tests look like this, which is again slightly better: rather than trying to remember what each tuple element means, each input has a name, and we write it as such. Then, a quick before-and-after: this is the before, and this is the after. Hopefully you agree the after is easier to read, by everyone — even a non-Haskeller can read it, get the idea, and maybe even change it, whereas the before is a lot scarier to read and change.

Okay, and then I think we can do even better, in this case especially. I was going to say that we can fit two examples on a single slide, which is already kind of awesome, but notice that in the arrange step we always do posts to create users, and in the act step we always do a get. So in this particular case we can improve this module's testing further by modeling it with better types. The arrange, we noticed, is always either create-by-name or create-by-email. So we can simplify our count test: rather than taking IO actions for arrange and act, we just take a list of our ArrangeAPI values. The runTest helper becomes similar, but instead of running an arrange action passed as a parameter, we traverse the list with a runArrange function that performs the appropriate post for each step. And instead of passing in the get, the act is always getting the count, so we just assert the result against what we expect. This makes it even easier to read, and now the complete example fits three tests on a single slide. Arguably this is even easier for a domain expert to read — and maybe even modify — because each test is literally a name, what to do before, and the expected result. That's exactly the information you'd expect from this sort of test, all in a form optimized for reading and understanding rather than for writing.
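Here's a sketch of those refactoring steps, reusing the hypothetical helpers from the previous sketch (flushDB, createUserByName, createUserByEmail, getUserCount). The names runTest, ArrangeAPI, runArrange, and countTest are my own illustration of the idea, not the exact code from the slides:

```haskell
import Data.Foldable (traverse_)
import Test.HUnit
-- (flushDB, createUserByName, createUserByEmail, getUserCount as before)

-- Step 1: factor out the shared arrange/act/assert flow.
runTest :: (Eq a, Show a) => String -> IO () -> IO a -> a -> Test
runTest description arrange act expected =
  TestLabel description $ TestCase $ do
    arrange
    actual <- act
    assertEqual description expected actual

-- Step 2, for this particular module: model the arrange steps as data,
-- since they're always one of two posts.
data ArrangeAPI = CreateByName String | CreateByEmail String

runArrange :: ArrangeAPI -> IO ()
runArrange (CreateByName name)   = createUserByName name
runArrange (CreateByEmail email) = createUserByEmail email

-- The act step is always getting the count, so it disappears too.
countTest :: String -> [ArrangeAPI] -> Int -> Test
countTest description steps expected =
  TestLabel description $ TestCase $ do
    flushDB
    traverse_ runArrange steps
    actual <- getUserCount
    assertEqual description expected actual

-- Each test is now just data: a name, the setup, the expectation.
countTests :: Test
countTests = TestList
  [ countTest "no users"  []                                            0
  , countTest "one user"  [CreateByName "Ada"]                          1
  , countTest "two users" [CreateByName "Ada", CreateByEmail "g@x.io"]  2
  ]
```

The design point is that the noise (database flushing, HTTP calls, assertion plumbing) lives in one helper, and the tests themselves become pure data that a domain expert can scan and extend.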
And that's the main thing I wanted to share. So let's quickly go to the takeaways.

If there's one thing to take away from this — not even the unit test part — it's this: I don't think it's healthy to say "unit tests are better", or "property tests are the way", or "we should always use integration tests". They complement each other in very interesting ways, and we should figure out how to use them together, depending on the code base, to the best effect. Also, look into the other kinds of verification, especially formal verification: some of it might surprise you, and you might have some fun. Of course, make sure CI runs the tests — the same tests you can run locally. Making it easy to run tests locally is also very important, because if it isn't easy, folks won't run them. Make it easy to debug tests — especially if you have a lot of integration tests, people are going to be debugging a lot, so that's very useful. Again, pay attention to test error messages: they're very important for future us, because when you write a test the failure might be obvious, but a few months later it won't be. Great error messages matter. And try out the thing I just showed: think about what data makes sense for a test, and how to make that data the first thing you see, with everything else — the helpers and so on — as an afterthought somewhere else.

Okay, so that's pretty much it. I'll be taking questions, and I'll be in the hangout afterwards. Let me quickly finish the slides and say that we are hiring: if you're into Haskell, we're hiring Haskell engineers and engineering managers. You can find more info at the link on the screen, and we have a booth, so come by, have a chat, and ask questions about what we do. You can also find me on social media — I'm especially active on Twitter; the others, not so much.

Thank you very much, Evie, for sharing your experiences and insights there. Hopefully that's been helpful for people as they think about their processes and how they manage things as well.