In this session I will mainly be discussing a testing technique that has been quite popular in the Erlang and Haskell communities for some time now. Because of its inherent synergy with the principles of functional programming, the technique is becoming more and more useful as we adopt other functional programming languages as well. The usual focus is on immutability of data, referential transparency and things like that, so it is a natural fit for the functional programming paradigm. Just before I start the main presentation, how many of you use some form of xUnit-based testing for unit or integration testing? Most of us do. And how many of you feel that if testing is not automated, it is not really testing? But xUnit-based testing is not without its deficiencies. What is the main pain point, according to you, when we write xUnit-style test cases? Basically, it is the generation of data. The data has to be handcrafted, so it depends on the domain knowledge of whoever creates it. With a sufficiently complex domain model there is always a chance that I may miss some of the edge cases. This is where property-based testing adds value to the process of testing. In the course of this discussion we will try to find out exactly what value it adds. The phrase "generative data" expresses the intent of the process: the framework will generate data for us, and that takes a lot of the headache out of testing. The next idea is executable domain rules. We will explore what exactly we mean by a property, particularly when we are dealing with a complex domain model. A domain model has a lot of properties. Say we are dealing with a banking system. A banking system has some core domain rules, which in our earlier days we used to document in our SRS and SDD documents and things like that. But now, can we make these rules explicit and run them regularly as part of our build process, so that we are sure that the system we are building and going to deliver to the client honors each of these business rules? Property-based testing highlights those properties and gives you a framework through which you can capture those rules not only as dumb documents but as executable artifacts. We first start by looking at xUnit-based testing. And let me be clear: this is not a tool talk. I am not going to focus on tools; I will focus mostly on the idea of properties, what properties we need to verify and how exactly we verify them. So we start with xUnit-based testing and try to see why it is not enough. xUnit-based testing has its own uses; I am not bashing xUnit, but we can supplement it with this kind of testing to add more value to our process. What exactly is a property, and what properties do we need to verify? We will see that if we use a decent statically typed language with some specific support, some properties can be verified for free; we do not need to write tests to explicitly verify them. In some dynamically typed languages we may need to test them explicitly, but we get a lot of properties for free if we use a statically typed language with support for parametric polymorphism. As the tool I will use ScalaCheck, which is one of the members of the QuickCheck family of testing tools.
And I will be using ScalaCheck to demonstrate some of the code snippets. I will end the talk with some ideas on domain model testing and how your domain model can be made more robust if you have a bunch of properties backing up your actual system. xUnit-based testing, as all of us know, is convenient, widely used and integrated with most IDEs, so we have rich tool support. But the typical cycle in xUnit-based testing is that you need to define your own unit: typically, in an object-oriented language a class is the unit, and in a functional language a function can be your unit. You define the unit, you define the setup and teardown rules, deciding what data you want to set up and what data you want to destroy after the test is executed, and then you prepare a set of data and write the tests against that data. But is that enough? If your system is complex, your xUnit tests often grow out of bounds: verbosity, teardown to define, setup to define, data of your own to craft. xUnit-based testing is mainly targeted at a very low level of abstraction. It is difficult to manage the isolation of data from logic; the data often grows out of bounds and it is very hard to decouple data from logic. And as I was saying, when I am writing an xUnit test case I am always scared that I may have missed some edge cases and boundary conditions, because whatever tests I write depend completely on my own knowledge.

Let us take a very simple example. We define a function append, which takes two lists as input and produces a new list as output. How do you show the correctness of the implementation? One of the properties this function needs to satisfy is: if the size of xs is s1 and the size of ys is s2, then the resultant list will have size s1 + s2. This is an absolute invariant which my implementation needs to honor, which my implementation needs to guarantee. Besides testing, there are a couple of ways we can ensure this. Let us start with one of the basic ways of proving it, the theoretical way: theorem proving. We want to do it using mathematical induction. This code, incidentally, is in Haskell. I have done that intentionally, because when doing theorem proving it is easy to demonstrate the code in Haskell; it is concise and has this mathematical flavor to it. So we define the length and append functions: the length of an empty list is 0, and the length of a non-empty list is 1 plus the length of its tail. Similarly with append: appending anything to an empty list results in that same list, and then there is the recursive rule. Our induction base is length ([] ++ ys) = length [] + length ys: appending the other list to an empty list gives me the sum of the size of the empty list and the size of the other list. Our induction hypothesis is length (xs ++ ys) = length xs + length ys, and we need to prove that the equality also holds for a list that is one element longer, length ((x:xs) ++ ys) = length (x:xs) + length ys. Proving the base case is easy. On the left-hand side we have length ([] ++ ys); by the first rule of append this reduces to length ys. On the right-hand side we apply the first rule of length, length [] = 0, and we again get length ys. So our base case is proved. The induction step goes the same way; I am not going into the details of it.
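For readers following along, the induction step the speaker skips is the standard equational argument over the definitions just described (this derivation is added here for completeness; it is not taken from the slides):

```latex
\begin{align*}
\mathrm{length}\,((x{:}xs) \,{+\!+}\, ys)
  &= \mathrm{length}\,(x : (xs \,{+\!+}\, ys))       && \text{second rule of } {+\!+} \\
  &= 1 + \mathrm{length}\,(xs \,{+\!+}\, ys)         && \text{second rule of length} \\
  &= 1 + \mathrm{length}\,xs + \mathrm{length}\,ys   && \text{induction hypothesis} \\
  &= \mathrm{length}\,(x{:}xs) + \mathrm{length}\,ys && \text{second rule of length}
\end{align*}
```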
It is obvious that it will go through, so theorem proving is one way of establishing the correctness of our implementation. But as we all know, it is not a practical approach for mainstream programming. Of course, mainstream sucks, but still. One more option is to use your favorite unit testing library. Here I am using ScalaTest, and as I mentioned earlier, I have manually crafted this data. Being a simple example, it may be obvious that it is correct, but once again this does not scale with the complexity of the model. Still, it is definitely one of the ways we do it, and it has the disadvantages we discussed. (A minimal sketch of this hand-crafted style appears a little further below.)

Let us look at a third way of establishing correctness. Here I have used a dependently typed programming language, Idris. The feature of such languages is that the type system is much richer: you can encode values as part of the type. So here in Idris, when I define a vector, the length of the vector is part of the type. When I define an append function using the vector type from Idris, the return type of the function itself tells me that the length of the resultant list will be n plus m. There is no way I can go wrong with this implementation if it type checks correctly. So types depend on values, powerful constraints can be encoded within the type signature as we saw with the definition of vector, and the result is correct by construction. Dependent typing is one of the most actively researched fields in programming languages today. Languages like Idris are being actively researched, but they are not yet ready for production. In fact, a gentleman named Miles Sabin has been doing some similar work with a library named shapeless in the Scala world. Miles's library brings these flavors of dependent typing to Scala: you can define a list where the size of the list is part of the data type, so you can get guarantees similar to a dependently typed language using shapeless. But once again, shapeless does not give you the total power of a dependently typed language. You can use shapeless today, it is a great thing, but it does not have that full power. It defines a custom data structure, a vector, which is dependently typed in the sense that the length is part of the data type. It will give you an error at compile time, and that was precisely the premise of the dependently typed languages as well: the dictum is, if it type checks, then it is correct. No, the functions will take care of that: if you have a list of length n and you extend it by 2, then the resultant data type will have n plus 2, because the length is part of the data type. So it is a different list, a different data type. So, until all these fancy things mature, we need to have some better way. What if the implementation is incorrect, can it prove that my implementation is correct? Actually, it can prove your implementation is correct only with respect to that constraint; it can still have lots of other problems. It will error out at compile time if you are using a dependently typed language. The creation function will take care of changing the length as well: the append function or the cons function or whatever will return you a list whose data type carries the appropriate length. Yes, actually Scala's type system is fairly rich and it has some specific features which enable the implementation of these kinds of capabilities.
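To make the contrast with the hand-crafted unit test concrete, here is a minimal ScalaTest sketch of what such a manually written test might look like (an illustration, not the speaker's actual slide; the specific cases are invented):

```scala
import org.scalatest.funsuite.AnyFunSuite

class AppendSuite extends AnyFunSuite {

  // The implementation under test; here simply delegating to list concatenation.
  def append[A](xs: List[A], ys: List[A]): List[A] = xs ++ ys

  test("append adds up the sizes of its inputs") {
    // Hand-crafted data: coverage is limited to the cases the author thought of.
    assert(append(List(1, 2), List(3)).length == 3)
    assert(append(List.empty[Int], List(1, 2, 3)).length == 3)
    assert(append(List(1), List.empty[Int]).length == 1)
  }
}
```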
In Haskell, too, they are adding lots of dependent typing capabilities. Yes, yes, on the existing language. Is this regular Scala code? Regular Scala code; you will have to use the functions of shapeless, but it is absolutely normal Scala, obeying the standards defined in the Scala specification. It depends on what you mean by conventional. For example, if you look at the code in Scalaz, which is one of the functional programming libraries, it looks a bit different from standard Scala code. It is correct Scala code, but it is not what is popularly used; it is not mainstream Scala, so to say.

Now we can explore the other way of proving, or at least ensuring, the correctness of some of the properties which my model defines. If I have to state this property, that append adds up the two sizes, what exactly are we trying to verify? We are trying to verify that the two lengths add up in the resultant list. This is the invariant of our function, which we can encode as a generic property. Now think of this property as an abstraction, as a declarative expression. If we have this declarative expression and the library generates lots of data for me, then all I have to do is think of the property I need to verify. If the property uses primitive data structures, the library will take care of generating them; if I use my own custom data structures, then I need to define the data generators. Defining a data generator is much more scientific and much less tedious than writing the data by hand.

So this takes us to the question: what exactly is a property? Constraints and invariants that must be honored within the bounded context of the model. Every model that we design has a bounded context, and the assumptions are valid within that bounded context. Within it, the model needs to honor various constraints and invariants. For example, in a personal banking system, I cannot do a transaction on a closed account. This can be a property which validates the sanity of your system. If I do a debit and a credit of equal amount, then the balance should remain the same. That is a valid property, a valid invariant which my system has to honor. These are typical examples of properties. They are sometimes called laws, and they ensure the well-formedness of an abstraction.

Let us take a slightly more complex example: a monoid. A monoid is a generic abstraction which has a zero element and an associative binary operation. That is the definition of a monoid, the plain structure. But in order to be a valid monoid, it needs to honor certain laws. We have three laws, and these three laws define a monoid. They are not validated by the compiler; that is why it is important to write tests which validate these laws for my monoid. If I define one of my custom abstractions as a monoid, then I need to ensure that throughout the life cycle of the system, at every instant of time, that monoid obeys all three laws. These are typical candidates for verification through property-based testing. The laws written down here are left identity, right identity, and associativity of the binary operation. Every monoid that you define must honor all the laws of the abstraction, as the sketch just below shows.
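A minimal sketch of what those three laws might look like as ScalaCheck properties, assuming a simple Monoid trait (the trait and the Int instance here are illustrative, not taken from the slides):

```scala
import org.scalacheck.Properties
import org.scalacheck.Prop.forAll

// Illustrative abstraction: a monoid has a zero element and an associative binary operation.
trait Monoid[A] {
  def zero: A
  def op(x: A, y: A): A
}

object MonoidLaws extends Properties("Monoid[Int]") {

  // A sample instance: integers under addition.
  val m: Monoid[Int] = new Monoid[Int] {
    val zero = 0
    def op(x: Int, y: Int) = x + y
  }

  // The three laws, encoded as executable properties that run on generated data.
  property("left identity")  = forAll { (x: Int) => m.op(m.zero, x) == x }
  property("right identity") = forAll { (x: Int) => m.op(x, m.zero) == x }
  property("associativity")  = forAll { (x: Int, y: Int, z: Int) =>
    m.op(m.op(x, y), z) == m.op(x, m.op(y, z))
  }
}
```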
Unless you explicitly execute these laws, you will never be sure that the monoid you have in hand is a valid one, a lawful one. It is not that your system will be totally incorrect if you violate a law; even in the standard Scala library there are abstractions which are, strictly speaking, not lawful. But that is intentional: the creator was aware of it and created the abstraction that way because, given the expected pattern of usage, it would not cause much harm to the user. Still, it is always recommended that we have this lawfulness enforced by some means.

Once again we come back to the question: what properties do we need to verify? As I was telling you, with a statically typed programming language that supports parametric polymorphism, there are some properties which we get for free. Take this example: a polymorphic function, polymorphic on a type parameter A, which takes an instance of A and returns an A. If we strictly obey the rules of parametric polymorphism, which by the way is known as parametricity, what can the implementation of this function be? When I say it is parametric, polymorphic on A, the body of the function cannot assume any specific type. I cannot assume that A is an integer or that A is a string; A can be anything, and that must remain the assumption within the implementation as well. Now, I know that almost all languages allow you to do casting, matching on types and things like that. You can also throw an exception from within it, or launch a missile from within this function. But for the time being I am ignoring all that; I am assuming the function is pure and we obey all the laws of parametricity. Then the only implementation that satisfies this contract is the identity function. This is the only possible implementation: you take one instance of a type, and you can only get back that same instance. If A is Int and the function takes 12, then the return value will also be 12. (A one-line sketch of this follows below.)

We can look at this from another angle. If we view it from the standpoint of logic, theorems and proofs, and apply the Curry-Howard isomorphism, then the function takes a proof of a proposition and returns a proof of the same proposition. Ignoring bottom, the only way you can do this is when the proofs are equal, because for a polymorphic type parameter we cannot construct proofs out of thin air; it has to be the proof which the function took as input. There are many such theorems which apply if you honor the laws of parametricity, and you do not need to write any tests for these kinds of properties. These are known as free theorems, and there is a very interesting paper by Phil Wadler called "Theorems for Free!" which says that given just the type, you get back a theorem, a property which will hold true for the function. In fact, some time back Edward Kmett said something to the effect that parametricity tests more conditions than your unit tests ever will. Parametricity is an extremely powerful property, and if we obey its laws, then lots of things are taken care of, tested as it were, without developers having to write explicit tests for any of them. So now that we understand what a property is, what properties do we need to verify? We are once again back to the same question.
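A tiny sketch of that point in plain Scala (purity assumed; no casts, reflection, or exceptions):

```scala
// A function polymorphic in A knows nothing about A, so it cannot inspect,
// transform, or fabricate a value of that type.
def f[A](a: A): A = a   // the identity function is the only pure, total implementation

// By contrast, a monomorphic signature admits many implementations:
def g(n: Int): Int = n + 1   // this also type checks, so the Int signature proves far less
```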
As I was telling you, if your programming language has a decent static type system and support for parametric polymorphism, and you play by the rules of parametricity, you get a lot of properties verified for free. But for the things that are not free, we need to write tests. I will give you some examples using ScalaCheck, which is a library for property-based testing of Scala code, though you can use it for testing Java code as well. It is inspired by QuickCheck from the Erlang and Haskell worlds. The programmer gives the property specifications, and the library does automatic data generation. This last part, automatic data generation, is an extremely powerful capability. You specify the property to be tested, and ScalaCheck verifies that the property holds by generating random data. You can also control this randomness: you can specify the distribution of data you would like to have, check what distribution was actually generated, and tweak it. So it is almost guaranteed that the edge cases will be taken care of. There is a quote from Bryan O'Sullivan et al., the authors of Real World Haskell, which is extremely significant and aptly summarizes the benefits of property-based testing.

Let us look at a few examples. No, this could equally be done with Erlang QuickCheck or Haskell QuickCheck; the reason I am using Scala is that I have been using ScalaCheck for the last couple of years, so I just wanted to share my experience. They are mostly equivalent. Look at this last part: "passed 100 tests". You can run this test repeatedly, and every time it will generate a different set of data. So your code fragment, the property being tested, is really being hammered with lots of data which would be impossible to handcraft. This is the property, l1.length + l2.length == append(l1, l2).length, and this is the result. Can I make it run the exact same test again instead of generating a new random set? The exact same test again? I am not very sure; I think you can, but I am not certain. Say, for example, I ran into an issue, found it and fixed it. Yeah, I know, I will show the failure. No, I know; I am just saying I want to show it to somebody later: this is what I have, this was the failing case. So the next slide in fact shows the failed cases, and you can run the property on that specific value. Even the failure case is extremely useful, because what the library does is take a lot of data, run your property on it, and if it fails on some of the tests, it minimizes the failing case down to the minimum, the simplest test case that still fails. It does this by itself and then reports it; that is the minimization of test cases part. So if you want to show someone that the property failed on this value, you can just invoke the property with that value.
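For reference, a minimal ScalaCheck sketch of the append-length property being demonstrated here (the append below simply delegates to the standard library's ++, as an illustration):

```scala
import org.scalacheck.Properties
import org.scalacheck.Prop.forAll

object AppendSpec extends Properties("append") {

  // The implementation under test; here it just delegates to List concatenation.
  def append[A](xs: List[A], ys: List[A]): List[A] = xs ++ ys

  // The invariant: the sizes of the inputs add up in the result.
  // ScalaCheck generates the lists; on success it reports "OK, passed 100 tests."
  property("adds up the sizes of its inputs") =
    forAll { (l1: List[Int], l2: List[Int]) =>
      l1.length + l2.length == append(l1, l2).length
    }
}
```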
So, okay, if the property is actually going to call some external modules within that function, how will this work? It should be completely referentially transparent and immutable, but since I am calling some external modules, what will happen there? No, I did not get your question. Say I have a method called square inside my class that does just that, multiplies n by n; but before doing that, it calls some external module to do something. But your intention is to test that square thing, right? Correct; the property that you are testing is about that square thing. Correct, correct. So to the property verifier you are giving that function only, right? So I am asking: it is going to invoke that function with completely random sets of data, and when it actually invokes that function, that is going to call some other dependent module which is used inside that function, right? Yeah, so in that case the only option will be to mock: mock the rest of the stuff and test only the function itself. So that is possible here as well. Yeah, yeah.

This one, right? Yeah. So, in the examples we have seen till now the argument type is int. But how well does it work with other types? Here it is randomly generating different data because it knows what data an integer can take. But for other types? Yes, custom generators; I will show you. The generators that we saw in the last slide work on known data types like integer, a primitive data type for which ScalaCheck knows how to generate lots of data. But I can have my own custom generator for any data type. For example, here I am creating a generator which will give me a small integer. I do not want to test on the whole set of integers, only on a small set. There I can give something like Gen.choose(1, 100): every time this generator is invoked, it will generate an integer between 1 and 100. So I can verify like this, using my custom generator on a primitive data type. But in real-life cases we have custom data types as well; I may have my own data type, and I will show you examples.

Before that, let us see some of the other variants of custom data generation. I may want to generate values in a range. Or I may want a generator which gives me one of a fixed set of values, with Gen.oneOf. Or a conditional generator: I can choose from a range, say 0 to 200, and keep only the values satisfying the predicate which follows. Then I can control the frequency. As I was telling you, normally the data which is generated is purely random, but I can control the frequency: I can say that I want to generate vowels with this kind of frequency. I can generate containers as well. These are all built-in functions which come with ScalaCheck.

Now comes the more interesting part. Suppose I have my own data type: an algebraic data type, account, which has these attributes. Typically, when I have a complex domain model, I will have lots of these, right? These are the most useful cases; I may not be interested in testing only with integers and strings and things like that. So this is my model, and for this model I can write a custom generator like this. Once I declare that this is the generator for my algebraic data type account, then whenever data needs to be generated for arbitrary accounts, this generator will be used.
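A sketch of what such a custom generator could look like; the Account case class and its fields here are illustrative stand-ins for the model on the slide:

```scala
import java.time.LocalDate
import org.scalacheck.{Arbitrary, Gen}

// An illustrative domain type; the fields of the actual slide's model may differ.
case class Account(no: String, name: String, openedOn: LocalDate, balance: Long)

object AccountGen {
  // Build a generator for Account by combining generators for its fields.
  val genAccount: Gen[Account] = for {
    no      <- Gen.listOfN(10, Gen.numChar).map(_.mkString)   // 10-digit account number
    name    <- Gen.alphaStr.suchThat(_.nonEmpty)               // non-empty name
    daysAgo <- Gen.choose(0, 3650)                              // opened within ~10 years
    balance <- Gen.choose(0L, 1000000L)
  } yield Account(no, name, LocalDate.now().minusDays(daysAgo.toLong), balance)

  // Registering it as Arbitrary lets forAll pick up Account values automatically.
  implicit val arbAccount: Arbitrary[Account] = Arbitrary(genAccount)
}
```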
In real life, though, normally that is not the case: if the account is of a certain type, or the person is, say, a male or a female, then only certain data can be generated. Yeah, the fields here are all independent of each other, right? Right now it is all independent of each other, and that is normally not what happens. Typically, the way we address this, and I will take the example from Scala, is an idiom called the smart constructor idiom. You take care of that in the constructor. Here, for simplicity, I have just exposed the basic constructor of the class; instead of that, it is customary to invoke a smart constructor, which will take these kinds of decisions and build the appropriate account for you. You feed it the basic data, and depending on that, it creates the appropriate type of account. If you have multiple types of account, you are talking about that? Yeah; typically if you have multiple types of accounts, then that entire subtype hierarchy can be abstracted within the smart constructor: the smart constructor takes some of the data and gives you back the appropriate instance of account. Similar things you can do within the generator also. You can customize generators: if two fields are related, you can write one more generator which takes those two fields and generates valid combinations depending on the logic you put there. So I can say, if this field is that, then... Yeah, sure, okay.

Can you do all the things you mentioned, that is, use the combination rules: if this field is this, then the related field must be that? So in this case, if the account no is 1, then the name should be, say, David, and so on and so forth. But then the focus of the test shifts from testing what you actually mean to test to writing tests for the generator itself. No, that is what I was talking about when I mentioned smart constructors. If you have specific business rules, then those rules should be part of your code as well, right? And typically, if a rule is related to the construction of an object, then the constructor holds that rule: not all objects that can be constructed are valid; there are specific domain rules which you need to honor in order to construct a valid object. So you invoke the appropriate smart constructor.

Say we have a class called account, and I have a constraint on the name: the name must have, let us say, six characters. Typically I would put that logic in the initialization, in my smart constructor, right? The name must have six characters, or I throw an error. Now, when I am writing a generator for this class, since generators work based on types, I do not think it would be able to capture that constraint, so I might end up repeating the logic that I wrote in the initialization. Is that the case? No, actually, the idea is that if you give that constructor a name which does not have six characters, then it will throw an exception or return some error, something like that, right? And in the tests you would typically like to verify both paths. So you set up two properties, one which tests with valid data and the other which tests with invalid data, and you use conditional generators here: you define two generators, one of which generates invalid data, using a conditional generator, and the other generates valid data. (A small sketch of this follows below.)
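A hedged sketch of that idea: an illustrative smart constructor owning a name-length rule, plus one generator for valid names and one for invalid ones, so the properties only assert on outcomes instead of re-implementing the rule (the class, rule, and helpers are invented for illustration):

```scala
import org.scalacheck.{Gen, Properties}
import org.scalacheck.Prop.forAll

object SmartConstructorSpec extends Properties("Account.create") {

  // Illustrative smart constructor: the domain rule lives in the production code,
  // not in the test. Here the rule is "the name must have at least six characters".
  final case class Account private (no: String, name: String)
  object Account {
    def create(no: String, name: String): Either[String, Account] =
      if (name.length >= 6) Right(new Account(no, name))
      else Left(s"name too short: $name")
  }

  val genValidName: Gen[String]   = Gen.listOfN(8, Gen.alphaChar).map(_.mkString)
  val genInvalidName: Gen[String] = Gen.listOfN(3, Gen.alphaChar).map(_.mkString)

  // One property per path: the test checks the outcome, it does not repeat the validation logic.
  property("accepts valid names")   = forAll(genValidName)   { n => Account.create("1", n).isRight }
  property("rejects invalid names") = forAll(genInvalidName) { n => Account.create("1", n).isLeft }
}
```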
But when we say invalid data, the only constraint is that the name must have fewer than, let us say, six characters. Yes, yes. Then it is a duplication of logic, right? I am duplicating the same logic. Yes; the alternative is to have one single generator which generates all sorts of data, and in the property verification you check whether the input is invalid or not, whether it throws an exception or not. Okay. The main problem with xUnit test cases is that you need to supply the data. Here you are not defining any data; you are constraining the search space. For simplicity I have chosen one, two, three here, but you can actually give an arbitrary int, an arbitrary string, in which case you do not need to constrain the set of data at all. What often happens is that if you have a large data structure like this and you specify arbitrary everything, like I have specified an arbitrary date, then a combinatorial explosion happens and so much data is generated that the test case takes a lot of time to execute. So the typical idea is to constrain the search space, so that the tests also execute in a meaningful amount of time.

So, correct me if I am wrong, but in C# there is a library called NBuilder which, when you mention the type, will generate some random data based on that type. Can I safely say that this is something akin to that, but it generates all possible sets of data of that type? Yes, something similar to that.

This is another, slightly more complicated example, where we have a recursive data type. You can define generators for that as well. It is a tree data type which has a leaf, and a node with two recursive children. You can generate a leaf, or you can generate an arbitrary node, and if you define genTree as one of genLeaf and genNode, it will give you random instances of trees and leaves; those combinations are taken care of, as sketched below. One thing we need to remember is that it is only Scala code, so whatever gets generated is Scala code only.

Here is a real-life example from a banking system, where I have defined some properties which can be used for testing some of the constraints of the system. This arbitrary combinator is very important: it takes your generator and produces arbitrary data from it; the generator goes as an input to it. The rest is mostly self-explanatory; the code is declarative. Sorry, which one? No, the generators are not lazy by default; laziness you can incorporate using the standard techniques of Scala, by passing arguments by name and things like that. And these are some examples of properties which we can encode as part of our test suite: equal debit and credit retain the same balance, and so on. The most important point, as I was saying earlier, is that you can keep these properties as part of your test suite, and every time you run it, it runs on a different set of data, so the core business rules of your system are verified every time you run the build or the tests.

So I have a question here. I am convinced that this will make the management of test data easier; it is in one place. But I am still not convinced that it takes away the fact that I need to be aware of the domain that I am working with. Definitely, you need to be aware of the domain you are working with; there is no escape from that. You need to be a domain person to determine what property to verify. What the library does for you is generate the data, and it gives you a declarative way of managing these properties, not the data.
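Coming back to the recursive tree example mentioned above, a sketch of such a generator might look like this (the explicit depth bound is an illustrative way to keep generation finite; the data type itself is invented):

```scala
import org.scalacheck.Gen

object TreeGen {
  sealed trait Tree
  case class Leaf(value: Int) extends Tree
  case class Node(left: Tree, right: Tree) extends Tree

  val genLeaf: Gen[Tree] = Gen.choose(0, 100).map(n => Leaf(n): Tree)

  // Recursive generators need a bound, otherwise generation may never terminate.
  def genTree(depth: Int): Gen[Tree] =
    if (depth <= 0) genLeaf
    else Gen.oneOf(genLeaf, genNode(depth))

  def genNode(depth: Int): Gen[Tree] = for {
    left  <- genTree(depth - 1)
    right <- genTree(depth - 1)
  } yield Node(left, right)

  // Tie the recursion depth loosely to ScalaCheck's size parameter.
  val arbitraryTree: Gen[Tree] = Gen.sized(size => genTree(math.min(size / 10, 5)))
}
```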
The focus shifts from managing data to managing properties. Exactly, exactly. Can it be used for stress testing? Yes, you can do stress testing; you can generate lots of data, and the easiest way of doing that is to run the generator in a loop, because every time it generates a new set of data, right? And you can also run it on a huge set of data.

Yeah, take senior citizens: say if you are up to 60 you get 9% interest, and if you are above 60 you get 9.5% interest. That is a simple rule. Then my generator for the age field needs to generate only two kinds of values: one at 60 or below, say between 1 and 60, and one greater than 60. I do not need to keep generating continuously. But how are you sure that your program also behaves correctly for negative input? OK, so these are exactly the kinds of things it catches. Yeah, but is the solution then to run the test 100 times? Or to identify, as I identified, only three classes of input; maybe it is not three but five or six classes. Because if you are going to run it 100 times, it is going to take an enormous amount of time to run the tests. Yeah, actually for some systems it does. I will tell you one of my experiences. I was working with a workflow system which had a very complicated finite state machine and lots of transitions, some of them valid and some of them not: a typical FSM with 1,000-plus nodes. For that, writing data by hand, xUnit style, was a nightmare. Yeah, actually they are instances of exactly this.

So, coming back to the point he raised about needing to understand the domain well: at some particular point in time, the amount of intricacy in the domain will overwhelm you, and at that point you deal with the problem by just generating a lot of random data. Is that the solution then? No; first of all, if you do not understand the domain well, you will never be able to come up with a proper model, right? Yeah, but like you said, you have an FSM with 1,000 nodes. You may understand the 1,000 nodes, but you will not understand all the transitions between them; your ability as a human to understand the transitions and encode them all as test cases will just be too limited. So at that point you fall back on generating lots of random data. Yes, but if you have 1,000 nodes, your domain model has the logic of the transitions, right? So the generator will use that. The generator that you write will use that; you do not have to handcraft those data by hand. So you are not duplicating, you are just handing it over to the generator. No, the portion of the code which says that this transition is valid, that is what is being called here; you are not rewriting that part here.

Here is a real-world example: we had a lot of config values. If this config is set, do something, and there were six or seven of them, and we had logic saying if this config is there and that config is there, then do this. How do you write property-based testing for that? So you will have to generate values out of that. The logic that you just described, if this is this and that is that then do this, is written somewhere as part of your code, right? It is part of your domain model. But I want to test exactly that. Yeah, so invoke that function. But then I will be calling that same function again; whatever my testing is, the question I want to ask is: if I am calling that function again to generate the value, what am I really testing?
Am I testing it, or its test coverage, or what is it? No, you are testing your implementation: whether it behaves correctly for all the values, all types of values that it receives, in all cases. But then what is the property I am testing for? Then I am just generating the values and checking whether, for specific values of these data items, my config handling works okay or not. But that is exactly the question: how do you determine whether it is working okay or not? That is what confused me. I understood the generator part, but if I have to define the property, it will be exactly a copy of the if-then-else cases from the code, and if I am copying that code itself, what am I really testing? No, why do you say you will need to copy the code? Then how will I define the property I need to verify? The property will invoke the function: the function that you have as part of your domain logic will be invoked as part of the property. And if that function takes three data items, A1, A2 and A3, then the property will check, with the generated data, whether it holds good for all values of A1, A2 and A3. Of course, if there is a logical error within the property itself, within your definition, then this cannot catch it, right? What you are describing is a finite state machine, and what he is talking about is the alphabet set. Your alphabet set is fixed, based on your finite state machine, and you check the transitions. So basically the property you are testing is a combination of your alphabet for that FSM; you need not re-implement the logic to know what your alphabet is. Given that you are at any particular state, there is only a finite number of symbols you can receive, so you can probably run all of them, provided you have written code that will actually fail for a wrong transition. I think this is a valid point; can we discuss it offline? Because I am out of time.

Okay, so some of the other features: conditional generators we saw, classification we saw, and you can also create sized generators, where you specify how large the generated data should be. And there is test case minimization for failing tests, which is an extremely important feature that I showed. One thing I could not cover is stateful testing. Everything I showed was referentially transparent, without any inherent state, but stateful testing can also be done using this library; if you go to the ScalaCheck wiki, you will see an example.

So the essence of property-based testing is that it helps you identify the constraints and invariants of your model, which you can encode as properties, and then, with the help of the generated data, you can verify those properties. For testing domain models, properties help you think at a higher level of abstraction, because you are thinking of the system as a whole. Basically, you encode domain rules as properties, as I told you: executable domain rules. At the beginning I said that in earlier days we used to document them as dumb statements in an SRS or SDD; now we have a way to specify them explicitly as part of your test suite, as in the sketch below. And it has a synergy with functional programming, because it relies on statelessness: no global state, immutable data, pure functions, all of these fall into place with this kind of testing.
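A small sketch of one such executable domain rule, using an illustrative immutable Account model rather than the talk's actual banking code:

```scala
import org.scalacheck.{Gen, Properties}
import org.scalacheck.Prop.forAll

object DomainRules extends Properties("Account") {

  // Illustrative domain model: an immutable account with pure credit/debit operations.
  final case class Account(no: String, balance: BigDecimal) {
    def credit(amount: BigDecimal): Account = copy(balance = balance + amount)
    def debit(amount: BigDecimal): Either[String, Account] =
      if (amount > balance) Left("insufficient funds")
      else Right(copy(balance = balance - amount))
  }

  val genAccount: Gen[Account] = for {
    no  <- Gen.listOfN(10, Gen.numChar).map(_.mkString)
    bal <- Gen.choose(0, 1000000).map(BigDecimal(_))
  } yield Account(no, bal)

  val genAmount: Gen[BigDecimal] = Gen.choose(1, 10000).map(BigDecimal(_))

  // Executable domain rule: a credit followed by a debit of the same amount
  // leaves the balance unchanged.
  property("equal debit and credit retain the balance") =
    forAll(genAccount, genAmount) { (acc, amt) =>
      acc.credit(amt).debit(amt) == Right(acc)
    }
}
```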
Yeah, the downside is that if your data structure becomes quite complicated and you try to generate arbitrary data for all of its elements, it creates a huge load; in that case you may need to limit some of them. If it finds a failing test case, it will shrink it to the minimal one. I really doubt whether it can absolutely prove that it is the smallest test case that fails, but what are the general heuristics, how does it shrink? Yeah, the general heuristic is that once it fails, it tries to find out which condition was not satisfied, and then it tries to minimize the test case down to that particular condition. I am not very sure of the implementation, how it is implemented, but the general principle is that it tries to find the minimal failing test case. No, I get that it tries to find it; I was more wondering how it does that, because it is not very intuitive to me. No, I would need to look at the implementation, I am not very sure. Some sort of backtracking, yeah: while generating, it works from some tree-like structure of candidates, it generates data for this property, then this and this, and then it backtracks to find where it failed and tries to construct a test case which has only that condition in it. Something like that, but I am not very sure of the exact implementation, okay. Thank you.