Today, we will briefly look at testing of SD models. Today's agenda covers a few broad topics under the concept of testing SD models: model debugging, model verification, model validation and sensitivity analysis. So, how do we want to do model testing? Perfect models are going to be difficult in the case of SD, since the correctness of a model is relative to its purpose and varies widely depending on the model user as well as the modeling conventions. However, we need to build confidence in the model we have built, so that we can gain useful insight as well as present our findings appropriately for the purpose at hand. So, we want to test our model so that we can build confidence in it and have some belief in the results it gives us. Model testing should be designed to uncover the various flaws or errors we make in modeling. Some are pretty straightforward if they are programming kinds of errors, but logical errors are a little more difficult, and we can use what we find to improve the model and make it better. However, there are common pitfalls: we tend to test to prove the model is right, rather than to uncover what is wrong with it; sometimes key tests are simply not done; modelers fail to document results, which is nothing new, documentation has always been an issue ever since we started doing computations; and modelers and clients hold on to confirmation bias and preconceptions despite evidence to the contrary. Sometimes you are not happy with what the results show, but even so, you should expect the model to be wrong. So, we need to address these things systematically. We will start with a few basic steps and then go ahead and look at how to validate the model.
So, the first step towards this is model debugging. We already had an introduction to it last class and some time ago, where we looked at a model description and tried to uncover the various errors within the model, so we already have some experience with this. Debugging the model consists of tracing the errors that prevent the model from simulating properly and correcting them; it is a pretty basic step. Some common errors, and how to counter them, are as follows. A faulty numerical integration method or time step is being used: the counter is to choose an appropriate method, for example switching between the Euler and Runge-Kutta (RK) methods, or, if the time step is too large, to reduce it to an appropriate value so that the model simulates properly. A wrong sign in the stock equations: one way this arises is with net flows. Many times, instead of having an inflow and an outflow explicitly, we model a net inflow; if we do that, we have to be careful with the signs so that the model simulates correctly. Floating-point overflows, where values become too big or too small, or we end up dividing by zero: the way to catch this is to trace the computation, figure out at what point the error occurs, look at the table of values, and correct the model, checking whether that large value is actually realistic or not. There are functions in Vensim such as ZIDZ and XIDZ for the divide-by-zero case: they let you specify what value to return when you divide by zero.
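To see why a too-large time step is itself an error source, here is a minimal Python sketch (Python stands in for Vensim here; the decay model and the numbers are purely illustrative, not from the lecture's model). Euler integration of a simple first-order decay diverges when the step is too coarse, even though the true solution decays smoothly:

```python
# Illustrative sketch: Euler integration of dx/dt = -k*x.
# With dt too large (dt*k > 2) the numerical solution blows up,
# even though the exact solution decays toward zero.
import math

def euler_decay(k, dt, t_end, x0=100.0):
    x, t = x0, 0.0
    while t < t_end - 1e-9:
        x += dt * (-k * x)   # Euler update: x(t+dt) = x(t) + dt * dx/dt
        t += dt
    return x

k, t_end = 5.0, 2.0
exact = 100.0 * math.exp(-k * t_end)           # true value, about 0.0045
coarse = euler_decay(k, dt=0.5, t_end=t_end)   # dt*k = 2.5: unstable, oscillates and grows
fine = euler_decay(k, dt=0.01, t_end=t_end)    # small step tracks the true decay
print(coarse, fine, exact)
```

This is why the checklist says to reduce the time step or pick a better integration method: the coarse run is not merely inaccurate, it shows behavior the real system cannot have.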
Dividing anything by zero is indeterminate; to avoid that, the functions XIDZ and ZIDZ can be used. You can look them up in Vensim's help: when you divide by zero, what should the expression return, 0, or 1, or some other value? That is what we specify, and Vensim supports it. So these may be required if division by zero indeed occurs. Another check is to look at table (lookup) functions to see whether they are extrapolating more than you desire, especially at the extremes; in that case we need to supply values so that the function extrapolates correctly. So there are two types of messages: one is a warning that the computation is beyond the table function's range, meaning it is extrapolating, which in some cases may be OK; the other is a floating-point error, where we are indeed dividing by zero or some error in the computation is producing a very large value, and that we need to fix. When we simulate and get an error, that is something we must fix, and there could also be errors in the structure itself, such as flows not connected to stocks, an improperly drawn diagram, etcetera; these we need to check against the model equations and the structure directly. We can use temporary hacks, such as floor or ceiling functions, or simulate a separate part or submodel of the entire model, to understand the issue and then fix the actual problem occurring within the model. Those are the broad steps of debugging. Closely related to debugging is model verification. Overall, the question we want to answer is: have we built the model correctly? It includes all the debugging steps, but also goes beyond them to trace and test the model to see whether all the logic is correctly modeled and captured as per the specification. So, model verification starts with basic debugging and goes a little beyond that.
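To make the divide-by-zero functions concrete, here is a small Python sketch of what ZIDZ and XIDZ do (this is a re-implementation for illustration only; see Vensim's help for the authoritative definitions):

```python
# Sketch of Vensim's safe-division functions.
def zidz(a, b):
    """ZIDZ (Zero If Divide by Zero): return a/b, or 0 when b is 0."""
    return 0.0 if b == 0 else a / b

def xidz(a, b, x):
    """XIDZ (X If Divide by Zero): return a/b, or the fallback x when b is 0."""
    return x if b == 0 else a / b

print(zidz(10, 0))     # 0.0 instead of a floating-point error
print(xidz(10, 0, 1))  # 1, the caller-chosen fallback value
print(zidz(10, 4))     # 2.5, the ordinary quotient when b is nonzero
```

The point is that you, the modeler, decide what a ratio should mean when its denominator passes through zero, instead of letting the simulation crash.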
We have a few tools in Vensim itself; two of them we have already used are Check Units and Check Model. They are necessary, but not sufficient. That means even if you run the model through Check Units and it says the units are fine, there could still be errors inside the model, depending on what values and what connections you have given. So, they are necessary, but not sufficient, and we need to check for correctness beyond what these tools report. Over the years, people have come up with various checklists we can go through to see if a model is correct. First, check units and see that proper names are given to the variables, names that make sense to us. No constants should be embedded in the equations; parameter values should appear only in named parameters, set before the analysis. Choose an appropriate time step. Stock values can be changed only by flows, and every flow should be connected to a stock. We should try to avoid net flows. We should try to avoid IF THEN ELSE, MIN, MAX and other logical statements as much as possible, unless the problem genuinely requires them. We also need to use proper initial values and clearly specify them, so that the model can start in dynamic equilibrium. And lastly, a popular one: make the model aesthetically pleasing. As we organically grow the model, we find that it looks quite complex, but finally it has to be presented to an external audience. One easy way is to use curved arrows instead of straight lines; curves are more aesthetically pleasing than straight, sharp lines. So, this is also quite important. Some of these things may seem quite obvious, and many times we do not do them because they seem too trivial to worry about, but unfortunately those are the very things that end up causing the errors and the various issues in our actual model.
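The checklist item about starting in dynamic equilibrium can be sketched with a generic first-order stock (this example is illustrative, not the lecture's model: a stock with a constant inflow and an outflow equal to stock divided by a residence time). The equilibrium initial value is the one at which inflow equals outflow at time zero:

```python
# Illustrative sketch: initializing a stock in dynamic equilibrium.
# Outflow = stock / residence_time, so inflow == outflow exactly when
# the initial stock is inflow * residence_time.
def simulate(stock0, inflow, residence_time, dt=0.0625, t_end=10.0):
    stock, t = stock0, 0.0
    while t < t_end - 1e-9:
        outflow = stock / residence_time
        stock += dt * (inflow - outflow)   # Euler update of the stock
        t += dt
    return stock

inflow, tau = 50.0, 4.0
s_eq = simulate(inflow * tau, inflow, tau)   # starts at 200 and stays flat
s_off = simulate(100.0, inflow, tau)         # off equilibrium: drifts toward 200
print(s_eq, s_off)
```

A model initialized this way produces a flat line until you deliberately disturb it, which makes any unintended drift immediately visible as a bug.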
So, the one way, or rather the only way, to ensure proper debugging and proper verification is practice. That is what we are going to do today: take up a few models and practice debugging. In fact, this time I am going to lead the debugging; that is, I am going to read the description and point you to different things so that you can observe and follow. Even if you are not able to catch everything in the model, my suggestion is to go back and watch this video again to see how I trace through the model, so that you can follow similar steps. Typically, how I go about checking a model is what I am going to go through. We have three examples for today's class. Each example model, with its errors, is already online; you can download them from Moodle, and as each scenario comes up, we can open the model and see whether it conforms with our understanding. I hope I will remember to mention all the points that I intend to check, but let us see. Again, these are only some of the cases you are going to see, so it is important for you to practice with other kinds of models and actually make some mistakes. From the exam point of view, for example, you may not get the exact same errors; there may be some other combination. I try to give as many examples as possible, but let us see. So, first, let us just understand the model; no need to look at the Vensim model yet, we have time for it. It is a very simple model about muskrats. The muskrat is a large rodent, native to North America and an invasive species in large parts of Europe and North Asia, that is, regions near Russia. So, let us go with this. Suppose there is a muskrat population in an area; initially there are 100 muskrats. So, as and when we read the description, we need to visualize, or rather, what can I say?
We need to imagine how the model is going to look. As soon as we see a term like population, a muskrat population in an area, 100 muskrats, it could be a stock; if it is a population, it is probably a stock. So, let us picture that. The autonomous net increase in the number of muskrats amounts to an average of 20 muskrats per muskrat per year. That means we are talking about some net increase in the muskrat population, so this must be a flow going into the muskrat stock. Suppose that each year, 10 licences are granted to set muskrat traps. These licences are valid for only one year, and each person holding a licence may set 10 traps. Assume the number of muskrats caught per trap is proportional to the number of muskrats, with a catch rate per trap close to 0.2: minimally 0.195 and maximally 0.205. That is the only description given. Since we are already talking about a net inflow, the second part must be referring to how much is removed from the population, so this must be the outflow in the model. Explicitly, as you can see, nothing about births and deaths is given. But if you carefully observe the description, the word "net" is there, which could mean there is something about births and deaths happening; in spite of that, there is an average net increase of 20 muskrats per muskrat. So, that is the description given. Now, we need to download and debug this model. Sometimes, when you open the model, it asks about scaling; 100 percent is preferred so that you get the exact same size. This is the model you will see. The first step is to look at the structure. We imagined there is a muskrat population, some new muskrats come in, and some muskrats get caught and so are removed from the population.
So, the description is not fully complete, but given the description, we can assume that a caught muskrat is terminated, not caught and released back into the population. These are reasonable assumptions to make. There are some variables such as the number of muskrats caught per trap, the number of traps per licence, and the number of licences. It seems structurally OK. Next, we can look at the model and click Check Units; the units seem to be OK. Then Model, Check Model; the model also seems to be OK. Then we can quickly trace through the equations. The new-muskrat rate was about 20 muskrats per muskrat per year. Before we go into that, we can look at the model settings to check the time unit. First, you see that all the descriptions were in years, and the time unit is also years. The time step is 0.0625 with the Euler method, which seems fine and reasonably low, but let us see; we do not yet know what is having an impact. Then let us look at the equations. As we move the mouse over a variable, its units pop up below, and the units actually seem fine. Opening the equations: new muskrats flow into the muskrat population, and the new-muskrat rate is about 20, which is consistent with the numbers we have. A small change in a constant is not that big an issue; we just need to ensure it is fairly accurate. Wrong units are a bigger problem. The muskrats caught per trap should be roughly the muskrat population times the proportionality factor, as per the description, and the number caught should be the number of licences times the number of traps per licence times the number of muskrats caught per trap; that should logically give us the number of muskrats caught, a product of all three. Traps per licence is 10 and the number of licences is 10, which matches the description of 10 licences and 10 traps each, and the proportionality constant is 0.195, as taken.
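Tracing the equations by hand is easiest with the numbers at time zero. Here is a quick arithmetic check in Python, using only the values from the description (100 muskrats, a rate of 20 per muskrat per year, 10 licences of 10 traps each, and a catch rate of 0.195 per trap):

```python
# Hand-check of the two flows at t = 0, from the model description.
population = 100
new_rate = 20                       # muskrats per muskrat per year
licences, traps_per_licence = 10, 10
catch_per_trap = 0.195              # proportionality factor per trap

inflow = new_rate * population                                        # 20 * 100
outflow = catch_per_trap * licences * traps_per_licence * population  # 0.195 * 100 traps * 100
print(inflow, outflow)   # 2000 in, 1950 out: a small net increase at t = 0
```

These are exactly the values we should expect to see in the first row of the simulation table; any other sign or magnitude there points to a bug.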
So, minimally it is there. Right, it seems OK; let us simulate it and at least see if there are errors. Yes, we get a floating-point overflow: an error computing the muskrat population at time 4.06. The simulation run length is 10 years, and at around 4.06 it is already getting a floating-point error; it tries to save the results anyway, so let us click OK, close it, and look at the graph. The graph is growing, but how is it growing? It seems to be growing hyper-exponentially. There is a difference: exponential growth means the doubling time is constant, and here it is growing much faster than that. So there must be some error. Actually, if you look at the structure, one loop is a positive feedback loop, which keeps increasing, and the other is a negative feedback loop, which decreases. So the only behaviors this system can show are exponential growth, exponential decay, or equilibrium when inflow and outflow are equal. This system cannot have hyper-exponential growth. For hyper-exponential growth, the proportionality constant or the new-muskrat rate would have to keep changing: the new-muskrat rate would have to keep increasing, or the proportional catch rate would have to keep decreasing. But none of the constants are changing; the proportionality constant is 0.195, and the 10 and 10 are fixed. So hyper-exponential growth should not occur. Let me follow my steps. Still, the muskrat population is changing; you can see the population just keeps increasing, and what affects the population? It is affected by both these flows, so let us include them in the table too. I click the table tool, select the muskrat population and the flows, and suddenly you find that new muskrats are 2000.
New muskrats in the first period is 2000, which makes sense: 20 times 100 is 2000. The number caught is 0.195 times 10 times 10 times 100, which should be 1950, but the table shows minus 1950. Let us look at what is happening there. The population equation is new muskrats minus muskrats caught, inflow minus outflow; this is fine. Let us look at muskrats caught: it is minus the muskrats caught per trap times the rest. That minus should not be there, because the stock equation already subtracts the outflow, so it becomes minus of minus, that is, plus, and the stock value started increasing far more than it ought to. So, let us remove this minus sign, click OK, and run the model. We did not get any errors when we ran it this time, and we still get exponential growth, which is fine; at least the doubling time, as you can check, remains constant. So this looks like a reasonable fix for this kind of error. There do not seem to be any other errors in the model, but these kinds of small bugs can also creep in. Even though the earlier run did give some results, they did not make logical sense. So the expectation is that you actually look at the result, try to understand the dynamics, and see whether the model can inherently cause that behavior. When we move to sensitivity analysis and other concepts, we will end up using these same models, so save this version of the model. Now, we may still not be done. The description says the catch rate per trap is 0.2, minimum 0.195 to maximum 0.205. We can check logically: the muskrat population is 100 and the rate is 20, so new muskrats is 100 times 20, which is 2000. If the proportionality factor is 0.2 instead of 0.195, then we get 0.2 times 10 times 10 times 100, which is also equal to 2000, correct? In that case, inflow equals outflow and we should get a straight line. So, we can check whether the model actually produces all the behaviors we expect.
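The sign bug and its fix can be reproduced outside Vensim. Here is a Python sketch of the same stock-flow model with Euler integration at dt = 0.0625 (a re-creation for illustration; variable names are mine, the numbers are from the description):

```python
# Sketch of the muskrat model, with and without the stray minus sign.
def run(k=0.195, buggy=False, dt=0.0625, t_end=10.0):
    pop, t = 100.0, 0.0
    while t < t_end - 1e-9:
        new = 20.0 * pop                 # inflow: 20 per muskrat per year
        caught = k * 10 * 10 * pop       # outflow: rate * licences * traps * population
        if buggy:
            caught = -caught             # the stray minus: the outflow now ADDS to the stock
        pop += dt * (new - caught)       # stock equation: inflow minus outflow
        t += dt
    return pop

print(run(buggy=True))    # explodes: effective growth rate 39.5/year, not 0.5/year
print(run(buggy=False))   # clean exponential growth at net 0.5/year
```

Python's wider floating-point range means it prints an astronomically large number instead of raising the overflow Vensim reports at year 4.06, but the diagnosis is the same: the buggy run grows at 39.5 per year instead of the net 0.5 per year the structure implies.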
So, if the factor is anything beyond 0.2, what should happen? Exponential decay. We can check whether the model actually produces all these behaviors; only then is it a completely verified model. One way to do this without changing the model: instead of the run name MR195, I write MR200, then set the proportionality factor to 0.2 and run. Now we actually get two graphs: one, a small red line at the bottom, which is constant, and another that is increasing exponentially. Then change the simulation result file name to MR205, change the proportionality factor to 0.205, click OK and run. Now, if we plot the population, it is very difficult to discern here: one curve is exponential growth and the other is supposed to be exponential decay, but I cannot make it out from this plot. In that case, go to the Control Panel, Datasets, and remove the MR195 dataset, so that only MR200 and MR205 remain; click OK and open the graph. Now you get a constant line corresponding to a proportionality constant of 0.2, and exponential decay corresponding to 0.205. So the model is able to produce all the behaviors we can expect from a first-order system, and the model is verified; it can be used for further discussion and analysis. I went over it quickly so that we have a record of how to go about it; when you revise, it will be quite easy for you.
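The three-way parameter check above can also be sketched in Python (illustrative re-creation, not the Vensim file itself). A first-order stock can only grow exponentially, decay exponentially, or sit in equilibrium, so sweeping the catch rate across 0.195, 0.200 and 0.205 should show exactly those three behaviors:

```python
# Verification sweep: the three behaviors of a first-order stock.
def final_pop(k, dt=0.0625, t_end=10.0):
    pop, t = 100.0, 0.0
    while t < t_end - 1e-9:
        # net flow = inflow - outflow; 100 = 10 licences * 10 traps per licence
        pop += dt * (20.0 * pop - k * 100 * pop)
        t += dt
    return pop

for k in (0.195, 0.200, 0.205):
    print(k, final_pop(k))
# 0.195 -> exponential growth (net +0.5/year)
# 0.200 -> equilibrium, population stays at 100
# 0.205 -> exponential decay (net -0.5/year)
```

Seeing all three expected behaviors from the parameter sweep is what lets us say the model is verified for this range, rather than just "it ran without errors once."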