I will not talk so much about resonances now; I will concentrate more on how to produce and propagate uncertainties related to measurement results. I am Peter Schillebeeckx. I got my PhD at Ghent University, like Jan did, but on fission fragment properties and neutron-induced fission cross sections. Then I went to the ILL, where I did nuclear structure measurements and was local contact for the crystal spectrometers. Then I moved to a completely different field at the JRC in Ispra, where I helped develop non-destructive techniques for nuclear safeguards and nuclear security applications. And then I came back to the JRC, to the IRMM in Geel, and since 2001 I have been producing, or trying to produce, neutron-induced reaction cross section data; this is still ongoing and will probably continue for another six and a half years or so.

This is an overview of what I will discuss. I will go through the basic law of uncertainty propagation and how to produce and propagate correlated uncertainties and covariances. I will try to show you that there are no secrets behind correlations: you can fill them in yourself, you do not have to guess them. Once you understand your measurement process it is straightforward to produce covariances and to propagate them. Then I will discuss least squares fitting and some problems related to it, which all come back to having reliable covariances. And then a few comments on definitions and terminology.

First of course, a measurement result is only valuable if you also give it an uncertainty. So we need to start with uncertainty propagation to get a proper measurement result. I give a scheme here which illustrates how we try to measure, for example, the alpha activity of a sample. We have a sample which emits alpha radiation and we have a detector. The measured quantity is the count rate in the detector, and what we want to determine is the alpha activity of the sample. First of all, the detector will not only record counts coming from the sample; there will also be a background component, and that background has to be controlled. Secondly, the net count rate is related to the alpha activity, but other factors are involved. The alphas have to escape from the sample, so we have to take into account the escape probability from the sample. We have the solid angle between the sample and the diaphragm placed in front of the detector; that solid angle depends on the distance and on the aperture of the diaphragm. And we also have to take into account the detection efficiency of the detector which counts the alphas.

This is what I call the model: the model links the quantity that we measure with the quantity that we want to determine. Here the relation is quite simple, but sometimes this relation can be very complicated. We can also, for example, record the count rate as a function of time and then try to determine the decay constant which governs the alpha decay in the sample.
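As an illustration, here is a minimal sketch in Python of a model of this kind. The function name and all numerical values are invented for illustration and do not come from any real setup.

```python
# Minimal sketch (illustrative numbers only) of the model that links the
# measured count rate to the alpha activity of the sample:
#   A = (c - b) / (p_esc * omega * eps)
# where c is the measured count rate, b the background count rate,
# p_esc the escape probability, omega the solid-angle fraction and
# eps the detection efficiency.  All values below are made up.

def alpha_activity(c, b, p_esc, omega, eps):
    """Alpha activity (decays per second) from the net count rate."""
    return (c - b) / (p_esc * omega * eps)

print(alpha_activity(c=12.3, b=0.4, p_esc=0.95, omega=0.02, eps=0.85))
```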
To determine that decay constant we do a least squares fit, so I will also explain how to properly propagate uncertainties in these two analysis procedures. Summarizing: we have a number of input quantities, we have a model, and the model together with the input quantities leads us to the output quantity. The input quantities can be measurement results, such as the count rate in the alpha detector; the background is mostly also the result of a measurement. The input quantities can further include calibration constants, influencing quantities like temperature or pressure, and physical constants like the velocity of light or, in our case of cross section measurements, nuclear data such as cross section standards for neutron-induced reactions. All of this then leads to the quantity that we want to determine.

All these input quantities have an uncertainty. That means we can associate a probability distribution with each input quantity, and ideally we would propagate the full probability distribution of each input quantity to obtain the output. In practice, what we mostly do, and what you are all used to doing, is apply what is called the general law of uncertainty propagation. This law rests on two ideas: we suppose that all input quantities are normally distributed, and for a non-linear problem we first make a first-order Taylor expansion.

Here is a simple scheme of the operations you are most used to: a background correction and a multiplication with a factor. (You are wondering, Ralph; I think I will come to your question.) These are practically the most frequent operations you will apply. We have an experimental observable y with an uncertainty, and a background b with an uncertainty, and we have to propagate this to the variable z = y - b. Similarly, when we apply a correction factor to our observable, we have the measurement y and a correction factor k which also has an uncertainty. What you are used to is the following: for z = y - b you quadratically sum the uncertainties when y and b are independent, and for the multiplication you quadratically sum the relative uncertainties. This is for independent variables, and you all know how to do that.

These relations are based on the assumption that the uncertainties are all normally distributed. Why? If z is a linear function of independent random variables which are normally distributed, then we know from statistical theory that the probability distribution of z is also a normal distribution. The mean of z is simply the same linear combination of the means, and the variance of z is given by the corresponding quadratic sum. Those are the equations we normally use, and they are all based on this property of the normal distribution.
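A minimal numerical sketch of these two rules, with made-up values for y, b and the factor k:

```python
import math

# Minimal sketch (made-up numbers) of the two most common operations:
#   background subtraction  z = y - b
#   multiplication          z = k * y
# with the usual quadrature rules for *independent* inputs.

y, u_y = 100.0, 3.0   # observable and its standard uncertainty
b, u_b = 20.0, 2.0    # background and its standard uncertainty
k, u_k = 0.50, 0.01   # correction factor and its standard uncertainty

# z = y - b: absolute uncertainties add in quadrature
z1 = y - b
u_z1 = math.sqrt(u_y**2 + u_b**2)

# z = k * y: relative uncertainties add in quadrature
z2 = k * y
u_z2 = abs(z2) * math.sqrt((u_y / y)**2 + (u_k / k)**2)

print(f"y - b = {z1:.1f} +/- {u_z1:.2f}")
print(f"k * y = {z2:.1f} +/- {u_z2:.2f}")
```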
If we have a non-linear function of independent random variables we can apply the same formulas, because we can Taylor expand the function around the mean, since we are already close to the value we want to determine. Then you apply exactly the same equations, but you have to calculate the derivatives of the function with respect to the input quantities that carry an uncertainty.

Why does this all work? Practically because we can suppose that everything is normally distributed. First, counting statistics follow a Poisson distribution, but as soon as the mean value is large a Poisson distribution is already close to a normal distribution. Secondly, the central limit theorem says that if you combine a large number of independent input quantities with rather comparable distributions, you end up with a normal distribution. And last but not least, the principle of maximum entropy says that if you only know the mean and the standard deviation of a quantity, the optimal probability distribution to assign is a normal distribution. These three arguments justify supposing that, to first order, everything is normally distributed. As an example, here I show a Poisson distribution with a mean of 10 and, in green, the corresponding normal distribution: they are already very close, and above a mean of about 30 you can practically assume normality. For the central limit theorem I show three different distributions, a Gaussian, a triangular and a rectangular one; if you combine the three, the result (the blue line) is again very close to the corresponding normal distribution plotted in green. So the z value formed from x1, x2 and x3 can be assumed to be normally distributed with these parameters. That is why we apply all these formulas.

Now I will go through an experiment with you. Suppose I count the activity of a sample twice, so I measure y1 and y2 with their uncertainties, and they are independent. I have determined the background of my experiment once, so I have b with its uncertainty. I want to calculate the value of y - b from this experimental input. One way is to first take the average of the two values and then subtract the background from this average. The uncertainty of the average is very simple; it is given by this equation. Then I subtract the background from the average, and combining the uncertainty of the background with the uncertainty of the mean is also very simple, which gives me the total uncertainty on my z value. Now let me do it another way. I first calculate z1 = y1 - b and z2 = y2 - b, and then I calculate the average z from these two values. The uncertainties on z1 and z2 are again very simple.
Now I calculate the uncertainty on the average of z1 and z2, supposing that they are independent, and I get a value for the total uncertainty using the previous formulas. But these two uncertainties, from the two different ways of analysing the same data, are completely different, and that is because I was wrong in supposing that z1 and z2 are independent. This is what covariances are all about, and now I will show you how simple it is to take this into account: you just have to understand your experiment.

We go back to statistical theory: if z is a linear function of random variables, we have these simple equations when they are independent; when they are dependent we have to take the covariance into account. It is all standard statistical theory, it is all in textbooks. In matrix notation it looks like this, which makes it a bit easier to read: we have the covariance matrix of the input quantities, and we have the gradient matrix, which is nothing other than the partial derivatives of the model with respect to the input quantities.

Let me do this again for my simple example of y - b. I produce z1 and z2 from the input quantities y1, y2 and b, and in a second step I calculate the average z. First, z1 and z2. The only thing I need to do is understand the input quantities of my experiment, and in this case they are all independent, so I can easily construct the covariance matrix of the input: I put the variances on the diagonal, and all correlations are zero, because I understand my experiment. As soon as you understand your experiment you can build this up, and if you go back to the first step of your analysis you can mostly build it up such that all correlations are zero. Then I calculate the derivatives of my output quantities with respect to the input quantities: the derivative of z1 with respect to y1, to y2 and to the background, and the same for z2. That gives a very simple gradient matrix. Then I do a matrix multiplication: the gradient matrix, the covariance matrix in the middle, and the transpose of the gradient matrix. The input covariance matrix is 3 by 3, the gradient matrix is 2 by 3 because I ask for two output quantities, so the final covariance matrix of z1 and z2 is 2 by 2. The covariance matrix of the input quantities comes from the experiment; the gradient matrix, often also called the design matrix or sensitivity matrix, is related to the model. You do a few matrix multiplications and you get the covariance matrix of z1 and z2.

Now I go to step 2 and calculate the average z from z1 and z2. I apply exactly the same formula: the sensitivity matrix is now the partial derivative of z with respect to z1 and with respect to z2, which is one half twice, and I calculate the covariance matrix of z, which in this case is a single value. If I compare this with my first evaluation of the average, I get exactly the same uncertainty. And that is all there is to covariances: nothing else, nothing more. There is no secret; you just have to understand your measurement.
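A minimal sketch of this two-step propagation, with invented numbers for y1, y2 and b, showing that the sandwich formula reproduces the uncertainty obtained by first averaging and then subtracting the background:

```python
import numpy as np

# Illustrative numbers (not from any real experiment): two repeated counts
# y1, y2 and one shared background measurement b, all independent.
y1, y2, b = 105.0, 99.0, 20.0
u_y1, u_y2, u_b = 3.0, 3.0, 2.0

# Covariance matrix of the independent inputs (y1, y2, b).
V_in = np.diag([u_y1**2, u_y2**2, u_b**2])

# Step 1: z1 = y1 - b, z2 = y2 - b.  Gradient (sensitivity) matrix, 2 x 3.
G1 = np.array([[1.0, 0.0, -1.0],
               [0.0, 1.0, -1.0]])
z = G1 @ np.array([y1, y2, b])
V_z = G1 @ V_in @ G1.T           # sandwich formula: off-diagonal term = u_b**2

# Step 2: zbar = (z1 + z2) / 2.  Sensitivity matrix, 1 x 2.
G2 = np.array([[0.5, 0.5]])
zbar = (G2 @ z)[0]
u_zbar = np.sqrt((G2 @ V_z @ G2.T)[0, 0])

# Route 1 for comparison: average y1, y2 first, then subtract the background.
u_route1 = np.sqrt((u_y1**2 + u_y2**2) / 4 + u_b**2)

print(V_z)                        # covariance of (z1, z2), off-diagonal = 4.0
print(zbar, u_zbar, u_route1)     # both routes give the same uncertainty
```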
Now, in the first case people report y1, y2 and b; that means they report the full experimental input, practically the full experiment. In the second case they report z1 and z2, which is already a data reduction of the experiment. That means that if you report z1 and z2 you have to give the full covariance matrix, since that covariance matrix contains part of the information about your experiment. That is covariance.

That was with two variables. You can go through the mathematics, again based on the properties of the normal distribution, and easily extend it to more than one output quantity, and also to non-linear functions via a first-order Taylor expansion, and you always get the same equation. Here I give the equations in matrix notation, which is easier to read: the linear case and the non-linear case based on the first-order Taylor expansion. These are the uncertainty propagation equations you just have to remember; this is what is called the sandwich formula, and that is all you need to know.

Here is what you get with more than one input quantity. If you have n values which all have to be corrected with the same background, this is the covariance matrix; you can do the mathematics yourself, it is very straightforward. That is then your covariance matrix. If you then calculate the average z from this result, you get this uncertainty, and here you see that if I increase the number of measurements of y I reduce the uncertainty on z, but I will never reduce the part of the uncertainty on z that is due to the background, because that is a component common to all the y's. I can do exactly the same for a multiplication: if you multiply n input quantities by the same constant, you create a correlated uncertainty component. You have the uncorrelated components from y1, y2, ..., yn, but you always have a correlated component from the constant. I can again calculate the average z very straightforwardly, and you get the formula for free.

Let me go back to the covariance matrix, because it has two important properties: it is symmetric, since the off-diagonal terms are symmetric, and, if we want to use it, it has to be positive definite. Jan will explain why it is so important to have a positive definite covariance matrix.

Something else you hear people mention a lot is the correlation coefficient, and I want to spend some time on it. The correlation matrix also comes from statistical theory: you obtain it by dividing each element of the covariance matrix by the product of the corresponding uncertainties. The diagonal terms are divided by themselves, so they are always one. An off-diagonal element, for example the one relating z1 and z2, is divided by the uncertainty of z1 multiplied by the uncertainty of z2, and the matrix is symmetric.
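A small sketch of such a covariance matrix for n measurements sharing one background, and its conversion to a correlation matrix; all values are illustrative only:

```python
import numpy as np

# Illustrative sketch: n repeated measurements y_i, each corrected with the
# same background b (z_i = y_i - b).  The shared background produces a common,
# correlated uncertainty component.  Numbers are made up.
n = 4
u_y = np.array([3.0, 3.0, 2.5, 3.5])   # uncorrelated uncertainties of the y_i
u_b = 2.0                              # uncertainty of the common background

# Covariance of the z_i: u_y**2 + u_b**2 on the diagonal, u_b**2 everywhere else.
V_z = np.diag(u_y**2) + u_b**2 * np.ones((n, n))

# Correlation matrix: divide each element by the product of the uncertainties.
sig = np.sqrt(np.diag(V_z))
corr = V_z / np.outer(sig, sig)

# Average of the z_i: its uncertainty never drops below u_b, however large n is.
G = np.full((1, n), 1.0 / n)
u_zbar = np.sqrt((G @ V_z @ G.T)[0, 0])

print(corr)
print(u_zbar, ">=", u_b)
```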
So that is the correlation matrix. I calculated for you the correlation matrix for this simple exercise: here is the covariance matrix when we do a background subtraction, and this is the corresponding correlation matrix. The correlation coefficient between the two z values is given by this equation. Now, you often hear people say "the correlation is such and such a value", an expression I do not like, because the correlation reflects our experiment. Suppose I have an experiment in which the uncorrelated component is very large; then the correlation is practically zero. It is all about how the uncertainty components combine, so correlation coefficients reflect the quality of the input quantities and how the experiment was done. There is no such thing as a fixed correlation coefficient for a background subtraction, never, nor for a multiplication with a constant. You have to understand that it comes from your experiment, and also from the model; here the model is the background subtraction. I will come back to this later. I will continue for about a quarter of an hour and then we take a short break.

Now we come to models. One of the models we have is, for example, the analytical expression for the background deduced from black resonances: we fit an analytical curve through these points. How do we do that? We mostly use chi-square minimization, and we also have to do this when we fit resonances; there the model is much more complicated than this background model, which is essentially a sum of exponentials, and I cannot write it down on this transparency. But chi-square minimization in principle involves only two equations: one gives the optimal parameters, and the other the corresponding covariance matrix, given an experiment with values y and the covariance matrix of those points. Here is a very simple example: I have three points and I fit a straight line through them. The parameters of the straight line come from these two equations, and solving them is also straightforward. You just have to calculate the gradient, the sensitivity matrix of your model, and you need the covariance matrix from your experiment as input. In this case I suppose it is diagonal, but you can easily have covariance terms here as well; I only put them to zero for presentation purposes. The gradient of the model is also very simple: the partial derivative of the model with respect to the parameter a0 is 1, and the partial derivative with respect to a1 is x1, x2 and x3. You plug these two matrices into the two equations and you get your parameters together with their covariances.

Some people say maximum likelihood, some say chi-square minimization; in principle it is the same business as soon as you suppose that your input quantities are normally distributed, because if you maximize the likelihood of the parameters you minimize this expression, which is nothing else than a chi-square minimization.
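A minimal sketch of this straight-line chi-square fit, with three invented points and a diagonal data covariance matrix:

```python
import numpy as np

# Minimal sketch of the linear least-squares (chi-square) fit just described,
# with three made-up data points and a diagonal covariance matrix.
# Model: f(x) = a0 + a1*x, parameters theta = (a0, a1).
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.1, 3.9, 6.2])
u_y = np.array([0.2, 0.2, 0.3])

V = np.diag(u_y**2)                          # data covariance (could be full)
G = np.column_stack([np.ones_like(x), x])    # gradient: df/da0 = 1, df/da1 = x

W = np.linalg.inv(V)
cov_theta = np.linalg.inv(G.T @ W @ G)       # covariance of the parameters
theta = cov_theta @ (G.T @ W @ y)            # best-estimate parameters

print("a0, a1      :", theta)
print("cov(a0, a1) :", cov_theta)
```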
So maximum likelihood and least squares are in principle the same thing if your input quantities are normally distributed. And if you take only the diagonal terms, that is, you suppose that the covariance terms in your matrix are zero, you practically minimize a sum of quadratic terms, and that is why it is called a least squares adjustment.

Here again is the mathematics behind a least squares adjustment. For a linear model we have the partial derivatives of the function with respect to the parameters we want to adjust, and we get these equations for the parameters and for the covariance of the parameters. You see that this term is nothing other than the covariance of the parameters, and it also appears in the equation for the parameters themselves. So if you program this, you probably better first program the covariance and then calculate the parameters. For a linear model you only need to calculate a few partial derivatives and do some matrix algebra. If you have a non-linear model we again use a first-order Taylor expansion, and the only difference is that you need to iterate before you arrive at the optimized parameters, since you need a first guess of the parameters to evaluate the derivatives. So also for a non-linear model you just calculate a few partial derivatives and you can solve the problem.

Now I come to an example. I created a set of experimental points which are totally artificial; I use them only to make my point clear. They have uncertainties and I fit a straight line through them, using the two equations of the least squares adjustment and supposing that all the input quantities are fully uncorrelated. As a result I get back a0 and a1 with their covariance matrix, and the correlation coefficient is about minus 0.9, so the parameters are strongly correlated. Using this covariance matrix I can calculate, at any point, the model value (the red line) and the corresponding uncertainty of the model value, again using the sandwich formula with the full covariance matrix; that gives the dotted line, the uncertainty of the model values coming from my fit. If I were to use only the diagonal terms and forget about the correlation between a0 and a1, I would completely overestimate the uncertainty, especially in this region.

Now I make another artificial experiment, based on practically the same parameters, and I fit the straight line again. I again get my parameters together with their correlation coefficient, and the correlation coefficient is now completely different, because my experiment is completely different. The relation between the parameters is the same, the model is the same, but the correlation coefficient is different: it also reflects the uncertainties of my input quantities, not only the model. I play the same game with the uncertainty of the fitted line, using the full covariance and using only the diagonal elements, and now I get practically the same uncertainty in both cases, because here the correlation coefficient is about zero.
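A sketch of this comparison with artificial data; the seed, point positions and uncertainties are arbitrary choices made only to produce a strongly correlated pair of parameters:

```python
import numpy as np

# Illustrative sketch: fit a straight line to artificial points, then compare
# the uncertainty of the fitted line computed with the full parameter
# covariance matrix against using only its diagonal (ignoring the a0-a1
# correlation).  All numbers are made up.
rng = np.random.default_rng(1)
x = np.linspace(5.0, 10.0, 8)          # points far from x = 0 -> strong correlation
u_y = np.full_like(x, 0.3)
y = 1.0 + 0.5 * x + rng.normal(0.0, u_y)

G = np.column_stack([np.ones_like(x), x])
W = np.diag(1.0 / u_y**2)
cov = np.linalg.inv(G.T @ W @ G)       # covariance of (a0, a1)
theta = cov @ (G.T @ W @ y)
rho = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print("correlation(a0, a1) =", rho)    # strongly negative for this x range

# Uncertainty of the model value on a grid of x, via the sandwich formula.
xg = np.linspace(0.0, 12.0, 5)
S = np.column_stack([np.ones_like(xg), xg])        # sensitivities (1, x)
u_full = np.sqrt(np.einsum('ij,jk,ik->i', S, cov, S))
u_diag = np.sqrt(np.einsum('ij,jk,ik->i', S, np.diag(np.diag(cov)), S))
for xi, uf, ud in zip(xg, u_full, u_diag):
    print(f"x={xi:5.1f}  u(full)={uf:.3f}  u(diag only)={ud:.3f}")
```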
This shows that the correlation coefficient, even when you have a model, is determined strongly by your input quantities and their covariance. In another example I made the slope completely different but kept the uncertainty structure practically the same, and you get practically the same correlation coefficient; so although the slope is different, the correlation coefficient has nothing to do with the slope. Perhaps I can stop here for five minutes, take a glass of water, and take questions in the meantime.

What I want to do next is come back to these equations, which I always thought of as very simple least squares adjustments. These equations are in fact the most general least squares adjustment you can imagine, and that is due to nobody else than Fritz Fröhner. With these equations I can also include uncertainties on the x values, and I can include a prior on the parameters; I just have to use them intelligently, and they are more powerful than the Bayesian equations people are used to. Basically, if you also have a covariance on your x values, say you measure counts as a function of time, so the counts are y and x is the time, and your time also has an uncertainty, then what you do is include the time both as an experimental input and as a model parameter. You just extend your model with the identity x = x, extend your experimental data vector with the x values, and of course your covariance matrix then also contains the covariance of the x's. Then you can use exactly the same equations.

I did this for you for the linear model: I include x as an experimental input quantity and, again for simplicity, I keep the off-diagonal terms at zero, although I could even include a correlation between, say, x2 and y1 if it exists. I extend my model: I have not only a0 and a1 as model parameters, but also the x values, and I get a slightly more complicated sensitivity matrix containing the partial derivatives of each x and y with respect to each x value and each parameter. It is straightforward to calculate these partial derivatives and build the gradient matrix, and in this way you also take the uncertainties on your x values into account. You have to solve it by iteration, because now you need a value for a1 to evaluate the derivatives, so you need a first guess.

I can do exactly the same if I have a prior on my parameters: I add the prior as an experimental input, I extend the experimental data vector with the parameters together with their covariance, and I solve the same equations. Like this you can include uncertainties on the y values, on the x values, and a prior, and it even allows you to include cross-correlations if needed. That you cannot do with Bayesian theory, since Bayesian updating is based on the assumption that your prior cannot have any correlation with your new input; you cannot apply those equations if your prior is correlated with your new data.
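A hedged sketch of this extended fit, reusing the three illustrative points from the earlier sketch and adding invented uncertainties on the x values; the Gauss-Newton iteration used here is one straightforward way to solve the resulting non-linear adjustment:

```python
import numpy as np

# Sketch of the generalized least squares just described: a straight line
# y = a0 + a1*x where the x values also carry uncertainties.  The x's are
# added both as experimental input (via the identity x = x in the model) and
# as parameters, and the fit is iterated.  All numbers are illustrative.
x_meas = np.array([1.0, 2.0, 3.0]);  u_x = np.array([0.05, 0.05, 0.05])
y_meas = np.array([2.1, 3.9, 6.2]);  u_y = np.array([0.20, 0.20, 0.30])

d = np.concatenate([x_meas, y_meas])              # extended data vector
W = np.diag(1.0 / np.concatenate([u_x, u_y])**2)  # inverse covariance (diagonal here)

theta = np.array([0.0, 1.0, *x_meas])             # (a0, a1, x1, x2, x3) first guess
for _ in range(10):                               # Gauss-Newton iterations
    a0, a1, xs = theta[0], theta[1], theta[2:]
    f = np.concatenate([xs, a0 + a1 * xs])        # model: (x1, x2, x3, y1, y2, y3)
    G = np.zeros((6, 5))
    G[0:3, 2:5] = np.eye(3)                       # d x_i / d x_j
    G[3:6, 0] = 1.0                               # d y_i / d a0
    G[3:6, 1] = xs                                # d y_i / d a1
    G[3:6, 2:5] = a1 * np.eye(3)                  # d y_i / d x_j
    cov_theta = np.linalg.inv(G.T @ W @ G)
    theta = theta + cov_theta @ (G.T @ W @ (d - f))

print("a0, a1 =", theta[:2])
print("cov(a0, a1) =", cov_theta[:2, :2])
```

A prior on the parameters can be handled the same way in this sketch: append the prior values to the data vector, their covariance to W, and identity rows for the parameters to the model.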
So, in principle, to propagate uncertainties fully you can base yourself on two very simple concepts. The first is the conventional general law of uncertainty propagation, the sandwich formula: you have to understand your experiment, that is, the input quantities and your model; the input quantities determine the covariance matrix and the model determines the sensitivity matrix. The second is least squares fitting, where you have to solve these two equations: you have to understand your model to get the gradient matrix, and you have to understand the input quantities to get the covariance of your experimental input data. That is all you need to do proper uncertainty propagation.

Now I will show you some problems. First of all, here again are the black resonances: I want to determine my background, which I obtain by fitting a function through these points. If I use the data as represented here and fit it with the functions I showed you, I get this line as the best fit, which is of course complete nonsense. That is due to the fact that the uncertainties of my experimental data points are only estimated: I cannot suppose that these points are normally distributed, so I cannot take the square root of the counts as the uncertainty, and doing so produces this fit. If I group the data so that each point has much better statistics, I get a proper fit through the data. So you need to be careful with your uncertainties when you do a weighted least squares adjustment; just for comparison, this is the fit when you weight correctly, and it is the same line as that one.

The next example is a textbook example, what is called Peelle's Pertinent Puzzle, and it shows again how careful you have to be with covariance matrices. We have two data inputs y1 and y2 with uncorrelated uncertainties, and we multiply them with a factor k: y1 = 1 ± 0.1, y2 = 1.5 ± 0.15, and k has a 20% uncertainty. We want to combine this information to get the best value of z, where the model is z = k·y. I can solve this in two ways, just like with the background. In scheme A, I first calculate the average of y and then multiply this average with k to get my best estimate of z; that is what I would do as the experimentalist, since I have the full information about the experiment. But if people report only z1 and z2 together with a covariance matrix, somebody else will calculate z having only that information available. So let me first do scheme A: from y1 and y2 I go to the best estimate of y using a least squares adjustment, which is nothing else than the weighted average of the two values, and I get ȳ = 1.154 ± 0.083, represented here in the figure by this point with its uncertainty. I multiply ȳ with k, taking into account the uncertainty of k, and I get this as the best estimate of z. This we all understand. Now we go to the second scheme.
In the second scheme I first create z1 and z2. Here is the covariance matrix of y1, y2 and k; they are not correlated. From this I produce z1 and z2 with the equations I showed you before, and I get the covariance matrix of z1 and z2; they are represented here, where the plotted uncertainties are only the square roots of the diagonal terms. From these two I now calculate my best estimate of z, and if I apply a least squares adjustment I get 0.882, which is far below both values. People were puzzled by this for a long time. Compare the two results, 1.154 and 0.882: this is just not acceptable. The reason is that the covariance matrix of z1 and z2 is not well built up. If you work out the mathematics of the weighted average of z1 and z2 and decompose all the components, you end up with a term proportional to y1 - y2, which always pulls your result down, always, all the time. I could show hundreds of examples; we even had, and unfortunately I do not have it on a transparency here, structured cross sections where a Hauser-Feshbach fit through the data came out as much as 10% low. Our theoretician was really puzzled by it until he came to us, and we explained that the result is always pulled down by this effect. And it is not an artifact if your covariance matrix is properly defined. You can read more about the mathematics behind this in these two papers.

The solution to avoid Peelle's Pertinent Puzzle is very straightforward, and I already gave a hint before: we create a new model in which the normalization constant k is included as a model parameter. We fit y1 = z/k, y2 = z/k and k = k, with z and k as the model parameters, and then we fully avoid the puzzle. You can always apply this and you always avoid it. Peelle's Pertinent Puzzle will always be there if you have a normalization factor and if your y1 and y2 are far apart compared to their uncertainties. If I apply this model and again use a very simple least squares adjustment, I get a best value that corresponds completely with first taking the average of y1 and y2 and then applying the normalization factor: it is exactly the same solution.
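A numerical sketch of the puzzle with the values quoted above; the value k = 1 is assumed here for illustration, since only its 20% uncertainty is stated:

```python
import numpy as np

# Peelle's Pertinent Puzzle with the numbers quoted above
# (y1 = 1 +/- 0.1, y2 = 1.5 +/- 0.15, 20% uncertainty on k; k = 1 assumed).
y = np.array([1.0, 1.5]);  u_y = np.array([0.10, 0.15])
k, u_k = 1.0, 0.20

# Scheme A: weighted average of y1, y2 first, then multiply by k.
w = 1.0 / u_y**2
ybar = np.sum(w * y) / np.sum(w)                      # about 1.154
z_A = k * ybar
u_zA = abs(z_A) * np.sqrt(1.0 / np.sum(w) / ybar**2 + (u_k / k)**2)

# Scheme B: report z1 = k*y1, z2 = k*y2 with their covariance, then average.
z = k * y
V = np.diag((k * u_y)**2) + np.outer(y, y) * u_k**2   # common k -> off-diagonal term
Vi = np.linalg.inv(V)
one = np.ones(2)
z_B = (one @ Vi @ z) / (one @ Vi @ one)               # about 0.882, below both values
u_zB = np.sqrt(1.0 / (one @ Vi @ one))

print(f"scheme A: z = {z_A:.3f} +/- {u_zA:.3f}")
print(f"scheme B: z = {z_B:.3f} +/- {u_zB:.3f}   (pulled below both input values)")
```

Refitting instead with z and k as parameters of the model (y1 = z/k, y2 = z/k, k = k), in the same way as the x-uncertainty sketch above, removes the effect and reproduces the scheme A result.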
So now a few words about definitions and terminology, because when you report uncertainties it is important to use the correct terms. First of all, there is a difference between an error and an uncertainty; precision and accuracy we already heard about from Nicola; and then there is the question of how to report uncertainties. A measurement error is the difference between two values, and it can be negative or positive. An uncertainty is related to the width of a distribution, so an uncertainty is always positive. By definition they are different things. The error is the difference between the result and a reference value, or, if you like, the true value; unfortunately we can never determine the error, since we never know the true value. So there is a clear distinction between the two: do not write "error" when you mean "uncertainty".

To avoid errors you need to try to include all systematic effects related to your measurement. In my simple example that is the background: if I did not correct for the background I would systematically make an error, and that is not an uncertainty of the measurement, it is an error I make by not correcting. Once I correct for the background I of course get closer to the true value; I will never know how close, but we hope that by properly evaluating the uncertainties of the input quantities, the width of this distribution also covers the true value. We will never know for sure, because if we knew the true value we would not have to measure, and we will also never know the error we make. We can only do our best to estimate as well as possible the uncertainties related to all the input quantities of the measurement process. The same is true for a correction factor: the background is a correction and a normalization is a correction factor, and applying them brings us closer to the true value. If I did not apply the factor I would systematically make an error, but that has nothing to do with a "systematic uncertainty"; these are completely different things. I can try to reduce the width of this distribution by increasing the number of points, or in some cases the number of measurements, but I will never be able to decrease the width that is due to the systematic component of my uncertainty; this is, in relative terms, the uncertainty on the z value. As for measurement precision and accuracy, Nicola explained them nicely already; if you are inaccurate it mostly means that you did not include all the systematic corrections you should have applied.

Another point: if we report an uncertainty based on the standard deviation of the normal distribution, we call it a standard uncertainty, and that corresponds to a coverage of about 68% of the distribution. Some people multiply with a factor of about 2 and go to a coverage interval of about 95%; if you do this you have to report it as an expanded uncertainty and you also have to report the factor you applied. The easiest thing is mostly to report the one-sigma uncertainty and call it a standard uncertainty; then everybody knows what you are reporting. If you multiply with a factor, you have to specify the factor. You will also sometimes see the term combined uncertainty; that means nothing more than that all the uncertainties related to the measurement process have been propagated, in other words that you have tried to include all uncertainties of your measurement process.

And this brings me already to the end. To produce reliable experimental data and results we just have to understand the measurement process, that is, the experimental observables involved and the model, and once you understand this you can easily define the covariance matrix. We just have to apply two formulas: the sandwich formula for conventional uncertainty propagation of a function of quantities, and the least squares equations when we do a least squares adjustment.
Now, how to do this for time-of-flight measurements: for that we have created what is called the AGS formalism, which Jan will explain tomorrow, and we will also have some exercises on it. The formalism is nothing other than the sandwich formula based on these two equations, with one additional ingredient: we use the nice property that the covariance matrix is positive definite, so we can always apply a Cholesky decomposition. That makes the equation even easier to use, and it allows you to report nicely all the uncertainty components which are involved in the final result.

And with that I come to the end. If you are interested, we will have a dedicated workshop on the generation and use of covariance data in nuclear applications, organized jointly by SCK-CEN, our national institute in Belgium, and our institute; it will be held from 10 to 11 December at our institute. That is all I wanted to say.

One remark from the audience was that what has been presented is specifically meant for people who are new to the field, and that it was presented in a very useful way; a second remark was that the new version of the GUM, the supplement to the GUM, moves more and more away from these linearizations and goes really into propagation using Monte Carlo techniques. That is true, and I have a presentation on that. In principle, what it does, it is my second slide I think, is that you propagate probability distributions and you do not do the conventional uncertainty propagation. With conventional uncertainty propagation we start from a normal distribution and use only the properties of the width of the normal distribution; that is all we do, and with this we can propagate the uncertainties. Due to improved computing power you can nowadays easily propagate the probability distributions themselves, but then again you have to suppose something about those distributions. They do not have to be normal, that is indeed an application of it; here I propagate the distributions, and thanks to the computing power you can now do Monte Carlo calculations. If you go to least squares fitting it is a bit more tricky, but we have already shown that we can determine resonance parameters while avoiding least squares fitting; we then have to use Bayesian theory and propagate probability distributions, starting of course from a normal distribution, but you can determine resonance parameters based on the propagation of probability distributions. Everything I presented, however, is based on propagating the width of the distribution; that is all it is. The version of the GUM Workbench you encountered in the lecture was a demonstration version, and obviously, since the developer wants to earn money, certain things were switched off, among them the possibility to change the kind of distribution; that is why it was not working. If you want to use it fully we would have to pay, I am sorry. For this GUM Workbench you now find one version in my codes folder which is ready to run without installation; that was the one we had on Monday.
There is another version which you can install, which has different deficiencies; it is also a demonstration version, which you can download from the web page of this company, so you can try it out if you like. About the previous picture I showed, the one with a different distribution: that case you can do analytically, for that I do not need Monte Carlo. Nowadays people would probably do it with Monte Carlo, then they do not have to think any more, but in principle it shows that conventional uncertainty propagation is not so bad. When I started with this, people made me afraid of all these covariances and correlations, and what I have tried to show is that there is no secret behind them: if you know what you are doing you can easily build up your covariance matrix. As soon as you can go back to the origin of your experiment, and you understand your experiment, the covariance matrix comes for free.

There was also the remark that Mr. Zerkin, I believe, will show a tool where you essentially give your parameters as input, state whether they are uncorrelated or correlated with such and such a correlation value, and then generate the covariances. That is something I would be careful with. We know that quantities can be correlated, like z1 and z2 when you do the background subtraction, but I cannot tell you the correlation coefficient in advance; it depends on the experiment. Sometimes people think that the correlation coefficient is a fixed value, and that is not true; a lot of people believe that a correlation coefficient is almost a piece of nuclear data itself, and that is completely wrong, since it reflects the full uncertainty budget: it includes the model through the sensitivity matrix, but it also includes the covariance of your original data. There are some people, and I will not give names, who think the correlation coefficient between resonance parameters, which are nuclear data, is fixed; they ask me whether a parameter is correlated with the radiation width. I can build an experiment where the correlation is zero, I can build an experiment where the correlation is one, and I can build an experiment where the correlation is minus one; you just have to ask me and I can design the experiment. Correlations do not reflect physics, they reflect how the data were obtained. It is how you measure: if you, for example, measure something as a function of energy but you use only one normalization factor, then all the data points become correlated; if instead you normalize with a complete calibration at every point, the situation is different again. So it reflects your methodology, how you obtained the data, and only you know that; nobody after you will be able to reconstruct it. That is why we ask you, please, to submit your data to EXFOR.

Yes, I gave two simple examples: I subtract the background and I multiply with a constant, and these are practically the most common operations that we apply. Let me go through it again with you; I thought I went much too fast, and this is the most important part. So basically this is my experiment: I measure y1 with an uncertainty and y2 with an uncertainty, and we know that they are not correlated.
Then we have a background for these two; this background I measured two days before, and y1 and y2 I measure on other days. Now I do a background subtraction: I calculate y1 minus b and y2 minus b. And this is the clue: if you can start here, it is straightforward, because I know my experiment and I know that these quantities are not correlated, so I can fill up this covariance matrix and all the off-diagonal terms are zero. I know my experiment. If you asked me an hour later, at a later stage of the analysis, to write down the correlations directly, I would not be able to do it; I can only go back to this starting point. And, as was already mentioned, you have to be very careful there, because it can happen that your matrix becomes non-positive definite, and that is a real problem, so you have to check elements that might not be consistent. We as experimentalists, if we are not able to do this, then we do not understand our experiment, and if we are not able to do it, then somebody who does not understand the experiment certainly cannot. That is the point. I can give an example: during the last key comparison on flux measurements our dear evaluator just guessed numbers for the off-diagonal terms; I sat down at my desk and checked whether the matrix was positive definite, because he had simply guessed them without really knowing the experiment. So the off-diagonal terms really have to come from knowledge.

The question was whether there is an example where, for a specific experiment, you can state some of the off-diagonal terms directly. Basically you have to go all the way back: this matrix is for the background, and that one is easy to fill in; this is another model, the multiplication with a constant. But if you calculate the correlation coefficient of this off-diagonal term, you divide by the square root of this one multiplied by the square root of that one, so it all depends on the magnitude of your uncorrelated component with respect to the uncertainty of K. I can make this correlation coefficient zero or I can make it one, depending on the uncorrelated components. So what you ask me is very difficult to answer, and like Raoul said, if somebody gives you the answer straight away it means that he does not understand the experiment, because you have to think. I would not be able to do it directly, but the point is that, from the point of view of the experimenter, we have to understand our experiment, and when what we provide is not the original quantity but an already elaborated one, at that moment we have to provide the covariance. I can provide it here, but I can only provide it by starting from zero correlations, and at the end of the day I give you this matrix. If you just say, look, this person did an experiment, what is the correlation coefficient, I will not be able to give you the answer; in each case you need to explain where it comes from. This term here comes from the K value, that is the correlated component due to the multiplication with K; this part is the model, that is the y, and that is the K factor. You have to go and try to estimate these components, and you can only estimate them starting from zero correlations at the level of the original quantities. If at some point in your analysis you use the same piece of information for several data points, that is what introduces the correlation; the correlation is there in the experiment, it is not something you have to invent afterwards.
It comes through the model: practically it enters here, that is your model, and you can even have two values K1 and K2, but then you have to do your best and try to provide this uncertainty component. So a big job for us, and we will come back to it in the presentation on Friday, is to estimate uncertainties; we sometimes spend as much time on that as on the measurement itself, and we have already done a lot of measurements just to estimate uncertainties. Take the example of this black resonance fitting: on Friday we will give you the values of the uncertainties that we have, and we will show you how we introduce into the uncertainty propagation what you could call a model uncertainty, but one that is based on measurements. You even asked the question how many black resonance filters we put in. Normally, when we measure here, we do not have the filters in, but we put them in for dedicated runs, and then we know how to correct using these filters. To estimate the uncertainty in this region, we do additional measurements, each time with the filters in and with the sample in place, applying exactly the same procedure, and we check how close we come to the background. Then we do a statistical analysis of all these data, and from that statistical analysis we determine an uncertainty component which tells us how good we are at estimating the background. In principle it is then very easy: in the transmission expression with C_in minus B_in, we multiply the background by a factor K, and the uncertainty of K comes from our evaluation and is related to the model; its value is one. First of all we check that the ratio of the measured values to the fitted values is one on average, otherwise we have a bias and we make an error, but that is not the uncertainty; it is the variance of the ratios which determines our uncertainty component. So we do dedicated measurements for the uncertainty evaluation, which is sometimes more time consuming than the measurement itself. And a lot of people, unfortunately I have to say, just take our uncertainty values and quote them, and that is not fair: you have to evaluate your own uncertainties, and they are related to the experimental setup you have.
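A minimal sketch of this check, with made-up run values; taking the sample standard deviation of the ratios as the relative uncertainty of K follows the description above:

```python
import numpy as np

# Sketch (made-up numbers) of the background-uncertainty evaluation just
# described: compare the background measured in dedicated runs with
# black-resonance filters in place to the fitted background model, check that
# the average ratio is consistent with one (no bias), and use the spread of
# the ratios as the relative uncertainty of the background factor K (= 1).
measured_bkg = np.array([101.0, 97.0, 103.5, 99.0, 102.0])   # filter-in runs
fitted_bkg   = np.array([100.0, 98.5, 101.0, 100.5, 100.0])  # background model

ratios = measured_bkg / fitted_bkg
k_mean = ratios.mean()           # should be consistent with 1 (no bias)
u_k_rel = ratios.std(ddof=1)     # spread of ratios -> relative uncertainty of K

print(f"mean ratio        = {k_mean:.4f}")
print(f"u(K)/K (relative) = {u_k_rel:.4f}")
```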