Please feel free to ask questions as we go — my students especially; for my students it is compulsory. First I will talk about the problem of segmentation, and then about multi-atlas segmentation. By segmentation I basically mean this: we have a picture in which there is some object of interest, and we want to do one of two things. We want to delineate the boundary of the object automatically, or, equivalently for solid objects, we want to label each pixel in the image to indicate whether it belongs to the object or not. This label image, as we call it, is a segmentation of the intensity image that we have as input.

Now, the framework of segmentation by example is basically this. We want to perform an image segmentation task on a certain class of images, and we are given a large database of pre-segmented images. These are segmented manually by experts in that domain, and the object of interest is only weakly visible in the image. Of course, if there is clear contrast between the intensities in the interior of the object and those in the exterior, then the data itself gives you all the information needed to perform the segmentation or delineation task; those would be simple image segmentation problems. We have a problem where, either because of biological reasons, or because of constraints imposed by the imaging device or the application, there is not enough contrast between the intensities inside the object and those outside, or the images have a high level of noise that again makes it difficult to differentiate the interior of the object from the exterior. For such difficult image segmentation problems, we want to exploit the vast information in the database, put in by human experts, to perform our task.
Just to set up some notation and jargon: we are going to call an acquired medical image a template. We are going to call an image of labels for the pixels — or, in the 3D equivalent, voxels — a segmentation; this could also be an image of probabilities, since probabilities can be interpreted as a generalized notion of labels. Instead of labeling each pixel as belonging to the interior of the object or not, we could assign each pixel a probability of being in the interior, which gives a softer, fuzzier segmentation of the object. So we have the notion of a template and the notion of a segmentation, and an atlas is a pair consisting of a template image and its corresponding manually segmented image. The framework of multi-atlas segmentation gives us an entire sequence of atlases — hence the name multi-atlas. Here, for instance, is an example of a structure that we want to segment. The red boundary is, say, the true boundary, which is unknown, and we want to infer it somehow from the information in the database. Now, as you see in this structure — this is some structure in the human brain as it appears in a specific kind of MRI — the contrast between the intensities inside the structure and outside is quite clear in these regions, but as we go toward this end of the structure, it is not clear where the structure stops and a new structure begins.
Now, of course, you may ask how an expert inferred this boundary, and the answer is that the expert has other sources of information, not necessarily MRI — maybe histology or something else. They have much more information about, say, the shape of the structure, or where it lies relative to other structures that can be well identified in the image, and so on. All this training and background information in the head of the expert goes into giving us this segmentation, even though this particular data is not very indicative of the segmentation in this part of the object.

Here is another pictorial description of the problem. These are CT images of the spleen. This is our multi-atlas database: we have images of spleens, and we have expert segmentations, which are binary images indicating whether each pixel is inside the object or not. Now we have a target image in which a spleen appears, and the goal is to assign a label to each pixel in this image indicating whether it belongs to the spleen or not. Is that clear? All right. These kinds of algorithms have been explored over the last five or six years or so, and most popular algorithms handle the problem in the following way — this is the recipe underlying most popular algorithms out there. Given a target image, we first find those few templates in our atlas database that appear most similar to the target; then we deform, or warp, those template images so that they best match the target image. For instance, the shapes and sizes of the spleens here are quite different from the shape and size of the spleen in the target image, just because of natural variability.
So imagine that we apply a warp on the image domain — here, the 2D space — such that this image gets warped to best match the target image. We compute such warps between the template images in the database and the target image, then take those warps and apply the same spatial transformations to the corresponding segmentation images in our atlas database. Applying these k warps to k such segmentations, one warp per segmentation, we essentially have k answers — k label images, k spleen segmentations for the target image. The last step is to somehow combine, or fuse, this information, using some kind of weighted averaging or whatever you want to use; this is typically referred to as label fusion. We want to combine the information in our k segmented images into one segmented image. Here is the corresponding illustration: we have the atlas database and the target image. This fuzzy map here is obtained by warping each of those template images to the target image, applying the same warps to the binary 0-1 images to bring them into the coordinate space of the target, and overlaying the labels obtained from each of the k warps. Using this information, the label fusion step gets us a segmentation of the target image, not necessarily binary. The simplest kind of warp would be a rigid-body transformation applied to the coordinate frame; more general warps would be affine transformations; and even more general warps would be non-linear transformations, under the restriction that the warps are diffeomorphic — there is a one-to-one mapping between the spatial coordinates of the template image and the target image, and it is smooth and of course invertible.
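The warp-then-fuse recipe above can be sketched in a few lines. This is a minimal illustration, not the lecture's actual implementation; it assumes the k segmentations have already been warped into the target's coordinate frame, and the function name is mine:

```python
import numpy as np

def fuse_labels(warped_segs, weights=None, threshold=0.5):
    """Fuse k warped binary segmentations into one.

    warped_segs: array of shape (k, H, W), each a 0/1 label image
    already warped into the target's coordinate frame.
    weights: optional per-atlas weights (e.g. template-target
    similarity); uniform weighting if None.
    """
    warped_segs = np.asarray(warped_segs, dtype=float)
    k = warped_segs.shape[0]
    if weights is None:
        weights = np.ones(k)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    # Weighted average gives a fuzzy (probability-like) map ...
    fuzzy = np.tensordot(weights, warped_segs, axes=1)
    # ... which can be thresholded into a binary segmentation.
    return fuzzy, (fuzzy >= threshold).astype(np.uint8)
```

With uniform weights this is just majority voting; plugging in similarity-based weights gives the weighted label fusion variants mentioned in the lecture.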
Is there a prior on the shapes? Right now the framework I am talking about does not impose any prior information on the shapes of these objects, though in most cases it is assumed that they are compact. The transformation is always smooth, so if the labels vary smoothly over space, then after the transformation is applied the labels continue to vary smoothly over space — and yes, you can do better, of course. Yes, absolutely — that is the topic of interest in the subfield of image registration, or image warping. For instance, if you are warping two images of exactly the same modality — this is a CT image scanned with some parameter sequence, that is another CT image scanned with the same parameter sequence — then you may want to design a warp that minimizes the mean squared error between the images, integrated over the entire spatial domain. Now, if the images are of different modalities — this is an MRI, that is a CT — the appearance is completely different, and mean squared error may not work out very well, because dark intensities here may correspond to light intensities there, and so on. So you can use more general measures of similarity, say information-theoretic criteria like mutual information. Right — the hope is exactly that you have a large enough database that for any random person who comes in, like me, there are some people in the database whose spleen sizes and shapes match mine, so the matching works out well. And even if things are normal — well, you go to a hospital when there is always a suspicion of something not being normal.
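The mutual-information criterion just mentioned is easy to sketch; here is a minimal histogram-based plug-in estimate, a common textbook construction rather than any specific registration package's version:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Similarity between two images of possibly different modalities,
    estimated from the joint histogram of their intensities.
    High when the intensities of one image predict the other's,
    regardless of whether dark maps to dark or dark maps to light."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)  # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of image b
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Unlike mean squared error, this score does not drop when one modality's bright regions correspond to the other's dark regions, which is why it is favored for cross-modality registration.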
So let us say there is some risk of a person having a spleen disorder, and you want to find either the size of the spleen or the shape of the spleen in that person. To get such measures you first have to segment the spleen out, and you do not want to do that manually just because it is too difficult — the spleen is actually a 3D structure, since we are 3D, so manual segmentation is difficult. Good point. The thing is, in this part of the image — this part of the spleen, for instance — there is a very clear contrast between intensities inside and outside, so if I ask what the segmentation should be here, almost all of you are going to say this part is inside the spleen and that part is outside. But what about this region? Things are not so clear. We may make some assumptions; some of us may have more knowledge of the spleen than others, and so some may come up with better estimates than others. Even doctors: if you ask five doctors to segment the spleen using just this image and this information, they will come up with five different boundaries in this part of the spleen. So the question is, what is the ground truth? There is no real ground truth unless you really slice open the person and look at the spleen, and you do not want to do that. So, given that manual segmentation is tough, and automated segmentation is also not going to be super easy — as you suggested — to get a good segmentation of the spleen, especially in this part, you should be exploiting information about spleen shape and size. That information is implicitly encoded in this database.
So we want to take all that information in the database, map it to the target image space, and come up with a good decision. Right — you are going to use this information, the size and the shape of the spleen, to later give the physician some indication of whether this person's spleen size and shape are tending toward abnormal or not. Annotations, you mean? Maybe for this particular application; I will tell you another application where there may not be any annotations. Let us say there is some new spleen disorder that nobody knows how it will affect things — or forget disease; let us say that for some structure of interest we just do not know the normal variability of, say, its size and shape in the population. We just want to study that; let us say nobody did a quantitative study before. To study it, we have to segment the structure. How are you going to segment it? Manually is too difficult — 3D, large population. So you ask the experts to do a few segmentations, say on 50 images, and then use those to segment a large database, maybe 2000 images. Now you have segmentations on 2000 images, and you do your subsequent quantitative analysis: you measure size, you measure shape variability, and now you know how the size and shape of this particular structure vary across the population, which nobody ever measured before. Now you have a baseline of the variability for normal individuals. Now imagine there are individuals who are diseased.
They may have spleens of different sizes or different shapes. But now, given that you have a statistical model of the normal variability, you could use this model, for instance, to give a probability of a new person being an outlier. Or let us say you want to understand how a particular variant of some new disease affects the shape of the spleen — does it constrict this way or that way, or whatever. In that case you would want to process a large database of normal people's spleens and a large database of diseased people who have the disorder; now you have one statistical distribution for the control cases and one distribution for the diseased cases, and then you want to do some kind of hypothesis test and determine, first of all, whether there is a significant difference, and if so, of what kind, and so on. So underlying these higher-level applications is the basic task of segmentation. Yes — how will you choose the weights? I am coming to that. So this is what popular algorithms do now, but the problem is that, of course, there can be errors in segmentation — there is definitely inter-expert variability — and registration methods, depending on what method you use, what algorithm solves the optimization, what the objective function is, what the similarity measure is, and so on, will give you different answers. So there is some variation here as well, and there may be variation in nature itself. Before I go there: why are we exploiting image registration methods? We are exploiting them for the following reason, even though there are parts of the structure where the boundary is not easily identifiable.
There are structures around this part that are clearly identifiable — clearly identifiable not only in this image, but in all images in the database. So when we are trying to match template images in the database to the target image, the registration is going to be driven by features that are clearly seen in the image and that have corresponding features in every other image. The features that are clearly visible are going to get mapped well, and the labels for the parts of the image where features are not clearly visible are going to be interpolated implicitly, using the spatial regularity constraints that we impose on the deformation. The deformation maps this corner in this image to a similar corner in the other image, and that corner to that corner, and it says that if these two corners map to those two corners, then some point in the middle here has to map to some point in the middle there. Yes, right — the most trivial way of doing k-nearest-neighbors — expensive, but probably a good way — would be to register all templates in the database to the target, and only after registration pick the nearest ones. But of course this would be slow if you have a very large database and you are pressed for time, so you can use some engineering tricks to make the process faster. Right now, though, I am only talking at an abstract level — I have not dived into the main problem yet, I am still covering the background. So, the problem is this.
The problem is that even though nature has these two features corresponding in every image, the location of, say, this point on the boundary relative to those two features may have a natural stochastic variation across individuals. So there are different types of variability that we may come across, and the questions are how to deal with that variability, how to model it, and how to answer the questions that we want to answer. A few questions we want to answer: first, how to choose the number of atlases k based on the size of the database. It would make sense, for instance, to want the templates that are nearest to the target; it would probably do harm to take templates that are very far away, because maybe the registration did not work perfectly and the mapping between the labels given by registration is just not good enough. So how do you choose k? Then, once you have the number k and the k atlases to use — and there have been many works on this — how do you perform the label fusion? And then the most important question, because that is the one we tried to solve: how large a database would it take to keep the segmentation error below a specified tolerance level? Imagine, again, the application scenario: we do not want expert segmentation on a very large database of 10000 images — that might be overkill, since expert segmentations are expensive and laborious. We might, as I said before, only want the experts to first do segmentations on, say, 40 images, and then evaluate how the segmentation error decreases as the number of atlases in the database increases.
So we want a model for the segmentation error as a function of the number of atlases in the database, and we want to extrapolate this model — this curve — to the level where the error is below our specified tolerance, and then use the curve to look up how many atlases we would need. We want to make that prediction; that is the problem. Given the quality of the atlases — whatever quality: you give me atlases of this quality, for this class of images, for this particular anatomical structure, for this particular imaging modality, using this registration method and a particular label fusion method — under this specific scenario, I want to predict how many atlases I need. If somebody has to deploy this method in a hospital, for instance, they will be working with a very specific scenario. Of course, k varies; I am going to talk about how to choose k as a function of database size. Well — we do not know what a good image is, so k will depend on the class of images. Instead of one specific image, k depends on the class of images we want to apply this to, and let us assume all images in that class are equally good or bad. Correct — right now we are not tackling the problem of optimizing k for a specific image, only for a class of images, so on average that k will do well. Could you repeat that, please? Yes — both, both; I do not know exactly what active learning means here. We want similar images, yes — this is a very global average, yes indeed. The simplest scenario is to just assume everybody is in the same class.
So let us say there is no disease class, everybody is normal; the problem is difficult enough just because of normal variability. I will go into the mathematics of it and it will become clearer, but intuitively: let us say your database consists of images of everybody in this world — instead of 100 people in this city, you have images of everybody in the world. There will be a larger probability of finding templates that are closer to the target in a larger database. If we find an exact template, that is great; the more templates we find that are close to the target, the better we are going to do. Yes — and I will talk about how to optimize k for an arbitrary database size as well. Because we are allowed to re-represent each image in a different coordinate frame via a diffeomorphic transformation of the spatial domain, we assume each target represents an entire family of such images, and we assume the diffeomorphism is constrained, so that even though it is smooth, extreme warps are not allowed. So now I am starting to go into notation. Let f be a random vector that corresponds to the intensities in the deformed template, and s a random vector that corresponds to the intensities in the deformed segmentation image. We are given a database of size capital M consisting of pairs (f_m, s_m) of templates and segmentations; the target image is f_0, and we want to estimate the segmentation s_0 associated with f_0. The first thing we do is choose a regression function.
We choose the regression function to be the expectation of the conditional probability density function of the segmentation image given the template image; this is optimal for the mean-squared-error risk function. Having chosen the regression function, we choose a regression estimator r-hat that takes in the atlas database of size M and a target image f. Now we want to measure the performance of our algorithm under varying databases and database sizes, so we treat the database A_M also as a random variable. Our error, as motivated by our choice of regression function, is the standard mean squared error — the expectation over all target templates, target segmentations, and databases. We decompose the error, which is computed over the entire image, into errors at individual pixels; this is just a sum of squared errors over individual pixels. So E(M) is the global error, decomposed into E_v(M) for pixel v, and the error at pixel v can be written as a sum of three terms: one is the variance of the conditional pdf — the intrinsic variability in the data — and the other two are the standard squared bias of the estimator and the variance of the estimator. So even if our estimator converges to the regression function that we want it to approximate, we would still have some error, lower-bounded by the variance of the conditional pdf. Now we go on to choose a particular kind of regression estimator for the conditional expectation: the generalized k-nearest-neighbor regression estimator. Basically, what this does is the following — ignore phi of v for now. Given k, the number of nearest neighbors that we want to choose, r_k is the distance to the k-th nearest neighbor and w is a weighting function.
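The three-term decomposition just described is the standard bias–variance decomposition of the mean squared error; written out in the notation above (this is a sketch following the spoken description — the slide's exact symbols may differ), with $r(f_0) = \mathbb{E}[s \mid f_0]$ the regression function and $\hat r$ its estimator built from the random database $A_M$:

$$
E_v(M) \;=\; \underbrace{\mathbb{E}\!\left[\operatorname{Var}\!\left(s_v \mid f_0\right)\right]}_{\text{intrinsic variability}} \;+\; \underbrace{\mathbb{E}\!\left[\left(\mathbb{E}_{A_M}\!\left[\hat r(f_0)_v\right] - r(f_0)_v\right)^{2}\right]}_{\text{squared bias of the estimator}} \;+\; \underbrace{\mathbb{E}\!\left[\operatorname{Var}_{A_M}\!\left(\hat r(f_0)_v\right)\right]}_{\text{variance of the estimator}}
$$

where the outer expectations are over the target $f_0$. The first term does not depend on the estimator at all, which is why it remains as a floor on the error even when the estimator converges to the regression function.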
If a neighbor is within the set of the k nearest neighbors, the weighting function is 1; if it is outside that set, the weighting function is 0; and the regression estimate is nothing but the weighted average of the segmentations of the k nearest neighbors — the simplest thing we could use. Sorry, right — you can measure distances between two images in several ways. In general, the framework allows us to apply some linear or non-linear lifting map to our image features, go into some kernel feature space, and do the regression there. We use as the Mercer kernel the normalized cross-correlation on patches around the pixel of interest — so we are basically using normalized cross-correlation as our similarity metric. And no, by distance I do not mean similarity; distance would be the inverse of similarity. In this case the k nearest neighbors are the k most similar neighbors: maximum similarity translates to minimum distance. They are equivalent, but inverse. Oh yeah — no, these are distances, so the formula is correct; the formula implies distances, and to compute those distances we can use a kernel. Given a kernel, we implicitly have the lifting map phi defined, and what I am saying is that we take distances in the space phi(f). We can compute these distances using the kernel values themselves; it depends on the kernel. Right now we choose fairly simple kernels; we do not have a lifting map that lifts us into some infinite-dimensional space or anything, but you could do that — somebody could do that and come up with a better scheme. Yeah, correct — if f_k is the k-th nearest neighbor, yes; it is similar, they will have a similar effect.
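A minimal sketch of this generalized k-NN estimate at a single pixel, using normalized cross-correlation on patches as the similarity measure; the function names and the flattened-patch representation are mine, purely for illustration:

```python
import numpy as np

def ncc(p, q, eps=1e-8):
    """Normalized cross-correlation between two flattened patches.
    Ranges in [-1, 1]; high similarity means small distance."""
    p = (p - p.mean()) / (p.std() + eps)
    q = (q - q.mean()) / (q.std() + eps)
    return float(np.mean(p * q))

def knn_label_estimate(target_patch, template_patches, template_labels, k):
    """Generalized k-NN regression at one pixel: average the labels of
    the k templates whose patches are most similar (highest NCC, i.e.
    smallest distance in the lifted feature space) to the target patch.
    This is the 0/1 weighting function described above."""
    sims = np.array([ncc(target_patch, tp) for tp in template_patches])
    nearest = np.argsort(-sims)[:k]  # top-k by similarity
    return float(np.mean(np.asarray(template_labels, dtype=float)[nearest]))
```

Because NCC is invariant to affine intensity changes of a patch, two patches that differ only in brightness and contrast are treated as identical, which is the point of using it as the kernel here.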
Here I am going to show one way of tuning that k for a given problem — given problem meaning a given anatomical structure, imaging modality, registration method, and fusion method. Given this regression estimator, somebody else has, thankfully, worked out the bias and the variance as functions of several parameters, including the database size and the k in k-nearest-neighbors. Of course, the estimator bias also depends on the shape of the probability distribution of the images, the feature map that we are using, the dimensionality, and so on. It turns out that the bias is approximated by this form and the variance by that form. The main behavior of interest: the variance of the estimator dies down as a function of 1/k, scaled by the variance of the conditional pdf of the segmentation given the template, and the bias carries a multiplicative factor of (k/M) to the power 2/d — here d is the dimensionality of the image patches in theory, M is the size of the database, and k is the number of nearest neighbors. Now we choose a particular weighting function such that this term psi just amounts to the constant 1, and the expected error at each voxel can be represented in this parametric form: we have alpha_v, which is basically the expectation of the conditional variance over all f, times (1 + 1/k), plus some other constant beta_v multiplied by (k/M) to that power. This makes sense because, for instance, as the size of the database M goes to infinity, we can make k also go to infinity, but at a slower rate, so that the last term goes to 0.
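Collecting the terms as stated, the per-voxel error model takes the form (a sketch following the lecture's wording — the exponent is written here exactly as spoken, and the slide's normalization constants may differ):

$$
E_v(M, k) \;\approx\; \alpha_v \left(1 + \frac{1}{k}\right) \;+\; \beta_v \left(\frac{k}{M}\right)^{2/d},
$$

where $\alpha_v$ is the expected conditional variance of the segmentation given the template at voxel $v$, $\beta_v$ absorbs the complexity of the regression function and the image distribution, and $d$ is the patch dimensionality. Letting $M \to \infty$ and $k \to \infty$ with $k/M \to 0$ sends both the $1/k$ and $(k/M)^{2/d}$ terms to zero, leaving the floor $\alpha_v$.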
So that term goes to 0, and 1/k goes to 0 as well, and all we are left with is alpha_v — the intrinsic variability of the segmentations given the images; we just cannot do any better than that. This was our model at the per-pixel level; we sum it up over all pixels, and now we have a model for how good the segmentation is over the entire image. So the basic idea is: we have a model for the expected segmentation error. Given a training database, so to speak, we can compute these expected errors empirically — we use bootstrap sampling of the targets, the segmentations, and the databases — and for each database size M we get a curve of these errors, to which we fit this parametric model. We do that at every pixel v; the fitting problem is a standard non-linear curve-fitting problem. We would expect our parameters to vary smoothly over space, so we use a spatial smoothness prior on the estimated parameters — this is in the parameter-optimization, curve-fitting part; we are essentially fitting all curves together, because we can get stuck in local minima. Then, for any database size M, this parametric form gives us the error as a function of k, and we can easily choose k to minimize the error for any given database size M. Yes, good point — we could have computed a k for every pixel in the image separately, because we have a regressor at every pixel; we just did not do that, since practically we thought it makes more sense to choose one k for the entire image. And this is a good point too — there is no restriction that the same k images be picked for every pixel; only the value of k is the same.
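The fit-then-optimize-k procedure can be sketched with a standard non-linear least-squares routine. This is a toy version on synthetic data — no bootstrap sampling, spatial smoothness prior, or per-pixel decomposition — and the function names, starting values, and exponent (taken as 2/d, following the lecture's wording) are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def error_model(km, alpha, beta, d):
    """Parametric error model alpha*(1 + 1/k) + beta*(k/M)^(2/d).
    km is a (2, n) array stacking k values and database sizes M."""
    k, M = km
    return alpha * (1.0 + 1.0 / k) + beta * (k / M) ** (2.0 / d)

def fit_and_choose_k(k_obs, M_obs, err_obs, M_target, k_grid):
    """Fit (alpha, beta, d) to observed errors by non-linear least
    squares, then pick the k in k_grid minimizing the predicted error
    at database size M_target."""
    (alpha, beta, d), _ = curve_fit(
        error_model, np.vstack([k_obs, M_obs]), err_obs,
        p0=[0.1, 1.0, 4.0], maxfev=20000)
    km_target = np.vstack([k_grid, np.full_like(k_grid, float(M_target))])
    pred = error_model(km_target, alpha, beta, d)
    return (alpha, beta, d), k_grid[int(np.argmin(pred))]
```

Once the parameters are fitted, the same parametric form serves double duty: it extrapolates the error to unseen database sizes, and its minimizer over k gives the tuned neighborhood size for each size M.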
The initial broad framework that I mentioned was just an umbrella framework; within it there are several engineering issues. In this case we are not picking the same k neighbors for every pixel — the neighbors differ per pixel, but the value of k is the same — and, as I said, we could even have chosen a separate value of k for every pixel; we thought that was overkill. All right: results. We took a large database of manually segmented images. Instead of looking at the raw error, we thought it would be more practical to look at relative error, scaled by the size of the object; this has some useful semantics relative to other performance-evaluation metrics that are more popular in the literature. Apart from sampling the target images and the smaller databases of size M, we also do bootstrap sampling on the available database itself, to know the variability of our curve fitting and prediction under variations in the available database. So here are the curves we get. First we are just interested in whether the model fits the data well, so we fix k to 8, and for that k we evaluate the mean squared errors for various database sizes; these are our box plots. The box plots come from the sampling on the larger available database. For every curve we do a parametric fit — one such fit for every sampled available database of size 186 — and we see that the fitted curves match the mean squared errors reasonably well. As you can imagine, our parameter fitting happens on a per-pixel basis; remember, alpha_v was the lowest error that we could get at any pixel v.
So, here the object of interest is the so-called thalamus, which basically has its boundary there. We can see that the left and bottom-left parts of the boundary are relatively easier to determine than the right part. Of course, segmentation is in general also an easier task for pixels well inside the object and pixels well outside it; there it is an easy regression problem, so the errors are really low, and most of the high errors are in those parts of the boundary where the contrast is low or the registration did not work well, and so on. So this is intuitive. Similarly, the parameter beta represents the complexity of the regression function, the probability distribution of the images, and so on, and again the complexity of the multi-atlas segmentation task is much larger for that part of the boundary which is not easy to see. And D is the dimension. Now, the theory would say that the parameter D is the dimension of the image patches that we take, but in practice, when we fit D to the mean squared error terms as well, D actually comes out close to the intrinsic dimension of the data. This is for k-nearest-neighbor regressors, and there is related work reporting similar results for kernel-based regression: the convergence rates go down not as a function of the dimensionality of the representation of the data, but as a function of the intrinsic dimension of the data. This is what we see as well, and we find this intrinsic dimension somewhere around 4, 6, or 8. So now we go to the prediction task that we need to solve. Here we treat the same curves that we had on the last slide as our baseline curves. Now we sample databases of size 40 from this large database.
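The per-pixel k-nearest-neighbor regressor underlying all of this can be sketched in a few lines. The setup here is an assumption about the described method: each atlas contributes a local image patch and a label at the pixel, and the target's soft label is the mean label over the k atlases with the nearest patches.

```python
import numpy as np

# Minimal sketch of per-pixel k-NN label regression (assumed setup: one
# patch + label per atlas at this pixel; output is a fuzzy label in [0, 1]).
def knn_label(target_patch, atlas_patches, atlas_labels, k):
    dists = np.linalg.norm(atlas_patches - target_patch, axis=1)
    nearest = np.argsort(dists)[:k]     # indices of the k closest patches
    return float(np.mean(atlas_labels[nearest]))

rng = np.random.default_rng(1)
# Toy atlases: 3x3 patches near intensity 0 are background (label 0),
# patches near intensity 1 are foreground (label 1).
patches = np.vstack([rng.normal(0.0, 0.1, (50, 9)),
                     rng.normal(1.0, 0.1, (50, 9))])
labels = np.array([0.0] * 50 + [1.0] * 50)

# A target patch that looks like foreground gets a high soft label.
prob = knn_label(np.full(9, 1.0), patches, labels, k=8)
```

The dimension D in the theory refers to the patch dimension (9 in this toy example), but as noted above, the fitted D tends toward the much lower intrinsic dimension of the patch data.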
Now, using only a database of size 40, we get these box plots; they go only up to 40. We fit our parametric model to the errors available only up to database size 40, then extrapolate the model and see how close it comes to the curves we fitted using the entire database. What we want to say here is that the predictions using a database of size 40 come close to what we actually observe using the entire database. On top of that, we can optimize k for this class of multi-atlas segmentation problems for any particular size m, and, consistent with what the theory predicts, k does increase as the size of the atlas database increases; this is the curve. We had, somewhat luckily, picked our initial k to be 8 in the first set of experiments, and that turns out not to be completely off, so things did not work out that badly. Then we can do the same prediction using the optimal values of k. It turns out that, since the variation of the optimal k on our database is not large, it goes roughly from 4 to 14, changing the fixed k of around 8 does not give us much improvement in the errors here; maybe on some other data set it would have. Now, we compare these results to work done a few years ago by others, who had a much simpler, more intuitively derived model, which predicted another similarity measure as this particular function, a minus b over the square root of m, where m is the atlas database size. We ran their predictions using the same databases that we had, and we saw that their predictions were more off than their model fits using the entire database. So, that is all I have. Yes? A different image, okay.
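The extrapolation experiment can be sketched on synthetic data: fit both a model of the talk's kind and the simpler a − b/√m baseline using only small database sizes, then compare their predictions at the largest size. Both model forms here are stand-ins (the talk's exact parametric family is not reproduced), and the "true" curve is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def full_model(m, alpha, beta):      # stand-in for the talk's model family
    return alpha + beta / m

def sqrt_model(m, a, b):             # the simpler baseline: a - b/sqrt(m)
    return a - b / np.sqrt(m)

rng = np.random.default_rng(2)
m_all = np.arange(5.0, 187.0, 5.0)
truth = 0.05 + 0.8 / m_all           # synthetic "true" error-vs-m curve
noisy = truth + rng.normal(0.0, 1e-3, m_all.shape)

small = m_all <= 40                  # fit using only database sizes up to 40
p_full, _ = curve_fit(full_model, m_all[small], noisy[small])
p_sqrt, _ = curve_fit(sqrt_model, m_all[small], noisy[small])

# Extrapolation error of each fitted model at the largest database size
err_full = abs(full_model(m_all[-1], *p_full) - truth[-1])
err_sqrt = abs(sqrt_model(m_all[-1], *p_sqrt) - truth[-1])
```

When the true decay is of the 1/m kind, the mismatched √m baseline extrapolates noticeably worse, which mirrors the comparison reported above.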
So, we are registering, aligning each template in the database to the target. This alignment process makes sure that all images are in the same coordinate frame: they are aligned rotationally, translationally, and in scale; in fact, they are non-linearly aligned, so it is a little more than that. Yes, sure, there are such cases; it is just that such a database was not available to us, so we did this, but the theory is general enough to handle such cases as well. So, if instead of one expert you ask ten experts to segment the same image, you will get ten different answers, and there are methods to convert them. In this framework we keep those ten different answers exactly as they are; you could instead have converted those ten different answers into a fuzzy segmentation image and used that. But usually experts do not produce fuzzy segmentations: assigning a label to each pixel, object or not, is itself a laborious task, and if you tell them that on top of that they have to give a probability, they are not going to do it. Anything else? Yes, you could apply this to anything; the math is general enough. I am not looking into that, but if you are, then let me know. Yes, we assume that. Typically, registration methods may do some preprocessing, and we do not do anything beyond what the registration methods are doing. Different sizes: the alignment takes care of the size issue, since we are doing a non-linear warp. So if one person has a larger spleen than another, and I map the larger spleen to my spleen, the mapping is going to do the appropriate shrinking. This relates to her question as well.
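The alternative mentioned above, collapsing several experts' answers into one fuzzy segmentation, amounts to a per-pixel average of binary label maps. The expert maps below are synthetic (an assumption for illustration): each expert flips a few pixels of a common consensus at random.

```python
import numpy as np

rng = np.random.default_rng(3)

# A synthetic "consensus" object on an 8x8 grid (assumption for illustration)
consensus = np.zeros((8, 8))
consensus[2:6, 2:6] = 1.0

# Ten synthetic experts: each disagrees with the consensus on ~10% of pixels
experts = np.stack([
    np.where(rng.random((8, 8)) < 0.1, 1.0 - consensus, consensus)
    for _ in range(10)
])

# Fuzzy segmentation: per-pixel probability of "object" across experts
fuzzy = experts.mean(axis=0)
```

The resulting probability image can then be used in place of the ten separate binary maps, exactly as the answer above suggests.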
So, there are no issues of scale effects that are not accounted for, or rotations that are not accounted for, or anything like that; all of that is taken care of. You are always in the same coordinate frame.