So today we will talk about 3D analysis, more specifically with the 3D ImageJ Suite. I am an associate professor in Paris, but I am visiting a bit around the world: I was in Singapore, and now I am in Taipei, in Taiwan. I would like to thank the NEUBIAS Academy for this nice initiative and for giving me the opportunity to present my work. So why did we start this 3D ImageJ Suite? It is quite old software, actually; it started in 2006. The question was quite simple: how to analyze distances between genes in fluorescence in situ hybridization (FISH) images. We wanted accurate measurements, and many measurements for good statistics, so we needed to develop 3D algorithms and tools for this. The 3D ImageJ Suite is a suite because it is a set of many algorithms and tools. The idea is to package the algorithms in a core library called mcib3d-core, with everything related to 3D image processing and 3D image analysis. On another layer, it is more about a population of objects inside an image and how to analyze that population. We also have plugins, because we want our algorithms to be used by people, so we package them as plugin tools: the user sees the plugins, and the plugins call the core algorithms. We have four different kinds of plugins. It started with processing, like the 3D filters, which people may know. We have a couple of 3D algorithms for segmentation. But we focus mostly on analysis, especially 3D analysis; this is the strength of the package. Note that many of the 3D analysis algorithms may also apply to 2D, but it is not guaranteed: you have to check whether they work in 2D. And some utilities are mostly for displaying 3D objects. Everything is open source; you can go to GitHub to get both the core and the plugins. If you are a developer, I suggest starting with the plugins, to see which classes of the core they use and how those classes are called inside the plugins. I would like to acknowledge the people who developed the 3D ImageJ Suite. I started with Cédric on this distances-between-genes project. The second version was with Jean, who developed the TANGO software, a very integrated software for 3D analysis of nuclear organization; a lot of the core processing and analysis algorithms were developed for TANGO and then packaged in a more generic way in the 3D ImageJ Suite. My role is more the supervision of these young people, plus maintenance: trying to debug and to add new features, which is quite a time-consuming task. I also receive a lot of questions, so I try to write documentation. I apologize that the documentation is not complete or fully detailed; I will try to answer many questions in this webinar. I would like to thank Philippe Andrey for nice ideas and discussions about analysis, especially statistical analysis and mathematical morphology. Also, a lot of PhD students developed these kinds of tools and algorithms. So the documentation is not complete, but it is a good start if you want to know a bit more about the 3D ImageJ Suite. We now have a new page on the imagej.net website that mirrors the original documentation on the ImageJ wiki. So why do we need specific algorithms for 3D?
Of course, one problem is the anisotropy in Z: the size of the voxel in XY and in Z is quite different; it can be three or four times larger in Z than in XY. You really need to take this into account when you do analysis, and also in your choice of processing. Also, because acquisition is done slice by slice, you tend to have more noise in 3D images than in 2D images, which are basically projections and therefore less noisy. And of course shapes are more complicated in 3D; everything is a bit more complicated. You also have many more voxels to process in 3D, so you need robust and fast algorithms to process so many of them.

The 3D ImageJ Suite is not the only tool or suite for 3D analysis. There is the legacy plugin 3D Objects Counter by Fabrice Cordelières; this is still a great plugin, doing segmentation and analysis at the same time, and it works quite well. You also have BoneJ and MorphoLibJ, where you will find analysis algorithms quite similar to those of the 3D ImageJ Suite, maybe with slight differences in the details and with a different focus: BoneJ focuses more on the analysis of bone, which I do not know much about, and MorphoLibJ focuses more on mathematical morphology. Icy is quite great for visualization; I use it a lot for that. And there are also a lot of nice plugins elsewhere, especially for 3D spatial statistics with SODA, and tracking tools. So the 3D Suite is quite old, but it is quite complementary to the other tools in ImageJ.

It started with filtering, because at the beginning ImageJ was mostly 2D, and we also wanted to do 3D filtering: from one voxel, take the neighborhood in 3D, not slice by slice in 2D. If the noise is 3D, you want to average it in a 3D sphere or ellipsoid, not in a 2D slice-by-slice manner. For 3D filtering of fluorescence images we mostly use the median filter; it works quite well. Basically, you take one voxel, take the neighboring voxels in a 3D ellipsoid (so the radius may be different in X, Y and Z), and compute the median or the mean. You can also use the classical minimum and maximum filters. The Gaussian filter is not part of the 3D ImageJ Suite, but it is also a great 3D filter to use.

If you want to detect spot-like objects, which is what we developed these tools for at the beginning (detecting spots in FISH images), we mostly use the top-hat. The top-hat estimates the background of the image and then takes the difference between the image and that background, so you keep only the spots.

The 3D filters live in a plugin called 3D Fast Filters. It is fast because it is multi-threaded: if you have 10 CPUs, the slices are divided into 10 blocks and each CPU filters one tenth of the image, so with many CPUs it can go quite fast. The other thing is that the neighborhood can be an ellipsoid, a cube, or other shapes; usually you use a larger radius in X and Y and a smaller one in Z, to respect the anisotropy of the image. Note that the built-in 3D filters from the ImageJ menu (mean, median, minimum, maximum) are actually the same as the 3D Fast Filters. A small sketch of this kind of anisotropic filtering follows.
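Outside ImageJ, the same two operations can be sketched in a few lines with SciPy. This is only a minimal illustration (not the plugin's code), and the radii are made-up values you should adapt to your voxel size:

```python
import numpy as np
from scipy import ndimage as ndi

def ellipsoid_footprint(rx, ry, rz):
    """Boolean ellipsoid with different radii (in voxels) along x, y, z."""
    zz, yy, xx = np.mgrid[-rz:rz + 1, -ry:ry + 1, -rx:rx + 1]
    return (xx / rx) ** 2 + (yy / ry) ** 2 + (zz / rz) ** 2 <= 1.0

# Stand-in stack (z, y, x); replace with your own image.
rng = np.random.default_rng(0)
img = rng.random((32, 128, 128)).astype(np.float32)

# Median in an anisotropic ellipsoid: smaller radius in z for the anisotropy.
fp = ellipsoid_footprint(rx=4, ry=4, rz=2)
denoised = ndi.median_filter(img, footprint=fp)

# Top-hat: subtract an estimate of the smooth background, keeping the spots.
spots = ndi.white_tophat(denoised, footprint=ellipsoid_footprint(8, 8, 4))
```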
But if you use this 3D Fast Filters plugin with a very large radius, like 10 or 20, it will be very, very slow; for very big images it can take 10 minutes or an hour, depending on the radius and the size of the image. Now you have a faster version with CLIJ. This one uses GPU processing, so it is very, very fast. The only drawback is that your image needs to fit into the memory of the GPU: with a normal GPU of 2, 3 or 4 gigabytes, your image must fit in that memory. With 3D Fast Filters, your image only has to fit in the RAM of your computer, so you can process larger images. Anyway, CLIJ is a great plugin.

Some plugins people may be less aware of are the 3D edge and symmetry filters, which can also be used to detect spots and to detect seeds inside objects. If you have an image like this, you detect the edges; the edge value is just computed from the differences between one voxel and its neighbors in X, Y and Z, which you combine. The idea is that you can use the edges to detect seeds, because the seed should be at the center of all the edges, and you can cast rays from the edges toward the center. So once you detect the edges, you cast rays from the high-value edges, accumulate values at the center, and get something like this image; then you filter a bit to get a smoother version of the center. This edge-symmetry approach was described in this paper; it is actually quite good at detecting the centers of round objects in images.

For segmentation, to detect seeds, we of course have the classical 3D local maxima: you detect the highest value in a neighborhood. This idea of detecting seeds is very important for segmentation: basically, one seed means you will get one object. This matters a lot when objects are crowded or touching; the idea is really one seed, one object. You can compute 3D local maxima with the 3D Fast Filters. There is also the 3D Maxima Finder, which is similar to Find Maxima in ImageJ but for 3D images; of course it is slower because it is 3D, but you can detect 3D local maxima more accurately. In this image we detect the 3D local maxima. If you have noise, you may detect several local maxima right next to each other; with the Maxima Finder you keep only the best seed, the best local maximum, for each object (see the sketch below). Once you have detected the seeds, you can segment all the objects quite easily.

This image is quite interesting; maybe I will keep this one and come back to it. For this image, if you use a single global threshold, you may detect only the brightest object; you then need to lower the threshold to detect the dimmer objects, but you will get a lot of noise. So the idea is to try all the thresholds: with a very high threshold you detect one object, with a lower threshold you detect others but also noise everywhere. For each threshold you detect different objects, and when an object looks good at some threshold, you keep it. With this iterative procedure, you can detect objects with different intensities, each with its own threshold.
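A minimal sketch of the seed-detection idea, with scikit-image on a synthetic stack (my own illustration; the parameter values are placeholders):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max

# Synthetic stack: blurred bright points standing in for spots.
rng = np.random.default_rng(0)
img = np.zeros((32, 128, 128), dtype=np.float32)
pts = rng.integers((4, 8, 8), (28, 120, 120), size=(20, 3))
img[tuple(pts.T)] = 1000.0
img = ndi.gaussian_filter(img, sigma=(1, 3, 3))
img += rng.normal(0.0, 0.2, img.shape)

# One seed per object: keep only the strongest maximum within min_distance
# ("one seed, one object" for crowded spots); threshold_abs rejects noise.
seeds = peak_local_max(img, min_distance=5, threshold_abs=2.0)
print(seeds.shape)  # (N, 3) array of (z, y, x) seed coordinates
```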
So you will get something like this: it detects all the objects, but eventually also the noise, because with a very low threshold you detect very small things that are actually just noise inside the image. So the idea is to have a range of permitted sizes for the objects: you discard too-small objects and too-large objects, and then you get a nice detection of all the objects inside this image. This plugin is called 3D Iterative Thresholding: you can detect objects of different sizes with different thresholds, and you use the size to filter which objects to keep.

Related to this idea of many thresholds, we also have the hysteresis. The hysteresis uses two thresholds instead of one. In this image (this is the mask, so I do not need the raw data here), one low threshold gives you everything in gray, and what I put in white is a second, higher threshold where you are essentially 100% sure it is an object; the gray may be part of an object, or maybe just noise. So with these two thresholds you get three areas: the background, in black; the "maybe an object, maybe noise", in gray; and the white, where the very high threshold ensures it is real signal. Then the rule is simple: any gray region connected to a white voxel becomes entirely white. So this region becomes white, and this one too; here you have a very faint, very small area of high-confidence voxels, but it connects this whole region. And in this region there is no high-confidence white voxel, so you discard this object, and this one, and this one. So you get a nice segmentation for this image. The iterative thresholding is the same idea, but with many thresholds instead of two: with 8-bit images you can go up to 255 thresholds; with 16-bit images it is a bit more complicated, because you do not want to test all 65,535 thresholds, so you have to find a compromise, but you can use it on 16-bit images too. A sketch of both strategies follows after this paragraph.

The point of having two thresholds, or iterative thresholds, is to separate touching objects, which is a big problem in biology. These two images are actually the same image, only displayed with different contrast. With this contrast, the two nuclei in division appear as only one object; if we adjust the contrast, we see it is actually two objects. So you need two thresholds: with one threshold you detect a big merged blob here, and with another threshold you detect two separate objects. Using some criterion, you keep the two separated objects instead of the blob. The criterion can be the size, the shape, or the edges. Because you use many thresholds, you need a criterion to choose, for each object, the threshold that detects it best.
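Here is a minimal sketch of the two strategies, much simplified compared to the actual 3D Iterative Thresholding plugin (which tracks objects across all thresholds and scores them with a criterion such as volume, elongation, or edges); the thresholds and size bounds are placeholders:

```python
import numpy as np
from skimage.filters import apply_hysteresis_threshold
from skimage.measure import label, regionprops

def hysteresis_segment(img, low, high):
    # Gray voxels (>= low) survive only if their connected region
    # also contains at least one "sure" voxel (>= high).
    return label(apply_hysteresis_threshold(img, low, high))

def iterative_threshold(img, thresholds, vmin, vmax):
    # For each threshold, keep each object the first time it shows up
    # with an allowed size; too small (noise) or too big (merged) is skipped.
    kept = np.zeros(img.shape, dtype=np.int32)
    next_id = 1
    for t in sorted(thresholds):
        lbl = label(img > t)
        for r in regionprops(lbl):
            if vmin <= r.area <= vmax:
                region = lbl == r.label
                if not kept[region].any():
                    kept[region] = next_id
                    next_id += 1
    return kept
```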
Something very classical for segmentation is the watershed, which I think was described quite nicely by David Legland during the MorphoLibJ webinar. The watershed basically takes seeds and then clusters voxels around the seeds, trying to separate zones based on the seeds. In this image you see some seeds that may actually correspond to nuclei; you detect the seeds and then create zones around them. So here we just detect the local maxima and then cluster the voxels around each seed, and we get something like this. It is far from perfect, as you can see, but in some cases it can be quite useful. For this kind of image, though, I would recommend more advanced, machine-learning segmentation, like Weka or StarDist; StarDist is very, very good at detecting 3D nuclei in tissue, which is very difficult. A sketch of the seeded watershed follows.

For spot segmentation we have also developed some basic tools and algorithms. The idea is always to detect the seeds first, with local maxima and so on. Then, around each seed, we build layers and draw the profile of the decreasing fluorescence around the seed. It should look roughly like a Gaussian, so we fit a Gaussian and derive a threshold from it. Each object has its own local fluorescence decay, so each seed gets its own local threshold, and this object and that object are segmented as two separate objects.

For nuclei segmentation, we worked on the problem of 3D nuclei segmentation. In 2D it is not too difficult to segment this kind of object: you do a normal thresholding and then a normal 2D binary watershed, and you can separate the objects; in 2D this works quite well. So the idea for 3D was to take the 3D image, do a maximum projection onto a 2D plane, do the segmentation and the separation in 2D, and then extend the segmentation back into 3D. You get the 3D nuclei separated in 3D based on a 2D segmentation. But of course, nowadays you would just use the newer machine learning and deep learning tools, which work very well.

After segmentation, the result is sometimes not perfect and you may want to improve it a bit. This is where 3D mathematical morphology algorithms and tools can be very useful. The basic operations are erosion and dilation: erosion shrinks your object, dilation dilates it. In terms of filters on a binary image, erosion is a minimum filter and dilation is a maximum filter. To remove small regions, you use what is called opening: you erode first, which removes the small regions, and then dilate again. To close small holes, you use closing: you dilate, so the object gets bigger and the holes are closed, and then you erode again to get back to the normal size. We use this closing a lot to improve segmentation results. But there is one problem with closing: if two objects are very, very close and you dilate both, they may touch, and you do not want that, because you want one object to stay one object.
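A minimal sketch of the seeded watershed with scikit-image, on two synthetic touching spheres (an illustration, not the plugin's code):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Two touching synthetic spheres to separate.
zz, yy, xx = np.mgrid[0:40, 0:80, 0:80]
mask = ((xx - 30) ** 2 + (yy - 40) ** 2 + (zz - 20) ** 2 < 14 ** 2) | \
       ((xx - 52) ** 2 + (yy - 40) ** 2 + (zz - 20) ** 2 < 14 ** 2)

# Seeds = local maxima of the distance map (the object centers),
# then cluster the voxels around each seed with the watershed.
dist = ndi.distance_transform_edt(mask)
peaks = peak_local_max(dist, min_distance=10, labels=mask)
seeds = np.zeros(mask.shape, dtype=np.int32)
seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-dist, markers=seeds, mask=mask)  # one region per seed
```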
So we have this plugin, this algorithm, Binary Close Labels, which is actually very useful. It takes the first object, crops it, does the dilation and erosion (the closing), and puts it back into the image; the same for the next object: crop it, close it, put it back. At the end you get your three objects, each with an improved shape from the closing, and you do not have to worry about them merging (a minimal sketch appears after the questions below). So this ends the first part, which was mostly processing and segmentation. Any burning question from the panelists?

Yes. There are two questions which are a little bit similar, I think, related to this iterative thresholding. You also mentioned local thresholding around seeds, and in ImageJ there is also the Auto Local Threshold. So the question is a little bit: how does iterative thresholding compare to finding a threshold locally?

It is completely different, actually. Finding a local threshold means you first need to find a seed and then find a local threshold around that seed. For iterative thresholding there is no seed: it just asks, with threshold 10, what objects can I find? With threshold 20, what objects can I find? If an object's size is OK, I keep it, and so on for threshold 30, 40. There is no notion of a seed; it is like what you would do manually, moving the threshold and noticing that with this threshold I can find this object and with that threshold I can find that object. Iterative thresholding does this for you, testing all the thresholds.

But the Auto Local Threshold in ImageJ also does not need seeds, I think; it is a sliding window that tries to find a threshold for small regions.

Yes, but that one works more on patches: on this patch it will compute, say, Otsu, then move to the next patch. It works reasonably well, but since it is patch-based, if your object sits right between two patches it may not work so well; the iterative approach is a good alternative there.

I see, that is a good argument, yes. Okay, there is one more open question, maybe I should read it. In one of your slides there was a title with "deconvolved" in it, and it triggered a question: can I do filtering on the deconvolved 3D image, or should I filter the raw 3D image directly? Otherwise the only thing I get is noise amplification.

Okay, I think it is a bit of a philosophical question, so I do not want to be too philosophical. For me, deconvolution is like the best filtering ever, because you know the noise and you know how the image was formed. So if you can do deconvolution, do deconvolution, and maybe you will not need any filtering. In fact, if the contrast of your image is good, do not do any filtering at all: if you can segment your objects without filtering, just do the segmentation. Filtering is really for when your image is very noisy and there is no deconvolution available, or when you need some specific processing. So yes, for me deconvolution is a kind of filtering, but I do not want to go deep into philosophical arguments.

Okay, thank you. Thank you.
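Returning to the per-object closing described before the questions, here is a minimal SciPy sketch of the idea (my own illustration, not the plugin's code): each label is cropped, closed independently, and pasted back, so dilating one object can never merge it with a close neighbor.

```python
import numpy as np
from scipy import ndimage as ndi

def close_labels(labels, iterations=2):
    out = np.zeros_like(labels)
    for lab, sl in enumerate(ndi.find_objects(labels), start=1):
        if sl is None:
            continue
        # Pad the crop so the dilation has room to grow.
        pad = iterations + 1
        sl = tuple(slice(max(0, s.start - pad), s.stop + pad) for s in sl)
        crop = labels[sl] == lab
        closed = ndi.binary_closing(crop, iterations=iterations)
        out[sl][closed] = lab   # paste back; each object was closed alone
    return out
```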
And I think you can go on. Okay, thank you. So, the 3D Manager. The 3D Manager is a tool like the 2D ROI Manager of ImageJ, but for 3D, because when you detect objects in 3D and put them into the ROI Manager it becomes a bit tricky: you have different ROIs for different slices. We were not very happy with the 2D ROI Manager for this, so we developed this 3D Manager, which is basically the central tool of the 3D ImageJ Suite: you can visualize your objects and do some analysis. The 3D Suite and the 3D Manager really work together.

So the 3D Manager is a manager for 3D objects. The idea is to take a label image, a segmented image, and add it to the 3D Manager; you then get the list of 3D objects, or 3D ROIs, call them what you want. You can save and load this set of 3D objects as a zip file; it is a specific format, not the standard ImageJ ROI format. A lot of people, myself included, use this 3D Manager for visualization: you do the segmentation, and then you want to check whether your segmentation fits the raw data nicely, to check its quality. I use the 3D Manager for this, and also to quickly check, for a given object, what the best measurement might be. For the 3D visualization, the idea is that for each slice a ROI is drawn, so you can scroll through the slices and see the segmentation slice by slice.

Something people may not be aware of, because there is no documentation: you can do a manual classification. You say, this is an object of type one, this is an object of type two; a simple manual labeling, and in some cases it can be quite useful. You also have all the measurements available: like Set Measurements or the 3D Objects Counter options, there is a 3D Manager Options dialog where you can choose which measurements to perform on the objects. The 3D Manager and most of the plugins are of course macro-recordable, and there are also macro extensions, so you can use the 3D Manager for more extensive automated 3D analysis, but I will not detail that here; maybe it will be part of another seminar later.

So the 3D Manager is just a list of objects. You have one image, the raw data; you do some magic of segmentation and get the segmented image; you click Add Image, and you get the objects, named by default object one, two, three, four, where the number is the pixel value of the object inside the label image. If you have a hyperstack, 4D data, it will just use the channel and the frame currently displayed. There is also a Segment button, but it is very simple: from a binary image it extracts and labels the objects; a very simple thresholding and labeling algorithm.

Then you have Edit: you can change the name of an object, or select all the objects and rename them all. You can delete objects from the list, erase objects from the original image, merge two objects, or split one object in two; be careful with split, sometimes it works, sometimes it does not.
If your objects are really too close, it may work, but it is not guaranteed. Here we have all the measurements; all of them are also available as plugins, with exactly the same algorithms. Measure is the geometrical measurements. Quantify is the intensity: it quantifies the intensity of the objects in this image or in any other image. Distance gives the distances between objects, and Angle the angle between three objects, which can be useful from time to time. Colocalization is for when you have one set of objects coming from one channel and another set coming from another channel, and you put everything together and run a colocalization. List Voxels: you select one or several objects, and you get the list of all their voxels, X, Y, Z, plus the value. You can use it on the raw image, which is more interesting: you get the X, Y, Z of all the voxels of the object and their values inside that image, if you want to do more advanced analysis of the values. A small sketch of this cross-channel quantification follows.
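A minimal sketch of the quantification idea: intensity statistics of labeled 3D objects measured in another channel, as the 3D Manager does when you select a different current image before measuring (tiny hypothetical label and intensity stacks, my own illustration):

```python
import numpy as np
from scipy import ndimage as ndi

# Hypothetical labeled nuclei and a second raw channel of the same shape.
rng = np.random.default_rng(1)
channel2 = rng.poisson(50, size=(40, 80, 80)).astype(np.float32)
labels = np.zeros((40, 80, 80), dtype=np.int32)
labels[10:20, 10:30, 10:30] = 1
labels[10:20, 40:60, 40:60] = 2

ids = np.arange(1, labels.max() + 1)
means = ndi.mean(channel2, labels=labels, index=ids)
stds = ndi.standard_deviation(channel2, labels=labels, index=ids)
mins = ndi.minimum(channel2, labels=labels, index=ids)
maxs = ndi.maximum(channel2, labels=labels, index=ids)
integrated = ndi.sum_labels(channel2, labels=labels, index=ids)  # RawIntDen
print(means, integrated)
```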
The next part is more for visualization. You select one or several objects; if you do not select any object, that means all objects are selected. So if no object is selected and you click Measure, it will measure all the objects: no selection and all selected are the same. 3D Viewer: you select objects and transfer them to the ImageJ 3D Viewer. Fill Stack: you transfer them to the open stack; so if you want a new image containing only object one and object three, just create a new image and click Fill Stack. These two, 3D Viewer and Fill Stack, use the current color of the ImageJ color picker: you first choose the color, and the objects are displayed with it in the 3D Viewer or filled with it in the stack. Select All, Deselect, okay. Since drawing the ROIs can be quite slow when you have many objects or many slices, you can switch off the live visualization for this kind of image. Label just puts the name of each object next to it in the currently open image. Load and Save handle the list of objects; you do not need the label image afterwards, just the list of objects. I think this is something a bit special about the 3D Manager: you do not need the original label image, only the list of objects. Options takes you directly to the choice of what you can measure, plus information about the authors and a link to the documentation; for example, you click Measure and you get all the geometrical measurements about the objects.

For the visualization, you select the objects you want to visualize; selecting none means all. In this example I put three different channels: in the first channel you have one nucleus; in the second channel, three larger biological structures; and then a lot of spots. So you can have more than one label image in one 3D Manager, simply by using Add Image several times; it is then up to you to decide what to measure and what to visualize. The set of ROIs is displayed on the current slice. For the visualization you can show the contours; this is the most used option. You can also visualize only the center of each object, as a small sphere at the center, or the bounding box of the object. One problem is inclusion: here I selected all the objects, but I have one big object that includes all the smaller objects. Since the display is based on 2D ROIs per slice, the bigger object will kind of hide all the objects inside, and I will only see the outer object. If you want to see the objects inside, you must not display the larger object. This may change in the future, but it is something you should be aware of. Also, if two objects are touching, you will not see the separation between them; you will just see one continuous contour. So be careful with this; it is just how the ROI display currently works, and it may change in the future.

For visualization, most people, myself included, use this as a complement to the 3D Viewer. You can select the color used to display each object: a random color, a color based on the volume or on any measurement of the object, or on a classification. Basically, I use the 3D Manager a lot to visualize my objects colored by different information. There will be a nice macro to do this kind of thing very soon.

For analysis, the available measurements are: geometrical measurements, like the volume and the shape (ellipsoid fitting and this kind of measurement, plus the compactness); intensity measurements inside the different objects; object numbering, if you have two channels or objects inside other objects and you just want to count, say, how many spots are inside my nucleus; relationships between objects, which I call localization; distances between objects; and of course the angle between the centers of three objects. For the center there is no big problem: it is just the center of the object. And the volume, no big problem either: it is just the number of voxels. Here is the 2D version and the 3D version: my voxels have a size of 1 by 1 by 2 micrometers. I have five voxels, so the volume is five voxels, and in micrometers cubed it is 10. No problem with that.

But there is a big problem with the surface. What is the surface of this thing? I have no idea. The same in 2D: what is the perimeter of this thing? I have no idea. There are many algorithms, many implementations of how to compute the perimeter and the surface, so you choose the one you want. I am not very confident in the meaning of the surface area of a 3D object, especially for small objects, so I do not really use this information. But if you want to know: counting faces, you just count how many exposed faces you can see, one, two, three, four, up to 22 faces; this is the surface in voxels, whatever meaning that has. Then the surface in units: a face in the x-y plane has an area of 1 square micrometer, while a face in the x-z or y-z plane has an area of 2 square micrometers, because the voxel is twice as deep in z. So you can sum the unit face areas (a sketch of this face counting follows). And then there is a corrected surface, 47 here, which is like a smoothed estimate; that one is maybe not so stupid. It is based on this paper about surface area estimation in binary images, because here we do not have a real object, just a bunch of boxes. This corrected surface actually gives maybe the most interesting results.
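A minimal sketch of the "surface in voxels / in units" idea: count the exposed faces of a binary object, weighting each face by its physical area for anisotropic voxels, here the 1 x 1 x 2 micrometer example (my own illustration):

```python
import numpy as np

def voxel_surface(mask, dx=1.0, dy=1.0, dz=2.0):
    m = np.pad(mask.astype(np.int8), 1)   # pad so border faces count as exposed
    faces = 0.0
    # Faces exposed along z have area dx*dy; along y: dx*dz; along x: dy*dz.
    for axis, area in ((0, dx * dy), (1, dx * dz), (2, dy * dz)):
        faces += np.count_nonzero(np.diff(m, axis=axis)) * area
    return faces

obj = np.zeros((4, 4, 4), dtype=bool)
obj[1:3, 1:3, 1:3] = True          # a 2x2x2-voxel cube (2 x 2 x 4 micrometers)
print(voxel_surface(obj))          # exposed face area in square micrometers
```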
We also have the volume, the surface, and the Feret diameter. The Feret diameter is the maximum distance between two points of the object; the same in 3D: the maximum distance, in any direction. Then the distances to the center: we have the center of the object, and for each voxel on the contour of the object you get a distance. From these many distances you can compute the minimum distance, the maximum distance, the average distance and the standard deviation, which give you an idea of the shape of the object. You can do this in 2D or, of course, in 3D.

Compactness and sphericity: now we are measuring more the shape. Is my object very compact, maybe like this one? Or is my object a bit noisy, not very compact, going in every direction? You may want to measure the degree of compactness. The simplest formula is the ratio between the volume squared and the surface area cubed; this is the basic formula for compactness, and the sphericity is just this quantity to the power one third. You can compute the compactness in voxels, using the volume and the area in voxels, or in units, using the volume and the area in units. You can also get a corrected compactness if you use the corrected area instead of the normal one. The idea is to have a normalized measure: the most compact object, the sphere, gives the maximum value.

There is also a nice alternative way to compute compactness, in the same spirit as the corrected area, based on this paper, for 2D and 3D shapes in binary images. Basically it is a ratio based on the number of faces you can see: here we have a 27-voxel cube, which shows the smallest possible number of faces; when the shape is not so compact, you see many, many more faces, so the compactness decreases. This is the idea of this discrete compactness, and it gives quite interesting results; it may be more robust than the simple formula, but it depends on your objects. It is up to you, but I would suggest having a look at this measurement.

Ellipsoid fitting: it seems there are many formulas; this is the one I use, and I am quite happy with the results I get from it. You have the center; you compute the sums of the deviations of x, y and z from the center (the second-order moments), do some nice computation with all these numbers, and you get the three radii of the ellipsoid: in 3D, of course, we have three radii instead of two. So there is a 3D ellipsoid fitting that will tell you: given this shape, we get this ellipsoid and these radii. You get all the information you want about the radii and the orientation of the ellipsoid with respect to the different planes, and also the two poles of the object, like the Feret points but based on the ellipsoid. So you can get the Feret positions, this one and this one, and also the poles based on the ellipsoid. The idea of the ellipsoid fitting is to smooth the shape: this object is basically an ellipsoid, but a bit noisy because of the acquisition, and the fitted ellipsoid gives you a nice shape and nice measurements. From it you get the elongation, the ratio between the major radius and the second radius, and also the flatness, the ratio between the second radius and the third radius. A small sketch of these computations follows.
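A minimal sketch of both computations (my own illustration; the 36*pi normalization so a sphere gives 1 is my assumption of the usual convention, and the factor 5 comes from the fact that for a solid ellipsoid the covariance eigenvalues are R^2/5; check the plugin documentation for the exact formulas used):

```python
import numpy as np

def compactness(volume, surface):
    c = 36.0 * np.pi * volume ** 2 / surface ** 3   # 1 for a perfect sphere
    return c, c ** (1.0 / 3.0)                      # (compactness, sphericity)

def fit_ellipsoid(coords):
    """coords: (N, 3) voxel coordinates of one object, in physical units."""
    centered = coords - coords.mean(axis=0)
    cov = centered.T @ centered / len(coords)       # second-order moments
    evals, evecs = np.linalg.eigh(cov)              # ascending eigenvalues
    radii = np.sqrt(5.0 * np.maximum(evals, 0))[::-1]   # R1 >= R2 >= R3
    elongation = radii[0] / radii[1]
    flatness = radii[1] / radii[2]
    return radii, elongation, flatness, evecs

# Example: the voxels of a digital ball give three radii close to 10.
zz, yy, xx = np.mgrid[0:30, 0:30, 0:30]
ball = (xx - 15) ** 2 + (yy - 15) ** 2 + (zz - 15) ** 2 < 10 ** 2
print(fit_ellipsoid(np.argwhere(ball))[0])
```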
There is also the ratio between the volume of the fitted ellipsoid and the volume of the object, which I think is quite robust: if you compute the surface of the raw object it will be a bit noisy, so maybe not so correct, but the ellipsoid is quite robust and more accurate.

You also have the convex hull. The convex hull in 3D gives you the convex shape of your object: you may recognize the object here, and this is the hull that encloses this 3D object. This is another way to smooth the shape, and you can compute the ratio between the volume of the hull and the volume of your object, giving you an idea of how convex your object is (see the sketch at the end of this part). This is based on the 3D Convex Hull plugin; it is packaged into the 3D Suite, but it is based on that work.

The 3D moments are something a bit more advanced. Basically it is the same computation as for the ellipsoid: for the ellipsoid we only use the second-order moments, the deviations of x, y and z from the center; for these moments we use higher orders, like the third order, the sums Sxxx, Sxxy and so on; more complex computations based on the positions relative to the center. You then get five moment invariants that give you a more detailed description of the shape, such as how close or how far it is from an ellipsoid. We used this, for example, to classify the cell-cycle stage in this image, with the prophase in green and the metaphase in red, based on these moments. If you only use the compactness or the elongation, it may not be enough to distinguish between different shapes that are quite similar, so you may need more information. And actually, the iterative thresholding plugin was developed to segment this kind of image: we just try all the thresholds, and the criterion is which threshold gives me the best shape according to these moments. So this algorithm does both the segmentation and the classification. This one is quite interesting; you can have a look at the publication.

Intensity measurements are very standard. You take an object, choose the current image when you do the measurement, and you get the intensity at the center of the object, and of course the average value, the minimum, the maximum and the standard deviation. You also have the mode, the most frequent value; since the most frequent value may well be zero, we also provide the most frequent value greater than zero, which can be useful. And of course you have the integrated density, the sum of all the pixel values inside the object; if I refer to ImageJ, it corresponds to the raw integrated density (RawIntDen), not the area-based integrated density. And you can list all the values inside the image. The idea is really to pair your objects with the image you want to quantify: you may detect the objects in one channel, put them in the 3D Manager, add channel one and channel two, select channel one and do the measurement, then select channel two and do the measurement. The measurement always uses the current image.
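Circling back to the convex hull mentioned above, a minimal sketch of the volume-ratio (convexity) idea on a hypothetical non-convex object:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical object: a digital sphere with a notch carved out (non-convex).
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
obj = (xx - 20) ** 2 + (yy - 20) ** 2 + (zz - 20) ** 2 < 15 ** 2
obj &= ~((xx > 20) & (np.abs(yy - 20) < 6))        # the notch

coords = np.argwhere(obj)                # (N, 3) voxel coordinates
hull = ConvexHull(coords)
convexity = obj.sum() / hull.volume      # ~1 for a convex object, < 1 here
print(convexity)
```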
One thing related to intensity measurement, but a bit different, is the numbering. Here, the image in which you count inside your objects is not a signal image but a label image with segmented objects. We have some nuclei, and we want to count how many spots are in this one, how many in that one. It is a very simple but very classical question. This is part of Quantify: you get the average value, and if you check the numbering option you also get the count of distinct objects inside each of the nuclei, plus the volume occupied by those objects. So for the nucleus with value four, I have 68 objects and 963 voxels occupied by these objects. Channel and frame are there in case the image is a hyperstack, just to remind you in which frame and channel the computation was done. Counting objects is quite useful when you have individual objects; but when you cannot really segment all the spots, because they are touching and very difficult to separate, you may want to use the volume: what is the volume of signal inside my different objects? So you have both the number of objects and the volume occupied by signal objects inside each object (a small sketch of both follows).

We also have distances; this is why we started with ImageJ in the first place, to get distances between our genes. Of course we have the center-to-center distance, which we use for objects with a small, specific volume, and the edge-to-edge (border-to-border) distance between two objects. We also have the center-to-edge distance, which can be quite useful when one object is inside another. And there is the colocalization, which is just the count of colocalized voxels between the two objects. As a plugin only, not in the 3D Manager, we also have the Hausdorff distance. So when you have one object and another object, this one is the largest of the smallest distances. And this one is what? Yes, it is complicated, I am getting lost here. So this one is the largest smallest distance, and maybe that one is the smallest smallest distance. Basically it gives you an idea of the gap between your object and the larger object; in some cases it can be quite useful. I will have to check the formula to explain it better.

Colocalization, okay, colocalization is very simple. You have some objects A and some objects B, and you want to know the colocalization of all objects B with all objects A. You get something like a matrix: between the object with value 12 in A and the object with value 1 in B, what is the colocalization? 49 voxels. You can also list colocalized objects only: object 12 is colocalized with object 1 in B, with a colocalization volume of 49, which means 79% of object 12 is inside it, and so on. If an object colocalizes with multiple objects, you get multiple columns. So you have either the full colocalization matrix or only the objects with colocalization.

Related to colocalization, we also have the surface of contact. Just be careful with this: it is a bit experimental, but sometimes it gives interesting results. It asks what the contact surface is between two objects that are not colocalizing, but are very close.
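A minimal sketch of the numbering and of the colocalization matrix, on tiny hypothetical label images (my own illustration):

```python
import numpy as np

# Tiny synthetic label images (same shape): 2 nuclei, 3 spots.
nuclei = np.zeros((10, 20, 20), dtype=np.int32)
nuclei[2:8, 2:10, 2:10] = 1
nuclei[2:8, 12:18, 12:18] = 2
spots = np.zeros_like(nuclei)
spots[3:5, 3:5, 3:5] = 1
spots[5:7, 6:8, 6:8] = 2
spots[3:5, 13:15, 13:15] = 3

# Numbering: distinct spot labels (and occupied volume) per nucleus.
for n in (1, 2):
    inside = spots[nuclei == n]
    ids = np.unique(inside[inside > 0])
    print(f"nucleus {n}: {len(ids)} spots, {np.count_nonzero(inside)} voxels")

# Colocalization matrix: overlap (in voxels) of every (nucleus, spot) pair.
pairs = np.stack([nuclei.ravel(), spots.ravel()])
both = (pairs > 0).all(axis=0)
coloc = np.zeros((nuclei.max() + 1, spots.max() + 1), dtype=np.int64)
np.add.at(coloc, (pairs[0, both], pairs[1, both]), 1)
print(coloc[1:, 1:])   # row = nucleus label, column = spot label
```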
We want to check how close the two objects really are and what the contact is between them. Basically, it is how many voxels of one object are close to a voxel of the other; of course we compute it in both directions, and you get contact surfaces like here, here and here. If the two objects are interdigitated, it is a bit more complicated, because one object can go inside the other, and then what is the contact surface? In that case you can get a positive contact surface and a negative contact surface. Just use it with care and check whether it is useful for your data. Everything is explained in the paper by Jean-François Gilles describing the DiAna plugin.

Now the questions about the 3D Manager and analysis. Yes, Fabrice, do you want to go ahead?

Yes, about colocalization: we have a first question asking whether, with the 3D ROI Manager, you can define two sets of objects, say group A and group B, and get colocalization results between the groups; split the objects of the image into two groups.

The 3D Manager was designed to have only one list. If you want two lists, to really deal with two sets of objects from two different images, I would recommend DiAna, which is a nice tool, a version two of an earlier colocalization plugin. In DiAna you have two sets of objects, so you can really focus on the relationships between the two sets. In the 3D Manager you have only one set, but you can rename objects to mark populations: population A, population B, objects coming from image A and image B. Then you can select one object from A and one from B and run the colocalization, or select all of them, but it will compute all the pairwise colocalizations. So the purpose of the 3D Manager is not really to deal with two sets of images, but it can work.

Okay, and I guess that with automation as well, you could save the ROIs, re-import the ROIs and so on.

Exactly, exactly, that is the point. The 3D Manager is more for quickly checking what is happening before automating.

Okay, another question about label images: can you import label images, such as the ones generated by StarDist, into the 3D ROI Manager? An additional question from the same person: in that case, is there a size restriction on the images you can import? And what about 3D plus time? So that makes three questions.

Good. So of course, the idea is to import any kind of label image produced by any algorithm or any software. Importing label images coming from StarDist is exactly what I do, because I think StarDist is very good; so absolutely no problem. Something people do not know: if there is a ROI in your image, only the objects within the ROI will be imported. So if you want to exclude some objects that are not part of your analysis, you can exclude them with a 2D ROI. As for size: I have imported big objects, and it is not really a problem of the size of the image but of the size of the objects. If you have one big object, it may take a long time; a very big image with a lot of small objects is okay; a big image with a lot of big objects may take ages to import, but it works.
This is why I usually focus on a smaller ROI, to get a feeling of what is happening on a small subset of objects. But yes, you can import a lot of ROIs; I do not think there is a limit on the number of objects. People are importing crazy numbers of objects; at the beginning I did not think it would work, but you can import something like 100,000 objects. It works, just be patient. For 4D: if you have a hyperstack, the new version will just import the objects for the current frame and the current channel; it only imports a 3D stack. If you want to automate over time, you will need a macro or something.

Okay, and finally, this is not a question, it is more of a request: you should know that some of our viewers are requesting to see the 3D Suite in action.

Later, after the talk, is that okay? Perfect, this was just to let you know.

And there is one more question, from Zara, about quantifying differences in distributions. She has, for example, two cells, and in each of them a couple of dots; there might be the same number of spots in both, but they may be distributed a bit differently. Do you have anything to measure this?

Yes, in the next slides. Okay, good. Right, there were more questions coming up, but I think we first have to digest them, so maybe you can proceed. Okay, thank you.

So now I will quickly present other analyses that are not available in the 3D Manager; they are a different set of plugins. The first one is the eroded volume fraction, and the second is interactions between objects and spatial statistics, which I think relates to the previous question. For the eroded volume fraction: people may know the Euclidean distance map. You have one object, the binary version of the object, and for each voxel you compute the distance from that voxel to the border of the object; you get a map where the value of each voxel is its distance to the border. The eroded volume fraction (EVF) is like a normalization of this map: all the values go from 0 at the periphery to 1 at the center. It is essentially a ranking of all the distances between 0 and 1, which means that whatever the size of the object, you always get a value between 0 and 1. This is quite interesting for analyzing where objects are: are they always at the periphery, always at the center, whatever the size of the original object? So if I have an image with some spots and I do the quantification on the Euclidean distance map image, the value I get for each spot is its distance to the border. If I do the same with the EVF image, the same spots give values between 0 and 1: close to the periphery, around 0, or closer to the center, around 0.5 or 0.6 depending on the spot. The EVF is maybe more interesting for computing the distribution of the objects by volume: are they at the periphery or at the center? In this graph, I take layers of EVF from 0 to 1 and count how many objects fall within each layer (a minimal sketch of computing the EVF follows).
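A minimal sketch of the EVF idea (my own illustration): rank the EDT values inside a mask by volume so each voxel gets a value in [0, 1], 0 at the periphery and 1 at the center, independent of the object size; reading the EVF at spot positions then tells you, by volume, how deep each spot sits.

```python
import numpy as np
from scipy import ndimage as ndi

# Hypothetical nucleus mask, with anisotropic voxels (z spacing = 2).
zz, yy, xx = np.mgrid[0:40, 0:60, 0:60]
mask = (xx - 30) ** 2 + (yy - 30) ** 2 + ((zz - 20) * 2) ** 2 < 25 ** 2

edt = ndi.distance_transform_edt(mask, sampling=(2, 1, 1))
evf = np.zeros(mask.shape, dtype=np.float32)
inside = edt[mask]
# Rank each inside voxel: the fraction of the object volume with a
# smaller distance to the border than this voxel.
evf[mask] = inside.argsort().argsort() / (len(inside) - 1.0)
```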
Yes, it is sometimes complicated to explain. Let us say this layer is the first 10% of the voxels closest to the periphery, this one the next 10%, and so on: we divide the object into ten layers of 10% each, based on the volume, so all the layers have the same volume. In the first 10%, how many spots do I have? In the second 10%, how many? Here, for the real data, we can clearly see that we have a lot of spots at the periphery, fewer spots at the high EVF values near the center, and average counts in between. And here is the same with random spots: with random positions we get, of course, more or less the same number of spots in all the layers. This is quite good if you want to do statistics: if the distribution is random, the histogram is flat; if it is not flat, it is not random, and you can say, for example, that all my spots are at the periphery in this case. So I use this EVF a lot.

Second, we have the interaction between objects. We take some objects and compute zones with the watershed we saw earlier, but run outside the objects, so each object gets a zone around it; then you can compute which objects' zones are touching. You get, for example, that object 27 is in interaction with objects 20 and 22 and so on, with the number of contact voxels if you want, and interactions 18, 19. This works in 3D; everything from the beginning works in 3D. It may work in 2D, but I will not go further into that. So, interactions between objects.

Then, spatial statistics. This part is based on the work of Philippe Andrey. Basically you have three kinds of patterns: random, aggregated, or uniform. You compute distances, either from each object to its closest neighbor, or from arbitrary positions (the crosses here) to the closest object in the image. If you compute the closest distance from one spot to another spot: if the pattern is aggregated, you get a lot of small distances; if it is uniform, you get a lot of large distances; if it is random, you get a mixture of small and large distances. The idea is to compare the distances you observe in your data, here in blue, with a random model, in red: say I have 1,000 spots in my data; I randomly place 1,000 spots in the same structure, compute the distances, and repeat this many times, say 100 simulations. The red curve is the average over the simulations, and the green curves are the 95% envelope. Here I can see that my observed curve falls outside the envelope: my spots do not have the same distance distribution as random spots, so they are not randomly organized within the structure. The same goes for the other distance functions; I will not go into details and will just refer you to the publication.

This goes a bit beyond the 3D Suite, but you also have papers like these, and the idea is always to compare your data with a model. The model can be random, and you compare distances. Or, here, we have green objects and red objects: you compute the colocalization, or the distances between each green object and the closest red object (this we also have in the 3D Manager). And then we shuffle the green objects: we take the same green objects, but place them at other positions, and compute the same distances between each red object and the closest green object (a sketch of this Monte Carlo comparison follows).
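A minimal sketch of the Monte Carlo idea (my own illustration, with a uniform box standing in for the real structure): compare an observed nearest-neighbor statistic with its envelope under random patterns.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
obs = rng.uniform(0, 100, size=(50, 3))      # stand-in for detected spots

def mean_nn_distance(pts):
    d, _ = cKDTree(pts).query(pts, k=2)      # k=2: the first hit is the point itself
    return d[:, 1].mean()

observed = mean_nn_distance(obs)
sims = np.array([mean_nn_distance(rng.uniform(0, 100, size=obs.shape))
                 for _ in range(100)])       # same number of random spots
lo, hi = np.percentile(sims, [2.5, 97.5])    # 95% envelope
print(observed, (lo, hi))  # observed outside the envelope => not random
```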
And then we see that the distances are completely different. That means the observed distances cannot be explained by chance: the red objects really are next to the green objects. It is not simply because there are many of them; you might think that with many red objects, some are bound to be close to green objects, but here we can show that the red objects are close to the green objects because of a real association, not by chance. This part is more DiAna than the 3D ImageJ Suite, but it gives you the idea of 3D analysis: you do some measurements, and then, if you want statistics, you compare your measurements with measurements from a model, which can be random, shuffled, or any other model. So I will finish this analysis part. Any questions?

For now, the questions are more related to specific cases. Yeah, okay. The idea was to give an overview of the questions you can ask of your images and your data; now people are thinking, oh, I can do this, I can do that, or can I do this? So yes, maybe it is a case-by-case study. No, I think if Tissier agrees, and Matilda as well, we can move on. Yeah, okay. Okay.

So I would like to finish, going quite fast, with an idea about automation. If you have one image, fine: you use the 3D Manager, and that is it, you get your measurements. If you have 100 images, or 1,000 images of different kinds, you need to automate in an organized way. The problem now, I guess, is a kind of generic, philosophical question of image analysis: segmentation works quite well, deep learning works quite well, and we have nice tools for analysis; the question is how to analyze a lot of images, and then, where are my results? This is really a problem of data organization. For data organization, the idea is to rely a lot on a database, like OMERO, which is a very good tool: you create projects, you have datasets, and you have images inside the datasets. This is the basic organization. The key point, I think, is that all the images within one dataset will be analyzed the same way: put together, in one dataset, all the images that should be analyzed the same way and whose results may be pooled together.

Basically, you will do something like this: raw image, processing, segmentation, and so on; you get some objects, and you get some measurements. Most likely you will get some nuclei, and then the intensity of some molecule inside the nuclei. I think 95% of biologists will do this, and they can get a lot of information from just this kind of measurement. But how to organize it? Here is my personal view. First you have the dataset, with all the images you want to analyze. Then you have your protocol; the protocol is what we discussed before: input, okay, you get the image; you filter the image to remove the noise; you threshold the image; then you label the image to get the objects. So you get a first output, a first result. This first result should be part of your data organization: you have one dataset with the raw data, and you should have another dataset with the segmented data, because you will use it to do the analysis.
And then you do some analysis with this label data, and you get some results. Do not put the results just anywhere: the results are for this image, so they should be attached to this image. If you take this in, I think your mind will be better organized to handle a lot of images. On top of that, I just store the results temporarily before attaching them to the image. This is the idea of TAPAS, to automate the processing and analysis, and this is what I want to focus on in the next few minutes.

So what is a protocol? A protocol is the list of things you do to an image. First you input the image; of course you have a list of images, and they are processed one by one. Okay, I input the image; then I filter the image, with this radius, with this filter; then I threshold the image, with this auto-threshold method, so I have a binary image; and then I label the image, so I get all the objects within my image, but maybe I want to remove the small objects, so I add a parameter for the minimum volume. Now I have a first, generic output. It is written with a placeholder so that it gets the same name as my original image, plus a suffix, "-tag": if I have image25, the output is image25-tag, and I know this output is related to this image. If you just follow this kind of protocol, you will not mess things up, and you always know what is happening. So you define a protocol, and then you define which data to apply it to. The data can be in your file system, organized in datasets and projects, or in OMERO, the database, which follows the same organization: project, dataset, and so on.

To finish the protocol, I have the measurements: okay, I want to measure, say, the volume of my objects, and I put the result in a temporary folder, with the same name as my original image: image25-results. So I know this CSV file is related to this image; when you have 100 images, this is quite useful. And then I say: this result is for this image, so I attach this file to the image I am processing; this is automatic, it will attach it to the image currently being processed. And then I just delete the temporary file.

So the idea of TAPAS is to define a protocol in a simple text file, exactly like this one; this is the exact text file I use for analysis. And then you say: okay, I will run this text file on this dataset. Here I use OMERO, but if you replace OMERO by your folders, it is exactly the same. In my folder I have two images, but I could have 100. I just ask TAPAS to apply the protocol to these two images, and it creates the outputs. Here I have one dataset for the raw data and another dataset for the label data, because I do not want to mix the raw data and the label data; so I have a dataset with the detection of the nuclei and of my biological structure, the telomeres. And the attachment is here: I analyzed the nuclei and the spots, and I attach the measurements for the objects, the spots, to the original image. So this original image has these results attached to it. A hypothetical sketch of this batch discipline follows.
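TAPAS protocols are plain text files with their own modules; this is just a hypothetical Python sketch of the same discipline (one protocol applied to every image of a dataset, results saved under "imagename-results"), assuming the tifffile package for 3D TIFF I/O and made-up folder names:

```python
from pathlib import Path
import numpy as np
from scipy import ndimage as ndi
import tifffile  # assumed available for reading/writing 3D TIFF stacks

def protocol(path: Path, out_dir: Path):
    img = tifffile.imread(path)                      # input
    img = ndi.median_filter(img, size=(2, 3, 3))     # filter
    binary = img > img.mean() + 2 * img.std()        # threshold (stand-in)
    labels, n = ndi.label(binary)                    # label the objects
    tifffile.imwrite(out_dir / f"{path.stem}-seg.tif",
                     labels.astype(np.uint16))       # segmented dataset
    volumes = ndi.sum_labels(np.ones_like(labels), labels,
                             index=np.arange(1, n + 1))  # volume per object
    np.savetxt(out_dir / f"{path.stem}-results.csv", volumes, delimiter=",")

out = Path("dataset_seg")
out.mkdir(exist_ok=True)
for path in sorted(Path("dataset_raw").glob("*.tif")):   # hypothetical folders
    protocol(path, out)
```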
For two channels: from the first channel you will get the nucleus, and you can quantify the intensity within the nucleus. The trick is just to save the original image as a temporary file before thresholding, because the thresholding destroys the intensities. So if you have two channels, you open the first channel, save it to a temporary file, open the other channel, do the thresholding to get your objects, and then do the quantification on the saved original. If you understand this, you understand the process, and you can do whatever you want.

So the idea is: first you have some raw data, and you have to think about which biological structures inside your raw data you want to analyze. Usually one channel will give you one structure; in some cases one channel can give you many structures, or one structure can be defined by many channels, but let us stay with one channel, one structure. The first thing is to filter and segment the structure, then get some geometry and shape measurements of the structure, intensity measurements of the different channels within the structure, and finally analyze the relationships between the structures, such as distances. This is, I think, a general protocol for a typical image analysis process.

Some final questions now, before I try to do a quick demonstration. "Yes, I will ask one more question and then I unfortunately have to leave, because I have to go to another meeting at 11, sorry for that." Okay, no problem. "The question I will ask, because I think it has been asked twice, is a good one: do you plan anything on object relationships, like spots inside cells, parent-child relationships?" For me this is the number one question. If you just want to know that this spot is inside this nucleus, it is just quantification: you take the spot and you quantify its values using the nucleus label image as the signal. The value of the pixels will be the label of the nucleus, so you measure the intensity of one object in the other label mask. "I see, that would be the trick, very simple. Okay, cool. Then, Fabrice, I leave it up to you." Okay, and I guess the next step is the live demo. Yes, just to finish, one minute for the conclusion.

The 3D ImageJ Suite is a set of tools for 3D analysis that may also work, hopefully, in 2D; of course, some geometrical measurements are not valid in 2D. The 3D Manager is the main graphical interface that links all the 3D plugins and algorithms within one tool, and there are a lot of macro extensions that I will not detail here. TAPAS is a way to automate all the things you do step by step; it is interesting because you separate the protocol and the data: you define a protocol and then say, I want to apply this protocol to this data. What comes next? We will try to answer questions on the image.sc forum. For the 3D ImageJ Suite, maybe better visualization, and maybe a new plugin for object association, for things like tracking. TAPAS already has some new modules for filtering; TAPAS has about 50 modules, most of them coming directly from the 3D ImageJ Suite, plus some modules such as GPU filtering. We will probably try to implement some deep learning segmentation within TAPAS, and maybe announce a webinar on these new things. Thank you.
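As a minimal sketch of this quantification trick in the ImageJ macro language, assuming two label images named spots-label and nuclei-label, and using the 3D Manager macro extensions as I remember them from the documentation (please verify the exact names against the current macro reference):

```
// Hypothetical sketch: find the parent nucleus of each spot by
// quantifying each spot object on the nucleus label image.
run("3D Manager");
selectWindow("spots-label");      // assumed window name
Ext.Manager3D_AddImage();         // add all spot objects to the manager
selectWindow("nuclei-label");     // front image is used as the signal (assumption)
Ext.Manager3D_Count(nb);          // number of objects in the manager
for (i = 0; i < nb; i++) {
    // inside a spot, all pixels of the nucleus label image carry the
    // label of the enclosing nucleus, so the mean value is that label
    Ext.Manager3D_Quantif3D(i, "Mean", parent);
    print("spot " + i + " is inside nucleus " + parent);
}
```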
In this short video, I will show you how to use the 3D Manager to display the labelled objects we have seen before. Here we have the raw data: you see a nucleus with some spots in two different channels; this is just for the visualization. Here I have the segmentation of the nucleus, the spots of channel 1, and the spots of channel 2. Usually people use a lookup table to display the labelled spots with nice colors: Image > Lookup Tables, and then usually 3-3-2 RGB or Glasbey inverted, but it is up to you to choose the one you want. Do not forget to adjust the display range to have a better visualization. You can use the Synchronize Windows tool to visualize the same slice in all the images.

I will now open the 3D Manager: Plugins > 3D; in this 3D menu you will find all the plugins from the 3D ImageJ Suite, and here is the 3D Manager. The 3D Manager will first check that everything is okay on your computer, and then it opens a small window where the list of objects will appear, together with the functions you can apply to these objects. First we need to select a labelled image and click Add Image. You can then rename the objects so you know what they are: nucleus. Then we add the second image, Add Image; you can add as many images as you want, and each time you will get the list of objects for that image. You can rename them if you want: spot A. And we add the third image, Add Image, and rename again: spot B, so you know a bit what you are doing. Now you have the list of all the objects.

Usually what people want is to visualize where their objects are on the original image, to check the quality of the segmentation. So we have the nucleus; if you want to display the nucleus on the original image, we need to activate the ROI: Live ROI. For the ROI you can choose between several displays: the contours, one point at the center, or the volume; usually I prefer the contours. I select the image, and then I select the objects I want to display. You may need to click a couple of times to refresh the display, and then, when you change the slice, the ROI of the object will be displayed on the original image, or on any image, so you can check the quality of the segmentation. Of course you can change the image and display on this one: I just select this image and refresh the ROI display a bit, and I see where my spots are with respect to my nucleus. I can of course display the spots: let us display spot A on the image corresponding to these objects. So here I have the spots A on the original image; easy. And do not forget, you can always change the color of the selection with Edit > Options > Colors: you have the foreground and background colors and the color of the selection, and the change should be instantaneous, so you can choose the color you want, say magenta.

So you can display the ROIs on any image you want. If you want a 3D visualization, maybe we start with the nucleus: we want to visualize the nucleus in 3D using the 3D Viewer. First we select the color we want for the nucleus, let us display the nucleus in blue, and then just select the object and click 3D Viewer.
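These same steps can also be scripted; a minimal sketch in the ImageJ macro language, with assumed window names and extension names taken from the 3D Manager macro documentation as I recall it:

```
// Open the 3D Manager and fill it from three label images
run("3D Manager");
selectWindow("nucleus-label");   // assumed window names
Ext.Manager3D_AddImage();
selectWindow("spotA-label");
Ext.Manager3D_AddImage();
selectWindow("spotB-label");
Ext.Manager3D_AddImage();
Ext.Manager3D_Count(nb);         // total number of objects added
print(nb + " objects in the 3D Manager");
```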
It will open the 3D Viewer and you will get your object displayed there; you can still change the color inside the 3D Viewer if you want. Now we display the spots, spot A first, so maybe we change the color to green. We select all the spot A objects, 3D Viewer, and we get all the spots displayed in the 3D Viewer; but of course the spots are inside the nucleus, so we do not see them. We need to make the nucleus transparent: change the transparency, maybe 50% transparency should be okay for the nucleus. Hmm, it is not very clear on this one. What you can also do is select the nucleus, which is displayed as a shaded surface, a mesh; sometimes the transparency does not work very well, so you can just remove the shading from the nucleus. Then you see through the triangles making up the mesh, and you see all your spots. We do the same for spot B: change the color, say to red, and 3D Viewer. It takes some time to display all the different objects, and now you have all your objects displayed inside your image. Usually what I then do is smooth the objects a bit: select all the meshes, or the meshes you want, and smooth with a value like 10 or 20, which is usually quite enough. Let us do it again with all the meshes; it is usually better to click than to type the number here. Okay, so all your objects are smoothed. You can now take a snapshot, or record a rotation, and put it in your presentation. That is the end of the 3D Manager for visualization; in the next video we will have a look at some analysis. Thank you.

In this second part of the demonstration of the 3D Manager, we will have a look at the measurements we can do. The list of measurements is accessible here, in the options. We have a list of measurements quite similar to 3D Object Counter or Set Measurements: volume, surface, compactness, fitted ellipsoid, moments, convex hull (which can be slow), integrated density, and so on; distance to surface, center of mass, object numbering, in case you have spots inside big objects, so you can easily count the number of small objects inside each big object; bounding box; and the radial distances, that is, the distances when you draw radii from the center of the object to its border. Of course, you can load and save the list of measurements you want.

First, maybe we just want the measurements of the nucleus, nothing fancy, with all the measurements activated. So here you have the measurements; they are usually in units, that is, taking into account the calibration in x, y, and z, or otherwise in pixels. Here we have the line number, the name of the object, the label (the original value of the pixels in the segmented image), and the type, in case we want to do some classification. Now, if we want to measure the spots, we select all the spots, that is, everything except the first object, and Measure 3D. Note that if nothing is selected, all the objects are considered selected and the measurement will be done on all objects; so you can select all, or just Ctrl-deselect the first one, and Measure 3D.
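This measurement step can also be done from a macro; a small sketch, assuming the objects are already in the 3D Manager, and treating the exact extension names and the "Vol" key as assumptions to verify in the macro reference:

```
// Measure all objects, then read one value back into the macro
Ext.Manager3D_SelectAll();
Ext.Manager3D_Measure();                 // fills the 3D measurement table
Ext.Manager3D_Measure3D(0, "Vol", vol);  // volume of the first object
print("volume of object 0: " + vol);
```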
You will get a lot of measurements. What is quite interesting is that you can sort the measurements: you select a column and choose whether to sort from the smallest or the largest value. If you see this NaN, not-a-number, symbol, it means the computation could not be done for this object; this column is the distance to center, so most likely the center of the object is not inside the object and the measurement cannot be done. Now we have three objects that are quite big; maybe they are merged objects, or maybe they are just genuinely big objects. You can save the measurements as a CSV file, to be opened in Excel, either for all the objects or only for the selected ones. We can also show the selected objects on the original image: the three big objects are selected, and we may want to have a look at where they are inside the image, so we refresh, and yes, there are the three big objects here.

Maybe we just want to remind ourselves that these three objects are big, so we will classify them: select them in the window and just press 1, and the objects get type 1. So when you later do some measurements you remember: oh, these objects are big, they are special objects that may do something. The nuclei could be classified as type 2, and so on. If you press 0, you remove the classification.

Next, some numbering via quantification. We select the nucleus, and we want to count the number of spot objects inside the nucleus; the spot label image will serve as the quantification signal, because quantification can be done either on the raw signal or on this kind of label data. So we select this image, and inside the nucleus object we do the quantification. Of course we get all the intensity values, but they are not relevant in this case; we are more interested in the number of objects: inside this nucleus, for this image, we have 40 objects, and the volume occupied by these objects inside the big one is about 3,000 voxels. We can do the same for the other nucleus, with the same quantification: here we have 45 objects, occupying about 2,000 voxels.

Of course, maybe we also want some distances: distances between the nuclei, or distances between spots. We select spot A first, then Distances; this will compute all the possible distances between the objects, so it can take some time, but here we do not have too many objects, so it should be okay. In the display it computes, for every possible pair of objects, the center-center distance, the center-border distances, and the border-border distance. You have to read it two lines at a time: the first and second lines are for one pair of objects, here object number 2 and object number 3. You also see the type, so this one is a big object, remember. So between spot A1 and spot A2 you have the center-center distance; the center-border, from the center of this object to the border of the other one; the same from the center of object 2; the border-border distance; and the closest objects: the closest to object 1 in center-center distance is object 41, and the closest in border-border distance is the object at number 8 within the list, so it should be spot A7, not S7. Also, it should consider only the selection, but it seems to use all the objects; I will have to check this one.
For colocalization it is the same idea, but of course we would need the distances between all the A objects and all the B objects, which would be a bit crazy, so we will not do it this way. For colocalization we can use another plugin, Plugins > 3D > 3D MultiColoc. We have the objects; we just need to rename the two images, this one A and this one B. Then Plugins > 3D MultiColoc, between A and B, and we display only the objects that are colocalized. Here we have the result: it starts with object 1, and the object with label 1 inside image A is colocalized with the object with label 7 in image B, with the percentage, so something like 15% of object 1 is colocalized with that object, and there is no colocalization with any other object. This other one is colocalized with object 43 and object 5, and so on and so on. So that is colocalization, and I think this is all we want for now for the measurements. Thank you.

In the third part of the demonstration of the 3D ImageJ Suite, we will look at the information we can get from the 3D EVF distribution, the eroded volume fraction. First we need to compute the EVF of this object, to see the different layers: Plugins > 3D > 3D Distance Map; actually the EVF is a normalized distance map, so we can output both for this example. For this image the mask is the image itself, and the threshold is 0 because the object value is 1. The EVF can also be computed outside the objects, but here it is inside. So we get the two images, the EDT and the EVF: this one is the Euclidean distance transform, and this one is the eroded volume fraction. Usually we use something like the Fire lookup table to visualize this information, and go to the middle slice. In the EDT, the value of a pixel is the distance from this voxel to the border. In the EVF, the value of a pixel is between 0 and 1: 0 means the voxel is the closest to the periphery, and 1 means the voxel is the farthest from the periphery. So if the value is 0.5, it means this voxel is within the 50% of the object volume closest to the periphery; this one, at 0.24, is inside the 24% closest to the periphery. Thanks to this, we can draw layers: the 10% of the object volume closest to the periphery, then from 10% to 20%, from 20% to 30%, and so on, and these layers have equal volumes: the first layer is the 10% of the volume closest to the periphery, the second layer the next 10%, from 10% to 20% closest to the periphery, and so on. So we can divide the object into layers of equal volume, ordered by closeness to the periphery. With the EDT this does not work, because the value is just linear in the distance, so the volumes of the layers would be different.

Now that we have our EVF image, we want to see where the spots are within this object, so we select 3D EVF Distribution. Here we will use bins of 0.2, every 20%, because we do not have too many objects. The EVF image is the EVF, and the image we want to analyze is A; we want to compute the density, because here we have objects, a label image. In case you have fluorescence, you may instead want the distribution of the fluorescence within the layers. Here we have the density: in the first 20%, and even in the first 40%, we do not have many objects, but from 60% to 100% we have most of the objects, towards the center of this object. If the objects were randomly distributed, we would get a flat distribution of the objects across all the layers. And this last column is just the volume.
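To make the definition precise: writing $\mathrm{EDT}(v)$ for the distance from voxel $v$ to the border of the object $O$, the eroded volume fraction of $v$ is the fraction of the object volume lying closer to the periphery than $v$,

$$\mathrm{EVF}(v) \;=\; \frac{\bigl|\{\, u \in O : \mathrm{EDT}(u) < \mathrm{EDT}(v) \,\}\bigr|}{|O|},$$

so thresholding the EVF at 0.1, 0.2, and so on cuts the object into layers of equal volume, which is exactly why a uniformly random spot distribution gives a flat histogram over the EVF bins.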
As I mentioned before, these are the volumes of the different layers, 0 to 10%, 10% to 20%, 20% to 30%, and so on. You see that, due to numerical errors, the layers do not have exactly the same volumes, but they are quite close; and of course you can use these volumes if you eventually want to correct the densities. So this was the third part, the 3D Manager part, of the demonstration of the 3D ImageJ Suite. Thank you.