OK, let me very briefly introduce David Legland. Today, he's a researcher and engineer at INRAE, formerly known as INRA, the French National Institute for Agricultural Research. He and I developed, already a few years ago, MorphoLibJ, which is going to be the main topic of today's presentation. So I hope you enjoy everything. OK. Hello, everyone, and thank you, Ignacio, for the introduction. I will present image processing with MorphoLibJ. So, just to introduce myself: David Legland, I'm working at the French institute for agronomy, and more precisely in a facility for chemical analysis and image analysis of biological products. And to give some historical background: a few years ago, I was working with Ignacio in an image analysis and modeling team at the Institut Jean-Pierre Bourgin in Versailles. We were a team of computer scientists developing methods and algorithms for image processing and analysis, and we made heavy use of the mathematical morphology field of image analysis. To facilitate the interaction with our biologist colleagues, we started to translate the algorithms into plugins and libraries for the ImageJ/Fiji platform. And this is the origin of the MorphoLibJ software. So, more precisely, what is mathematical morphology? It's an approach to image analysis based on set theory. It was developed around 50 years ago by Georges Matheron and Jean Serra at the École des Mines in Fontainebleau. The good point of this approach is that it is very generic, in the sense that it works for 2D or 3D images, or even higher dimensions. It works for binary or grayscale images. And it may even be applied to other data structures such as graphs or meshes, for example. Many operators come from mathematical morphology, including the skeleton, the watershed, and more generic morphological filters. I give you two references, two books that cover most of the topics of image processing with morphology.
The first one is not so easy to find; the second one is more recent and therefore easier to buy. The problem is that they are a little bit technical and not so easy to apply in daily life, and this is also one of the reasons we started developing MorphoLibJ. So MorphoLibJ is a collection of plugins for ImageJ. A lot of them have a graphical user interface, to facilitate both the setting of the parameters and getting quick feedback on the effect of the different parameters. The main page for MorphoLibJ is on imagej.net; there is a page about MorphoLibJ there. You may also find a user manual on the GitHub page for MorphoLibJ. And, also on GitHub, I have put some additional data and some plugins that we will use during this session, which you can find at the following link. I will try to show them directly. Yes, this is the GitHub repository for this session on mathematical morphology with MorphoLibJ: you have some plugins additional to MorphoLibJ, you have the sample images I will use during this session (there are some icons at the bottom), and the slides can also be found in this repository. All right, so what I will present to you today follows the global concept of mathematical morphology, through three parts of a typical image processing workflow. I will start with a section about enhancement and filtering of images. In the second part, I will speak about segmentation, mostly using the watershed. And finally, I will present some image analysis using the MorphoLibJ software. OK, let's start. I will first recall some basic definitions. Something I like to recall is that, in the beginning, mathematical morphology was defined as a way to describe shapes and objects. The way to do it was to use a virtual probe called a structuring element.
By changing the position of the probe relative to the shape under study, and depending on different binary criteria with respect to the set, we could create a resulting shape. So the first operation is the erosion. The criterion associated with erosion is that the structuring element is totally contained within the set. The positions for which the structuring element is totally inside the set form a new shape that corresponds to the erosion of the original shape. This new shape is smaller than the original shape, and the components of the original set that are smaller than the structuring element disappear. Then, the dual operation of erosion is the dilation. In this case, the criterion is that the structuring element touches the set. The result is another shape which is thicker and larger than the original set. The dilation makes some components become connected, and some holes within the structures disappear. One problem of erosion and dilation is that they strongly change the shape of the original set, so they are often used together in combinations. For example, the opening is the result of an erosion followed by a dilation. It removes small objects, but the objects larger than the structuring element keep their original shape. And the morphological closing is defined by a dilation followed by an erosion. It removes small holes within the structure, it connects some components that may be separated, and the resulting shape is closer to the original shape than with a dilation alone. Morphological closing and opening are very useful for post-processing of segmentation results, when a lot of small particles corresponding to noise were segmented. In this case, this is an image of plant tissue, a maize stem observed with a macroscope, and a raw segmentation using a simple threshold results in a lot of black and white noise.
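To make the four basic binary operators concrete, here is a minimal from-scratch sketch in Python on a small binary grid. This is a toy illustration of the definitions above, not the MorphoLibJ implementation; the image, the 3×3 square structuring element, and all function names are mine.

```python
# Binary erosion, dilation, opening and closing on a 0/1 grid.
# Structuring element = list of (dy, dx) offsets; pixels outside
# the image count as background.

def erode(img, se):
    """Keep a pixel only if the structuring element fits entirely in the set."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy, dx in se)) for x in range(w)] for y in range(h)]

def dilate(img, se):
    """Keep a pixel if the (reflected) structuring element touches the set."""
    h, w = len(img), len(img[0])
    return [[int(any(0 <= y - dy < h and 0 <= x - dx < w and img[y - dy][x - dx]
                     for dy, dx in se)) for x in range(w)] for y in range(h)]

def opening(img, se):
    """Erosion then dilation: removes small objects, keeps large shapes."""
    return dilate(erode(img, se), se)

def closing(img, se):
    """Dilation then erosion: fills small holes, reconnects components."""
    return erode(dilate(img, se), se)

SQUARE3 = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

# A 4x4 blob plus an isolated pixel: the opening removes the isolated
# pixel (smaller than the structuring element) but the blob keeps its
# exact original shape.
img = [[0] * 8 for _ in range(8)]
for y in range(1, 5):
    for x in range(1, 5):
        img[y][x] = 1
img[6][6] = 1
```

Running `opening(img, SQUARE3)` leaves the 4×4 blob untouched while the lone pixel at (6, 6) disappears, which is exactly the "remove noise, preserve shape" behavior described above.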
So applying a morphological closing consolidates the structures we want to keep. And from this image at the bottom, applying a morphological opening removes the small dirt, and results in an image that is cleaner and easier to analyze afterwards. OK, so morphological operations can also operate on grayscale images. In that case, erosion and dilation can be understood as computing the minimum value or the maximum value of the pixels within a neighborhood defined by the structuring element. This results in increasing the area occupied by bright pixels in the case of a dilation, and increasing the area occupied by dark regions in the case of an erosion. Applying a morphological closing or opening then filters the image by removing the bright areas or the dark areas whose size is smaller than the structuring element. So morphological closing and opening are also very useful for filtering images to remove speckle noise. David? Yes? Sorry, would it be possible to activate your arrow, your pointer, so we can see where you are? OK, now I see it. Do you see it? Yeah, perfect. OK, thank you. OK, so erosion, dilation, opening and closing are the basics of the morphological filters, and we can go a little bit further. For example, we can combine the result of an opening with the original image to enhance some structures within the image. When using a large structuring element, applying an opening removes all the bright structures that are smaller than the structuring element. Here, it makes all the grains within the image disappear, and the remaining image corresponds to an estimate of the background. Then, computing the difference between the original image and the result of the opening enhances the bright structures: the difference corresponds to the image of the grains.
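The grayscale interpretation above, erosion as a local minimum and dilation as a local maximum, can be sketched on a 1D profile. Again a self-contained toy, not the MorphoLibJ code; the signal and radius are arbitrary examples of mine.

```python
# Grayscale morphology on a 1D signal: the "structuring element" is a
# sliding window of half-width `radius`.

def gray_erode(signal, radius):
    """Erosion = minimum over the neighborhood."""
    return [min(signal[max(0, i - radius):i + radius + 1])
            for i in range(len(signal))]

def gray_dilate(signal, radius):
    """Dilation = maximum over the neighborhood."""
    return [max(signal[max(0, i - radius):i + radius + 1])
            for i in range(len(signal))]

def gray_open(signal, radius):
    """Removes bright peaks narrower than the structuring element."""
    return gray_dilate(gray_erode(signal, radius), radius)

def gray_close(signal, radius):
    """Removes dark valleys narrower than the structuring element."""
    return gray_erode(gray_dilate(signal, radius), radius)

# A one-sample bright spike vanishes under the opening;
# the wide bright plateau survives with its original values.
signal = [10, 10, 10, 90, 10, 10, 40, 40, 40, 40, 10]
```

With `gray_open(signal, 1)`, the spike of value 90 is flattened to the background level while the plateau of 40s is untouched, which is the speckle-removal behavior described above.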
It also removes the variations of gray levels we can observe within this image: it's a little bit darker in the bottom part and brighter in the middle part, and this effect disappears in the result of the top-hat. For the people who know the rolling-ball algorithm, this is a very similar algorithm; the only difference is in the way we estimate the background image. There is also a dual operation, the black top-hat, that enhances the dark structures whose size is smaller than the structuring element. An example of white top-hat application on an image of a leaf: if we want to enhance the network of the veins within the leaf, we can apply a white top-hat with a small structuring element, just a little bit larger than the width of the veins, and the result of the top-hat enhances the network of the veins. Another use of morphological filters is to extract the gradients, the boundaries of the structures within the image. Combining a dilation with an erosion results in the morphological gradient, which can be used in subsequent filters. There is also an equivalent of the Laplacian, obtained by combining erosion, dilation, and the original image, which can be useful to detect black or white spots within the image. OK, until now, I didn't care about the shape of the structuring element. Very often we use either a square structuring element, or a disk structuring element to better preserve the shape of the original objects. But it is possible to make it vary, and to focus specifically on structures with given orientations. By using linear structuring elements, line segments with specific orientations, and applying the different morphological filters, we can specifically select the features, the pixels in an image, that correspond to a specific orientation. So here is an example with a horizontal structuring element, and a vertical one.
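The white top-hat described above, original minus opening, can be sketched on a 1D profile: narrow bright structures are kept, and the slowly varying background (the estimate produced by the opening) is removed. A self-contained toy under my own assumptions, not the MorphoLibJ or rolling-ball code.

```python
# White top-hat = image - opening. Bright structures narrower than the
# structuring element are enhanced; the smooth background disappears.

def gray_erode(s, r):
    return [min(s[max(0, i - r):i + r + 1]) for i in range(len(s))]

def gray_dilate(s, r):
    return [max(s[max(0, i - r):i + r + 1]) for i in range(len(s))]

def white_top_hat(s, r):
    opened = gray_dilate(gray_erode(s, r), r)  # estimate of the background
    return [v - o for v, o in zip(s, opened)]

# A slowly increasing background with two narrow bright peaks on top:
# after the top-hat, the slope is gone and only the peaks remain.
profile = [b + p for b, p in zip(
    [20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40],
    [0, 0, 50, 0, 0, 0, 0, 60, 0, 0, 0])]
```

The background positions come out at (or very near) zero regardless of the slope, while the two peaks keep most of their height, which is the gray-level-variation removal described above.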
And by combining the results over different orientations, it is possible to obtain a directional filtering that enhances the structures that are curvilinear, and removes the signal of the pixels that correspond to isolated dots within the image. OK, I can exemplify this with a small demonstration, a live demonstration using ImageJ, using this image of crops. In one of the additional plugins that you can find on the GitHub account for this session, there is a directional filter that consists in choosing the operation and the features of the line structuring element, the length and the thickness. Depending on the orientation you choose, the result enhances the signal for the corresponding orientation. In the original MorphoLibJ library, there is the Directional Filtering plugin that consists in choosing the same features, but also combining all the results. Here, by taking the maximum value over a set of directions, we enhance the curvilinear structures. OK, so here is the result of directional filtering on the original image; this is the same as you can find on the slide. It's possible to go a little more deeply into directional filtering, and an interesting application of this kind of filtering is that it may be possible to separate curvilinear structures that intersect in the image. Here I have a demonstration on a synthetic image that corresponds to small curves that cross each other. Using another plugin, we create a 3D image in which each slice corresponds to the response to a specific orientation. OK, in the resulting image, we have the same kind of result, but as it is a 3D image, each plane of the 3D stack corresponds to a specific orientation. And here we can separate them.
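The combining step described above, opening with a line structuring element at several orientations and taking the pixel-wise maximum, can be sketched as follows. This is a binary toy with only two orientations (horizontal and vertical); the real plugin uses many oriented lines on grayscale images, and everything here (names, image) is my own illustration.

```python
# Directional filtering sketch: max over openings with line structuring
# elements. Structures aligned with one of the lines survive; isolated
# dots shorter than every line are removed.

def erode(img, se):
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy, dx in se)) for x in range(w)] for y in range(h)]

def dilate(img, se):
    h, w = len(img), len(img[0])
    return [[int(any(0 <= y - dy < h and 0 <= x - dx < w and img[y - dy][x - dx]
                     for dy, dx in se)) for x in range(w)] for y in range(h)]

def opening(img, se):
    return dilate(erode(img, se), se)

def line_se(length, horizontal):
    """A line segment structuring element of odd length."""
    r = length // 2
    return [(0, d) if horizontal else (d, 0) for d in range(-r, r + 1)]

def directional_max(img, length):
    """Pixel-wise maximum of the openings over the set of orientations."""
    results = [opening(img, line_se(length, horiz)) for horiz in (True, False)]
    return [[max(r[y][x] for r in results) for x in range(len(img[0]))]
            for y in range(len(img))]

# A horizontal streak of 7 pixels plus an isolated dot: the streak
# survives the horizontal line opening; the dot survives neither.
img = [[0] * 10 for _ in range(8)]
for x in range(1, 8):
    img[2][x] = 1
img[6][5] = 1
```

With `directional_max(img, 5)`, the curvilinear (here, straight) streak is kept intact and the isolated dot is removed, which is the behavior of the plugin's maximum-over-directions option.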
So if we visualize it in three dimensions, we have transformed the original structures that were intersecting in 2D into 3D structures that are now disconnected. From this image, it is also possible to compute the regions associated with each individual fiber, by applying a connected-components algorithm that results in a label image corresponding to the different fibers. So now, within the 3D image, the values correspond to the indices of the fibers. We can apply a colorization to this 3D image, and visualize it in 3D to have a better representation. OK, now we can separate particles even if they intersect. OK, let's continue. To finish with the morphological filters, another interesting family of filters is the attribute opening. The morphological filters we have seen use structuring elements that can be disks, squares, or line segments. But sometimes we want to be more generic and not care about the shape of the structuring element, but simply consider connected regions: to think about the size of the structuring element, but not about its shape. This is the principle of attribute opening. The idea is to consider regions of connected pixels within the original image, and the algorithm retains only the regions whose number of pixels is larger than a specified threshold. So here, applying an attribute opening (I use the pointer): here is the original image; here is the result of a classical opening with a small disk as structuring element. The goal is to make the small white dots within the leaf disappear, but with the classical opening, many small veins disappear as well. Using attribute opening, we can keep the network of the veins and remove the small dots. So in some cases, attribute opening may be a nice alternative to classical morphological filtering. That was classical morphological filtering.
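The binary form of the attribute opening just described, keep only the connected components whose pixel count reaches a threshold, can be sketched directly with a connected-components pass. A from-scratch toy (4-connectivity, binary image), not the MorphoLibJ implementation; the example image is my own.

```python
# Area opening sketch: label 4-connected components of a binary image
# and keep only those with at least `min_size` pixels. Unlike an opening
# with a structuring element, survivors keep their exact original shape.
from collections import deque

def area_opening(img, min_size):
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                # collect one connected component with a BFS
                comp, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:
                    for cy, cx in comp:
                        out[cy][cx] = 1
    return out

# A thin 10-pixel line (which a disk opening would erase entirely)
# plus a 1-pixel speck: the area opening keeps the line as-is.
img = [[0] * 12 for _ in range(5)]
for x in range(1, 11):
    img[2][x] = 1
img[4][0] = 1
```

This is why the thin veins survive in the leaf example: they are large as connected regions even though they are thin in every direction.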
The problem of such filters is that, usually, when we apply a morphological filter, we distort and change the shape of the original structures within the image. Here, applying a morphological opening makes the small items disappear, but we also change the shape of the large items that are retained. What we want is to keep the original shape of the remaining items. For this, we use morphological reconstruction. The principle is to work with two images. One is a marker image, in black in this figure, and we apply dilations to this marker. But at each step of the dilation, we also compute the intersection with the original image, which is used as a mask to constrain the dilations. Then, by iterating the dilation constrained by the mask, the reconstruction gradually rebuilds the original structures. At the end, what we obtain is simply the particles that contained a portion of the marker. It's possible to display this with a small demonstration. I will open two images, one corresponding to the image of particles, as shown: here the particles are in white and the background is in black; and a marker image. We choose the marker image here, and the particles image as the mask image. The result is a 3D image that corresponds to the different steps of the reconstruction. So this is a demonstration to illustrate the principle. In practice, in MorphoLibJ, the plugin to use is Morphological Reconstruction. It gives the result directly, and it uses an algorithm that is much faster than iterating the successive dilations. It also works for grayscale images, so in some cases it may be interesting to apply the reconstruction to a grayscale image. Yes, and there is also another demonstration for this. Let's suppose we have an image of different bright objects, the grains image, and we want to select only a specific subset of these grains. We can use grayscale morphological reconstruction on this image.
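The iterate-dilate-then-intersect principle described above can be sketched in a few lines for the binary case. This is the naive iterative algorithm used to explain the concept, not the faster queue-based algorithm that MorphoLibJ actually runs; the mask, the marker, and the 8-connectivity choice are my own example.

```python
# Binary morphological reconstruction: dilate the marker, intersect with
# the mask, repeat until stability. Only the particles of the mask that
# contain a piece of the marker survive, with their exact original shape.

def reconstruct(marker, mask):
    h, w = len(mask), len(mask[0])
    cur = [row[:] for row in marker]
    while True:
        nxt = [[int(any(0 <= y + dy < h and 0 <= x + dx < w
                        and cur[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
                    and mask[y][x])        # constrain the dilation by the mask
                for x in range(w)] for y in range(h)]
        if nxt == cur:
            return cur
        cur = nxt

# A mask with two particles; the marker is a single "click" inside
# the first one. Reconstruction returns that particle, whole, and
# the other particle is absent from the result.
mask = [[0] * 10 for _ in range(8)]
for y in range(1, 4):
    for x in range(1, 4):
        mask[y][x] = 1
for y in range(5, 7):
    for x in range(6, 9):
        mask[y][x] = 1
marker = [[0] * 10 for _ in range(8)]
marker[2][2] = 1
```

This single-click selection is exactly what the multi-point grains demonstration that follows does on a grayscale image.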
For this, we first need to create a marker image, with the same size as the original image. Then, on the original image, we use the multi-point selection tool and click on different grains, the ones we want to keep in the final image, making several groups. Then we need to propagate the selection to the marker image: Selection, Restore Selection, then draw the selection. And then we apply the morphological reconstruction: the marker image is this one, the mask image is the grains image, and the result is the isolation of the selected grains from the original image. OK, the last thing about morphological reconstruction is that it is used internally in many other algorithms. Typically, the kill-borders algorithm is based on morphological reconstruction, and filling the holes in a binary or in a grayscale image also implies a morphological reconstruction. So this is maybe an algorithm you have already used, even if you didn't notice it. OK, so to summarize on morphological filters: they are a useful family of filters, either for post-processing of binary images or for improving the quality of original grayscale images. In some cases, it is possible to play with the shape of the structuring element to enhance specific structures, like curvilinear structures. Another improvement is to use attribute opening; in that case, we consider only the size of the connected components around the pixels. And as a second step, morphological reconstruction can greatly help improve the quality of a segmentation, by selecting objects and then reconstructing the original shape of the objects afterwards. Yes, this is the end of the first section, so maybe there is time for a few questions. Yeah, there are questions for you. One that people keep asking is whether the directional filtering works in 3D as well. Yes, it works. The difficulty is we need to...
So it is not implemented in MorphoLibJ in 3D, but it is possible to implement it in 3D as well. It is a little bit more complicated, because in 2D it's easy to define a set of orientations, while in 3D we have to distribute the orientations of the line structuring element depending on azimuth and elevation. And also in 3D, we can focus either on line structuring elements or on oriented disk structuring elements; in the latter case, it would be more useful for enhancing 3D membranes. So, in summary, it doesn't work in 3D in MorphoLibJ at the moment, but it is technically possible. OK, there's another question, a little bit related to that. They ask if it is possible to determine the spatial coordinates of the surfaces that are created with the directional filter in 3D; but I guess if we just said that we cannot get this in 3D, it doesn't make sense, right? Or is there any way they can get the coordinates of the surface of the resulting object through MorphoLibJ? Yes, so once we have isolated objects or regions within the image, there are also analysis tools with which we can extract the coordinates of the objects. Usually what we do is extract the centroid, and I can present a little bit more in the analysis section, if this is the question. Yeah, I think so. I think we're good to follow. OK, thank you. OK, so now I will introduce morphological segmentation based on the watershed. Segmentation is the process of identifying the structures of interest within an image, and the result can have different forms. It can be a binary image, in which the result has either a true or false value. It can be a label image, in which the value of a pixel corresponds to the index of the region it belongs to. Or it can be geometrical primitives, such as a polygon or a collection of points.
The basic example of a segmentation algorithm is the threshold, which consists in simply keeping all the pixels whose value is higher than a given value. OK. The watershed algorithm consists in considering that the intensities within the image correspond to an altitude, and the goal is to identify the position of the valleys, the basins, within it: the dark regions within the image. Another view is that we want to extract the position of the crests between the valleys; the two results are complementary. The principle of the algorithm is a flooding process. We start from the local minima within the image and start filling the corresponding regions, until two adjacent regions would merge. At the boundary between these two regions, we start building a dam, giving a specific value to the pixels or voxels in between, and we continue the flooding process until the altitude reaches the maximum value within the image. Actually, when we apply a watershed algorithm directly on a grayscale image, the result is something like this: a large number of small regions, which doesn't really correspond to what we expect. The reason is that a region is created for each minimum within the image, and on this image on the right, each yellow dot corresponds to the position of an original minimum. Due to the noise within the image, we have many minima, and hence a lot of regions. So we need to select the interesting minima. OK, so we can have a small demonstration. I still have the image of the cells I have used before, and if we use a classic watershed on this image, without a mask and using the default parameters, this is the kind of result we obtain; I change the visualization by applying a color map. OK, so this is too many minima. So the first possibility to improve the quality of the segmentation is to manually select the minima.
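The flooding process just described can be sketched with a priority queue, assuming the markers (the selected minima) are given. This simplified variant assigns every pixel to a basin and omits the dams/watershed lines that the real plugins can build; the image and markers below are my own toy example, not MorphoLibJ code.

```python
# Marker-controlled watershed sketch: flood from labeled markers in
# order of increasing gray value. Each unlabeled pixel receives the
# label of whichever flood reaches it first.
import heapq

def marker_watershed(img, markers):
    h, w = len(img), len(img[0])
    labels = [row[:] for row in markers]   # 0 = unlabeled
    heap, counter = [], 0                  # counter = FIFO tie-breaker
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                heapq.heappush(heap, (img[y][x], counter, y, x))
                counter += 1
    while heap:
        _, _, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not labels[ny][nx]:
                labels[ny][nx] = labels[y][x]
                heapq.heappush(heap, (img[ny][nx], counter, ny, nx))
                counter += 1
    return labels

# Two basins separated by a high crest (the column of 5s), with one
# marker in each basin: the flood fills each basin before climbing
# the crest, so the two labels meet at the crest.
img = [[1, 0, 1, 5, 1, 0, 1] for _ in range(3)]
markers = [[0] * 7 for _ in range(3)]
markers[1][1] = 1
markers[1][5] = 2
```

Because pixels are flooded in order of altitude, each label stays confined to its own basin until the crest level is reached, which is why providing one marker per desired region suppresses the over-segmentation shown on the slide.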
For this, we generate a marker image with different regions corresponding to the initial positions of the minima from which the flooding process starts, and we compose it with the original image to remove all the other minima. It may be done manually, so I will demonstrate it now. Starting with this image, we create a marker image with the same size. Then we have to manually select the different markers we want to keep; we basically create a marker inside every dark region within the image. OK. We propagate the markers inside the marker image and draw them on the image. And then we can use, in the Segmentation menu of MorphoLibJ, the Marker-controlled Watershed. We select (OK, trying not to miss them) this image, and the last one as the marker image. And we obtain a result that is very similar to the one given on the slide. OK, so this is a manual selection of the minima, but for larger images, or for more complicated images, it may be very tedious. So it would be nice to have an automated way to select the minima within the image. A first possibility is to apply a strong filtering on the input image, like a median filtering, that will merge neighboring minima. But with this kind of filtering, we usually also have an impact on the brightness of the cell walls here, and it becomes more difficult to keep the boundaries of the original regions. Another possibility is to use the extended minima. For this, we first need to go back to the notion of regional minima and maxima. If we consider regional maxima, for example, they can be defined as a set of connected pixels with the same value, such that all the pixels around this region have a lower value. And in a typical image (oops, sorry), there are a lot of small minima.
What we want is to be less strict, and to use a tolerance value: we consider as part of the region all the pixels whose difference with the largest value of the maximum is smaller than the tolerance we have chosen. For the extended minima, this is the same principle, but we consider as extended minima the regions whose values are close enough, up to the tolerance value, to the smallest value in the region. When we apply this to the watershed, it consists in considering only the local minima whose difference between the bottom and the nearest crest is at least the value of the tolerance. This is implemented directly with a graphical user interface, and it works both for 2D and 3D images. So now let's switch to a 3D demo, after some cleanup. I will open a 3D image of a plant embryo, of Arabidopsis thaliana. This is a grayscale image that shows the different cells: these are the embryo cells, and the suspensor cells here. To study the development of the embryo, it is necessary to identify the regions corresponding to the different cells within the image. To apply a segmentation to this, we use the Morphological Segmentation plugin in the Segmentation menu of MorphoLibJ. This opens a new graphical user interface. If we run it directly, in some cases it appears that it works not that badly, but we may want to try to better separate some regions. If we increase the value of the tolerance, then more regions are merged, and there are not enough regions compared to what we want. On the contrary, if we use a value that is too small, then we have too many regions, which doesn't correspond to the expected result either. And a nice compromise could be this one.
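The tolerance-based minima selection just described can be sketched in 1D using the classical h-minima construction: raise the signal by the tolerance h, reconstruct it by erosion above the original signal, and take the regional minima of the result. Only minima whose dynamic (depth below the nearest crest) reaches h survive. A self-contained toy under my own assumptions, not the plugin's code.

```python
# Extended minima in 1D via the h-minima transform.

def reconstruct_by_erosion(marker, mask):
    """Iteratively erode the marker (window of 3) and take the point-wise
    max with the mask, until stable. Requires marker >= mask everywhere."""
    cur = marker[:]
    while True:
        eroded = [min(cur[max(0, i - 1):i + 2]) for i in range(len(cur))]
        nxt = [max(e, m) for e, m in zip(eroded, mask)]
        if nxt == cur:
            return cur
        cur = nxt

def regional_minima(s):
    """A sample is in a regional minimum if its flat zone has no strictly
    lower neighbor."""
    n, out = len(s), []
    for i in range(n):
        j = i
        while j > 0 and s[j - 1] == s[i]:
            j -= 1
        k = i
        while k < n - 1 and s[k + 1] == s[i]:
            k += 1
        lower = (j > 0 and s[j - 1] < s[i]) or (k < n - 1 and s[k + 1] < s[i])
        out.append(int(not lower))
    return out

def extended_minima(signal, h):
    hmin = reconstruct_by_erosion([v + h for v in signal], signal)
    return regional_minima(hmin)

# Three minima with dynamics 3, 1 and 5: a tolerance of 2 suppresses
# the shallow minimum (dynamic 1) and keeps the two others.
signal = [5, 2, 5, 4, 5, 0, 5]
```

This is exactly the filtering the tolerance slider performs before flooding: small tolerance keeps shallow minima (too many regions), large tolerance merges deep ones (too few).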
To facilitate the visualization, it is possible to change the display and overlay the result of the watershed, the limits or the dams, superposed on the original grayscale image. In that case, it may be easier to assess the quality of the segmentation result. Another way to visualize is to create an image of the labeled regions corresponding to the final segmentation. There are some additional options. We can choose whether or not to compute the boundaries between the watershed basins, so they may appear as thin lines between the regions. And we can also change the connectivity, which changes the way the neighboring pixels or voxels are considered: in 3D, we can consider either the six orthogonal neighbors, or 26 neighbors, in which case we also consider the voxels in the diagonals. OK, let me just use the dams and run again. Fine. And then we can create a result image. OK. From here, it may be useful to clean up the result of the segmentation, and for this we also have different tools in MorphoLibJ. They are in the Label Images menu: it is possible to remove the regions that touch the border of the image, to replace the value of different labels, to merge adjacent labels, or to extract a specific label to create a new binary image from it. Most of them are integrated into another plugin: the Label Edition plugin displays the result of the segmentation and gives access to the different edition options. So, for example, if we want to remove the largest region, which corresponds to the background, we make it disappear. And there are also some small regions in the bottom part of the image, so we need to select them and remove the selected regions. There is another one somewhere. Yes. OK. And then the result image can be generated by closing the plugin here. A good point of having an image of regions with a black background everywhere is that it is possible to visualize it in 3D.
Before visualizing, it's better to convert it to a color image; then it's possible to generate a 3D view, using, for example, the 3D Viewer. And then we have a nice representation of the labels. OK. Sometimes it may be frustrating not to be able to have a look inside the labels. So we also have an option to expand the positions of the different regions within the image, to make it easier to visualize the inside of the 3D image. It simply consists in moving the relative positions of the different labels: we have just added some space between adjacent regions to make them easier to visualize. So again, if we convert to color and visualize in 3D, now it's possible to explore what's going on inside the stack of cells of the embryo. OK, I will come back to the presentation now. All right. So this was the main plugin for the morphological segmentation, and I also presented the different options we have for post-processing, for the edition of the labels. Some more words on the watershed: usually, in the literature, when we speak about the watershed, we speak about the segmentation of an object that is contrasted, a bright object over a dark background, or the contrary, a dark object over a bright background. In that case, the strategy is to first compute an image of the gradient of the image, and to apply the watershed on this gradient image. We have a choice for the gradient: it can be a linear gradient or a morphological gradient. And the regions we obtain correspond to the regions separated according to their contrast. A typical watershed workflow is therefore as follows: from the original image, we can apply different morphological filtering options to improve the quality of the image. The watershed segmentation then results in an image of the labels, the label map image. Then the Label Edition plugin results in a label map with only the regions of interest.
And then, applying the different analysis tools I will present just after, we can combine them with the label image to generate a parametric map of the morphometric features of the regions within the image. Just to finish, there is also a watershed option in ImageJ itself. In fact, it's a combination of different tools that, starting from an image of grains that touch each other, applies the watershed after several steps. First, a distance map is computed: for each pixel in the regions, we compute the distance to the closest background pixel or voxel. We then compute the inverse of the distance map, so the high values correspond to the regions that are between the dark spots. Applying the watershed on the inverse of the distance map results in lines that can be combined with the original image to obtain an image of objects, where the objects are separated according to a convexity criterion. OK. So, to summarize on the watershed: it's a generic algorithm. It can be used to segment cells in tissue images; it can also be used to segment objects based on their contrast; and in ImageJ, it is also used to separate particles that are close to each other. I have presented different ways to manage the over-segmentation problem: it can be done manually, by selecting the markers, or by applying an algorithmic strategy for automatically computing the interesting minima within the image. And we also have the Label Edition plugin for improving the quality of the resulting segmentation. OK. So this is the end of the segmentation part, and I have some time for questions. Yes, there are some questions; let me go through them. One, from Andrea, is asking: is morphological segmentation able to segment overlapping objects with a defined shape, for example rod-shaped cells one on top of each other? No, it is not, sorry. It's a very common problem, but there is no tool actually in MorphoLibJ to do it.
And I think it's still a complicated problem in general. If I can add to that, I would say that if you somehow manage to find the right markers, then you can use the marker-controlled version of the watershed that we have to do that, but the border is not going to be in the right place. Yes, yes. So the problem is, if we have different cells that overlap, then the boundary of some of the objects may disappear, and it would be difficult to extract the boundary of the object that is behind another one. And we have another question, actually a recurrent one that I forgot to mention before: people are asking how to install, how to get these additional plugins that you are showing. Yes, I didn't mention it in the beginning. Actually, the principle is to connect to GitHub, to go to the plugin, and to download the file, with "save as"; then you need to save this file into the plugins directory of your ImageJ installation. So it's highly dependent on your installation, but typically, if I find it, OK, in a typical ImageJ installation there is the plugins directory, in which all the files corresponding to the plugins are located. So here, there is the MorphoLibJ library, and, yes, the directional filter plugins and the reconstruction demonstration plugins. In Fiji, installation is much more integrated, but for experimental plugins, it's a little bit more of a manual way of installing. OK, we have another question regarding morphological segmentation: are we able to overlay the result with the original image and then change the opacity in 3D? Hmm, I'm not sure I got the question. OK, once we have the result of the segmentation, it is possible to generate a binary mask, and we can combine this image with the original image to make a crop, a selected crop, of the original image.
So I think the question was whether we could overlay the labels on top of the original image and then change the opacity. I think the opacity is fixed, because one of the display options is actually to do that. Yes, it should be possible; it is not implemented as an adjustable setting, but it should be easy to add. I think it is 50% opacity or something right now, but it could easily exist as an additional option.

One last question: people are asking if we can use, for example, directional filtering to close gaps in images like that, where membranes are close together but have some gaps, even if the membrane is a little bit diffuse. Yes, it is a good question. For 2D images, I would say yes, because it is possible to use morphological filtering to improve the quality of the membranes on the sections. But in 3D, it is not so easy to apply directly. A possible way to do it could be to iterate over the different slices, apply a directional filtering to each slice to improve the quality of the membranes, and then create the resulting 3D image by concatenating the enhanced slices. But a better way would be to have a true 3D directional filtering approach; we can put that on the to-do list. Okay, I think that is it for now.

Okay, so let's continue. The last part will be about the analysis of the segmented images. I have three topics, in fact. Once we have a segmentation of the images, there are different ways to extract quantitative features from these segmented images. A classical way is to analyze the morphometry of the regions and extract morphometric features like the area, the perimeter, and so on.
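As a small taste of such morphometric measurements, here is a NumPy sketch (not MorphoLibJ's implementation) that computes the centroid of a binary region and its equivalent ellipse, i.e. the ellipse sharing the same second-order central moments as the region:

```python
import numpy as np

def equivalent_ellipse(mask):
    """Ellipse sharing the second-order central moments of a binary region.
    Returns centroid (x, y), the two semi-axis lengths, and the orientation
    of the major axis in degrees."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    # central second-order moments
    mxx = ((xs - cx) ** 2).mean()
    myy = ((ys - cy) ** 2).mean()
    mxy = ((xs - cx) * (ys - cy)).mean()
    # eigenvalues of the inertia matrix; for a uniform ellipse the second
    # moment along a semi-axis of length a equals a^2 / 4
    delta = np.sqrt(((mxx - myy) / 2) ** 2 + mxy ** 2)
    semi_major = 2 * np.sqrt((mxx + myy) / 2 + delta)
    semi_minor = 2 * np.sqrt((mxx + myy) / 2 - delta)
    theta = np.degrees(0.5 * np.arctan2(2 * mxy, mxx - myy))
    return (cx, cy), semi_major, semi_minor, theta
```

An elongation index is then simply `semi_major / semi_minor`; for a disk the two axes coincide, so the index is close to 1.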
In some cases, it may also be interesting to study the relative position of the objects with respect to each other, and how they are organized together. And a third possibility we have with mathematical morphology is to apply some texture analysis tools; in that case, it is a radiometric analysis. I will detail them.

In two-dimensional images, it is possible to extract a variety of morphological features. The most common ones are the area and the perimeter. Based on the inertia moments, it is possible to generate an equivalent ellipse, which shares the same moments as the original region. Popular parameters are also the Feret diameters, or, a related one, the oriented box with the minimum area. And we have also defined in MorphoLibJ some geodesic diameters, which correspond to the largest path we can draw within the particle. Based on these parameters, which describe mostly the shape and the size of the regions, it is possible to compute shape indices, which correspond in general to ratios of size parameters. For example, the elongation parameter corresponds to the ratio of the lengths of the two axes of the equivalent ellipse.

I was speaking about geodesic diameters, maybe one of the most original parameters in morphology. The concept is this: given two points within a region, we can define a path between them. The geodesic path is the path of minimum length that joins these two points while still staying within the region. And the geodesic diameter is the length of the largest geodesic path within the region; here, it corresponds to the path between the two geodesic extremities of this region. Based on the geodesic diameter, it is also possible to compute derived shape indices: the geodesic elongation, which corresponds to the ratio of the geodesic diameter over the diameter of the inscribed circle or inscribed disc; a shape factor may also be derived by combining with the area.
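To make the geodesic distance idea concrete, here is a small pure-Python/NumPy sketch (not MorphoLibJ's implementation, which propagates chamfer-weighted distances) that estimates the geodesic diameter of a region with two breadth-first passes, using 4-connectivity and unit steps; the two-pass trick is exact on tree-like shapes and a good approximation in general:

```python
from collections import deque
import numpy as np

def geodesic_distance_map(mask, seed):
    """4-connected geodesic distance (in steps) inside `mask` from `seed`;
    -1 marks pixels outside the region or unreachable from the seed."""
    dist = np.full(mask.shape, -1, dtype=int)
    dist[seed] = 0
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                    and mask[ny, nx] and dist[ny, nx] < 0):
                dist[ny, nx] = dist[y, x] + 1
                queue.append((ny, nx))
    return dist

def geodesic_diameter(mask):
    """Two BFS passes: find the farthest point from an arbitrary start,
    then measure the farthest geodesic distance from that point."""
    start = tuple(np.argwhere(mask)[0])
    d1 = geodesic_distance_map(mask, start)
    far = tuple(np.argwhere(d1 == d1.max())[0])
    return geodesic_distance_map(mask, far).max()
```

On an L-shaped region, for instance, the geodesic diameter follows the bend of the L, and is therefore much larger than the straight-line (Feret) extent.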
Another one is the tortuosity, which corresponds to the ratio of the geodesic diameter over the maximum Feret diameter. It corresponds roughly to the complexity of the shape: with a high tortuosity, we have a very complex shape, where a lot of circumvolutions are needed to reach the other extremity of the region.

A new feature in the last version of MorphoLibJ is that it is possible to quickly compute the average thickness of a region. The principle is simply to extract the skeleton on one side, to compute the distance map of the pixels within the particle on the other side, and then to average the values of the distance map over the pixels of the skeleton. It can be done manually by applying a distance map and a skeletonization, but it is a little bit tedious, and now it can be computed directly with MorphoLibJ.

We also have equivalent features for 3D regions. The volume and the surface area are the direct equivalents of the area and perimeter in 2D. It is also possible to compute an equivalent ellipsoid; in that case we have three radii and three orientation angles to identify how it is oriented in space. We also have a geodesic diameter and a computation of the inscribed ball. There are options to compute some shape factors, and we also have tools to colorize the 3D regions within the label image according to one column of the results table; for example, it is possible to colorize according to a shape index corresponding to the ratio of two ellipsoid radii.

Something I wanted to mention is that it is not always easy to measure the morphometry of the regions well, and two problems can arise. First, the measure we obtain may not correspond to the actual value we want to measure, so we can have a bias in the measurement.
The second problem is that, due to the discretization and the limited resolution of the image, we may also have some dispersion of the measured values, even when we measure the same object at different orientations and positions. So something we cared about a lot was to provide measurements that are as close as possible to the expected value. One example is the surface area. The first idea that comes to mind for measuring a surface area is to generate an isosurface and compute the surface area as the sum of the areas of the triangles in the surface mesh. In fact, when we generate synthetic shapes with a known surface area and measure it with this isosurface method, we obtain a large bias, and it is better to measure the surface area with different methods. In MorphoLibJ we use a method based on counting intercepts with families of lines of different orientations, and in general it gives better results than the isosurface method.

Okay, another family of tools serves to get insight into the relative organization of the regions. Typically, in cellular morphology, we are very interested in knowing the morphology not only of the cells but also of the neighbors of the cells. So there is a region adjacency graph tool in MorphoLibJ that provides a list of links between adjacent regions. We can show it quickly: we start from the segmented image here (not the color one), remove the border labels, and when we analyze, we obtain the region adjacency graph: a list of pairs of labels that correspond to adjacent regions. So here, for example, region number 16 should be a neighbor of number 20, and we find that pair in the list.
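The same list of adjacent label pairs can be computed with a few lines of NumPy; this sketch assumes the labeled regions touch directly (a one-pixel watershed boundary between regions would need extra handling) and uses 4-connectivity:

```python
import numpy as np

def adjacency_pairs(labels):
    """Sorted list of (smaller, larger) label pairs that touch
    (4-connectivity), ignoring the background label 0."""
    pairs = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]),    # horizontal neighbors
                 (labels[:-1, :], labels[1:, :])):   # vertical neighbors
        touching = (a != b) & (a != 0) & (b != 0)
        for u, v in zip(a[touching], b[touching]):
            pairs.add((int(min(u, v)), int(max(u, v))))
    return sorted(pairs)
```

Each pair appears once, regardless of how long the shared boundary is, which matches the idea of an edge in a region adjacency graph.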
So this gives one way to start analyzing the neighborhood of the regions, even if, to go further in the analysis, it is better to use scripting, or to develop your own plugin, to really go deeper into the interpretation.

Okay, and the last kind of analysis we can apply with mathematical morphology is texture analysis. The plugin is not directly included in MorphoLibJ; it is an additional plugin that requires MorphoLibJ. You can find it on GitHub as well, as the granulometry plugin; it should be rather easy to find. The principle is simply to apply morphological filters with structuring elements of increasing size. When we apply a morphological closing with a given radius, we make the dark regions, which here correspond to the cells, disappear; more precisely, we make the dark regions that are smaller than the structuring element disappear. Then, by increasing the radius of the structuring element, we make larger and larger cells disappear. So the idea is to measure the quantity of cells that disappear between two sizes of structuring elements. For this, we build a curve that corresponds to the sum of the gray levels within the image as a function of the size of the structuring element. This curve reaches a plateau after some number of iterations, and when we compute its derivative, we obtain a granulometric curve, which corresponds to a size distribution of the cells within the original image, and which takes the gray levels of the image into account. It is interesting because it can be an alternative to the analysis of the cell morphology of a tissue when it is not possible to obtain a precise annotation of the different regions within the image.

Here is an example on two slices of tomato pericarp images. The upper image is characterized by a large proportion of very small cells.
And on the red curve, we observe that the majority of the cells correspond to small sizes. On the contrary, the image at the bottom has a larger proportion of large cells, but the cells also have a larger variability in their sizes: small, medium-sized and large cells. So the blue curve corresponds to a wider distribution of the cell sizes within the image. From the granulometric curves, it is also possible to derive afterwards an average size of the regions within the image.

Okay, we start to reach the end of the talk. I just have some summary remarks on MorphoLibJ. It is a collection of plugins for filtering, segmentation, and analysis of images. It is rather rich at the moment, but maybe one of the difficulties is that it is a do-it-yourself library: it is not totally integrated, so you have to take the different elements of the workflow, adapt each one, and create your own workflow using the different components in the library. That is also why we have tried to make it easily adaptable for use by others. It can be used in macros: when using the macro recorder, it is easy to rerun a plugin just by copying the command given in the history. If you explore the documentation and the source code a little, it is also possible to be more efficient by directly calling some static methods; in that case, you can avoid the graphical user interface and improve the processing speed a little. And if you want to go really deep into the library, it is also possible to access the algorithm classes that implement the operations; in some cases, it is also possible to set some additional parameters directly. Typically, if you want to monitor the execution of the algorithms, it may be more convenient to manipulate the algorithm classes than to use a simple call to the plugin. Okay, so I am done; I give some references at the end.
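The average-thickness measurement described earlier (the distance map averaged along the skeleton) is one of the easiest of these workflows to script. In this sketch the skeleton is assumed to be already available as a binary mask (for example from a skeletonization plugin), and the same code works for 2D or 3D arrays:

```python
import numpy as np
from scipy import ndimage as ndi

def average_thickness(region, skeleton):
    """Mean thickness ~ 2 x mean distance-map value sampled on the
    skeleton (up to pixel/voxel discretization effects)."""
    dist = ndi.distance_transform_edt(region)
    return 2.0 * dist[skeleton].mean()

# Toy example: a horizontal band 7 pixels thick; its medial axis is the
# middle row, where the distance to the background is 4
region = np.zeros((9, 30), dtype=bool)
region[1:8, :] = True
skeleton = np.zeros_like(region)
skeleton[4, :] = True
```

Note the discretization effect: the band is 7 pixels thick, but twice the distance to the background pixel centers gives 8; such off-by-one behavior is exactly the kind of bias and dispersion discussed above.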
If you have some questions, there is still some time. There are questions indeed. One that a couple of people ask is whether it is possible to compute the average thickness in 3D, or whether there is any plan to implement it. Not yet; I implemented it in 2D for the moment. But since there is a skeletonization algorithm in 3D and a distance map in 3D, it is possible to compute it in 3D. At the moment there is no direct plugin to do it, but it is still possible to do it manually: first create the skeleton, second generate the distance map, and then combine the two to compute the average of the distance map over only the voxels belonging to the skeleton. I believe that on the forum I have answered this question with a script or something.

Okay, we have another question regarding the materials: under which license are they, and is it okay to reuse the training material and republish it, of course with full acknowledgement of you and the other authors? Yes, there is no license yet, but they can be reused. Most of the time, it is material that has already been published, so it is possible to reuse it. I have put references to the papers: when the material comes from a publication, I tried to give the reference paper.

Another question is: is there a way of calculating these granulometric curves on 3D images through mathematical morphology? Yes; again, it is not yet implemented, but in theory there is no difficulty in doing it. As long as morphological opening and closing are defined in 3D, and they are in MorphoLibJ, it is possible to compute granulometry for 3D images as well. So we have integrated plugins for 2D images, but it is possible to make a macro that iterates over the different sizes of structuring elements and computes the granulometry. I did not have the opportunity to apply it, so I have not yet implemented it.
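That iteration over structuring-element sizes can be sketched directly; this illustrates the granulometry principle with SciPy and square structuring elements (2D here, but `grey_closing` handles 3D arrays the same way), and is not the plugin itself:

```python
import numpy as np
from scipy import ndimage as ndi

def granulometry(image, radii):
    """Sum of gray levels after closings with (2r+1)x(2r+1) square
    structuring elements; the discrete derivative of this curve is the
    granulometric (size-distribution) curve."""
    vols = np.array([float(ndi.grey_closing(image, size=2 * r + 1).sum())
                     for r in radii])
    return vols, np.diff(vols)

# Bright "walls" with two dark "cells" of different sizes
img = np.full((40, 40), 200, dtype=np.uint8)
img[5:8, 5:8] = 0        # small cell, 3x3
img[20:29, 20:29] = 0    # large cell, 9x9
vols, curve = granulometry(img, range(7))
```

The gray-level sum increases with the radius as dark cells are filled in, and reaches its plateau once even the largest cell has disappeared; the peaks of `curve` indicate at which sizes the cells vanish.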
There is actually a very interesting question regarding the disparity of resolution in microscopy images, usually between the x, y and z directions: how do we work with this, and how do we adapt our structuring elements for that? Okay, when you apply some morphological filters, it is possible to specify a different radius for each direction. So if, for example, the resolution in the z direction is three times coarser than in the x and y directions, it is possible to use a radius in z equal to one third of the radius used in x and y. So you can adapt it, but manually. To say it differently: the size of the structuring element is given in pixels or in voxels, so if you want to take the spatial calibration of the image into account, it is the responsibility of the user to convert the relative sizes of the structuring element to match the ratio of resolutions within the 3D image. Actually, I also ask myself this question very often when I process images, because sometimes we want to give the size as a number of pixels or voxels, since that is closer to the way the data is stored; but when we want to interpret, we want the size in microns or millimeters or something else that better corresponds to the way we interpret the images. So it may also be an opportunity for improving the plugins, to specify the sizes in user units rather than in pixels.

So this is maybe a good moment to clarify: the results that appear in all the analysis tools, in which units are they? When the image has a spatial calibration, the image analysis tools take this spatial calibration into account. So if your image is calibrated in microns, the diameter or the geodesic diameter is in microns as well, the surface area will be in square microns, and the volume in cubic microns. So yes, the spatial calibration is taken into account.
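This manual adaptation to anisotropic voxels can be expressed directly, since SciPy's gray-level filters (like many morphological tools) accept one size per axis. The sketch below assumes a hypothetical volume whose z spacing is three times the x-y spacing, so the z radius is set to one third of the x-y radius:

```python
import numpy as np
from scipy import ndimage as ndi

# Hypothetical volume with voxel spacing 3x coarser in z than in x and y
vol = np.random.default_rng(0).random((15, 45, 45))

# A roughly isotropic (in physical units) closing: radius 6 voxels in
# x and y, but only 2 voxels in z, i.e. one third
rz, ry, rx = 2, 6, 6
closed = ndi.grey_closing(vol, size=(2 * rz + 1, 2 * ry + 1, 2 * rx + 1))
```

Since closing is extensive, the filtered volume is everywhere greater than or equal to the input; only the dark structures smaller than the (anisotropic) element are removed.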
Okay, I think we are done then; there are no more questions. Does someone else want to add something? As always, thank you so much, David, for the very nice work, and thank you for managing the questions and the logistics, that was great. It was very good, David. Thank you. Thank you. I hope the attendees enjoyed it and could learn things. Just to clarify, all the questions will be posted with their respective answers as a post in the forum. So if, later, you think of questions that you forgot to ask, you can also post them there and any of us will answer them. Thank you so much for your participation.