Good evening everyone. Thanks for coming, and thanks for watching the presentation. I am going to talk about my PhD project, which is the reconstruction of neurons in microscopy images. To start with, this is going to be more of an engineering presentation, so I hope it will be valuable for you to hear about it.

As you all know, neurons resemble tree-like structures. Even within this very simplified model you can see that it really is a tree-like structure, and we know that the morphology of this tree structure strongly influences how the neural system actually works: it is connected with the functionality, simply with how the branches connect with each other. Now, within that kind of structure we have points like the ones I have marked throughout the presentation in red and yellow: these are endpoints and junctions, the endpoints in yellow and, in reddish, the junctions, or bifurcations. I am going to talk about the detection of these kinds of points in microscopy images.

This is what such an image looks like; it is what cell biologists use: a wide-field microscopy image, with the annotated critical points shown on it. Each of them is associated with a spherical region. What happens with the image processing in this case is that the algorithms for detecting the linear structures are already there and more or less standardized, but at these points the linear structure breaks, and there is a lack of methodology that deals with these critical locations. Tips are also very important for automatic neuron reconstruction.
So I am going to talk about a methodology for building a computer program that would automatically detect these points. We have a model of it; this is how we tried to draw it. For each location we take its neighbourhood and look at what happens inside, what is going on with the pixel intensities within this region, and we look for patterns; that is the most intuitive way to describe it. For bifurcations we look for three streamlines that spread out from the centre of the region, and for endpoints we look for one. At the same time it is equally important to detect that there is nothing in the remainder of the region. The algorithm is supposed to capture these kinds of patterns and report them.

For that we use a standardized procedure in image analysis: feature extraction, feature selection, and, at the end, detection. The first module is directional filtering, a basic module where you take the locations of interest (I have marked a couple of important ones) and apply a set of oriented kernels with Gaussian weights. With that you get what we call angular profiles, which are basically a unique signature, a description, of each location in the image. Starting from the angular profile, an optimization procedure detects the angles, up to four of them; in this case there are three, and we call them alpha one, two, and three. The first feature, called the likelihood, is just the value of the angular profile at each of those directions. Now that we have narrowed down the search space, we can extract patches, the same size as the ones used in directional filtering, and compute a patch correlation for each detected direction.
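To make the directional-filtering idea concrete, here is a small sketch in Python. It is an illustration of the concept only, not the actual implementation from the talk: for one candidate location it builds an angular profile by averaging Gaussian-weighted intensities sampled along rays, and then takes local maxima of that profile as candidate branch directions (up to four, as mentioned above). The function names, the neighbourhood radius, the weighting, and the peak threshold are all my own assumptions.

```python
import math

def angular_profile(img, cx, cy, radius=8, n_angles=36):
    """Angular profile of the neighbourhood of (cx, cy): for each direction
    theta, average the Gaussian-weighted pixel intensities along a ray.
    `img` is a list of rows (row-major grayscale image)."""
    h, w = len(img), len(img[0])
    profile = []
    for k in range(n_angles):
        theta = 2.0 * math.pi * k / n_angles
        num = den = 0.0
        for r in range(1, radius + 1):
            # nearest-neighbour sampling along the ray (a real
            # implementation would interpolate)
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            if 0 <= x < w and 0 <= y < h:
                wgt = math.exp(-0.5 * (r / (0.6 * radius)) ** 2)  # Gaussian weight
                num += wgt * img[y][x]
                den += wgt
        profile.append(num / den if den else 0.0)
    return profile

def detect_peaks(profile, max_peaks=4, min_height=0.5):
    """Local maxima of the circular profile above a relative threshold:
    the candidate branch directions (up to four, as in the talk)."""
    n = len(profile)
    thr = min_height * max(profile)
    peaks = [i for i in range(n)
             if profile[i] >= thr
             and profile[i] > profile[(i - 1) % n]
             and profile[i] >= profile[(i + 1) % n]]
    peaks.sort(key=lambda i: profile[i], reverse=True)
    return peaks[:max_peaks]
```

On a toy image containing a single straight line through the candidate point, the profile peaks in exactly two opposite directions, which is the signature one would expect for a point in the middle of a branch rather than a tip or a junction.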
As the template for this patch correlation we use a Gaussian model patch, one with a Gaussian profile in its cross-section. We use that because, when you do microscopy, the cross-section of a neuron is well modelled by a Gaussian intensity. The last feature is called the bending energy: within the patches we refine a set of points and try to follow a bit how the neurite goes. It is basically a smoothness measure: the higher it is, the less smooth the line is. Those three features are extracted for all the detected directions.

The next stage is how to blend those features together, and for that we use a fuzzy-logic rule-based system. Fuzzy logic is an old concept from the 1960s, used in control systems. To explain it very quickly, it is a kind of controller which takes an input, fuzzifies it in terms of some linguistic variable, and then applies a set of rules, if-then rules, expert rules. The output is again another linguistic variable, and through defuzzification we get a crisp output. So here is an example of a controller which takes an input, fuzzifies it, and applies a set of rules; each rule fires to some degree, the activations are accumulated together, and with defuzzification we get the output. This is a very simple controller, one input and one output, and when you design the rules it is very important that they complement each other well. So how does this relate to our problem?
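The fuzzify / apply-rules / defuzzify loop just described can be sketched as a toy two-rule controller. Everything here is an assumed simplification for illustration: the membership functions, the rule set, and the weighted-average defuzzification are my own choices, not the system from the talk (the 0.4 spread for the membership functions echoes a value mentioned in the Q&A).

```python
import math

def gauss_mu(x, center, sigma=0.4):
    """Gaussian membership function: degree to which x belongs to a category."""
    return math.exp(-0.5 * ((x - center) / sigma) ** 2)

def streamline_on_degree(likelihood, correlation):
    """Tiny Mamdani-style controller with two illustrative rules:
       R1: IF likelihood is HIGH AND correlation is HIGH THEN streamline is ON
       R2: IF likelihood is LOW  OR  correlation is LOW  THEN streamline is OFF
    AND is min, OR is max; the rule activations are combined by a weighted
    average of the consequents (ON -> 1.0, OFF -> 0.0)."""
    high = lambda v: gauss_mu(v, 1.0)  # "high" centred at 1
    low = lambda v: gauss_mu(v, 0.0)   # "low" centred at 0
    r_on = min(high(likelihood), high(correlation))
    r_off = max(low(likelihood), low(correlation))
    return (r_on * 1.0 + r_off * 0.0) / (r_on + r_off)
```

Feeding in strong evidence (both features near 1) gives a degree of belief close to 1 that the streamline is "on", while weak evidence pushes it towards 0; because the output is a degree rather than a hard decision, the uncertainty is carried forward into the next stage, which is exactly the appeal of fuzzy fusion mentioned in the talk.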
Well, you detect the features for each of the streamlines, as marked in one of the previous slides, and you quantify them in linguistic terms, so to speak: each of them is either high or low to some degree. Then you apply a set of rules, which I am not showing here but which are actually the core of the algorithm, and each of these streamlines is expressed in a fuzzy way, saying to what degree it is "on", "off", or "none". On top of that there is another fuzzy-logic system concatenated, which takes all the possible streamlines, and at the output, for each pixel location of interest, we have a fuzzy output: the degree of belief that the point is a junction, an endpoint, or nothing. That is it in a nutshell. The nice thing about fuzzy logic is that it models uncertainty, you can apply a set of non-linear rules, and in this case it basically performs information fusion.

I evaluated the algorithm with a hit-or-miss evaluation: we have a ground truth and we count the hits and misses, that is, the true positives, false positives, and false negatives; from those we get precision, recall, and an F-measure per image. In the end we sum all of that up into three measures: the F-score for junctions, the F-score for endpoints, and an F-score that combines both. The data sets are synthetic and real. The synthetic data sets are neurons taken from an available internet database, neuromorpho.org, rendered at different levels of signal-to-noise ratio; this is how those images look with annotations. Of the real images there are 19, manually annotated, with a varying number of critical points per image. These are the results, the F-scores.
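The hit-or-miss evaluation described above reduces to standard precision, recall, and F-measure arithmetic; a minimal sketch (the function name is mine):

```python
def f_measure(tp, fp, fn):
    """Precision, recall and F-score from true positives, false positives
    and false negatives counted against the annotated ground truth."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f = (2 * precision * recall / (precision + recall)
         if (precision + recall) else 0.0)
    return precision, recall, f
```

For example, 8 correctly detected junctions with 2 false detections and 2 missed junctions gives precision 0.8, recall 0.8, and an F-score of 0.8; computing this per image and per point type yields the three summary measures reported next.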
So, the F-scores. If we look just at junction detection, you can see how the detection goes per image; in this case we only care about junction detection and ignore endpoint detection. Here we look at endpoints only, and here we combine both in one go, taking the best score. This was all for a signal-to-noise ratio of four, which is actually a high signal-to-noise ratio. If we look at the detection performance per signal-to-noise ratio, the scores drop substantially at signal-to-noise ratio three, but roughly we can expect something around 90 percent, as it looks there.

For real data things are not as bright as for synthetic. Again the same process, the F-scores for junction detection, endpoint detection, and combined, but in this case we are around 80 percent, 78 percent overall for this group of, I think, 14 images. So there is a drop of about 10 percent on the real data, which is of course because real data are not as clean as synthetic data and are harder to tackle, and synthetic data cannot model all the variability that exists in real life. This is just a visualization of how it looks: there are cases where it misses and cases where it is okay; in general it captures the topology of the neuron.

To summarize, I have presented a novel method for the detection of critical points in neuron images. It has been extended to the detection of endpoints, the tips of the neuron branches, and there is another scheme which uses generated synthetic neuron images that come with ground truth, because ground truth is a problem: we have a lack of standardized ground truth. As for future work, the algorithm can be extended to 3D, compared with other algorithms, and used for neuron reconstruction, for tracing. With that I would like to conclude the presentation. My thanks go to my colleagues from Erasmus MC, and if you have any questions, I would be happy to answer them.
Q: Thank you. The real data that you were using, was this just cell-culture-based data in 2D from a wide-field microscope, or did you also use sliced data?

A: 2D. Biologists usually just have a sample in 2D. Of course 3D is indeed necessary, but experiments can be carried out with 2D as well.

Q: But also, where does your sample come from: is that cell culture or tissue?

Q: Your system has a lot of parameters, including the kernels of the Gaussian filters and all of the fuzzy-logic system. Aren't there too many of them?

A: There are five parameters altogether. The crucial parameter is the size of the kernel, but you can also apply several sizes. The rest of the parameters, the fuzzy-logic ones and so on, I do not really want the user to change. For instance, if it is an NCC score, then it is between minus one and one, so if it is low you just set it to 0.5 and do not care.

Q: But when you set up the system, you set up the membership functions, the sigmas of the membership functions and so on.

A: Those sigmas do not make so much of an impact; in this case they are around 0.4. As long as you separate the categories, it is okay.

Q: And are they the same for the synthetic and for the natural images? You use the same set of parameters?

A: Yes; well, I was going through a grid of parameters for these experiments and choosing the ones that were good.