So, welcome to this ilastik webinar. From what I see in the poll, most of the people here either didn't use ilastik or used only pixel classification, so it's right to the point of what we are going to have today, which is the more advanced and less well-known features of ilastik. The speakers today will be Anna Kreshuk and Dominik Kutra, who are developers of ilastik at EMBL, and we also have Maxim Lovikov, also one of the developers, who will help answer the questions together with Carlo Beretta from bio-image analysis at the University of Heidelberg, and me, Ofra Golani, and Marion Louveaux will also help you. So, go ahead, Anna and Dominik.

Okay, so thank you very much, Ofra, for the introduction. I will just turn my video on for a second to say hello, but my upload speed is not good enough, so I'll turn it back off now and start sharing my screen. Thank you all very much for joining. It's a pleasure to see so many of you interested in the deeper features of ilastik, and it looks from the poll, like Ofra said, that you're all at pretty much the right stage. So, we are going to talk about many different things, but not so much about pixel classification. Let me just find my, there you go, the screen, aha, yes. Okay, so on to ilastik beyond pixel classification.

I want to begin by actually thanking all the people who have been instrumental in making ilastik what it is now. It's a fairly, well, let's say mature project compared to others, right? We started a long time ago. A lot of people have been working on this really hard. The current development team you can see on the left, they're awesome people to work with; the people in the middle are the founding fathers, the heroes of the past, and we are all very grateful to them. And on the very right, you can see the funding agencies who by their generosity actually made it all happen.

So, a bit of history here. The development of ilastik started back in 2010, I think, in Fred Hamprecht's lab at the University of Heidelberg. I also joined around that time and kind of slowly over the years took over the management of the project. And then when I started my own lab at EMBL in summer 2018, the project moved there with me, and the development team moved with me. Now we are all up on the hill, but of course we still talk to Fred's lab frequently.

So, you can see what pixel classification does. This is, I hope, the one workflow that you guys already know. And the same philosophy that we have used for pixel classification, which you are familiar with, is the one that really goes through all the ilastik workflows. We want to go with the idea where you give labels, because you provide the training data, right? Because you know what you're looking for in the data. And we do the algorithmic side where, given your labels, we solve the problem with machine learning algorithms. In pixel classification, you give labels for different pixels. In the other workflows that you are going to see today, you will have the labels as objects, or labels as tracks, or labels as dividing objects, or labels as edges between superpixels, all kinds of different labels. But the main idea remains the same: you give labels, the algorithm solves the problem. And with this, since you have now looked quickly at pixel classification, I will actually hand over to Dominik, who is a core developer of ilastik, the master of it all.
And he's actually sitting at EMBL now with a much better connection than me, so he can do all the demos and all the other fun things there. Over to you, Dominik.

Okay. Let me share my screen. Okay. Hello from me as well. I don't have any excuse to close my camera, so you can see me all the time. What I want to give you is a little bit of an overview of all the workflows besides pixel classification. I've seen in the poll that a lot of people actually have not seen it, so now I feel inclined to show it really quickly. I mean, you see it here in the video, but for some things that I want to show later, this might be interesting. So if you open ilastik and create a new project, somewhere in the beginning you have to load some data. I will use my favorite data ever. And then you select some features. In pixel classification, we always say, yeah, just select all of them and don't even think so much about what they are; they are just different ways of looking at the same image. And then, as you've seen in the video, you provide some training data. So for example, you say this is a cell and this is background, you go to live update, you get instant feedback, you get a prediction, and you correct it where it's wrong. For example, like this, and the prediction improves. And so on and so on. So now everyone has seen our workhorse.

Okay, let me hide this really quickly. The workflow I originally wanted to start with is Autocontext. And this is the only workflow I will not show live, for two reasons. One is my little laptop with only eight gigabytes of RAM, which will probably not be up to the task of annotating something impressive in 3D. And that's maybe the main reason. So instead, I'll show you just qualitatively what Autocontext does. It is like the pixel classification we have seen before, but you actually do two rounds of pixel classification. On the left here, you see on the top some annotations and on the bottom some predictions of the first round of pixel classification. The task here was to find vesicles and to find mitochondria. And it worked somehow, but in some areas, like here for example, or here, you get overlapping results. In the second round of Autocontext, you do the same as in pixel classification, but on the predictions that you have produced in the first round. So you give it a new set of annotations, and what happens is that the predictions get much cleaner.

In the next slide, I want to give you an intuition on why that is. What you see in gray is an image; those are all the little pixels that usually make up your image. And in ilastik, I showed you in pixel classification that I selected some features. Those features are actually new numbers for every pixel that are generated by looking into the neighborhood of this pixel and calculating a new number for it, so you generate a new image. We do this in a couple of different ways, so we get a couple of different features for each pixel. You do this for every pixel, then you add your annotations and you generate some predictions. Those predictions are symbolized here with different colors. In the second round, what you actually do is exactly the same: you compute your features for your pixels, but this time not only on the original data, but also on the class predictions that you have produced in the first round. This means that each pixel now has a lot more meaning. It gets semantic context, because you know, oh, maybe this pixel was predicted as vesicle before, and this can then be implicitly taken into account for the current pixel. But what also happens is that all the predictions, like for this pixel here, have in turn been calculated by taking into account filters in that region, and this is also true for all the filters in that filter mask. So implicitly, in Autocontext, you also get a larger field of view. Those are the key points why Autocontext works better: you get the semantic information from the first round and also, implicitly, a larger field of view. Okay.
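As a minimal sketch of this two-round idea (this is not ilastik's actual code; the filter choices and helper names here are purely illustrative): in round one the features are computed on the raw image only, and in round two on the raw image plus the round-one class probabilities.

```python
# Minimal two-round "autocontext" sketch (illustrative only, not ilastik's implementation).
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def pixel_features(channels, sigmas=(1.0, 3.5, 10.0)):
    """Stack simple neighborhood filters (Gaussian smoothing) per channel and scale."""
    feats = [ndi.gaussian_filter(c.astype(np.float32), s) for c in channels for s in sigmas]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def train_predict(channels, labels):
    """Train a random forest on the labeled pixels and predict class probabilities everywhere."""
    X = pixel_features(channels)
    y = labels.ravel()
    rf = RandomForestClassifier(n_estimators=100).fit(X[y > 0], y[y > 0])
    return rf.predict_proba(X).reshape(channels[0].shape + (-1,))

# raw: a 2D image; labels1/labels2: sparse annotations (0 = unlabeled) for the two rounds
raw = np.random.rand(256, 256)
labels1 = np.zeros_like(raw, dtype=int); labels2 = np.zeros_like(raw, dtype=int)
labels1[10, 10], labels1[200, 200] = 1, 2
labels2[10, 10], labels2[200, 200] = 1, 2

probs1 = train_predict([raw], labels1)                         # round 1: features on raw data only
round2_channels = [raw] + [probs1[..., k] for k in range(probs1.shape[-1])]
probs2 = train_predict(round2_channels, labels2)               # round 2: raw + round-1 probabilities
```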
And from Autocontext, I will now go to the next workflow. Until now, this was all pixel based: you looked at pixels, calculated measures in terms of pixels, and got classified pixels as a result. Something else is going on in object classification. In object classification, we don't look at pixels as isolated beings, but we look at them in meaningful groups, for example here, all the pixels that make up an M. And if you make the switch from pixels to objects, you can actually get more meaningful features on this object level. You suddenly have the notion of a shape, of a size of an object, and many more features that we will get into in detail very shortly.

But let me maybe quickly start up ilastik again and show you how object classification works in practice. So I've got my ilastik set up, as you know. First of all, you see that object classification appears here three times. And maybe really quickly: pixel classification plus object classification is a workflow that you should never really use; it is only for demo purposes, not for serious work. Then there are the other two object classification flavors, and they differ in the type of input data you give them. You can either give it a segmentation directly, where each pixel has a hard decision on what it is, foreground or background, or you give it a pixel prediction map; both of these you can produce in pixel classification. I already produced a pixel prediction map with this alphabet soup example that you have seen before. So I'll quickly load in the raw data. For this demo, I don't take the full alphabet, because it would mean too much clicking, but only some of the letters. And you also need to provide, here in the second tab, a prediction map, which I've also already generated in pixel classification.

Okay. Up until now, you only have predictions, so these are just pixels with different values. But you have to make a hard decision now: which pixel is actually an object and which pixel is actually background. This is what you do in the second applet, the thresholding applet. First of all, there are different methods. I will focus only on simple thresholding right now; we'll go into the other method later. You have to select the input channel, which should be the object channel. In this case, it's the yellow channel, so I leave it at this. You can apply some smoothing to the predictions if they are not smooth enough, but these look good. And the most important value here is the threshold value. Starting with a threshold of 0.5 is always a solid idea, I would say; the threshold goes from 0 to 1. And as you can see, now suddenly all the letters have different colors. Those colors are completely random; they don't have any meaning.
They are just there to help you distinguish different objects. And if you change the threshold, for example if I go lower, you can see that the letters get thicker; a higher threshold gives me tighter, more conservative objects in this case. And sometimes, as you might see here with the letters D and N, it might appear that they are connected. This can just be by chance, because they have the same color, and those colors are just random. For those cases you can right click on the final output layer and select randomize colors, and a new set of colors will be applied to the image; maybe do it one more time, just to make sure it's really not a coincidence. And now these really are connected. I will solve this here simply by supplying a slightly higher threshold, and now you can see those letters are not connected anymore.

From here, I go to the object feature selection. If you see this dialogue, there are three main groups, which might or might not spark any intuition. Because of that, I have prepared a slide where I want to make those features a little bit more tangible. Okay, let me go back to the presentation. I will explain all the features on the example of this letter A from the data set here.

The first group I want to talk about are the standard object features, which include the basic shape of the object, so the count of pixels that the object actually encompasses, and also the gray level characteristics: you can look, for example, at the histogram of the gray level distribution of the object and also of its surroundings. And there are also some location features.

The second group are the convex hull features. And what is the convex hull? I tried to illustrate this a little bit here. You have the same A again, and I highlight the object for you. Now, in order to construct the convex hull, you can imagine you want to ride with your bike around this object, but you are only allowed to turn in one direction. So I start here with my bike and go clockwise, and I'm only allowed to turn right; I'm never allowed to turn left. In the beginning, that doesn't constrain anything, right? I turn right, and I go down, down, down, down. And here I can also turn right. Here I might feel inclined to turn right again, but this is not allowed, because I would have to turn left at some point. So maybe I show the convex hull now. The convex hull is actually here, around the object, just like you would ride around it without ever turning in the opposite direction. The convex hull is not the same shape as the object; as you can see, there are areas of the hull which are not covered by the object, and these areas we refer to as defects. So what can you do with the convex hull once you have it? You can compare the convex hull to the object: you can compare areas, you can look at defect characteristics, and of course you can also look at locations here.

And the last group of features are the skeleton features. Again, what is a skeleton? We start off with the same A; we have our object here. In order to get to a skeleton, you can think of an operation that goes to your object and removes pixels from the boundaries, making it thinner and thinner until it's only one pixel wide. What's left over in this one-pixel-wide representation is the skeleton. It looks something like this. You have characteristic points in the skeleton: for example, these are end points, and you have junction points, where two branches of a skeleton meet. A branch is always a segment between two such points, so in this example we have one, two, three, four, five, six branches. We also have one cycle, because you can go in a circle here; other objects don't exhibit this, or have more cycles. This is something you can look at. And here, for completeness, also the diameter, which is defined as the longest path in your skeleton.
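For intuition, a few object-level features in this spirit are easy to reproduce with scikit-image; a small sketch, with the caveat that the exact definitions ilastik uses may differ in detail:

```python
# Sketch of a few object-level features similar in spirit to the ones described above
# (illustrative only; the exact definitions ilastik uses may differ in detail).
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import convex_hull_image, skeletonize

mask = np.zeros((64, 64), dtype=bool)     # toy object standing in for the letter A
mask[10:50, 20:28] = True
mask[10:18, 20:45] = True
mask[42:50, 20:45] = True

# standard features: size in pixels
size_in_pixels = int(mask.sum())

# convex hull features: convexity and defect area
hull = convex_hull_image(mask)
convexity = mask.sum() / hull.sum()       # 1.0 would mean the object has no defects
defect_pixels = int(hull.sum() - mask.sum())

# skeleton features: end points and junction points, counted from local neighborhoods
skeleton = skeletonize(mask)
neighbor_count = ndi.convolve(skeleton.astype(int), np.ones((3, 3), int)) * skeleton - skeleton
endpoints = int(np.sum(skeleton & (neighbor_count == 1)))
junctions = int(np.sum(skeleton & (neighbor_count >= 3)))

print(size_in_pixels, round(convexity, 3), defect_pixels, endpoints, junctions)
```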
Okay, with this knowledge, I want to go back to the feature selection dialogue. Let me bring this over now, and maybe the features that I choose will make sense. But first, maybe a good point: why even bother? In pixel classification, I said you can just use all the features, and in object classification I am spending so much time on explaining them. The matter of fact is that in object classification it actually matters which features you choose. The simplest reason is that in pixel classification, where you annotate with brush strokes, you generate a lot of training data; each annotated pixel is one training data point. In object classification, it's only click-wise: one object you click to belong to a certain class is only a single data point. And you usually want to stay, with the amount of training data, well above the length of your feature vector. This is why you should take more care about the feature selection in object classification.

So I will maybe go very basic. From the standard shape features, I just take the size in pixels. Let's look at the convex hull features: I will take convexity. The number of defects is in our example the same for each of the objects, so maybe I go for the number of holes instead. So, I bring back ilastik now. Here it is. I selected those features and go to the training applet. As in pixel classification, you can add and remove labels. We have three different classes, which I will name accordingly: the first class for the N, the second for the B, and the third for the S. And then I annotate. First I annotate a few N's, then I annotate a few B's, and finally some S letters. And after three, I'm always inclined to press update and see how it is going. And this is the next step, always, in ilastik: explore your result. I see that one of the N's got misclassified, and I correct it where it's wrong; you can right click on objects and then set the label directly. One last look. This looks pretty decent already.

Okay. And from there you can go to the object information export. You can export two different things here. One is an image. I will not go into the image export settings now, because those are really very much the same as in pixel classification. But I want to highlight a different thing, which is the feature table export. The feature table allows you to export all those numbers you have used during classification. All those features are in fact only numbers, so the convexity and all that we've seen on the slides. And you can export it to different formats. You can use a CSV to open it conveniently in Excel, or in whatever the Google software for this is. Or you can export an h5 file, which is very useful if you want to post-process this data later or do some advanced visualizations and things like that. You can choose which features to export, and in the case of the h5 file, you can also include either the whole image or smaller regions of interest around each of the objects.
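Such an exported table can be picked up directly by scripting tools; a tiny sketch, where the file name and column names are only placeholders for whatever your own export contains:

```python
# Sketch: loading an exported object feature table for downstream analysis.
# The file name and the exact column names are placeholders; check the header of your own export.
import pandas as pd

table = pd.read_csv("alphabet_soup_table.csv")   # CSV from the object information export
print(table.columns.tolist())                    # inspect which features were exported

# e.g. group objects by their predicted class and summarize one feature
summary = table.groupby("Predicted Class")["Size in pixels"].describe()
print(summary)
```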
Okay. So what can you do with such a table? I just want to quickly show you what you can do if you can do a little bit of scripting. Let me pick this up. Okay. What I did here: I generated sort of a quality control PDF directly from the result, where you have your object, your object mask, the predicted label, and you see the probability and whether you have labeled it or not. These kinds of things can be really nice in your daily work, I think, for things like quality control and quality assurance.

Okay. So I will go back to the slides, which I have just closed. Hang on. In the meantime, while the slides reopen: in ilastik there's always the batch processing, where you can just drag and drop more datasets of the same kind. And in object classification, there's also this block-wise object classification applet. I have not mentioned one thing about object classification. For pixel classification, we always claim that you can in principle use it with data of arbitrary size: even if the data is way larger than your RAM, as long as you keep your window zoomed in, you can process it, because ilastik only predicts on the portion you are currently looking at. In object classification, this is not possible, because there is one global step, the thresholding, where the whole image is loaded. So you really are limited by the size of your RAM. However, in order to still make it possible to process large files, we have this block-wise object classification, where the idea is that you train on a small subset of your data, something that fits into your RAM, and train your project. Afterwards, you process the rest without the graphical user interface, in blocks, block-wise.

And what does it actually do? It will subdivide your image into rectangular or square regions and do one region at a time. This can be dangerous, and I want to illustrate this. You see, you have multiple settings here, and I will illustrate this by choosing some really, really wrong settings. I press apply, and now in the background the block-wise processing is going to work. In this case, it will work on super tiny blocks: every 64 by 64 pixel square is processed individually; it does a feature extraction, applies the classifier to it, and shows the result. You can see this also takes longer than doing it all at once, but the results slowly come in. And you see the problem with this: by splitting the data set into blocks, you get these kinds of artifacts, where probably here is a block boundary, and the upper block has seen this portion of the B and the lower block has only seen this little portion of the B. If you calculate your features on those, they don't make sense anymore at all; you can see the predictions are all wrong.

So what we always say here, let me just make this a little bit bigger. Making the blocks a little bit bigger will not help, because you still run into problems at the boundaries: whenever you are at a block boundary, there is a possibility that an object is not fully included. And for that we have this notion of the halo. The halo is just an additional region around your current block that is also taken into account during the classification, but you actually only write out the result for the center block, while still seeing a larger region. So I put in something reasonable like 128 pixels; this should be in the range of the size of your largest object. I hit apply, and you should hopefully see, once this is finished... yeah, now it woke up, that the prediction is as good as before.
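A conceptual sketch of this block-plus-halo idea (not the ilastik implementation): the `predict_objects` function below is a hypothetical stand-in for the per-block thresholding, feature extraction and classification.

```python
# Sketch of block-wise processing with a halo (conceptual, not ilastik's implementation).
# `predict_objects` stands in for "threshold, extract object features, apply the classifier".
import numpy as np

def predict_objects(block: np.ndarray) -> np.ndarray:
    return (block > 0.5).astype(np.uint8)            # placeholder for the real per-block work

def process_blockwise(image, block=128, halo=64):
    out = np.zeros(image.shape, dtype=np.uint8)
    H, W = image.shape
    for y in range(0, H, block):
        for x in range(0, W, block):
            # read an enlarged region: the block plus a halo on every side (clipped at the image borders)
            y0, y1 = max(0, y - halo), min(H, y + block + halo)
            x0, x1 = max(0, x - halo), min(W, x + block + halo)
            result = predict_objects(image[y0:y1, x0:x1])
            # ...but only keep the central block of the result
            out[y:y + block, x:x + block] = result[y - y0:y - y0 + block, x - x0:x - x0 + block]
    return out

prediction_map = np.random.rand(512, 512)
segmentation = process_blockwise(prediction_map, block=128, halo=64)
```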
Okay. So now let me get back to the slides. The features we have already covered, and this is basically what I wanted to say about them. Maybe here's a good natural point to make a tiny break for questions that are immediate. So, are there any immediate questions? Well, there are plenty of questions. So, how should we do that? Should I read them to you and we both try to answer? Yeah, why not? Okay.

Hold on. In the Q&A: is the threshold value of 0 to 1 a relative representation of the image bit depth? No, the values are a representation of the prediction map, which, in our case, in what Dominik has shown, was generated with the ilastik pixel classification workflow. What it predicts is the probability that a pixel belongs to the defined class, say the foreground, and probabilities, as usual, go from 0 to 1. So it has nothing to do with the raw image anymore. Of course, you can also load the raw image instead of the probability map and just threshold that. We usually go the way of first running pixel classification, but if you have an 8-bit image, you can load that instead of the probability map and it will still work.

Okay, next question. Could you explain again why feature selection is more important in object classification than in pixel classification? Sure, shall I do it? Okay. Yeah, you go. Okay, so basically this all goes back to the fact that you want to have far more training data than you have features in your feature vector. In pixel classification, for example, your feature vector has a size of approximately 50 for each of the pixels, and your annotations are done by brush strokes, so you generate a lot of training data; each annotated pixel is one training data point. What you have seen now in object classification is that if I annotate an object, I actually only add one data point. So there I should take a lot more care that my feature vector does not get too long, because then I would have to annotate a lot. So, in order not to get a good prediction just by chance because you don't have enough annotations yet, it's better to start with a smaller number of features which also make a little bit of sense, and then go from there.

Okay, next one. What to do in object classification situations where the object properties are not as uniform as letters, such as cell types which can differ in area, shape and size? Well, as long as the variability within a class is smaller than the differences between classes, I think you're still well set, because we have a lot of features: there are features which are, say, the moments of intensity, and you also have the average intensity and the standard deviation of the intensity, and all these kinds of things for shape as well. In the end, it will learn the right combination of features and how to group them together, as long as this is in the data, as long as things can actually be grouped by these properties. Of course, our letters are all the same, but if you, as a human, can tell these classes apart, these features are actually fairly descriptive.

Okay, then we have one more. I think it's sort of a duplicate, clarifying the reason for taking care of the features in object classification, as opposed to just selecting them all except location.
It is, well, I think it depends on how easy the problem is, like Dominik was just saying, and also on how much clicking you are not opposed to doing. These features are also very correlated, but this is also true for our pixel classification features, to be honest. So, I would say you can compute everything except location, then go to the object classification applet, and then try to subselect features so that you have a smaller set. Because really, if you label five objects of each class and you have 300 features because you use all the histograms, I would not be sure how stable that is, unless there are actually two features which are decisive and it has learned to ignore the rest. Okay, I think this was all that I had selected for this Q&A break. Oh, I see there are more and more coming. Wait, I didn't even read the last five. But I'll try to answer them inline while Dominik carries on with the thresholding, okay?

Okay, yeah, what I want to show next is how to deal with situations like this, where you have very crowded cells, and you still want to be able to separate them, so you still want each individual cell segmented out. And you can do this in object classification. So, let me close the project. By the way, we will make all the data and all the projects available after the seminar, not today, but tomorrow, or by the end of the week at the latest, just so you know. That was not nice. Let me quickly open ilastik again. And we go to object classification again, again the variant with a pixel prediction map, same as before. This time I will use the second import button. It says 'add a single 3D/4D volume from sequence', which is a little bit misleading. The important part is: if your image is a single file, you take the upper one, or this one if the image is split across multiple files. In my case, the data you have seen before was data with two stainings, where each staining was saved in a different file. So I choose this, choose the files, and I have a DAPI channel and a FITC channel. I open them and I say I want to stack across C, across channels. What you then get is this sort of composite color image with the nuclei here and the cytosol in green. What I did in preparation for the seminar: I already trained two pixel classification projects, one to find the nuclei from the DAPI staining and one to find the cytosol from the FITC staining, and I will also load those as a stack, also stacked over C.

And then let's see what we can do in thresholding to separate those cells. First of all, just a really quick check using the standard simple thresholding. The cytosol is what we want. Here it looks something like this, not very good: a lot of cells are clumped together. This is definitely not the result we want. But what we've also seen is that the nuclei are actually quite nicely segmented. So, next to the simple method, we also have this method called hysteresis. This is basically doing a seeded watershed, where you have two input channels, a core channel and a final channel. The idea is to have two different steps, one for detection and one for segmentation. For the core, you take the channel where you are very, very sure that there's only one object per cell, which would be the DAPI channel in this case, so the yellow channel. And as the final channel, because we want to have the full cell shape, we take the FITC channel. The smoothing we have as before as well.
But this time we also have two thresholds, one core threshold and one final threshold. The core threshold you can set a little bit higher, to be really, really sure to detect just one nucleus per cell. And the final one relaxes it to something that you would probably also naturally choose as a threshold, like 0.5, right in between. If I hit apply, oh, you see that nothing has changed, because I didn't do the most important thing, which is clicking on this 'objects' checkbox. And then you actually get this sort of nice separation of different cells.

Okay, and I want to explain in a little bit more detail what is actually happening. So I switch back to the slides. What you see here on the bottom left is just a little piece of the data you have seen in ilastik before. And what we want to do now is look at a line through this data and look at the intensities. Here I plot the intensity of the DAPI channel, and you see three distinct peaks across this line, which correspond to those nuclei here. With the core threshold of 0.8, we have basically decided which portion of those nuclei we want to take into account as seeds. And seeds for what? This I show on the right, the same image, but this time we look at the FITC channel. And because it's more fun and the analogy works better if you flip the FITC channel upside down, I will do this. So keep in mind: now zero prediction probability is on top and one is at the bottom. Pixels that are very, very sure to be FITC are at the bottom, and pixels that are not so sure, or probably background, are at the top. What you do as the next step in the seeded watershed is go to the positions where you had the seeds from the DAPI and apply those seeds to this data. You can think of those seeds as little springs, where colored water comes out into those valleys. During this watershed, you let the water flow in and the level rises. But at the beginning we said, okay, we want to cut off the FITC channel at 0.5, so this is our playing field. If we then let the water flow in, we will find situations like this, where different colors of water, this is red and green, and I apologize to everyone who is colorblind, I didn't think about this, where these two colors touch, you'll have to believe me if you're colorblind. So those two touch now, and what you do in the watershed algorithm is you insert an artificial wall and go on flood-filling this valley. And you do this until you have the next clash, between green and yellow this time. You can also see it here: now these areas come together and you insert another wall. So these cells are kept as separate objects. And then you finally relax everything to the final threshold. And this is really what happens inside ilastik.
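Before moving on, here is an illustrative reimplementation of this two-threshold, seeded-watershed idea with scikit-image (not ilastik's code; the variable names and the probability maps below are stand-ins for what you would export from pixel classification):

```python
# Sketch of the two-threshold, seeded-watershed idea described above (illustrative, not ilastik's code).
# Assumes `nucleus_prob` and `cell_prob` are 2D probability maps in [0, 1].
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

nucleus_prob = np.random.rand(256, 256)   # stand-in for the DAPI probability map
cell_prob = np.random.rand(256, 256)      # stand-in for the FITC probability map

core = nucleus_prob > 0.8                 # core threshold: strict, ideally one seed region per cell
final = cell_prob > 0.5                   # final threshold: the "playing field" for the watershed
seeds, n_cells = ndi.label(core)          # each connected core region becomes one colored spring

# Flood the (flipped) cell probability from the seeds, but never outside the final mask.
cells = watershed(-cell_prob, markers=seeds, mask=final)
print(f"{n_cells} seed regions flooded")
```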
Okay, I only wanted to show this much regarding object classification, and now I want to switch to a workflow that really heavily depends on the same interactions that you have just seen in object classification: the tracking workflow. In tracking, first of all, your data has a time dimension, so you are interested in properties of objects over time: whether they move around, whether they divide, or whether they change their shape or their appearance over time. This is what the tracking workflow will help you with. And since we are really not so good on time, I will continue with a pre-trained tracking project.

For tracking, you have to reserve some time to train a tracking project properly. What did I just do? Close. And I will just open my pre-trained tracking project. It takes some time to open. Okay, but here we go. The tracking workflow starts off, as all the other workflows do, with your input data. I have already added a time series here; as you can see, you get a slider if you have a time series. What we have here are cells from the MitoCheck project, and they divide. You also have to supply a prediction map for your objects; I generated this with pixel classification, as before. And the way to get to objects here is exactly the same as in object classification, which you have seen before: you go through thresholding. I'll skip the details, and you see you get all those objects, separated, in all the time points.

Okay, and now, in order to make the tracking work, you have to train two separate classifiers. The first one is to detect divisions, if your objects divide. You could of course also have the situation that your objects don't divide anywhere in the dataset; then you don't have to train it. But in this case we have divisions, and as I said, I've already annotated a few. Let's find some, because I quickly want to show you how to best annotate divisions. By the way, a really, really handy shortcut when it comes to object classification is the key 'i' on the keyboard. If I press 'i', it will bring the raw data to the top; you can see it here in the layer stack on the left. Usually the raw data is at the bottom, so you have all the overlays on top of it. But if you really only want to look at the raw data, you can press 'i' to bring it up and 'i' again to bring it back down. This helps us spot divisions. So I have spotted one here. See, in this time frame, each of those two cells is still one, but in the next one, they divide. The important thing to mention here is that in ilastik, the division is really the event that occurs in the time frame before there are two. So it doesn't matter if your division event, biologically, takes place over many time frames; the way you annotate an object as dividing is really only in the time frame before. So here, they are already annotated. And it's good practice: I annotated the ones afterwards as not dividing, and you could do this with the previous ones as well, so those could also be marked as not dividing. This is the strategy for how you annotate divisions. And you should, of course, annotate a few divisions, so not only two, but really go through your data and scout for those divisions. Let me see if I can find one more. This one isn't annotated, so I might annotate it as well. But I've already annotated quite a bit, so I can go to live update, it might take some time, and get my prediction. What is also important is to label a few normal cells as well, cells that have not just divided and will not divide in the next frame. This is also really important. And check whether this is picked up. You can see that this one is correctly picked up as a dividing cell. And you have to go through the whole time series and see whether this is still correct in the later frames as well, because acquisition conditions might have changed. So, in general, you want to train ilastik projects with a representative training set.
So if your data changes over time, and most data does, then make sure to annotate a little bit in the beginning, a little bit in the middle, and at the end as well, and do some quality control. So this is the first classifier. The second classifier is the object count classifier. As the name suggests, you want to teach the classifier how to recognize objects that are touching. It's just reality that you will not always be able to separate all your objects, so there will be touching objects, and in order to cope with this, there is this classifier in place. You can add as many labels as you like; if you add labels here, they will automatically be named like 'six objects', 'five objects', 'four objects'. I know that in my situation the maximum number of touching objects is three. Then we go to one of the later, more busy frames to show you how this looks. Here, for example, are two touching objects, and I can clearly verify this in the raw data, so I mark this as two objects. I have also already trained this classifier, so I can go to live update and get my predictions here as well.

Okay. Maybe one label that I have not mentioned before is the false detections, which you see in yellow. You can see those cells here are really, really faint; they have not been properly segmented, and they will probably disappear in the next frames or reappear. In order not to confuse the algorithm with those, I mark them as false detections, because obviously the staining didn't work properly here anyway.

Okay. With those two classifiers trained, you can go to the tracking applet. And I think there's already a result. This dialogue looks very, very busy, but what you usually do is just click the track button, nothing else. You click the track button, and I will not do it now, because this algorithm takes some time. But let's just look at the result that I generated before, which was saved in the project. What you get is that descendants of the same parent cells are colored the same way. For example, the green ones, I suspect, all came from the same cell at some earlier time frame. I will track back a little bit, and you see how those cells move together and at some point come from the same parent.

Okay. Of course, it's not really nice to do proofreading like this, and for that there are already really, really good tools out there. This is why the export in tracking is, again, a little bit special. We have a special export source, which is called plugin. If you go to choose export settings here, you can see that there are quite a few you can select. The CSV table is very similar to the one that you have seen, or that you can get from object classification, but it now has time steps and additional fields that let you connect the objects over time. And one I want to highlight is the Fiji MaMuT one. You can export the data from ilastik into a format that can be read by Fiji's MaMuT and do all the visualization, analysis and proofreading of your data there, because that's really convenient. If you want to know more about this, we have a video online that shows how to do it, on our YouTube channel.

Okay. This was for tracking cells that divide. Maybe if I go back to this dialogue here, we have seen there are a couple of numbers here, the tracking weights, which you can edit by hand or even learn. And those numbers can be adjusted.
And sometimes tracking might not work, simply because those parameters don't fit your data. In that case, you can go to the training applet and, I'll only mention it in view of time, annotate a few tracks manually as ground truth and then learn those tracking weights from them. And maybe I will just show one slide before we go to another round of questions. This is a little bit of background on the tracking problem. You start off with the situation where you have objects in different time frames, and you want to know the fate of the objects: what do they do over time? Of course, there can be multiple possibilities. For example, this object in this time frame could either divide into two, or it could be only one of those and end up here in the next time frame. You can encode all those possibilities in this graph; it's a graph that includes all those possibilities. And what you have done with the classifiers that you have trained is to help make these decisions. Of course, each decision is connected to a cost, and those costs are actually the numbers you see here. For example, appearance and disappearance costs: can a cell just disappear in the next time frame, or appear out of nowhere? Then you can use a smart graph algorithm and minimize this cost to get to a result that makes sense.
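As a much-simplified illustration of this "pick the links with minimal total cost" idea (this is not what ilastik does: ilastik solves one global optimization over the whole graph, including divisions, appearances and disappearances), here is a frame-to-frame assignment on hypothetical centroid distances:

```python
# Much-simplified illustration of cost-based linking (NOT ilastik's global tracking formulation).
import numpy as np
from scipy.optimize import linear_sum_assignment

# hypothetical object centroids detected in two consecutive time frames
centroids_t0 = np.array([[10.0, 12.0], [40.0, 41.0], [80.0, 15.0]])
centroids_t1 = np.array([[12.0, 14.0], [43.0, 40.0], [78.0, 18.0]])

# cost of linking object i in frame t to object j in frame t+1: squared centroid distance
cost = ((centroids_t0[:, None, :] - centroids_t1[None, :, :]) ** 2).sum(axis=-1)

# pick the assignment with minimal total cost (Hungarian algorithm)
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"object {i} in frame t -> object {j} in frame t+1 (cost {cost[i, j]:.1f})")
```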
Okay. Because I will switch to a different class of workflows after tracking, maybe it's a good time for questions. Agreed. Yeah, we have quite a few, and we tried to answer as many as we could live. For the remaining ones: if you feel like your question has been skipped, we are not ignoring you, it would just require more typing, and we'll do that later. In the end, everything will be answered. But right now, let's go for the questions that we think everyone would benefit from hearing about.

First, there is a question: when citing ilastik, apart from the workflow, what other details should I add to the methods description? It depends on how thorough you want to be. Citing the method is already good. I think the ultimate way would be to include the ilastik project itself: if you save the ilastik project, it has all your settings, it has all the labels, it has everything, right? If you upload that to the supplementary data of your paper, you're all set, you're fully covered, there is nothing more you can do. If you want to do less, you can try to just list the options you have selected, but I would really recommend to just share the whole project.

Second question: how well does the 2D segmentation through ilastik perform compared to StarDist or a U-Net? This is not comparable, right? It's a completely different method. StarDist or a U-Net, and StarDist also has a U-Net inside, is a neural network, which we'll talk about later. It needs incomparably more training data. The ones that you're using now, or the ones that you have seen Martin show, have been pre-trained on existing data; ilastik is not pre-trained, it starts from scratch. You can really train it from a few label strokes, and that is kind of the power. The power is in the interactivity of the training. If you want ultimate performance, you have to go to StarDist, well, StarDist for nuclei or a U-Net for other things, and we will show a little bit of this towards the end. But there are many problems which are solved well enough by ilastik already, I would say surprisingly many, but yes, ultimate performance right now is the neural networks.

Then we have the third question, and since we are kind of short on time and we only have half an hour left for all the wonderful workflows that we still want to show you, this will be the last live Q&A question: can you discuss validation of the various classification results? Well, it is described in our paper; there is a whole section there which is called "when it works and when it doesn't", and how you can get a feeling for it. In general, I would say the gold standard is that you label things by hand, or do the task by hand, and then compare against that. This is what you really should be doing; what most people do is just qualitatively show that it kind of worked, but quantitative comparison against manual ground truth is the gold standard in the field. So I hope that helps, and with that, maybe let's carry on. Dominik, how do you feel? Yeah, sure.

Maybe it's worth mentioning that ilastik not only works on 2D data; all the workflows I've shown also work with 3D and multi-channel data. Okay, so next is a totally different class of workflows, where we look into boundary-based segmentation of objects. You can see here a little bit of EM data of the brain, I think, where you have objects that are separated by a boundary. Sorry, and the first one I want to show you is carving. Let me just quickly get ilastik up and running again. So, carving, and I will right away show you the advanced way to carve, which is by adding two different data sets. Okay, and now you have to bear with me. The first one, the raw data, is the data we are actually calculating things on; in this case we want to calculate on a boundary probability map, so this is what we add here. And in order to see things as usual, we also add some raw data here. The dialogue pops up, ah, it's here, it was hiding. Okay.

So the first step in the carving workflow is to compute superpixels. Superpixels are just pixels grouped together on the basis of some criterion, and in carving we use the distance from the boundary evidence as that criterion. Maybe let's just start it, because it takes a little bit of time. Right. Yep. Okay. So what happens right now is the distance transform: for each pixel, it is calculated how far away the next closest boundary is. In the minima you then have seeds, and from there you start to grow, exactly like you have seen before in the watershed; you grow until you hit the boundary, basically. This can be done very efficiently for smaller regions, and you will see in a minute that this gives you an over-segmentation of the data, so there are a lot more pieces than objects. The task in carving is then basically to join those pieces. But while we are here, I know we are short on time, but I just want to mention that in order to get good results, you need good superpixels. So you should at least quickly look at the superpixels you get here, randomize the colors a little bit, and check that nothing is connected to the next region, because what is connected here cannot be disconnected later. So yeah.
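As an aside, the distance-transform watershed just described can be sketched with scikit-image like this (illustrative only; ilastik's preprocessing and parameters differ):

```python
# Sketch of distance-transform watershed superpixels, similar in spirit to carving's preprocessing.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

boundary_prob = np.random.rand(256, 256)              # stand-in for a boundary probability map

boundaries = boundary_prob > 0.5                      # where we believe a boundary is
distance = ndi.distance_transform_edt(~boundaries)    # distance of each pixel to the nearest boundary

# seeds sit where the distance is locally largest (deep inside regions, i.e. the basins the
# watershed floods when the distance map is inverted)
peaks = peak_local_max(distance, min_distance=5)
seeds = np.zeros_like(distance, dtype=int)
seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

# grow each seed until it hits boundary evidence: this gives the over-segmentation (superpixels)
superpixels = watershed(-distance, markers=seeds)
print("number of superpixels:", superpixels.max())
```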
Okay, just this word of caution. And then we already go to the labeling step. The carving workflow is semi-automatic, and you work on each object individually, one after another. You do this by supplying labels, pretty much like in pixel classification. The labeling is done with brush strokes, and you mark something as object and something as background. Then you hit the segment button, and what happens is that you get a first segmentation of the object in 3D. But you can also see that it's not completely correct, so you add some more annotations there, and of course you proofread this; I will not go through this whole volume. You also get a little 3D preview of the object here. The idea is that, in the ideal world, you spend some time looking at this object and correcting it where it's wrong. Then you save the object, and it disappears, and you can start working on the next object. Right now you still know which object you have been working on, but this can get tedious. So there's this layer, completed segments, where you can see which objects you have not segmented yet. Then you go on to your next object, you mark this up, that was the wrong annotation, you mark the object as object and the background as background, and you hit segment. You get the next object, then you can save it, and so on and so on. I should probably correct it, but we are really short on time.

So, what can you get out of this workflow? First of all, you can of course click on all of these and show them together, like this. But you can also export all your objects to OBJ files, which might be more interesting for you to work with. I would just export them to my temp folder. This exports mesh files, triangular meshes, in a format which pretty much all 3D software understands, and then you can do much nicer visualizations than just this. What else can I mention here? There is no batch processing here, because, as you have seen, I have to mark every object individually. So if I put in more data, there's nothing to be done automatically, because you really have to go through it object by object.

Yeah, and maybe a little look behind the scenes at what's actually happening. Suppose we have a superpixel partitioning like this, so all the colored things here are superpixels, like you have seen in the preprocessing, and here you have some kind of boundary. What is actually done internally is to represent this as a graph: each superpixel is a node and has a connection to its neighbors. Then, if you provide a seed, for example here in the green one and then the blue one, you mark the first superpixel as belonging to a certain class. What happens next, and I will remove the superpixels to make it a little bit more clear, is that you look at all the neighboring superpixels, which are associated with a cost, usually the strength of the boundary between those superpixels. You take the lowest one and add it to the same class. Then again, you put all the neighbors of this one onto your processing list, and you again take the one with the lowest cost. And you go on and on like this, always taking the one with the lowest cost and assigning it to the class, until you end up with a partitioning like this. So this is basically just watershed on graphs.
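A tiny sketch of this watershed-on-graphs idea (pseudocode-style illustration, not ilastik's carving implementation; the toy graph below is made up):

```python
# Sketch of seeded watershed on a superpixel graph, as just described.
import heapq

# toy region adjacency graph: edges[(u, v)] = cost (boundary strength between superpixels u and v)
edges = {(0, 1): 0.1, (1, 2): 0.8, (2, 3): 0.2, (1, 3): 0.9, (3, 4): 0.3}
neighbors = {}
for (u, v), c in edges.items():
    neighbors.setdefault(u, []).append((v, c))
    neighbors.setdefault(v, []).append((u, c))

def graph_watershed(seeds, neighbors):
    """seeds: {superpixel_id: class_label}. Grows labels along the cheapest edges first."""
    labels = dict(seeds)
    heap = [(cost, nbr, labels[s]) for s in seeds for nbr, cost in neighbors[s]]
    heapq.heapify(heap)
    while heap:
        cost, node, lbl = heapq.heappop(heap)    # always take the lowest-cost edge next
        if node in labels:
            continue                             # this superpixel is already assigned
        labels[node] = lbl
        for nbr, c in neighbors[node]:
            if nbr not in labels:
                heapq.heappush(heap, (c, nbr, lbl))
    return labels

print(graph_watershed({0: "object", 4: "background"}, neighbors))
```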
The algorithm in carving is actually a little bit different, as you can see here from these two parameters. I don't want to go into much detail here, but the way it achieves this behavior, segmenting just one object and having the rest as background, is by prioritizing the background seeds, or rather the superpixels connected to background seeds, with this factor. If you see problems, you can play around with this, but usually you don't have to.

Okay. So this is nice and cool, but it might take a long time for all the objects you have in your data. And luckily, we have something for you, which I want to show you next, which is the multicut. You see the exact same data as before, and a solution like this. What the multicut does is solve the same sort of partitioning problem as carving, but it does it for the whole volume at once. I was thinking of skipping it, but we are still on good time, maybe. So, sorry. Okay. So, the boundary-based segmentation with multicut workflow. Again, you create the project and load the same data as before, but this time the raw data is the gray-level image, and, it's probably hiding behind again, the probabilities go here. We do the same kind of superpixel segmentation. Over the past months, we have updated this workflow a little bit to be really state of the art, because this algorithm is actually used in a lot of cool image analysis problems we see in our group, and it produces very, very good results. So we have updated ilastik to use the same backend that we use in research, so everyone can profit from this. I hope this finishes quickly. Again, here you would ideally do some proofreading, looking at the superpixels and making sure that nothing is connected that shouldn't be. But yeah, no time.

What you do next is train a classifier in order to find edges that should be kept and edges that should be thrown away. And again, you select some features. This is always the worst moment when I show ilastik, because this is the worst dialogue in all of ilastik, and we will of course change it, but right now it's still in place. What I recommend for now, because there are so many features that no one knows what to select: just use the standard edge features, on both of the images, so on the raw data and on the probabilities. Just use those, and then train the classifier. Here, the interaction is a little bit different than in other ilastik workflows: the left and the right mouse buttons have different meanings. If I click with the left mouse button, I mark edges as green, and, okay, this might be a little bit counterintuitive, but those edges are voted to be thrown away. And if I click with the right mouse button, edges turn red, and those clicks cast votes for the edges to be kept. So you do some training, you go to live predict, this is fairly similar to other ilastik workflows, and you see... I don't even know whether this is an edge or not. It might be one. Yeah, let's see. And this is not the result yet.
So what you do from there is you let the multicut algorithm solve the problem for you. You do this simply by clicking 'update now'. The color of the edges will change to blue in a few seconds, as you can see here. And if I turn on this multicut segmentation layer, you can see that really all objects were segmented simultaneously, in a really short amount of time. You can still go on and correct it where it's wrong: if it merged something that should not be merged, if there should be an edge there, you can mark it and run the multicut again. But yes, this is how you get the result for the whole volume, in basically the same time as it took me to do two objects in carving.

Okay. Let me quickly remove this and maybe say in a few words why, or how, this works. The multicut is really a similar problem, but a different formulation. You don't supply any seeds, so you don't know in advance how many objects you have. And you have attractive and repulsive edges. You then use a very smart algorithm, with a few set rules for how to merge and split edges in this graph, that solves the problem for you to a global optimum. There is of course a paper accompanying it. And I think with this, this would be a natural point to have questions, or, in view of time, to switch to Anna.

Yeah. So, you know, we've been pretty good with answering questions so far, and I don't see anything new over the last 10 minutes that we would really need to discuss now rather than in the Q&A window. So maybe we could just keep going, right? Okay. And it's, I believe, now my turn. Okay, then I should be sharing my screen, right? Wait, I lost the whole window. What the heck? Okay. Almost there. Okay. So you have now seen, I can already start talking. Can you hear me all right? It's all good? Yes. Good.

So, pixel classification is the basis of many things. Sometimes you can work directly on the raw data, right? Sometimes your raw data is just that clean. But in many of the ilastik workflows, you really need to get pixel classification running first. And, as I'm sure you have seen if you've been in the field for any amount of time, some data is just difficult, and sometimes you need something better than pixel classification: things need more context, things need a more powerful algorithm. Although, don't discard pixel classification, it is quite powerful. But it's 2020, and sometimes you just need a neural network.

The thing with neural networks is that they are difficult to train. The classifier in ilastik is a random forest, as you've seen: there is essentially nothing to tune, you just click around a little bit and it trains. Neural networks used to be really hard to train; it got much easier over the last five years, but it's still too hard for us to just throw them at our users and say, now go and tune all the hyperparameters. On the other hand, they're really fun to use, because they work really well. So that's why we have started putting in the time to get them to work in ilastik as well. And you can see in the screenshot that in the latest release, release 1.4 beta, it's already there.
There is a neural network classification workflow, and you're welcome to try it. Expect that there will be things that won't work; we would be very happy if you tried it and gave us some feedback, and we will be there to help you run it. The difference to the normal ilastik: ilastik, as you have seen it, is a desktop application, right? It's a monolith, it's one thing, you install it, it runs. If you want to use neural networks, you need a GPU, well, you can still do it with a CPU, but honestly, a GPU is so much better. So we have a separate backend that you then also need to install, locally on your machine or on a remote server. If you install it locally, you can also run on the CPU if you want. It's not hard to install, but it's not a single click. So get in touch if you read the installation instructions and feel like you can't handle it alone; we are very happy to help.

What this workflow can do is execute pre-trained networks. Training them yourself is still kind of in the alpha stage, so it's there, you can sort of invoke it, but we are not responsible for the results you get. With pre-trained networks, you take one, you apply it, it runs. Where would you actually get a pre-trained network? Well, there is this model zoo that you can find at bioimage.io. It's early days, right? It's not very full, but it can only get full if more people actually contribute there. This model zoo is a joint development between my lab and Emma Lundberg's lab, in particular Wei, who does it from their side, and also Florian Jug's lab, who do CSBDeep. We have put in a very substantial amount of time, honestly more than we expected in the beginning, to define a model format that would be appropriate for the bioimage analysis case, including pre- and post-processing steps, and also putting particular emphasis on the traceability of the training data, which is very important if you have non-computational users doing this kind of thing. This format is already supported by ilastik, of course, and by CSBDeep and ImJoy, and we expect more uptake in the future. If you want to contribute your models, we would be very happy to see you; we are always ready to talk about this. If you just want to try out the models that are there, do it. If you don't find something that would be useful for you, you can, first of all, tell us that you need a model for this or for that, and we will also be putting out more and more of our own as we go.

And now, to illustrate how this workflow works, we wanted to show you one network that is actually already uploaded. This is some research work from my group that Adrian Wolny has been doing, and it actually needs two steps: you predict the boundaries first with the neural network, and this is the neural network that we have made available; the second step is multicut. We'll show this to you live now. The network has been trained on the Arabidopsis ovule data with membrane staining, as you can see there, and it's been trained with a lot of data augmentation, so it works fairly well on all the plant data we have tried it on. It's been trained on confocal, so it's probably more prudent to stay on confocal.
On confocal plant data with membrane staining it works really well, and this is what Dominik is going to show you now. The following segment was recorded after the seminar. As Anna mentioned earlier, a second component next to the Elastic desktop application is necessary to run the neural network classification workflow, and ideally this component is installed on a machine that has a GPU. I'm here with a little laptop that doesn't have a GPU, so I installed the component on a server with a GPU. During the seminar the server was not accessible, so the following segment was recorded afterwards, when access to the server component had been re-established. In order to access the neural network classification workflow, you have to install at least version 1.4.0 of Elastic, which is currently in beta. After selecting the input data you want to work with, the next step is to configure the server component of the neural network classification workflow. It can be installed either locally on your own machine or on a remote server. In my case I connect to a remote server, so I have to configure my username as well as the port. After getting the device info, I'm ready to go to the next step. In the prediction step, you have to supply a pre-trained neural network model that you want to apply to your data. Such pre-trained models are available in the model zoo at bioimage.io. When you have found one whose preview looks like your data and that also has the Elastic logo in the banner, you can download it. When you have downloaded the network, or already have it on your hard disk, you click on load model, select the model and open it. The model is then sent to the server backend, where it is configured, so this takes a little bit of time. When all the weights are loaded onto the GPU, you are ready to go and can click on live predict. What happens now is that the image is cut up into little blocks and those blocks are sent to the server. For the network I have selected, the block size is 172 by 172 by 88. After the blocks are sent to the server component, the forward pass of the network is triggered, and once this is done you get predictions back. This can take a little bit of time and depends on the connection you have to your server and on which hardware the server is running. I made the predictions a little more visible here so you can really appreciate how good they are. That was the live prediction mode, and what I do next is export the whole volume so I can use it in the multi-cut workflow. Internally the procedure is very similar to the one in live update mode, but this time the whole volume is cut up into pieces, not only the portion you see, and all those blocks are sent to the server. They come back and are written to a file. Progress is reported in the progress bar, and once we have reached 100%, we can go on to the next workflow. In order to get all those little objects out of the data, I want to take the boundary probability map we have just created and use it in the boundary-based segmentation with multi-cut workflow. So I create a new project and load the original image as well as the probability map from the neural network into this workflow. The first step after loading the data is to compute superpixels on it. I started that right away because it takes a while to compute; the default settings are fine.
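As an aside, the blockwise prediction just described can be pictured roughly as in the sketch below: the volume is tiled into blocks of a fixed shape, each block is handed to a prediction function (in the real workflow this is the request sent to the GPU server), and the answers are stitched back together. This is a simplified illustration with made-up names; the actual implementation also uses overlapping halos so that there are no artefacts at the block borders, and the axis ordering of the block shape depends on your data.

```python
import numpy as np


def predict_blockwise(volume, predict_block, block_shape=(172, 172, 88)):
    """Tile a 3D volume into blocks, run `predict_block` on each, and stitch the results.

    `predict_block` stands in for the request to the server component.
    Halo/overlap handling is omitted here for brevity.
    """
    out = np.zeros(volume.shape, dtype="float32")
    for i in range(0, volume.shape[0], block_shape[0]):
        for j in range(0, volume.shape[1], block_shape[1]):
            for k in range(0, volume.shape[2], block_shape[2]):
                sl = np.s_[i:i + block_shape[0],
                           j:j + block_shape[1],
                           k:k + block_shape[2]]
                out[sl] = predict_block(volume[sl])   # e.g. one round trip to the GPU server
    return out
```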
I also want to mention that in the latest version of Elastic, everything from 1.4.0 on, we have worked a lot on the back end of the multi-cut, so it now includes the cutting-edge code that also runs in our research work. The user interface is therefore a little different, and you have different values to change. You can select the channel and the threshold at which you want to cut off the boundaries: everything below the threshold will not be included in the computation. With the next parameter you set the minimum segment size. With the smoothing parameter you control how over-segmented your image will be: higher values usually produce a smaller number of segments, and lower smoothing values produce a higher number of segments. The alpha parameter is used to blend the boundaries and the distance transform when doing the watershed. Once the watershed is finished, you should in principle inspect the result and make sure there are no superpixels connecting two separate objects in your data. If you find something like that, just change the parameters; for example, if you see such a connection, you can try a smaller sigma for the smoothing and run the watershed again. Once you are satisfied with your watershed, you can go to the next applet, the training and multi-cut applet. In the beta version we have added a checkbox to not train an edge classifier. The reason is that predictions from a neural network are usually good enough that you don't need one, so you can proceed to the multi-cut directly, and it finishes rather quickly. I turn on the multi-cut segmentation layer so you can really appreciate how well it worked. And with this, I want to switch back to the live webinar recording, where I talk about automation of Elastic. Elastic does not exist alone in the world of image analysis; usually it is part of a larger image analysis pipeline. In order to be more flexible and to allow integration with other pipelines, we have various ways of automating Elastic processing. One is the headless mode, where you call Elastic without opening the graphical user interface: you take a pre-trained project, call Elastic in headless mode, and just apply it to new images, which are then processed in the same way as your training data and written out. Because a lot of users do their pre- and post-processing in Fiji, we also have a Fiji plugin where you can run a trained Elastic classifier on your own data. And this is maybe the last thing I will try to show. So I start up Fiji, and you can see the logo already appearing. The Elastic plugin is installable like any other plugin via the Fiji update site: just look for Elastic for IJ, Fiji then downloads and installs the plugin automatically, and you get this additional menu entry. It lets you do different things; the most important one is configuring the Elastic executable location. If you click on this, you have to point it to your Elastic installation; there are hints on where to find it on the different operating systems. You can also control how your computing resources are used, and then say okay. From there on, let's load some data. I load a DAPI image here, which I then want to process with a pre-trained project. So I take a project that I have trained before and run pixel classification with it: here I can select it, it's a pixel classification project for DAPI, which I trained on similar data, and I hit okay.
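For intuition, the superpixel step walked through above can be sketched roughly like this with scikit-image and SciPy. The parameter names mirror the ones in the applet (threshold, sigma, alpha), but this is only an illustrative approximation, not the code Elastic actually runs; the minimum-segment-size filtering is omitted for brevity.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed


def superpixels_from_boundary_pmap(pmap, threshold=0.5, sigma=2.0, alpha=0.7):
    """Rough sketch of watershed superpixels computed on a boundary probability map."""
    inside = pmap < threshold                        # everything below the threshold counts as "inside"
    dist = ndi.distance_transform_edt(inside)        # distance to the nearest boundary
    dist = ndi.gaussian_filter(dist, sigma)          # higher sigma -> fewer, larger superpixels
    # Seeds are the local maxima of the smoothed distance transform.
    coords = peak_local_max(dist, min_distance=int(2 * sigma) + 1)
    seed_mask = np.zeros(dist.shape, dtype=bool)
    seed_mask[tuple(coords.T)] = True
    seeds, _ = ndi.label(seed_mask)
    # The watershed height map blends boundary evidence with the (inverted) distance transform.
    height = alpha * pmap + (1.0 - alpha) * (1.0 - dist / (dist.max() + 1e-6))
    return watershed(height, seeds, mask=inside)
```

Playing with `sigma` in this sketch reproduces the behaviour described above: more smoothing merges nearby maxima into one seed and hence gives fewer, larger superpixels.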
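Since the Fiji plugin shown next drives exactly this headless mode under the hood, here is roughly what a scripted headless call looks like from Python. The executable path and project name are placeholders, and the flag names follow the documented headless interface of the desktop application; double-check them against the documentation of your installed version. This also covers the "process a whole folder" case mentioned later for the macro.

```python
import glob
import subprocess

# Placeholder: point this at the headless launcher inside your own installation
# (the exact script or binary name differs per operating system).
ELASTIC_EXE = "/opt/elastic/run_elastic.sh"
PROJECT = "pixel_classification_dapi.ilp"   # a project trained in the GUI beforehand

# Apply the pre-trained project to a whole folder of raw images and write one
# probability map per input file.
cmd = [
    ELASTIC_EXE,
    "--headless",
    f"--project={PROJECT}",
    "--export_source=Probabilities",
    "--output_filename_format={dataset_dir}/{nickname}_probabilities.h5",
] + sorted(glob.glob("raw/*.tif"))

subprocess.run(cmd, check=True)
```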
And in the background, Fiji now calls Elastic in the headless mode that I mentioned earlier. The data you have on the screen is written to a temporary file, processed into another temporary file, and then loaded back into Fiji. So pretty seamlessly you get the results, and here you have the three prediction channels for the data you see here. But this is not all, of course: the plugin can also be driven from macros, and maybe I can quickly show that. This macro will also be provided to you, and if you're familiar with macros you will not be afraid of it. I've become a bit of a fan of macros because they are so convenient. There is a lot of extra code around it: in this macro you actually extract one of the channels and make a mask from it by simple thresholding, so you can do further processing or write the file out. But the most important line is really just this run pixel classification prediction call, with parameters that you can fully specify. With that, it will process a whole folder for you and write out the results, or you can do anything else. Okay, I guess with this I will switch to Anna one last time. Thank you already, because this was the last thing I wanted to show. Anna, are you there? Yeah, I'm here, and I don't really have much more. Can you actually go to the next slide? I don't want to switch screens for just one slide now. Okay. Yes. So there is a follow-up session next week, and it would be awesome if you could come. We are always happy to see you, and if you try out Elastic in the meantime and come with a particular hands-on question, that would be even better. If you don't manage to join, no problem: we are waiting for you on the forum, and we are always happy to answer. Thank you very much for coming. Good. So thank you very much for participating, and thank you, Anna and Dominic, for a very nice presentation. We hope to see everyone, or some of you, next week; we will send all the details by email. Okay.